Continuous Feature-Based Tracking of the Inner Ear for Robot-Assisted Microsurgery

2021 ◽  
Vol 8 ◽  
Author(s):  
Christian Marzi ◽  
Tom Prinzen ◽  
Julia Haag ◽  
Thomas Klenzner ◽  
Franziska Mathis-Ullrich

Robotic systems for surgery of the inner ear must enable highly precise movement in relation to the patient. To allow for suitable collaboration between surgeon and robot, these systems should not interrupt the surgical workflow and should integrate well into existing processes. As the surgical microscope is a standard tool, present in almost every microsurgical intervention and in close proximity to the situs, it is well suited to be extended with assistive robotic systems, for instance a microscope-mounted laser for ablation. As both patient and microscope are subject to movement during surgery, a well-integrated robotic system must be able to comply with these movements. To solve the problem of on-line registration of an assistance system to the situs, the standard of care often utilizes marker-based technologies, which require markers to be rigidly attached to the patient. This not only requires preparation time but also increases the invasiveness of the procedure, and the tracking system's line of sight must not be obstructed. This work aims at utilizing the existing imaging system to detect relative movements between the surgical microscope and the patient; the resulting data allow the registration to be maintained. No artificial markers or landmarks are used; instead, an approach for feature-based tracking with respect to the surgical environment in otology is presented. The images for tracking are obtained from the two-dimensional RGB stream of a surgical microscope. Due to the bony structure of the surgical site, the recorded cochleostomy scene moves nearly rigidly. The goal of the tracking algorithm is to estimate motion from the given image stream alone. After preprocessing, features are detected in two subsequent images and their affine transformation is computed by a random sample consensus (RANSAC) algorithm.
The proposed method provides movement feedback with a precision of up to 93.2 μm without the need for any additional hardware in the operating room or attachment of fiducials to the situs. In long-term tracking, however, error accumulates.
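The core estimation step, fitting an affine transformation to matched feature points of two subsequent frames with RANSAC, can be sketched as follows. This is a minimal NumPy illustration operating on already-matched point correspondences (in practice a feature detector and matcher would supply these); the function names are illustrative, not the authors' implementation:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst (N x 2 each)."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src; A[0::2, 2] = 1.0
    A[1::2, 3:5] = src; A[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return p.reshape(2, 3)          # [[a, b, tx], [c, d, ty]]

def apply_affine(M, pts):
    return pts @ M[:, :2].T + M[:, 2]

def ransac_affine(src, dst, iters=500, thresh=2.0, rng=None):
    """Estimate an affine transform robust to mismatched features."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_M, best_inliers = None, np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)   # minimal sample
        M = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(apply_affine(M, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_M, best_inliers = M, inliers
    # refit on all inliers for the final estimate
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

With a sufficient number of iterations, at least one minimal sample is drawn entirely from correct matches, so mismatched features (outliers) are excluded from the final fit.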

2021 ◽  
Vol 187 (1) ◽  
pp. 145-153
Author(s):  
Conor R. Lanahan ◽  
Bridget N. Kelly ◽  
Michele A. Gadd ◽  
Michelle C. Specht ◽  
Carson L. Brown ◽  
...  

Abstract. Purpose: Safe breast cancer lumpectomies require microscopically clear margins. Real-time margin assessment options are limited, and 20–40% of lumpectomies have positive margins requiring re-excision. The LUM Imaging System previously showed excellent sensitivity and specificity for tumor detection during lumpectomy surgery. We explored its impact on surgical workflow and performance across patient and tumor types. Methods: We performed IRB-approved, prospective, non-randomized studies in breast cancer lumpectomy procedures. The LUM Imaging System uses LUM015, a protease-activated fluorescent imaging agent that identifies residual tumor in the surgical cavity walls. Fluorescent cavity images were collected in real time and analyzed using system software. Results: Cavity and specimen images were obtained in 55 patients injected with LUM015 at 0.5 or 1.0 mg/kg and in 5 patients who did not receive LUM015. All tumor types were distinguished from normal tissue, with mean tumor:normal (T:N) signal ratios of 3.81–5.69. T:N ratios were 4.45 in non-dense and 4.00 in dense breasts (p = 0.59), and 3.52 in premenopausal and 4.59 in postmenopausal women (p = 0.19). Histopathology and tumor receptor testing were not affected by LUM015. False-positive readings were more likely when tumor was present < 2 mm from the adjacent specimen margin. LUM015 signal was stable in vivo for at least 6.5 h post injection, and ex vivo for at least 4 h post excision. Conclusions: Intraoperative use of the LUM Imaging System detected all breast cancer subtypes with robust performance independent of menopausal status and breast density. There was no significant impact on histopathology or receptor evaluation.
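As a rough illustration of the ratio reported above, a tumor:normal (T:N) signal ratio can be computed from mean fluorescence intensities over tumor and normal-tissue regions of interest. This is a hypothetical sketch, not the LUM Imaging System's analysis software, and the decision threshold is illustrative only:

```python
import numpy as np

def tumor_to_normal_ratio(image, tumor_mask, normal_mask):
    """Mean fluorescence in a tumor ROI divided by mean in a normal-tissue ROI."""
    return image[tumor_mask].mean() / image[normal_mask].mean()

def flag_residual_tumor(ratio, threshold=2.0):
    """Flag a region as likely residual tumor if its T:N ratio exceeds an
    (illustrative) threshold; real systems calibrate this per agent and dose."""
    return ratio > threshold
```

With reported mean T:N ratios of 3.81–5.69 across tumor types, such a ratio-based signal separates tumor from normal tissue with a comfortable margin.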


Robotica ◽  
1996 ◽  
Vol 14 (5) ◽  
pp. 575-582
Author(s):  
Jiming Liu

SUMMARY: Learning in the age of the information superhighway necessitates a properly developed, efficient vehicle that is not only powerful in directing users to needed information or in situating them in a reality through virtual settings, but is also controllable at a comfortable pace. The goal of this project is to explore a new on-line medium for users to navigate at their own pace in a structured cyberspace: a knowledge space composed of concepts, systems design, application-oriented case studies, up-to-date industrial news (trends and product reviews), and on-line robotic systems, which can also be used as a robotics workbench for conducting controllable experiments and simulations. Through such an electronic learning medium, users will be able to acquire a global outlook as well as an integrated understanding of modern robotics in a manner that is low-cost, time-and-place-free, and student-centered.


Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2742 ◽  
Author(s):  
Wang ◽  
Walsh ◽  
Koirala

Pre-harvest fruit yield estimation is useful to guide harvesting and marketing resourcing, but machine vision estimates based on a single view from each side of the tree ("dual-view") underestimate the fruit yield, as fruit can be hidden from view. A method is proposed involving deep learning, a Kalman filter, and the Hungarian algorithm for on-tree mango fruit detection, tracking, and counting from 10 frame-per-second videos of trees captured from a platform moving along the inter-row at 5 km/h. The deep-learning-based mango fruit detection algorithm MangoYOLO was used to detect fruit in each frame. The Hungarian algorithm was used to correlate fruit between neighbouring frames, with the improvement of enabling multiple-to-one assignment. The Kalman filter was used to predict the position of fruit in following frames, to avoid multiple counts of a single fruit that is obscured or otherwise not detected within a frame series. A "borrow" concept was added to the Kalman filter to predict fruit position when a precise prediction model was absent, by borrowing the horizontal and vertical speed from neighbouring fruit. Compared with a human count for a video with 110 frames and 192 fruit, the method produced 9.9% double counts and 7.3% missed counts, resulting in an overcount of around 2.6%. In another test, a video (of 1162 frames, with 42 images centred on the tree trunk) was acquired of both sides of a row of 21 trees, for which the harvest fruit count was 3286 (i.e., an average of 156 fruit/tree). The trees had thick canopies, such that the proportion of fruit hidden from view from any given perspective was high. The proposed method recorded 2050 fruit (62% of harvest) with a bias-corrected root mean square error (RMSE) of 18.0 fruit/tree, while the dual-view image method (also using MangoYOLO) recorded 1322 fruit (40%) with a bias-corrected RMSE of 21.7 fruit/tree.
The video tracking system is therefore recommended over the dual-view imaging system for mango orchard fruit counts.
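The detect-assign-predict loop described above (Hungarian assignment between Kalman-predicted and newly detected fruit positions) can be sketched as follows, assuming the detector's output is already available as centroid coordinates. MangoYOLO itself, the multiple-to-one assignment extension, and the "borrow" mechanism are not reproduced here:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

class Track:
    def __init__(self, tid, pos):
        self.id = tid
        # state: [x, y, vx, vy], constant-velocity motion model
        self.x = np.array([pos[0], pos[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0

F = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = np.eye(4) * 0.01   # process noise (illustrative values)
R = np.eye(2) * 1.0    # measurement noise (illustrative values)

def predict(t):
    t.x = F @ t.x
    t.P = F @ t.P @ F.T + Q
    return t.x[:2]

def update(t, z):
    S = H @ t.P @ H.T + R
    K = t.P @ H.T @ np.linalg.inv(S)
    t.x = t.x + K @ (z - H @ t.x)
    t.P = (np.eye(4) - K @ H) @ t.P

def step(tracks, detections, next_id, gate=30.0):
    """One frame: predict, assign via the Hungarian algorithm, update,
    and spawn new tracks for unmatched detections."""
    preds = np.array([predict(t) for t in tracks]) if tracks else np.zeros((0, 2))
    dets = np.asarray(detections, float)
    cost = (np.linalg.norm(preds[:, None, :] - dets[None, :, :], axis=2)
            if len(tracks) and len(dets) else np.zeros((len(tracks), len(dets))))
    rows, cols = linear_sum_assignment(cost)
    matched = set()
    for r, c in zip(rows, cols):
        if cost[r, c] < gate:          # reject implausible matches
            update(tracks[r], dets[c])
            matched.add(c)
    for c in range(len(dets)):
        if c not in matched:
            tracks.append(Track(next_id, dets[c]))
            next_id += 1
    return next_id
```

The final `next_id` is the fruit count: each fruit creates exactly one track the first time it is seen, and the Kalman prediction keeps its track alive across frames so later detections update the existing track rather than starting a new one.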


2018 ◽  
Vol 25 (3) ◽  
pp. 717-728 ◽  
Author(s):  
Anders Filsøe Pedersen ◽  
Hugh Simons ◽  
Carsten Detlefs ◽  
Henning Friis Poulsen

The fractional Fourier transform (FrFT) is introduced as a tool for numerical simulations of X-ray wavefront propagation. By removing the strict sampling requirements encountered in typical Fourier optics, simulations using the FrFT can be carried out at much lower sampling density, allowing, for example, on-line simulation during experiments. Moreover, the additive index property of the FrFT allows the propagation through multiple optical components to be simulated in a single step, which is particularly useful for compound refractive lenses (CRLs). It is shown that the attenuation from an entire CRL can be modelled using one or two effective apertures without loss of accuracy, greatly accelerating simulations involving CRLs. To demonstrate the applicability and accuracy of the FrFT, the imaging resolution of a CRL-based imaging system is estimated, and the FrFT approach is shown to be significantly more precise than comparable approaches using geometrical optics. Second, it is shown that extensive FrFT simulations of complex systems involving coherence and/or non-monochromatic sources can be carried out in minutes. Specifically, the chromatic aberrations as a function of source bandwidth are estimated, and it is found that geometrical optics greatly overestimates the aberration for energy bandwidths of around 1%.
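The additive index property can be illustrated numerically with the matrix fractional-power definition of the discrete FrFT (one of several discrete definitions, and not the fast algorithm one would use for actual wavefront propagation): since F^a F^b = F^(a+b), a chain of propagation steps collapses into a single transform.

```python
import numpy as np

def dft_matrix(n):
    """Unitary n x n DFT matrix."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)

def frft_matrix(n, a):
    """Discrete FrFT of order a via the matrix fractional power:
    F = V diag(w) V^{-1}  ->  F^a = V diag(w**a) V^{-1}."""
    w, V = np.linalg.eig(dft_matrix(n))
    return V @ np.diag(w ** a) @ np.linalg.inv(V)
```

Order a = 1 recovers the ordinary DFT, and composing orders 0.3 and 0.7 reproduces order 1.0, which is the single-step property the paper exploits for multi-component optical systems.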


1993 ◽  
Vol 8 (12) ◽  
pp. 1038-1046
Author(s):  
William E. Crouse ◽  
J. Lindsay Cook ◽  
James D. Gerard ◽  
Denise A. Paschal

Author(s):  
J.W. Wong ◽  
W.R. Binns ◽  
A.Y. Chengl ◽  
L.Y. Geer ◽  
J.W. Epstein ◽  
...  

2016 ◽  
Vol 01 (03) ◽  
pp. 1640009 ◽  
Author(s):  
Caspar Gruijthuijsen ◽  
Benoît Rosa ◽  
Phuong Toan Tran ◽  
Jos Vander Sloten ◽  
Emmanuel Vander Poorten ◽  
...  

Catheter navigation is typically based on fluoroscopy, which implies exposure to harmful radiation, lack of depth perception, and limited soft-tissue contrast. Catheter navigation would benefit from guidance that makes better use of detailed pre-operatively acquired MR/CT images, while reducing radiation exposure and improving spatial awareness of the catheter pose and shape. A prerequisite for such guidance is an accurate registration between the catheter tracking system and the MR/CT scans. Existing registration methods are lengthy and cumbersome, as they require a lot of user interaction, which forms a major obstacle to their adoption into clinical practice. This paper proposes a radiation-free registration method that minimizes the impact on the surgical workflow and avoids most user interaction. The method relies on catheters with embedded sensors that provide intra-operative data points belonging either to the vessel wall or to the lumen of the vessel. Based on the acquired surface and lumen points, an accurate registration is computed automatically with minimal user interaction. Validation of the proposed method is performed on a synthetic yet realistic aorta phantom. Input from electromagnetic tracking, force sensing, and intra-vascular ultrasound is used as intra-operative sensory data.
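The least-squares rigid alignment at the heart of such a registration can be illustrated with the Kabsch algorithm for aligning corresponding point sets. The paper's automatic surface/lumen-based registration is more involved; this sketch assumes known point correspondences between tracker space and image space:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ P @ R.T + t (Kabsch).
    P: N x 3 points in tracker space, Q: corresponding N x 3 points in CT space."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Given a rotation and translation relating the two coordinate frames, this closed-form solution recovers them exactly from noise-free correspondences, and in the least-squares sense from noisy ones.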


2018 ◽  
Vol 39 (2) ◽  
pp. 24-28
Author(s):  
Zhang Baoyi ◽  
Mu Wei ◽  
Wang Hu ◽  
Yao Linhai ◽  
Liu Tong
