target registration error
Recently Published Documents


TOTAL DOCUMENTS: 79 (five years: 25)
H-INDEX: 13 (five years: 3)

Life ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 747
Author(s):  
Cheng Xue ◽  
Fuk-Hay Tang ◽  
Christopher W. K. Lai ◽  
Lars J. Grimm ◽  
Joseph Y. Lo

Background: Handling the large breast deformations caused by differences in patient posture between imaging sessions plays a vital role in multimodal medical image registration with artificial intelligence (AI) initiatives. Building a breast biomechanical model that simulates large-scale soft-tissue deformation remains a challenge but is highly desirable. Methods: This study proposed a hybrid, individual-specific registration model of the breast that combines finite element analysis, property optimization, and affine transformation to register breast images. During registration, the mechanical properties of the breast tissues were assigned individually through an optimization process, making the model patient specific. Evaluation and results: The proposed method was extensively tested on two datasets collected from independent institutions, one in America and one in Hong Kong. Conclusions: Our method can accurately predict the deformation of breasts from the supine to the prone position for both the Hong Kong and American samples, with a small target registration error at the lesions.
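As a concrete illustration of the error metric used throughout these studies, the target registration error after an affine alignment can be sketched as follows (a minimal numpy example; the function and argument names are hypothetical and not taken from the paper):

```python
import numpy as np

def target_registration_error(moving_pts, fixed_pts, A, t):
    """Mean Euclidean distance between affinely registered landmarks
    and their ground-truth targets (e.g. lesion positions).

    moving_pts, fixed_pts: (N, 3) arrays of corresponding landmarks.
    A: 3x3 affine matrix; t: translation vector.
    """
    registered = moving_pts @ A.T + t  # apply the affine map row-wise
    return np.linalg.norm(registered - fixed_pts, axis=1).mean()

# Sanity check: a known pure translation, registered exactly, gives TRE = 0.
pts = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
t = np.array([1.0, -2.0, 0.5])
print(target_registration_error(pts, pts + t, np.eye(3), t))  # 0.0
```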


Micromachines ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 844
Author(s):  
Zhou An ◽  
Honghai Ma ◽  
Lilu Liu ◽  
Yue Wang ◽  
Haojian Lu ◽  
...  

Intra-operative target pose estimation is fundamental in minimally invasive surgery (MIS) to guide surgical robots. This task can be fulfilled by 2-D/3-D rigid registration, which aligns the anatomical structures between intra-operative 2-D fluoroscopy and pre-operative 3-D computed tomography (CT) with annotated target information. Although this technique has been researched for decades, achieving accuracy, robustness, and efficiency simultaneously remains challenging. In this paper, a novel orthogonal-view 2-D/3-D rigid registration framework is proposed that combines deep-learning-based dense reconstruction with GPU-accelerated 3-D/3-D rigid registration. First, we employ X2CT-GAN to reconstruct a target CT from two orthogonal fluoroscopy images. The generated target CT and the pre-operative CT are then fed into the 3-D/3-D rigid registration stage, which needs only a few iterations to converge to the global optimum. For further efficiency, we parallelize the 3-D/3-D registration algorithm and accelerate it on a GPU. For evaluation, a novel tool is employed to preprocess the public head CT dataset CQ500, and a CT-DRR dataset is presented as the benchmark. The proposed method achieves 1.65 ± 1.41 mm mean target registration error (mTRE), a 20% gross failure rate (GFR), and a 1.8 s running time. Our method outperforms state-of-the-art methods in most test cases. It is promising for localization and nano-manipulation by micro surgical robots in highly precise MIS.
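The two summary metrics reported above, mTRE and GFR, can be computed from per-case errors as in this short sketch (the 10 mm failure threshold is an illustrative assumption; the paper's actual threshold is not stated here):

```python
import numpy as np

def mtre_and_gfr(tre_per_case_mm, failure_threshold_mm=10.0):
    """Summarize 2-D/3-D registration accuracy over a test set.

    mTRE is the mean of the per-case target registration errors;
    GFR is the fraction of cases whose TRE exceeds a failure
    threshold. NOTE: the 10 mm default is an assumption.
    """
    tre = np.asarray(tre_per_case_mm, dtype=float)
    return tre.mean(), float((tre > failure_threshold_mm).mean())

mtre, gfr = mtre_and_gfr([1.2, 0.8, 2.5, 15.0])
print(round(mtre, 3), gfr)  # mTRE = 4.875 mm, GFR = 0.25
```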


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4085
Author(s):  
Marek Wodzinski ◽  
Izabela Ciepiela ◽  
Tomasz Kuszewski ◽  
Piotr Kedzierawski ◽  
Andrzej Skalski

Breast-conserving surgery requires supportive radiotherapy to prevent cancer recurrence. However, localizing the tumor bed to be irradiated is not trivial. Automatic image registration could significantly aid tumor bed localization and lower the radiation dose delivered to the surrounding healthy tissues. This study proposes a novel image registration method dedicated to breast tumor bed localization that addresses the problem of missing data due to tumor resection and may be applied to real-time radiotherapy planning. We propose a deep-learning-based nonrigid image registration method built on a modified U-Net architecture. The algorithm works simultaneously on several image resolutions to handle large deformations. Moreover, we propose a dedicated volume penalty that introduces medical knowledge about tumor resection into the registration process. The proposed method may be useful for improving real-time radiotherapy planning after tumor resection and, thus, lowering the irradiation of surrounding healthy tissues. The data used in this study consist of 30 computed tomography scans acquired from patients with diagnosed breast cancer, before and after tumor surgery. The method is evaluated using the target registration error between manually annotated landmarks, the ratio of tumor volume, and subjective visual assessment. We compare the proposed method to several other approaches and show that both the multilevel approach and the volume regularization improve the registration results. The mean target registration error is below 6.5 mm, and the relative volume ratio is close to zero. A registration time below 1 s enables real-time processing. These results improve on classical iterative methods and other learning-based approaches that do not introduce knowledge about tumor resection into the registration process.
In future research, we plan to propose a method for automatic localization of missing regions that may be used to automatically segment tumors in the source image and scars in the target image.
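The volume-ratio metric above can be illustrated schematically: if the deformation field's Jacobian determinant approximates the local volume change, then summing it over tumor voxels approximates the warped tumor volume, and the penalty drives the ratio to the original volume toward zero. A minimal numpy sketch under that assumption (not the authors' implementation; all names are hypothetical):

```python
import numpy as np

def relative_volume_ratio(tumor_mask, jacobian_det):
    """Warped tumor volume relative to its original volume.

    tumor_mask: boolean voxel mask of the tumor in the source image.
    jacobian_det: per-voxel Jacobian determinant of the deformation,
    approximating local volume change; its sum over tumor voxels
    approximates the tumor volume after warping.
    """
    return jacobian_det[tumor_mask].sum() / tumor_mask.sum()

# Toy example: uniform 50% local compression halves the tumor volume.
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True
jac = np.full((4, 4, 4), 0.5)
print(relative_volume_ratio(mask, jac))  # 0.5
```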


Author(s):  
Fabian Joeres ◽  
Tonia Mielke ◽  
Christian Hansen

Abstract Purpose: Resection site repair during laparoscopic oncological surgery (e.g. laparoscopic partial nephrectomy) poses unique challenges and opportunities for augmented reality (AR) navigation support. This work introduces an AR registration workflow that addresses the time pressure present during resection site repair. Methods: We propose a two-step registration process: the AR content is registered as accurately as possible prior to the tumour resection (the primary registration). This accurate registration is used to apply artificial fiducials to the physical organ and the virtual model. After the resection, these fiducials can be used for rapid re-registration (the secondary registration). We tested this pipeline in a simulated-use study with N = 18 participants, comparing the registration accuracy and speed of our method against landmark-based registration as a reference. Results: Acquiring the artificial fiducials, and thereby registering with them, was significantly faster than the initial use of anatomical landmarks. Our method also tended to be more accurate in cases in which the primary registration was successful. The accuracy loss between the elaborate primary registration and the rapid secondary registration was quantified as a mean target registration error increase of 2.35 mm. Conclusion: This work introduces a registration pipeline for AR navigation support during laparoscopic resection site repair and provides a successful proof-of-concept evaluation. Our results indicate that the concept is better suited than landmark-based registration during this phase, but further work is required to demonstrate clinical suitability and applicability.
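The fiducial-based re-registration step above amounts to a least-squares rigid fit between corresponding point sets. A minimal sketch using the standard Kabsch/SVD solution (illustrative only; the study's actual solver is not specified in the abstract):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t,
    via the standard Kabsch/SVD solution on corresponding fiducials."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Toy check: recover a 90-degree rotation about z plus a translation.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([2.0, 0.0, 0.0])
src = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
dst = src @ R_true.T + t_true
R, t = rigid_fit(src, dst)
print(np.allclose(R, R_true) and np.allclose(t, t_true))  # True
```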


2021 ◽  
Vol 11 ◽  
Author(s):  
Houssem-Eddine Gueziri ◽  
Oded Rabau ◽  
Carlo Santaguida ◽  
D. Louis Collins

Background: With the growing number of patients receiving surgical treatment for spinal metastatic tumours, there is a need to develop cost-efficient and radiation-free alternatives for spinal interventions. In this paper, we evaluate the capabilities and limitations of an image-guided neurosurgery (IGNS) system that uses intraoperative ultrasound (iUS) imaging for guidance. Methods: Using a lumbosacral section of a porcine cadaver, we explored the impact of CT image resolution, ultrasound depth, and ultrasound frequency on system accuracy, robustness, and effectiveness. Preoperative CT images were acquired at three isotropic resolutions. During surgery, vertebrae L1 to L6 were exposed. For each vertebra, five iUS scans were acquired using two depth settings (5 cm and 7 cm) and two frequencies (6 MHz and 12 MHz). A total of 120 acquisition trials were evaluated. Ultrasound-based registration performance was compared to the standard alignment procedure using intraoperative CT. We report target registration error (TRE) and computation time. In addition, the scans' trajectories were analyzed to identify the vertebral regions that provide the most relevant features for alignment. Results: Across all acquisitions, the median TRE ranged from 1.42 mm to 1.58 mm and the overall computation time was 9.04 s ± 1.58 s. Fourteen of the 120 iUS acquisitions (11.66%) yielded a level-to-level mismatch (these are included in the reported accuracy measurements). No significant effect on accuracy was found for CT resolution (F(2,10) = 1.70, p = 0.232), depth (F(1,5) = 0.22, p = 0.659), or frequency (F(1,5) = 1.02, p = 0.359). While misalignment increases linearly with distance from the imaged vertebra, accuracy was satisfactory for directly adjacent levels. A significant relationship was found between accuracy and the iUS scan's coverage of the laminae and articular processes. Conclusion: Intraoperative ultrasound can be used for spine surgery neuronavigation.
We demonstrated that the IGNS system yields acceptable accuracy and high efficiency compared to the standard CT-based navigation procedure. The flexibility of the iUS acquisitions can affect system performance in ways that are not yet fully characterized. Further investigation is needed to understand the relationship between iUS acquisition and alignment performance.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Michiro Yamamoto ◽  
Shintaro Oyama ◽  
Syuto Otsuka ◽  
Yukimi Murakami ◽  
Hideo Yokota ◽  
...  

Abstract: The purpose of this study was to develop and evaluate a novel elbow arthroscopy system that superimposes bone and nerve visualizations derived from preoperative computed tomography (CT) and magnetic resonance imaging (MRI) data. We obtained bone and nerve segmentation data by CT and MRI, respectively, from the elbow of a healthy human volunteer and of a cadaveric Japanese monkey. A life-size 3-dimensional (3D) model of the human organs and frame was constructed using a stereolithographic 3D printer. Elbow arthroscopy was performed on the elbow of the cadaveric Japanese monkey. The augmented reality (AR) registration error during rotation of the arthroscope was examined at a scope–object distance of 20 mm. We successfully performed AR arthroscopy on the life-size 3D elbow model and on the cadaveric monkey elbow via anteromedial and posterior portals. The target registration error was 1.63 ± 0.49 mm (range 1–2.7 mm) over lens-cylinder rotation angles from 40° to −40°. We attained reasonable accuracy and demonstrated the operation of the designed system. Given the multiple applications of AR-enhanced arthroscopic visualization, it has the potential to become a next-generation technology for arthroscopy. This technique will contribute to reducing the serious complications associated with elbow arthroscopy.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
S. G. Brouwer de Koning ◽  
F. Geldof ◽  
R. L. P. van Veen ◽  
M. J. A. van Alphen ◽  
L. H. E. Karssemakers ◽  
...  

Abstract: The purpose of this study was to evaluate the feasibility of electromagnetic (EM) navigation for guiding osteotomies in patients undergoing oncologic mandibular surgery. Preoperatively, a 3D-rendered model of the mandible was constructed from diagnostic computed tomography (CT) images. Cutting guides and patient-specific reconstruction plates were designed and printed for intraoperative use. Intraoperative patient registration was performed using a cone-beam CT (CBCT) scan. The location of the mandible was tracked with an EM sensor fixated to the mandible, and the real-time locations of both the mandible and a pointer were displayed on the navigation system. Accuracy was measured by pinpointing four anatomical landmarks and four landmarks on the cutting guide with the pointer on the patient and comparing these locations to the corresponding locations on the CBCT. Differences between actual and virtual locations were expressed as target registration error (TRE). The procedure was performed in eleven patients. TREs were 3.2 ± 1.1 mm and 2.6 ± 1.5 mm using the anatomical landmarks and the cutting-guide landmarks, respectively. The navigation procedure added on average half an hour to the duration of the surgery. This is the first study to report on the accuracy of EM navigation in patients undergoing mandibular surgery.


2021 ◽  
Vol 11 (4) ◽  
pp. 1892
Author(s):  
Ludovic Venet ◽  
Sarthak Pati ◽  
Michael D. Feldman ◽  
MacLean P. Nasrallah ◽  
Paul Yushkevich ◽  
...  

Histopathologic assessment routinely provides rich microscopic information about tissue structure and disease processes. However, the sections used are very thin and essentially capture only 2D representations of a given tissue sample. Accurate and robust alignment of sequentially cut 2D slices should enable more comprehensive assessment that accounts for the surrounding 3D information. Toward this end, we propose a two-step diffeomorphic registration approach that aligns differently stained histology slides to each other, starting with an initial affine step followed by estimation of a deformation field. It was quantitatively evaluated on ample (n = 481) and diverse data from the automatic non-rigid histological image registration challenge, where it was awarded second place. The results demonstrate the ability of the proposed approach to robustly (average robustness = 0.9898) and accurately (average relative target registration error = 0.2%) align differently stained histology slides of various anatomical sites while maintaining reasonable computational efficiency (<1 min per registration). The method was developed by adapting a general-purpose registration algorithm designed for 3D radiographic scans and achieved consistently accurate results when aligning high-resolution 2D histologic images. Accurate alignment of histologic images can contribute to a better understanding of the spatial arrangement and growth patterns of cells, vessels, matrix, nerves, and immune-cell interactions.
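The relative target registration error quoted above is a landmark error normalized by image size, so slides of different resolutions can be compared directly. A minimal sketch, assuming normalization by the image diagonal (the challenge's exact normalization may differ):

```python
import numpy as np

def relative_tre(moved_landmarks, target_landmarks, image_shape):
    """Mean landmark error as a percentage of the image diagonal."""
    tre = np.linalg.norm(moved_landmarks - target_landmarks, axis=1)
    diagonal = np.linalg.norm(np.asarray(image_shape, dtype=float))
    return 100.0 * tre.mean() / diagonal

# Toy example: a constant 10 px error on a 3000 x 4000 px slide
# (diagonal 5000 px) gives a relative TRE of 0.2%.
moved = np.array([[0.0, 0.0], [100.0, 100.0]])
target = np.array([[6.0, 8.0], [106.0, 108.0]])
print(relative_tre(moved, target, (3000, 4000)))  # 0.2
```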

