registration accuracy
Recently Published Documents


TOTAL DOCUMENTS

396
(FIVE YEARS 142)

H-INDEX

23
(FIVE YEARS 5)

Author(s):  
Minki Lee ◽  
Sajjan Parajuli ◽  
Hyeokgyun Moon ◽  
Ryungeun Song ◽  
Saebom Lee ◽  
...  

Abstract The rheological properties of silver inks are analyzed, and printing results are presented as a function of ink formulation and roll-to-roll printing speed. The shear viscosity, shear modulus, and extensional viscosity of the inks are measured using rotational and extensional rheometers. The inks behave as shear-thinning power-law fluids because the concentration of dispersed nanoparticles in the solvent is sufficiently low, which minimizes elasticity. After the inks are printed on a flexible substrate by gravure printing, optical images, surface profiles, and electrical resistances of the printed patterns are obtained. The width and height of the printed pattern change with ink viscosity, whereas the printing speed does not significantly affect widening. The drag-out tail is reduced at high ink viscosities and fast printing speeds, thereby improving printed pattern quality in the roll-to-roll process. Based on these results, we suggest ink formulations and printing conditions that yield high printing quality in demanding applications such as overlay printing, where registration accuracy is degraded by pattern widening and drag-out tails in the printed patterns.
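The shear-thinning power-law behavior described in this abstract can be sketched numerically. The consistency index `K` and flow behavior index `n` below are illustrative values, not the paper's measurements; `n < 1` is what makes the fluid shear thinning:

```python
import numpy as np

# Power-law (Ostwald-de Waele) model for a shear-thinning ink:
#   eta(gamma_dot) = K * gamma_dot**(n - 1)
# K and n here are made-up example values, not fitted data.

def power_law_viscosity(shear_rate, K=2.5, n=0.6):
    """Apparent viscosity (Pa*s) at a given shear rate (1/s)."""
    return K * shear_rate ** (n - 1.0)

# Viscosity falls monotonically with shear rate when n < 1.
shear_rates = np.logspace(0, 4, 5)          # 1 ... 10^4 1/s
viscosities = power_law_viscosity(shear_rates)
```

Fitting `K` and `n` to rheometer data would be a log-log linear regression of viscosity against shear rate.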


2022 ◽  
Vol 11 ◽  
Author(s):  
Laura Cercenelli ◽  
Federico Babini ◽  
Giovanni Badiali ◽  
Salvatore Battaglia ◽  
Achille Tarsitano ◽  
...  

Background: Augmented Reality (AR) represents an evolution of navigation-assisted surgery, providing surgeons with a virtual aid contextually merged with the real surgical field. We recently reported a case series of AR-assisted fibular flap harvesting for mandibular reconstruction. However, the registration accuracy between the real and the virtual content needs to be systematically evaluated before this tool can be widely promoted in clinical practice. In this paper, after describing the AR-based protocol implemented for both a tablet and HoloLens 2 smart glasses, we evaluated in a first test session the achievable registration accuracy with the two display solutions, and in a second test session the success rate in executing the AR-guided skin paddle incision task on a 3D-printed leg phantom. Methods: From a real computed tomography dataset, 3D virtual models of a human leg, including the fibula, arteries, and skin with the planned paddle profile for harvesting, were obtained. All virtual models were imported into the Unity software to develop a marker-less AR application suitable for use both via tablet and via the HoloLens 2 headset. The registration accuracy of both solutions was verified on a 3D-printed leg phantom obtained from the virtual models by repeatedly applying the tracking function and computing pose deviations between the AR-projected virtual skin paddle profile and the real one transferred to the phantom via a CAD/CAM cutting guide. The success rate in completing the AR-guided task of skin paddle harvesting was evaluated using CAD/CAM templates positioned on the phantom surface. Results: On average, the marker-less AR protocol showed comparable registration errors (within 1-5 mm) for the tablet-based and HoloLens-based solutions. Registration accuracy appears to be quite sensitive to ambient light conditions. We found a good success rate in completing the AR-guided task within an error margin of 4 mm (97% and 100% for tablet and HoloLens, respectively). All subjects reported greater usability and ergonomics for the HoloLens 2 solution. Conclusions: The results revealed that the proposed marker-less AR-based protocol may guarantee a registration error within 1-5 mm for assisting skin paddle harvesting in the clinical setting. Optimal lighting conditions and further improvement of marker-less tracking technologies have the potential to increase the efficiency and precision of this AR-assisted reconstructive surgery.
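The pose-deviation evaluation described above amounts to comparing an AR-projected contour against a reference contour. A minimal sketch of one such error metric, the mean nearest-neighbour distance, using synthetic contours rather than the study's data:

```python
import numpy as np

# Registration error between an AR-projected contour and the reference
# contour, as the mean nearest-neighbour point distance (here in mm).
# Both contours are synthetic stand-ins: a circle and a shifted copy.

def mean_surface_distance(projected, reference):
    """Mean distance from each projected point to its closest reference point."""
    d = np.linalg.norm(projected[:, None, :] - reference[None, :, :], axis=-1)
    return d.min(axis=1).mean()

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
reference = np.stack([30 * np.cos(theta), 30 * np.sin(theta)], axis=1)
projected = reference + np.array([2.0, 0.0])   # simulate a 2 mm lateral offset

err = mean_surface_distance(projected, reference)
```

For a pure translation the metric is bounded above by the offset magnitude, since each projected point always has its own counterpart available as a candidate nearest neighbour.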


2022 ◽  
Vol 21 ◽  
pp. 153303382110673
Author(s):  
Hayate Washio ◽  
Shingo Ohira ◽  
Yoshinori Funama ◽  
Yoshihiro Ueda ◽  
Masahiro Morimoto ◽  
...  

Introduction: Several studies have reported the relation between imaging dose and secondary cancer risk and have emphasized the need to keep the additional imaging dose as low as reasonably achievable. The iterative cone-beam computed tomography (iCBCT) algorithm can improve image quality through scatter correction and statistical reconstruction. We investigate the use of a novel iCBCT reconstruction algorithm to reduce the patient dose while maintaining low-contrast detectability and registration accuracy. Methods: Catphan and anthropomorphic phantoms were analyzed. All CBCT images were acquired at varying dose levels and reconstructed with a Feldkamp–Davis–Kress algorithm-based CBCT (FDK-CBCT) and iCBCT. Low-contrast detectability was assessed subjectively on a 9-point scale by 4 reviewers and objectively using the structural similarity index (SSIM). The soft-tissue-based registration error was analyzed for each dose level and reconstruction technique. Results: Subjective low-contrast detectability for iCBCT acquired at two-thirds of the full dose was superior to that for FDK-CBCT acquired at the full dose (6.4 vs 5.4). Relative to FDK-CBCT acquired at the full dose, SSIM was higher for iCBCT acquired at one-sixth of the dose in the head and head-and-neck regions, and equivalent for iCBCT acquired at two-thirds of the dose in the pelvis region. The soft-tissue-based registration error was 2.2 and 0.6 mm for FDK-CBCT and iCBCT, respectively. Conclusion: Use of the iCBCT reconstruction algorithm can generally reduce the patient dose by approximately two-thirds compared with conventional reconstruction methods while maintaining low-contrast detectability and registration accuracy.
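The SSIM used above as an objective detectability measure can be illustrated with a simplified global (single-window) form of the index. The study would use the standard sliding-window SSIM; the phantom image and noise level below are invented for illustration:

```python
import numpy as np

# Simplified global SSIM: one statistic over the whole image instead
# of the usual sliding window, but the same luminance/contrast/
# structure terms. data_range is the nominal dynamic range.

def ssim_global(x, y, data_range, k1=0.01, k2=0.03):
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
full_dose = np.zeros((64, 64))
full_dose[24:40, 24:40] = 1.0                    # low-contrast insert
low_dose = full_dose + rng.normal(0.0, 0.2, full_dose.shape)  # dose-related noise

score = ssim_global(low_dose, full_dose, data_range=1.0)
```

Identical images score 1.0; added noise pulls the score down, which is why SSIM against a full-dose reference tracks dose-related quality loss.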


2021 ◽  
Vol 8 ◽  
Author(s):  
Xiaojie Huang ◽  
Lizhao Mao ◽  
Xiaoyan Wang ◽  
Zhongzhao Teng ◽  
Minghan Shao ◽  
...  

Cardiovascular disease (CVD) is a common disease with a high mortality rate, and carotid atherosclerosis (CAS) is one of its leading causes. Multi-sequence carotid MRI can not only identify carotid atherosclerotic plaque constituents with high sensitivity and specificity but also capture different morphological features, which can effectively help doctors improve diagnostic accuracy. However, it is difficult to evaluate the evolution of local changes in carotid atherosclerosis across multi-sequence MRI because of the inconsistent parameters of different sequence images and the geometric mismatch caused by the motion of tissues and organs. To address these problems, we propose a cross-scale multi-modal image registration method based on a Siamese U-Net. The network uses sub-networks with image inputs of different sizes to extract various features, and a special padding module is designed to make the network trainable on cross-scale features. In addition, to improve registration performance, a multi-scale loss function under Gaussian smoothing is applied for optimization. For the experiments, we collected a multi-sequence MRI dataset from 11 patients with carotid atherosclerosis for a retrospective study and evaluated our architecture by cross-validation on this carotid dataset. The experimental results show that our method generates precise and reliable results with cross-scale multi-sequence inputs and that registration accuracy can be greatly improved by using the Gaussian smoothing loss function. The DSC of our Siamese structure reaches 84.1% on the carotid dataset with cross-size input. With the GDSC loss, the average DSC improves by 5.23%, while the average distance between fixed and moving landmarks decreases by 6.46%. Our code is publicly available at: https://github.com/MingHan98/Cross-scale-Siamese-Unet.
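The DSC figures reported above follow the standard overlap definition. A minimal sketch with toy binary masks, not the carotid data:

```python
import numpy as np

# Dice similarity coefficient between a warped (moved) segmentation
# and the fixed-image segmentation. The masks are toy 2D arrays.

def dice(a, b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

fixed = np.zeros((32, 32), dtype=bool)
fixed[8:24, 8:24] = True
moved = np.zeros((32, 32), dtype=bool)
moved[10:26, 8:24] = True            # same square, shifted 2 px vertically

score = dice(fixed, moved)           # 2*224 / (256 + 256) = 0.875
```

Perfect overlap gives 1.0, and the score degrades smoothly with residual misalignment, which is what makes it a convenient registration-quality summary.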


2021 ◽  
Author(s):  
Li Ding ◽  
Tony Kang ◽  
Ajay E. Kuriyan ◽  
Rajeev S. Ramchandran ◽  
Charles C. Wykoff ◽  
...  

<div>We propose a novel hybrid framework for registering retinal images in the presence of extreme geometric distortions that are commonly encountered in ultra-widefield (UWF) fluorescein angiography. Our approach consists of two stages: a feature-based global registration and a vessel-based local refinement. For the global registration, we introduce a modified RANSAC algorithm that jointly identifies robust matches between feature keypoints in reference and target images and estimates a polynomial geometric transformation consistent with the identified correspondences. Our RANSAC modification particularly improves feature point matching and the registration in peripheral regions that are most severely impacted by the geometric distortions. The second local refinement stage is formulated in our framework as a parametric chamfer alignment for vessel maps obtained using a deep neural network. Because the complete vessel maps contribute to the chamfer alignment, this approach not only improves registration accuracy but also aligns with clinical practice, where vessels are typically a key focus of examinations. We validate the effectiveness of the proposed framework on a new UWF fluorescein angiography (FA) dataset and on the existing narrow-field FIRE (fundus image registration) dataset and demonstrate that it significantly outperforms prior retinal image registration methods. The proposed approach enhances the utility of large sets of longitudinal UWF images by enabling: (a) automatic computation of vessel change metrics and (b) standardized and co-registered examination that can better highlight changes of clinical interest to physicians.</div>
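The parametric chamfer alignment stage minimizes, over transform parameters, a cost like the one sketched below: the mean distance from each moving vessel point to its nearest fixed vessel point. The point sets and shift are illustrative, and a practical implementation would typically precompute a distance transform of the vessel map instead of brute-force pairwise distances:

```python
import numpy as np

def chamfer_cost(moving_pts, fixed_pts):
    """Mean distance from each moving point to its nearest fixed point."""
    d = np.linalg.norm(moving_pts[:, None, :] - fixed_pts[None, :, :], axis=-1)
    return d.min(axis=1).mean()

# A straight "vessel" centerline and a copy displaced by 3 px in y.
fixed = np.stack([np.linspace(0.0, 100.0, 51), np.zeros(51)], axis=1)
moving = fixed + np.array([0.0, 3.0])

cost_before = chamfer_cost(moving, fixed)               # 3.0: the offset
cost_after = chamfer_cost(moving - [0.0, 3.0], fixed)   # 0.0: realigned
```

An optimizer over the transform parameters (here just the translation) drives the cost toward zero, which is the refinement behind the second stage.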


Author(s):  
Anna L. Roethe ◽  
Judith Rösler ◽  
Martin Misch ◽  
Peter Vajkoczy ◽  
Thomas Picht

Abstract Background: Augmented reality (AR) has the potential to support complex neurosurgical interventions by seamlessly integrating visual information. This study examines intraoperative visualization parameters and the clinical impact of AR in brain tumor surgery. Methods: Fifty-five intracranial lesions, operated on either with an AR-navigated microscope (n = 39) or conventional neuronavigation (n = 16) after randomization, were included prospectively. Surgical resection time, duration/type/mode of AR, displayed objects (n, type), pointer-based navigation checks (n), usability of control, quality indicators, and overall surgical usefulness of AR were assessed. Results: The AR display was used for 44.4% of resection time. The predominant AR type was navigation view (75.7%), followed by target volumes (20.1%). The predominant AR mode was picture-in-picture (PiP) (72.5%), followed by overlay display (23.3%). In 43.6% of cases, the view of important anatomical structures was partially or entirely blocked by AR information. A total of 7.7% of cases used MRI navigation only, 30.8% used one, 23.1% used two, and 38.5% used three or more object segmentations in AR navigation. A total of 66.7% of surgeons found AR visualization helpful in the individual surgical case. AR depth information and accuracy were rated acceptable (median 3.0 vs. median 5.0 in conventional neuronavigation). The mean utilization of the navigation pointer was 2.6×/resection hour (AR) vs. 9.7×/resection hour (neuronavigation); navigation effort was significantly reduced with AR (P < 0.001). Conclusions: The main benefit of HUD-based AR visualization in brain tumor surgery is the integrated continuous display allowing pointer-less navigation. Navigation view (PiP) provides the highest usability while blocking the operative field less frequently. Visualization quality will benefit from improvements in registration accuracy and depth impression. German clinical trials registration number: DRKS00016955.


2021 ◽  
Author(s):  
Wei Wei ◽  
Xu Haishan ◽  
Marko Rak ◽  
Christian Hansen

Abstract Background and Objective: Ultrasound (US) devices are often used in percutaneous interventions. Because of their low image quality, US image slices are aligned with pre-operative computed tomography/magnetic resonance imaging (CT/MRI) images to provide better visibility of anatomical structures during the intervention. This work aims to improve deep learning one-shot registration by using fewer loops through the networks. Methods: We propose two cascade networks that aim to improve registration accuracy with fewer loops. The InitNet-Regression-LoopNet (IRL) network applies a plane regression method to detect the orientation of the plane predicted in the previous loop, then corrects the input CT/MRI volume orientation and improves the prediction iteratively. The InitNet-LoopNet-MultiChannel (ILM) comprises two cascade networks: an InitNet trained with low-resolution images to perform coarse registration, and a LoopNet that wraps the high-resolution images and the result of the previous loop into a three-channel input and is trained to improve prediction accuracy in every loop. Results: We benchmark the two cascade networks on 1035 clinical images from 52 patients, yielding improved registration accuracy with LoopNet. The IRL achieved an average angle error of 13.3° and an average distance error of 4.5 millimeters. It outperforms the ILM network (angle error 17.4°, distance error 4.9 millimeters) and the InitNet (angle error 18.6°, distance error 4.9 millimeters). Our results show the efficiency of the proposed registration networks, which have the potential to improve the robustness and accuracy of intraoperative patient registration.
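The angle error reported above is plausibly the angle between the predicted and ground-truth plane normals, which can be sketched as follows. The normals are made-up examples, and the exact error definition in the paper may differ:

```python
import numpy as np

# Angle (degrees) between two plane normals, insensitive to which side
# of the plane the normal points to (hence the abs on the dot product).

def plane_angle_error_deg(n_pred, n_true):
    n_pred = n_pred / np.linalg.norm(n_pred)
    n_true = n_true / np.linalg.norm(n_true)
    cos = np.clip(abs(np.dot(n_pred, n_true)), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# A slightly tilted predicted plane vs. the true axial plane.
err = plane_angle_error_deg(np.array([0.0, 0.1, 1.0]),
                            np.array([0.0, 0.0, 1.0]))
```

Together with an in-plane distance term this gives the angle/distance error pair the abstract reports per network.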


2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Hongmei Yuan ◽  
Minglei Yang ◽  
Shan Qian ◽  
Wenxin Wang ◽  
Xiaotian Jia ◽  
...  

Abstract Background: Image registration is an essential step in the automated interpretation of brain computed tomography (CT) images of patients with acute cerebrovascular disease (ACVD). However, performing brain CT registration accurately and rapidly remains highly challenging because of large intersubject anatomical variations, the low resolution of soft tissues, and heavy computation costs. To this end, HSCN-Net, a hybrid supervised convolutional neural network, was developed for precise and fast brain CT registration. Method: HSCN-Net generated synthetic deformation fields using a simulator as one form of supervision for each reference–moving image pair to address the lack of gold standards. Furthermore, the simulator was designed to generate multiscale affine and elastic deformation fields to overcome the registration challenge posed by large intersubject anatomical deformation. Finally, HSCN-Net adopted a hybrid loss function combining deformation-field and image-similarity terms to improve registration accuracy and generalization capability. In this work, 101 CT images of patients were collected for model construction (57), evaluation (14), and testing (30). HSCN-Net was compared with the classical Demons and VoxelMorph models. Qualitative analysis through visual evaluation of critical brain tissues and quantitative analysis by determining the endpoint error (EPE) between the predicted and gold-standard sparse deformation vectors, the image normalized mutual information (NMI), and the Dice coefficient of the middle cerebral artery (MCA) blood supply area were carried out to assess model performance comprehensively. Results: HSCN-Net and Demons had better visual spatial matching performance than VoxelMorph, and HSCN-Net was more competent for smooth and large intersubject deformations than Demons. The mean EPE of HSCN-Net (3.29 mm) was less than that of Demons (3.47 mm) and VoxelMorph (5.12 mm); the mean Dice of HSCN-Net was 0.96, higher than that of Demons (0.90) and VoxelMorph (0.87); and the mean NMI of HSCN-Net (0.83) was slightly lower than that of Demons (0.84) but higher than that of VoxelMorph (0.81). Moreover, the mean registration time of HSCN-Net (17.86 s) was shorter than that of VoxelMorph (18.53 s) and Demons (147.21 s). Conclusion: The proposed HSCN-Net achieves accurate and rapid intersubject brain CT registration.
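The EPE used above is the mean Euclidean distance between predicted and gold-standard displacement vectors at landmark positions. A minimal sketch with toy landmark displacements:

```python
import numpy as np

# Endpoint error (EPE): mean Euclidean norm of the difference between
# corresponding predicted and gold-standard displacement vectors (mm).
# The two landmark displacement sets below are toy values.

def endpoint_error(pred, gold):
    """Mean Euclidean distance between corresponding displacement vectors."""
    return np.linalg.norm(pred - gold, axis=-1).mean()

gold = np.array([[1.0, 0.0, 0.0],
                 [0.0, 2.0, 0.0]])
pred = np.array([[1.0, 0.0, 3.0],
                 [0.0, 2.0, 4.0]])

epe = endpoint_error(pred, gold)   # (3 + 4) / 2 = 3.5
```

Unlike intensity-based scores such as NMI, EPE directly measures geometric accuracy at the annotated landmarks, which is why the abstract reports both.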

