Patient-Specific Simulation of Pneumoperitoneum for Laparoscopic Surgical Planning

2019 ◽  
Vol 43 (10) ◽  
Author(s):  
Shivali Dawda ◽  
Mafalda Camara ◽  
Philip Pratt ◽  
Justin Vale ◽  
Ara Darzi ◽  
...  

Abstract Gas insufflation in laparoscopy deforms the abdomen and stretches the overlying skin. This limits the use of surgical image-guidance technologies and complicates the appropriate placement of trocars, which influences the operative ease and potential quality of laparoscopic surgery. This work describes the development of a platform that simulates pneumoperitoneum in a patient-specific manner, using preoperative CT scans as input data. This aims to provide a more realistic representation of the intraoperative scenario and guide trocar positioning to optimize the ergonomics of laparoscopic instrumentation. The simulation was developed by generating 3D reconstructions of insufflated and deflated porcine CT scans and simulating an artificial pneumoperitoneum on the deflated model. Simulation parameters were optimized by minimizing the discrepancy between the simulated pneumoperitoneum and the ground truth model extracted from insufflated porcine scans. Insufflation modeling in humans was investigated by correlating the simulation’s output to real post-insufflation measurements obtained from patients in theatre. The simulation returned average errors of 7.26 mm and 10.5 mm in the most and least accurate datasets, respectively. In the context of the initial discrepancies without simulation (23.8 mm and 19.6 mm), the methods proposed here provide a significantly improved picture of the intraoperative scenario. The framework was also shown to be capable of simulating pneumoperitoneum in humans. This study proposes a method for realistically simulating pneumoperitoneum to achieve optimal ergonomics during laparoscopy. Although further studies to validate the simulation in humans are needed, there is the opportunity to provide a more realistic, interactive simulation platform for future image-guided minimally invasive surgery.
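As a rough illustration of the discrepancy metric described above, the average error can be taken as the mean distance from each simulated surface point to its nearest ground-truth point. A minimal sketch with toy point clouds (the paper's actual meshes and correspondence method are not reproduced here):

```python
import numpy as np

def mean_surface_error(simulated, ground_truth):
    """Mean distance (mm) from each simulated surface point to its
    nearest ground-truth point -- a simple stand-in for the discrepancy
    metric used to tune the insufflation simulation."""
    # Pairwise distances between the two point clouds (N x M).
    diffs = simulated[:, None, :] - ground_truth[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return dists.min(axis=1).mean()

# Toy example: ground truth shifted 2 mm along z from the simulation.
sim = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
gt = sim + np.array([0.0, 0.0, 2.0])
print(mean_surface_error(sim, gt))  # 2.0
```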

2010 ◽  
Author(s):  
Jan hendrik Moltz ◽  
Jan Rühaak ◽  
Christiane Engel ◽  
Ulrike Kayser ◽  
Heinz-Otto Peitgen

The development of segmentation algorithms for liver tumors in CT scans has attracted growing attention in recent years. The validation of these methods, however, is often treated as a subordinate task. In this article, we review existing approaches and present first steps towards a new methodology that evaluates the quality of an algorithm in relation to the variability of manual delineations. We obtained three manual segmentations for 50 liver lesions and computed the results of a segmentation algorithm. We compared all four masks with each other and with different ground truth estimates, and calculated scores according to the validation framework from the MICCAI 2008 challenge. Our results show cases where this more elaborate evaluation reflects segmentation quality more adequately than traditional approaches. The concepts can also be extended to other similar segmentation problems.
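The mask-versus-mask comparisons described above can be illustrated with a toy example: a simple majority-vote ground-truth estimate over the manual delineations, scored with the Dice overlap. This is only a sketch of the general idea; the MICCAI 2008 framework uses a larger set of metrics, and the ground-truth estimation there is more elaborate:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def majority_vote(masks):
    """Simple ground-truth estimate: a voxel counts as lesion
    if more than half of the raters marked it."""
    stack = np.stack(masks)
    return stack.sum(axis=0) > (len(masks) / 2.0)

# Toy 1D "masks" from three raters and one algorithm result.
r1 = np.array([1, 1, 1, 0, 0], bool)
r2 = np.array([1, 1, 0, 0, 0], bool)
r3 = np.array([1, 1, 1, 1, 0], bool)
algo = np.array([1, 1, 1, 0, 0], bool)

gt = majority_vote([r1, r2, r3])
print(dice(algo, gt))                        # 1.0
print(min(dice(r, gt) for r in (r1, r2, r3)))  # rater variability floor
```

Comparing the algorithm's score against the spread of the raters' own scores is what lets the evaluation judge an algorithm relative to manual variability rather than against a single arbitrary reference.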


Author(s):  
Erika Kollitz ◽  
Haegin Han ◽  
Chan Hyeong Kim ◽  
Marco Pinto ◽  
Marco Schwarz ◽  
...  

Abstract Objective: As cancer survivorship increases, there is growing interest in minimizing the late effects of radiation therapy such as radiogenic second cancer, which may occur anywhere in the body. Assessing the risk of late effects requires knowledge of the dose distribution throughout the whole body, including regions far from the treatment field, beyond the typical anatomical extent of clinical CT scans. Approach: A hybrid phantom was developed which consists of in-field patient CT images extracted from ground truth whole-body CT (WBCT) scans, out-of-field mesh phantoms scaled to basic patient measurements, and a blended transition region. Four of these hybrid phantoms were created, representing male and female patients receiving proton therapy treatment in pelvic and cranial sites. To assess the performance of the hybrid approach, we simulated treatments using the hybrid phantoms, the scaled and unscaled mesh phantoms, and the ground truth whole-body CTs. We calculated absorbed dose and equivalent dose in and outside of the treatment field, with a focus on neutrons induced in the patient by proton therapy. Proton and neutron dose was calculated using a general-purpose Monte Carlo code. Main Results: The hybrid phantom provided equal or superior accuracy in calculated organ dose and equivalent dose values, relative to those obtained using the mesh phantoms, in 78% of all selected organs and calculated dose quantities. By comparison, the default mesh and scaled mesh were equal or superior to the other phantoms in 21% and 28% of cases, respectively. Significance: The proposed methodology for hybrid synthesis provides a tool for whole-body organ dose estimation for individual patients without requiring CT scans of their entire body. Such a capability would be useful for personalized assessment of late effects and risk-optimization of treatment plans.
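The equivalent dose referred to above is the absorbed dose weighted per radiation type, H_T = Σ_R w_R · D_(T,R). A minimal sketch with hypothetical organ doses and illustrative weighting factors (for neutrons, the actual ICRP 103 factor is a continuous function of energy, not a single constant):

```python
def equivalent_dose(absorbed_dose_by_type, weighting_factors):
    """Equivalent dose H_T = sum over radiation types R of w_R * D_(T,R).
    Weighting factors here are illustrative placeholders; for neutrons
    w_R depends on energy (ICRP 103)."""
    return sum(weighting_factors[r] * d
               for r, d in absorbed_dose_by_type.items())

# Hypothetical out-of-field organ doses in Gy.
doses = {"proton": 0.010, "neutron": 0.002}
w = {"proton": 2.0, "neutron": 10.0}  # illustrative values only
print(equivalent_dose(doses, w))  # 0.04 Sv
```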


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems that involve people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones. Purpose: To develop a method of assessing the expected contributor quality in community tagging systems. The method should use only the generally unreliable and incomplete information provided by contributors (with ground truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing the expected contributor quality. The method is based on comparing tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method with the preference relation replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of a pairwise domination-characteristic matrix. Community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of community tagging system contributors (provided that the contributors' behavior fits the proposed model). Practical relevance: The obtained results can be used in the development of systems based on coordinated community efforts (primarily, community tagging systems).
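The eigenvector computation described above can be sketched with power iteration on a small positive matrix. The matrix values below are hypothetical, and this is only an illustration of the general mechanism, not the paper's exact domination characteristic:

```python
import numpy as np

def quality_scores(domination, iters=200):
    """Estimate contributor quality as the positive (Perron) eigenvector
    of a pairwise domination-characteristic matrix, via power iteration."""
    n = domination.shape[0]
    v = np.ones(n) / n
    for _ in range(iters):
        v = domination @ v
        v /= v.sum()          # normalise so the scores sum to 1
    return v

# Hypothetical matrix: entry (i, j) measures how strongly contributor
# i's tag sets dominate contributor j's on shared images.
D = np.array([[1.0, 0.8, 0.9],
              [0.2, 1.0, 0.6],
              [0.1, 0.4, 1.0]])
scores = quality_scores(D)
print(scores.argmax())  # contributor 0 ranks highest
```

Because the matrix is entrywise positive, the Perron-Frobenius theorem guarantees a unique positive dominant eigenvector, so the ranking is well defined.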


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Niksa Mohammadi Bagheri ◽  
Mahmoud Kadkhodaei ◽  
Shiva Pirhadi ◽  
Peiman Mosaddegh

Abstract The implantation of intracorneal ring segments (ICRS) is one of the successfully applied refractive operations for the treatment of keratoconus (KC) progression. The choice of ICRS type, along with the surgical implantation technique, can significantly affect surgical outcomes. Thus, this study aimed to investigate the influence of ICRS implantation technique and design on the postoperative biomechanical state and keratometry results. The clinical data of three patients with different stages and patterns of keratoconus were assessed to develop a three-dimensional (3D) patient-specific finite-element model (FEM) of the keratoconic cornea. For each patient, the exact surgical procedure was reproduced step by step in the FEM. Then, seven surgical scenarios, including different ICRS designs (complete and incomplete segment), with two surgical implantation methods (tunnel incision and lamellar pocket cut), were simulated. The pre- and postoperative predicted results of the FEM were validated against the corresponding clinical data. Average errors of 0.4% and 3.7% for the mean keratometry value ($$K_{\text{mean}}$$) were obtained for the pre- and postoperative results, respectively. Furthermore, the difference in induced flattening effects was negligible for three ICRS types (KeraRing segments with arc-lengths of 355, 320, and two separate 160) of equal thickness. In contrast, the single and double progressive thickness variants of KeraRing 160 caused a significantly lower flattening effect compared to the same type with constant thickness. The observations indicated that the greater the segment thickness and arc-length, the lower the induced mean keratometry values.
While the application of the tunnel incision method resulted in a lower $$K_{\text{mean}}$$ value for moderate and advanced KC, the maximum Von Mises stress induced on the postoperative cornea was two to five times higher than with the pocket incision and than in the preoperative state of the cornea. In particular, a progressive ICRS thickness generated an asymmetric regional Von Mises stress distribution on the corneal surface. These findings could be an early biomechanical sign of later corneal instability and ICRS migration. The developed methodology provides a platform to personalize ICRS refractive surgery with regard to the patient's keratoconus stage, in order to improve the efficiency and biomechanical stability of the surgery.
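The validation errors quoted above are relative errors between FEM-predicted and clinically measured keratometry. A trivial sketch of that comparison (the readings below are hypothetical, not taken from the paper):

```python
def kmean_error_percent(predicted, measured):
    """Relative error of an FEM-predicted mean keratometry value
    against the clinical measurement, in percent."""
    return abs(predicted - measured) / measured * 100.0

# Hypothetical pre-op reading: 48.0 D measured, 48.2 D predicted.
print(round(kmean_error_percent(48.2, 48.0), 2))  # 0.42
```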


2020 ◽  
Vol 6 (3) ◽  
pp. 284-287
Author(s):  
Jannis Hagenah ◽  
Mohamad Mehdi ◽  
Floris Ernst

Abstract Aortic root aneurysm is treated by replacing the dilated root with a grafted prosthesis that mimics the native root morphology of the individual patient. The challenge in predicting the optimal prosthesis size arises from the highly patient-specific geometry as well as from the absence of information on the original healthy root. Therefore, the estimation is only possible based on the available pathological data. In this paper, we show that representation learning with Conditional Variational Autoencoders is capable of turning the distorted geometry of the aortic root into smoother shapes while preserving the information on the individual anatomy. We evaluated this method using ultrasound images of the porcine aortic root alongside their labels. The observed results show highly realistic resemblance in shape and size to the ground truth images. Furthermore, the similarity index improved noticeably compared to the pathological images. This provides a promising technique for planning individual aortic root replacement.
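For reference, a Conditional Variational Autoencoder of the kind used above is trained by maximizing the conditional evidence lower bound (this is the standard formulation; the notation is not taken from the paper):

```latex
\mathcal{L}(x, c) =
  \mathbb{E}_{q_\phi(z \mid x, c)}\!\left[\log p_\theta(x \mid z, c)\right]
  - D_{\mathrm{KL}}\!\left(q_\phi(z \mid x, c) \,\middle\|\, p(z \mid c)\right)
```

where $x$ is the pathological root image, $c$ the conditioning information, $z$ the latent code, and $q_\phi$, $p_\theta$ the encoder and decoder. The KL term regularizes the latent space, which is what allows decoded shapes to come out smoother while the condition preserves the individual anatomy.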


2021 ◽  
Vol 22 (Supplement_1) ◽  
Author(s):  
D Zhao ◽  
E Ferdian ◽  
GD Maso Talou ◽  
GM Quill ◽  
K Gilbert ◽  
...  

Abstract Funding Acknowledgements: Type of funding sources: Public grant(s) – National budget only. Main funding source(s): National Heart Foundation (NHF) of New Zealand; Health Research Council (HRC) of New Zealand. Artificial intelligence shows considerable promise for automated analysis and interpretation of medical images, particularly in the domain of cardiovascular imaging. While application to cardiac magnetic resonance (CMR) has demonstrated excellent results, automated analysis of 3D echocardiography (3D-echo) remains challenging, due to the lower signal-to-noise ratio (SNR), signal dropout, and greater interobserver variability in manual annotations. As 3D-echo is becoming increasingly widespread, robust analysis methods will substantially benefit patient evaluation. We sought to leverage the high SNR of CMR to provide training data for a convolutional neural network (CNN) capable of analysing 3D-echo. We imaged 73 participants (53 healthy volunteers, 20 patients with non-ischaemic cardiac disease) under both CMR and 3D-echo (<1 hour between scans). 3D models of the left ventricle (LV) were independently constructed from CMR and 3D-echo, and used to spatially align the image volumes using least-squares fitting to a cardiac template. The resultant transformation was used to map the CMR mesh to the 3D-echo image. Alignment of mesh and image was verified through volume slicing and visual inspection (Fig. 1) for 120 paired datasets (including 47 rescans), each at end-diastole and end-systole. 100 datasets (80 for training, 20 for validation) were used to train a shallow CNN for mesh extraction from 3D-echo, optimised with a composite loss function consisting of normalised Euclidean distance (for 290 mesh points) and volume. Data augmentation was applied in the form of rotations and tilts (<15 degrees) about the long axis. The network was tested on the remaining 20 datasets (different participants) of varying image quality (Tab. 1).
For comparison, corresponding LV measurements from conventional manual analysis of 3D-echo and associated interobserver variability (for two observers) were also estimated. Initial results indicate that the use of embedded CMR meshes as training data for 3D-echo analysis is a promising alternative to manual analysis, with improved accuracy and precision compared with conventional methods. Further optimisations and a larger dataset are expected to improve network performance.

Tab. 1. LV mass and volume differences (means ± standard deviations) for 20 test cases. Algorithm: CNN – CMR (as ground truth).

(n = 20)              LV EDV (ml)    LV ESV (ml)    LV EF (%)    LV mass (g)
Ground truth CMR      150.5 ± 29.5   57.9 ± 12.7    61.5 ± 3.4   128.1 ± 29.8
Algorithm error       -13.3 ± 15.7   -1.4 ± 7.6     -2.8 ± 5.5   0.1 ± 20.9
Manual error          -30.1 ± 21.0   -15.1 ± 12.4   3.0 ± 5.0    Not available
Interobserver error   19.1 ± 14.3    14.4 ± 7.6     -6.4 ± 4.8   Not available

Abstract Figure. Fig 1. CMR mesh registered to 3D-echo.
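The composite loss described above combines a normalised point-distance term with a volume term. A minimal sketch under stated assumptions: the weighting `alpha`, the bounding-box volume proxy, and the normalisation scale are placeholders, not the authors' exact formulation:

```python
import numpy as np

def composite_loss(pred_points, true_points, alpha=1.0):
    """Sketch of a composite mesh loss: normalised Euclidean distance
    over the mesh points plus a volume-difference term."""
    # Normalised mean point-to-point Euclidean distance.
    scale = np.linalg.norm(true_points.max(0) - true_points.min(0))
    point_term = np.linalg.norm(pred_points - true_points, axis=1).mean() / scale

    # Volume proxy: relative axis-aligned bounding-box volume difference.
    def bbox_volume(p):
        return np.prod(p.max(0) - p.min(0))
    volume_term = (abs(bbox_volume(pred_points) - bbox_volume(true_points))
                   / bbox_volume(true_points))

    return point_term + alpha * volume_term

pts = np.random.rand(290, 3)     # 290 mesh points, as in the abstract
print(composite_loss(pts, pts))  # 0.0 for a perfect prediction
```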


2021 ◽  
Vol 11 (3) ◽  
pp. 1263
Author(s):  
Mateusz Wójcik ◽  
Dariusz Skaba ◽  
Małgorzata Skucha-Nowak ◽  
Marta Tanasiewicz ◽  
Rafał Wiench

Background: There exist few scientific reports on the quality of digitally reproduced dental arches, even though digital devices have been used in dentistry for many years. This study assesses the accuracy of standard dental arch model reproduction using both traditional and digital methods. Methods: The quality of full upper dental arch standard model reproduction by physical models obtained through traditional and digital methods was compared: gypsum models (SGM) and models printed from data obtained using an intraoral scanner (TPM) (n = 20). All models were scanned with a reference scanner. Comparisons were made using the Geomagic Control X program by measuring deviations of the models relative to the standard model and analyzing deviations of linear dimensions. Results: The average error of reproduction accuracy of the standard model ranged from 0.0424 ± 0.0102 millimeters (mm) (SGM) to 0.1059 ± 0.0041 mm (TPM). In the digital methods, all analyzed linear dimensions were shortened to a statistically significant degree compared to the traditional method. The SGM method produced the smallest, and TPM the largest, statistically significant deviations of linear dimensions from the standard. The intercanine dimension was reproduced with the lowest accuracy, and the intermolar with the highest, in each method. Conclusions: Traditional methods provided the highest reproduction trueness of the full dental arch and all analyzed linear dimensions. The intercanine dimension was reproduced with the lowest accuracy, and the intermolar with the highest, in each method, while digital methods shortened all analyzed linear dimensions.
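The linear-dimension analysis above amounts to computing signed deviations of each reproduced model's measurements from the standard model and summarising them as mean ± SD. A minimal sketch with hypothetical arch dimensions (not the study's data):

```python
import numpy as np

def deviation_stats(model_dims, reference_dims):
    """Mean and sample SD of signed deviations (mm) between measured
    linear dimensions of a reproduced model and the standard model."""
    d = np.asarray(model_dims) - np.asarray(reference_dims)
    return d.mean(), d.std(ddof=1)

# Hypothetical intercanine / interpremolar / intermolar distances in mm.
reference = [35.0, 42.0, 52.0]
printed = [34.9, 41.9, 51.95]   # a uniformly shortened digital model
mean, sd = deviation_stats(printed, reference)
print(round(mean, 3), round(sd, 3))
```

A consistently negative mean, as here, is the "shortening" pattern the study reports for digital methods.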


2014 ◽  
Vol 41 (6Part1) ◽  
pp. 061910 ◽  
Author(s):  
Uros Stankovic ◽  
Marcel van Herk ◽  
Lennert S. Ploeger ◽  
Jan-Jakob Sonke

2020 ◽  
Vol 36 (10) ◽  
pp. 3011-3017 ◽  
Author(s):  
Olga Mineeva ◽  
Mateo Rojas-Carulla ◽  
Ruth E Ley ◽  
Bernhard Schölkopf ◽  
Nicholas D Youngblut

Abstract Motivation Methodological advances in metagenome assembly are rapidly increasing the number of published metagenome assemblies. However, identifying misassemblies is challenging due to a lack of closely related reference genomes that could act as pseudo ground truth. Existing reference-free methods are no longer maintained, can make strong assumptions that may not hold across a diversity of research projects, and have not been validated on large-scale metagenome assemblies. Results We present DeepMAsED, a deep learning approach for identifying misassembled contigs without the need for reference genomes. Moreover, we provide an in silico pipeline for generating large-scale, realistic metagenome assemblies for comprehensive model training and testing. DeepMAsED accuracy substantially exceeds the state-of-the-art when applied to large and complex metagenome assemblies. Our model estimates a 1% contig misassembly rate in two recent large-scale metagenome assembly publications. Conclusions DeepMAsED accurately identifies misassemblies in metagenome-assembled contigs from a broad diversity of bacteria and archaea without the need for reference genomes or strong modeling assumptions. Running DeepMAsED is straightforward, as is model re-training with our dataset generation pipeline. Therefore, DeepMAsED is a flexible misassembly classifier that can be applied to a wide range of metagenome assembly projects. Availability and implementation DeepMAsED is available from GitHub at https://github.com/leylabmpi/DeepMAsED. Supplementary information Supplementary data are available at Bioinformatics online.
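A misassembly-rate estimate like the 1% figure above reduces to thresholding per-contig classifier scores. A minimal sketch with hypothetical scores (not DeepMAsED's actual output format or threshold):

```python
import numpy as np

def misassembly_rate(contig_scores, threshold=0.5):
    """Fraction of contigs flagged as misassembled, given per-contig
    misassembly probabilities from a classifier."""
    contig_scores = np.asarray(contig_scores)
    return (contig_scores >= threshold).mean()

# 200 hypothetical contigs, two of which score above the threshold.
scores = np.full(200, 0.05)
scores[[10, 90]] = 0.9
print(misassembly_rate(scores))  # 0.01, i.e. a 1% misassembly rate
```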

