Pulmonary Dynamics of Anatomical Structures of Interest in 4DCT Images

2017
Author(s):
S. Hernández Juárez

This paper presents an application of the Chan-Vese algorithm for the semi-automatic segmentation of anatomical structures of interest (lungs and lung tumor) in thorax 4DCT images, as well as their three-dimensional reconstruction. Segmentations and reconstructions were performed on 10 CT images, which together make up an inspiration-expiration cycle. The maximum displacement of the lung tumor was calculated using the reconstructions at the beginning of inspiration and the beginning of expiration, together with the voxel size information. The proposed method was able to successfully segment the studied structures regardless of their size and shape. The three-dimensional reconstruction allows us to visualize the dynamics of the structures of interest throughout the respiratory cycle. In the near future, we expect to gather further evidence of the good performance of the proposed segmentation approach and to obtain feedback from a clinical expert, given that knowledge of the characteristics of anatomical structures, such as their size and spatial location, may help in the planning of radiotherapy (RT) treatments, optimizing the radiation dose delivered to cancer cells while minimizing the dose to healthy organs. Therefore, the information obtained in this work may be of interest for the planning of RT treatments.
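The abstract names the two computational steps but not their implementation. As a rough, hedged sketch of how those steps could look in Python with scikit-image and SciPy (the array names, normalization, and voxel spacing below are illustrative assumptions, not the authors' code):

```python
# Hedged sketch, not the authors' implementation: Chan-Vese segmentation of one
# normalized CT slice, and tumor displacement from two binary masks plus voxel size.
import numpy as np
from skimage.segmentation import chan_vese          # region-based active contour
from scipy.ndimage import center_of_mass

def segment_slice(ct_slice):
    """Run Chan-Vese on a 2D slice scaled to [0, 1]; returns a boolean mask."""
    return chan_vese(ct_slice, mu=0.25, lambda1=1.0, lambda2=1.0, tol=1e-3)

def tumor_displacement_mm(mask_inspiration, mask_expiration, voxel_size_mm):
    """Euclidean distance between tumor centroids, scaled by the voxel size (mm)."""
    c_in = np.array(center_of_mass(mask_inspiration))
    c_ex = np.array(center_of_mass(mask_expiration))
    return float(np.linalg.norm((c_in - c_ex) * np.asarray(voxel_size_mm)))

# Example with a hypothetical voxel spacing (slice thickness, row, column) in mm:
# tumor_displacement_mm(mask_insp, mask_exp, (2.5, 0.98, 0.98))
```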


Sensors
2020
Vol 20 (10)
pp. 2962
Author(s):
Santiago González Izard
Ramiro Sánchez Torres
Óscar Alonso Plaza
Juan Antonio Juanes Méndez
Francisco José García-Peñalvo

The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must be segmented. Currently, manual segmentation, which is the most commonly used technique, and semi-automatic approaches can be time-consuming because a doctor is required, making segmentation of each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for the segmentation algorithms and augmented and virtual reality for the visualization techniques, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. As a result, the Nextmed project, through its different software applications, permits the importation of Digital Imaging and Communications in Medicine (DICOM) images onto a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then automatically generated, which can be 3D-printed or visualized with the designed software systems using both augmented and virtual reality. The Nextmed project is unique, as it covers the whole process from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There is much research on applying augmented and virtual reality to the 3D visualization of medical images; however, those systems are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study. Analyzing the application of the platform to more than 1000 DICOM images and studying the results with medical specialists, we concluded that the installation of this system in hospitals would provide a considerable improvement as a tool for medical image visualization.
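The abstract describes automatic generation of a printable 3D mesh from a segmentation, without implementation details. A minimal, hedged sketch of that step, assuming a hypothetical boolean volume `lung_mask` and voxel spacing already read from the DICOM headers (this is not the Nextmed code):

```python
# Illustrative sketch: binary segmentation mask -> surface mesh (e.g., STL for
# 3D printing or import into an AR/VR viewer). Not the platform's actual code.
import numpy as np
from skimage.measure import marching_cubes
import trimesh

def mask_to_mesh(mask, spacing=(1.0, 1.0, 1.0)):
    """Extract an isosurface from a binary mask and return a triangle mesh."""
    verts, faces, _, _ = marching_cubes(mask.astype(np.uint8), level=0.5,
                                        spacing=spacing)
    return trimesh.Trimesh(vertices=verts, faces=faces)

# mesh = mask_to_mesh(lung_mask, spacing=(slice_thickness, pixel_dy, pixel_dx))
# mesh.export("lung.stl")
```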


2016
Vol 35 (1)
pp. 337-353
Author(s):
Jiangdian Song
Caiyun Yang
Li Fan
Kun Wang
Feng Yang
...

Author(s):  
D. L. Collins
A. C. Evans

Magnetic resonance imaging (MRI) has become the modality of choice for neuro-anatomical imaging. Quantitative analysis requires the accurate and reproducible labeling of all voxels in any given structure within the brain. Since manual labeling is prohibitively time-consuming and error-prone, we have designed an automated procedure called ANIMAL (Automatic Nonlinear Image Matching and Anatomical Labeling) to objectively segment gross anatomical structures from 3D MRIs of normal brains. The procedure is based on nonlinear registration with a previously labeled target brain, followed by numerical inverse transformation of the labels to the native MRI space. Besides segmentation, ANIMAL has been applied to non-rigid registration and to the analysis of morphometric variability. In this paper, the nonlinear registration approach is validated on five test volumes produced with simulated deformations. Experiments show that ANIMAL recovers 64% of the nonlinear residual variability remaining after linear registration. Segmentations of the same test data are presented as well. The paper concludes with two applications of ANIMAL using real data. In the first, one MRI volume is nonlinearly matched to a second and is automatically segmented using labels predefined on the second MRI volume. The automatic segmentation compares well with manual labeling of the same structures. In the second application, ANIMAL is applied to seventeen MRI data sets, and a 3D map of anatomical variability estimates is produced. The automatic variability estimates correlate well (r = 0.867, p = 0.01) with manual estimates of inter-subject variability.
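ANIMAL's own nonlinear registration engine is not reproduced here. As a hedged stand-in for the general idea it describes (register a labeled target to a subject, then resample the labels onto the subject grid with nearest-neighbour interpolation), the following SimpleITK sketch uses a B-spline deformable registration; that choice is an assumption, not the method of the paper:

```python
# Hedged sketch of atlas-based label propagation (stand-in for ANIMAL's approach).
import SimpleITK as sitk

def propagate_labels(subject_img, atlas_img, atlas_labels):
    """Deformably register the atlas to the subject, then warp the atlas labels."""
    fixed = sitk.Cast(subject_img, sitk.sitkFloat32)
    moving = sitk.Cast(atlas_img, sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                             numberOfIterations=100)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(sitk.BSplineTransformInitializer(fixed, [8, 8, 8]))

    tx = reg.Execute(fixed, moving)
    # Nearest-neighbour interpolation keeps the warped labels integer-valued.
    return sitk.Resample(atlas_labels, subject_img, tx,
                         sitk.sitkNearestNeighbor, 0)
```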


2015
Vol 1 (1)
Author(s):
Ameneh Boroomand
Alexander Wong
Kostadinka Bizheva

Keratocytes are vital for maintaining the overall health of the human cornea, as they preserve corneal transparency and help in healing corneal injuries. Manual segmentation of keratocytes is challenging and time-consuming, and it requires an expert. Here, we propose a novel semi-automatic segmentation framework, called Conditional Random Field Weakly Supervised Segmentation (CRF-WSS), to perform keratocyte cell segmentation. The proposed framework exploits the concept of dictionary learning in a sparse model along with Conditional Random Field (CRF) modeling to segment keratocyte cells in Ultra-High-Resolution Optical Coherence Tomography (UHR-OCT) images of the human cornea. The results show higher accuracy for the proposed CRF-WSS framework compared to the other tested Supervised Segmentation (SS) and Weakly Supervised Segmentation (WSS) methods.
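The CRF-WSS framework itself is not reproduced here. As a hedged illustration of the dictionary-learning/sparse-coding ingredient the abstract mentions, the following scikit-learn sketch learns a patch dictionary from an OCT image and returns sparse codes that a downstream weakly supervised classifier or CRF stage could consume; the image, patch size, and dictionary size are illustrative assumptions:

```python
# Illustrative sketch of sparse coding over image patches; not the CRF-WSS code.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

def sparse_codes(oct_image, patch_size=(8, 8), n_atoms=64):
    """Learn a patch dictionary and return per-patch sparse codes."""
    patches = extract_patches_2d(oct_image, patch_size, max_patches=5000)
    X = patches.reshape(len(patches), -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)            # remove per-patch DC offset
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5)
    codes = dico.fit(X).transform(X)              # sparse representation per patch
    return codes, dico.components_                # codes + learned dictionary atoms
```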


2021
Vol 162 (16)
pp. 623-628
Author(s):
Ádám Perényi
Bálint Posta
Linda Szabó
Zoltán Tóbiás
Balázs Dimák
...

Summary. Introduction: The pars petrosa of the human temporal bone is a structure of complex and diverse anatomy. Prior to surgical interventions, in order to prevent surgical complications, it is essential to acquire sound anatomical knowledge and dexterity, as well as to practice each surgical step and movement. The VOXEL-MAN Tempo 3D simulator uses virtual reality and robotics to provide an opportunity to practice. Objective: In 2019, the University of Szeged installed a VOXEL-MAN Virtual Reality simulator at the Medical Skills Development Center. After familiarizing themselves with the VOXEL-MAN Tempo simulator, the authors present the device and articulate their requirements for interventions performed with the simulator. Method: The VOXEL-MAN Tempo simulator is evaluated according to the formulated criteria, and the role assigned to it in practical training is determined. Results: The simulator shows the anatomy of the temporal bone virtually yet realistically: the real spatial location of the important anatomical structures and their distances from each other and from the surgical instrument. The system allows ear surgery to be performed realistically (two-handed bone work with a drill and suction) with tactile (vibration) and visual (bleeding) feedback. One can improve surgical skills with one- or two-handed tasks. Bone work in ear surgeries can be performed in a reproducible manner from routine, high-resolution computed tomography of the temporal bone of a real patient. Conclusion: In our experience, the simulator is excellent for practicing each surgical step. In the future, we intend to use this virtual system in undergraduate and postgraduate training in otolaryngology. Orv Hetil. 2021; 162(16): 623–628.


2020
Author(s):
Elisabeth Pfaehler
Liesbet Mesotten
Gem Kramer
Michiel Thomeer
Karolien Vanhove
...

Abstract Background: Positron emission tomography (PET) is routinely used for cancer staging and treatment follow-up. Metabolically active tumor volume (MATV), total MATV (TMATV, including primary tumor, lymph nodes, and metastases), and/or total lesion glycolysis (TLG) derived from PET images have been identified as prognostic factors or used for the evaluation of treatment efficacy in cancer patients. To this end, a segmentation approach with high precision and repeatability is important. However, the implementation of a repeatable and accurate segmentation algorithm remains an ongoing challenge. Methods: In this study, we compare two semi-automatic artificial intelligence (AI) based segmentation methods with conventional semi-automatic segmentation approaches in terms of repeatability. One approach is based on a textural feature (TF) segmentation method designed for accurate and repeatable segmentation of primary tumors and metastases. In addition, a convolutional neural network (CNN) is trained. The algorithms are trained, validated, and tested using a lung cancer PET dataset. The segmentation accuracy of both approaches is compared using the Jaccard coefficient (JC). Additionally, the approaches are externally tested on a fully independent test-retest dataset. The repeatability of the methods is compared with that of two majority-vote approaches (MV2, MV3), a 41% SUVmax threshold (41%SUVMAX), and an SUV > 4 segmentation (SUV4). Repeatability is assessed with test-retest coefficients (TRT%) and the intraclass correlation coefficient (ICC). An ICC > 0.9 was regarded as representing excellent repeatability. Results: The accuracy of the segmentations with respect to the reference segmentation was good (median JC, TF: 0.7; CNN: 0.73). Both segmentation approaches outperformed most other conventional segmentation methods in terms of the test-retest coefficient (mean TRT%, TF: 13.0%; CNN: 13.9%; MV2: 14.1%; MV3: 28.1%; 41%SUVMAX: 28.1%; SUV4: 18.1%) and the ICC (TF: 0.98; MV2: 0.97; CNN: 0.99; MV3: 0.73; SUV4: 0.81; 41%SUVMAX: 0.68). Conclusion: The semi-automatic AI-based segmentation approaches used in this study provided better repeatability than the conventional segmentation approaches. Moreover, both algorithms lead to accurate segmentations of both primary tumors and metastases and are therefore good candidates for PET tumor segmentation.
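The AI-based methods themselves are not reproduced here. As a hedged illustration of two of the conventional baselines and the repeatability metric named in the abstract (SUV > 4, 41% of SUVmax, and TRT%), assuming hypothetical SUV arrays and a voxel volume:

```python
# Illustrative sketch of conventional PET thresholding baselines and TRT%;
# not the study's TF or CNN segmentation code.
import numpy as np

def seg_suv4(suv):
    """Fixed-threshold segmentation: voxels with SUV > 4."""
    return suv > 4.0

def seg_41pct_suvmax(suv):
    """Relative-threshold segmentation: voxels above 41% of SUVmax."""
    return suv > 0.41 * suv.max()

def trt_percent(value_test, value_retest):
    """Test-retest difference relative to the pair mean, in percent."""
    return 100.0 * (value_test - value_retest) / (0.5 * (value_test + value_retest))

# Hypothetical usage with two scans of the same patient:
# matv_test   = seg_suv4(suv_test).sum()   * voxel_volume_ml
# matv_retest = seg_suv4(suv_retest).sum() * voxel_volume_ml
# print(trt_percent(matv_test, matv_retest))
```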


2020
Vol 10 (1)
Author(s):
Jörg Sander
Bob D. de Vos
Ivana Išgum

Abstract: Segmentation of cardiac anatomical structures in cardiac magnetic resonance images (CMRI) is a prerequisite for automatic diagnosis and prognosis of cardiovascular diseases. To increase the robustness and performance of segmentation methods, this study combines automatic segmentation with assessment of segmentation uncertainty in CMRI to detect image regions containing local segmentation failures. Three existing state-of-the-art convolutional neural networks (CNN) were trained to automatically segment cardiac anatomical structures and to obtain two measures of predictive uncertainty: entropy and a measure derived from MC-dropout. Thereafter, using the uncertainties, another CNN was trained to detect local segmentation failures that potentially need correction by an expert. Finally, manual correction of the detected regions was simulated in the complete set of scans of 100 patients and manually performed in a random subset of scans of 50 patients. Using publicly available CMR scans from the MICCAI 2017 ACDC challenge, the impact of the CNN architecture and loss function used for segmentation, as well as of the uncertainty measure, was investigated. Performance was evaluated using the Dice coefficient, the 3D Hausdorff distance, and clinical metrics between manual and (corrected) automatic segmentation. The experiments reveal that combining automatic segmentation with manual correction of detected segmentation failures results in improved segmentation and a 10-fold reduction of expert time compared to manual expert segmentation.
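The trained networks and the failure-detection CNN are not reproduced here. As a hedged sketch of the two uncertainty measures the abstract names (per-voxel entropy and an MC-dropout based estimate), assuming a hypothetical array `probs` of shape (T, C, H, W) holding T stochastic forward passes with dropout enabled over C classes:

```python
# Illustrative uncertainty maps from MC-dropout samples; not the authors' code.
import numpy as np

def entropy_map(probs, eps=1e-12):
    """Voxel-wise entropy of the mean predictive distribution (shape (H, W))."""
    p_mean = probs.mean(axis=0)                               # (C, H, W)
    return -(p_mean * np.log(p_mean + eps)).sum(axis=0)       # (H, W)

def mc_dropout_uncertainty(probs):
    """Simple MC-dropout surrogate: per-voxel variance across the T samples,
    summed over classes (shape (H, W))."""
    return probs.var(axis=0).sum(axis=0)
```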

