Computer Assisted Surgery
Recently Published Documents


TOTAL DOCUMENTS

515
(FIVE YEARS 96)

H-INDEX

32
(FIVE YEARS 4)

2021 ◽  
Vol 10 (23) ◽  
pp. 5584
Author(s):  
Achille Tarsitano ◽  
Francesco Ricotta ◽  
Paolo Spinnato ◽  
Anna Maria Chiesa ◽  
Maddalena Di Carlo ◽  
...  

An osteoma is a benign bone lesion with no clear pathogenesis, almost exclusive to the craniofacial area. Osteomas show very slow continuous growth, even in adulthood, unlike other bony lesions. Since these lesions are frequently asymptomatic, the diagnosis is usually made by plain radiography or by a computed tomography (CT) scan performed for other reasons. Rarely, extensive growth can cause aesthetic or functional problems that vary according to location. Radiographically, osteomas appear as radiopaque lesions similar to bone cortex, and may cause bone expansion. Cone beam CT is the optimal imaging modality for assessing the relationship between osteomas and adjacent structures, and for surgical planning. The differential diagnosis includes several inflammatory and tumoral pathologies, but the typical craniofacial location may aid in the diagnosis. Due to the benign nature of osteomas, surgical treatment is limited to symptomatic lesions. Radical surgical resection is the gold-standard therapy; it is based on a minimally invasive surgical approach with the aim of achieving an optimal cosmetic result. Reconstructive surgery for an osteoma is quite infrequent and is reserved for patients with large central osteomas, such as large mandibular or maxillary lesions. In this regard, computer-assisted surgery guarantees better outcomes, providing the possibility of preoperative simulation of both demolitive and reconstructive surgery.


2021 ◽  
Vol 11 ◽  
Author(s):  
Henriette L. Möllmann ◽  
Laura Apeltrath ◽  
Nadia Karnatz ◽  
Max Wilkat ◽  
Erik Riedel ◽  
...  

Objectives: This retrospective study compared two mandibular reconstruction procedures, conventional reconstruction plates (CR) and patient-specific implants (PSI), and evaluated their accuracy of reconstruction and clinical outcome. Methods: Overall, 94 patients had undergone mandibular reconstruction with CR (n = 48) and PSI (n = 46). Six detectable and replicable anatomical reference points, identified via computed tomography, were used to define the mandibular dimensions. The accuracy of reconstruction was assessed using pre- and postoperative differences. Results: In the CR group, the largest difference was at the lateral point of the mandibular condyle (D2), -1.56 mm (SD = 3.8). In the PSI group, the largest difference between preoperative and postoperative measurements was at the coronoid process (D5), +1.86 mm (SD = 6.0). Significant differences within the groups in pre- and postoperative measurements were identified at the gonion (D6) [t(56) = -2.217; p = .031 < .05]. In the CR group the difference was 1.5 (SD = 3.9), and in the PSI group -1.04 (SD = 4.9). CR did not demonstrate a higher risk of plate fractures or postoperative complications compared with PSI. Conclusion: Both CR and PSI are suitable for reconstructing mandibular defects; in each case, the advantages and disadvantages of the two approaches must be weighed. The functional and esthetic outcome of mandibular reconstruction improves significantly with the surgeon's experience in microvascular grafting and familiarity with computer-assisted surgery. Interoperator variability can be reduced, and training younger surgeons in planning can lead to better outcomes in the future.
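The accuracy comparison above rests on paired pre- versus postoperative distance measurements at fixed anatomical landmarks. As a minimal sketch of that analysis, the following computes the mean difference and a paired t statistic for one landmark; all distances are invented for illustration, since the study's raw measurements are not given in the abstract:

```python
import math

def paired_t(pre, post):
    """Mean pre/post difference and paired t statistic.

    pre, post: per-patient distances (mm) at one anatomical landmark.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)  # t statistic with n - 1 degrees of freedom
    return mean, t

# Hypothetical distances (mm) for five patients at a single landmark.
pre = [42.1, 40.8, 43.5, 41.0, 39.9]
post = [41.0, 40.1, 42.0, 40.2, 39.5]
mean_diff, t_stat = paired_t(pre, post)  # negative: postoperative shrinkage
```

A significance test would then compare `t_stat` against the t distribution with n - 1 degrees of freedom, as the abstract's t(56) notation indicates.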


2021 ◽  
Author(s):  
Alexander Studier-Fischer ◽  
Silvia Seidlitz ◽  
Jan Sellner ◽  
Manuel Wiesenfarth ◽  
Leonardo Ayala ◽  
...  

Visual discrimination of tissue during surgery can be challenging since different tissues appear similar to the human eye. Hyperspectral imaging (HSI) removes this limitation by associating each pixel with high-dimensional spectral information. While previous work has shown its general potential to discriminate tissue, clinical translation has been limited due to the method's current lack of robustness and generalizability. Specifically, it had been unknown whether variability in spectral reflectance is primarily explained by tissue type rather than the recorded individual or specific acquisition conditions. The contribution of this work is threefold: (1) Based on an annotated medical HSI data set (9,059 images from 46 pigs), we present a tissue atlas featuring spectral fingerprints of 20 different porcine organs and tissue types. (2) Using the principle of mixed model analysis, we show that the greatest source of variability related to HSI images is the organ under observation. (3) We show that HSI-based fully-automatic tissue differentiation of 20 organ classes with deep neural networks is possible with high accuracy (> 95 %). We conclude from our study that automatic tissue discrimination based on HSI data is feasible and could thus aid in intraoperative decision making and pave the way for context-aware computer-assisted surgery systems and autonomous robotics.
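The mixed-model finding above, that the organ under observation is the greatest source of spectral variability, can be illustrated with a crude variance decomposition. The sketch below simulates reflectance at a single wavelength with a strong organ effect and a weak per-pig effect (all effect sizes are invented), then computes the fraction of variance explained by each grouping; it is a stand-in for the paper's mixed model analysis, not a reproduction of it:

```python
import random
from collections import defaultdict

def between_group_variance_fraction(values, groups):
    """Fraction of total variance explained by the group means:
    a crude stand-in for mixed-model variance partitioning."""
    n = len(values)
    grand = sum(values) / n
    total = sum((v - grand) ** 2 for v in values)
    by_group = defaultdict(list)
    for v, g in zip(values, groups):
        by_group[g].append(v)
    between = sum(len(vs) * (sum(vs) / len(vs) - grand) ** 2
                  for vs in by_group.values())
    return between / total

# Simulated reflectance at one wavelength: strong organ effect,
# weak per-pig effect, plus noise (all numbers hypothetical).
random.seed(0)
organ_effect = {"liver": 0.8, "kidney": 0.3, "spleen": 0.5}
pig_effect = {p: random.gauss(0, 0.02) for p in range(10)}
values, organs, pigs = [], [], []
for organ, mu in organ_effect.items():
    for pig in range(10):
        values.append(mu + pig_effect[pig] + random.gauss(0, 0.01))
        organs.append(organ)
        pigs.append(pig)

organ_frac = between_group_variance_fraction(values, organs)
pig_frac = between_group_variance_fraction(values, pigs)
# With these simulated effects, organ identity explains far more
# variance than the individual pig, mirroring the paper's conclusion.
```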


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Pengfei Cheng ◽  
Yusheng Yang ◽  
Huiqiang Yu ◽  
Yongyi He

Abstract: Automatic vertebrae localization and segmentation in computed tomography (CT) are fundamental for spinal image analysis and for spine surgery with computer-assisted surgery systems, but they remain challenging due to the high variation in spinal anatomy among patients. In this paper, we proposed a deep-learning approach for automatic CT vertebrae localization and segmentation with a two-stage Dense-U-Net. The first stage used a 2D-Dense-U-Net to localize vertebrae by detecting the vertebrae centroids with dense labels and 2D slices. The second stage segmented the specific vertebra within a region of interest identified from the centroid using a 3D-Dense-U-Net. Finally, each segmented vertebra was merged into a complete spine and resampled to the original resolution. We evaluated our method on the dataset from the CSI 2014 Workshop with six metrics: location error (1.69 ± 0.78 mm) and detection rate (100%) for vertebrae localization; Dice coefficient (0.953 ± 0.014), intersection over union (0.911 ± 0.025), Hausdorff distance (4.013 ± 2.128 mm), and pixel accuracy (0.998 ± 0.001) for vertebrae segmentation. The experimental results demonstrated the efficiency of the proposed method. Furthermore, evaluation on the dataset from the xVertSeg challenge, with location error (4.12 ± 2.31 mm), detection rate (100%), and Dice coefficient (0.877 ± 0.035), shows the generalizability of our method. In summary, our solution localized the vertebrae successfully by detecting their centroids and implemented instance segmentation of vertebrae in the whole spine.
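The Dice coefficient and intersection-over-union reported above are simple overlap ratios between the predicted and ground-truth vertebra masks. A minimal sketch on toy voxel sets:

```python
def dice_and_iou(pred, truth):
    """Overlap metrics between two binary masks.

    pred, truth: sets of voxel coordinates labelled as vertebra.
    """
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))  # 2|A∩B| / (|A| + |B|)
    iou = inter / len(pred | truth)              # |A∩B| / |A∪B|
    return dice, iou

# Toy 1-D "masks" as voxel index sets (illustrative only).
pred = {1, 2, 3, 4, 5}
truth = {2, 3, 4, 5, 6}
dice, iou = dice_and_iou(pred, truth)  # dice = 0.8, iou ≈ 0.667
```

In practice these are computed over 3-D voxel arrays per vertebra and then averaged across cases, which is what the ± values in the abstract summarize.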


2021 ◽  
Vol 11 ◽  
Author(s):  
Max Wilkat ◽  
Norbert Kübler ◽  
Majeed Rana

Curatively intended oncologic surgery is based on residual-free tumor excision. For decades, the surgeon's goal of R0 resection has led to radical resections in the anatomical region of the midface, because of its three-dimensionally complex anatomy, in which aesthetically and functionally crucial structures lie in close relation. In some cases, this implied aggressive overtreatment with loss of the eye globe. Conversely, undertreatment followed by repeated re-resections is also not an option. Therefore, the evaluation of the true three-dimensional tumor extent and the intraoperative availability of this information are critical for a precise, yet substance-sparing tumor removal. Computer-assisted surgery (CAS) can provide the framework in this context. The present study evaluated the benefits of CAS in the treatment of midfacial tumors, with special regard to tumor resection and reconstruction. To this end, 60 patients diagnosed with a malignancy of the upper jaw were treated, 31 with the use of CAS and 29 conventionally. Comparison of the two groups showed a higher rate of residual-free resections when CAS was applied. Furthermore, we demonstrate the use of navigated specimen taking, called tumor mapping. This procedure enables transparent yet precise documentation of the three-dimensional tumor borders, which paves the way to a more feasible interdisciplinary exchange, leading, for example, to much more focused radiation therapy. Moreover, we evaluated the possibilities of primary midface reconstruction using CAS, especially in cases of infiltrated orbital floors. These cases required reduction of the intra-orbital volume to account for the tissue loss after resection, which could be achieved precisely with CAS. These benefits of CAS in midface reconstruction were reflected in positive changes in quality of life.
The present work demonstrates that oncological surgery of the midface is a prime example of interface optimization based on the sensible use of computer assistance. By making the patient transparent for the surgeon and the procedure controllable, the system facilitates a more precise and safer treatment oriented toward a better outcome.


Author(s):  
Bokai Zhang ◽  
Amer Ghanem ◽  
Alexander Simes ◽  
Henry Choi ◽  
Andrew Yoo

Abstract. Purpose: Surgical workflow recognition is a crucial and challenging problem when building a computer-assisted surgery system. Current techniques focus on utilizing a convolutional neural network and a recurrent neural network (CNN–RNN) to solve the surgical workflow recognition problem. In this paper, we attempt to use a deep 3DCNN to solve this problem. Methods: In order to tackle the surgical workflow recognition problem and the imbalanced data problem, we implement a 3DCNN workflow referred to as I3D-FL-PKF. We utilize focal loss (FL) to train a 3DCNN architecture known as Inflated 3D ConvNet (I3D) for surgical workflow recognition. We use prior knowledge filtering (PKF) to filter the recognition results. Results: We evaluate our proposed workflow on a large sleeve gastrectomy surgical video dataset. We show that focal loss can help to address the imbalanced data problem. We show that our PKF can be used to generate smoothed prediction results and improve the overall accuracy. We show that the proposed workflow achieves 84.16% frame-level accuracy and reaches a weighted Jaccard score of 0.7327, which outperforms the traditional CNN–RNN design. Conclusion: The proposed workflow can obtain consistent and smooth predictions not only within the surgical phases but also for phase transitions. By utilizing focal loss and prior knowledge filtering, our implementation of a deep 3DCNN has great potential to solve surgical workflow recognition problems for clinical practice.
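The two key ingredients named above, focal loss and prior knowledge filtering, can be sketched as follows. The focal loss down-weights easy examples so that rare phases still contribute to training; the filter here is a simple sliding-window majority vote, an assumed stand-in, since the abstract does not specify how PKF smooths the predictions:

```python
import math
from collections import Counter

def focal_loss(p, y, gamma=2.0):
    """Focal loss for one binary prediction: the (1 - pt)^gamma factor
    shrinks the loss of well-classified (easy) examples."""
    pt = p if y == 1 else 1 - p
    return -((1 - pt) ** gamma) * math.log(pt)

def prior_knowledge_filter(preds, window=3):
    """Sliding-window majority vote over per-frame phase labels;
    a simple smoother standing in for the paper's PKF."""
    half = window // 2
    out = []
    for i in range(len(preds)):
        segment = preds[max(0, i - half): i + half + 1]
        out.append(Counter(segment).most_common(1)[0][0])
    return out

# A noisy per-frame phase sequence (hypothetical labels): the isolated
# "staple" spike is smoothed away, while the sustained run survives.
noisy = ["prep", "prep", "staple", "prep", "prep", "staple", "staple"]
smooth = prior_knowledge_filter(noisy)
```

A confidently wrong prediction (p = 0.1 for a positive frame) incurs a far larger focal loss than a confidently right one (p = 0.9), which is how rare-phase frames keep influencing training.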


Author(s):  
Umar Shaikhaevich Tasukhanov ◽  
Natalya Aleksandrovna Kuzbetsova ◽  
Rukiyat Shamilevna Abdulaeva ◽  
Diana Tamerlanovna Kachmazova ◽  
Artem Evgenevich Mishvelov ◽  
...  

The purpose of this article is to demonstrate the HoloDoctor software and hardware complex, which includes the following modules: an anatomical atlas module, a data analysis module, and a surgical intervention simulation module. HoloDoctor allows surgical operations to be supported in real time; operation times using HoloDoctor are reduced by 20-30% compared with traditional methods. The complex supports diagnosis and treatment planning, correction of the treatment plan, and minimization of the risk of medical error. The product can also be used in education by introducing it into the curricula of medical specialties.


Author(s):  
Florian Aspart ◽  
Jon L. Bolmgren ◽  
Joël L. Lavanchy ◽  
Guido Beldi ◽  
Michael S. Woods ◽  
...  

Abstract. Purpose: Cholecystectomy is one of the most common laparoscopic procedures. A critical phase of laparoscopic cholecystectomy consists of clipping the cystic duct and artery before cutting them. Surgeons can improve clipping safety by ensuring full visibility of the clipper while enclosing the artery or the duct with the clip applier jaws. This can prevent unintentional interaction with neighboring tissues or clip misplacement. In this article, we present novel real-time feedback to ensure safe visibility of the instrument during this critical phase. This feedback incites surgeons to keep the tip of their clip applier visible while operating. Methods: We present a new dataset of 300 laparoscopic cholecystectomy videos with frame-wise annotation of clipper tip visibility. We further present ClipAssistNet, a neural-network-based image classifier which detects the clipper tip visibility in single frames. ClipAssistNet ensembles predictions from 5 neural networks trained on different subsets of the dataset. Results: Our model learns to classify the clipper tip visibility by detecting its presence in the image. Measured on a separate test set, ClipAssistNet classifies the clipper tip visibility with an AUROC of 0.9107, and 66.15% specificity at 95% sensitivity. Additionally, it can perform real-time inference (16 FPS) on an embedded computing board, which enables its deployment in operating room settings. Conclusion: This work presents a new application of computer-assisted surgery for laparoscopic cholecystectomy, namely real-time feedback on adequate visibility of the clip applier. We believe this feedback can increase surgeons' attentiveness when departing from safe visibility during the critical clipping of the cystic duct and artery.
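ClipAssistNet ensembles predictions from five networks; a natural combination rule is to average the per-frame visibility probabilities, though this averaging is an assumption, as the abstract does not state how the five outputs are combined. A minimal sketch with stand-in models:

```python
def ensemble_predict(models, frame):
    """Average the tip-visibility probabilities of several classifiers,
    mirroring (under an assumed averaging rule) how ClipAssistNet
    combines its 5 networks."""
    scores = [m(frame) for m in models]
    return sum(scores) / len(scores)

# Stand-in "models": callables mapping a frame feature to a visibility
# probability; each has a different bias, like networks trained on
# different data subsets. All values are hypothetical.
models = [lambda f, b=b: min(1.0, max(0.0, f + b))
          for b in (-0.1, -0.05, 0.0, 0.05, 0.1)]
prob = ensemble_predict(models, 0.7)
```

In deployment, this per-frame probability would be thresholded (at the operating point giving 95% sensitivity, per the abstract) to decide whether to warn the surgeon.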

