Deep-Learning-Based Cerebral Artery Semantic Segmentation in Neurosurgical Operating Microscope Vision Using Indocyanine Green Fluorescence Videoangiography

2022, Vol. 15
Author(s): Min-seok Kim, Joon Hyuk Cha, Seonhwa Lee, Lihong Han, Wonhyoung Park, et al.

Few studies have applied deep learning to anatomical structure segmentation; those that have used small numbers of training and ground-truth images, and their accuracies were low or inconsistent. Analysis of anatomy in surgical video also faces various obstacles, including a fast-changing view, large deformations, occlusions, low illumination, and inadequate focus. In addition, it is difficult and costly to obtain a large, accurate dataset of anatomical structures, including arteries, from operative video. In this study, we investigated cerebral artery segmentation using an automatic ground-truth generation method. Indocyanine green (ICG) fluorescence intraoperative cerebral videoangiography was used to create a ground-truth dataset, mainly for cerebral arteries and partly for other cerebral blood vessels, including veins. Four different neural network models were trained on the dataset and compared. Before augmentation, 35,975 training images and 11,266 validation images were used; after augmentation, 260,499 training and 90,129 validation images were used. A Dice score of 79% for cerebral artery segmentation was achieved with the DeepLabv3+ model trained on the automatically generated dataset, with strict validation on separate patient groups. Arteries were also distinguished from veins using the ICG videoangiography phase. The fair accuracy achieved demonstrates the appropriateness of the methodology. This study shows the feasibility of cerebral artery segmentation in the operating-field view using deep learning, and the effectiveness of automatic blood-vessel ground-truth generation using ICG fluorescence videoangiography. With this method, computer vision can discern blood vessels, and distinguish arteries from veins, in a neurosurgical microscope field of view — a capability essential for vessel-anatomy-based navigation in the neurosurgical field. Surgical assistance, safety systems, and autonomous neurosurgical robotics that detect or manipulate cerebral vessels would likewise require computer vision to identify blood vessels and arteries.
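The 79% Dice score reported above measures the overlap between a predicted vessel mask and its ground truth. A minimal, dependency-free sketch of that metric (the toy masks below are illustrative, not taken from the study):

```python
def dice_score(pred, truth):
    """Dice coefficient between two flat binary masks (1 = vessel pixel)."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / total if total else 1.0

# Toy 4x4 masks flattened row by row: the prediction covers 4 of 6 truth pixels.
pred  = [1, 1, 0, 0,  1, 1, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
truth = [1, 1, 1, 0,  1, 1, 1, 0,  0, 0, 0, 0,  0, 0, 0, 0]
print(dice_score(pred, truth))  # 2*4 / (4+6) = 0.8
```

In segmentation work the same formula is usually applied per image (or per class) on full-resolution masks and then averaged over the validation set.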

2021, Vol. 11 (22), pp. 10966
Author(s): Hsiang-Chieh Chen, Zheng-Ting Li

This article introduces an automated data-labeling approach for generating crack ground truths (GTs) in concrete images. The main algorithm comprises generating first-round GTs, pre-training a deep-learning-based model, and generating second-round GTs. On the basis of the second-round GTs of the training data, a learning-based crack detection model can be trained in a self-supervised manner. The pre-trained model is effective for crack detection after it is re-trained using the second-round GTs. The main contribution of this study is an automated GT generation process for training a pixel-level crack detection model. Experimental results show that the second-round GTs are similar to manually marked labels. Accordingly, the cost of implementing learning-based methods is reduced significantly because manual data labeling is not required.
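The two-round idea — bootstrap labels from a rule, fit a model to them, then re-label with the model — can be sketched with a trivial stand-in: a fixed intensity rule for the first round and a learned threshold in place of the deep network (all names and numbers below are hypothetical):

```python
def first_round_gt(pixels, dark_thresh=80):
    """Rule-based first-round labels: dark pixels assumed to be crack (1)."""
    return [1 if p < dark_thresh else 0 for p in pixels]

def fit_threshold(pixels, labels):
    """Stand-in 'training': pick the intensity cut that best fits the labels."""
    best_t, best_acc = 0, -1.0
    for t in range(256):
        pred = [1 if p < t else 0 for p in pixels]
        acc = sum(a == b for a, b in zip(pred, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def second_round_gt(pixels, t):
    """Second-round labels: re-label the data with the trained model."""
    return [1 if p < t else 0 for p in pixels]

pixels = [30, 40, 70, 120, 200, 220]   # grayscale intensities of 6 pixels
gt1 = first_round_gt(pixels)           # [1, 1, 1, 0, 0, 0]
t = fit_threshold(pixels, gt1)
gt2 = second_round_gt(pixels, t)
print(gt1 == gt2)  # the stand-in model reproduces the first-round labels
```

In the actual pipeline the "model" is a pixel-level crack segmentation network, and the second-round GTs differ from the first round wherever the network generalizes past the rule's mistakes.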


2021, Vol. 14 (1), pp. 416
Author(s): Mostofa Ahsan, Sulaymon Eshkabilov, Bilal Cemek, Erdem Küçüktopcu, Chiwon W. Lee, et al.

Deep learning (DL) and computer vision applications in precision agriculture have great potential to identify and classify plant and vegetation species. This study presents the applicability of DL modeling with computer vision techniques to analyze the nutrient levels of four hydroponically grown lettuce cultivars (Lactuca sativa L.): Black Seed, Flandria, Rex, and Tacitus. Four nutrient concentrations (0, 50, 200, and 300 ppm nitrogen solutions) were prepared and used to grow these cultivars in the greenhouse, and RGB images of lettuce leaves were captured. The results showed that the developed visual geometry group architectures (VGG16 and VGG19) identified the nutrient levels of the lettuces with 87.5 to 100% accuracy across the four cultivars. Convolutional neural network models were also implemented to identify the nutrient levels of the studied lettuces for comparison purposes. The developed modeling techniques can be applied to collect real-time nutrient data not only from other lettuce cultivars grown in greenhouses but also from those grown in fields, and these approaches can be extended to remote sensing of various lettuce crops. To the best knowledge of the authors, this is a novel study applying the DL technique to determine nutrient concentrations in lettuce cultivars.
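The underlying task is mapping a leaf image to one of the four nitrogen levels. As a toy stand-in for the VGG classifiers, the sketch below classifies a leaf by its mean RGB color against per-class centroids; the centroid values are invented for illustration (nitrogen-starved lettuce tends toward paler leaves, but these exact numbers are not from the study):

```python
def mean_rgb(image):
    """Mean R, G, B over a list of (r, g, b) pixel tuples."""
    n = len(image)
    return tuple(sum(px[c] for px in image) / n for c in range(3))

def nearest_centroid(feature, centroids):
    """Assign the nutrient class whose color centroid is closest in RGB space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(feature, centroids[label]))

# Hypothetical per-class mean-leaf-color centroids (0/50/200/300 ppm N).
centroids = {
    "0 ppm":   (120, 160, 60),   # pale, nitrogen-starved leaves
    "50 ppm":  (90, 150, 55),
    "200 ppm": (60, 130, 50),
    "300 ppm": (40, 110, 45),    # dark green, well-fed leaves
}
leaf = [(58, 132, 49), (62, 128, 51)]  # two sample pixels from one leaf image
print(nearest_centroid(mean_rgb(leaf), centroids))  # "200 ppm"
```

A deep network replaces the hand-picked mean-color feature with learned features, which is what lets it separate cultivars whose color responses to nitrogen overlap.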


Nowadays diabetes affects many people and can cause an eye disease called diabetic retinopathy; many are unaware of this, and it can lead to blindness. Diabetes sustained over a protracted time damages the blood vessels of the retina, thereby affecting an individual's vision and leading to diabetic retinopathy. Diabetic retinopathy is classified into two classes: non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR). Detection of diabetic retinopathy in fundus images is performed with computer vision and deep learning methods using artificial neural networks. Images from diabetic retinopathy datasets are used to train the neural networks, and based on the trained models we can detect whether a person has (i) no diabetic retinopathy, (ii) mild non-proliferative diabetic retinopathy, (iii) severe non-proliferative diabetic retinopathy, or (iv) proliferative diabetic retinopathy.
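The final step of such a classifier is mapping the network's four class scores to a severity grade; a minimal sketch of that mapping (the score vector below is made up for illustration):

```python
DR_GRADES = [
    "no diabetic retinopathy",
    "mild non-proliferative diabetic retinopathy",
    "severe non-proliferative diabetic retinopathy",
    "proliferative diabetic retinopathy",
]

def grade_from_scores(scores):
    """Map a network's four class scores (e.g. softmax outputs) to a grade."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return DR_GRADES[best]

print(grade_from_scores([0.10, 0.70, 0.15, 0.05]))
# "mild non-proliferative diabetic retinopathy"
```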


Author(s): M. Cournet, E. Sarrazin, L. Dumas, J. Michel, J. Guinet, et al.

Abstract. Several 3D reconstruction pipelines for satellite imagery are being developed around the world. Most of them implement their own version of Semi-Global Matching (SGM) as an option for the matching step. However, deep-learning-based solutions already outperform every SGM-derived algorithm on the KITTI and Middlebury stereo datasets, but these solutions need huge quantities of ground truth for training. The generation of ground-truth stereo datasets from satellite imagery and lidar therefore seems to be of great interest to the scientific community: it would reduce the transfer-learning difficulties that can arise from training on datasets such as Middlebury or KITTI. In this work, we present a new ground-truth generation pipeline that produces stereo-rectified images and ground-truth disparity maps from satellite imagery and lidar. We also assess the rectification and disparity accuracies of these outputs, and we finally train a deep learning network on our preliminary ground-truth dataset.
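The core quantity such a pipeline produces is a disparity map derived from lidar depth. For an epipolar-rectified pair the relation is the standard disparity = f·B/Z; a minimal sketch (real satellite pipelines work through rational polynomial camera models rather than a simple pinhole, so the numbers below are purely illustrative):

```python
def disparity_from_depth(depth, focal_px, baseline):
    """Rectified-stereo relation: disparity (pixels) = f * B / Z.

    depth and baseline must share the same unit; focal length is in pixels.
    """
    if depth <= 0:
        raise ValueError("depth must be positive")
    return focal_px * baseline / depth

# Illustrative pinhole numbers: f = 1000 px, B = 0.5, Z = 10 (same unit).
print(disparity_from_depth(10.0, 1000.0, 0.5))  # 50.0 px
```

Inverting the same relation (Z = f·B/d) is how a trained matching network's predicted disparities are turned back into depth for evaluation against the lidar.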


2021, Vol. 109 (5), pp. 863-890
Author(s): Yannis Panagakis, Jean Kossaifi, Grigorios G. Chrysos, James Oldfield, Mihalis A. Nicolaou, et al.

BJS Open, 2021, Vol. 5 (2)
Author(s): M D Slooter, M S E Mansvelders, P R Bloemen, S S Gisbertz, W A Bemelman, et al.

Abstract Background The aim of this systematic review was to identify all methods to quantify intraoperative fluorescence angiography (FA) of the gastrointestinal anastomosis, and to find potential thresholds to predict patient outcomes, including anastomotic leakage and necrosis. Methods This systematic review adhered to the PRISMA guidelines. A PubMed and Embase literature search was performed. Articles were included when FA with indocyanine green was performed to assess gastrointestinal perfusion in humans or animals, and the fluorescence signal was analysed using quantitative parameters. A parameter was defined as quantitative when a diagnostic numerical threshold for patient outcomes could potentially be produced. Results Some 1317 articles were identified, of which 23 were included. Fourteen studies were done in patients and nine in animals. Eight studies applied FA during upper and 15 during lower gastrointestinal surgery. The quantitative parameters were divided into four categories: time to fluorescence (20 studies); contrast-to-background ratio (3); pixel intensity (2); and numeric classification score (2). The first category was subdivided into manually assessed time (7 studies) and software-derived fluorescence–time curves (13). Cut-off values were derived for manually assessed time (speed in gastric conduit wall) and derivatives of the fluorescence–time curves (Fmax, T1/2, TR and slope) to predict patient outcomes. Conclusion Time to fluorescence seems the most promising category for quantitation of FA. Future research might focus on fluorescence–time curves, as many different parameters can be derived and the fluorescence intensity can be bypassed. However, consensus on study set-up, calibration of fluorescence imaging systems, and validation of software programs is mandatory to allow future data comparison.
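The fluorescence–time-curve parameters named above (Fmax, T1/2, slope) can be derived from a sampled intensity curve. A simplified sketch with synthetic data — exact definitions vary between the reviewed studies, so the formulas here (first sample as baseline, T1/2 as time to reach half of the rise to Fmax) are one plausible convention, not the consensus the review calls for:

```python
def curve_parameters(times, intensity):
    """Derive Fmax, T1/2 and mean inflow slope from a fluorescence-time curve.

    times and intensity are parallel lists; intensity[0] is taken as baseline.
    """
    f0 = intensity[0]
    fmax = max(intensity)
    i_max = intensity.index(fmax)          # first sample reaching the maximum
    half = f0 + (fmax - f0) / 2
    t_half = next(t for t, f in zip(times, intensity) if f >= half)
    slope = (fmax - f0) / (times[i_max] - times[0])   # mean rise rate
    return {"Fmax": fmax, "T1/2": t_half, "slope": slope}

# Synthetic curve: baseline 0, plateau of 80 reached at t = 4 s.
times = [0, 1, 2, 3, 4, 5]
curve = [0, 10, 30, 60, 80, 80]
print(curve_parameters(times, curve))
```

This is why the review notes that curve-derived parameters can bypass absolute fluorescence intensity: T1/2 and slope depend on the curve's shape, not on camera-specific brightness calibration.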

