Quantitative Comparison of Surgical Device Usage in Laparoscopic Gastrectomy Between Surgeons’ Skill Levels: an Automated Analysis Using a Neural Network

Author(s): Yuta Yamazaki, Shingo Kanaji, Takuya Kudo, Gosuke Takiguchi, Naoki Urakawa, ...
2019, Vol 177, pp. 1-8
Author(s): Xuan Anh Nguyen, Damir Ljuhar, Maurizio Pacilli, Ramesh Mark Nataraja, Sunita Chauhan

2019, Vol 33 (10), pp. 755-765
Author(s): Anri Inaki, Kenichi Nakajima, Hiroshi Wakabayashi, Takafumi Mochizuki, Seigo Kinuya

2006, Vol 321-323, pp. 1266-1269
Author(s): J. Kim, P. Ramuhalli, L. Udpa, S. Udpa

A key requirement in most ultrasonic weld inspection systems is the ability to perform rapid automated analysis to identify the type of flaw. Incorporating spatial correlation information from adjacent A-scans can improve the performance of the analysis system. This paper describes two neural-network-based classification techniques that use the correlation of adjacent A-scans. The first method relies on differences between individual A-scans to classify signals using a trained neural network, with a post-processing mechanism to incorporate spatial correlation information. The second technique transforms a group of spatially localized signals using a two-dimensional transform and applies principal component analysis to the transform coefficients to generate reduced-dimensional feature vectors for classification. Results of applying the proposed techniques to data obtained from weld inspection are presented, and the performances of the two approaches are compared.
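As a concrete illustration of the second technique, the minimal sketch below applies a 2-D transform to blocks of adjacent A-scans and reduces the coefficients with principal component analysis. The choice of the 2-D DCT, the block size, and the component count are assumptions, since the abstract does not specify them; synthetic data stands in for real inspection signals.

```python
# Sketch of 2-D-transform + PCA feature extraction over adjacent A-scans.
# The DCT, block size and component count are assumed, not from the paper.
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
blocks = rng.standard_normal((200, 8, 512))  # 200 groups of 8 adjacent A-scans

# 2-D transform of each spatially localized group, flattened to a vector
coeffs = np.array([dctn(b, norm="ortho").ravel() for b in blocks])

# PCA reduces the transform coefficients to low-dimensional feature vectors
pca = PCA(n_components=16)
features = pca.fit_transform(coeffs)
print(features.shape)  # (200, 16) -> inputs to the flaw-type classifier
```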


2021
Author(s): Daichi Kitaguchi, Toru Fujino, Nobuyoshi Takeshita, Hiro Hasegawa, Kensaku Mori, ...

Abstract Clarifying how well deep-learning-based surgical instrument segmentation networks scale across diverse surgical environments is important for recognizing the risk of overfitting in surgical device development. This study comprehensively evaluated deep neural network scalability for surgical instrument segmentation, using 5238 images randomly extracted from 128 intraoperative videos. The video dataset contained 112 laparoscopic colorectal resection, 5 laparoscopic distal gastrectomy, 5 laparoscopic cholecystectomy, and 6 laparoscopic partial hepatectomy cases. Deep-learning-based surgical instrument segmentation was performed on test sets with 1) the same conditions as the training set; 2) the same recognition target surgical instrument and surgery type but a different laparoscopic recording system; 3) the same laparoscopic recording system and surgery type but slightly different recognition target laparoscopic surgical forceps; 4) the same laparoscopic recording system and recognition target surgical instrument but different surgery types. The mean average precision and mean intersection over union for test sets 1, 2, 3, and 4 were 0.941 and 0.887, 0.866 and 0.671, 0.772 and 0.676, and 0.588 and 0.395, respectively. Recognition accuracy thus decreased even under slightly different conditions. To enhance the generalization of deep neural networks in surgery, constructing a training set that considers diverse surgical environments under real-world conditions is crucial. Trial Registration Number: 2020–315, date of registration: October 5, 2020
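For readers reproducing these metrics, here is a minimal sketch of the mean intersection over union reported above, assuming binary instrument masks; the mean-average-precision computation depends on detection details the abstract does not give.

```python
# Mean IoU over a test set of binary instrument masks (illustrative shapes).
import numpy as np

def mean_iou(preds: np.ndarray, targets: np.ndarray) -> float:
    """preds, targets: (N, H, W) boolean masks for one instrument class."""
    ious = []
    for p, t in zip(preds, targets):
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # skip frames where the instrument is absent in both
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```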


Author(s): James P. Howard, Sameer Zaman, Aaraby Ragavan, Kerry Hall, Greg Leonard, ...

Abstract The large number of available MRI sequences means patients cannot realistically undergo them all, so the range of sequences to be acquired during a scan is protocolled based on clinical details. Adapting this to unexpected findings identified early in the scan requires experience and vigilance. We investigated whether deep learning on the images acquired in the first few minutes of a scan could provide an automated early alert of abnormal features. Anatomy sequences from 375 CMR scans were used as a training set. From these, we annotated 1500 individual slices and used them to train a convolutional neural network to perform automatic segmentation of the cardiac chambers, great vessels and any pleural effusions. A further 200 scans were used as a testing set. The system then assembled a 3D model of the thorax, from which it made clinical measurements to identify important abnormalities. The system was successful in segmenting the anatomy slices (Dice 0.910) and identified multiple features which may guide further image acquisition. Diagnostic accuracy was 90.5% and 85.5% for left and right ventricular dilatation, 85% for left ventricular hypertrophy and 94.4% for ascending aortic dilatation. The area under the ROC curve for diagnosing pleural effusions was 0.91. We present proof of concept that a neural network can segment and derive accurate clinical measurements from a 3D model of the thorax built from transaxial anatomy images acquired in the first few minutes of a scan. This early information could lead to dynamic adaptive scanning protocols and, by focusing scanner time appropriately and prioritizing cases for supervision and early reporting, improve patient experience and efficiency.
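The Dice score quoted above is the standard overlap measure between predicted and manual segmentations; a minimal sketch for boolean masks follows (the smoothing term is an assumption to keep empty masks well defined).

```python
# Dice overlap between a predicted and a reference segmentation mask.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """pred, truth: boolean masks of equal shape; eps guards empty masks."""
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
```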


Author(s): Shanbin Zhang, Guangyuan Liu, Xiangwei Lai

Most automated analysis methods for biosignal-based human emotion recognition collect their data using multiple physiological signals, long-term physiological signals, or both. However, this restricts their ability to identify emotions efficiently. This study classifies evoked emotions based on two types of single, short-term physiological signals: electrocardiograms (ECGs) and galvanic skin responses (GSRs). Estimated recognition times are also recorded and analyzed. First, we perform experiments using film excerpts selected to elicit target emotions that include anger, grief, fear, happiness, and calmness; ECG and GSR signals are collected during these experiments. Next, a wavelet transform is applied to process the truncated ECG data, and a Butterworth filter is applied to process the truncated GSR signals, in order to extract the required features. Finally, the five emotion types are classified by employing an artificial neural network (ANN) on each of the two signals. Average classification accuracy rates of 89.14% and 82.29% were achieved in the experiments using ECG data and GSR data, respectively. In addition, the total time required for feature extraction and emotion classification did not exceed 0.15 s for either ECG or GSR signals.
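As an illustration of the GSR preprocessing step, the sketch below applies a low-pass Butterworth filter with SciPy; the cutoff frequency, filter order, and sampling rate are assumptions, since the abstract does not give them.

```python
# Low-pass Butterworth filtering of a GSR trace (parameters are assumed).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                      # assumed sampling rate (Hz)
b, a = butter(N=4, Wn=1.0, btype="low", fs=fs)  # 4th-order, 1 Hz cutoff

gsr = np.random.default_rng(0).standard_normal(1500)  # 15 s synthetic signal
gsr_filtered = filtfilt(b, a, gsr)              # zero-phase filtering
```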


Author(s): Dilbag Singh, Vijay Kumar, Vaishali Yadav, Manjit Kaur

Coronavirus disease 2019 (COVID-19) testing kits are available only in limited numbers; the development of other diagnostic approaches is therefore desirable. Doctors generally use chest X-rays and computed tomography (CT) scans to diagnose pneumonia, lung inflammation, abscesses, and/or enlarged lymph nodes. Since COVID-19 attacks the epithelial cells that line the respiratory tract, X-ray images are used in this paper to classify patients with infected (COVID-19 positive) and uninfected (COVID-19 negative) lungs. Almost all hospitals have X-ray imaging machines, so chest X-ray images can be used to test for COVID-19 without any kind of dedicated test kit. However, chest X-ray-based COVID-19 classification requires a radiology expert and significant time, which is precious when COVID-19 infection is increasing at a rapid rate. The development of an automated analysis approach is therefore desirable to save medical professionals' valuable time. In this paper, a deep convolutional neural network (CNN) approach is designed and implemented, and the hyper-parameters of the CNN are tuned using Multi-objective Adaptive Differential Evolution (MADE). Extensive experiments are performed on a benchmark COVID-19 dataset. Comparative analysis reveals that the proposed technique outperforms competitive machine learning models in terms of various performance metrics.
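MADE itself is multi-objective and adaptive; as a simplified single-objective stand-in, the sketch below tunes two hypothetical CNN hyper-parameters with SciPy's standard differential evolution against a stubbed validation-error function.

```python
# Single-objective differential evolution as a simplified stand-in for MADE.
# `validation_error` is a hypothetical placeholder: in practice it would
# train the CNN with the given hyper-parameters and return validation error.
from scipy.optimize import differential_evolution

def validation_error(params):
    lr, dropout = params
    # smooth surrogate so the sketch runs standalone
    return (lr - 1e-3) ** 2 * 1e6 + (dropout - 0.3) ** 2

result = differential_evolution(
    validation_error,
    bounds=[(1e-5, 1e-1),   # learning rate
            (0.0, 0.9)],    # dropout rate
    seed=0,
)
print(result.x)  # best (learning rate, dropout) found
```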


2018, Vol 2018, pp. 1-6
Author(s): Munenori Uemura, Morimasa Tomikawa, Tiejun Miao, Ryota Souzaki, Satoshi Ieiri, ...

This study investigated whether parameters derived from the hand motions of expert and novice surgeons accurately and objectively reflect laparoscopic surgical skill levels, using an artificial intelligence system consisting of a three-layer chaos neural network. Sixty-seven surgeons (23 experts and 44 novices) performed a laparoscopic skill assessment task while their hand motions were recorded using a magnetic tracking sensor. Eight parameters evaluated as measures of skill in a previous study were used as inputs to the neural network. Optimization of the neural network was achieved after seven trials with a training dataset of 38 surgeons, with a correct judgment ratio of 0.99. Applied prospectively to the remaining 29 surgeons, the network distinguished between expert and novice surgeons with a correct judgment rate of 79%. In conclusion, our artificial intelligence system distinguished between expert and novice surgeons among surgeons with unknown skill levels.
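The chaos neural network itself is not reproduced here; as a rough stand-in, the sketch below trains a small feed-forward classifier on eight motion parameters with scikit-learn, using synthetic data in place of the 38-surgeon training set.

```python
# Feed-forward stand-in for the three-layer chaos neural network.
# Data are synthetic; in the study, the 8 features are hand-motion parameters.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.standard_normal((38, 8))   # 38 training surgeons, 8 parameters
y_train = rng.integers(0, 2, size=38)    # 0 = novice, 1 = expert (synthetic)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

X_test = rng.standard_normal((29, 8))    # the 29 prospectively tested surgeons
print(clf.predict(X_test))               # predicted skill labels
```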


Sensors, 2021, Vol 21 (22), pp. 7441
Author(s): Sajid Ullah, Michael Henke, Narendra Narisetti, Klára Panzarová, Martin Trtílek, ...

Automated analysis of small and optically variable plant organs, such as grain spikes, is in high demand in quantitative plant science and breeding. Previous works primarily focused on the detection of prominently visible spikes emerging at the top of grain plants growing in field conditions. However, accurate and automated analysis of all fully and partially visible spikes in greenhouse images presents a more challenging task, which has rarely been addressed in the past. A particular difficulty for image analysis is posed by leaf-covered, occluded, and matured spikes of bushy crop cultivars, which can hardly be differentiated from the remaining plant biomass. To address the challenge of automated analysis of arbitrary spike phenotypes in different grain crops and optical setups, we performed a comparative investigation of six neural network methods for pattern detection and segmentation in RGB images, including five deep and one shallow neural network. Our experimental results demonstrate that advanced deep learning methods show superior performance, achieving over 90% accuracy in the detection and segmentation of spikes in wheat, barley and rye images. However, spike detection in new crop phenotypes can be performed more accurately than segmentation. Furthermore, the detection and segmentation of matured, partially visible and occluded spikes, whose phenotypes substantially deviate from the training set of regular spikes, still represent a challenge for neural network models trained on a limited set of a few hundred manually labeled ground truth images. Limitations and further potential improvements of the presented algorithmic frameworks for spike image analysis are discussed. Besides theoretical and experimental investigations, we provide a GUI-based tool (SpikeApp), which shows the application of pre-trained neural networks to fully automate spike detection, segmentation and phenotyping in images of greenhouse-grown plants.


Author(s): Pengcheng Zhou, Jacob Reimer, Ding Zhou, Amol Pasarkar, Ian Kinsella, ...

Abstract Combining two-photon calcium imaging (2PCI) and electron microscopy (EM) provides arguably the most powerful current approach for connecting function to structure in neural circuits. Recent years have seen dramatic advances in obtaining and processing CI and EM data separately. In addition, several joint CI-EM datasets (with CI performed in vivo, followed by EM reconstruction of the same volume) have been collected. However, no automated analysis tools yet exist that can match each signal extracted from the CI data to a cell segment extracted from EM; previous efforts have been largely manual and focused on analyzing calcium activity in cell bodies, neglecting potentially rich functional information from axons and dendrites. There are two major roadblocks to solving this matching problem: first, dense EM reconstruction extracts orders of magnitude more segments than are visible in the corresponding CI field of view, and second, due to optical constraints and non-uniform brightness of the calcium indicator in each cell, direct matching of EM and CI spatial components is nontrivial. In this work we develop a pipeline for fusing CI and densely-reconstructed EM data. We model the observed CI data using a constrained nonnegative matrix factorization (CNMF) framework, in which segments extracted from the EM reconstruction serve to initialize and constrain the spatial components of the matrix factorization. We develop an efficient iterative procedure for solving the resulting combined matching and matrix factorization problem and apply this procedure to joint CI-EM data from mouse visual cortex. The method recovers hundreds of dendritic components from the CI data, visible across multiple functional scans at different depths, matched with densely-reconstructed three-dimensional neural segments recovered from the EM volume. We publicly release the output of this analysis as a new gold standard dataset that can be used to score algorithms for demixing signals from 2PCI data. Finally, we show that this database can be exploited to (1) learn a mapping from 3D EM segmentations to predict the corresponding 2D spatial components estimated from CI data, and (2) train a neural network to denoise these estimated spatial components. This neural network denoiser is a stand-alone module that can be dropped in to enhance any existing 2PCI analysis pipeline.
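A toy illustration of the core idea, not the paper's pipeline: nonnegative matrix factorization of the imaging movie in which EM-derived segment masks initialize and constrain the spatial components. The sizes, the random masks, and the plain multiplicative updates are all assumptions made so the sketch runs standalone.

```python
# Toy EM-constrained NMF: Y ~ A @ C with spatial components A restricted
# to EM segment masks. All data here are synthetic and sizes illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_frames, n_seg = 1024, 300, 12

support = rng.random((n_pix, n_seg)) < 0.05       # binary EM segment masks
A = support * rng.random((n_pix, n_seg))          # EM-initialized spatial comps
C = rng.random((n_seg, n_frames))                 # temporal (calcium) traces
Y = A @ C + 0.01 * rng.random((n_pix, n_frames))  # synthetic movie

for _ in range(100):                              # multiplicative updates
    C *= (A.T @ Y) / (A.T @ A @ C + 1e-12)
    A *= (Y @ C.T) / (A @ C @ C.T + 1e-12)
    A *= support                                  # keep A inside the EM masks

print(np.linalg.norm(Y - A @ C) / np.linalg.norm(Y))  # relative residual
```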

