Deep Learning-Based Three-dimensional Transvaginal Ultrasound in Diagnosis of Intrauterine Adhesion

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Ji Li ◽  
Dan Liu ◽  
Xiaofeng Qing ◽  
Lanlan Yu ◽  
Huizhen Xiang

This study aimed to enhance three-dimensional transvaginal ultrasound images and detect their characteristics using a partial differential equation algorithm and the deep learning-based HSegNet algorithm, and on this basis to analyze the effect of the quantitative parameter values of the optimized three-dimensional ultrasound images on the diagnosis and evaluation of intrauterine adhesions. Specifically, 75 hospitalized patients with suspected intrauterine adhesion who underwent hysteroscopic diagnosis were selected as the research subjects. The three-dimensional transvaginal ultrasound images were enhanced and optimized by the partial differential equation algorithm and processed by the deep learning algorithm. Subsequently, three-dimensional transvaginal ultrasound examinations were performed on the subjects who met the inclusion criteria, and the March classification method was used to classify the patients with intrauterine adhesion. Finally, the three-dimensional transvaginal ultrasound results were compared with the diagnoses made during hysteroscopic surgery. The results showed that the HSegNet model achieved automatic labeling of intrauterine adhesions in transvaginal ultrasound images with a final accuracy coefficient of 97.3%, suggesting that three-dimensional transvaginal ultrasound diagnosis based on deep learning is efficient and accurate. The accuracy of the three-dimensional transvaginal ultrasound was 97.14%, the sensitivity was 96.6%, and the specificity was 72%. In conclusion, three-dimensional transvaginal examination can effectively improve the diagnostic efficiency for intrauterine adhesion, providing theoretical support for the subsequent diagnosis and grading of intrauterine adhesions.
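
As a concrete illustration of the PDE-based enhancement step described above, the sketch below uses Perona-Malik anisotropic diffusion, a common partial differential equation filter that smooths speckle while preserving tissue boundaries. The abstract does not specify the exact scheme used, so the choice of equation and all parameter values here are assumptions.

```python
# Minimal sketch of PDE-based ultrasound image enhancement (Perona-Malik
# anisotropic diffusion). Parameters are illustrative, not from the paper.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.2):
    """Smooth homogeneous regions while preserving edges."""
    img = img.astype(np.float64)
    for _ in range(n_iter):
        # finite-difference gradients toward the four neighbours
        d_n = np.roll(img, -1, axis=0) - img
        d_s = np.roll(img, 1, axis=0) - img
        d_e = np.roll(img, -1, axis=1) - img
        d_w = np.roll(img, 1, axis=1) - img
        # edge-stopping conductance: small across strong edges
        c_n = np.exp(-(d_n / kappa) ** 2)
        c_s = np.exp(-(d_s / kappa) ** 2)
        c_e = np.exp(-(d_e / kappa) ** 2)
        c_w = np.exp(-(d_w / kappa) ** 2)
        # explicit diffusion update (gamma <= 0.25 keeps it stable)
        img += gamma * (c_n * d_n + c_s * d_s + c_e * d_e + c_w * d_w)
    return img

# hypothetical usage on a synthetic noisy frame
frame = np.random.rand(256, 256) * 255
enhanced = anisotropic_diffusion(frame)
```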

2019 ◽  
Vol 46 (7) ◽  
pp. 3180-3193 ◽  
Author(s):  
Ran Zhou ◽  
Aaron Fenster ◽  
Yujiao Xia ◽  
J. David Spence ◽  
Mingyue Ding

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Liding Yao ◽  
Xiaojun Guan ◽  
Xiaowei Song ◽  
Yanbin Tan ◽  
Chun Wang ◽  
...  

Rib fracture detection is time-consuming and demanding work for radiologists. This study aimed to introduce a novel deep learning-based rib fracture detection system that helps radiologists diagnose rib fractures in chest computed tomography (CT) images conveniently and accurately. A total of 1707 patients from a single center were included in this study. We developed the rib fracture detection system on chest CT using a three-step algorithm. According to examination time, 1507, 100, and 100 patients were allocated to the training, validation, and testing sets, respectively. Free-response ROC (FROC) analysis was performed to evaluate the sensitivity and false-positive rate of the deep learning algorithm. Precision, recall, F1-score, negative predictive value (NPV), and detection and diagnosis time were selected as evaluation metrics to compare the diagnostic efficiency of this system with that of radiologists. A radiologist-only study was used as a benchmark, and a radiologist-model collaboration study was evaluated to assess the model's clinical applicability. A total of 50,170,399 blocks (91,574 fracture blocks; 50,078,825 normal blocks) were labelled for training. The F1-score of the Rib Fracture Detection System was 0.890, and its precision, recall, and NPV were 0.869, 0.913, and 0.969, respectively. By interacting with the detection system, the F1-scores of the junior and the experienced radiologists improved from 0.796 to 0.925 and from 0.889 to 0.970, respectively; their recall scores increased from 0.693 to 0.920 and from 0.853 to 0.972, respectively. On average, the diagnosis time of radiologists assisted by the detection system was reduced by 65.3 s. The constructed Rib Fracture Detection System performs comparably to an experienced radiologist and can automatically detect rib fractures in the clinical setting with high efficacy, which could reduce diagnosis time and radiologists' workload in clinical practice.
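
For clarity, the sketch below shows how the block-level evaluation metrics named above (precision, recall, F1-score, NPV) are derived from confusion-matrix counts. The counts in the usage line are hypothetical and are not the study's data.

```python
# Minimal sketch of the block-level metrics used to evaluate the detector.
def block_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)                      # fraction of flagged blocks that are fractures
    recall = tp / (tp + fn)                         # sensitivity over fracture blocks
    f1 = 2 * precision * recall / (precision + recall)
    npv = tn / (tn + fn)                            # fraction of cleared blocks that are truly normal
    return {"precision": precision, "recall": recall, "f1": f1, "npv": npv}

# hypothetical counts for illustration only
print(block_metrics(tp=90, fp=15, fn=10, tn=900))
```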


Water ◽  
2021 ◽  
Vol 13 (19) ◽  
pp. 2633
Author(s):  
Jie Yu ◽  
Yitong Cao ◽  
Fei Shi ◽  
Jiegen Shi ◽  
Dibo Hou ◽  
...  

Three-dimensional fluorescence spectroscopy has become increasingly useful in the detection of organic pollutants. However, this approach is limited by decreased accuracy in identifying low-concentration pollutants. Accordingly, this research proposes a new identification method for organic pollutants in drinking water that combines three-dimensional fluorescence spectroscopy data with a deep learning algorithm. A novel application of a convolutional autoencoder was designed to process the high-dimensional fluorescence data and extract multi-scale features from the spectra of drinking water samples containing organic pollutants. Extreme Gradient Boosting (XGBoost), an implementation of gradient-boosted decision trees, was used to identify the organic pollutants from the extracted features. The identification performance of the method was validated on three typical organic pollutants at different concentrations in a scenario of accidental pollution. Results showed that the proposed method achieved increased accuracy for both high-concentration (>10 μg/L) and low-concentration (≤10 μg/L) pollutant samples. Compared with traditional spectrum processing techniques, the convolutional autoencoder-based approach extracted more detailed features from the fluorescence spectral data. Moreover, evidence indicated that the proposed method maintained its detection ability when the background water changed, effectively reducing the rate of misjudgments associated with fluctuations in drinking water quality. This study demonstrates the possibility of using deep learning algorithms for spectral processing and contamination detection in drinking water.
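
A minimal sketch of the pipeline described above, assuming each excitation-emission fluorescence matrix is treated as a 64x64 single-channel image: a small convolutional autoencoder learns compressed features by reconstruction, and XGBoost classifies pollutants from the flattened latent codes. All shapes, hyperparameters, and the random data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # encoder: 64x64 spectrum -> 16x16 latent feature maps
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        # decoder mirrors the encoder to reconstruct the spectrum
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# hypothetical excitation-emission matrices and pollutant class labels
spectra = torch.rand(200, 1, 64, 64)
labels = np.random.randint(0, 3, size=200)

model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):                     # unsupervised reconstruction training
    recon, _ = model(spectra)
    loss = nn.functional.mse_loss(recon, spectra)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                   # extract latent features for the classifier
    _, z = model(spectra)
clf = XGBClassifier(n_estimators=100)
clf.fit(z.flatten(1).numpy(), labels)
```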


2021 ◽  
pp. 29-42
Author(s):  
Adnan Mohsin Abdulazeez

With the development of technology and smart devices in the medical field, computer systems have become an essential part of this development, enabling medical devices to learn. One of these learning methods is deep learning (DL), a branch of machine learning (ML). Deep learning has been adopted in this field because it is one of the modern methods for obtaining accurate results, and among the algorithms used for this purpose are convolutional neural networks (CNNs) and recurrent neural networks (RNNs). In this paper, we review what researchers have done to address fetal imaging problems, and then summarize and carefully discuss the applications across the segmentation and classification tasks identified for ultrasound images. Finally, this study discusses potential challenges and directions for applying deep learning in ultrasound image analysis.
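
To make the kind of CNN the review covers concrete, the sketch below classifies hypothetical fetal ultrasound patches with a tiny convolutional network. The architecture, input size, and class count are illustrative assumptions, not a model from any specific reviewed paper.

```python
# Minimal sketch of a CNN classifier for grayscale ultrasound patches.
import torch
import torch.nn as nn

class UltrasoundCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
        )
        self.classifier = nn.Linear(32 * 32 * 32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# hypothetical batch of 128x128 grayscale patches
logits = UltrasoundCNN()(torch.rand(8, 1, 128, 128))
print(logits.shape)  # torch.Size([8, 4])
```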


1999 ◽  
Vol 6 (3) ◽  
pp. E7 ◽  
Author(s):  
Alexander Hartov ◽  
Symma D. Eisner ◽  
W. Roberts ◽  
Keith D. Paulsen ◽  
Leah A. Platenik ◽  
...  

Image-guided neurosurgery that is directed by a preoperative imaging study, such as magnetic resonance (MR) imaging or computerized tomography (CT) scanning, can be very accurate provided no significant changes occur during surgery. A variety of factors known to affect brain tissue movement are not reflected in the preoperative images used for guidance. To update the information on which neuronavigation is based, the authors propose the use of three-dimensional (3-D) ultrasound images in conjunction with a finite-element computational model of the deformation of the brain. The 3-D ultrasound system will provide real-time information on the displacement of deep structures to guide the mathematical model. This paper has two goals: first, to present an outline of steps necessary to compute the location of a feature appearing in an ultrasound image in an arbitrary coordinate system; and second, to present an extensive evaluation of this system's accuracy. The authors have found that by using a stylus rigidly coupled to the 3-D tracker's sensor, they were able to locate a point with an overall error of 1.36 ± 1.67 mm (based on 39 points). When coupling the tracker to an ultrasound scanhead, they found that they could locate features appearing on ultrasound images with an error of 2.96 ± 1.85 mm (total 58 features). They also found that when registering a skull phantom to coordinates that were defined by MR imaging or CT scanning, they could do so with an error of 0.86 ± 0.61 mm (based on 20 coordinates). Based on their previous finding of brain shifts on the order of 1 cm during surgery, the accuracy of their system warrants its use in updating neuronavigation imaging data.
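
The sketch below illustrates the first goal outlined above: mapping a pixel in a 2-D ultrasound frame through the scanhead calibration and the tracker reading into the MR/CT-registered world frame using 4x4 homogeneous transforms. The matrices and pixel spacing are placeholders, not calibration results from the paper.

```python
# Minimal sketch of the image -> sensor -> tracker -> world coordinate chain.
import numpy as np

def to_homogeneous(p):
    return np.append(p, 1.0)

def locate_feature(pixel_uv, pixel_spacing_mm, T_image_to_sensor,
                   T_sensor_to_tracker, T_tracker_to_world):
    """Return the world-frame position (mm) of an ultrasound image feature."""
    # pixel indices -> millimetres in the image plane (z = 0 in the image frame)
    p_image = np.array([pixel_uv[0] * pixel_spacing_mm[0],
                        pixel_uv[1] * pixel_spacing_mm[1],
                        0.0])
    # chain the rigid-body transforms
    T = T_tracker_to_world @ T_sensor_to_tracker @ T_image_to_sensor
    return (T @ to_homogeneous(p_image))[:3]

# hypothetical usage with identity calibration and tracking matrices
I = np.eye(4)
print(locate_feature((120, 340), (0.1, 0.1), I, I, I))
```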

