Development and validation of a deep-learning model for scoring of radiographic finger joint destruction in rheumatoid arthritis

2019, Vol 3 (2)
Author(s):  
Toru Hirano, Masayuki Nishide, Naoki Nonaka, Jun Seita, Kosuke Ebina, ...

Abstract Objective The purpose of this research was to develop a deep-learning model to assess radiographic finger joint destruction in RA. Methods The model comprises two steps: a joint-detection step and a joint-evaluation step. Among 216 radiographs of 108 patients with RA, 186 radiographs were assigned to the training/validation dataset and 30 to the test dataset. In the training/validation dataset, images of the PIP joints, the IP joint of the thumb, and the MCP joints were manually clipped and scored for joint space narrowing (JSN) and bone erosion by clinicians, and then these images were augmented. As a result, 11 160 images were used to train and validate a deep convolutional neural network for joint evaluation, and a further 3720 selected images were used to train machine learning for joint detection. These two steps were combined into the assessment model for radiographic finger joint destruction. The performance of the model was examined on the test dataset, which was not included in the training/validation process, by comparing the scores assigned by the model with those assigned by clinicians. Results The model detected PIP joints, the IP joint of the thumb and MCP joints with a sensitivity of 95.3% and assigned scores for JSN and erosion. Accuracy (percentage of exact agreement) reached 49.3–65.4% for JSN and 70.6–74.1% for erosion. The correlation coefficient between scores by the model and clinicians per image was 0.72–0.88 for JSN and 0.54–0.75 for erosion. Conclusion Image processing with the trained convolutional neural network model is a promising approach to assessing radiographs in RA.
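
As a rough illustration of the joint-evaluation step, the sketch below builds a small Keras CNN that maps one clipped joint image to a JSN score and an erosion score. The input size, backbone, and score ranges are illustrative assumptions; the paper does not specify its exact architecture.

```python
# Minimal sketch of the two-step idea's second stage: each clipped joint
# crop is scored by a CNN with two heads (JSN and erosion). All shapes
# and layer sizes here are assumptions, not the authors' configuration.
from tensorflow.keras import layers, models

def build_joint_scorer(input_shape=(96, 96, 1), n_jsn=5, n_erosion=6):
    """CNN mapping one clipped joint image to JSN and erosion scores."""
    inputs = layers.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 128):          # simple convolutional backbone
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(128, activation="relu")(x)
    # Two classification heads, one per score type.
    jsn = layers.Dense(n_jsn, activation="softmax", name="jsn")(x)
    erosion = layers.Dense(n_erosion, activation="softmax", name="erosion")(x)
    model = models.Model(inputs, [jsn, erosion])
    model.compile(optimizer="adam",
                  loss={"jsn": "sparse_categorical_crossentropy",
                        "erosion": "sparse_categorical_crossentropy"},
                  metrics=["accuracy"])
    return model

model = build_joint_scorer()
model.summary()
```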

Diagnostics, 2021, Vol 11 (9), pp. 1672
Author(s):  
Luya Lian, Tianer Zhu, Fudong Zhu, Haihua Zhu

Objectives: Deep learning methods have achieved impressive diagnostic performance in the field of radiology. The current study aimed to use deep learning methods to detect caries lesions, classify different radiographic extensions on panoramic films, and compare the classification results with those of expert dentists. Methods: A total of 1160 dental panoramic films were evaluated by three expert dentists. All caries lesions in the films were marked with circles, whose combination was defined as the reference dataset. A training and validation dataset (1071 films) and a test dataset (89 films) were then established from the reference dataset. A convolutional neural network, called nnU-Net, was applied to detect caries lesions, and DenseNet121 was applied to classify the lesions according to their depths (lesions in the outer, middle, or inner third of the dentin: D1/D2/D3). The performance of the trained nnU-Net and DenseNet121 models on the test dataset was compared with the results of six expert dentists in terms of the intersection over union (IoU), Dice coefficient, accuracy, precision, recall, negative predictive value (NPV), and F1-score metrics. Results: nnU-Net yielded caries lesion segmentation IoU and Dice coefficient values of 0.785 and 0.663, respectively, and the accuracy and recall rate of nnU-Net were 0.986 and 0.821, respectively. The results of the expert dentists and the neural network were shown to be no different in terms of accuracy, precision, recall, NPV, and F1-score. For caries depth classification, DenseNet121 showed an overall accuracy of 0.957 for D1 lesions, 0.832 for D2 lesions, and 0.863 for D3 lesions. The recall results for the D1/D2/D3 lesions were 0.765, 0.652, and 0.918, respectively. All metric values, including accuracy, precision, recall, NPV, and F1-score, were shown to be no different from those of the experienced dentists. Conclusion: In detecting and classifying caries lesions on dental panoramic radiographs, the performance of deep learning methods was similar to that of expert dentists. The impact of applying these well-trained neural networks to disease diagnosis and treatment decision making should be explored.
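
For reference, the IoU and Dice coefficient reported for the segmentation step can be computed from binary masks as in the following sketch; the binary-mask format is an assumption.

```python
# Minimal sketch of the segmentation metrics reported for nnU-Net.
# `pred` and `truth` are assumed to be binary masks of the same shape.
import numpy as np

def iou_and_dice(pred: np.ndarray, truth: np.ndarray):
    """Return (IoU, Dice) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return iou, dice

# Toy example: two overlapping 2x2 "lesions" on a 4x4 grid.
a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[2:4, 1:3] = 1
print(iou_and_dice(a, b))  # IoU = 2/6 ≈ 0.33, Dice = 4/8 = 0.5
```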


2020, Vol 17 (9), pp. 4660-4665
Author(s):  
L. Megalan Leo, T. Kalpalatha Reddy

Dental caries is one of the most prevalent diseases of the teeth worldwide; nearly 90% of people are affected by cavities. Dental caries are cavities that form from remnant food and bacteria. Caries is a curable and preventable disease when it is identified at an early stage. Dentists use radiographic examination in addition to visual-tactile inspection to identify caries, but find it difficult to identify occlusal, pit, and fissure caries. A cavity that is left untreated and not identified at the earliest stage can lead to severe problems. Machine learning can be applied to this problem using a labelled dataset annotated by experienced dentists. In this paper, a convolution-based deep learning method is applied to identify the presence of caries in an image. 480 bitewing radiography images were collected from the Elsevier standard database. All input images are resized to 128×128 matrices. In preprocessing, a selective median filter is used to reduce noise in the image. The pre-processed inputs are fed to a deep learning model in which a convolutional neural network with the GoogLeNet Inception v3 architecture is implemented. The ReLU activation function is used with GoogLeNet to identify caries, providing dentists with precise and optimized results about caries and the affected area. The proposed technique achieves 86.7% accuracy on the testing dataset.
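
A minimal sketch of the described pipeline follows: median filtering, resizing to 128×128, and classification with an Inception v3 (GoogLeNet) backbone. A plain median filter stands in for the paper's "selective" median filter, and the classification head is an assumption; only the overall recipe follows the text.

```python
# Sketch of the pipeline: denoise, resize to 128x128, classify with an
# Inception v3 backbone and a ReLU head. Head layers and normalization
# are illustrative assumptions.
import numpy as np
import tensorflow as tf
from scipy.ndimage import median_filter
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

def preprocess(image: np.ndarray) -> np.ndarray:
    """Denoise a grayscale bitewing image and resize it to 128x128x3."""
    image = median_filter(image, size=3)               # noise reduction
    image = tf.image.resize(image[..., None], (128, 128))
    return tf.image.grayscale_to_rgb(image).numpy() / 255.0

backbone = InceptionV3(include_top=False, weights="imagenet",
                       input_shape=(128, 128, 3), pooling="avg")
model = models.Sequential([
    backbone,
    layers.Dense(64, activation="relu"),               # ReLU head
    layers.Dense(1, activation="sigmoid"),             # caries present?
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```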


2020
Author(s):  
Zicheng Hu, Alice Tang, Jaiveer Singh, Sanchita Bhattacharya, Atul J. Butte

Abstract Cytometry technologies are essential tools for immunology research, providing high-throughput measurements of immune cells at the single-cell level. Traditional approaches to interpreting and using cytometry measurements include manual or automated gating to identify cell subsets from the cytometry data, which provides highly intuitive results but may lead to significant information loss, in that additional details in measured or correlated cell signals might be missed. In this study, we propose and test a deep convolutional neural network for analyzing cytometry data in an end-to-end fashion, allowing a direct association between raw cytometry data and the clinical outcome of interest. Using nine large CyTOF studies from the open-access ImmPort database, we demonstrate that the deep convolutional neural network model can accurately diagnose latent cytomegalovirus (CMV) infection in healthy individuals, even when using highly heterogeneous data from different studies. In addition, we developed a permutation-based method for interpreting the deep convolutional neural network model and identified a CD27-CD94+ CD8+ T cell population significantly associated with latent CMV infection. Finally, we provide a tutorial for creating, training and interpreting the tailored deep learning model for cytometry data using Keras and TensorFlow (github.com/hzc363/DeepLearningCyTOF).
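
The sketch below illustrates the permutation-based interpretation idea: shuffle one marker across cells and measure how much the model's CMV prediction changes (the authors' full tutorial is at github.com/hzc363/DeepLearningCyTOF; the input shape and model interface here are assumptions).

```python
# Minimal sketch of permutation-based marker importance for a CyTOF model.
# `model` is assumed to be a Keras-style model taking a tensor of shape
# (samples, cells, markers) and returning a CMV probability per sample.
import numpy as np

def marker_importance(model, X: np.ndarray, n_repeats: int = 10,
                      seed: int = 0) -> np.ndarray:
    """Mean change in predicted probability when each marker is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X).ravel()
    importance = np.zeros(X.shape[-1])
    for m in range(X.shape[-1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            # Shuffle marker m across cells within each sample, breaking
            # its association with the outcome.
            for i in range(X.shape[0]):
                rng.shuffle(Xp[i, :, m])
            deltas.append(np.abs(model.predict(Xp).ravel() - baseline).mean())
        importance[m] = np.mean(deltas)
    return importance
```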


Author(s):  
Kannuru Padmaja

Abstract: In this paper, we present an implementation of Devanagari handwritten character recognition using deep learning. Handwritten character recognition is gaining importance due to its major contribution to automation systems. Devanagari is one of several scripts used in India; it consists of 12 vowels and 36 consonants. We implemented a deep learning model to recognize the characters. Character recognition involves five main steps: pre-processing, segmentation, feature extraction, prediction, and post-processing. The model uses a convolutional neural network for training and image processing techniques for character recognition, and reports the recognition accuracy. Keywords: convolutional neural network, character recognition, Devanagari script, deep learning.
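
As an illustration, a minimal Keras CNN for this task might look like the following; the 32×32 grayscale input and the layer sizes are assumptions, while the 48 output classes follow from the 12 vowels and 36 consonants.

```python
# Minimal sketch of a CNN for Devanagari character recognition.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),          # assumed pre-processed crop
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(48, activation="softmax"),   # 12 vowels + 36 consonants
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```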


2020, Vol 9 (05), pp. 25052-25056
Author(s):  
Abhi Kadam, Anupama Mhatre, Sayali Redasani, Amit Nerurkar

Current lighting technologies extend the options for changing the appearance of rooms and closed spaces, creating ambiences with an affective meaning. Using intelligence, these ambiences may be adapted instantly to the needs of the room’s occupant(s), possibly improving their well-being. In this paper, we actuate the lighting in our surroundings using mood detection. We analyze a person’s mood by facial emotion recognition using a deep learning model, namely a convolutional neural network (CNN). On recognizing the emotion, we actuate the surrounding lighting in accordance with the mood. Based on the implementation results, the system needs to be developed further by adding more specific data classes and training data.
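
A sketch of the emotion-to-lighting loop is shown below: a CNN classifies a face crop and the predicted mood selects a lighting preset. The emotion labels, RGB presets, and actuation call are illustrative assumptions.

```python
# Sketch of mood-driven lighting. `model` is assumed to be a trained Keras
# CNN over 48x48 grayscale face crops; presets are placeholder values.
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]
LIGHT_PRESETS = {               # mood -> (R, G, B) ambience, assumed values
    "angry": (80, 120, 255),    # cool, calming blue
    "happy": (255, 220, 150),   # warm white
    "neutral": (255, 255, 255),
    "sad": (255, 180, 100),     # soft warm tones
    "surprised": (200, 255, 200),
}

def actuate_lighting(model, face: np.ndarray) -> tuple:
    """Classify one 48x48 grayscale face crop and return an RGB preset."""
    probs = model.predict(face[None, ..., None]).ravel()
    mood = EMOTIONS[int(np.argmax(probs))]
    rgb = LIGHT_PRESETS[mood]
    # set_light_color(rgb)  # placeholder: send to the lighting controller
    return mood, rgb
```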


2021, Vol 11 (12), pp. 3199-3208
Author(s):  
K. Ganapriya, N. Uma Maheswari, R. Venkatesh

Prediction of the occurrence of a seizure would be of great help in taking the necessary precautions for the care of the patient. A deep learning model, a recurrent neural network (RNN), is designed to predict upcoming values in an EEG signal. A deep data analysis is performed to find the parameter that best differentiates normal values from seizure values. Next, a recurrent neural network model is built to predict the values in advance. Four variants of the recurrent neural network are designed, varying the number of time stamps and the number of LSTM layers, and the best model is identified. The best-performing RNN model is then used to predict the values. The performance of the model is evaluated in terms of explained variance score and R2 score. The model is found to perform well only when the number of elements in the test dataset is minimal, so it can predict seizure values only a few seconds in advance.
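
The sketch below shows one way to set up such a forecasting model: an LSTM network predicts the next EEG value from a fixed window of previous values, and the fit is scored with explained variance and R2. The window length, layer sizes, and synthetic data are assumptions.

```python
# Minimal sketch of next-value EEG forecasting with a stacked LSTM.
import numpy as np
from sklearn.metrics import explained_variance_score, r2_score
from tensorflow.keras import layers, models

WINDOW = 20                                  # time stamps fed to the RNN

def make_windows(series: np.ndarray):
    """Slice a 1-D EEG series into (window, next-value) training pairs."""
    X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
    return X[..., None], series[WINDOW:]

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Toy run on a synthetic signal in place of real EEG values.
series = np.sin(np.linspace(0, 50, 2000)).astype("float32")
X, y = make_windows(series)
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
pred = model.predict(X, verbose=0).ravel()
print(explained_variance_score(y, pred), r2_score(y, pred))
```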


Endoscopy, 2019, Vol 51 (12), pp. 1121-1129
Author(s):  
Bum-Joo Cho, Chang Seok Bang, Se Woo Park, Young Joo Yang, Seung In Seo, ...

Abstract Background Visual inspection, lesion detection, and differentiation between malignant and benign features are key aspects of an endoscopist’s role. The use of machine learning for the recognition and differentiation of images has been increasingly adopted in clinical practice. This study aimed to establish convolutional neural network (CNN) models to automatically classify gastric neoplasms based on endoscopic images. Methods Endoscopic white-light images of pathologically confirmed gastric lesions were collected and classified into five categories: advanced gastric cancer, early gastric cancer, high grade dysplasia, low grade dysplasia, and non-neoplasm. Three pretrained CNN models were fine-tuned using a training dataset. The classifying performance of the models was evaluated using a test dataset and a prospective validation dataset. Results A total of 5017 images were collected from 1269 patients, among which 812 images from 212 patients were used as the test dataset. An additional 200 images from 200 patients were collected and used for prospective validation. For the five-category classification, the weighted average accuracy of the Inception-Resnet-v2 model reached 84.6 %. The mean area under the curve (AUC) of the model for differentiating gastric cancer and neoplasm was 0.877 and 0.927, respectively. In prospective validation, the Inception-Resnet-v2 model showed lower performance compared with the endoscopist with the best performance (five-category accuracy 76.4 % vs. 87.6 %; cancer 76.0 % vs. 97.5 %; neoplasm 73.5 % vs. 96.5 %; P  < 0.001). However, there was no statistical difference between the Inception-Resnet-v2 model and the endoscopist with the worst performance in the differentiation of gastric cancer (accuracy 76.0 % vs. 82.0 %) and neoplasm (AUC 0.776 vs. 0.865). Conclusion The evaluated deep-learning models have the potential for clinical application in classifying gastric cancer or neoplasm on endoscopic white-light images.
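
As an illustration of the fine-tuning recipe, the sketch below attaches a new five-way softmax head to a pretrained Inception-Resnet-v2 backbone; the freezing policy, input size, and head layers are assumptions, while the five categories follow the text.

```python
# Sketch of fine-tuning a pretrained Inception-Resnet-v2 for five-category
# classification of endoscopic white-light images.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

CLASSES = ["advanced gastric cancer", "early gastric cancer",
           "high grade dysplasia", "low grade dysplasia", "non-neoplasm"]

backbone = InceptionResNetV2(include_top=False, weights="imagenet",
                             input_shape=(299, 299, 3), pooling="avg")
backbone.trainable = True                      # fine-tune all layers

model = models.Sequential([
    backbone,
    layers.Dropout(0.3),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```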


Symmetry, 2021, Vol 13 (11), pp. 2012
Author(s):  
Jiameng Gao, Chengzhong Liu, Junying Han, Qinglin Lu, Hengxing Wang, ...

Wheat is a very important food crop for mankind, and many new varieties are bred every year. Accurate identification of wheat varieties can promote the development of the wheat industry and the protection of breeding property rights. Although gene analysis technology can be used to determine wheat varieties accurately, it is costly, time-consuming, and inconvenient. Traditional machine learning methods can significantly reduce the cost and time of wheat cultivar identification, but their accuracy is not high. In recent years, the relatively popular deep learning methods have further improved accuracy over traditional machine learning, whereas it is quite difficult to continue to improve identification accuracy once the deep learning model has converged. Based on the ResNet and SENet models, this paper draws on the idea of the bagging-based ensemble estimator algorithm and proposes CMPNet, a deep learning model for wheat classification that couples images from the tillering period, the flowering period, and the seed. This convolutional neural network (CNN) model has a symmetrical structure along the direction of the tensor flow. The model uses collected images of different types of wheat in multiple growth periods. First, it applies transfer learning with the ResNet-50, SE-ResNet, and SE-ResNeXt models, training them on the collected images of 30 wheat varieties across growth periods. It then uses a concat layer to connect the output layers of the three models, and finally obtains the wheat classification result through the softmax function. Identification accuracy rose from 92.07% with seed images, 95.16% with tillering-stage images, and 97.38% with flowering-stage images to 99.51% for the combined model, and the model’s single inference time was only 0.0212 s. The model not only significantly improves the classification accuracy of wheat varieties, but also achieves low cost and high efficiency, which makes it a novel and important technology reference for wheat producers, managers, and law enforcement supervisors in the practice of wheat production.
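
The sketch below illustrates the ensemble idea: three backbones run in parallel, their pooled features are concatenated, and a softmax layer produces the 30-way variety prediction. Keras ships no SE-ResNet or SE-ResNeXt, so three plain ResNet variants stand in for the paper's ResNet-50/SE-ResNet/SE-ResNeXt trio; this substitution and the single shared input are assumptions.

```python
# Sketch of a concat-ensemble classifier in the spirit of CMPNet.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152

N_VARIETIES = 30
inputs = layers.Input(shape=(224, 224, 3))

# Three parallel pretrained backbones (stand-ins for the paper's trio).
branches = [
    Backbone(include_top=False, weights="imagenet", pooling="avg")(inputs)
    for Backbone in (ResNet50, ResNet101, ResNet152)
]

x = layers.Concatenate()(branches)          # the paper's concat layer
outputs = layers.Dense(N_VARIETIES, activation="softmax")(x)
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```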


10.2196/24762, 2021, Vol 9 (8), pp. e24762
Author(s):  
Hyun-Lim Yang, Chul-Woo Jung, Seong Mi Yang, Min-Soo Kim, Sungho Shim, ...

Background Arterial pressure-based cardiac output (APCO) is a less invasive method for estimating cardiac output without concerns about complications from the pulmonary artery catheter (PAC). However, inaccuracies of currently available APCO devices have been reported. Improvements to the algorithm by researchers are impossible, as only a subset of the algorithm has been released. Objective In this study, an open-source algorithm was developed and validated using a convolutional neural network and a transfer learning technique. Methods A retrospective study was performed using data from a prospective cohort registry of intraoperative bio-signal data from a university hospital. The convolutional neural network model was trained using the arterial pressure waveform as input and the stroke volume (SV) value as the output. The model parameters were pretrained using the SV values from a commercial APCO device (Vigileo or EV1000 with the FloTrac algorithm) and adjusted with a transfer learning technique using SV values from the PAC. The performance of the model was evaluated using the absolute error against the PAC on testing datasets from separate periods. Finally, we compared the performance of the deep learning model and of FloTrac against the SV values from the PAC. Results A total of 2057 surgical cases (1958 training and 99 testing cases) were used in the registry. For the deep learning model, the absolute errors of SV were 14.5 (SD 13.4) mL (10.2 [SD 8.4] mL in cardiac surgery and 17.4 [SD 15.3] mL in liver transplantation). The absolute errors of the deep learning model were significantly smaller than those of FloTrac (16.5 [SD 15.4] mL and 18.3 [SD 15.1] mL; P<.001). Conclusions The deep learning–based APCO algorithm showed better performance than the commercial APCO device. Further improvement of the algorithm developed in this study may be helpful for estimating cardiac output accurately in clinical practice and optimizing high-risk patient care.
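
The sketch below outlines the two-stage training described: a 1-D CNN maps an arterial pressure waveform segment to stroke volume, is pretrained on abundant FloTrac-derived SV labels, and is then fine-tuned on PAC-derived SV labels (the transfer step). The waveform length, sampling rate, and layer sizes are assumptions.

```python
# Sketch of pretrain-then-transfer for waveform-to-SV regression.
import tensorflow as tf
from tensorflow.keras import layers, models

SAMPLES = 10 * 100                             # 10 s at an assumed 100 Hz

def build_apco_model():
    return models.Sequential([
        layers.Input(shape=(SAMPLES, 1)),
        layers.Conv1D(32, 9, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(64, 9, activation="relu"),
        layers.MaxPooling1D(4),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),                       # stroke volume in mL
    ])

model = build_apco_model()
model.compile(optimizer="adam", loss="mae")
# Stage 1: pretrain on FloTrac SV labels (abundant but less accurate).
# model.fit(waveforms_flotrac, sv_flotrac, ...)
# Stage 2: fine-tune on PAC SV labels with a smaller learning rate.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mae")
# model.fit(waveforms_pac, sv_pac, ...)
```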

