Dental Caries Classification System Using Deep Learning Based Convolutional Neural Network

2020 ◽  
Vol 17 (9) ◽  
pp. 4660-4665
Author(s):  
L. Megalan Leo ◽  
T. Kalpalatha Reddy

In modern times, dental caries is one of the most prevalent diseases of the teeth worldwide; almost 90% of people are affected by cavities. Dental caries are cavities that occur due to remnant food and bacteria. Caries is a curable and preventable disease when it is identified at an early stage. Dentists use radiographic examination in addition to visual-tactile inspection to identify caries, but find it difficult to identify occlusal, pit, and fissure caries; a cavity that is not identified and treated at the earliest stage may lead to severe problems. Machine learning can be applied to this problem using a labelled dataset provided by experienced dentists. In this paper, a convolution-based deep learning method is applied to identify the presence of cavities in an image. 480 bitewing radiography images were collected from the Elsevier standard database. All input images are resized to 128×128 matrices. In preprocessing, a selective median filter is used to reduce noise in the image. The pre-processed inputs are given to a deep learning model in which a convolutional neural network with the GoogLeNet Inception v3 architecture is implemented. The ReLU activation function is used with GoogLeNet to identify caries, providing dentists with precise and optimized results about the caries and the affected area. The proposed technique achieves 86.7% accuracy on the testing dataset.
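The selective median filtering step described above can be sketched as follows. The abstract does not specify the filter's parameters, so the impulse-noise thresholds and the 3×3 window here are assumptions; the idea is that only suspected noise pixels are replaced, leaving clean pixels (and edges) untouched:

```python
import numpy as np

def selective_median_filter(img, low=0, high=255):
    """Replace only suspected impulse-noise pixels (values at the extremes)
    with the median of their 3x3 neighbourhood; clean pixels pass through."""
    padded = np.pad(img, 1, mode="edge")
    out = img.copy()
    noisy = (img <= low) | (img >= high)
    for y, x in zip(*np.nonzero(noisy)):
        window = padded[y:y + 3, x:x + 3]  # 3x3 neighbourhood in the original
        out[y, x] = np.median(window)
    return out

# A tiny test image with one salt (255) and one pepper (0) pixel.
img = np.array([[10, 12, 11, 13],
                [12, 255, 12, 11],
                [11, 12, 0, 12],
                [13, 11, 12, 10]], dtype=np.uint8)
clean = selective_median_filter(img)
```

Both noise pixels are replaced by their neighbourhood median while every other pixel keeps its original value.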

2020 ◽  
Author(s):  
Zicheng Hu ◽  
Alice Tang ◽  
Jaiveer Singh ◽  
Sanchita Bhattacharya ◽  
Atul J. Butte

Abstract
Cytometry technologies are essential tools for immunology research, providing high-throughput measurements of immune cells at the single-cell level. Traditional approaches to interpreting and using cytometry measurements rely on manual or automated gating to identify cell subsets. Gating provides highly intuitive results but may cause significant information loss, in that additional details in measured or correlated cell signals might be missed. In this study, we propose and test a deep convolutional neural network for analyzing cytometry data in an end-to-end fashion, allowing a direct association between raw cytometry data and the clinical outcome of interest. Using nine large CyTOF studies from the open-access ImmPort database, we demonstrated that the deep convolutional neural network model can accurately diagnose latent cytomegalovirus (CMV) infection in healthy individuals, even when using highly heterogeneous data from different studies. In addition, we developed a permutation-based method for interpreting the deep convolutional neural network model and identified a CD27-CD94+ CD8+ T cell population significantly associated with latent CMV infection. Finally, we provide a tutorial for creating, training and interpreting the tailored deep learning model for cytometry data using Keras and TensorFlow (github.com/hzc363/DeepLearningCyTOF).
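The permutation-based interpretation idea can be sketched with a toy predictor standing in for the trained CNN: shuffle one input channel at a time and record how much the model's accuracy drops. The simulated data and the "important channel" below are illustrative assumptions, not the study's markers:

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_importance(predict, X, y, n_repeats=20):
    """Permutation-based interpretation: permute one channel at a time
    and measure the resulting drop in accuracy."""
    base = np.mean(predict(X) == y)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        accs = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break channel j only
            accs.append(np.mean(predict(Xp) == y))
        drops[j] = base - np.mean(accs)
    return drops

# Toy data: the label depends only on channel 0.
X = rng.normal(size=(400, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda data: (data[:, 0] > 0).astype(int)  # stand-in for the CNN

drops = permutation_importance(predict, X, y)
```

Channel 0 shows a large accuracy drop when permuted, while the uninformative channels show none; the same logic, applied to a trained network's input markers, is what surfaces a cell population such as CD27-CD94+ CD8+ T cells.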


2021 ◽  
Author(s):  
P. Golda Jeyasheeli ◽  
N. Indumathi

About 1 percent of the Indian population is deaf and mute. Deaf and mute people use gestures to interact with each other. Ordinary humans fail to grasp the significance of these gestures, which makes interaction between deaf and mute people and others hard. To help ordinary citizens understand the signs, an automated sign language identification system is proposed. A smart wearable hand device is designed by attaching different sensors to a glove to capture the gestures. Each gesture has unique sensor values, and those values are collected in an Excel file. The characteristics of the movements are extracted and categorized with the aid of a convolutional neural network (CNN). The data from the test set is identified by the CNN according to the classification. The objective of this system is to bridge the interaction gap between people who are deaf or hard of hearing and the rest of society.
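The sensor-vector-to-gesture mapping at the heart of the system can be sketched as follows. The paper trains a CNN on the glove readings; here a nearest-centroid classifier stands in for it, and the gesture names and flex-sensor values are hypothetical:

```python
import numpy as np

# Hypothetical flex-sensor readings (five sensors, one per finger) for two
# gestures; each row is one recorded performance of the gesture.
train = {
    "hello":     np.array([[820, 810, 805, 790, 800],
                           [815, 805, 810, 795, 805]]),
    "thank_you": np.array([[300, 310, 295, 305, 300],
                           [305, 300, 310, 295, 305]]),
}
centroids = {g: v.mean(axis=0) for g, v in train.items()}

def classify(reading):
    """Assign a new sensor vector to the nearest gesture centroid."""
    return min(centroids, key=lambda g: np.linalg.norm(reading - centroids[g]))

print(classify(np.array([812, 808, 803, 792, 801])))  # -> hello
```

A CNN replaces the centroid comparison with learned feature extraction, but the input and output of the pipeline (a sensor vector in, a gesture label out) are the same.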


Author(s):  
Kannuru Padmaja

Abstract: In this paper, we present the implementation of Devanagari handwritten character recognition using deep learning. Handwritten character recognition is gaining importance due to its major contribution to automation systems. Devanagari is one of the many scripts used in India; it consists of 12 vowels and 36 consonants. Here we implement a deep learning model to recognize the characters. Character recognition mainly involves five steps: pre-processing, segmentation, feature extraction, prediction, and post-processing. The model uses a convolutional neural network for training and image processing techniques for character recognition, and reports the accuracy of recognition. Keywords: convolutional neural network, character recognition, Devanagari script, deep learning.
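The pre-processing step of the five-step pipeline can be sketched as follows; the binarization threshold, bounding-box crop, and 32×32 output grid are illustrative assumptions, since the paper does not fix these details here:

```python
import numpy as np

def preprocess(glyph, size=32):
    """Pre-processing sketch for a handwritten character image:
    binarize, crop to the ink bounding box, resize to a fixed grid."""
    binary = (glyph > glyph.mean()).astype(np.float32)  # simple threshold
    ys, xs = np.nonzero(binary)
    crop = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Nearest-neighbour resize to size x size.
    ri = np.arange(size) * crop.shape[0] // size
    ci = np.arange(size) * crop.shape[1] // size
    return crop[np.ix_(ri, ci)]

# A crude synthetic glyph: a 255-valued square stroke with a hole.
glyph = np.zeros((64, 64))
glyph[20:40, 25:45] = 255
glyph[26:34, 30:40] = 0
x = preprocess(glyph)
```

The fixed-size, normalized output is what a CNN's input layer expects; segmentation, feature extraction, and prediction then operate on arrays of this shape.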


2020 ◽  
Vol 9 (05) ◽  
pp. 25052-25056
Author(s):  
Abhi Kadam ◽  
Anupama Mhatre ◽  
Sayali Redasani ◽  
Amit Nerurkar

Current lighting technologies extend the options for changing the appearance of rooms and closed spaces, creating ambiences with an affective meaning. Using intelligence, these ambiences may instantly be adapted to the needs of the room's occupant(s), possibly improving their well-being. In this paper, we actuate the lighting in our surroundings using mood detection. We analyze the mood of the person by facial emotion recognition using a deep learning model, a convolutional neural network (CNN). On recognizing the emotion, we actuate the surrounding lighting in accordance with the mood. Based on the implementation results, the system needs to be developed further by adding more specific data classes and training data.
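The actuation step can be sketched as a simple mapping from the CNN's emotion label to a lighting setting. The emotion labels and RGB choices below are illustrative assumptions, not the paper's configuration:

```python
# Map a detected emotion to an RGB ambience for the room lights.
MOOD_TO_RGB = {
    "happy":   (255, 223, 120),  # warm bright
    "sad":     (70, 110, 180),   # soft blue
    "angry":   (120, 200, 160),  # calming green
    "neutral": (255, 255, 255),  # plain white
}

def actuate_lighting(emotion):
    """Return the RGB setting for a detected emotion; fall back to neutral."""
    return MOOD_TO_RGB.get(emotion, MOOD_TO_RGB["neutral"])

print(actuate_lighting("sad"))      # (70, 110, 180)
print(actuate_lighting("unknown"))  # falls back to neutral white
```

Falling back to a neutral setting for unrecognized labels matters in practice, since the paper notes the emotion classes still need to be expanded.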


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2012
Author(s):  
Jiameng Gao ◽  
Chengzhong Liu ◽  
Junying Han ◽  
Qinglin Lu ◽  
Hengxing Wang ◽  
...  

Wheat is a very important food crop for mankind. Many new varieties are bred every year. The accurate judgment of wheat varieties can promote the development of the wheat industry and the protection of breeding property rights. Although gene analysis technology can be used to accurately determine wheat varieties, it is costly, time-consuming, and inconvenient. Traditional machine learning methods can significantly reduce the cost and time of wheat cultivar identification, but their accuracy is not high. In recent years, the relatively popular deep learning methods have further improved the accuracy over traditional machine learning, whereas it is quite difficult to continue to improve the identification accuracy after the deep learning model converges. Based on the ResNet and SENet models, this paper draws on the idea of the bagging-based ensemble estimator algorithm and proposes a deep learning model for wheat classification, CMPNet, which couples tillering-period, flowering-period, and seed images. This convolutional neural network (CNN) model has a symmetrical structure along the direction of the tensor flow. The model uses collected images of different types of wheat across multiple growth periods. First, it uses the transfer learning method of the ResNet-50, SE-ResNet, and SE-ResNeXt models, and then trains on the collected images of 30 kinds of wheat in different growth periods. It then uses a concat layer to connect the output layers of the three models, and finally obtains the wheat classification results through the softmax function. The accuracy of wheat variety identification increased to 99.51%, compared with 92.07% from seed images alone, 95.16% from tillering-stage images, and 97.38% from flowering-stage images. The model's single inference time was only 0.0212 s.
The model not only significantly improves the classification accuracy of wheat varieties, but also achieves low cost and high efficiency, which makes it a novel and important technology reference for wheat producers, managers, and law enforcement supervisors in the practice of wheat production.
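The concat-then-softmax fusion step described above can be sketched as follows; the feature dimensions, weights, and inputs are random stand-ins, not the trained CMPNet parameters:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Feature vectors from the three backbones (ResNet-50, SE-ResNet,
# SE-ResNeXt) for one input image; 64 features each is an assumption.
f_resnet, f_seresnet, f_seresnext = (rng.normal(size=(1, 64)) for _ in range(3))

# Concat layer: join the three output vectors along the feature axis.
features = np.concatenate([f_resnet, f_seresnet, f_seresnext], axis=1)

# Final dense layer + softmax over the 30 wheat varieties.
W = rng.normal(size=(192, 30))
probs = softmax(features @ W)
```

The ensemble gain comes from the concatenation: the classifier sees all three backbones' features at once, rather than averaging their separate predictions.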


10.2196/24762 ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. e24762
Author(s):  
Hyun-Lim Yang ◽  
Chul-Woo Jung ◽  
Seong Mi Yang ◽  
Min-Soo Kim ◽  
Sungho Shim ◽  
...  

Background
Arterial pressure-based cardiac output (APCO) is a less invasive method for estimating cardiac output without concerns about complications from the pulmonary artery catheter (PAC). However, inaccuracies of currently available APCO devices have been reported. Improvements to the algorithm by researchers are impossible, as only a subset of the algorithm has been released.
Objective
In this study, an open-source algorithm was developed and validated using a convolutional neural network and a transfer learning technique.
Methods
A retrospective study was performed using data from a prospective cohort registry of intraoperative bio-signal data from a university hospital. The convolutional neural network model was trained using the arterial pressure waveform as input and the stroke volume (SV) value as the output. The model parameters were pretrained using the SV values from a commercial APCO device (Vigileo or EV1000 with the FloTrac algorithm) and adjusted with a transfer learning technique using SV values from the PAC. The performance of the model was evaluated using absolute error for the PAC on the testing dataset from separate periods. Finally, we compared the performance of the deep learning model and the FloTrac with the SV values from the PAC.
Results
A total of 2057 surgical cases (1958 training and 99 testing cases) were used in the registry. In the deep learning model, the absolute errors of SV were 14.5 (SD 13.4) mL (10.2 [SD 8.4] mL in cardiac surgery and 17.4 [SD 15.3] mL in liver transplantation). Compared with FloTrac, the absolute errors of the deep learning model were significantly smaller (16.5 [SD 15.4] and 18.3 [SD 15.1], P<.001).
Conclusions
The deep learning–based APCO algorithm showed better performance than the commercial APCO device. Further improvement of the algorithm developed in this study may be helpful for estimating cardiac output accurately in clinical practice and optimizing high-risk patient care.
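The pretrain-then-adjust recipe in the Methods can be sketched with a toy linear model in place of the CNN: pretrain on plentiful but biased proxy labels (FloTrac-like SV estimates), then fine-tune on scarce reference labels (PAC SV). The features, labels, and learning rate below are simulated stand-ins, not the study's waveform data:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit(w, X, y, lr=0.01, steps=500):
    """Least-squares gradient descent on y ~ X @ w."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

# Simulated "waveform features" and a ground-truth relation to stroke volume.
w_true = np.array([3.0, -1.0, 2.0, 0.5, 1.5])
X_pre = rng.normal(size=(2000, 5))
y_proxy = X_pre @ (w_true + 0.4)   # plentiful but biased device-like labels
X_gold = rng.normal(size=(100, 5))
y_gold = X_gold @ w_true           # scarce PAC reference labels

w_pre = fit(np.zeros(5), X_pre, y_proxy)  # pretraining on proxy labels
w_ft = fit(w_pre, X_gold, y_gold)         # transfer: fine-tune on PAC labels
```

Pretraining lands near the biased proxy solution; starting the fine-tune from those weights lets the small PAC dataset correct the bias, which a model trained from scratch on 100 samples would do less reliably.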


Author(s):  
Syed Farhan Hyder Abidi

India accounts for the world’s largest number of TB cases, with 2.8 million cases annually, representing more than a quarter of the global TB burden. Tuberculosis (TB) is caused by the bacterium Mycobacterium tuberculosis, which most commonly affects the lungs. TB is transmitted from person to person through the air: when people with TB cough, sneeze, or spit, the germs are propelled into the air. This paper showcases a methodology that uses a deep convolutional neural network (dCNN) for the detection of tuberculosis in the lungs. The accuracy obtained by the model is dependable and compares favourably with that of other neural networks.


Author(s):  
Jebaveerasingh Jebadurai ◽  
Immanuel Johnraja Jebadurai ◽  
Getzi Jeba Leelipushpam Paulraj ◽  
Sushen Vallabh Vangeepuram
