Burnt Human Skin Segmentation and Depth Classification Using Deep Convolutional Neural Network (DCNN)

2020 ◽  
Vol 10 (10) ◽  
pp. 2421-2429
Author(s):  
Fakhri Alam Khan ◽  
Ateeq Ur Rehman Butt ◽  
Muhammad Asif ◽  
Hanan Aljuaid ◽  
Awais Adnan ◽  
...  

The World Health Organization (WHO) maintains health-related statistics worldwide, including the leading causes of death, and takes the necessary measures accordingly. Burn injuries occur mostly in middle- and low-income countries due to a lack of resources, and serious burns can result in death. Because specialists and burn surgeons are not accessible, simple and basic health care units in tribal areas and small cities struggle to diagnose burn depth accurately. The primary goals of this research are to segment the burnt region of skin from normal skin and to diagnose the burn depth according to the level of burn. The dataset contains 600 images of burnt patients, acquired in a real-time environment at the Allied Burn and Reconstructive Surgery Unit (ABRSU), Faisalabad, Pakistan. Burnt human skin was segmented using Otsu's method, and the image feature vector was obtained from statistical measures such as the mean and median. A deep-learning classifier, a Deep Convolutional Neural Network (DCNN), was used to classify the burnt skin into different depths according to the level of burn. About 60 percent of the images were used to train the classifier, and the remaining 40 percent were used to estimate its average accuracy. The average accuracy of the DCNN classifier was 83.4 percent, the best result reported so far on this task. With these results, young physicians and practitioners may be able to diagnose burn depth and start proper medication.
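The segmentation step described in this abstract can be sketched with a minimal NumPy implementation of Otsu's method: search for the grayscale threshold that maximizes between-class variance, then compute simple statistical features (mean, median) over the segmented region. The synthetic bimodal image and the specific feature choices below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def otsu_threshold(image):
    """Exhaustive search for the grayscale threshold that maximizes
    the between-class variance (Otsu's method)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    total = hist.sum()
    total_sum = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    cum_count, cum_sum = 0, 0.0
    for t in range(256):
        cum_count += hist[t]
        cum_sum += t * hist[t]
        if cum_count == 0 or cum_count == total:
            continue
        w0 = cum_count / total                      # background weight
        w1 = 1.0 - w0                               # foreground weight
        mu0 = cum_sum / cum_count                   # background mean
        mu1 = (total_sum - cum_sum) / (total - cum_count)  # foreground mean
        var = w0 * w1 * (mu0 - mu1) ** 2            # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic stand-in for a burn photograph: dark "normal skin" background
# with a brighter central "burnt" region
rng = np.random.default_rng(0)
img = rng.normal(60, 10, (64, 64))
img[16:48, 16:48] = rng.normal(180, 10, (32, 32))
img = np.clip(img, 0, 255)

t = otsu_threshold(img)
mask = img > t                                      # segmented burnt region
features = [img[mask].mean(), np.median(img[mask])] # statistical feature vector
```

The resulting feature vector would then feed the DCNN classifier described in the abstract.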

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Pei Yang ◽  
Yong Pi ◽  
Tao He ◽  
Jiangming Sun ◽  
Jianan Wei ◽  
...  

Abstract Background 99mTc-pertechnetate thyroid scintigraphy is a valid complementary avenue for evaluating thyroid disease in the clinic. Although the image features of thyroid scintigrams are relatively simple, their interpretation shows only moderate consistency among physicians. We therefore aimed to develop an artificial intelligence (AI) system to automatically classify the four patterns of thyroid scintigrams. Methods We collected 3087 thyroid scintigrams from center 1 to construct the training dataset (n = 2468) and internal validation dataset (n = 619), and another 302 cases from center 2 as the external validation dataset. Four pre-trained neural networks, ResNet50, DenseNet169, InceptionV3, and InceptionResNetV2, were used to construct the AI models. The models were trained separately with transfer learning. We evaluated each model’s performance with the following metrics: accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), recall, precision, and F1-score. Results The overall accuracy of all four pre-trained networks in classifying the four common uptake patterns exceeded 90%, and InceptionV3 stood out from the others, reaching the highest overall accuracy of 92.73% on internal validation and 87.75% on external validation. For the individual categories, the area under the receiver operating characteristic curve (AUC) on internal validation was 0.986 for ‘diffusely increased,’ 0.997 for ‘diffusely decreased,’ 0.998 for ‘focal increased,’ and 0.945 for ‘heterogeneous uptake.’ The corresponding AUCs on external validation were 0.939, 1.000, 0.974, and 0.915, respectively.
Conclusions A deep convolutional neural network-based AI model showed considerable performance in classifying thyroid scintigrams and may help physicians interpret them more consistently and efficiently.
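The one-vs-rest metrics listed in this abstract (sensitivity, specificity, PPV, NPV, F1) can all be derived from a single multi-class confusion matrix. A minimal NumPy sketch, using a hypothetical 4-class confusion matrix for the four uptake patterns (the numbers are invented for illustration, not the study's results):

```python
import numpy as np

def per_class_metrics(conf):
    """One-vs-rest sensitivity, specificity, PPV, NPV and F1 per class,
    given a square confusion matrix (rows = true, cols = predicted)."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    out = {}
    for k in range(conf.shape[0]):
        tp = conf[k, k]
        fn = conf[k].sum() - tp          # true class k, predicted elsewhere
        fp = conf[:, k].sum() - tp       # predicted k, true class elsewhere
        tn = total - tp - fn - fp
        sens = tp / (tp + fn)            # sensitivity == recall
        spec = tn / (tn + fp)
        ppv = tp / (tp + fp)             # PPV == precision
        npv = tn / (tn + fn)
        f1 = 2 * ppv * sens / (ppv + sens)
        out[k] = dict(sensitivity=sens, specificity=spec,
                      ppv=ppv, npv=npv, f1=f1)
    return out

# Hypothetical confusion matrix for the four uptake patterns
cm = [[50,  2,  1,  0],
      [ 3, 45,  0,  2],
      [ 0,  1, 48,  1],
      [ 2,  0,  2, 46]]
metrics = per_class_metrics(cm)
accuracy = np.trace(np.asarray(cm, dtype=float)) / np.asarray(cm).sum()
```

Overall accuracy is simply the trace of the confusion matrix over its sum, which is how a figure like 92.73% would be computed.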


2020 ◽  
Vol 10 (8) ◽  
pp. 1943-1948
Author(s):  
Ran Hui ◽  
Jiaxing Chen ◽  
Yu Liu ◽  
Lin Shi ◽  
Chao Fu ◽  
...  

Objective: To explore the application of deep convolutional neural network theory to thyroid ultrasound image analysis and feature extraction, to help predict the patient’s condition. Methods: The thyroid color ultrasound image dataset of our hospital was selected for the training and test samples. A comparison experiment was designed within a deep convolutional neural network learning framework to test the feasibility of the method. Results: Image classification based on the deep neural network algorithm predicts thyroid nodule lesions well and achieves good accuracy in classifying benign and malignant nodules. Conclusion: The clinical application of deep learning for thyroid ultrasound image feature extraction and analysis can improve the accuracy of benign versus malignant classification of thyroid nodules.


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Ying Ren ◽  
Yu He ◽  
Linghua Cong

Objective. To investigate the application value of a deep convolutional neural network (CNN) model for the cytological assessment of thyroid nodules. Methods. 117 patients with thyroid nodules who underwent thyroid cytology examination at the Affiliated People’s Hospital of Ningbo University between January 2017 and December 2019 were included in this study. 100 papillary thyroid cancer samples and 100 nonmalignant samples were collected. The sample images were translated vertically and horizontally, creating 900 images in each direction. The resulting images were randomly divided into training samples (n = 1260) and test samples (n = 540), a 7 : 3 training-to-test ratio. On the training samples, the pre-trained deep convolutional neural network architecture ResNet50 was trained and fine-tuned. A convolutional neural network-based computer-aided detection (CNN-CAD) system was constructed to perform a full-length scan of the test sample slides. The ability of CNN-CAD to screen malignant tumors was analyzed using the threshold setting method. As a verification set, eighty pathological images were collected from patients who received treatment between January 2020 and May 2020 and used to verify the value of the CNN in screening malignant thyroid nodules. Results. As the number of iterations increased, the training and verification loss of the CNN model gradually decreased and stabilized, while the training and verification accuracy gradually increased and stabilized. The average loss rate determined by the CNN model was (22.35 ± 0.62)% for training samples and (26.41 ± 3.37)% for test samples. The average accuracy rate was (91.04 ± 2.11)% for training samples and (91.26 ± 1.02)% for test samples. Conclusion.
The CNN model exhibits high value in the cytological diagnosis of thyroid diseases and can be used for the cytological diagnosis of malignant thyroid tumors in the clinic.
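The augmentation and split arithmetic in this abstract (200 base samples, translated copies, 1260/540 at 7 : 3) can be reproduced with a short NumPy sketch. The shift amounts, patch size, and the use of wrap-around `np.roll` in place of border-padded translation are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# 200 stand-in cytology patches: 100 malignant (label 1) + 100 benign (label 0)
images = rng.random((200, 32, 32))
labels = np.repeat([1, 0], 100)

def translate(img, dx, dy):
    """Wrap-around shift as a lightweight stand-in for padded translation."""
    return np.roll(img, shift=(dy, dx), axis=(0, 1))

# Augment: original + 4 horizontal + 4 vertical shifts -> 9 images per sample,
# i.e. 1800 images in total (900 per shift direction, as in the abstract)
aug_imgs, aug_labels = [], []
for img, lab in zip(images, labels):
    variants = [img]
    for s in (-4, -2, 2, 4):
        variants.append(translate(img, s, 0))   # horizontal translation
        variants.append(translate(img, 0, s))   # vertical translation
    aug_imgs.extend(variants)
    aug_labels.extend([lab] * len(variants))

aug_imgs = np.stack(aug_imgs)
aug_labels = np.array(aug_labels)

# Shuffle and split 7 : 3 -> 1260 training / 540 test
idx = rng.permutation(len(aug_imgs))
n_train = int(0.7 * len(aug_imgs))
train_idx, test_idx = idx[:n_train], idx[n_train:]
```

In practice all shifted copies of one patient's sample should land on the same side of the split to avoid leakage; the abstract does not specify how this was handled.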


2020 ◽  
Vol 8 (4) ◽  
pp. 78-95
Author(s):  
Neeru Jindal ◽  
Harpreet Kaur

The ease of doctoring videos with readily available editing software has made verifying their authenticity a major problem. This article presents a highly efficient method for exposing inter-frame tampering in videos by means of a deep convolutional neural network (DCNN). The proposed algorithm detects forgery without requiring additional pre-embedded information about the frames. Unlike pre-existing learning techniques, the algorithm classifies forged frames on the basis of the correlation between frames and the abnormalities observed by the DCNN. The decoders used for batch normalization of the input improve training speed. Simulation results on the REWIND and GRIP video datasets, with an average accuracy of 98%, show the superiority of the proposed algorithm over existing ones. The proposed algorithm can detect forged content in YouTube-compressed video with an accuracy reaching up to 100% on the GRIP dataset and 98.99% on the REWIND dataset.
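The core signal behind inter-frame forgery detection is that consecutive frames of an untampered video are highly correlated, and a splice breaks that correlation. A minimal NumPy sketch of this idea on synthetic frames (the sequence, the splice position, and the use of a plain Pearson correlation dip instead of the paper's DCNN classifier are all illustrative assumptions):

```python
import numpy as np

def interframe_correlations(frames):
    """Pearson correlation between each pair of consecutive frames."""
    corrs = [np.corrcoef(a.ravel(), b.ravel())[0, 1]
             for a, b in zip(frames[:-1], frames[1:])]
    return np.array(corrs)

rng = np.random.default_rng(1)
base = rng.random((32, 32))

# A smoothly varying 10-frame sequence: shared content plus small noise
frames = [base + 0.01 * i + 0.01 * rng.random((32, 32)) for i in range(10)]

# Simulate an inter-frame forgery: splice an unrelated frame at position 5
frames[5] = rng.random((32, 32))

c = interframe_correlations(frames)
suspect = int(np.argmin(c))   # a sharp correlation dip flags the tampered joint
```

A learned model, as in the article, replaces the fixed dip threshold with a classifier over such correlation features.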


2019 ◽  
Vol 14 ◽  
pp. 155892501989739 ◽  
Author(s):  
Zhoufeng Liu ◽  
Chi Zhang ◽  
Chunlei Li ◽  
Shumin Ding ◽  
Yan Dong ◽  
...  

Fabric defect recognition is an important measure for quality control in a textile factory. This article utilizes a deep convolutional neural network to recognize defects in fabrics that have complicated textures. Although convolutional neural networks are very powerful, their large number of parameters consumes considerable computation time and memory bandwidth. In real-world applications, however, the fabric defect recognition task needs to be carried out in a timely fashion on a computation-limited platform. To optimize a deep convolutional neural network, a novel method is introduced to reveal the input pattern that originally caused a specific activation in the network feature maps. Using this visualization technique, this study visualizes the features in a fully trained convolutional model and changes the architecture of the original neural network to reduce the computational load. After a series of improvements, a new convolutional network is acquired that is more efficient for fabric image feature extraction; the computational load and the total number of parameters of the new network are 23% and 8.9%, respectively, of those of the original model. The proposed neural network is specifically tailored for fabric defect recognition in resource-constrained environments. All of the source code and pretrained models are available online at https://github.com/ZCmeteor .
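The parameter savings reported above come from the fact that a convolution layer's parameter count scales with the product of its input and output channels, so halving channels roughly quarters the parameters. A small sketch with hypothetical channel configurations (these are not the layer widths from the article, only an illustration of the scaling):

```python
def conv_params(c_in, c_out, k=3):
    """Parameter count of one k x k convolution layer: weights + biases."""
    return c_out * (c_in * k * k + 1)

# Hypothetical original vs slimmed channel configurations
original = [(3, 64), (64, 128), (128, 256), (256, 256)]
slimmed  = [(3, 16), (16, 32),  (32, 64),   (64, 64)]

p_orig = sum(conv_params(ci, co) for ci, co in original)
p_slim = sum(conv_params(ci, co) for ci, co in slimmed)
ratio = p_slim / p_orig   # quadratic shrinkage: ~6% of the original here
```

Because the dependence on channel width is quadratic, even a 4x width reduction yields well over a 10x parameter reduction, which is how the article's 8.9% figure becomes plausible.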


2019 ◽  
Vol 8 (4) ◽  
pp. 6159-6163 ◽  

According to the World Health Organization (WHO), over 1.3 million deaths occur worldwide each year due to traffic accidents alone, making traffic mishaps the eighth leading cause of death. According to another study by the United States National Highway Traffic Safety Administration (NHTSA), the major cause of road deaths and injuries is distracted driving. Motivated by recent advances in deep learning and computer vision for predicting driver behaviour, this paper investigates the optimal deep learning network architecture for accurately detecting distracted drivers from a visual feed. Specifically, a thorough evaluation and detailed benchmark comparison of pre-trained deep convolutional neural networks is carried out. Results indicate that the proposed VGG16 network architecture achieves 96% accuracy on the test dataset images.


2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial light intensity on the beam angle was obtained. A new deep convolutional neural network (CNN) based on the pre-trained GoogleNet was transfer-trained on this collection. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens, with its technical parameters, using this new CNN-based model.


Author(s):  
Liang Kim Meng ◽  
Azira Khalil ◽  
Muhamad Hanif Ahmad Nizar ◽  
Maryam Kamarun Nisham ◽  
Belinda Pingguan-Murphy ◽  
...  

Background: Bone Age Assessment (BAA) is a clinical procedure that identifies a discrepancy between the biological and chronological age of an individual by assessing bone growth. Currently, there are two main methods of performing BAA, known as the Greulich-Pyle and Tanner-Whitehouse techniques. Both involve a manual, qualitative assessment of hand and wrist radiographs, resulting in intra- and inter-operator variability and a time-consuming process. Automatic segmentation can be applied to the radiographs, providing the physician with a more accurate delineation of the carpal bones and accurate quantitative analysis. Methods: In this study, we propose an image feature extraction technique based on image segmentation with the fully convolutional neural network with eight-pixel stride (FCN-8). A total of 290 radiographic images of female and male subjects aged 0 to 18 were manually segmented and used to train FCN-8. Results and Conclusion: The results show a high training accuracy of 99.68% and a loss of 0.008619 over 50 epochs of training. The experiments compared 58 images against gold-standard ground-truth images. The accuracy of our fully automated segmentation technique is 0.78 ± 0.06, 1.56 ± 0.30 mm, and 98.02% in terms of Dice coefficient, Hausdorff distance, and overall qualitative carpal recognition accuracy, respectively.
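The two segmentation metrics reported above, the Dice coefficient and the Hausdorff distance, have compact definitions on binary masks. A minimal NumPy sketch (the toy rectangular masks below are illustrative, not carpal-bone data; the O(n²) pairwise distance is fine for small masks but not for full radiographs):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixels of two masks."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    # Pairwise Euclidean distances between every foreground pixel of a and b
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy ground-truth vs predicted masks: the prediction misses two rows
gt = np.zeros((32, 32), bool); gt[8:24, 8:24] = True
pred = np.zeros((32, 32), bool); pred[10:24, 8:24] = True

d_score = dice(gt, pred)       # overlap quality, 1.0 is perfect
h_dist = hausdorff(gt, pred)   # worst-case boundary error, in pixels
```

For a radiograph the Hausdorff distance would be converted from pixels to millimetres using the image's physical spacing, matching the "1.56 ± 0.30 mm" figure in the abstract.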
