Hyperspectral imaging and artificial intelligence to detect oral malignancy – part 1 - automated tissue classification of oral muscle, fat and mucosa using a light-weight 6-layer deep neural network

2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Daniel G. E. Thiem ◽  
Paul Römer ◽  
Matthias Gielisch ◽  
Bilal Al-Nawas ◽  
Martin Schlüter ◽  
...  

Abstract Background Hyperspectral imaging (HSI) is a promising non-contact approach to tissue diagnostics that generates large amounts of raw data, for whose processing computer vision (i.e. deep learning) is particularly well suited. The aim of this proof-of-principle study was to classify hyperspectral (HS) reflectance values into the human oral tissue types fat, muscle and mucosa using deep learning methods. Furthermore, the tissue-specific hyperspectral signatures collected will serve as a representative reference, in the sense of an HS library, for the future assessment of oral pathological changes. Methods A total of 316 samples of healthy human oral fat, muscle and mucosa were collected from 174 different patients and imaged with an HS camera covering the wavelength range from 500 nm to 1000 nm. The HS raw data were then labelled and processed for tissue classification using a light-weight 6-layer deep neural network (DNN). Results The reflectance values differed significantly (p < .001) between fat, muscle and oral mucosa at almost all wavelengths, with the signature of muscle differing the most. The deep neural network distinguished the tissue types with an accuracy of > 80% each. Conclusion Oral fat, muscle and mucosa can be classified sufficiently and automatically by their specific HS signatures using a deep learning approach. Early detection of premalignant mucosal lesions using hyperspectral imaging and deep learning is still rarely represented in the medical and computer vision research domains, but it has high potential and is the subject of subsequent studies.
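The paper's network code is not reproduced here; the following is a minimal sketch of a light-weight fully connected classifier over per-pixel reflectance spectra, assuming roughly 100 spectral bands between 500 nm and 1000 nm, three output classes (fat, muscle, mucosa), and an interpretation of "6-layer" as six dense layers including the softmax output. All layer widths, the band count and the training settings are illustrative assumptions rather than the authors' values.

```python
# Hypothetical sketch: light-weight 6-layer DNN over per-pixel hyperspectral
# reflectance vectors. Layer widths, band count and training settings are
# assumptions, not the paper's values.
import numpy as np
import tensorflow as tf

NUM_BANDS = 100   # assumed number of spectral bands between 500 nm and 1000 nm
NUM_CLASSES = 3   # fat, muscle, mucosa

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_BANDS,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # 6th dense layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for labelled reflectance spectra (replace with real HS pixels).
X = np.random.rand(1000, NUM_BANDS).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=1000)
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)
```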

Author(s):  
Parvathi R. ◽  
Pattabiraman V.

This chapter proposes a hybrid method for object classification based on a deep neural network and a similarity-based search algorithm. The objects are pre-processed under external conditions. After pre-processing, different deep learning networks are trained on the object dataset and their results are compared to find the best model, with accuracy improved using the features of the object images extracted from the feature-vector layer of a neural network. An RPForest (random projection forest) model is used to predict the approximate nearest images. ResNet50, InceptionV3, InceptionV4, and DenseNet169 models are trained on this dataset. A proposal for adaptive fine-tuning of the deep learning models, in which the RPForest model helps determine the number of layers to fine-tune, is also given; this experiment is conducted using the Xception model.
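The chapter's own implementation is not shown; below is a hedged sketch of the general pattern it describes, pairing a pretrained CNN's feature-vector layer with an approximate nearest-neighbour index built from random-projection trees. The Annoy library is used purely as a stand-in for the RPForest model, and the gallery images are dummy data.

```python
# Illustrative sketch (not the chapter's code): CNN feature extraction plus an
# approximate nearest-neighbour index built from random-projection trees.
# Annoy is used here as a stand-in for the RPForest model described above.
import numpy as np
import tensorflow as tf
from annoy import AnnoyIndex

# Pretrained ResNet50 without the classification head: its pooled output plays
# the role of the "feature vector layer".
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")
FEATURE_DIM = base.output_shape[-1]   # 2048 for ResNet50

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    x = tf.keras.applications.resnet50.preprocess_input(images.copy())
    return base.predict(x, verbose=0)

# Build the random-projection-tree index over the gallery features (dummy images here).
gallery = np.random.rand(500, 224, 224, 3).astype("float32") * 255
feats = extract_features(gallery)
index = AnnoyIndex(FEATURE_DIM, "angular")
for i, f in enumerate(feats):
    index.add_item(i, f)
index.build(50)   # number of trees: more trees -> better recall, larger index

# Query: find the approximate nearest gallery images for a new image.
query_feat = extract_features(gallery[:1])[0]
print("approximate nearest images:", index.get_nns_by_vector(query_feat, 5))
```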


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Umashankar Subramaniam ◽  
M. Monica Subashini ◽  
Dhafer Almakhles ◽  
Alagar Karthick ◽  
S. Manoharan

The proposed method introduces algorithms for preprocessing normal, COVID-19, and pneumonia X-ray lung images, which improves classification accuracy compared with raw (unprocessed) X-ray lung images. Preprocessing improves image quality and increases the intersection-over-union scores when segmenting the lungs from the X-ray images. The authors have implemented an efficient preprocessing and classification technique for respiratory disease detection. In the proposed method, the histogram of oriented gradients (HOG) algorithm, the Haar transform (Haar), and the local binary pattern (LBP) algorithm are applied to lung X-ray images to extract the best features and to segment the left and right lungs. Segmenting the lungs from the X-ray can improve the accuracy of COVID-19 detection algorithms or of any other machine/deep learning technique. The segmented lungs are validated with intersection-over-union scores to compare the algorithms. The preprocessed X-ray images yield better classification accuracy for all three classes (normal/COVID-19/pneumonia) than unprocessed raw images. VGGNet, AlexNet, ResNet, and the proposed deep neural network were implemented for the classification of respiratory diseases. Among these architectures, the proposed deep neural network outperformed the other models with better classification accuracy.
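The paper's preprocessing code is not reproduced here; the sketch below illustrates two of the named ingredients, HOG and LBP feature extraction, together with the intersection-over-union score used to validate lung masks (the Haar transform is omitted for brevity). All parameters and the dummy data are assumptions chosen for demonstration.

```python
# Illustrative sketch (not the paper's pipeline): HOG and LBP features from a
# chest X-ray and an intersection-over-union (IoU) score for a lung mask.
import numpy as np
from skimage.feature import hog, local_binary_pattern

def extract_hog_lbp(image):
    """image: 2-D grayscale X-ray with values in [0, 1]."""
    hog_features = hog(image, orientations=9, pixels_per_cell=(16, 16),
                       cells_per_block=(2, 2), feature_vector=True)
    lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_features, lbp_hist])

def iou(mask_pred, mask_true):
    """Intersection over union between two binary lung masks."""
    inter = np.logical_and(mask_pred, mask_true).sum()
    union = np.logical_or(mask_pred, mask_true).sum()
    return inter / union if union else 0.0

# Dummy data standing in for a preprocessed X-ray and two lung masks.
xray = np.random.rand(256, 256)
pred_mask = np.zeros((256, 256), bool); pred_mask[60:200, 40:120] = True
true_mask = np.zeros((256, 256), bool); true_mask[70:210, 45:125] = True
print("feature length:", extract_hog_lbp(xray).shape[0])
print("IoU:", round(iou(pred_mask, true_mask), 3))
```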


Deep learning has arrived with a great number of advances in machine learning research and its models, especially in fields such as NLP and computer vision. In standard supervised learning we must fix a dataset in advance, train our model completely on it, and then make predictions; if new data samples arrive that we want the model to handle, we have to retrain the model from scratch, which is computationally costly. To avoid retraining, the new samples can instead be learned on top of the features already learned by the pre-trained model, an approach called incremental learning. In this paper a system is proposed that overcomes catastrophic forgetting by introducing the concept of building on a pre-trained model.
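As a concrete illustration of building on a pre-trained model rather than retraining from scratch, here is a simplified transfer-learning-style sketch: the pretrained base is frozen so its previously learned features are preserved, and only a new head is trained on the newly arrived samples. The base network, class count and hyperparameters are assumptions, and the paper's actual mechanism for limiting catastrophic forgetting may differ.

```python
# Minimal sketch of building on a pre-trained model: the convolutional base is
# frozen and only a new head is trained on newly arrived samples.
# Architecture and class counts are illustrative assumptions.
import numpy as np
import tensorflow as tf

NEW_CLASSES = 5  # assumed number of classes in the newly arrived data

# Reuse previously learned features: frozen MobileNetV2 base.
base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                         pooling="avg", input_shape=(160, 160, 3))
base.trainable = False   # keep old knowledge fixed to limit catastrophic forgetting

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NEW_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Train only the new head on the incremental batch of samples (dummy data here).
x_new = np.random.rand(64, 160, 160, 3).astype("float32")
y_new = np.random.randint(0, NEW_CLASSES, size=64)
model.fit(x_new, y_new, epochs=3, batch_size=16)
```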


Author(s):  
Yasir Eltigani Ali Mustaf ◽  
Bashir Hassan Ismail

Diagnosis of diabetic retinopathy (DR) from colour fundus images requires experienced clinicians to determine the presence and significance of a large number of small features. This work proposes a novel deep learning framework for diabetic retinopathy named the Adapted Stacked Auto Encoder (ASAE-DNN), in which three hidden layers are used to extract features that are then classified with a softmax layer. The proposed model is evaluated on the Messidor dataset, comprising 800 training images and 150 test images. Accuracy, precision, recall and computation time are assessed for the outcomes of the proposed model. The results of these studies show that the ASAE-DNN model was 97% accurate.
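The ASAE-DNN implementation itself is not available here; the sketch below shows one plausible reading of "three hidden layers plus softmax classification": the three hidden layers are pre-trained jointly as an autoencoder (a simplification of greedy layer-wise stacking) and then reused as the encoder of a softmax classifier. The input dimensionality, layer widths and two-class label set are assumptions.

```python
# Hedged sketch of a stacked-autoencoder-style classifier with three hidden
# layers and a softmax output. All sizes and settings are illustrative.
import numpy as np
import tensorflow as tf

INPUT_DIM = 1024   # assumed length of a flattened/pre-extracted fundus feature vector
NUM_CLASSES = 2    # e.g. DR present vs. absent (assumption)

# Encoder: three hidden layers that learn a compressed representation.
inputs = tf.keras.Input(shape=(INPUT_DIM,))
h1 = tf.keras.layers.Dense(512, activation="relu")(inputs)
h2 = tf.keras.layers.Dense(256, activation="relu")(h1)
h3 = tf.keras.layers.Dense(128, activation="relu")(h2)

# Decoder used only for unsupervised pre-training (reconstruction).
decoded = tf.keras.layers.Dense(INPUT_DIM, activation="sigmoid")(h3)
autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(800, INPUT_DIM).astype("float32")  # dummy data
autoencoder.fit(x_train, x_train, epochs=5, batch_size=32, verbose=0)

# Classifier: reuse the pre-trained encoder and attach a softmax output.
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(h3)
classifier = tf.keras.Model(inputs, outputs)
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
y_train = np.random.randint(0, NUM_CLASSES, size=800)
classifier.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
```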


Author(s):  
Weston Upchurch ◽  
Alex Deakyne ◽  
David A. Ramirez ◽  
Paul A. Iaizzo

Abstract Acute compartment syndrome is a serious condition that requires urgent surgical treatment. While the current emergency treatment is straightforward — relieve intra-compartmental pressure via fasciotomy — the diagnosis is often a difficult one. A deep neural network is presented here that has been trained to detect whether isolated muscle bundles were exposed to hypoxic conditions and became ischemic.


Author(s):  
Shamik Tiwari

The classification of plants is one of the most important aims for botanists, since plants play a significant part in the natural life cycle. In this work, a leaf-based automatic plant classification framework is investigated. The aim is to compare two different deep learning approaches: a Deep Neural Network (DNN) and a deep Convolutional Neural Network (CNN). In the case of the deep neural network, hybrid shape and texture features are utilized as hand-crafted features, while in the case of the CNN, non-handcrafted features learned from the images are applied for classification. The offered frameworks are evaluated on a public leaf database. The simulation results confirm that the deep CNN-based framework demonstrates superior classification performance compared with the handcrafted-feature-based approach.
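To make the comparison concrete, here is a hedged sketch of the two pipelines: hand-crafted shape (Hu moments) and texture (LBP histogram) features feeding a small DNN, versus a small CNN learning features directly from raw leaf images. The specific descriptors, image size and class count are assumptions, not details taken from the paper.

```python
# Illustrative sketch of the two approaches compared above. All feature choices
# and layer sizes are assumptions chosen for demonstration.
import numpy as np
import tensorflow as tf
from skimage.feature import local_binary_pattern
from skimage.measure import moments_central, moments_normalized, moments_hu

NUM_SPECIES = 32  # assumed number of leaf classes

def handcrafted_features(gray_leaf):
    """Shape (Hu moments) + texture (LBP histogram) features from a grayscale leaf image."""
    mask = gray_leaf > gray_leaf.mean()                      # rough leaf silhouette
    hu = moments_hu(moments_normalized(moments_central(mask.astype(float))))  # 7 shape descriptors
    lbp = local_binary_pattern(gray_leaf, 8, 1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hu, hist])                        # 17-dim feature vector

# DNN on hand-crafted features.
dnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(17,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])

# CNN learning its own features from raw leaf images.
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])
```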


Author(s):  
Abaikesh Sharma

Human faces exhibit a wide variety of characteristics, which makes facial expressions difficult to analyze. Automated, real-time emotion recognition from facial expressions is a task in computer vision, and such a system is an important and interesting interface between humans and computers. In this investigation an environment is created that is capable of analyzing a person's emotions from real-time facial gestures with the help of a deep neural network. It can detect the facial expression in any image, real or animated, after facial-feature extraction (muscle position, eye expression and lip position). The system is set up to classify images of human faces into seven discrete emotion categories using Convolutional Neural Networks (CNNs). This type of environment is important for social interaction.
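The paper's network is not specified in detail here; the following is a minimal sketch of a CNN that classifies face crops into seven discrete emotion categories. The 48x48 grayscale input, the layer layout and the emotion label names follow a common FER-style setup and are assumptions rather than the authors' choices.

```python
# Hedged sketch of a CNN mapping face crops to seven emotion categories.
# Input resolution, layers and label names are assumptions (FER-style setup).
import tensorflow as tf

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),        # assumed grayscale face crop
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(len(EMOTIONS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```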


Author(s):  
David T. Wang ◽  
Brady Williamson ◽  
Thomas Eluvathingal ◽  
Bruce Mahoney ◽  
Jennifer Scheler

Author(s):  
P.L. Nikolaev

This article deals with a method for the binary classification of images containing short text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be turned 180 degrees so that the image must be rotated to read it. This type of text can be found on the covers of a variety of books, so when recognizing covers it is necessary to determine the direction of the text before recognizing it directly. The article proposes the development of a deep neural network for determining the text orientation in the context of book-cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
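As an illustration of the described setup, the sketch below generates synthetic strips of short text that are either upright or rotated 180 degrees and trains a small CNN to predict the orientation. The rendering details, image size and architecture are assumptions and do not reproduce the article's network.

```python
# Hedged sketch: synthetic upright/rotated text strips and a small CNN that
# predicts the orientation. Fonts, image size and architecture are assumptions.
import numpy as np
import tensorflow as tf
from PIL import Image, ImageDraw

def synth_sample(text="TITLE", flipped=False, size=(128, 32)):
    """Render text on a white strip; optionally rotate 180 degrees (label 1)."""
    img = Image.new("L", size, color=255)
    ImageDraw.Draw(img).text((5, 8), text, fill=0)
    if flipped:
        img = img.rotate(180)
    return np.asarray(img, dtype="float32") / 255.0

# Build a small synthetic dataset of upright (0) and flipped (1) text strips.
X = np.stack([synth_sample(flipped=bool(i % 2)) for i in range(200)])[..., None]
y = np.array([i % 2 for i in range(200)])

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 128, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # upright vs. rotated 180 degrees
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```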

