A Deep Learning Model to Classify and Detect Brain Abnormalities in Portable Microwave Based Imaging System

Author(s):  
Amran Hossain ◽  
Mohammad Tariqul Islam ◽  
Ali F. Almutairi

Abstract Automated classification and detection of brain abnormalities such as tumors in microwave head images is essential for investigating and monitoring disease progression. This paper presents the automatic classification and detection of human brain abnormalities in microwave head images using the deep learning-based YOLOv5 model. YOLOv5 is a fast object detection model that combines a computationally light architecture with high accuracy. First, backscattered signals are collected from the implemented 3D wideband nine-antenna-array-based microwave head imaging (MWHI) system, in which one antenna operates as a transmitter and the remaining eight operate as receivers. A tissue-mimicking head phantom with benign and malignant tumors as brain abnormalities was fabricated and used in the MWHI system. Afterwards, the modified delay-multiply-and-sum (M-DMAS) imaging algorithm is applied to the post-processed scattering parameters to reconstruct images of the head region at 640×640 pixels. Three hundred sample images, including benign and malignant tumors at various locations in the head region, are collected with the MWHI system. The images are then preprocessed and augmented to create a final dataset of 3600 images, which is used for training, validating, and testing the YOLOv5 model. Of these, 80% of the images are used for training and 20% for testing; from the training portion, a further 20% is held out for validation to avoid overfitting. The classification and detection performance on the various datasets is investigated with the YOLOv5s, YOLOv5m, and YOLOv5l variants of YOLOv5, of which YOLOv5l showed the best classification and detection results.
For the YOLOv5l model, the achieved training accuracy, validation loss, precision, recall, F1-score, training classification loss, validation classification loss, and mean average precision (mAP) are 99.84%, 9.38%, 93.20%, 94.80%, 94.01%, 0.004, 0.0133, and 96.20%, respectively, confirming the model's strong classification and detection accuracy. Finally, a testing dataset covering different scenarios is evaluated with the three YOLOv5 variants, which successfully classify the brain abnormalities and detect their locations. The deep model is therefore applicable to the portable MWHI system.
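The nested split described above (80% training, 20% testing, then 20% of the training portion held out for validation) can be sketched as follows; the function name and the fixed seed are illustrative, not from the paper:

```python
import random

def split_dataset(images, test_frac=0.2, val_frac=0.2, seed=42):
    """Shuffle, then carve off the test set, then hold out a
    validation subset from the remaining training portion."""
    rng = random.Random(seed)
    shuffled = images[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    test, trainval = shuffled[:n_test], shuffled[n_test:]
    n_val = int(len(trainval) * val_frac)
    val, train = trainval[:n_val], trainval[n_val:]
    return train, val, test

# With the paper's 3600 augmented images this yields
# 2304 training, 576 validation, and 720 testing images.
train, val, test = split_dataset(list(range(3600)))
```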

Author(s):  
Dima M. Alalharith ◽  
Hajar M. Alharthi ◽  
Wejdan M. Alghamdi ◽  
Yasmine M. Alsenbel ◽  
Nida Aslam ◽  
...  

Computer-based technologies play a central role in dentistry, as they offer many methods for diagnosing and detecting various diseases, such as periodontitis. The current study aimed to develop and evaluate state-of-the-art object detection and recognition techniques and deep learning algorithms for the automatic detection of periodontal disease in orthodontic patients using intraoral images. In this study, a total of 134 intraoral images were divided into a training dataset (n = 107 [80%]) and a test dataset (n = 27 [20%]). Two Faster Region-based Convolutional Neural Network (R-CNN) models using a ResNet-50 Convolutional Neural Network (CNN) were developed. The first model detects the teeth to locate the region of interest (ROI), while the second model detects gingival inflammation. The detection accuracy, precision, recall, and mean average precision (mAP) were calculated to verify the significance of the proposed model. The teeth detection model achieved an accuracy, precision, recall, and mAP of 100%, 100%, 51.85%, and 100%, respectively. The inflammation detection model achieved an accuracy, precision, recall, and mAP of 77.12%, 88.02%, 41.75%, and 68.19%, respectively. This study proved the viability of deep learning models for the detection and diagnosis of gingivitis in intraoral images, highlighting their potential usability in dentistry and in reducing the severity of periodontal disease globally through preemptive non-invasive diagnosis.


2021 ◽  
Vol 8 ◽  
Author(s):  
Mohamed Elgendi ◽  
Muhammad Umer Nasir ◽  
Qunfeng Tang ◽  
David Smith ◽  
John-Paul Grenier ◽  
...  

Chest X-ray imaging technology used for the early detection and screening of COVID-19 pneumonia is both accessible worldwide and affordable compared to other non-invasive technologies. Additionally, deep learning methods have recently shown remarkable results in detecting COVID-19 on chest X-rays, making it a promising screening technology for COVID-19. Deep learning relies on a large amount of data to avoid overfitting. While overfitting can result in perfect modeling on the original training dataset, on a new testing dataset it can fail to achieve high accuracy. In the image processing field, an image augmentation step (i.e., adding more training data) is often used to reduce overfitting on the training dataset and improve prediction accuracy on the testing dataset. In this paper, we examined the impact of geometric augmentations as implemented in several recent publications for detecting COVID-19. We compared the performance of 17 deep learning algorithms with and without different geometric augmentations. We empirically examined the influence of augmentation with respect to detection accuracy, dataset diversity, augmentation methodology, and network size. Contrary to expectation, our results show that removing the recently used geometric augmentation steps actually improved the Matthews correlation coefficient (MCC) of the 17 models. The MCC without augmentation (MCC = 0.51) outperformed four recent geometric augmentations (MCC = 0.47 for Data Augmentation 1, MCC = 0.44 for Data Augmentation 2, MCC = 0.48 for Data Augmentation 3, and MCC = 0.49 for Data Augmentation 4). When we retrained a recently published deep learning model without augmentation on the same dataset, the detection accuracy significantly increased, with a McNemar's χ² statistic of 163.2 and a p-value of 2.23 × 10⁻³⁷. This is an interesting finding that may improve current deep learning algorithms using geometric augmentations for detecting COVID-19.
We also provide clinical perspectives on geometric augmentation to consider when developing a robust COVID-19 X-ray-based detector.
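The comparison above rests on two statistics: the MCC, computed from the confusion matrix, and McNemar's test on the two models' disagreements. A minimal pure-Python sketch; the continuity-corrected form of McNemar's statistic is an assumption, since the paper does not state which variant was used:

```python
from math import sqrt

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts;
    ranges from -1 to +1, with 0 meaning no better than chance."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def mcnemar_chi2(b, c):
    """McNemar's chi-squared statistic with continuity correction.
    b = cases only the first model classified correctly,
    c = cases only the second model classified correctly."""
    return (abs(b - c) - 1) ** 2 / (b + c) if (b + c) else 0.0
```

A large χ² (such as the 163.2 reported above) on 1 degree of freedom indicates the two models' error patterns differ far beyond chance.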


2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to quickly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID pneumonia and non-pneumonia diseases on CT. METHODS A simple 2D deep learning framework, named fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia based on a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into a training and a testing set at a ratio of 8:2. On the test dataset, the diagnostic performance in diagnosing COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an additional external testing dataset extracted from the embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers.
RESULTS Of the four pre-trained models of FCONet, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the other FCONet models based on VGG16, Xception, and InceptionV3.
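The sensitivity, specificity, and accuracy reported above all follow directly from confusion-matrix counts; a minimal sketch with illustrative counts (not the study's actual numbers):

```python
def binary_metrics(tp, tn, fp, fn):
    """Sensitivity (recall on positives), specificity (recall on
    negatives), and overall accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts: 95 true positives, 5 false negatives,
# 90 true negatives, 0 false positives.
sens, spec, acc = binary_metrics(tp=95, tn=90, fp=0, fn=5)
```

Note that specificity reaches 100% only when there are no false positives, as in the ResNet50 result above.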


Author(s):  
S. Su ◽  
T. Nawata ◽  
T. Fuse

Abstract. Automatic building change detection has become a topical issue owing to its wide range of applications, such as updating building maps. However, accurate building change detection remains challenging, particularly in urban areas. Thus far, there has been limited research on the use of the outdated building map (the building map before the update, referred to herein as the old-map) to increase the accuracy of building change detection. This paper presents a novel deep-learning-based method for building change detection using bitemporal aerial images containing RGB bands, bitemporal digital surface models (DSMs), and an old-map. The aerial images have two spatial resolutions, 12.5 cm or 16 cm, and the cell size of the DSMs is 50 cm × 50 cm. The bitemporal aerial images, the height variations calculated as the differences between the bitemporal DSMs, and the old-map were fed into a network architecture to build an automatic building change detection model. The performance of the model was quantitatively and qualitatively evaluated for an urban area covering approximately 10 km2 and containing over 21,000 buildings. The results indicate that it detects building changes more accurately than methods using inputs such as i) bitemporal aerial images only, ii) bitemporal aerial images and bitemporal DSMs, and iii) bitemporal aerial images and an old-map. The proposed method achieved recall rates of 89.3%, 88.8%, and 99.5% for new, demolished, and other buildings, respectively. The results also demonstrate that the old-map is an effective data source for increasing building change detection accuracy.
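The height variations fed to the network are the per-cell differences between the bitemporal DSM grids; a minimal sketch, with plain nested lists standing in for the 50 cm × 50 cm raster cells:

```python
def height_variation(dsm_old, dsm_new):
    """Per-cell height change (metres) between two DSM grids of the
    same shape. Positive values suggest new construction; negative
    values suggest demolition."""
    return [[new - old for old, new in zip(row_old, row_new)]
            for row_old, row_new in zip(dsm_old, dsm_new)]

# Two 1x2 toy grids: one cell gains 3 m of height, one loses 0.5 m.
diff = height_variation([[1.0, 2.0]], [[4.0, 1.5]])
```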


2021 ◽  
Author(s):  
Sung Hyun Noh ◽  
Chansik An ◽  
Dain Kim ◽  
Seung Hyun Lee ◽  
Min-Yung Chang ◽  
...  

Abstract Background A computer algorithm that automatically detects sacroiliac joint abnormalities on plain radiographs would help radiologists avoid missing sacroiliitis. This study aimed to develop and validate a deep learning model to detect and diagnose sacroiliitis on plain radiographs in young patients with low back pain. Methods This Institutional Review Board-approved retrospective study included 478 and 468 plain radiographs from 241 and 433 young (< 40 years) patients who complained of low back pain with and without ankylosing spondylitis, respectively. They were randomly split into training and test datasets at a ratio of 8:2. Radiologists reviewed the images, labeled the coordinates of a bounding box, and determined the presence or absence of sacroiliitis for each sacroiliac joint. We fine-tuned and optimized the EfficientDet-D4 object detection model, pre-trained on the COCO 2017 dataset, on the training dataset and validated the final model on the test dataset. Results The mean average precision, an evaluation metric for object detection accuracy, was 0.918 at 0.5 intersection over union. In the diagnosis of sacroiliitis, the area under the curve, sensitivity, specificity, accuracy, and F1-score were 0.932 (95% confidence interval, 0.903–0.961), 96.9% (92.9–99.0), 86.8% (81.5–90.9), 91.1% (87.7–93.7), and 90.2% (85.0–93.9), respectively. Conclusions EfficientDet, a deep learning-based object detection algorithm, could be used to automatically diagnose sacroiliitis on plain radiographs.
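The mAP figure above is evaluated at 0.5 intersection over union (IoU); a minimal sketch of the IoU computation underlying that threshold (corner-format boxes are an assumption, since the labeling format is not specified):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when disjoint.
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# A predicted joint box counts as a true positive at the threshold
# used above when iou(pred, truth) >= 0.5.
```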


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yiran Feng ◽  
Xueheng Tao ◽  
Eung-Joo Lee

In view of the current absence of any deep learning algorithm for shellfish identification in real contexts, an improved Faster R-CNN-based detection algorithm is proposed in this paper. It achieves multi-object recognition and localization through a second-order detection network and replaces the original feature extraction module with DenseNet, which can fuse multilevel feature information, increase network depth, and avoid vanishing network gradients. Meanwhile, the proposal merging strategy is improved with Soft-NMS, in which an attenuation function is designed to replace the conventional NMS algorithm, thereby avoiding missed detection of adjacent or overlapping objects and enhancing detection accuracy under multiple objects. By constructing a real-context shellfish dataset and conducting experimental tests on the production line of a vision-based seafood sorting robot, we were able to detect the features of shellfish in different scenarios, and the detection accuracy improved by nearly 4% compared to the original detection model. This provides favorable technical support for future quality sorting of seafood using the improved Faster R-CNN-based approach.
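Soft-NMS, as described above, replaces hard suppression with a score-attenuation function: where classic NMS discards any proposal whose overlap with a higher-scoring box exceeds a threshold, Soft-NMS only decays its confidence, so adjacent or overlapping shellfish survive. A minimal sketch of the common Gaussian variant; the paper does not specify its exact attenuation function or σ, so both are assumptions here:

```python
from math import exp

def soft_nms_decay(score, iou_val, sigma=0.5):
    """Gaussian Soft-NMS attenuation: decay a proposal's confidence
    smoothly with its overlap (IoU) against a higher-scoring box,
    instead of discarding it outright as classic NMS does."""
    return score * exp(-(iou_val ** 2) / sigma)

# No overlap: the score is unchanged. Heavy overlap: the score is
# strongly decayed, but the box remains in the candidate pool.
untouched = soft_nms_decay(0.9, 0.0)
decayed = soft_nms_decay(0.9, 0.8)
```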


2021 ◽  
Vol 14 (1) ◽  
pp. 106
Author(s):  
Cheng Chen ◽  
Sindhu Chandra ◽  
Yufan Han ◽  
Hyungjoon Seo

Automatic damage detection using deep learning warrants an extensive data source that captures complex pavement conditions. This paper proposes a thermal-RGB fusion image-based pavement damage detection model, wherein the fused RGB-thermal image is formed from multi-source sensor information to achieve fast and accurate defect detection, including under complex pavement conditions. The proposed method uses pre-trained EfficientNet B4 as the backbone architecture and generates an augmented dataset (covering non-uniform illumination, camera noise, and varying thermal-image scales) to achieve high pavement damage detection accuracy. The performance of different input data (RGB, thermal, MSX, and fused images) is tested separately to assess the influence of the input data and network on the detection results. The results proved that the fused image's damage detection accuracy can reach 98.34%, and with the augmented dataset the detection model proves more stable, achieving 98.35% precision, 98.34% recall, and 98.34% F1-score.


Although various stand-alone educational assistance applications exist for typically developing children, children with special needs (anxiety disorder, ADHD, learning disabilities) remain underserved: they find it difficult to learn for long hours without getting distracted. A caretaker needs to be with them at all times in order to keep them engaged in studying efficiently. Using deep learning, such children can be monitored for distraction, and their attention can be drawn back by imposing deliberate distractions on the screen, based on face recognition (in terms of facial expressions). The work has been implemented using the Python and OpenCV platforms. The scanned image, i.e., the testing dataset, is compared with the training dataset, and the emotion is thereby predicted for use by the assisting component.


2020 ◽  
Author(s):  
Zongfeng Yang ◽  
Shang Gao ◽  
Feng Xiao ◽  
Ganghua Li ◽  
Yangfeng Ding ◽  
...  

Abstract Background: Identification and characterization of new traits with a sound physiological foundation is essential for crop breeding and management. Deep learning has been widely used in image data analysis to explore spatial and temporal information on crop growth and development, thus strengthening the power of identifying physiological traits. This study aims to develop a novel trait that indicates the source and sink relation in japonica rice based on deep learning. Results: We applied a deep learning approach to accurately segment leaf and panicle, and subsequently developed the GvCrop procedure to calculate the leaf-to-panicle ratio (LPR) of rice populations during grain filling. Images of the training dataset were captured in field experiments, with large variations in camera shooting angle, the elevation and azimuth angles of the sun, rice genotype, and plant phenological stage. All panicle and leaf regions were manually annotated, and the resulting dataset was used to train FPN-Mask (Feature Pyramid Network Mask) models, consisting of a backbone network and a task-specific sub-network. The model with the highest accuracy was then selected to study the variations in LPR among 192 rice germplasms and among agronomic practices. Despite the challenging field conditions, the FPN-Mask models achieved high detection accuracy, with a Pixel Accuracy of 0.99 for panicles and 0.98 for leaves. The calculated LPRs showed large spatial and temporal variations as well as genotypic differences. Conclusion: Deep learning techniques can achieve high accuracy in simultaneously detecting panicle and leaf data from complex rice field images. The proposed FPN-Mask model is applicable for detecting and quantifying crop performance under field conditions. The newly identified trait of LPR should provide a high-throughput protocol for breeders to select superior rice cultivars as well as for agronomists to precisely manage field crops toward a good balance of source and sink.
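Pixel Accuracy, the segmentation metric reported above, is simply the fraction of pixels assigned the correct class; a minimal sketch over toy masks (2D lists of class ids stand in for the segmentation output):

```python
def pixel_accuracy(pred_mask, true_mask):
    """Fraction of pixels whose predicted class id matches the
    ground-truth label; masks must have the same shape."""
    total = correct = 0
    for pred_row, true_row in zip(pred_mask, true_mask):
        for p, t in zip(pred_row, true_row):
            total += 1
            correct += (p == t)
    return correct / total

# Toy 2x2 masks: three of four pixels agree.
acc = pixel_accuracy([[1, 1], [0, 1]], [[1, 0], [0, 1]])
```

Note that Pixel Accuracy alone can look optimistic when one class (e.g. background) dominates the image, which is why it is often read alongside per-class figures such as those given for panicles and leaves above.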


2021 ◽  
pp. bjophthalmol-2020-318107
Author(s):  
Kenichi Nakahara ◽  
Ryo Asaoka ◽  
Masaki Tanito ◽  
Naoto Shibata ◽  
Keita Mitsuhashi ◽  
...  

Background/aims To validate a deep learning algorithm to diagnose glaucoma from fundus photography obtained with a smartphone. Methods A training dataset consisting of 1364 colour fundus photographs with glaucomatous indications and 1768 colour fundus photographs without glaucomatous features was obtained using an ordinary fundus camera. The testing dataset consisted of 73 eyes of 73 patients with glaucoma and 89 eyes of 89 normative subjects. In the testing dataset, fundus photographs were acquired using both an ordinary fundus camera and a smartphone. A deep learning algorithm was developed to diagnose glaucoma using the training dataset. The trained neural network was evaluated by its predictions of glaucoma or normal status on the test datasets, using images from both cameras. Diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AROC). Results The AROC was 98.9% with a fundus camera and 84.2% with a smartphone. When validated only in eyes with advanced glaucoma (mean deviation value < −12 dB, N=26), the AROC was 99.3% with a fundus camera and 90.0% with a smartphone. There were significant differences between the AROC values obtained with the different cameras. Conclusion The usefulness of a deep learning algorithm to automatically screen for glaucoma from smartphone-based fundus photographs was validated. The algorithm had considerably high diagnostic ability, particularly in eyes with advanced glaucoma.
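The AROC reported above can be computed as the probability that a randomly chosen glaucomatous eye receives a higher algorithm score than a randomly chosen normal eye (the Mann-Whitney formulation of the area under the ROC curve); a minimal sketch with illustrative scores:

```python
def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the fraction of (positive, negative) pairs in which the positive
    case scores higher, counting ties as half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))

# Perfect separation of glaucoma vs. normal scores gives AUC = 1.0;
# indistinguishable scores give 0.5 (chance level).
perfect = roc_auc([0.9, 0.8], [0.1, 0.2])
chance = roc_auc([0.5], [0.5])
```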

