Classification of Lung Disease in Children by Using Lung Ultrasound Images and Deep Convolutional Neural Network

2021 ◽  
Vol 12 ◽  
Author(s):  
Silvia Magrelli ◽  
Piero Valentini ◽  
Cristina De Rose ◽  
Rosa Morello ◽  
Danilo Buonsenso

Bronchiolitis is the most common cause of hospitalization of children in the first year of life, and pneumonia is the leading cause of infant mortality worldwide. Lung ultrasound (LUS) is a novel imaging diagnostic tool for the early detection of respiratory distress and offers several advantages due to its low cost, relative safety, portability, and easy repeatability. More precise and efficient diagnostic and therapeutic strategies are needed. Deep-learning-based computer-aided diagnosis (CADx) systems using chest X-ray images have recently demonstrated their potential as screening tools for pulmonary disease (such as COVID-19 pneumonia). We present the first computer-aided diagnostic scheme for LUS images of pulmonary diseases in children. In this study, we trained from scratch four state-of-the-art deep-learning models (VGG19, Xception, Inception-v3 and Inception-ResNet-v2) for detecting children with bronchiolitis and pneumonia. In our experiments we used a data set consisting of 5,907 images from 33 healthy infants, 3,286 images from 22 infants with bronchiolitis, and 4,769 images from 7 children suffering from bacterial pneumonia. Using four-fold cross-validation, we implemented one binary classification (healthy vs. bronchiolitis) and one three-class classification (healthy vs. bronchiolitis vs. bacterial pneumonia). Affine transformations were applied for data augmentation. Hyperparameters were optimized for the learning rate, dropout regularization, batch size, and number of epochs. On the test sets, the Inception-ResNet-v2 model provided the highest classification performance for healthy vs. bronchiolitis, with 97.75% accuracy, 97.75% sensitivity, and 97% specificity, whereas for healthy vs. bronchiolitis vs. bacterial pneumonia the Inception-v3 model provided the best results, with 91.5% accuracy, 91.5% sensitivity, and 95.86% specificity. We performed gradient-weighted class activation mapping (Grad-CAM) visualization, and the results were qualitatively evaluated by a pediatrician expert in LUS imaging: the heatmaps highlight areas containing diagnostically relevant LUS imaging artifacts, e.g., A-lines, B-lines, pleural lines, and consolidations. These complex patterns are learnt automatically from the data, thus avoiding the use of hand-crafted features. The proposed framework might aid in the development of an accessible and rapid decision-support method for diagnosing pulmonary diseases in children from LUS imaging.
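As an illustration of the training setup described in the abstract, the following is a minimal Keras/TensorFlow sketch of training Inception-ResNet-v2 from scratch with affine data augmentation. The image size, directory layout, dropout rate, learning rate, and epoch count are assumptions for illustration only, and a simple hold-out split stands in for the study's four-fold cross-validation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Affine augmentation (rotation, shift, shear, zoom); the values are illustrative assumptions.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    zoom_range=0.1,
    validation_split=0.2,
)

# Hypothetical directory layout: lus_images/{healthy,bronchiolitis}/*.png
train = datagen.flow_from_directory("lus_images", target_size=(299, 299),
                                    class_mode="categorical", subset="training")
val = datagen.flow_from_directory("lus_images", target_size=(299, 299),
                                  class_mode="categorical", subset="validation")

# Inception-ResNet-v2 trained from scratch (weights=None), as stated in the abstract.
base = InceptionResNetV2(weights=None, include_top=False, input_shape=(299, 299, 3))
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.5)(x)                        # dropout regularization (rate assumed)
outputs = layers.Dense(2, activation="softmax")(x)  # healthy vs. bronchiolitis
model = models.Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train, validation_data=val, epochs=50)    # epoch count assumed
```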

2020 ◽  
Vol 17 (3) ◽  
pp. 299-305 ◽  
Author(s):  
Riaz Ahmad ◽  
Saeeda Naz ◽  
Muhammad Afzal ◽  
Sheikh Rashid ◽  
Marcus Liwicki ◽  
...  

This paper presents a deep learning benchmark on a complex dataset known as KFUPM Handwritten Arabic TexT (KHATT). The KHATT dataset consists of complex patterns of handwritten Arabic text-lines. This paper contributes in three main aspects: (1) pre-processing, (2) a deep-learning-based approach, and (3) data augmentation. The pre-processing step includes removing extra white space and de-skewing the skewed text-lines. We deploy a deep learning approach based on Multi-Dimensional Long Short-Term Memory (MDLSTM) networks and Connectionist Temporal Classification (CTC). The MDLSTM has the advantage of scanning the Arabic text-lines in all directions (horizontal and vertical) to cover dots, diacritics, strokes and fine inflections. Combining data augmentation with the deep learning approach yields a promising improvement in results, raising the Character Recognition (CR) rate from the 75.08% baseline to 80.02%.
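To make the recognition pipeline concrete, here is a simplified sketch of a text-line recognizer trained with CTC. A bidirectional LSTM is used as a stand-in for the MDLSTM layers described above (MDLSTM is not available in standard Keras), and the line-image size and character-set size are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

img_height, img_width = 48, 512      # assumed size of a normalized text-line image
num_classes = 120                    # assumed Arabic character-set size (excluding CTC blank)

inputs = layers.Input(shape=(img_height, img_width, 1), name="line_image")
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D((2, 2))(x)
# Collapse the height axis so that each image column becomes one time step.
x = layers.Permute((2, 1, 3))(x)                                  # (width, height, channels)
x = layers.Reshape((img_width // 4, (img_height // 4) * 64))(x)   # (time steps, features)
# Bidirectional LSTM stands in for the MDLSTM scan over the text-line.
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
logits = layers.Dense(num_classes + 1, activation="softmax")(x)   # +1 for the CTC blank
model = models.Model(inputs, logits)

# CTC loss over the per-column predictions (sketch; label tensors are assumed dense).
def ctc_loss(y_true, y_pred, input_length, label_length):
    return tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length)
```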


2022 ◽  
Author(s):  
Nabeel Durrani ◽  
Damjan Vukovic ◽  
Maria Antico ◽  
Jeroen van der Burgt ◽  
Ruud JG van Sloun ◽  
...  

Our automated deep-learning-based approach identifies consolidation/collapse in LUS images to aid in the diagnosis of late stages of COVID-19 induced pneumonia, where consolidation/collapse is one of the possible associated pathologies. A common challenge in training such models is that annotating each frame of an ultrasound video requires high labelling effort. This effort in practice becomes prohibitive for large ultrasound datasets. To understand the impact of various degrees of labelling precision, we compare labelling strategies to train fully supervised models (frame-based method, higher labelling effort) and inaccurately supervised models (video-based methods, lower labelling effort), both of which yield binary predictions for LUS videos on a frame-by-frame level. We moreover introduce a novel sampled quaternary method which randomly samples only 10% of the LUS video frames and subsequently assigns (ordinal) categorical labels to all frames in the video based on the fraction of positively annotated samples. This method outperformed the inaccurately supervised video-based method of our previous work on pleural effusions. More surprisingly, despite being a form of inaccurate learning, this method also outperformed the supervised frame-based approach with respect to metrics such as precision-recall area under the curve (PR-AUC) and F1 score, which are suitable for the class-imbalance scenario of our dataset. This may be due to the combination of a significantly smaller dataset size compared to our previous work and the higher complexity of consolidation/collapse compared to pleural effusion, two factors which contribute to label noise and overfitting; specifically, we argue that our video-based method is more robust with respect to label noise and mitigates overfitting in a manner similar to label smoothing. Using clinical expert feedback, separate criteria were developed to exclude data from the training and test sets respectively for our ten-fold cross-validation, which resulted in a PR-AUC score of 73% and an accuracy of 89%. While the efficacy of our classifier using the sampled quaternary method must be verified on a larger consolidation/collapse dataset, when considering the complexity of the pathology, our proposed classifier using the sampled quaternary video-based method is clinically comparable with trained experts and improves over the video-based method of our previous work on pleural effusions.
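The following is a minimal sketch of how the sampled quaternary labelling described above could work: only a fraction of frames is annotated, and one ordinal label derived from the fraction of positive samples is propagated to every frame of the video. The four bins and their thresholds are assumptions for illustration; the paper defines its own categories and annotation procedure.

```python
import numpy as np

def sampled_quaternary_label(frame_annotation_fn, n_frames, sample_frac=0.1, rng=None):
    """Assign one ordinal video-level label from a small sample of annotated frames.

    frame_annotation_fn: callable returning 1 (pathology present) or 0 for a frame
    index; it stands in for the manual annotation step. The four ordinal bins below
    are an illustrative assumption, not the paper's exact definition.
    """
    rng = rng or np.random.default_rng()
    n_sampled = max(1, int(round(sample_frac * n_frames)))
    sampled = rng.choice(n_frames, size=n_sampled, replace=False)
    positive_fraction = np.mean([frame_annotation_fn(i) for i in sampled])

    # Map the positive fraction to one of four ordinal categories (quaternary label).
    if positive_fraction == 0.0:
        label = 0          # no consolidation/collapse observed in the sample
    elif positive_fraction < 0.5:
        label = 1
    elif positive_fraction < 1.0:
        label = 2
    else:
        label = 3          # every sampled frame annotated positive
    return label           # assigned to all frames of the video
```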


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
BinBin Zhang ◽  
Fumin Zhang ◽  
Xinghua Qu

Purpose
Laser-based measurement techniques offer various advantages over conventional measurement techniques, such as being non-destructive and non-contact, fast, and having a long measuring distance. In cooperative laser ranging systems, it is crucial to extract the center coordinates of retroreflectors to accomplish automatic measurement. To solve this problem, this paper aims to propose a novel method.

Design/methodology/approach
We propose a method using Mask R-CNN (Region-based Convolutional Neural Network), with ResNet101 (Residual Network 101) and an FPN (Feature Pyramid Network) as the backbone, to localize retroreflectors and realize automatic recognition against different backgrounds. Experiments show that the recognition rate of Mask R-CNN is better than that of two other deep-learning algorithms, especially for small-scale targets. Based on this, an ellipse detection algorithm is introduced to obtain the ellipses of retroreflectors from the recognized target areas. The center coordinates of the retroreflectors in the camera coordinate system are then obtained by a mathematical method.

Findings
To verify the accuracy of this method, an experiment was carried out: the distance between two retroreflectors with a known separation of 1,000.109 mm was measured, with a root-mean-square error of 2.596 mm, meeting the requirements for the coarse location of retroreflectors.

Research limitations/implications
The research limitations/implications are as follows: (i) as the data set only has 200 pictures, although we used data augmentation methods such as rotation, mirroring and cropping, there is still room for improvement in the generalization ability of detection; (ii) the ellipse detection algorithm needs to work in relatively dark conditions, as the retroreflector is made of stainless steel, which easily reflects light.

Originality/value
The originality/value of the article lies in being able to obtain the center coordinates of multiple retroreflectors automatically, even against a cluttered background; being able to recognize retroreflectors of different sizes, especially small targets; meeting the recognition requirement of multiple targets in a large field of view; and obtaining 3D centers of targets by monocular model-based vision.
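As a concrete illustration of the ellipse-fitting stage, the sketch below fits an ellipse inside a detected region with OpenCV and returns its centre in image coordinates. The bounding-box input and the Otsu thresholding choice are assumptions for illustration, and the projection to camera coordinates mentioned in the abstract is not shown.

```python
import cv2

def retroreflector_center(image_bgr, bbox):
    """Fit an ellipse inside a detected region and return its centre (a sketch).

    bbox: (x, y, w, h) region proposed by the detector (e.g. Mask R-CNN).
    The thresholding step is an illustrative assumption, not the paper's method.
    Requires OpenCV 4 (findContours returns two values).
    """
    x, y, w, h = bbox
    roi = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    best = max(contours, key=cv2.contourArea)
    if len(best) < 5:                        # cv2.fitEllipse needs at least 5 points
        return None
    (cx, cy), axes, angle = cv2.fitEllipse(best)
    return (x + cx, y + cy)                  # centre in full-image pixel coordinates
```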


2019 ◽  
Vol 64 (23) ◽  
pp. 235013 ◽  
Author(s):  
Hiroki Tanaka ◽  
Shih-Wei Chiu ◽  
Takanori Watanabe ◽  
Setsuko Kaoku ◽  
Takuhiro Yamaguchi

2020 ◽  
Vol 47 (9) ◽  
pp. 3952-3960 ◽  
Author(s):  
Chao Sun ◽  
Yukang Zhang ◽  
Qing Chang ◽  
Tianjiao Liu ◽  
Shaohang Zhang ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-13 ◽  
Author(s):  
Suxia Cui ◽  
Yu Zhou ◽  
Yonghui Wang ◽  
Lujun Zhai

Recently, human curiosity has expanded from the land to the sky and the sea. Besides sending people to explore the ocean and outer space, robots are designed for tasks that are dangerous for living creatures. Take ocean exploration as an example: there are many projects and competitions on the design of Autonomous Underwater Vehicles (AUVs), which have attracted wide interest. The authors of this article learned the necessity of a platform upgrade from a previous AUV design project and would like to share the experience of one task extension in the area of fish detection. Most embedded systems have been improved by fast-growing computing and sensing technologies, which makes it possible for them to incorporate more and more complicated algorithms. In an AUV, after acquiring surrounding information from sensors, one of the challenges is how to perceive and analyse this information for better judgement. The processing procedure can mimic human learning routines. An advanced system with more computing power can support deep learning, which exploits many neural network algorithms to simulate the human brain. In this paper, a convolutional neural network (CNN) based fish detection method is proposed. The training data set was collected from the Gulf of Mexico by a digital camera. To fit this unique need, three optimization approaches were applied to the CNN: data augmentation, network simplification, and training process speed-up. Data augmentation transformations provided more learning samples; the network was simplified to reduce its complexity; and the training process speed-up was introduced to make training more time efficient. Experimental results showed that the proposed model is promising and has the potential to be extended to other underwater objects.
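To illustrate the network-simplification idea, here is a sketch of a deliberately small Keras CNN of the kind an embedded AUV platform could plausibly run. The layer sizes, input resolution, and binary fish/no-fish output are illustrative assumptions, not the network used in the article.

```python
from tensorflow.keras import layers, models

def build_small_fish_detector(input_shape=(64, 64, 3)):
    """A compact CNN sketch for embedded deployment (sizes are assumptions)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # fish / no fish
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_small_fish_detector()
model.summary()   # small parameter count keeps inference feasible on embedded hardware
```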


2021 ◽  
Vol 4 (2) ◽  
pp. 139-143
Author(s):  
Abdullah Ajmal ◽  
Sundas Ibrar ◽  
Wakeel Ahmad ◽  
Syed Muhammad Adnan Shah

Abstract— The novel coronavirus disease, generally known as COVID-19, first appeared in Wuhan, China, in December 2019, spread quickly around the world, and became a pandemic. It has had an overwhelming effect on daily life, public health, and the global economy. Many people have been affected and have died. It is critical to control and prevent the spread of COVID-19 by applying quick alternative diagnostic techniques. As COVID-19 cases rise day by day around the world, the on-time diagnosis of COVID-19 patients becomes an increasingly long and difficult process. COVID-19 test kits are costly and not available to every individual in poor countries. For this reason, screening patients with established techniques such as chest X-ray imaging seems to be an effective method. This study applied deep-learning data augmentation to a publicly available data set and trained advanced CNN models on it. The proposed model was tested using state-of-the-art evaluation measures and obtained good results. The COVID-19 images used by our model are available at https://github.com/ieee8023/covid-chestxray-dataset and the non-COVID-19 images are available at https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia. The maximum accuracy achieved in validation was 96.67%. Our COVID-19 detection model achieved an average F-measure of 98% and an area under the curve (AUC) of 99%. The results demonstrate that deep learning is an effective and easily deployable approach for COVID-19 detection.
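For clarity on the reported evaluation measures, the snippet below shows how accuracy, F-measure, and AUC can be computed with scikit-learn. The labels and scores are placeholders for illustration only, not the study's predictions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Placeholder ground truth and classifier scores (illustrative only).
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.92, 0.08, 0.85, 0.60, 0.30, 0.12, 0.76, 0.45])
y_pred  = (y_score >= 0.5).astype(int)       # threshold at 0.5 (assumed)

print("accuracy :", accuracy_score(y_true, y_pred))
print("F-measure:", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_score))
```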


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Dominik Müller ◽  
Frank Kramer

Abstract
Background
The increased availability and usage of modern medical imaging has induced a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the required functionalities for the plain setup of medical image segmentation pipelines. Already implemented pipelines are commonly standalone software, optimized on a specific public data set. Therefore, this paper introduces the open-source Python library MIScnn.

Implementation
The aim of MIScnn is to provide an intuitive API allowing fast building of medical image segmentation pipelines, including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library of state-of-the-art deep learning models, and model utilization such as training, prediction, and fully automatic evaluation (e.g. cross-validation). Similarly, high configurability and multiple open interfaces allow full pipeline customization.

Results
Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model.

Conclusions
With this experiment, we could show that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline using just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.
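A minimal sketch of such a pipeline setup, following the quick-start pattern shown in the MIScnn documentation; the exact import paths, class names, and arguments may differ between MIScnn versions, and the data path, patch size, and epoch count are assumptions.

```python
# Sketch only: names and arguments follow the MIScnn quick-start pattern,
# but may differ between library versions.
from miscnn.data_loading.interfaces import NIFTI_interface
from miscnn import Data_IO, Preprocessor, Neural_Network

# Data I/O for NIfTI CT scans with three segmentation classes (assumed layout and path).
interface = NIFTI_interface(pattern="case_00[0-9]*", channels=1, classes=3)
data_io = Data_IO(interface, "/path/to/kits19/data")        # placeholder path

# Preprocessor configuring batching and patch-wise analysis (sizes assumed).
pp = Preprocessor(data_io, batch_size=2, analysis="patchwise-crop",
                  patch_shape=(80, 160, 160))

# Neural network using the default standard 3D U-Net architecture.
model = Neural_Network(preprocessor=pp)

sample_list = data_io.get_indiceslist()
model.train(sample_list, epochs=50)                          # epoch count assumed
```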


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Jingzhe Ma ◽  
Shaobo Duan ◽  
Ye Zhang ◽  
Jing Wang ◽  
Zongmin Wang ◽  
...  

Ultrasonography is widely used in the clinical diagnosis of thyroid nodules. Ultrasound images of thyroid nodules have varied appearances, interior features, and blurred borders, which make it difficult for a physician to classify nodules as malignant or benign merely through visual recognition. The development of artificial intelligence, especially deep learning, has led to great advances in the field of medical image diagnosis. However, there are challenges in achieving precision and efficiency in the recognition of thyroid nodules. In this work, we propose a deep learning architecture, the you-only-look-once v3 dense multireceptive fields convolutional neural network (YOLOv3-DMRF), based on YOLOv3. It comprises a DMRF-CNN and multiscale detection layers. In the DMRF-CNN, we integrate dilated convolutions with different dilation rates to keep passing edge and texture features to deeper layers. Detection layers at two different scales are deployed to recognize thyroid nodules of different sizes. We used two datasets to train and evaluate YOLOv3-DMRF in the experiments. One dataset includes 699 original ultrasound images of thyroid nodules collected from a local health examination center; we obtained 10,485 images after data augmentation. The other is an open-access dataset that includes ultrasound images of 111 malignant and 41 benign thyroid nodules. Average precision (AP) and mean average precision (mAP) are used as the metrics for quantitative and qualitative evaluation. We compared the proposed YOLOv3-DMRF with several state-of-the-art deep learning networks. The experimental results show that YOLOv3-DMRF outperforms the others on mAP and detection time on both datasets. Specifically, mAP was 90.05% and 95.23%, and detection time was 3.7 s and 2.2 s, respectively, on the two test datasets. These results demonstrate that the proposed YOLOv3-DMRF is efficient for the detection and recognition of thyroid nodules in ultrasound images.
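To illustrate the multi-receptive-field idea, the sketch below builds a block of parallel dilated convolutions whose outputs are concatenated, so that edge and texture features at several receptive-field sizes are passed on to deeper layers. The dilation rates, filter counts, and input size are assumptions for illustration, not the exact DMRF-CNN configuration from the paper.

```python
from tensorflow.keras import layers, models

def dmrf_block(x, filters=32, dilation_rates=(1, 2, 3, 5)):
    """Parallel dilated convolutions concatenated into one feature map (a sketch;
    the rates and filter count are illustrative assumptions)."""
    branches = [
        layers.Conv2D(filters, 3, padding="same", dilation_rate=r, activation="relu")(x)
        for r in dilation_rates
    ]
    return layers.Concatenate()(branches)

# Usage sketch on a placeholder ultrasound-sized input.
inputs = layers.Input(shape=(256, 256, 1))
features = dmrf_block(inputs)
backbone = models.Model(inputs, features)
backbone.summary()
```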


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Kai-Chi Chen ◽  
Hong-Ren Yu ◽  
Wei-Shiang Chen ◽  
Wei-Che Lin ◽  
Yi-Chen Lee ◽  
...  

Abstract Acute lower respiratory infection is the leading cause of child death in developing countries. Current strategies to reduce this problem include early detection and appropriate treatment. Better diagnostic and therapeutic strategies are still needed in poor countries. An artificial-intelligence chest X-ray scheme has the potential to become a screening tool for lower respiratory infection in children. Artificial-intelligence chest X-ray schemes for children are rare and limited to a single lung disease. We need a powerful system as a diagnostic tool for the most common lung diseases in children. To address this, we present a computer-aided diagnostic scheme for the chest X-ray images of several common pulmonary diseases of children, including bronchiolitis/bronchitis, bronchopneumonia/interstitial pneumonitis, lobar pneumonia, and pneumothorax. The study consists of two main approaches: first, we trained a model based on the YOLOv3 architecture to crop the appropriate location of the lung field automatically. Second, we compared three different methods for multi-class classification, including the one-versus-one scheme, the one-versus-all scheme, and training a classifier model based on a convolutional neural network. Our model demonstrated a good ability to distinguish these common lung problems in children. Among the three methods, the one-versus-one scheme had the best performance. We could detect whether a chest X-ray image is abnormal with 92.47% accuracy, and classify it as bronchiolitis/bronchitis, bronchopneumonia, lobar pneumonia, pneumothorax, or normal with 71.94%, 72.19%, 85.42%, 85.71%, and 80.00% accuracy, respectively. In conclusion, we provide a computer-aided diagnostic scheme based on deep learning for common pulmonary diseases in children. This scheme is most useful as a screening tool for separating normal images from most lower respiratory problems in children. It can also help review chest X-ray images interpreted by clinicians and may flag possible oversights. This system can be a good diagnostic assistant under limited medical resources.
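For reference, here is a sketch of the one-versus-one aggregation used in the comparison: each pairwise binary classifier votes for one of its two classes, and the class with the most votes wins. The model container and class names are hypothetical placeholders standing in for the pairwise CNNs described in the abstract.

```python
from itertools import combinations

def one_vs_one_predict(binary_models, classes, x):
    """Aggregate pairwise binary classifiers by majority vote (one-vs-one scheme).

    binary_models: dict mapping a class pair (a, b) to a callable that returns the
    predicted class (a or b) for input x. These callables are placeholders for the
    pairwise CNNs; names and structure here are illustrative assumptions.
    """
    votes = {c: 0 for c in classes}
    for (a, b) in combinations(classes, 2):
        winner = binary_models[(a, b)](x)
        votes[winner] += 1
    return max(votes, key=votes.get)

# Hypothetical usage with the disease classes named in the abstract:
# classes = ["bronchiolitis/bronchitis", "bronchopneumonia", "lobar pneumonia",
#            "pneumothorax", "normal"]
# label = one_vs_one_predict(binary_models, classes, chest_xray_image)
```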

