Synergic Deep Learning for Smart Health Diagnosis of COVID-19 for Connected Living and Smart Cities

2022 ◽  
Vol 22 (3) ◽  
pp. 1-14
Author(s):  
K. Shankar ◽  
Eswaran Perumal ◽  
Mohamed Elhoseny ◽  
Fatma Taher ◽  
B. B. Gupta ◽  
...  

The COVID-19 pandemic has caused significant losses in human life, economic activity, and more. To prevent and control COVID-19, a range of smart, complex, spatially heterogeneous control solutions and strategies have been pursued. Early classification of the 2019 novel coronavirus disease (COVID-19) is needed to treat and control the disease, and since no precise automated toolkits exist, secondary diagnosis models are required. The latest findings obtained with radiological imaging techniques show that such images hold noticeable details regarding the COVID-19 virus. Applying recent artificial intelligence (AI) and deep learning (DL) approaches to radiological images is therefore useful for accurately detecting the disease. This article introduces a new synergic deep learning (SDL)-based smart health diagnosis of COVID-19 using chest X-ray images. The SDL makes use of dual deep convolutional neural networks (DCNNs) that learn mutually from one another. In particular, the image representations learned by both DCNNs are provided as input to a synergic network, which has a fully connected structure and predicts whether a pair of input images belongs to the same class. In addition, the proposed SDL model uses a fuzzy bilateral filtering (FBF) model to pre-process the input image. The integration of FBF and SDL results in effective classification of COVID-19. To investigate the classifier outcome of the SDL model, a detailed set of simulations was carried out, confirming the effective performance of the FBF-SDL model over the compared methods.
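The synergic pairing idea can be illustrated with a minimal numpy sketch. The feature extractors and weights below are hypothetical stand-ins for the two trained DCNNs; what the sketch shows is the data flow the abstract describes: two branches produce representations, which are concatenated as the synergic network's input, and the synergic target is 1 when the paired images share a class.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, weights):
    # Stand-in for a DCNN feature extractor: a single linear map + ReLU.
    return np.maximum(weights @ image.ravel(), 0.0)

def synergic_label(label_a, label_b):
    # The synergic network is trained to predict 1 when the paired
    # inputs belong to the same class and 0 otherwise.
    return 1.0 if label_a == label_b else 0.0

def synergic_input(img_a, img_b, w_a, w_b):
    # Concatenate the representations learned by the two branches; this
    # concatenation is what the fully connected synergic network sees.
    return np.concatenate([extract_features(img_a, w_a),
                           extract_features(img_b, w_b)])

# Toy 8x8 "X-ray" patches and random weights for the two branches.
img1, img2 = rng.random((8, 8)), rng.random((8, 8))
w1, w2 = rng.random((16, 64)), rng.random((16, 64))

pair_features = synergic_input(img1, img2, w1, w2)
```

The real model would train both DCNNs and the synergic network jointly; this only sketches the pairing mechanics.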

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Song-Quan Ong ◽  
Hamdan Ahmad ◽  
Gomesh Nair ◽  
Pradeep Isawasan ◽  
Abdul Hafiz Ab Majid

Classification of Aedes aegypti (Linnaeus) and Aedes albopictus (Skuse) by humans remains challenging. We propose a highly accessible method to develop a deep learning (DL) model and implement the model for mosquito image classification using hardware that could regulate the development process. In particular, we constructed a dataset with 4120 images of Aedes mosquitoes that were older than 12 days, by which point their common morphological features had disappeared, and we illustrate how to set up supervised deep convolutional neural networks (DCNNs) with hyperparameter adjustment. The model was first applied by deploying it externally in real time on three different generations of mosquitoes, and its accuracy was compared with human expert performance. Our results show that both the learning rate and the number of epochs significantly affected the accuracy, and the best-performing hyperparameters achieved an accuracy of more than 98% at classifying mosquitoes, showing no significant difference from human-level performance. We demonstrate the feasibility of the method for constructing a DCNN model deployed externally on mosquitoes in real time.
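The hyperparameter adjustment over learning rate and epochs can be sketched as a simple grid search. The grid values and the scoring function here are hypothetical placeholders; a real run would train the DCNN at each configuration and evaluate on held-out mosquito images.

```python
import itertools

# Hypothetical grid over the two hyperparameters the study found most
# influential: learning rate and number of training epochs.
learning_rates = [1e-2, 1e-3, 1e-4]
epochs = [10, 30, 50]

def mock_validation_accuracy(lr, n_epochs):
    # Placeholder scoring function standing in for "train the DCNN,
    # then measure validation accuracy" at this configuration.
    return 0.9 + 0.01 * (lr == 1e-3) + 0.001 * (n_epochs == 30)

# Pick the configuration with the highest (mock) validation accuracy.
best = max(itertools.product(learning_rates, epochs),
           key=lambda cfg: mock_validation_accuracy(*cfg))
```

In practice the scoring step dominates the cost, so the grid is kept small or replaced by random search.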


Author(s):  
Mubarak Muhammad ◽  
Sertan Serte

Among the areas where AI studies have centered on developing models that provide real-time solutions for the real estate industry are price forecasting and the prediction of building age, type, and design (villa, apartment, floor number). Within machine learning, DL is an emerging area in which interest grows every year; as a growing number of DL studies appear in conferences and journals, models for real estate have begun to emerge. In this study, we present a deep learning method for the classification of houses in Northern Cyprus using convolutional neural networks. Classification is based on house age, house price, number of floors, and house type (villa or apartment). The first category is the villa-versus-apartment class; based on a training dataset of 362 images, this class achieves an overall accuracy of 96.40%. The second category splits buildings into two classes by age, namely apartments of 0 to 5 years versus apartments of 6 to 10 years, and achieves an accuracy of 87.42%. The third category, villa with roof versus villa without roof, shows an overall accuracy of 87.60%. The fourth category, villa price from 10,000 to 200,000 euros versus villa price above 200,000 euros, achieves an accuracy of 81.84%. The last category consists of three classes, namely 2-floor versus 3-floor apartments, 2-floor versus 4-floor apartments, and 2-floor versus 5-floor apartments, which achieve accuracies of 83.54%, 82.48%, and 84.77%, respectively.
From the experiments carried out in this thesis and the results obtained, we conclude that its main aim, to use deep learning for the classification and detection of houses in Northern Cyprus and to test the performance of AlexNet for house classification, was achieved. This study is significant for the creation of smart cities and the digitization of the real estate sector as the world embraces the vast power of artificial intelligence, machine learning, and machine vision.


2020 ◽  
Vol 10 (16) ◽  
pp. 5683 ◽  
Author(s):  
Lourdes Duran-Lopez ◽  
Juan Pedro Dominguez-Morales ◽  
Jesús Corral-Jaime ◽  
Saturnino Vicente-Diaz ◽  
Alejandro Linares-Barranco

The COVID-19 pandemic caused by the new coronavirus SARS-CoV-2 has changed the world as we know it. An early diagnosis is crucial to preventing new outbreaks and controlling its rapid spread. Medical imaging techniques, such as X-ray or chest computed tomography, are commonly used for this purpose due to their reliability for COVID-19 diagnosis. Computer-aided diagnosis systems could play an essential role in aiding radiologists in the screening process. In this work, a novel deep-learning-based system, called COVID-XNet, is presented for COVID-19 diagnosis in chest X-ray images. The proposed system applies a set of preprocessing algorithms to the input images for variability reduction and contrast enhancement, which are then fed to a custom convolutional neural network in order to extract relevant features and perform the classification between COVID-19 and normal cases. The system is trained and validated using a 5-fold cross-validation scheme, achieving an average accuracy of 94.43% and an AUC of 0.988. The output of the system can be visualized using Class Activation Maps, highlighting the main findings for COVID-19 in X-ray images. These promising results indicate that COVID-XNet could be used as a tool to aid radiologists and contribute to the fight against COVID-19.
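One common contrast-enhancement step of the kind used in such preprocessing pipelines is global histogram equalization, sketched below in numpy. This is a generic illustration, not COVID-XNet's exact preprocessing, whose details the abstract does not specify.

```python
import numpy as np

def equalize_histogram(img):
    """Map grayscale intensities through the normalized cumulative
    histogram so the output spreads over the full 0-255 range."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-zero CDF value
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]                    # apply the lookup table per pixel

# A low-contrast toy "X-ray" confined to intensities 100..120.
low_contrast = np.tile(np.arange(100, 121, dtype=np.uint8), (21, 1))
enhanced = equalize_histogram(low_contrast)
```

After equalization the toy image spans the full intensity range, which is the variability-reduction effect such a step is meant to provide before the CNN sees the image.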


Author(s):  
Tong Lin ◽  
Xin Chen ◽  
Xiao Tang ◽  
Ling He ◽  
...  

This paper discusses the use of deep convolutional neural networks for radar target classification. Three parts of the work are carried out. First, effective data augmentation methods are used to enlarge the dataset and address class imbalance. Second, using deep learning techniques, we explore an effective framework for classifying and identifying targets based on radar spectral map data. By combining data augmentation and this framework, we achieved an overall classification accuracy of 0.946. Finally, we investigated the automatic annotation of image ROIs (regions of interest). By adjusting the model, we obtained 93% accuracy in the automatic labeling and classification of targets for both the car and cyclist categories.
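One simple way to address class imbalance of the kind described is random oversampling of the minority class, sketched below. The sample data and class names are illustrative only; the paper's actual augmentation pipeline may combine this with transformations of the spectral maps.

```python
import random

random.seed(42)

def oversample(samples, labels):
    """Duplicate minority-class samples at random until every class
    is as large as the majority class."""
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    balanced = []
    for y, group in by_class.items():
        extra = [random.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            balanced.append((s, y))
    return balanced

# Hypothetical unbalanced radar dataset: 4 car samples, 1 cyclist.
data = ["car1", "car2", "car3", "car4", "cyc1"]
labels = ["car", "car", "car", "car", "cyclist"]
balanced = oversample(data, labels)
```

Plain duplication risks overfitting to the repeated samples, which is why it is usually paired with augmentation transforms rather than used alone.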


2021 ◽  
Vol 11 (22) ◽  
pp. 10528
Author(s):  
Khin Yadanar Win ◽  
Noppadol Maneerat ◽  
Syna Sreng ◽  
Kazuhiko Hamamoto

The ongoing COVID-19 pandemic has caused devastating effects on humanity worldwide. With practical advantages and wide accessibility, chest X-rays (CXRs) play vital roles in the diagnosis of COVID-19 and the evaluation of the extent of lung damage caused by the virus. This study aimed to leverage deep-learning-based methods for the automated classification of COVID-19 from normal and viral pneumonia cases on CXRs, and for the identification of indicative regions of COVID-19 biomarkers. Initially, we preprocessed and segmented the lung regions using the DeepLabV3+ method, and subsequently cropped the lung regions. The cropped lung regions were used as inputs to several deep convolutional neural networks (CNNs) for the prediction of COVID-19. The dataset was highly unbalanced: the vast majority were normal images, with small numbers of COVID-19 and pneumonia images. To remedy the unbalanced distribution and avoid biased classification results, we applied five different approaches: (i) balancing the classes using a weighted loss; (ii) image augmentation to add more images to the minority classes; (iii) undersampling of the majority classes; (iv) oversampling of the minority classes; and (v) a hybrid resampling approach combining oversampling and undersampling. The best-performing methods from each approach were combined into an ensemble classifier using two voting strategies. Finally, we used the saliency maps of the CNNs to identify the indicative regions of COVID-19 biomarkers, which are deemed useful for interpretability. The algorithms were evaluated using the largest publicly available COVID-19 dataset. An ensemble of the top five CNNs with image augmentation achieved the highest accuracy of 99.23% and an area under the curve (AUC) of 99.97%, surpassing the results of previous studies.
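The weighted-loss approach, (i), typically assigns each class a weight inversely proportional to its frequency, so misclassifying a rare COVID-19 image costs as much as misclassifying many normal ones. A sketch with a hypothetical label distribution (the exact normalization convention varies between frameworks):

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency:
    w_c = N / (K * n_c), where N is the total sample count,
    K the number of classes, and n_c the count of class c."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# Hypothetical unbalanced CXR label distribution (not the study's data).
labels = ["normal"] * 80 + ["pneumonia"] * 15 + ["covid"] * 5
w = inverse_frequency_weights(labels)
```

These weights are then passed to the per-class term of the cross-entropy loss, so the rarest class ("covid" here, weight 100/15) dominates updates relative to the majority class.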


2021 ◽  
Author(s):  
Soheil Ashkani-Esfahani ◽  
Reze Mojahed Yazdi ◽  
Rohan Bhimani ◽  
Gino M Kerkhoffs ◽  
Mario Maas ◽  
...  

Early and accurate detection of ankle fractures is crucial for reducing future complications. Radiographs are the most abundant imaging technique for assessing fractures. We believe deep learning (DL) methods, through adequately trained deep convolutional neural networks (DCNNs), can assess radiographic images quickly and accurately without human intervention. Herein, we aimed to assess the performance of two different DCNNs in detecting ankle fractures on radiographs compared to the ground truth. In this retrospective study, our DCNNs were trained using radiographs obtained from 1050 patients with ankle fractures and the same number of individuals with otherwise healthy ankles. Pretrained Inception V3 and ResNet50 models were used in our algorithms, and the Danis-Weber classification method was applied. Out of the 1050 patients, 72 were labeled as having occult fractures, as these were not detected in the primary radiographic assessment. Training the DCNNs on single-view radiographs was compared with training on 3 views (anteroposterior, mortise, lateral). Our DCNNs performed better with 3-view images than with a single view, showing greater accuracy, F-score, and area under the curve (AUC). Using 3 views, the sensitivity and specificity in detecting ankle fractures were 97.5% and 93.9% with ResNet50, compared to 98.7% and 98.6% with Inception V3, respectively. ResNet50 missed 3 occult fractures, while Inception V3 missed only one case. Clinical Significance: The performance of our DCNNs shows promising potential for enhancing currently used image interpretation programs or serving as a separate assistant that helps clinicians detect ankle fractures faster and more precisely.
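The sensitivity and specificity figures reported above are computed from the confusion matrix. A short sketch, using hypothetical counts chosen only to illustrate the formulas (not the study's actual confusion matrix):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): the fraction of true fractures
    the model catches. Specificity = TN / (TN + FP): the fraction of
    healthy ankles it correctly leaves alone."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a fracture detector on 200 test radiographs.
sens, spec = sensitivity_specificity(tp=97, fn=3, tn=94, fp=6)
```

For a screening aid, sensitivity is usually the priority: a missed occult fracture (a false negative) is costlier than an extra review triggered by a false positive.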


2018 ◽  
pp. 1-8 ◽  
Author(s):  
Okyaz Eminaga ◽  
Nurettin Eminaga ◽  
Axel Semjonow ◽  
Bernhard Breil

Purpose: The recognition of cystoscopic findings remains challenging for young colleagues and depends on the examiner's skills. Computer-aided diagnosis tools using feature extraction and deep learning show promise as instruments for diagnostic classification. Materials and Methods: Our study considered 479 patient cases representing 44 urologic findings. Image color was linearly normalized and equalized by applying contrast-limited adaptive histogram equalization. Because these findings can be viewed via cystoscopy from every possible angle and side, we generated images rotated in 10-degree increments and flipped them vertically or horizontally, which resulted in 18,681 images. After image preprocessing, we developed deep convolutional neural network (CNN) models (ResNet50, VGG-19, VGG-16, InceptionV3, and Xception) and evaluated them using F1 scores. Furthermore, we proposed two CNN concepts: 90%-previous-layer filter size and harmonic-series filter size. A training set (60%), a validation set (10%), and a test set (30%) were randomly generated from the study dataset. All models were trained on the training set, validated on the validation set, and evaluated on the test set. Results: The Xception-based model achieved the highest F1 score (99.52%), followed by the models based on ResNet50 (99.48%) and the harmonic-series concept (99.45%). All images with cancer lesions were correctly identified by these models. Focusing on the images misclassified by the best-performing model, 7.86% of images showing bladder stones with an indwelling catheter and 1.43% of images showing bladder diverticula were falsely classified. Conclusion: The results of this study show the potential of deep learning for the diagnostic classification of cystoscopic images.
Future work will focus on integration of artificial intelligence–aided cystoscopy into clinical routines and possibly expansion to other clinical endoscopy applications.
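The rotation-and-flip augmentation scheme can be enumerated directly: 10-degree rotation steps over a full circle combined with no flip, a horizontal flip, or a vertical flip. How these 108 variants map onto the reported 18,681 images depends on per-case image counts the abstract does not give, so the sketch only enumerates the variant grid.

```python
def augmentation_variants():
    """Enumerate (rotation angle, flip) combinations: rotations in
    10-degree increments plus an optional horizontal or vertical flip."""
    angles = range(0, 360, 10)               # 36 rotation steps
    flips = ("none", "horizontal", "vertical")
    return [(a, f) for a in angles for f in flips]

variants = augmentation_variants()
```

Each source image would be transformed once per variant, multiplying the effective dataset size by the length of this list.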


Electronics ◽  
2019 ◽  
Vol 8 (8) ◽  
pp. 850 ◽  
Author(s):  
Caleb Vununu ◽  
Suk-Hwan Lee ◽  
Oh-Jun Kwon ◽  
Ki-Ryong Kwon

The complete analysis of images representing human epithelial cells of type 2, commonly referred to as HEp-2 cells, is one of the most important tasks in the diagnostic procedure for various autoimmune diseases. The problem of the automatic classification of these images has been widely discussed since the advent of deep-learning-based methods. Certain datasets of HEp-2 cell images exhibit extreme complexity due to their significant heterogeneity. We propose in this work a method that specifically tackles the problem related to this disparity. A dynamic learning process is conducted with different networks taking different input variations in parallel. In order to emphasize the localized changes in intensity, the discrete wavelet transform is used to produce different versions of the input image. The approximation and detail coefficients are fed to four different deep networks in a parallel learning paradigm in order to efficiently homogenize the features extracted from images that have different intensity levels. The feature maps from these different networks are then concatenated and passed to the classification layers to predict the final class of the cellular image. The proposed method was tested on a public dataset that comprises images from two intensity levels. The significant heterogeneity of this dataset limits the discrimination results of some of the state-of-the-art deep-learning-based methods. We have conducted a comparative study with these methods in order to demonstrate how the dynamic learning proposed in this work manages to significantly minimize this heterogeneity-related problem, thus boosting the discrimination results.
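The decomposition feeding the four parallel networks can be illustrated with a one-level 2-D Haar transform, the simplest discrete wavelet (the paper does not state which wavelet family it uses, so Haar here is an assumption for illustration):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform. Returns the
    approximation band (LL) and the three detail bands (LH, HL, HH)
    that would be routed to the four parallel networks."""
    img = np.asarray(img, dtype=float)
    a = (img[0::2, :] + img[1::2, :]) / 2     # average adjacent rows
    d = (img[0::2, :] - img[1::2, :]) / 2     # difference of adjacent rows
    ll = (a[:, 0::2] + a[:, 1::2]) / 2        # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2        # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2        # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2        # diagonal detail
    return ll, lh, hl, hh

flat = np.full((4, 4), 9.0)                   # a perfectly uniform patch
ll, lh, hl, hh = haar_dwt2(flat)
```

On a uniform patch all detail bands are zero, which is exactly why these bands emphasize localized intensity changes: only the non-uniform structure of a real cell image survives in them.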


2020 ◽  
pp. 2361-2370
Author(s):  
Elham Mohammed Thabit A. ALSAADI ◽  
Nidhal K. El Abbadi

Detection and classification of animals is a major challenge facing researchers. There are five classes of vertebrate animals, namely mammals, amphibians, reptiles, birds, and fish, and each class includes many thousands of different animals. In this paper, we propose a new model based on training deep convolutional neural networks (CNNs) to detect and classify two classes of vertebrate animals (mammals and reptiles). Deep CNNs are the state of the art in image recognition and are known for their high learning capacity, accuracy, and robustness to typical object recognition challenges. The dataset for this system contains 6000 images, including 4800 images for training. The proposed algorithm was tested using 1200 images. The accuracy of the system's prediction for the target object was 97.5%.
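The 4800/1200 division of the 6000 images corresponds to a standard 80/20 shuffled split, which can be sketched as follows (the filenames are placeholders; the paper does not describe its exact splitting procedure, so this is a generic illustration):

```python
import random

random.seed(7)

def train_test_split(items, train_fraction=0.8):
    """Shuffle a dataset and split it; 0.8 reproduces the study's
    4800-train / 1200-test division of 6000 images."""
    items = list(items)
    random.shuffle(items)           # avoid ordering bias in the split
    cut = int(len(items) * train_fraction)
    return items[:cut], items[cut:]

images = [f"img_{i:04d}.jpg" for i in range(6000)]
train, test = train_test_split(images)
```

Shuffling before splitting matters when the files are stored grouped by class, since a sequential cut would otherwise put whole classes entirely in one partition.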


Author(s):  
Venu K. ◽  
Natesan Palanisamy ◽  
Krishnakumar B. ◽  
Sasipriyaa N.

Early detection of disease in a plant leads to early treatment and a considerable reduction in economic loss. Recent developments have introduced deep-learning-based convolutional neural networks for accurately detecting diseases in images using image classification techniques. In this chapter, a CNN is supplied with the input image. In each convolutional layer of the CNN, features are extracted and transferred to the next pooling layer. Finally, all the features extracted from the convolutional layers are concatenated and fed as input to the fully connected layer of a state-of-the-art architecture, and the output class is then predicted by the model. The model is evaluated on three different datasets: grape, pepper, and peach leaves. The experimental results show that the accuracies obtained for the grape, pepper, and peach datasets are 74%, 69%, and 84%, respectively.
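The convolution-then-pooling flow described above can be sketched in numpy with a single filter and one pooling step. The kernel and the toy "leaf patch" are illustrative; a trained CNN learns many such kernels per layer.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most DL
    frameworks): slide the kernel over the image and take dot products."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool2(fmap):
    """2x2 max pooling with stride 2: keep the strongest response
    in each 2x2 window, halving the spatial resolution."""
    h, w = fmap.shape[0] // 2, fmap.shape[1] // 2
    fmap = fmap[:2 * h, :2 * w]
    return fmap.reshape(h, 2, w, 2).max(axis=(1, 3))

leaf = np.arange(36, dtype=float).reshape(6, 6)     # toy leaf patch
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # vertical-edge filter
features = max_pool2(conv2d_valid(leaf, edge_kernel))
```

Stacking several such convolution-plus-pooling stages and flattening the final maps into a fully connected layer gives the architecture the chapter describes.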

