Earthquake Damage Assessment in Three Spatial Scale Using Naive Bayes, SVM, and Deep Learning Algorithms

2021 ◽  
Vol 11 (20) ◽  
pp. 9737
Author(s):  
Sajjad Ahadzadeh ◽  
Mohammad Reza Malek

Earthquakes lead to enormous harm to life and assets. The ability to quickly assess damage across a vast area is crucial for effective disaster response. In recent years, social networks have demonstrated considerable capability for improving situational awareness and identifying impacted areas. In this regard, this study proposed an approach that applied social media data to earthquake damage assessment at the county, city, and 10 × 10 km grid scales using Naive Bayes, support vector machine (SVM), and deep learning classification algorithms. In this study, classification was evaluated using accuracy, precision, recall, and F-score metrics. Then, to understand message propagation behavior in the study area, temporal analysis based on the classified messages was performed. In addition, the variability of spatial topic concentration across the three classification algorithms after the earthquake was examined using the location quotient (LQ). A damage map based on the classification results of the three algorithms at the three scales was created. For validation, confusion matrix metrics, Spearman’s rho, Pearson correlation, and Kendall’s tau were used. In this study, both binary and multi-class classification were performed. Binary classification was used to classify messages into two classes, damage and non-damage, so that the results could be used to estimate earthquake damage. Multi-class classification was used to categorize messages to increase post-crisis situational awareness. In the binary classification, the SVM algorithm performed better on all indices, attaining 71.22% accuracy, an 81.22% F-measure, 79.08% recall, 85.62% precision, and a 0.634 Kappa. In the multi-class classification, the SVM algorithm again performed better on all indices, attaining 90.25% accuracy, an 88.58% F-measure, 84.34% recall, 93.26% precision, and a 0.825 Kappa.
Based on the results of the temporal analysis, most of the damage-related messages were reported on the day of the earthquake and decreased over the following days. Most messages related to infrastructure damage and to injured, dead, and missing people were reported on the day of the earthquake. In addition, the LQ results identified Napa as the center of the earthquake, as damage-related messages were concentrated there under all three algorithms. This indicates that our approach was able to identify the damage well and placed the county at the earthquake’s center among the most affected. The damage estimation findings showed that estimated damage decreased with distance from the epicenter. Based on validation of the estimated damage map against official data, the SVM performed best for damage estimation, followed by deep learning. In addition, at the county scale, the algorithms showed better performance, with a Spearman’s rho of 0.8205, a Pearson correlation of 0.5217, and a Kendall’s tau of 0.6666.
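The location quotient used above has a standard form: the local share of damage-related messages divided by the region-wide share. A minimal sketch (the message counts below are hypothetical, not taken from the study):

```python
def location_quotient(local_damage, local_total, global_damage, global_total):
    """LQ = (local share of damage messages) / (global share of damage messages).
    LQ > 1 means damage-related messages are over-represented in this area."""
    return (local_damage / local_total) / (global_damage / global_total)

# Hypothetical counts: 50 of 100 local tweets mention damage,
# versus 200 of 1000 tweets region-wide -> LQ = 0.5 / 0.2 = 2.5.
lq = location_quotient(50, 100, 200, 1000)
```

An LQ well above 1, as sketched here, is what flags a county such as Napa as a concentration point of damage reports.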

Author(s):  
Bosede Iyiade Edwards ◽  
Nosiba Hisham Osman Khougali ◽  
Adrian David Cheok

With the recent focus on deep neural network architectures for the development of computer-aided diagnosis (CAD) algorithms, we provide a review of studies from the last 3 years (2015-2017) reported in selected top journals and conferences. The 29 studies that met our inclusion criteria were reviewed to identify trends in this field and to inform future development. Studies have focused mostly on cancer-related diseases within internal medicine, while diseases within gender-/age-focused fields such as gynaecology/pediatrics have not received much attention. All reviewed studies employed image datasets, mostly sourced from publicly available databases (55.2%), with fewer based on data from human subjects (31%) and non-medical datasets (13.8%); CNN architectures were employed in most (70%) of the studies. Confirmation of the effect of data manipulation on output quality, and the adoption of multi-class rather than binary classification, also require more attention. Future studies should leverage collaborations with medical experts to enable actual clinical testing, with reporting based on a generally applicable index to allow comparison. Our next steps towards CAD development for osteoarthritis (OA), which will consider multi-class classification and comparison across deep learning approaches and unsupervised architectures, are also highlighted.


2021 ◽  
Vol 19 (11) ◽  
pp. 126-140
Author(s):  
Zahraa S. Aaraji ◽  
Hawraa H. Abbas

Neuroimaging data analysis has attracted a great deal of attention with respect to the accurate diagnosis of Alzheimer’s disease (AD). Magnetic Resonance Imaging (MRI) scanners have thus been commonly used to study AD-related brain structural variations, providing images that demonstrate both morphometric and anatomical changes in the human brain. Deep learning algorithms have already been effectively exploited in other medical image processing applications to identify features and recognise patterns for many diseases that affect the brain and other organs; this paper builds on this to describe a novel computer-aided software pipeline for the classification and early diagnosis of AD. The proposed method uses two types of three-dimensional Convolutional Neural Networks (3D CNN) to facilitate brain MRI data analysis and automatic feature extraction and classification, while pre-processing and post-processing are utilised to normalise the MRI data and facilitate pattern recognition. The experimental results show that the proposed approach achieves 97.5%, 82.5%, and 83.75% accuracy for the binary classifications AD vs. cognitively normal (CN), CN vs. mild cognitive impairment (MCI), and MCI vs. AD, respectively, as well as 85% accuracy for multi-class classification, based on publicly available data sets from the Alzheimer’s Disease Neuroimaging Initiative (ADNI).
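A 3D CNN differs from a 2D one in that its kernels slide over all three spatial axes of the MRI volume. A naive NumPy sketch of a single valid 3D convolution (stride 1, no padding; purely illustrative, not the paper's network):

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive valid 3D convolution: slide the kernel over depth, height, width
    and take the elementwise product-sum at each position."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+d, j:j+h, k:k+w] * kernel)
    return out

# A 4x4x4 volume of ones convolved with a 2x2x2 kernel of ones
# yields a 3x3x3 output in which every entry is 8.
out = conv3d(np.ones((4, 4, 4)), np.ones((2, 2, 2)))
```

Real 3D CNN layers add learned kernels, channels, and striding, but the sliding-window product-sum above is the core operation applied to volumetric MRI data.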


COVID ◽  
2021 ◽  
Vol 1 (1) ◽  
pp. 403-415
Author(s):  
Abeer Badawi ◽  
Khalid Elgazzar

Coronavirus disease (COVID-19) is an illness caused by a novel coronavirus. One practical examination for COVID-19 is chest radiography, as infected patients show abnormalities in chest X-ray images. However, examining chest X-rays requires a highly experienced specialist. Hence, deep learning techniques for detecting abnormalities in X-ray images are commonly presented as a potential solution to help diagnose the disease. Numerous studies have been reported on COVID-19 chest X-ray classification, but most previous studies were conducted on a small set of COVID-19 X-ray images, which created imbalanced datasets and affected the performance of the deep learning models. In this paper, we propose several image processing techniques to augment COVID-19 X-ray images and generate a large, diverse dataset that boosts the performance of deep learning algorithms in detecting the virus from chest X-rays. We also propose innovative and robust deep learning models, based on DenseNet201, VGG16, and VGG19, to detect COVID-19 from a large set of chest X-ray images. A performance evaluation shows that the proposed models outperform all existing techniques to date, achieving 99.62% on the binary classification and 95.48% on the multi-class classification. Based on these findings, we provide a pathway for researchers to develop enhanced models with a balanced dataset that includes the largest available set of COVID-19 chest X-ray images. This work is of high interest to healthcare providers, as it helps to diagnose COVID-19 from chest X-rays in less time and with higher accuracy.
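Simple geometric augmentations of the kind described (flips and rotations) can be sketched with NumPy. The exact transform set used by the authors is not specified here, so these are illustrative only:

```python
import numpy as np

def augment(image):
    """Return simple geometric variants of a 2D X-ray image array:
    horizontal flip, vertical flip, and two 90-degree rotations."""
    return [
        np.fliplr(image),       # horizontal flip
        np.flipud(image),       # vertical flip
        np.rot90(image, k=1),   # rotate 90 degrees counterclockwise
        np.rot90(image, k=3),   # rotate 270 degrees counterclockwise
    ]

img = np.arange(16).reshape(4, 4)
variants = augment(img)  # four augmented copies per input image
```

Each input image thus yields several label-preserving variants, which is how a small class (here, COVID-19 positives) can be expanded to balance the dataset.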


2021 ◽  
Vol 13 (9) ◽  
pp. 4814
Author(s):  
Sajjad Ahadzadeh ◽  
Mohammad Reza Malek

Natural disasters have always been among the threats to human societies. As a result of such crises, many people are affected or injured, and heavy financial losses are incurred. Large earthquakes often occur suddenly, which makes crisis management difficult. Quick identification of affected areas after critical events can help relief workers provide emergency services more quickly. This paper uses social media text messages to create a damage map. A support vector machine (SVM) machine-learning method was used to identify mentions of damage among social media text messages, and the damage map was created from the damage-related tweets. The results showed the SVM classifier accurately identified damage-related messages, attaining an F-score of 58%, precision of 56.8%, recall of 59.25%, and accuracy of 71.03%. In addition, the temporal pattern of damage and non-damage tweets was investigated per day and per hour; most damage-related messages were sent on the day of the earthquake. The results of our research were evaluated by comparing the created damage map with official intensity maps. The findings showed that earthquake damage can be estimated efficiently by our strategy at multiple spatial units, with an overall accuracy of 69.89% at the spatial grid unit, and Spearman’s rho and Pearson correlation of 0.429 and 0.503, respectively, at the spatial county unit. We used two spatial units in this research to examine the impact of the spatial unit on the accuracy of damage assessment. The damage map created in this research can help relief workers set priorities.
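Spearman's rho, used above to validate the damage map against official intensity data, is just Pearson correlation computed on ranks. A minimal pure-Python sketch (it assumes no tied values, unlike a full implementation):

```python
def rankdata(values):
    """Rank values from 1..n (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, i in enumerate(order):
        ranks[i] = rank + 1
    return ranks

def pearson(x, y):
    """Pearson correlation: covariance over the product of standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank sequences."""
    return pearson(rankdata(x), rankdata(y))

# A perfectly monotone relationship between estimated and official damage
# gives rho = 1.0, regardless of the actual magnitudes.
rho = spearman([1.2, 3.4, 2.2, 5.0], [10, 30, 20, 50])
```

Because it only compares orderings, Spearman's rho suits this validation: the estimated damage scores and the official intensity values live on different scales.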


2021 ◽  
Vol 5 (4) ◽  
pp. 73
Author(s):  
Mohamed Chetoui ◽  
Moulay A. Akhloufi ◽  
Bardia Yousefi ◽  
El Mostafa Bouattane

The coronavirus pandemic is spreading around the world. Medical imaging modalities such as radiography play an important role in the fight against COVID-19. Deep learning (DL) techniques have improved medical imaging tools and help radiologists make clinical decisions for the diagnosis, monitoring, and prognosis of various diseases. Computer-Aided Diagnostic (CAD) systems can improve work efficiency by precisely delineating infections in chest X-ray (CXR) images, thus facilitating subsequent quantification. CAD can also help automate the scanning process and reshape the workflow with minimal patient contact, providing the best protection for imaging technicians. The objective of this study is to develop a deep learning algorithm to detect COVID-19, pneumonia, and normal cases on CXR images. We propose two classification problems: (i) a binary classification to distinguish COVID-19 from normal cases, and (ii) a multi-class classification covering COVID-19, pneumonia, and normal. Nine datasets and more than 3200 COVID-19 CXR images are used to assess the efficiency of the proposed technique. The model is trained on a subset of the National Institutes of Health (NIH) dataset using the swish activation function, thus improving the training accuracy for detecting COVID-19 and other pneumonia. The models are tested on eight merged datasets and on individual test sets in order to confirm the degree of generalization of the proposed algorithms. An explainability algorithm is also developed to visually show the location of the lung-infected areas detected by the model. Moreover, we provide a detailed analysis of the misclassified images. The obtained results achieve high performance, with an Area Under the Curve (AUC) of 0.97 for multi-class classification (COVID-19 vs. other pneumonia vs. normal) and 0.98 for the binary model (COVID-19 vs. normal). The average sensitivity and specificity are 0.97 and 0.98, respectively, and the sensitivity for the COVID-19 class reaches 0.99.
These results outperform comparable state-of-the-art models for the detection of COVID-19 on CXR images. The explainability model shows that our model efficiently identifies the signs of COVID-19.
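Sensitivity and specificity, reported above, follow directly from the binary confusion counts. A minimal sketch with hypothetical counts (the true/false positive numbers below are made up for illustration):

```python
def sensitivity(tp, fn):
    """True positive rate: fraction of actual positives that are detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: fraction of actual negatives correctly rejected."""
    return tn / (tn + fp)

# Hypothetical counts: 99 COVID-19 cases detected and 1 missed gives
# a sensitivity of 0.99; 98 of 100 negatives rejected gives 0.98.
sens = sensitivity(tp=99, fn=1)
spec = specificity(tn=98, fp=2)
```

High sensitivity on the COVID-19 class matters most clinically here, since a false negative sends an infected patient away undiagnosed.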


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Dandi Yang ◽  
Cristhian Martinez ◽  
Lara Visuña ◽  
Hardev Khandhar ◽  
Chintan Bhatt ◽  
...  

Abstract: The main purpose of this work is to investigate and compare several deep learning enhanced techniques applied to X-ray and CT-scan medical images for the detection of COVID-19. In this paper, we used four powerful pre-trained CNN models, VGG16, DenseNet121, ResNet50, and ResNet152, for the COVID-19 CT-scan binary classification task. The proposed Fast.AI ResNet framework was designed to find the best architecture, pre-processing, and training parameters for the models largely automatically. The accuracy and F1-score were both above 96% in the diagnosis of COVID-19 using CT-scan images. In addition, we applied transfer learning techniques to overcome the insufficient data and to improve the training time. The binary and multi-class X-ray image classification tasks were performed using an enhanced VGG16 deep transfer learning architecture, which achieved a high accuracy of 99% in detecting COVID-19 and pneumonia from X-ray images. The accuracy and validity of the algorithms were assessed on well-known public X-ray and CT-scan datasets. The proposed methods achieve better results for COVID-19 diagnosis than other related methods in the literature. In our opinion, our work can help virologists and radiologists make a better and faster diagnosis in the struggle against the COVID-19 outbreak.


Author(s):  
Rehab M. Duwairi ◽  
Saad A. Al-Zboon ◽  
Rami A. Al-Dwairi ◽  
Ahmad Obaidi

The rapid development of artificial neural network techniques, especially convolutional neural networks, has encouraged researchers to adopt such techniques in the medical domain, specifically to provide assistive tools that help professionals diagnose patients. The main problem faced by researchers in the medical domain is the lack of available annotated datasets that can be used to train and evaluate large, complex deep neural networks. In this paper, to assist researchers interested in applying deep learning techniques to aid ophthalmologists in diagnosing eye-related diseases, we provide an optical coherence tomography (OCT) dataset created in collaboration with ophthalmologists from the King Abdullah University Hospital, Irbid, Jordan. This dataset consists of 21,991 OCT images distributed over seven eye diseases in addition to normal images (no disease), namely, Choroidal Neovascularisation, Full Macular Hole (Full Thickness), Partial Macular Hole, Central Serous Retinopathy, Geographic Atrophy, Macular Retinal Oedema, and Vitreomacular Traction. To the best of our knowledge, this dataset is the largest of its kind: the images belong to actual patients from Jordan, and the annotation was carried out by ophthalmologists. Two classification tasks were applied to this dataset: a binary classification to distinguish between images of healthy eyes (normal) and images of diseased eyes (abnormal), and a multi-class classification, where the deep neural network is trained to distinguish between the seven diseases listed above in addition to the normal case. In both classification tasks, the U-Net neural network was modified and subsequently utilised. This modification adds an additional block of layers to the original U-Net model to make it capable of handling classification, as the original network is designed for image segmentation.
The results of the binary classification were 84.90% accuracy and 69.50% quadratic weighted kappa, while the results of the multi-class classification were 63.68% accuracy and 66.06% quadratic weighted kappa.
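Quadratic weighted kappa, the agreement metric reported above, is Cohen's kappa with disagreements penalized by the squared distance between class indices. A compact NumPy sketch:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights w_ij = (i - j)^2 / (n - 1)^2,
    so predictions far from the true class are penalized more heavily."""
    observed = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        observed[t, p] += 1
    weights = np.array([[(i - j) ** 2 / (n_classes - 1) ** 2
                         for j in range(n_classes)] for i in range(n_classes)])
    # Expected counts under chance agreement, from the row/column marginals.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Perfect agreement gives kappa = 1.0; chance-level agreement gives 0.
kappa = quadratic_weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], n_classes=3)
```

The quadratic weighting is appropriate when class labels are ordered, which is why the metric complements plain accuracy in the results above.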


2020 ◽  
Vol 14 ◽  
Author(s):  
Lahari Tipirneni ◽  
Rizwan Patan

Abstract: Breast cancer causes hundreds of thousands of deaths worldwide every year and has become the most common type of cancer in women. Early detection leads to better prognosis and increases the chance of survival. Automating the classification using Computer-Aided Diagnosis (CAD) systems can make the diagnosis less prone to errors. Multi-class and binary classification of breast cancer are challenging problems. Individual convolutional neural network architectures extract specific feature descriptors from images, which cannot represent the different types of breast cancer; this leads to false positives in classification, which is undesirable in disease diagnosis. The current paper presents an ensemble convolutional neural network for multi-class and binary classification of breast cancer, in which the feature descriptors from each network are combined to produce the final classification. In this paper, histopathological images are taken from the publicly available BreakHis dataset and classified into 8 classes. The proposed ensemble model performs better than the methods previously proposed in the literature, and the results show that it could be a viable approach for breast cancer classification.
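One common way to combine the outputs of several networks in an ensemble is to average the per-class probabilities and take the argmax. The abstract does not detail the authors' exact combination rule, so this NumPy sketch is illustrative:

```python
import numpy as np

def ensemble_predict(model_probs):
    """Average per-class probabilities across models, then pick the argmax class.
    model_probs: list of (n_samples, n_classes) arrays, one array per model."""
    avg = np.mean(model_probs, axis=0)
    return np.argmax(avg, axis=1)

# Two hypothetical models scoring 2 samples over 3 classes:
probs_a = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
probs_b = np.array([[0.5, 0.4, 0.1], [0.2, 0.1, 0.7]])
labels = ensemble_predict([probs_a, probs_b])  # -> [0, 2]
```

Averaging smooths out the idiosyncratic feature descriptors of any single network, which is the intuition behind the ensemble's reduced false-positive rate.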


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1550
Author(s):  
Alexandros Liapis ◽  
Evanthia Faliagka ◽  
Christos P. Antonopoulos ◽  
Georgios Keramidas ◽  
Nikolaos Voros

Physiological measurements have been widely used by researchers and practitioners to address the stress detection challenge. So far, various datasets for stress detection have been recorded and made available to the research community for testing and benchmarking. The majority of the available stress-related datasets were recorded while users were exposed to intense stressors, such as songs, movie clips, major hardware/software failures, image datasets, and gaming scenarios. However, it remains an open research question whether such datasets can be used to create models that will effectively detect stress in different contexts. This paper investigates the performance of the publicly available physiological dataset named WESAD (wearable stress and affect detection) in the context of user experience (UX) evaluation. More specifically, electrodermal activity (EDA) and skin temperature (ST) signals from WESAD were used to train three traditional machine learning classifiers and a simple feed-forward deep neural network combining continuous variables and entity embeddings. Regarding the binary classification problem (stress vs. no stress), high accuracy (up to 97.4%) was achieved with both training approaches (deep learning and machine learning). Regarding the stress detection effectiveness of the created models in another context, such as UX evaluation, the results were promising; more specifically, the deep learning model achieved rather high agreement when a user-annotated dataset was used for validation.
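Entity embeddings replace a categorical input with a learned dense vector that is concatenated with the continuous features before the dense layers. A minimal sketch of the lookup-and-concatenate step (the table here is random rather than learned, and the feature names are illustrative, not WESAD's actual schema):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: 5 categories (e.g. stimulus/session IDs) embedded in 3 dims.
# In a real network this table is a trainable parameter.
embedding_table = rng.normal(size=(5, 3))

def build_input(category_idx, continuous_features):
    """Concatenate the category's embedding vector with continuous signals
    (e.g. an EDA feature and a skin-temperature feature) into one input."""
    return np.concatenate([embedding_table[category_idx], continuous_features])

# 3 embedding dims + 2 continuous features = a 5-dimensional input vector.
x = build_input(2, np.array([0.41, 33.7]))
```

Letting the network learn the table (instead of one-hot encoding) gives related categories nearby vectors, which is the appeal of entity embeddings for mixed tabular data.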


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Shaker El-Sappagh ◽  
Jose M. Alonso ◽  
S. M. Riazul Islam ◽  
Ahmad M. Sultan ◽  
Kyung Sup Kwak

Abstract: Alzheimer’s disease (AD) is the most common type of dementia. Its diagnosis and progression detection have been intensively studied. Nevertheless, research studies often have little effect on clinical practice, mainly for the following reasons: (1) most studies depend mainly on a single modality, especially neuroimaging; (2) diagnosis and progression detection are usually studied separately as two independent problems; and (3) current studies concentrate mainly on optimizing the performance of complex machine learning models while disregarding their explainability. As a result, physicians struggle to interpret these models and find it hard to trust them. In this paper, we carefully develop an accurate and interpretable AD diagnosis and progression detection model that provides physicians with accurate decisions along with a set of explanations for every decision. Specifically, the model integrates 11 modalities of 1048 subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) real-world dataset: 294 cognitively normal, 254 stable mild cognitive impairment (MCI), 232 progressive MCI, and 268 AD. It is a two-layer model with random forest (RF) as the classifier algorithm. In the first layer, the model carries out a multi-class classification for the early diagnosis of AD patients. In the second layer, the model applies binary classification to detect possible MCI-to-AD progression within three years of a baseline diagnosis. The performance of the model is optimized with key markers selected from a large set of biological and clinical measures. Regarding explainability, we provide, for each layer, global and instance-based explanations of the RF classifier using the SHapley Additive exPlanations (SHAP) feature attribution framework. In addition, we implement 22 explainers based on decision trees and fuzzy rule-based systems to provide complementary justifications for every RF decision in each layer.
Furthermore, these explanations are represented in natural language form to help physicians understand the predictions. The designed model achieves a cross-validation accuracy of 93.95% and an F1-score of 93.94% in the first layer, and a cross-validation accuracy of 87.08% and an F1-score of 87.09% in the second layer. The resulting system is not only accurate, but also trustworthy, accountable, and medically applicable, thanks to the provided explanations, which are broadly consistent with each other and with the AD medical literature. The proposed system can help to enhance the clinical understanding of AD diagnosis and progression processes by providing detailed insights into the effect of different modalities on the disease risk.
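The two-layer design described above can be sketched as a simple cascade: a multi-class diagnosis first, then, only for MCI cases, a binary progression prediction. The classifier functions below are placeholders standing in for the trained RF models, and the ADAS-Cog 13 thresholds are invented for illustration:

```python
def two_layer_decision(features, diagnose, predict_progression):
    """Layer 1: multi-class diagnosis ('CN', 'MCI', or 'AD').
    Layer 2: for MCI cases only, predict progression to AD within three years."""
    label = diagnose(features)
    if label == "MCI":
        return label, predict_progression(features)
    return label, None

# Placeholder classifiers (real ones would be the trained random forests):
diagnose = lambda f: "MCI" if f["adas13"] > 10 else "CN"
progress = lambda f: f["adas13"] > 18

result = two_layer_decision({"adas13": 21.0}, diagnose, progress)
```

Cascading keeps the progression model focused on the one clinically ambiguous group (MCI), rather than forcing a single classifier to answer both questions at once.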

