Smart Heart Disease Prediction System with IoT and Fog Computing Sectors Enabled by Cascaded Deep Learning Model

2022
Vol 2022
pp. 1-22
Author(s):  
K. Butchi Raju ◽  
Suresh Dara ◽  
Ankit Vidyarthi ◽  
V. MNSSVKR Gupta ◽  
Baseem Khan

Chronic illnesses such as chronic respiratory disease, cancer, heart disease, and diabetes threaten people worldwide. Among them, heart disease, with its disparate features and symptoms, is particularly difficult to diagnose. With the emergence of smart wearable gadgets, fog computing and "Internet of Things" (IoT) solutions have become necessary for diagnosis. The proposed model integrates Edge-Fog-Cloud computing for accurate and fast delivery of outcomes. The hardware components collect data from different patients; significant heart features are extracted from the signals, and features of other attributes are gathered as well. All of these features are fed to a diagnostic system based on an Optimized Cascaded Convolutional Neural Network (CCNN), whose hyperparameters are optimized by Galactic Swarm Optimization (GSO). In the performance analysis, the precision of the suggested GSO-CCNN is 3.7%, 3.7%, 3.6%, 7.6%, 67.9%, 48.4%, 33%, 10.9%, and 7.6% higher than that of PSO-CCNN, GWO-CCNN, WOA-CCNN, DHOA-CCNN, DNN, RNN, LSTM, CNN, and CCNN, respectively. The comparative analysis thus confirms the efficiency of the suggested system over conventional models.
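The hyperparameter-tuning idea above can be illustrated with a toy sketch. Everything here is hypothetical: `surrogate_loss` stands in for the real CCNN validation loss, the search space is invented, and the single sample-then-mutate loop is a drastic simplification of Galactic Swarm Optimization, which actually alternates sub-swarm exploration with a global "galactic" phase.

```python
import random

# Hypothetical search space for two CCNN hyperparameters (names assumed).
SEARCH_SPACE = {
    "learning_rate": (1e-4, 1e-1),
    "dropout": (0.0, 0.5),
}

def surrogate_loss(params):
    # Stand-in for a real CCNN validation loss; minimum near (0.01, 0.2).
    return (params["learning_rate"] - 0.01) ** 2 + (params["dropout"] - 0.2) ** 2

def gso_like_search(n_particles=20, n_iters=50, seed=0):
    """Simplified swarm-style search: sample a population, then refine the best."""
    rng = random.Random(seed)

    def sample():
        return {k: rng.uniform(lo, hi) for k, (lo, hi) in SEARCH_SPACE.items()}

    # Initial "swarm": best of a random population.
    best = min((sample() for _ in range(n_particles)), key=surrogate_loss)
    # Local refinement phase: small Gaussian moves, kept only if they improve.
    for _ in range(n_iters):
        cand = {k: min(max(v + rng.gauss(0, 0.01), SEARCH_SPACE[k][0]),
                       SEARCH_SPACE[k][1])
                for k, v in best.items()}
        if surrogate_loss(cand) < surrogate_loss(best):
            best = cand
    return best
```

In the paper's setting, evaluating one particle means training and validating a full CCNN, so the swarm size and iteration budget dominate the cost.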

2021
pp. 1-12
Author(s):  
Irfan Javid ◽  
Ahmed Khalaf Zager Alsaedi ◽  
Rozaida Binti Ghazali ◽  
Yana Mazwin ◽  
Muhammad Zulqarnain

In previous studies, various machine-driven decision support systems based on recurrent neural networks (RNN) were commonly proposed for the detection of cardiovascular disease. However, the majority of these approaches are restricted to feature preprocessing. In this paper, we address both feature refinement and the mitigation of predictive-model problems such as underfitting and overfitting; by avoiding these, the model performs well on both the training and testing datasets. Overfitting the training data is often triggered by inadequate network configuration and inappropriate features. We advocate using the Chi2 statistical model to remove irrelevant features while searching for the best-configured gated recurrent unit (GRU) using an exhaustive search strategy. The suggested hybrid technique, called Chi2 GRU, is tested against traditional ANN and GRU models, as well as other progressive machine learning models and previously reported strategies for cardiopathy prediction. The prediction accuracy of the proposed model is 92.17%. In contrast to previously stated approaches, the obtained outcomes are promising. The study's results indicate that medical practitioners can use the proposed diagnostic method to reliably predict heart disease.
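The Chi2 filtering step can be sketched in a few lines. This is a minimal illustration with binary features and binary labels, not the paper's pipeline: a higher chi-square statistic means the feature and the class label are more dependent, so low-scoring features are the ones a Chi2 filter would drop.

```python
def chi2_score(feature, labels):
    """Chi-square statistic for one binary feature against binary class labels."""
    n = len(feature)
    score = 0.0
    for f in (0, 1):
        for y in (0, 1):
            # Observed vs. expected count for the (feature=f, label=y) cell.
            observed = sum(1 for a, b in zip(feature, labels) if a == f and b == y)
            expected = (sum(1 for a in feature if a == f)
                        * sum(1 for b in labels if b == y)) / n
            if expected:
                score += (observed - expected) ** 2 / expected
    return score
```

A feature identical to the label scores high; a feature independent of the label scores near zero, which is exactly the ranking the filter exploits.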


2020
Vol ahead-of-print (ahead-of-print)
Author(s):  
Basma Abd El-Rahiem ◽  
Ahmed Sedik ◽  
Ghada M. El Banby ◽  
Hani M. Ibrahem ◽  
Mohamed Amin ◽  
...  

Purpose – The objective of this paper is to perform infrared (IR) face recognition efficiently with convolutional neural networks (CNNs). The proposed model has several advantages, such as automatic feature extraction using convolutional and pooling layers and the ability to distinguish between faces without visual details.

Design/methodology/approach – A model comprising five convolutional layers and five max-pooling layers is introduced for the recognition of IR faces.

Findings – The experimental results and analysis reveal high recognition rates of IR faces with the proposed model.

Originality/value – A designed CNN model is presented for IR face recognition. Both the feature extraction and classification tasks are incorporated into this model. The problems of low contrast and absence of details in IR images are overcome with the proposed model. The recognition accuracy reaches 100% in experiments on the Terravic Facial IR Database (TFIRDB).
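A quick sanity check on the five-conv/five-pool architecture is the spatial size of the final feature map. The abstract does not state kernel sizes or padding, so the sketch below assumes size-preserving ('same'-padded) convolutions followed by 2x2 max-pooling in each block; only the pooling then shrinks the map.

```python
def feature_map_size(size, n_blocks=5, pool=2):
    """Spatial side length after n_blocks of (same-padded conv, pool x pool max-pool).

    Assumption: each convolution preserves spatial size, so only the
    pooling layers (integer division by `pool`) shrink the feature map.
    """
    for _ in range(n_blocks):
        size //= pool
    return size
```

For a 128x128 IR image, five halvings leave a 4x4 map, which is then flattened for the classification layers.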


Author(s):  
Surenthiran Krishnan ◽  
Pritheega Magalingam ◽  
Roslina Ibrahim

<span>This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) that combines multiple gated recurrent units (GRU), long short-term memory (LSTM) and the Adam optimizer. The proposed model achieved an outstanding accuracy of 98.6876%, the highest among existing RNN models. The model was developed in Python 3.7 by integrating the RNN with multiple GRUs in Keras, with TensorFlow as the backend for the deep learning process, supported by various Python libraries. Recent existing models using RNN have reached an accuracy of 98.23%, and a deep neural network (DNN) has reached 98.5%. The common drawbacks of the existing models are low accuracy due to the complex build-up of the neural network, a high number of redundant neurons in the neural network model, and the imbalanced Cleveland dataset. Experiments were conducted with various customized models, and the results showed that the proposed model using RNN and multiple GRUs with the synthetic minority oversampling technique (SMOTE) reached the best performance level. This is the highest accuracy reported for an RNN on the Cleveland dataset and is promising for early heart disease prediction of patients.</span>
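The SMOTE step used to balance the Cleveland data can be sketched minimally: each synthetic sample is a random point on the segment between a minority sample and one of its k nearest minority neighbours. This toy version (pure Python, squared Euclidean distance) illustrates the idea; real pipelines typically use imbalanced-learn's implementation.

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic samples by interpolating minority-class points."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest minority neighbours of the chosen base point.
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # position along the base -> neighbour segment
        synthetic.append([b + gap * (n - b) for b, n in zip(base, nb)])
    return synthetic
```

Because each synthetic point is a convex combination of two existing minority points, it stays inside the minority class's convex hull rather than duplicating samples outright.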


2021
pp. 1-17
Author(s):  
Santosh Ashokrao Darade ◽  
M. Akkalakshmi

Recent studies show that even though cloud computing delivers the greatest performance for storage, computing, and networking services, the Internet of Things (IoT) still suffers from high processing latency, limited location awareness, and poor mobility support. To address these issues, this paper integrates fog computing and Software-Defined Networking (SDN). Importantly, fog computing extends computing and storage to the network edge, which can minimize latency while supporting mobility. Further, this paper incorporates a new optimization strategy to address the load-balancing problem in terms of latency minimization. A new Thresholded Whale Optimization Algorithm (T-WOA) is introduced for the optimal selection of the load distribution coefficient (the time allocated to a task). Finally, the performance of the proposed model is compared with conventional models with respect to latency. The simulation results prove that the SDN-based T-WOA algorithm can efficiently minimize latency and improve Quality of Service (QoS) in a Software-Defined Cloud/Fog architecture.
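The load-distribution-coefficient idea can be made concrete with a toy latency model. Everything below is an assumption for illustration: a workload is split by a coefficient alpha between fog (fast link, slow compute) and cloud (fast compute, extra round-trip), the two parts run in parallel, and an exhaustive grid search stands in for the paper's T-WOA optimizer.

```python
def total_latency(alpha, workload=100.0, fog_rate=20.0,
                  cloud_rate=80.0, cloud_rtt=2.0):
    """End-to-end latency when fraction alpha of the workload runs on fog.

    Hypothetical model: fog and cloud branches run in parallel, so total
    latency is the slower branch; the cloud branch pays a fixed round-trip.
    """
    fog_time = alpha * workload / fog_rate
    cloud_time = (1 - alpha) * workload / cloud_rate + cloud_rtt
    return max(fog_time, cloud_time)

def best_coefficient(steps=1000):
    """Stand-in for T-WOA: exhaustive search for alpha in [0, 1]."""
    candidates = [i / steps for i in range(steps + 1)]
    return min(candidates, key=total_latency)
```

With these numbers the optimum sits where the two branches finish together (alpha = 0.52, 2.6 time units), beating both all-fog and all-cloud placement; T-WOA's job is to find such balance points without enumerating the whole range.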


Author(s):  
Sanaa Elyassami ◽  
Achraf Ait Kaddour

<span lang="EN-US">Cardiovascular diseases remain the leading cause of death, taking an estimated 17.9 million lives each year and accounting for 31% of all global deaths. Patient records, including blood reports, cardiac echo reports, and physicians' notes, can be used to perform feature analysis and accurately classify heart disease patients. In this paper, an incremental deep learning model was developed and trained with stochastic gradient descent using feedforward neural networks. The chi-square test and dropout regularization were incorporated into the model to improve its generalization capabilities and classification performance. The impact of the learning rate and the depth of the neural network on performance was explored. The hyperbolic tangent, the rectified linear unit, the Maxout, and the exponential rectified linear unit were used as activation functions for the hidden- and output-layer neurons. To avoid over-optimistic results, the performance of the proposed model was evaluated using balanced accuracy and the overall predictive value in addition to accuracy, sensitivity, and specificity. The obtained results are promising, and the proposed model can be applied to larger datasets and used by physicians to accurately classify heart disease patients.</span>


2020
Vol 0 (0)
Author(s):  
S. T. Sukanya ◽  
Jerine

Objectives – The main intention of this paper is to propose a new improved K-means clustering algorithm in which the centroids are optimally tuned.

Methods – This paper introduces a new melanoma detection model that includes three major phases, viz. segmentation, feature extraction, and detection. For segmentation, a new improved K-means clustering algorithm is introduced, where the initial centroids are optimally tuned by a new algorithm termed the Lion Algorithm with New Mating Process (LANM), an improved version of the standard LA. The optimal selection considers multiple objectives: intensity-diverse centroids, the spatial map, and the frequency of occurrence. In the subsequent feature-extraction phase, the proposed Local Vector Pattern (LVP) and Grey-Level Co-occurrence Matrix (GLCM)-based features are extracted. These extracted features are then fed to a Deep Convolutional Neural Network (DCNN) for melanoma detection.

Results – Finally, the performance of the proposed model is evaluated against conventional models by determining both positive and negative measures. The analysis shows that, for normal skin images, the accuracy of the presented work is 0.86379, which is 47.83% and 0.245% better than traditional works such as conventional K-means and PA-MSA, respectively.

Conclusions – The overall analysis shows that the proposed model is more robust in melanoma prediction than state-of-the-art models.
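The "improved K-means" idea, i.e. K-means whose initial centroids come from an external optimizer rather than random choice, can be sketched as standard Lloyd iterations that simply accept the centroids as input. The optimizer (LANM in the paper) is out of scope here; this toy uses 2-D points in place of pixel features.

```python
def kmeans(points, centroids, n_iters=20):
    """Lloyd's algorithm with externally supplied initial centroids.

    points, centroids: lists of coordinate tuples. In the paper the initial
    centroids would be produced by LANM; here they are passed in directly.
    """
    for _ in range(n_iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            clusters[i].append(p)
        # Update step: move each centroid to its cluster mean (keep it if empty).
        centroids = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl
                     else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters
```

Since K-means only converges to a local optimum of the within-cluster variance, the quality of the initial centroids largely determines the final segmentation, which is precisely what motivates tuning them with a meta-heuristic.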


2020
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data approaches are broadly helpful to the healthcare and biomedical sectors for predicting disease. For trivial symptoms it is difficult to consult a doctor at the hospital at any time; big data thus provides essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible healthcare decisions. Conversely, the conventional medical care model offers structured input that requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Different datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized to bring the attribute values into a common range. Then, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to magnify the deviation between samples. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization algorithm (JA-MVO). The optimally extracted features are fed to hybrid deep learning algorithms, namely a "Deep Belief Network (DBN) and Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Finally, the comparative evaluation of the proposed prediction model against existing models certifies its effectiveness through various performance measures.
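The first two phases, normalization and weighted feature extraction, are straightforward to sketch. This toy version assumes min-max normalization (the abstract does not name the scheme) and takes the per-attribute weights as given; in the paper they would come from the JA-MVO optimizer.

```python
def min_max_normalize(column):
    """Scale one attribute column into [0, 1] (assumed normalization scheme)."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]

def weighted_features(rows, weights):
    """Normalize each attribute column, then scale it by its weight.

    rows: list of records (one list of attribute values per patient).
    weights: one optimizer-supplied weight per attribute (JA-MVO in the paper).
    """
    cols = list(zip(*rows))                      # column-major view
    norm = [min_max_normalize(list(c)) for c in cols]
    return [[w * v for w, v in zip(weights, row)] for row in zip(*norm)]
```

The weighting step stretches informative attributes and shrinks uninformative ones before the features reach the DBN/RNN stage, which is why the weights themselves are worth optimizing.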


Author(s):  
Huimin Lu ◽  
Rui Yang ◽  
Zhenrong Deng ◽  
Yonglin Zhang ◽  
Guangwei Gao ◽  
...  

Chinese image description generation tasks usually face challenges such as single-feature extraction, a lack of global information, and a lack of detailed description of the image content. To address these limitations, we propose a fuzzy attention-based DenseNet-BiLSTM Chinese image captioning method in this article. In the proposed method, we first improve the densely connected network to extract features of the image at different scales and to enhance the model's ability to capture weak features. At the same time, a bidirectional LSTM is used as the decoder to enhance the use of context information. The introduction of an improved fuzzy attention mechanism effectively improves the correspondence between image features and contextual information. We conduct experiments on the AI Challenger dataset to evaluate the performance of the model. The results show that, compared with other models, our proposed model achieves higher scores on objective quantitative evaluation indicators, including BLEU, METEOR, ROUGE-L, and CIDEr. The generated description sentences accurately express the image content.


The Internet of things (IoT) is an emerging concept that aims to connect billions of devices with each other at any time, regardless of their location. Unfortunately, these IoT devices do not have enough computing resources to process huge amounts of data, so cloud computing is relied on to provide those resources. However, cloud-based architectures fail in applications that demand very low and predictable latency; hence the need for fog computing, a new paradigm regarded as an extension of cloud computing that provides services between end users and the cloud. Fog-IoT is, however, confronted with various security and privacy risks and is prone to several cyberattacks, which is a serious challenge. The purpose of this work is to present security and privacy threats to the Fog-IoT platform and to discuss the security and privacy requirements of fog computing. We then propose an Intrusion Detection System (IDS) model using a standard Deep Neural Network's Back Propagation algorithm (BP-DNN) to mitigate intrusions that attack the Fog-IoT platform. The experimental dataset for the proposed model is obtained from the Canadian Institute for Cybersecurity 2017 dataset. Each attack instance in the dataset is separated into its own file: DoS (Denial of Service), DDoS (Distributed Denial of Service), Web Attack, Brute Force FTP, Brute Force SSH, Heartbleed, Infiltration, and Botnet (Bot Network) attacks. The proposed model is trained using a 3-layer BP-DNN.
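The core of back-propagation training can be shown at its smallest scale: one logistic unit trained by gradient descent to separate toy "benign" from "attack" feature vectors. This is a stand-in for a single neuron of the 3-layer BP-DNN, with invented two-dimensional features, not the CIC 2017 pipeline itself.

```python
import math
import random

def train_logistic(samples, labels, epochs=200, lr=0.5, seed=0):
    """Train a single logistic unit with stochastic gradient descent."""
    rng = random.Random(seed)
    n = len(samples[0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))
            err = p - y  # gradient of cross-entropy loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify as attack (1) if the unit's pre-activation is positive."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0
```

A full BP-DNN stacks layers of such units and propagates the same error term backwards through each layer's weights; the per-attack files from the dataset would supply the labelled samples.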


2021
Author(s):  
Emir Akcin ◽  
Kemal Sami Isleyen ◽  
Enes Ozcan ◽  
Alaa Ali Hameed ◽  
Erdal Alimovski ◽  
...  
