Optimally organized GRU-deep learning model with Chi2 feature selection for heart disease prediction

2021 ◽  
pp. 1-12
Author(s):  
Irfan Javid ◽  
Ahmed Khalaf Zager Alsaedi ◽  
Rozaida Binti Ghazali ◽  
Yana Mazwin ◽  
Muhammad Zulqarnain

In previous studies, various machine-driven decision support systems based on recurrent neural networks (RNN) were commonly proposed for the detection of cardiovascular disease. However, the majority of these approaches are restricted to feature preprocessing. In this paper, we concentrate on both feature refinement and the removal of the predictive model’s problems, e.g., underfitting and overfitting. By avoiding overfitting and underfitting, the model will demonstrate good performance on both the training and testing datasets. Overfitting of the training data is often triggered by inadequate network configuration and inappropriate features. We advocate using the Chi2 statistical model to remove irrelevant features while searching for the best-configured gated recurrent unit (GRU) with an exhaustive search strategy. The suggested hybrid technique, called Chi2 GRU, is tested against traditional ANN and GRU models, as well as different progressive machine learning models and previously reported strategies for heart disease prediction. The prediction accuracy of the proposed model is 92.17%. Compared with previously reported approaches, the obtained results are promising. The study’s results indicate that medical practitioners can use the proposed diagnostic method to reliably predict heart disease.
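
A minimal sketch of the kind of pipeline described above: chi-squared feature screening followed by a small GRU classifier. The dataset, the number of selected features, and all layer sizes are illustrative assumptions, not the configuration reported by the authors.

```python
# Illustrative sketch only: Chi2 feature selection followed by a small GRU
# classifier on a tabular heart-disease dataset (X: samples x features,
# y: binary labels). All hyperparameters are placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from tensorflow.keras import Sequential
from tensorflow.keras.layers import GRU, Dense, Dropout

X = np.abs(np.random.rand(300, 13))   # stand-in features (chi2 needs non-negative values)
y = np.random.randint(0, 2, 300)      # stand-in labels

# 1. Remove irrelevant features with the chi-squared test.
selector = SelectKBest(chi2, k=8)
X_sel = selector.fit_transform(X, y)

# 2. Reshape to (samples, timesteps, features) so the GRU can consume tabular rows.
X_seq = X_sel.reshape((X_sel.shape[0], 1, X_sel.shape[1]))
X_tr, X_te, y_tr, y_te = train_test_split(X_seq, y, test_size=0.2, random_state=42)

# 3. A small GRU network; an exhaustive search would loop over units, dropout, etc.
model = Sequential([
    GRU(32, input_shape=(1, X_sel.shape[1])),
    Dropout(0.2),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=10, batch_size=16, validation_data=(X_te, y_te), verbose=0)
```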

Author(s):  
Surenthiran Krishnan ◽  
Pritheega Magalingam ◽  
Roslina Ibrahim

This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) with a combination of multiple gated recurrent units (GRU), long short-term memory (LSTM), and the Adam optimizer. The proposed model achieved an outstanding accuracy of 98.6876%, the highest among existing RNN models. The model was developed in Python 3.7 by integrating the RNN with multiple GRUs in Keras, with TensorFlow as the backend for the deep learning process, supported by various Python libraries. Recent existing models using an RNN have reached an accuracy of 98.23%, and a deep neural network (DNN) has reached 98.5%. The common drawbacks of the existing models are low accuracy due to the complex build-up of the neural network, a high number of neurons with redundancy in the neural network model, and the imbalanced Cleveland dataset. Experiments were conducted with various customized models, and the results showed that the proposed model using an RNN and multiple GRUs with the synthetic minority oversampling technique (SMOTE) reached the best performance level. This is the highest accuracy reported for an RNN on the Cleveland dataset and is promising for making early heart disease predictions for patients.
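
A rough sketch of a stacked GRU/LSTM recurrent network with SMOTE oversampling, of the kind described above. The layer counts, unit sizes, and Cleveland preprocessing are assumptions, not the authors’ exact model.

```python
# Sketch of a stacked GRU/LSTM classifier with SMOTE class balancing.
# The 13-attribute input mimics the Cleveland dataset; sizes are illustrative.
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow.keras import Sequential
from tensorflow.keras.layers import GRU, LSTM, Dense, Dropout

X = np.random.rand(303, 13)           # stand-in for the 13 Cleveland attributes
y = np.random.randint(0, 2, 303)

# Balance the classes before training.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X, y)
X_seq = X_bal.reshape((X_bal.shape[0], 1, X_bal.shape[1]))

model = Sequential([
    GRU(64, return_sequences=True, input_shape=(1, X.shape[1])),
    GRU(32, return_sequences=True),
    LSTM(16),
    Dropout(0.2),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_seq, y_bal, epochs=20, batch_size=32, verbose=0)
```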


IoT ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 428-448
Author(s):  
Imtiaz Ullah ◽  
Ayaz Ullah ◽  
Mazhar Sajjad

The tremendous number and ubiquity of Internet of Things (IoT) applications have provided us with unprecedented productivity and simplified our daily lives. At the same time, the insecurity of these technologies means that our daily lives are surrounded by vulnerable computers, allowing multiple attacks to be launched via large-scale botnets through the IoT. These attacks have been successful in achieving their heinous objectives. A strong identification strategy is essential to keep devices secure. This paper proposes and implements a model for anomaly-based intrusion detection in IoT networks that uses a convolutional neural network (CNN) and gated recurrent unit (GRU) to detect and classify binary and multiclass IoT network data. The proposed model is validated using the BoT-IoT, IoT Network Intrusion, MQTT-IoT-IDS2020, and IoT-23 intrusion detection datasets. Our proposed binary and multiclass classification model achieved an exceptionally high level of accuracy, precision, recall, and F1 score.
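
A minimal sketch of a CNN-GRU classifier for numerically encoded intrusion records, as described above. The feature count, class count, and layer sizes are placeholders, not the configuration validated on the cited datasets.

```python
# Sketch of a CNN-GRU classifier for flow-level intrusion records, assuming the
# records have already been numerically encoded and scaled.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, GRU, Dense, Dropout

n_features, n_classes = 40, 5
X = np.random.rand(1000, n_features, 1)       # each record treated as a 1-D "signal"
y = np.random.randint(0, n_classes, 1000)

model = Sequential([
    Conv1D(64, kernel_size=3, activation="relu", input_shape=(n_features, 1)),
    MaxPooling1D(pool_size=2),
    Conv1D(32, kernel_size=3, activation="relu"),
    GRU(64),                                  # recurrent layer over the conv feature map
    Dropout(0.3),
    Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```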


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Chirag Roy ◽  
Satyendra Singh Yadav ◽  
Vipin Pal ◽  
Mangal Singh ◽  
Sarat Kumar Patra ◽  
...  

With rapid advancement in artificial intelligence (AI) and machine learning (ML), automatic modulation classification (AMC) using deep learning (DL) techniques has become very popular. This is even more relevant for Internet of things (IoT)-assisted wireless systems. This paper presents a lightweight ensemble model with convolution, long short-term memory (LSTM), and gated recurrent unit (GRU) layers. The proposed model is termed the deep recurrent convoluted network with additional gated layer (DRCaG). It has been tested on a dataset derived from RadioML2016(b) comprising eight modulation types: BPSK, QPSK, 8-PSK, 16-QAM, 4-PAM, CPFSK, GFSK, and WBFM. The performance of the proposed model has been presented through extensive simulation in terms of training loss, accuracy, and confusion matrix, with the signal-to-noise ratio (SNR) varying from −20 dB to +20 dB, and it demonstrates the superiority of DRCaG vis-a-vis existing models.
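
A rough sketch combining convolutional, LSTM, and GRU layers for modulation classification on I/Q samples; this is not the exact DRCaG architecture, only an illustration of stacking the three layer types under assumed input dimensions.

```python
# Sketch of a convolution + LSTM + GRU stack for modulation classification on
# RadioML-style I/Q frames (128 samples x 2 channels); sizes are illustrative.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, LSTM, GRU, Dense, Dropout

n_mods = 8
X = np.random.randn(512, 128, 2)              # (samples, time steps, I/Q channels)
y = np.random.randint(0, n_mods, 512)

model = Sequential([
    Conv1D(64, kernel_size=7, activation="relu", input_shape=(128, 2)),
    Conv1D(32, kernel_size=5, activation="relu"),
    LSTM(64, return_sequences=True),
    GRU(64),                                  # the additional gated layer
    Dropout(0.4),
    Dense(n_mods, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```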


Data ◽  
2021 ◽  
Vol 6 (12) ◽  
pp. 134
Author(s):  
Kangying Li ◽  
Biligsaikhan Batjargal ◽  
Akira Maeda

This paper introduces a framework for retrieving low-resource font typeface databases by handwritten input. A new deep learning model structure based on metric learning is proposed to extract the features of a character typeface and predict the category of handwritten input queries. Rather than relying on abundant training data, we aim to utilize ancient character font typefaces with only one sample per category. Our research aims to achieve decent retrieval performance over more than 600 categories of handwritten characters automatically. We consider utilizing generic handcrafted features to train a model that helps the voting classifier make the final prediction. The proposed method is implemented on the ‘Shirakawa font oracle bone script’ dataset as an isolated ancient-character-recognition system based on free ordering and connective strokes. We evaluate the proposed model on several standard character and symbol datasets. The experimental results showed that the proposed method performs well in extracting the features of symbol or character font images needed for further retrieval tasks. The demo system has been released, and it requires only one sample for each character to predict the user input. The extracted features are effective in finding the highest-ranked relevant item in retrieval tasks, can be utilized in various technical frameworks for ancient character recognition, and can be applied to educational application development.
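
A heavily simplified sketch of one-shot retrieval by metric learning: a small CNN embeds character images, and a handwritten query is matched to the single reference image per category by cosine similarity. The embedding network, image size, and similarity measure are assumptions, not the paper’s model or its voting classifier.

```python
# Simplified one-shot retrieval: embed images, rank categories by cosine similarity.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def build_embedder(img_size=64, emb_dim=128):
    return Sequential([
        Conv2D(32, 3, activation="relu", input_shape=(img_size, img_size, 1)),
        MaxPooling2D(),
        Conv2D(64, 3, activation="relu"),
        MaxPooling2D(),
        Flatten(),
        Dense(emb_dim),            # this embedding would be trained with a metric/triplet loss
    ])

embedder = build_embedder()
gallery = np.random.rand(600, 64, 64, 1)      # one reference image per category
query = np.random.rand(1, 64, 64, 1)          # handwritten input

g_emb = embedder.predict(gallery, verbose=0)
q_emb = embedder.predict(query, verbose=0)
g_emb /= np.linalg.norm(g_emb, axis=1, keepdims=True)
q_emb /= np.linalg.norm(q_emb, axis=1, keepdims=True)

scores = g_emb @ q_emb.T                      # cosine similarity against every category
top5 = np.argsort(-scores[:, 0])[:5]          # highest-ranked candidate categories
```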


Author(s):  
Sanaa Elyassami ◽  
Achraf Ait Kaddour

<span lang="EN-US">Cardiovascular diseases remain the leading cause of death, taking an estimated 17.9 million lives each year and representing 31% of all global deaths. The patient records including blood reports, cardiac echo reports, and physician’s notes can be used to perform feature analysis and to accurately classify heart disease patients. In this paper, an incremental deep learning model was developed and trained with stochastic gradient descent using feedforward neural networks. The chi-square test and the dropout regularization have been incorporated into the model to improve the generalization capabilities and the performance of the heart disease patients' classification model. The impact of the learning rate and the depth of neural networks on the performance were explored. The hyperbolic tangent, the rectifier linear unit, the Maxout, and the exponential rectifier linear unit were used as activation functions for the hidden and the output layer neurons. To avoid over-optimistic results, the performance of the proposed model was evaluated using balanced accuracy and the overall predictive value in addition to the accuracy, sensitivity, and specificity. The obtained results are promising, and the proposed model can be applied to a larger dataset and used by physicians to accurately classify heart disease patients.</span>


2022 ◽  
Vol 2022 ◽  
pp. 1-22
Author(s):  
K. Butchi Raju ◽  
Suresh Dara ◽  
Ankit Vidyarthi ◽  
V. MNSSVKR Gupta ◽  
Baseem Khan

Chronic illnesses like chronic respiratory disease, cancer, heart disease, and diabetes are threats to humans around the world. Among them, heart disease, with its disparate features and symptoms, complicates diagnosis. With the emergence of smart wearable gadgets, fog computing and “Internet of Things” (IoT) solutions have become necessary for diagnosis. The proposed model integrates Edge-Fog-Cloud computing for the accurate and fast delivery of outcomes. The hardware components collect data from different patients. Heart features are extracted from the collected signals to obtain significant attributes, and features from other patient attributes are gathered as well. All these features are then fed to the diagnostic system, which uses an Optimized Cascaded Convolutional Neural Network (CCNN). Here, the hyperparameters of the CCNN are optimized by Galactic Swarm Optimization (GSO). The performance analysis shows that the precision of the suggested GSO-CCNN is 3.7%, 3.7%, 3.6%, 7.6%, 67.9%, 48.4%, 33%, 10.9%, and 7.6% higher than that of PSO-CCNN, GWO-CCNN, WOA-CCNN, DHOA-CCNN, DNN, RNN, LSTM, CNN, and CCNN, respectively. Thus, the comparative analysis of the suggested system ensures its efficiency over the conventional models.
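
For illustration only, the sketch below tunes a cascaded CNN’s hyperparameters with a plain random search standing in for Galactic Swarm Optimization; the signal length, search ranges, and architecture are assumptions, not the authors’ GSO-CCNN.

```python
# Hyperparameter tuning of a cascaded CNN; random search is a stand-in for GSO.
import random
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

X = np.random.rand(400, 187, 1)               # stand-in heartbeat segments
y = np.random.randint(0, 2, 400)

def build_ccnn(filters, kernel, dense_units):
    model = Sequential([
        Conv1D(filters, kernel, activation="relu", input_shape=(187, 1)),
        MaxPooling1D(2),
        Conv1D(filters * 2, kernel, activation="relu"),   # cascaded second stage
        MaxPooling1D(2),
        Flatten(),
        Dense(dense_units, activation="relu"),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

best = None
for _ in range(5):                             # a swarm optimizer would guide this search instead
    params = (random.choice([16, 32, 64]), random.choice([3, 5]), random.choice([32, 64]))
    m = build_ccnn(*params)
    hist = m.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
    acc = hist.history["val_accuracy"][-1]
    if best is None or acc > best[0]:
        best = (acc, params)
```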


2021 ◽  
Vol 9 (4) ◽  
pp. 383
Author(s):  
Ting Yu ◽  
Jichao Wang

Mean wave period (MWP) is one of the key parameters affecting the design of marine facilities. Currently, there are two main approaches for forecasting wave parameters, numerical and data-driven methods, of which the latter are widely used. However, few studies have focused on MWP forecasting, and even fewer have investigated it with spatial and temporal information. In this study, correlations between ocean dynamic parameters are explored to obtain appropriate input features, namely significant wave height (SWH) and MWP. Subsequently, a data-driven approach, the convolution gated recurrent unit (Conv-GRU) model with spatiotemporal characteristics, is utilized to forecast the MWP field with 1-, 3-, 6-, 12-, and 24-h lead times in the South China Sea. Six points at different locations and six consecutive moments at 12-h intervals are selected to study the forecasting ability of the proposed model. The Conv-GRU model performs better than the single gated recurrent unit (GRU) model in terms of root mean square error (RMSE), scattering index (SI), bias, and Pearson’s correlation coefficient (R). As the lead time increases, the forecast accuracy decreases; nevertheless, the Conv-GRU model produces a relatively smooth forecast curve and shows a clear advantage in short-term forecasts of the MWP field, with an RMSE of 0.121 m at a 1-h lead time.
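
A rough sketch of the spatiotemporal forecasting idea: per-frame convolutions feeding a GRU over time to predict the next MWP field. Keras has no built-in Conv-GRU layer, so TimeDistributed Conv2D plus GRU is used here as a stand-in for the Conv-GRU cell; grid size, history length, and units are assumptions.

```python
# Spatiotemporal forecast sketch: per-frame Conv2D features aggregated by a GRU.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (TimeDistributed, Conv2D, MaxPooling2D,
                                     Flatten, GRU, Dense, Reshape)

T, H, W, C = 8, 24, 24, 2                     # 8 past frames of SWH + MWP fields
X = np.random.rand(200, T, H, W, C)
Y = np.random.rand(200, H, W)                 # MWP field at the next lead time

model = Sequential([
    TimeDistributed(Conv2D(16, 3, activation="relu", padding="same"),
                    input_shape=(T, H, W, C)),
    TimeDistributed(MaxPooling2D()),
    TimeDistributed(Flatten()),
    GRU(128),                                 # temporal aggregation of spatial features
    Dense(H * W),
    Reshape((H, W)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=3, batch_size=16, verbose=0)
```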


2021 ◽  
Vol 13 (10) ◽  
pp. 2003
Author(s):  
Daeyong Jin ◽  
Eojin Lee ◽  
Kyonghwan Kwon ◽  
Taeyun Kim

In this study, we used convolutional neural networks (CNNs), which are well-known deep learning models suitable for image data processing, to estimate the temporal and spatial distribution of chlorophyll-a in a bay. The training data required to construct the deep learning model were acquired from satellite ocean color data and a hydrodynamic model. Chlorophyll-a, total suspended sediment (TSS), visibility, and colored dissolved organic matter (CDOM) were extracted from the satellite ocean color data, and water level, currents, temperature, and salinity were generated from the hydrodynamic model. We developed CNN Model I, which estimates the concentration of chlorophyll-a using a 48 × 27 overall image, and CNN Model II, which uses 7 × 7 segmented images. Because CNN Model II conducts estimation using only data around the points of interest, its quantity of training data is more than 300 times larger than that of CNN Model I. Consequently, it was possible to extract and analyze the inherent patterns in the training data, improving the predictive ability of the deep learning model. The average root mean square error (RMSE) obtained by applying CNN Model II was 0.191, and when the prediction was good, the coefficient of determination (R2) exceeded 0.91. Finally, we performed a sensitivity analysis, which revealed that CDOM is the most influential variable in estimating the spatiotemporal distribution of chlorophyll-a.
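
A minimal sketch of the patch-based idea behind CNN Model II: a small CNN regresses chlorophyll-a from a 7 × 7 neighborhood of input variables around each target point. The channel count, filter sizes, and depth are assumptions, not the authors’ network.

```python
# Patch-based chlorophyll-a regression: one 7x7 multichannel patch per target point.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense

n_channels = 8                                # stand-in for the satellite + hydrodynamic variables
X = np.random.rand(5000, 7, 7, n_channels)    # one 7x7 patch per target point
y = np.random.rand(5000)                      # chlorophyll-a at the patch centre

model = Sequential([
    Conv2D(32, 3, activation="relu", input_shape=(7, 7, n_channels)),
    Conv2D(64, 3, activation="relu"),
    Flatten(),
    Dense(64, activation="relu"),
    Dense(1),                                 # regression output
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=128, validation_split=0.2, verbose=0)
```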


2021 ◽  
Vol 263 (6) ◽  
pp. 486-492
Author(s):  
Shuang Yang ◽  
Xiangyang Zeng

Underwater acoustic target recognition is an important part of underwater acoustic signal processing and an important technical support for underwater acoustic information acquisition and underwater acoustic information confrontation. Taking into account that the gated recurrent unit (GRU) has an internal feedback mechanism that can reflect the temporal correlation of underwater acoustic target features, a model combining the gated recurrent unit with Network in Network (NIN) is proposed in this paper to recognize underwater acoustic targets. The proposed model introduces NIN to compress the hidden states of the GRU while retaining the original timing characteristics of the underwater acoustic target features. Experiments on raw underwater acoustic signals demonstrate that the proposed model achieves a higher recognition rate and faster calculation speed than the multi-layer stacked GRU model.
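
A minimal sketch of pairing a GRU with NIN-style 1 × 1 convolutions that compress its hidden states, applied to framed acoustic signals; frame length, units, and class count are illustrative assumptions.

```python
# GRU followed by NIN-style 1x1 convolutions that compress the hidden-state sequence.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import GRU, Conv1D, GlobalAveragePooling1D, Dense

n_classes = 4
X = np.random.randn(600, 100, 64)             # (samples, time frames, features per frame)
y = np.random.randint(0, n_classes, 600)

model = Sequential([
    GRU(128, return_sequences=True, input_shape=(100, 64)),
    Conv1D(64, kernel_size=1, activation="relu"),   # 1x1 conv compresses hidden states per step
    Conv1D(32, kernel_size=1, activation="relu"),
    GlobalAveragePooling1D(),
    Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```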


2021 ◽  
Author(s):  
J. Annrose ◽  
N. Herald Anantha Rufus ◽  
C. R. Edwin Selva Rex ◽  
D. Godwin Immanuel

Abstract: Bean, botanically called Phaseolus vulgaris L., belongs to the Fabaceae family. During bean disease identification, unnecessary economic losses occur due to delays in the treatment period, incorrect treatment, and lack of knowledge. The existing deep learning and machine learning techniques face several issues, such as high computational complexity, higher cost associated with the training data, longer execution time, noise, feature dimensionality, lower accuracy, and low speed. To tackle these problems, we have proposed a hybrid deep learning model with an Archimedes optimization algorithm (HDL-AOA) for bean disease classification. In this work, there are five bean classes, of which one is a healthy class, whereas the remaining four classes indicate different diseases, namely bean halo blight, Pythium diseases, Rhizoctonia root rot, and anthracnose abnormalities, acquired from the Soybean (Large) Data Set. The hybrid deep learning technique is the combination of wavelet packet decomposition (WPD) and long short-term memory (LSTM). Initially, the WPD decomposes the input images into four sub-series, and for each sub-series an LSTM network is developed. During bean disease classification, the Archimedes optimization algorithm (AOA) enhances the classification accuracy of the multiple single LSTM networks. The HDL-AOA model is implemented in MATLAB for bean disease classification. The proposed model achieves a lower MAPE than other existing methods. Finally, the proposed HDL-AOA model achieves excellent classification results on different evaluation measures such as accuracy, specificity, sensitivity, precision, recall, and F-score.
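
A rough sketch of the WPD-plus-LSTM idea: each image is decomposed into four level-1 wavelet-packet sub-bands, one small LSTM is trained per sub-band, and their predictions are averaged; the Archimedes optimization step that would tune the combination is omitted, and image size, units, and class count are assumptions.

```python
# Wavelet packet decomposition (level 1) + one LSTM per sub-band, averaged ensemble.
import numpy as np
import pywt
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

n_classes, img_size = 5, 64
images = np.random.rand(200, img_size, img_size)
labels = np.random.randint(0, n_classes, 200)

def wpd_subbands(img):
    wp = pywt.WaveletPacket2D(data=img, wavelet="db1", maxlevel=1)
    return [wp[p].data for p in ("a", "h", "v", "d")]   # four level-1 sub-bands

# Group sub-bands across the dataset: four arrays of shape (N, 32, 32).
subband_sets = [np.stack(s) for s in zip(*(wpd_subbands(im) for im in images))]

models, preds = [], []
for Xs in subband_sets:                        # one LSTM per sub-band, rows as time steps
    m = Sequential([
        LSTM(32, input_shape=Xs.shape[1:]),
        Dense(n_classes, activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    m.fit(Xs, labels, epochs=3, batch_size=32, verbose=0)
    models.append(m)
    preds.append(m.predict(Xs, verbose=0))

ensemble = np.mean(preds, axis=0)              # AOA would tune this combination instead
predicted_classes = ensemble.argmax(axis=1)
```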

