Multi Disease-Prediction Framework Using Hybrid Deep Learning: An Optimal Prediction Model (Preprint)

2020
Author(s): Anusha Ampavathi, Vijaya Saradhi T

Big data approaches are broadly useful to the healthcare and biomedical sectors for disease prediction. For trivial symptoms, it is difficult to meet a doctor at the hospital at any time, so big data provides essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible healthcare decisions. Conversely, the conventional medical-care model offers structured input that requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Different datasets pertaining to diabetes, hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease are gathered from the benchmark UCI repository for the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, each dataset is normalized to bring the attribute ranges to a common scale. Then weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to enlarge the deviation between samples. The weight function is optimized using a combination of two meta-heuristics, the Jaya Algorithm-based Multi-Verse Optimization algorithm (JA-MVO). The optimally extracted features are fed to hybrid deep learning algorithms, a Deep Belief Network (DBN) and a Recurrent Neural Network (RNN). As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. A comparative evaluation of the proposed model against existing models confirms its effectiveness across various performance measures.
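A minimal sketch of the first two phases described above: (a) min-max data normalization and (b) weighted feature extraction, where each normalized attribute is multiplied by a weight. In the paper the weights come from the JA-MVO meta-heuristic; here a fixed illustrative weight vector stands in, and the data values are toy numbers.

```python
# Phase (a): min-max normalization, and phase (b): weighted feature
# extraction. The weight vector is a stand-in for JA-MVO-optimized weights.

def min_max_normalize(rows):
    """Scale every attribute (column) into [0, 1]."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        [(v - l) / (h - l) if h > l else 0.0
         for v, l, h in zip(row, lo, hi)]
        for row in rows
    ]

def weighted_features(rows, weights):
    """Multiply each normalized attribute by its weight."""
    return [[v * w for v, w in zip(row, weights)] for row in rows]

data = [[120, 80, 1.2], [140, 95, 3.4], [100, 70, 0.8]]  # toy attribute rows
normalized = min_max_normalize(data)
weights = [0.9, 0.4, 0.7]          # stand-in for JA-MVO-optimized weights
features = weighted_features(normalized, weights)
```

The normalized features would then be fed to the DBN/RNN stage; that step is omitted here.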

2021
pp. 0734242X2110039
Author(s): Elham Shadkam

Today, reverse logistics (RL) is one of the main activities of supply chain management, covering all physical activities associated with returned products (such as collection, recovery, recycling, and destruction). Properly designing and implementing RL therefore increases customer satisfaction while reducing inventory and transportation costs. In this paper, a mixed-integer linear programming model for an integrated forward-logistics and RL network design is presented that minimizes fixed costs, material flow costs, and the costs of building potential centres. Owing to the outbreak of the global coronavirus pandemic (COVID-19) at the beginning of 2020 and the consequent increase in medical waste, the need for a reverse logistics system to manage that waste is strongly felt. With worldwide vaccination expected in the near future, this waste will grow further and must be managed carefully. For this purpose, the proposed RL model is applied to COVID-19 waste management and especially vaccine waste. The network consists of three parts: factory, consumers' centres, and recycling centres, each of which has different sub-parts. Finally, the proposed model is solved using the cuckoo optimization algorithm, one of the newest and most powerful meta-heuristic algorithms, and the computational results are presented along with a sensitivity analysis.
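A toy sketch of a cuckoo-search-style meta-heuristic of the kind named above, minimizing a continuous stand-in for a logistics cost function. This is illustrative only: the paper's actual model is an integer program, and the Lévy flights of the real algorithm are simplified here to Gaussian steps with a pull toward the best nest.

```python
# Simplified cuckoo search on a toy convex "total logistics cost" with
# minimum at (3, 5). Nest abandonment rebuilds the worst fraction `pa`.
import random

random.seed(42)

def cost(x):
    return (x[0] - 3.0) ** 2 + (x[1] - 5.0) ** 2

def cuckoo_search(n_nests=15, iters=200, pa=0.25, step=0.5):
    nests = [[random.uniform(-10, 10) for _ in range(2)]
             for _ in range(n_nests)]
    for _ in range(iters):
        best_nest = min(nests, key=cost)
        for i, nest in enumerate(nests):
            # New candidate: random walk biased toward the best nest.
            new = [n + step * random.gauss(0, 1) + 0.1 * (b - n)
                   for n, b in zip(nest, best_nest)]
            if cost(new) < cost(nest):          # greedy replacement
                nests[i] = new
        # Abandon the worst pa-fraction of nests and rebuild them randomly.
        nests.sort(key=cost)
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = [random.uniform(-10, 10) for _ in range(2)]
    return min(nests, key=cost)

best = cuckoo_search()
```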


Big data is large-scale data collected for knowledge discovery, and it has been widely used in various applications. Big data often includes image data from those applications and requires effective techniques to process it. In this paper, a survey of big image data research is conducted to analyze the performance of existing methods. Deep learning techniques provide better performance than other methods, including wavelet-based methods, but require more computation time; this drawback can be mitigated by lightweight methods.


2021
Vol 11 (10), pp. 2618-2625
Author(s): R. T. Subhalakshmi, S. Appavu Alias Balamurugan, S. Sasikala

In recent times, the COVID-19 epidemic has spread in an extreme manner while only an inadequate number of rapid testing kits is available. Consequently, it is essential to develop automated techniques that detect COVID-19 from radiological images. The most common symptoms of COVID-19 are sore throat, fever, and dry cough, and symptoms can progress to a severe form of pneumonia with serious complications. As medical imaging is not currently recommended in Canada for primary COVID-19 diagnosis, computer-aided diagnosis systems might aid early detection of COVID-19 abnormalities, help monitor disease progression, and potentially reduce mortality rates. In this approach, a deep-learning-based design for feature extraction and classification is employed for automatic COVID-19 diagnosis from computed tomography (CT) images. The proposed model operates in three main stages: pre-processing, feature extraction, and classification. The design incorporates the fusion of deep features using GoogLeNet models. Finally, a multi-scale recurrent neural network (RNN) classifier identifies and classifies the test CT images into distinct class labels. Experimental validation uses the open-source COVID-CT dataset, which comprises a total of 760 CT images, and the results show superior performance with maximum sensitivity, specificity, and accuracy.
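The abstract does not specify the fusion operator, so the sketch below assumes a common choice: concatenating L2-normalized deep-feature vectors from two network branches into one descriptor before classification. The feature vectors are toy stand-ins for real GoogLeNet activations.

```python
# Assumed concatenation-based fusion of two deep-feature vectors, each
# L2-normalized so neither branch dominates the fused descriptor.
import math

def l2_normalize(v):
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def fuse(features_a, features_b):
    """Concatenate two normalized deep-feature vectors."""
    return l2_normalize(features_a) + l2_normalize(features_b)

branch_a = [0.5, 1.0, 2.0]   # toy pooled features from one branch
branch_b = [3.0, 4.0]        # toy pooled features from another branch
fused = fuse(branch_a, branch_b)
```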


Sensors
2019
Vol 19 (5), pp. 982
Author(s): Hyo Lee, Ihsan Ullah, Weiguo Wan, Yongbin Gao, Zhijun Fang

Make and model recognition (MMR) of vehicles plays an important role in automatic vision-based systems. This paper proposes a novel deep learning approach to MMR using the SqueezeNet architecture. The frontal views of vehicle images are first extracted and fed into a deep network for training and testing. A variant of the vanilla SqueezeNet with bypass connections between the Fire modules is employed, which makes the MMR system more efficient. Experimental results on our collected large-scale vehicle dataset show that the proposed model achieves a 96.3% rank-1 recognition rate with an economical inference time of 108.8 ms. For inference, the deployed model requires less than 5 MB of storage and is therefore highly viable for real-time applications.
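A back-of-envelope parameter count shows why SqueezeNet stays under 5 MB. A Fire module squeezes the input with 1x1 convolutions, then expands with parallel 1x1 and 3x3 convolutions; the bypass connections mentioned above add no parameters at all. The channel sizes below follow the fire2 configuration from the original SqueezeNet paper.

```python
# Parameter count of one SqueezeNet Fire module (weights + biases).
def fire_params(in_ch, squeeze, expand1x1, expand3x3):
    squeeze_p = in_ch * squeeze + squeeze              # 1x1 squeeze conv
    expand1_p = squeeze * expand1x1 + expand1x1        # 1x1 expand conv
    expand3_p = squeeze * expand3x3 * 9 + expand3x3    # 3x3 expand conv
    return squeeze_p + expand1_p + expand3_p

# fire2: 96 input channels, squeeze to 16, expand to 64 (1x1) + 64 (3x3).
p = fire_params(96, 16, 64, 64)
```

At four bytes per float32 weight, this entire module costs under 50 KB, which is how eight such modules plus two plain convolutions fit well within the 5 MB budget quoted above.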


Author(s): Surenthiran Krishnan, Pritheega Magalingam, Roslina Ibrahim

This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) combined with multiple gated recurrent units (GRU), long short-term memory (LSTM), and the Adam optimizer. The proposed model achieves an outstanding accuracy of 98.6876%, the highest among existing RNN models. The model was developed in Python 3.7 by integrating an RNN with multiple GRUs, using Keras with the TensorFlow backend and various supporting Python libraries. Recent existing RNN models have reached an accuracy of 98.23%, and a deep neural network (DNN) has reached 98.5%. The common drawbacks of the existing models are low accuracy due to overly complex network build-up, a high number of redundant neurons, and the imbalance of the Cleveland dataset. Experiments with various customized models showed that the proposed RNN with multiple GRUs and the synthetic minority oversampling technique (SMOTE) reached the best performance. This is the highest accuracy reported for an RNN on the Cleveland dataset and is promising for early heart disease prediction.
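A minimal sketch of the SMOTE step used above to balance the Cleveland dataset: each synthetic minority sample is interpolated between a minority sample and one of its k nearest minority neighbours. The feature vectors below are toy values, not Cleveland records.

```python
# Toy SMOTE: synthesize minority samples by interpolating toward a
# randomly chosen one of the k nearest minority neighbours.
import random

random.seed(0)

def smote(minority, n_synthetic, k=2):
    """Generate n_synthetic samples between minority neighbours."""
    synthetic = []
    for _ in range(n_synthetic):
        base = random.choice(minority)
        # k nearest minority neighbours of base (squared Euclidean).
        neighbours = sorted(
            (m for m in minority if m is not base),
            key=lambda m: sum((a - b) ** 2 for a, b in zip(base, m)),
        )[:k]
        nb = random.choice(neighbours)
        gap = random.random()                      # interpolation factor
        synthetic.append([a + gap * (b - a) for a, b in zip(base, nb)])
    return synthetic

minority = [[1.0, 2.0], [1.2, 2.1], [0.9, 1.8]]    # toy minority class
new_samples = smote(minority, n_synthetic=3)
```

Because every synthetic point lies on a segment between two minority points, it stays inside the minority class's convex hull rather than duplicating existing records.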


2021
Vol 2021, pp. 1-8
Author(s): Yixuan Zhao, Qinghua Tang

Big data refers to large-scale, rapidly growing collections of information whose size and complexity cannot be easily stored or processed by conventional data processing tools. Big data research methods have been widely used in many disciplines, as methods based on massive data analysis have aroused great interest in scientific methodology. In this paper, we propose a deep computational model to analyze the factors that affect social mental health. The proposed model utilizes a large, manually annotated microblog dataset, divided into six main factors affecting social mental health: economic market correlation, political democracy, management law, cultural trends, the expansion of information, and the fast rhythm of life. The model compares the review data of the different influencing factors to obtain the degree of correlation between social mental health and each factor.
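The abstract does not define its "correlation degree"; a reasonable stand-in is a Pearson correlation between a mental-health index and the review volume of one influencing factor, sketched below with toy numbers in place of the annotated microblog counts.

```python
# Pearson correlation between a toy mental-health index and the review
# volume of one influencing factor ("fast rhythm of life").
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

mental_health_index = [3.1, 2.8, 2.5, 2.2, 2.0]      # toy yearly index
rhythm_of_life_reviews = [10, 14, 19, 25, 30]        # toy review counts
r = pearson(mental_health_index, rhythm_of_life_reviews)
```

A strongly negative r here would be read as the factor's review volume rising while the mental-health index falls.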


2021
Vol 16 (3)
Author(s): Khushbu Verma, Ankit Singh Bartwal, Mathura Prasad Thapliyal

People nowadays suffer from a variety of heart ailments as a result of their environment and lifestyle choices, so analyzing sickness at an early stage becomes a critical responsibility. Data mining uses disease data to uncover important knowledge. In this research paper, we employ a hybrid combination of genetic-algorithm-based feature selection and an ensemble deep neural network model for heart disease prediction. The model uses a learning rate of 0.04 and the Adam optimizer. The proposed algorithm reaches 98% accuracy in heart disease prediction, which is higher than past approaches. Other existing models such as random forest, logistic regression, support vector machine, and decision tree algorithms take more time and give less accuracy compared to the proposed hybrid deep-learning-based approach.
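A toy sketch of genetic-algorithm-based feature selection as described above: individuals are binary masks over the features, and fitness rewards masks that keep synthetically "informative" features while staying small. In the real system the fitness would be classifier accuracy on held-out data; the informative set here is an invented stand-in.

```python
# GA feature selection over binary masks: selection of the fitter half,
# one-point crossover, and bit-flip mutation.
import random

random.seed(1)

N_FEATURES = 8
INFORMATIVE = {0, 3, 5}           # toy ground truth: features worth keeping

def fitness(mask):
    kept = {i for i, bit in enumerate(mask) if bit}
    hits = len(kept & INFORMATIVE)
    return hits - 0.1 * len(kept)          # reward hits, penalize size

def evolve(pop_size=20, generations=40, mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mut) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_mask = evolve()
```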


2021
Vol 2021, pp. 1-17
Author(s): Bin Li, Yuqing He

The synergy of computational logistics and deep learning provides a new methodology and solution for the operational decisions of container terminal handling systems (CTHS) at the strategic, tactical, and executive levels. First, the tactical operational complexity of container terminal logistics is discussed through computational logistics, and the liner handling volume (LHV) is shown to influence a series of terminal scheduling decision problems. Next, a feature-extraction-based lightweight convolutional and recurrent neural network adaptive computing model (FEB-LCR-ACM) is presented to predict the LHV by fusing multiple deep learning algorithms and mechanisms, in particular the tsfresh feature-extraction package. A container-terminal-oriented logistics service scheduling decision support design paradigm is then tentatively put forward on the basis of FEB-LCR-ACM. Finally, a typical large-scale container terminal in China is chosen to implement, execute, and evaluate FEB-LCR-ACM against the terminal running log on the LHV indicator. Despite severe fluctuation of the LHV between 2 and 4215 twenty-foot equivalent units (TEUs), when forecasting the LHV of 300 liners from five years of logs, almost 80% of forecasts fall within 100 TEUs of the actual value; when predicting the operation of 350 ships from six years of logs, the share within 100 TEUs reaches nearly 90%. Both deep learning results are far ahead of those of a classical machine learning baseline similar to a Gaussian support vector machine. FEB-LCR-ACM thus achieves sufficiently good LHV prediction with a lightweight deep learning architecture on typically small datasets, and it is expected to partially overcome the operational nonlinearity, dynamics, coupling, and complexity of CTHS.
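The accuracy indicator quoted above, the share of LHV forecasts within 100 TEUs of the actual value, can be sketched directly; the predictions and actuals below are toy values, not the terminal log data.

```python
# Share of forecasts whose absolute error is within a TEU tolerance.
def within_teu(actual, predicted, tolerance=100):
    hits = sum(1 for a, p in zip(actual, predicted)
               if abs(a - p) <= tolerance)
    return hits / len(actual)

actual    = [1500, 2300, 310, 4100, 980]    # toy LHVs per liner (TEUs)
predicted = [1450, 2395, 540, 4120, 1005]   # toy model forecasts (TEUs)
share = within_teu(actual, predicted)
```

Here four of the five toy forecasts land within 100 TEUs, giving a share of 0.8, i.e. the kind of "almost 80%" figure the paper reports at much larger scale.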


Sensors
2020
Vol 20 (14), pp. 3876
Author(s): Tiantian Zhu, Zhengqiu Weng, Guolang Chen, Lei Fu

With the popularity of smartphones and the development of hardware, mobile devices are widely used. To ensure availability and security, protecting private data on mobile devices without disturbing users has become a key issue. Many works have proposed mobile user authentication methods based on motion sensors, but existing methods suffer from poor de-noising ability, insufficient availability, and low coverage of feature extraction. To address these shortcomings, this paper proposes a hybrid deep learning system for complex real-world mobile authentication. The system includes: (1) a variational mode decomposition (VMD) based de-noising method that enhances singular features of the sensor signal, such as discontinuities and mutations, and widens the range of extracted features; (2) a semi-supervised co-training (Tri-Training) method that effectively handles mislabeling in complex real-world situations; and (3) a combined convolutional neural network (CNN) and support vector machine (SVM) model for effective hybrid feature extraction and training. Training results on large-scale real-world data show that the proposed system achieves 95.01% authentication accuracy, outperforming existing state-of-the-art methods.
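A minimal sketch of the Tri-Training idea in step (2): an unlabeled sample receives a pseudo-label for one classifier only when the other two classifiers agree on it. The three "classifiers" here are toy threshold rules standing in for independently trained models, and the retraining step itself is omitted.

```python
# Tri-Training labeling rule: trust a pseudo-label only when the two
# peer classifiers agree; the target classifier would then be retrained
# on these pseudo-labeled samples (retraining not shown).

def make_classifier(threshold):
    return lambda x: 1 if x >= threshold else 0

clf_a = make_classifier(0.4)   # toy classifier to be retrained
clf_b = make_classifier(0.5)   # toy peer 1
clf_c = make_classifier(0.6)   # toy peer 2

def pseudo_labels(peer1, peer2, unlabeled):
    """Return (sample, label) pairs on which both peers agree."""
    labeled = []
    for x in unlabeled:
        y1, y2 = peer1(x), peer2(x)
        if y1 == y2:                      # agreement => trust the label
            labeled.append((x, y1))
    return labeled

unlabeled = [0.30, 0.45, 0.55, 0.70]      # toy unlabeled sensor scores
new_training_set = pseudo_labels(clf_b, clf_c, unlabeled)
```

The sample at 0.55 is dropped because the peers disagree on it; all others get pseudo-labels for retraining clf_a.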

