An Improved Deep Learning Algorithm for Diabetes Prediction

2022 ◽  
pp. 103-119
Author(s):  
Basetty Mallikarjuna ◽  
Supriya Addanke ◽  
Anusha D. J.

This chapter introduces a novel deep learning approach for diabetes prediction. The related work reviews the various ML algorithms that have been used in the field for early detection and post-examination of diabetes. The chapter proposes the Jaya-Tree algorithm, an update of the existing random forest algorithm, which classifies samples into the two classes named ‘Jaya’ and ‘Apajaya’. On the Pima Indian diabetes dataset 2020 (PIS), the proposed approach predicts diabetes with 97% accuracy.
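A minimal sketch of the random-forest baseline that the Jaya-Tree algorithm is described as extending, applied to the Pima Indians data. The file name, column layout, and hyperparameters below are assumptions for illustration, not the chapter's exact setup.

```python
# Sketch of a random-forest baseline on the Pima Indians diabetes data.
# File path, column names, and hyperparameters are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("pima_indians_diabetes.csv")        # hypothetical path
X, y = df.drop(columns=["Outcome"]), df["Outcome"]   # binary label (the chapter's 'Jaya'/'Apajaya' split)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```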

2021 ◽  
Author(s):  
Tony Lee ◽  
Matthias Ziegler

Current practices of personnel selection often use questionnaires and interviews to assess candidates’ personality, but the effectiveness of both approaches can be hampered if socially desirable responding (SDR) occurs. Detecting biases such as SDR is important to ensure valid personnel selection for any organization, yet current instruments for assessing SDR are either inefficient or insufficient. In this paper, we propose a novel approach to appraise job applicants’ SDR tendency using Artificial Intelligence (AI)-based techniques. Our study extracts thousands of image and voice features from the video presentations of 91 simulated applicants to train two deep learning models that predict their SDR tendency. The results show that our two models, the Deep Image Model and the Deep Voice Model, predict SDR tendency with accuracy rates of 82.55% and 88.89%, respectively. The Deep Voice Model moreover outperformed a baseline model built on the popular deep learning architecture ResNet by 4.35%. These findings suggest that organizations can use AI-driven technologies to assess job applicants’ SDR tendency during recruitment and improve the performance of their personnel selection.
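An illustrative sketch only: a small feed-forward network trained on pre-extracted voice feature vectors for binary SDR classification. The feature dimensionality, architecture, and training settings are assumptions; the paper's Deep Voice Model is not specified at this level of detail, and random placeholder data stands in for the 91 applicants' features.

```python
# Hedged sketch: feed-forward classifier on extracted voice features.
# Feature size, layers, and data are placeholders, not the authors' model.
import numpy as np
import tensorflow as tf

n_features = 1024                                        # assumed feature-vector size
X = np.random.rand(91, n_features).astype("float32")     # placeholder feature matrix
y = np.random.randint(0, 2, size=91)                     # placeholder SDR-tendency labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # probability of high SDR tendency
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=16, validation_split=0.2)
```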


Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 788-803
Author(s):  
Ahmed Mahdi Abdulkadium

Robotics is mainly concerned with robot movement, and improved obstacle avoidance is the issue addressed here. The system consists of a microcontroller to process the data and ultrasonic sensors to detect obstacles in the robot's path. Artificial intelligence is used to predict the presence of obstacles on the path. In this research, a random forest algorithm is used and improved by the RFHTMC algorithm. The deep learning objective is mainly to reduce the mean absolute error of forecasting. The problem with random forest is its time complexity, as it involves building many classification trees. The proposed algorithm reduces the set of rules used by the classification model in order to improve time complexity. Performance analysis shows a significant improvement in results compared to other deep learning algorithms as well as random forest: forecasting accuracy improves by 8% compared to random forest, with 26% reduced operation time.
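A sketch of the plain random-forest baseline for obstacle prediction from ultrasonic readings; the RFHTMC refinement described in the paper is not reproduced here, and the sensor layout and labeling rule are assumptions used to generate synthetic training data.

```python
# Baseline only: random forest predicting obstacle presence from
# ultrasonic distance readings. Sensor layout and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Each sample: distances (cm) from three ultrasonic sensors (left, centre, right).
distances = rng.uniform(5, 200, size=(1000, 3))
labels = (distances.min(axis=1) < 30).astype(int)   # 1 = obstacle within 30 cm (assumed rule)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(distances, labels)

reading = [[120.0, 25.0, 90.0]]                     # a new sensor reading
print("obstacle ahead" if clf.predict(reading)[0] else "path clear")
```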


2020 ◽  
Vol 10 (14) ◽  
pp. 4986 ◽  
Author(s):  
Xuefei Ma ◽  
Waleed Raza ◽  
Zhiqiang Wu ◽  
Muhammad Bilal ◽  
Ziqi Zhou ◽  
...  

Machine learning and deep learning algorithms have proved to be powerful tools for developing data-driven signal processing algorithms for challenging engineering problems. This paper studies modern machine learning algorithms for modeling nonlinear devices such as power amplifiers (PAs) for underwater acoustic (UWA) orthogonal frequency division multiplexing (OFDM) communication. The OFDM system has a high peak-to-average power ratio (PAPR) in the time domain because the subcarriers are added coherently via the inverse fast Fourier transform (IFFT). This causes a higher bit error rate (BER) and degrades the performance of the PAs; hence, it reduces power efficiency. For long-range underwater acoustic applications such as long-term monitoring of the sea, the PA works in full consumption mode, so minimizing power consumption and unnecessary distortion becomes a challenging task. To mitigate this problem, a receiver-based nonlinearity distortion mitigation method is proposed, assuming that the transmitting side has enough computation power. We propose a novel approach to identify the nonlinear power model using a modern deep learning algorithm named frequentative decision feedback (FFB); PAPR performance is verified by the clipping method. The simulation results demonstrate better BER performance of the PA model with the shortest learning time.
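A short sketch of the PAPR problem the paper targets: an OFDM symbol is formed by an IFFT over the subcarriers, its PAPR is measured as the ratio of peak to mean instantaneous power, and simple amplitude clipping (the reference method mentioned in the abstract) is applied. Subcarrier count, modulation, and clipping ratio are illustrative assumptions.

```python
# OFDM PAPR measurement and amplitude clipping (illustrative parameters).
import numpy as np

n_subcarriers = 1024
clip_ratio = 1.4                       # clipping threshold relative to RMS amplitude (assumed)

# Random QPSK symbols on the subcarriers, combined coherently via the IFFT.
bits = np.random.randint(0, 2, size=(2, n_subcarriers))
symbols = (2 * bits[0] - 1 + 1j * (2 * bits[1] - 1)) / np.sqrt(2)
x = np.fft.ifft(symbols) * np.sqrt(n_subcarriers)

def papr_db(signal):
    power = np.abs(signal) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Amplitude clipping: limit the envelope while preserving the phase.
threshold = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
x_clipped = np.where(np.abs(x) > threshold, threshold * x / np.abs(x), x)

print(f"PAPR before clipping: {papr_db(x):.2f} dB")
print(f"PAPR after  clipping: {papr_db(x_clipped):.2f} dB")
```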


2020 ◽  
Vol 6 (2) ◽  
pp. 97-106
Author(s):  
Khan Nasik Sami ◽  
Zian Md Afique Amin ◽  
Raini Hassan

Waste management is one of the essential issues the world currently faces, regardless of whether a country is developed or developing. A key issue in waste segregation is that trash bins in public places overflow well before the next cleaning cycle begins. Segregation is carried out by unskilled workers, which is less effective, time-consuming, and not plausible given the volume of waste. We therefore propose an automated waste classification system utilizing machine learning and deep learning algorithms. The goal of this task is to gather a dataset and classify it into six classes: glass, paper, metal, plastic, cardboard, and trash. For our research we compared four classification algorithms: CNN, SVM, Random Forest, and Decision Tree. As our concern is a classification problem, we used machine learning and deep learning algorithms that best fit classification solutions. CNN achieved the highest classification accuracy at around 90%, SVM also adapted well to the various kinds of waste at 85%, and Random Forest and Decision Tree achieved 55% and 65%, respectively.
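A minimal sketch of a CNN for the six-class waste problem (glass, paper, metal, plastic, cardboard, trash). The layer sizes, input resolution, and directory layout are assumptions for illustration, not the authors' exact model.

```python
# Small CNN for six-class waste image classification (assumed settings).
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "waste_dataset/train",             # hypothetical folder with one subfolder per class
    image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(6, activation="softmax"),   # one output per waste class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```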


Author(s):  
Piyush Kumar

Abstract: Diabetic retinopathy is a disease that causes blindness due to diabetes, so it is very important to detect it in its early stages. A deep learning-based approach is used for early detection of diabetic retinopathy from retinal images. The proposed approach consists of two steps. In the first stage, retinal images from different datasets are preprocessed and standardized in size. In the second stage, classification is performed with a Convolutional Neural Network deep learning algorithm, achieving 98.5% success. What distinguishes this technique from similar studies is that, instead of the feature set being created manually as in traditional methods, the deep learning network constructs it automatically and is trained using CPU and GPU in a very short time.
Keywords: CNN, Early detection, Artificial intelligence, Deep learning, Machine learning, Fundus image, Optical coherence tomography, Ophthalmology.
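A sketch of the two stages described above, under assumed settings: retinal images are resized to a common resolution, then a CNN classifies them, learning its features automatically. A pretrained ResNet50 backbone is used here for brevity; the paper's exact network, dataset layout, and input size are not specified and are assumptions.

```python
# Hedged sketch: resize/normalize retinal images, then classify with a CNN
# using a pretrained backbone as a fixed feature extractor (assumed setup).
import tensorflow as tf

IMG_SIZE = (224, 224)                                   # assumed common input size

train_ds = tf.keras.utils.image_dataset_from_directory(
    "retina_dataset/train",                             # hypothetical folder: DR / no-DR subfolders
    image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False                                  # backbone features are learned, not handcrafted

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # probability of diabetic retinopathy
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```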

