Cleft prediction before birth using deep neural network

2020 ◽  
Vol 26 (4) ◽  
pp. 2568-2585 ◽  
Author(s):  
Numan Shafi ◽  
Faisal Bukhari ◽  
Waheed Iqbal ◽  
Khaled Mohamad Almustafa ◽  
Muhammad Asif ◽  
...  

In developing countries such as Pakistan, cleft surgery is expensive for families, and the affected child endures considerable pain. In this article, we propose a machine learning-based solution for predicting cleft lip and palate in embryos before birth. We collected 1000 samples from pregnant women at three different hospitals in Lahore, Punjab. A questionnaire was designed to obtain a variety of data, such as gender, parenting, family history of cleft, birth order, number of children, midwife counseling, miscarriage history, parental smoking, and physician visits. Various cleaning, scaling, and feature selection methods were applied to the collected data. After selecting the best features from the cleft data, several machine learning algorithms were evaluated, including random forest, k-nearest neighbor, decision tree, support vector machine, and multilayer perceptron. In our implementation, the multilayer perceptron is a deep neural network, and it yields excellent results on the cleft dataset compared with the other methods, achieving 92.6% accuracy on test data. These promising predictions could help families and clinicians anticipate and address clefts in children before birth.
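
As a hedged illustration of the pipeline this abstract describes (feature selection followed by a multilayer perceptron classifier), the sketch below uses scikit-learn on synthetic stand-in data; the questionnaire features, the chi-squared selector, and the network shape are assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a feature-selection + MLP pipeline; the real
# questionnaire data is not public, so synthetic stand-in data is used.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(1000, 9)).astype(float)  # 9 questionnaire features
y = rng.integers(0, 2, size=1000)                     # cleft / no cleft labels

# Feature selection before classification, as in the described pipeline
X_sel = SelectKBest(chi2, k=6).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(scaler.transform(X_te))))
```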

2021 ◽  
Vol 30 (04) ◽  
pp. 2150020
Author(s):  
Luke Holbrook ◽  
Miltiadis Alamaniotis

With the increase of cyber-attacks on millions of Internet of Things (IoT) devices, poor network security measures on those devices are the main source of the problem. This article studies the effectiveness of several available machine learning algorithms in detecting malware on consumer IoT devices. In particular, the Support Vector Machine (SVM), Random Forest, and Deep Neural Network (DNN) algorithms are benchmarked on a set of test data and compared as tools for safeguarding IoT deployments. Test results on a set of four IoT devices show that all three algorithms detect network anomalies with high accuracy. However, the deep neural network provides the highest coefficient of determination R2, and hence it is identified as the most precise of the tested algorithms for IoT security on the datasets examined.
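
The following sketch shows how such an R2 comparison could look in scikit-learn, with MLPRegressor standing in for the paper's DNN; the synthetic traffic-like features and model settings are illustrative assumptions, not the study's setup.

```python
# A minimal sketch of an R^2 benchmark across SVM, Random Forest, and a DNN
# stand-in, on synthetic per-device flow features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))                  # flow statistics per time window
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=2000)  # anomaly score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
models = {
    "SVM": SVR(),
    "Random Forest": RandomForestRegressor(random_state=1),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Coefficient of determination on held-out data, as in the comparison
    print(name, "R^2 =", round(r2_score(y_te, model.predict(X_te)), 3))
```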


Author(s):  
Akshay Rajendra Naik ◽  
A. V. Deorankar ◽  
P. B. Ambhore

Rainfall prediction supports decision making in many fields, such as outdoor gaming, farming, traveling, and factory operations. We studied various methods for rainfall prediction, including machine learning and neural networks. Previous methods have used various machine learning algorithms, such as naïve Bayes, support vector machines, random forests, decision trees, and ensemble learning methods. We used a deep neural network for rainfall prediction, with the Adam optimizer employed to set the model parameters; as a result, our method gives better results compared to other machine learning methods.
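
A minimal sketch of a deep network trained with the Adam optimizer, as described, using Keras; the layer sizes, input features, and learning rate are assumptions for illustration only.

```python
# Sketch (not the authors' code) of a DNN for rainfall regression with Adam.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 6))   # e.g. humidity, pressure, temperature (assumed)
y = rng.normal(size=(500, 1))   # next-day rainfall amount (placeholder target)

model = keras.Sequential([
    keras.layers.Input(shape=(6,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),      # continuous rainfall output
])
# Adam adapts per-parameter learning rates, the optimizer named in the abstract
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
model.fit(X, y, epochs=20, batch_size=32, verbose=0)
```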


2019 ◽  
Vol 11 (4) ◽  
pp. 1766-1783 ◽  
Author(s):  
Suresh Sankaranarayanan ◽  
Malavika Prabhakar ◽  
Sreesta Satish ◽  
Prerna Jain ◽  
Anjali Ramprasad ◽  
...  

Abstract Today, India is one of the worst flood-affected countries in the world, with the recent disaster in Kerala in August 2018 being a prime example. A good amount of work has been carried out in the past employing Internet of Things (IoT) and machine learning (ML) techniques to predict flood occurrence based on rainfall, humidity, temperature, water flow, water level, etc. However, no one has attempted to predict the occurrence of flood based on temperature and rainfall intensity alone. Accordingly, a deep neural network has been employed for predicting the occurrence of flood based on temperature and rainfall intensity. In addition, the deep learning model is compared with other machine learning models (support vector machine (SVM), K-nearest neighbor (KNN), and Naïve Bayes) in terms of accuracy and error. The results indicate that the deep neural network can be used efficiently for flood forecasting with the highest accuracy, based on monsoon parameters alone, before flood occurrence.
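
A hedged sketch of the comparison described, predicting flood occurrence from the two stated inputs (temperature and rainfall intensity) with a DNN, SVM, KNN, and Naïve Bayes in scikit-learn; the synthetic data and the toy labeling rule are assumptions.

```python
# Four classifiers on two features: temperature and rainfall intensity.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
X = np.column_stack([rng.uniform(15, 40, 3000),    # temperature (deg C)
                     rng.uniform(0, 200, 3000)])   # rainfall intensity (mm/day)
y = (X[:, 1] > 120).astype(int)                    # toy flood label (assumed rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
for name, clf in [("DNN", MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)),
                  ("SVM", SVC()), ("KNN", KNeighborsClassifier()),
                  ("Naive Bayes", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```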


Computers ◽  
2019 ◽  
Vol 8 (4) ◽  
pp. 77 ◽  
Author(s):  
Muhammad Azfar Firdaus Azlah ◽  
Lee Suan Chua ◽  
Fakhrul Razan Rahmad ◽  
Farah Izana Abdullah ◽  
Sharifah Rafidah Wan Alwi

Plants can be classified and recognized based on their reproductive system (flowers) and leaf morphology, and neural networks are among the most popular machine learning algorithms for plant leaf classification. Commonly used techniques include the artificial neural network (ANN), probabilistic neural network (PNN), convolutional neural network (CNN), k-nearest neighbor (KNN), and support vector machine (SVM); some studies have even combined techniques to improve accuracy. The use of varying preprocessing techniques and characteristic parameters in feature extraction appears to improve the performance of plant leaf classification. The findings of previous studies are critically compared in terms of their accuracy based on the applied techniques. This paper aims to review and analyze the implementation and performance of various methodologies for plant classification. Each technique has its advantages and limitations in leaf pattern recognition. The quality of the leaf images plays an important role, and therefore a reliable leaf database must be used to establish the machine learning algorithm prior to leaf recognition and validation.
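
As a rough illustration of the general pattern the reviewed studies follow (features extracted from leaf images, then fed to a classifier), the sketch below computes two simple shape descriptors and cross-validates an SVM; the features, data, and model are placeholder assumptions, far simpler than the reviewed pipelines.

```python
# Toy leaf-morphology features feeding an SVM classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def leaf_features(mask):
    """Basic shape descriptors from a binary leaf mask (illustrative only)."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    area = mask.sum()
    return [w / h, area / (h * w)]   # aspect ratio, rectangularity

rng = np.random.default_rng(4)
masks = rng.random((200, 64, 64)) > 0.5   # stand-in "leaf" masks
X = np.array([leaf_features(m) for m in masks])
y = rng.integers(0, 3, 200)               # three species labels (placeholder)
print("CV accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```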


2021 ◽  
Vol 13 (21) ◽  
pp. 4476
Author(s):  
Adama Traore ◽  
Syed Tahir Ata-Ul-Karim ◽  
Aiwang Duan ◽  
Mukesh Kumar Soothar ◽  
Seydou Traore ◽  
...  

The equivalent water thickness (EWT) is an important biophysical indicator of crop water status. Effective monitoring of EWT in wheat under different nitrogen and water treatments is important for irrigation management in precision agriculture. This study aimed to investigate the performance of machine learning (ML) algorithms in retrieving wheat EWT. For this purpose, a rain-shelter experiment (Exp. 1) with four irrigation quantities (0, 120, 240, 360 mm) and two nitrogen levels (75 and 255 kg N/ha), and field experiments (Exps. 2–3) with the same irrigation and rainfall water level (360 mm) but different nitrogen levels (varying from 75 to 255 kg N/ha), were conducted in the North China Plain. The canopy reflectance of all plots was measured at 30 m using an unmanned aerial vehicle (UAV)-mounted multispectral camera. Destructive sampling was conducted immediately after the UAV flights to measure total fresh and dry weight. The deep neural network (DNN), a type of neural network that has shown strong performance in regression analysis, was compared with other ML models. A feature selection (FS) algorithm, the decision tree (DT), was used as an automatic relevance determination method to obtain the 5 most relevant of 67 vegetation indices (VIs) for estimating EWT. The selected VIs were used to estimate EWT using multiple linear regression (MLR), deep neural network multilayer perceptron (DNN-MLP), artificial neural network multilayer perceptron (ANN-MLP), boosted tree regression (BRT), and support vector machines (SVMs). The results show that DNN-MLP, with R2 = 0.934, NSE = 0.933, RMSE = 0.028 g/cm2, and MAE = 0.017 g/cm2, outperformed the other ML algorithms (ANN-MLP, BRT, and polynomial SVM) owing to its high capacity for estimating EWT. Our findings support the conclusion that ML can potentially be applied in combination with VIs for retrieving EWT. Despite the complexity of the ML models, the resulting EWT maps should help farmers improve the real-time irrigation efficiency of wheat by quantifying field water content and addressing its variability.
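
A sketch of the two-stage pipeline described: decision-tree feature relevance to select 5 of 67 vegetation indices, then an MLP regressor for EWT, reporting R2, RMSE, and MAE. The data are synthetic and the model settings are assumptions, not the study's configuration.

```python
# Stage 1: decision-tree relevance to pick 5 of 67 VIs; stage 2: MLP regression.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 67))                   # 67 candidate vegetation indices
y = (X[:, :5] @ rng.normal(size=5)) * 0.01 + 0.1 # EWT (g/cm^2), toy signal

importances = DecisionTreeRegressor(random_state=5).fit(X, y).feature_importances_
top5 = np.argsort(importances)[-5:]              # keep the 5 most relevant VIs

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top5], y, random_state=5)
reg = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=5)
pred = reg.fit(X_tr, y_tr).predict(X_te)
print("R2  :", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("MAE :", mean_absolute_error(y_te, pred))
```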


2021 ◽  
Vol 186 (Supplement_1) ◽  
pp. 445-451
Author(s):  
Yifei Sun ◽  
Navid Rashedi ◽  
Vikrant Vaze ◽  
Parikshit Shah ◽  
Ryan Halter ◽  
...  

ABSTRACT Introduction Early prediction of the acute hypotensive episode (AHE) in critically ill patients has the potential to improve outcomes. In this study, we apply different machine learning algorithms to the MIMIC-III PhysioNet dataset, containing more than 60,000 real-world intensive care unit records, to test commonly used machine learning technologies and compare their performance. Materials and Methods Five classification methods, K-nearest neighbor, logistic regression, support vector machine, random forest, and a deep learning method called long short-term memory, are applied to predict an AHE 30 minutes in advance. An analysis comparing model performance when including versus excluding invasive features was conducted. To further study the pattern of the underlying mean arterial pressure (MAP), we apply linear regression to predict the continuous MAP values over the next 60 minutes. Results The support vector machine yields the best performance in terms of recall (84%). Including the invasive features in the classification improves performance significantly, with both recall and precision increasing by more than 20 percentage points. We were able to predict the MAP 60 minutes into the future with a root mean square error (a frequently used measure of the difference between predicted and observed values) of 10 mmHg. After converting the continuous MAP predictions into binary AHE predictions, we achieve 91% recall and 68% precision. In addition to predicting the AHE, the MAP predictions provide clinically useful information regarding the timing and severity of the AHE occurrence. Conclusion We were able to predict the AHE 30 minutes in advance with precision and recall above 80% on the large real-world dataset. The predictions of the regression model can provide a more fine-grained, interpretable signal to practitioners. Model performance improves with the inclusion of invasive features, compared to predicting the AHE based only on the available, restricted set of noninvasive technologies. This demonstrates the importance of exploring more noninvasive technologies for AHE prediction.
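
A hedged sketch of the regression-then-threshold idea: predict future MAP with linear regression, then convert the continuous prediction into a binary AHE flag. The 60 mmHg hypotension cutoff and the synthetic data are assumptions for illustration, not details taken from the paper.

```python
# Linear regression on past MAP samples, then thresholding into AHE flags.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import recall_score, precision_score

rng = np.random.default_rng(6)
X = rng.normal(75, 10, size=(5000, 12))               # past MAP samples per record
map_future = X.mean(axis=1) + rng.normal(0, 5, 5000)  # MAP 60 minutes ahead

reg = LinearRegression().fit(X[:4000], map_future[:4000])
pred_map = reg.predict(X[4000:])
rmse = np.sqrt(np.mean((pred_map - map_future[4000:]) ** 2))

# Convert continuous MAP predictions into binary AHE flags (assumed 60 mmHg cutoff)
y_true = (map_future[4000:] < 60).astype(int)
y_pred = (pred_map < 60).astype(int)
print(f"RMSE={rmse:.1f} mmHg",
      f"recall={recall_score(y_true, y_pred, zero_division=0):.2f}",
      f"precision={precision_score(y_true, y_pred, zero_division=0):.2f}")
```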


2021 ◽  
pp. 1-17
Author(s):  
Ahmed Al-Tarawneh ◽  
Ja’afer Al-Saraireh

Twitter is one of the most popular platforms used to share and post ideas. Hackers and anonymous attackers use these platforms maliciously, and their behavior can be used to predict the risk of future attacks by gathering and classifying hackers' tweets using machine-learning techniques. Previous approaches to detecting infected tweets are based on human effort or text analysis and are thus limited in capturing the hidden meaning between tweet lines. The main aim of this research is to enhance the efficiency of hacker detection on the Twitter platform by combining complex-network techniques with adapted machine learning algorithms. This work presents a methodology that collects a list of users, together with their followers, who share posts with similar interests within a hackers' community on Twitter. The list is built from a set of suggested keywords that are commonly used by hackers in their tweets. After that, a complex network is generated over all users to find the relations among them in terms of network centrality, closeness, and betweenness. From these values, a dataset of the most influential users in the hacker community is assembled. Subsequently, tweets belonging to users in the extracted dataset are gathered and classified into positive and negative classes. The output of this process is used in a machine learning process applying different algorithms. This research builds and investigates an accurate dataset containing real users who belong to a hackers' community. Correctly classified instances were measured for accuracy using the average values of the K-nearest neighbor, Naive Bayes, Random Tree, and support vector machine techniques, demonstrating about 90% and 88% accuracy for cross-validation and percentage split, respectively. Consequently, the proposed network cyber Twitter model is able to detect hackers and determine whether tweets pose a risk to institutions and individuals, providing early warning of possible attacks.
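
A minimal sketch, assuming a follower graph built with networkx, of the centrality, closeness, and betweenness measures the methodology extracts to rank influential users; the toy graph and the combined influence score are illustrative assumptions.

```python
# Centrality measures over a hypothetical follower graph.
import networkx as nx

G = nx.DiGraph()  # hypothetical follower edges (user -> followed user)
G.add_edges_from([("alice", "bob"), ("bob", "carol"),
                  ("carol", "alice"), ("dave", "alice")])

degree = nx.degree_centrality(G)            # overall connectedness
closeness = nx.closeness_centrality(G)      # proximity to the rest of the network
betweenness = nx.betweenness_centrality(G)  # brokerage between communities

# Rank users by a simple combined influence score before gathering their tweets
score = {u: degree[u] + closeness[u] + betweenness[u] for u in G}
print(sorted(score, key=score.get, reverse=True))
```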


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4324
Author(s):  
Moaed A. Abd ◽  
Rudy Paul ◽  
Aparna Aravelli ◽  
Ou Bai ◽  
Leonel Lagos ◽  
...  

Multifunctional flexible tactile sensors could be useful for improving the control of prosthetic hands. To that end, highly stretchable liquid metal tactile sensors (LMSs) were designed, manufactured via photolithography, and incorporated into the fingertips of a prosthetic hand. Three novel contributions were made with the LMS. First, individual fingertips were used to distinguish between different speeds of sliding contact with different surfaces. Second, differences in surface texture were reliably detected during sliding contact. Third, the capacity for hierarchical tactile sensor integration was demonstrated by using four LMS signals simultaneously to distinguish between ten complex multi-textured surfaces. Four machine learning algorithms were compared for their classification capabilities: K-nearest neighbor (KNN), support vector machine (SVM), random forest (RF), and neural network (NN). Time-frequency features of the LMS signals were extracted to train and test the machine learning algorithms. The NN generally performed best at speed and texture detection with a single finger and achieved 99.2 ± 0.8% accuracy in distinguishing between ten different multi-textured surfaces using four LMSs from four fingers simultaneously. This capability for hierarchical multi-finger tactile sensation integration could provide a higher level of intelligence for artificial hands.
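
The sketch below illustrates one plausible reading of the feature path: a spectrogram as the time-frequency representation of each sensor signal, averaged into band powers and fed to the four named classifiers; the signals, the particular feature choice, and the model settings are assumptions.

```python
# Spectrogram-based time-frequency features for surface classification.
import numpy as np
from scipy.signal import spectrogram
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
signals = rng.normal(size=(200, 1024))   # stand-in fingertip sensor signals
y = rng.integers(0, 10, 200)             # ten multi-textured surfaces

def tf_features(sig, fs=1000):
    # Average spectrogram power per frequency band as a time-frequency feature
    _, _, Sxx = spectrogram(sig, fs=fs, nperseg=128)
    return Sxx.mean(axis=1)

X = np.array([tf_features(s) for s in signals])
for name, clf in [("KNN", KNeighborsClassifier()), ("SVM", SVC()),
                  ("RF", RandomForestClassifier(random_state=7)),
                  ("NN", MLPClassifier(max_iter=1000, random_state=7))]:
    print(name, cross_val_score(clf, X, y, cv=3).mean())
```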


Author(s):  
Jonas Marx ◽  
Stefan Gantner ◽  
Jörn Städing ◽  
Jens Friedrichs

In recent years, the demands of Maintenance, Repair and Overhaul (MRO) customers for resource-efficient aftermarket services have grown steadily. One way to meet these requirements is to make use of predictive maintenance methods, which derive workscoping guidance by assessing and processing previously unused or undocumented service data. In this context, a novel approach to predictive maintenance is presented in the form of a performance-based classification method for high pressure compressor (HPC) airfoils. The procedure features machine learning algorithms that establish a relation between the airfoil geometry and the associated aerodynamic behavior and is thereby able to divide individual operating characteristics into a finite number of distinct aero-classes. By this means, the introduced method not only provides a fast and simple way to assess piece-part performance through geometrical data, but also facilitates the consideration of stage matching (axial as well as circumferential) in a simplified manner. It thus serves as a prerequisite for an improved customary HPC performance workscope as well as for an automated optimization process for compressor buildup with used or repaired material that would be applicable in an MRO environment. The machine learning methods used in the present work form distinct groups of similar aero-performance by unsupervised (step 1) and supervised learning (step 2). The application of the overall classification procedure is demonstrated on an artificially generated dataset, based on real characteristics of a front and a rear rotor of a 10-stage axial compressor, that contains both geometric and aerodynamic information. In step 1 of the investigation, only the aerodynamic quantities, in terms of multivariate functional data, are used to benchmark different clustering algorithms and generate a foundation for a geometry-based aero-classification. Corresponding classifiers are created in step 2 by means of both the k-Nearest Neighbor and the linear Support Vector Machine algorithms. The methods' fidelity is tested by attempting to recover the aero-based similarity classes solely from normalized and reduced geometry data, resulting in classification probabilities of up to 96%, as verified by stratified k-fold cross-validation.
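
A hedged sketch of the two-step procedure on synthetic data: unsupervised clustering of aerodynamic characteristics into aero-classes (step 1), then recovering those classes from geometry alone with KNN and a linear SVM under stratified k-fold cross-validation (step 2). K-means is used here as one possible clustering choice; the paper benchmarks several algorithms, and the data shapes are assumptions.

```python
# Step 1: cluster aero data into classes; step 2: predict classes from geometry.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(8)
aero = rng.normal(size=(400, 20))   # sampled aerodynamic characteristics
geom = aero[:, :8] + rng.normal(scale=0.1, size=(400, 8))  # correlated geometry

labels = KMeans(n_clusters=5, n_init=10, random_state=8).fit_predict(aero)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=8)
for name, clf in [("KNN", KNeighborsClassifier()),
                  ("linear SVM", LinearSVC(max_iter=5000))]:
    print(name, cross_val_score(clf, geom, labels, cv=cv).mean())
```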


Author(s):  
Sandy C. Lauguico ◽  
Ronnie S. Concepcion II ◽  
Jonnel D. Alejandrino ◽  
Rogelio Ruzcko Tobias ◽  
...  

The arising problem of food scarcity drives innovation in urban farming, one method of which is smart aquaponics. However, for a smart aquaponics system to yield crops successfully, intensive monitoring, control, and automation are needed. An efficient way of implementing this is the use of vision systems and machine learning algorithms to optimize the capabilities of the farming technique. To realize this, a comparative analysis of three machine learning estimators, Logistic Regression (LR), K-Nearest Neighbor (KNN), and Linear Support Vector Machine (L-SVM), was conducted. This was done by modeling each algorithm on machine-vision features extracted from images of lettuce raised in a smart aquaponics setup. Each model was optimized to increase cross- and hold-out-validation accuracy. The results showed that KNN, with the tuned hyperparameters n_neighbors=24, weights='distance', algorithm='auto', and leaf_size=10, was the most effective model for the given dataset, yielding a cross-validation mean accuracy of 87.06% and a classification accuracy of 91.67%.
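
Since the abstract states the tuned KNN hyperparameters explicitly, a minimal sketch instantiating that exact model in scikit-learn is straightforward; the placeholder features and labels below are assumptions, not the study's vision-extracted dataset.

```python
# The KNN configuration reported in the abstract, on placeholder features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
X = rng.normal(size=(240, 12))   # stand-in vision-extracted lettuce features
y = rng.integers(0, 3, 240)      # assumed crop classes (not from the paper)

knn = KNeighborsClassifier(n_neighbors=24, weights="distance",
                           algorithm="auto", leaf_size=10)
print("CV mean accuracy:", cross_val_score(knn, X, y, cv=5).mean())
```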

