Neuronal Communication Genetic Algorithm-Based Inductive Learning

2020 ◽  
Vol 13 (2) ◽  
pp. 141-154
Author(s):  
Abdiya Alaoui ◽  
Zakaria Elberrichi

The development of powerful learning strategies in the medical domain constitutes a real challenge. Machine learning algorithms are used to extract high-level knowledge from medical datasets. Rule-based machine learning algorithms are easily interpreted by humans. To build a robust rule-based algorithm, a new hybrid metaheuristic was proposed for the classification of medical datasets. The hybrid approach uses neuronal communication and genetic algorithm-based inductive learning to build a robust model for disease prediction. The resulting classification models are characterized by good predictive accuracy and relatively small size. The results on 16 well-known medical datasets from the UCI machine learning repository show the efficiency of the proposed approach compared to other state-of-the-art approaches.
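
A minimal sketch of the genetic-algorithm half of such a hybrid, assuming scikit-learn-style data; it evolves simple single-condition IF-THEN rules, while the paper's neuronal-communication component and full rule-set encoding are not reproduced here:

```python
# Hypothetical simplification: a GA evolving single-condition threshold rules.
import numpy as np
from sklearn.datasets import load_breast_cancer  # stand-in UCI medical dataset
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
n_features = X.shape[1]

def random_rule():
    # A rule is (feature index, threshold, class predicted when value <= threshold).
    f = rng.integers(n_features)
    return (f, rng.uniform(X[:, f].min(), X[:, f].max()), rng.integers(2))

def rule_accuracy(rule, X, y):
    f, t, c = rule
    pred = np.where(X[:, f] <= t, c, 1 - c)
    return (pred == y).mean()

# Evolve a population of rules; real rule-induction systems evolve rule sets.
pop = [random_rule() for _ in range(50)]
for gen in range(30):
    pop.sort(key=lambda r: -rule_accuracy(r, Xtr, ytr))
    survivors = pop[:25]
    children = []
    for _ in range(25):
        f, t, c = survivors[rng.integers(25)]
        t = t + rng.normal(0, 0.1 * (X[:, f].std() + 1e-9))  # mutate threshold
        children.append((f, t, c))
    pop = survivors + children

best = max(pop, key=lambda r: rule_accuracy(r, Xtr, ytr))
print("held-out accuracy:", rule_accuracy(best, Xte, yte))
```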


2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Eli Bloch ◽  
Tammy Rotem ◽  
Jonathan Cohen ◽  
Pierre Singer ◽  
Yehudit Aperstein

Objective. To achieve accurate prediction of the moment of sepsis detection based on bedside monitor data in the intensive care unit (ICU). A good clinical outcome is more probable when onset is suspected and treated on time, so early insight into sepsis onset may save lives and reduce costs. Methodology. We present a novel approach to feature extraction, built on the hypothesis that unstable patients are more prone to develop sepsis during an ICU stay. These features are used in machine learning algorithms to predict a patient's likelihood of developing sepsis during the ICU stay, hours before it is diagnosed. Results. Five machine learning algorithms were implemented using R software packages. The algorithms were trained and tested with a set of 4 features that represent the variability in vital signs. The algorithms aimed to calculate a patient's probability of becoming septic within the next 4 hours, based on recordings from the last 8 hours. The best area under the curve (AUC), 88.38%, was achieved with a Support Vector Machine (SVM) with a radial basis function kernel. Conclusions. The high level of predictive accuracy, along with the simplicity and availability of the input variables, presents great potential if applied in ICUs. The variability of a patient's vital signs proves to be a good indicator of the chance of becoming septic during an ICU stay.
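
A hedged sketch of this pipeline under stated assumptions: the four variability features (standard deviations of vital signs over the 8-hour window) and the labels are replaced with synthetic stand-in data, since the ICU recordings themselves are not reproduced here:

```python
# Sketch: variability features of 4 vital signs scored with an RBF SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in: rows = patients, columns = std of 4 vital signs
# (e.g., heart rate, blood pressure, respiratory rate, temperature)
# computed over the last 8 hours of monitor data.
n = 500
X = rng.normal(size=(n, 4))
y = (X.sum(axis=1) + rng.normal(scale=1.5, size=n) > 0).astype(int)  # toy label

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(Xtr, ytr)
print("AUC:", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
```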


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 444 ◽  
Author(s):  
Valerio Morfino ◽  
Salvatore Rampone

In the field of Internet of Things (IoT) infrastructures, attack and anomaly detection are rising concerns. With the increased use of IoT infrastructure in every domain, threats and attacks on these infrastructures are growing proportionally. In this paper, the performance of several machine learning algorithms in identifying cyber-attacks (namely SYN-DOS attacks) on IoT systems is compared, both in terms of detection performance and in training/application times. We use supervised machine learning algorithms included in the MLlib library of Apache Spark, a fast and general engine for big data processing. We show the implementation details and the performance of those algorithms on public datasets, using a training set of up to 2 million instances. We adopt a Cloud environment, emphasizing the importance of scalability and elasticity of use. Results show that all the Spark algorithms used achieve very good identification accuracy (>99%). Overall, one of them, Random Forest, achieves an accuracy of 1. We also report a very short training time (23.22 sec for Decision Tree with 2 million rows). The experiments also show a very low application time (0.13 sec for more than 600,000 instances with Random Forest) using Apache Spark in the Cloud. Furthermore, the explicit model generated by Random Forest is very easy to implement using high- or low-level programming languages. In light of the results obtained, both in terms of computation times and identification performance, a hybrid approach for the detection of SYN-DOS cyber-attacks on IoT devices is proposed: the application of an explicit Random Forest model, implemented directly on the IoT device, along with second-level analysis (training) performed in the Cloud.
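
The abstract names Spark MLlib explicitly; a minimal PySpark sketch of the Random Forest workflow follows, with the CSV path and column names as placeholder assumptions rather than the paper's actual dataset layout:

```python
# Sketch of a Spark MLlib Random Forest pipeline for SYN-DOS detection.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("syn-dos-detection").getOrCreate()

# Placeholder path and schema: one numeric column per traffic feature plus "label".
df = spark.read.csv("syn_dos_traffic.csv", header=True, inferSchema=True)
feature_cols = [c for c in df.columns if c != "label"]
df = VectorAssembler(inputCols=feature_cols, outputCol="features").transform(df)

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = RandomForestClassifier(labelCol="label", featuresCol="features").fit(train)

acc = MulticlassClassificationEvaluator(
    labelCol="label", metricName="accuracy"
).evaluate(model.transform(test))
print("accuracy:", acc)
# model.toDebugString prints the explicit trees, which is what makes porting
# the trained model directly onto an IoT device feasible.
```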


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4332
Author(s):  
Daniel Jancarczyk ◽  
Marcin Bernaś ◽  
Tomasz Boczar

The paper proposes a method for automatic detection of the parameters of a distribution transformer (model, type, and power) from a distance, based on its low-frequency noise spectra. The spectra are registered by sensors and processed by a method based on evolutionary algorithms and machine learning. As input data, the method uses the frequency spectra of sound pressure levels generated by transformers operating in a real environment. The model also uses the background characteristic to take into account the changing working conditions of the transformers. The method searches for frequency intervals and their resolution using both a classic genetic algorithm and particle swarm optimization. The interval selection was verified using five state-of-the-art machine learning algorithms. The research was conducted on 16 different distribution transformers. As a result, a method was proposed that allows the detection of a specific transformer model, its type, and its power with an accuracy greater than 84%, 99%, and 87%, respectively. The proposed optimization process using the genetic algorithm increased the accuracy by up to 5%, while at the same time significantly reducing the input data set (by 80% up to 98%). Machine learning algorithms proven efficient for this task were selected.
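
A rough sketch of the interval-selection idea under stated assumptions: each GA individual is a binary mask over spectrum bins, and fitness is the cross-validated accuracy of a classifier trained on the selected bins. The data shapes, population sizes, and rates are illustrative, and the paper's exact encoding and PSO variant are not reproduced:

```python
# GA-driven selection of frequency bins, scored by classifier accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_bins = 320, 200                # e.g., spectra from 16 transformers
X = rng.normal(size=(n_samples, n_bins))    # stand-in sound-pressure-level spectra
y = rng.integers(16, size=n_samples)        # stand-in transformer-model labels

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=30, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(2, size=(12, n_bins))
for gen in range(5):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-6:]]            # keep the fittest half
    cut = rng.integers(1, n_bins, size=6)
    children = np.array([np.concatenate([parents[i % 6][:c],
                                         parents[(i + 1) % 6][c:]])
                         for i, c in enumerate(cut)])  # one-point crossover
    flip = rng.random(children.shape) < 0.01           # rare bit-flip mutation
    pop = np.vstack([parents, np.where(flip, 1 - children, children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected bins:", int(best.sum()), "of", n_bins)
```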


2019 ◽  
Vol 10 (1) ◽  
pp. 3-16
Author(s):  
Claudia Schubert ◽  
Marc-Thorsten Hütt

Algorithms are the key instrument of the on-demand economy, whose platforms mediate between clients, workers, and the self-employed. Effective legal enforcement must not be limited to controlling the outcome of an algorithm but should also focus on the algorithm itself. This article assesses the present capacity of computer science to control and certify rule-based and data-centric (machine learning) algorithms. It discusses the legal instruments for the control of algorithms, their enforcement, and the institutional preconditions. It favours a digital agency that concentrates expertise and bureaucracy for the certification and official calibration of algorithms, and it promotes an international approach to the regulation of legal standards.


2019 ◽  
Vol 37 (15_suppl) ◽  
pp. 2581-2581 ◽  
Author(s):  
Paul Johannet ◽  
Nicolas Coudray ◽  
George Jour ◽  
Douglas MacArthur Donnelly ◽  
Shirin Bajaj ◽  
...  

Background: There is growing interest in optimizing patient selection for treatment with immune checkpoint inhibitors (ICIs). We postulate that phenotypic features present in metastatic melanoma tissue reflect the biology of tumor cells, immune cells, and stromal tissue, and hence can provide predictive information about tumor behavior. Here, we test the hypothesis that machine learning algorithms can be trained to predict the likelihood of response and/or toxicity to ICIs. Methods: We examined 124 stage III/IV melanoma patients who received anti-CTLA-4 (n = 81), anti-PD-1 (n = 25), or combination (n = 18) therapy as first-line treatment. The tissue analyzed was resected before treatment with ICIs. In total, 340 H&E slides were digitized and annotated for three regions of interest: tumor, lymphocytes, and stroma. The slides were then partitioned into training (n = 285), validation (n = 26), and test (n = 29) sets. Slides were tiled (299x299 pixels) at 20X magnification. We trained a deep convolutional neural network (DCNN) to automatically segment the images into each of the three regions and then deconstruct the images into their component features to detect non-obvious patterns with objectivity and reproducibility. We then trained the DCNN for two classifications: 1) complete/partial response versus progression of disease (POD), and 2) severe versus no immune-related adverse events (irAEs). Predictive accuracy was estimated by the area under the curve (AUC) of the receiver operating characteristic (ROC). Results: The DCNN identified tumor within lymph node (LN) tissue with an AUC of 0.987 and within soft tissue (ST) with an AUC of 0.943. Prediction of POD based on ST alone consistently performed better than prediction based on LN alone (AUC 0.84 versus 0.61). The DCNN had an average AUC of 0.69 when analyzing only tumor regions from both the LN and ST data sets, and an AUC of 0.68 when analyzing tumor and lymphocyte regions. Severe irAEs were predicted with limited accuracy (AUC 0.53). Conclusions: Our results support the potential application of machine learning to pre-treatment histologic slides to predict response to ICIs; they also reveal its limited value in predicting toxicity. We are currently investigating whether the predictive capability of the algorithm can be further improved by incorporating additional immunologic biomarkers.
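
The abstract does not name the DCNN architecture; since the tiles are 299x299 pixels, InceptionV3 is a plausible stand-in, and the following Keras sketch shows a tile-level response-vs-POD classifier under that assumption. The directory layout is hypothetical:

```python
# Hedged sketch of a tile-level classifier for 299x299 H&E tiles; InceptionV3
# is an illustrative stand-in, not the authors' confirmed architecture.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg"
)
base.trainable = False  # start from frozen ImageNet features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # response vs. POD
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Hypothetical directory layout: tiles/{pod,response}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "tiles", image_size=(299, 299), batch_size=32, label_mode="binary"
)
preprocess = tf.keras.applications.inception_v3.preprocess_input
model.fit(train_ds.map(lambda x, y: (preprocess(x), y)), epochs=3)
```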


Scientific knowledge and electronic devices are growing day by day. In this context, many expert systems in the healthcare industry rely on machine learning algorithms. Deep neural networks often outperform classical machine learning techniques and can take raw, unrefined data to compute the target output. Deep learning, or feature learning, focuses on the features that matter most and gives a complete understanding of the generated model. The existing methodology used data mining techniques such as rule-based classification algorithms and machine learning algorithms such as hybrid logistic regression to preprocess data and extract meaningful insights; these, however, are supervised approaches that require labelled data. The proposed work is based on unsupervised data, i.e., there is no labelled data, and deep neural techniques are deployed to obtain the target output. Machine learning algorithms are compared with the proposed deep learning techniques, implemented using TensorFlow and Keras, in terms of accuracy. The deep learning methodology outperforms the existing rule-based classification and hybrid logistic regression algorithms in terms of accuracy. The designed methodology is tested on the public MIT-BIH arrhythmia database, classifying four kinds of abnormal beats. The proposed deep-learning-based approach offers better performance, improving the results when compared to state-of-the-art machine learning approaches.
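
A minimal sketch of the unsupervised angle described above: a Keras autoencoder learns beat features without labels. The 187-sample beat length is a common MIT-BIH preprocessing choice and an assumption here, not the paper's specification:

```python
# Autoencoder feature learning on unlabelled heartbeat segments.
import numpy as np
import tensorflow as tf

beats = np.random.rand(1000, 187).astype("float32")  # stand-in for ECG beats

inp = tf.keras.Input(shape=(187,))
code = tf.keras.layers.Dense(32, activation="relu")(inp)    # compressed features
out = tf.keras.layers.Dense(187, activation="sigmoid")(code)

autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(beats, beats, epochs=5, batch_size=64)

# The encoder half can feed a downstream clustering step (e.g., k-means over
# the 32-d codes) to separate the four abnormal beat classes without labels.
encoder = tf.keras.Model(inp, code)
features = encoder.predict(beats)
```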


2021 ◽  
Vol 23 (11) ◽  
pp. 749-758
Author(s):  
Saranya N ◽ 
Kavi Priya S

Breast cancer is one of the chronic diseases affecting human beings throughout the world. Early detection of this disease is the most promising way to improve patients' chances of survival. The strategy employed in this paper is to select the best features from various breast cancer datasets using a genetic algorithm, after which a machine learning algorithm is applied to predict the outcomes. Two machine learning algorithms, Support Vector Machines and Decision Trees, are used along with the genetic algorithm. The proposed work is evaluated on five datasets: the Wisconsin Breast Cancer-Diagnosis dataset, the Wisconsin Breast Cancer-Original dataset, the Wisconsin Breast Cancer-Prognosis dataset, the ISPY1 clinical trial dataset, and the Breast Cancer dataset. The results show that SVM-GA achieves a higher accuracy of 98.16% than DT-GA with 97.44%.
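
A compact sketch of the SVM-GA combination on the Wisconsin diagnostic data available in scikit-learn: the GA evolves binary feature masks, with cross-validated SVM accuracy as fitness. Population sizes and mutation rates are illustrative assumptions:

```python
# GA feature selection with an SVM fitness function.
import numpy as np
from sklearn.datasets import load_breast_cancer  # Wisconsin diagnostic data
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X, y = load_breast_cancer(return_X_y=True)
d = X.shape[1]

def fit_score(mask):
    if mask.sum() == 0:
        return 0.0
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

pop = rng.integers(2, size=(16, d))
for _ in range(15):
    ranked = pop[np.argsort([fit_score(m) for m in pop])][-8:]  # fittest half
    kids = ranked[rng.integers(8, size=8)].copy()
    kids[rng.random(kids.shape) < 0.05] ^= 1                    # bit-flip mutation
    pop = np.vstack([ranked, kids])

best = max(pop, key=fit_score)
print("features kept:", int(best.sum()), "CV accuracy:", round(fit_score(best), 4))
```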


2021 ◽  
Vol 2021 ◽  
pp. 1-22
Author(s):  
Tanya Gera ◽  
Jaiteg Singh ◽  
Abolfazl Mehbodniya ◽  
Julian L. Webber ◽  
Mohammad Shabaz ◽  
...  

Ransomware is a special kind of malware designed to extort money in return for unlocking the device and personal data files. Smartphone users store personal as well as official data on these devices, which attackers find attractive for financial gain. The financial losses due to ransomware attacks are increasing rapidly. Recent studies report that, of the 87% of cyber-attacks that get reported, 41% are due to ransomware. The inability of application-signature-based solutions to detect unknown malware has inspired many researchers to build automated classification models using machine learning algorithms. Advanced malware is capable of delaying malicious actions upon sensing an emulated environment, thus also posing a challenge to dynamic monitoring of applications. Existing hybrid approaches utilize various feature combinations for detection and analysis. The rapidly changing nature and distribution strategies of ransomware are possible reasons behind the deteriorating performance of primitive ransomware detection techniques. The limitations of existing studies include ambiguity in selecting the feature set; enlarging the feature set may give adept attackers room to evade the learning algorithms. In this work, we propose a hybrid approach to identify and mitigate Android ransomware. This study employs a novel dominant-feature-selection algorithm to extract the dominant feature set. The experimental results show that our proposed model can differentiate between clean and ransomware applications with improved precision. Our proposed hybrid solution achieves an accuracy of 99.85% with zero false positives while considering 60 prominent features, which further justifies the feature selection algorithm used. Comparison of the proposed method with existing frameworks indicates its better performance.
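
The dominant-feature-selection algorithm itself is not spelled out in the abstract; as an illustrative stand-in, the sketch below ranks features by mutual information, keeps the top 60, and fits a classifier, all on synthetic stand-in data:

```python
# Stand-in for dominant feature selection: mutual-information ranking, top 60.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
X = rng.integers(2, size=(2000, 300))   # stand-in binary app features
y = rng.integers(2, size=2000)          # 0 = clean, 1 = ransomware

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
selector = SelectKBest(mutual_info_classif, k=60).fit(Xtr, ytr)
clf = RandomForestClassifier(random_state=0).fit(selector.transform(Xtr), ytr)
pred = clf.predict(selector.transform(Xte))
print("precision:", precision_score(yte, pred))
```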


2021 ◽  
Author(s):  
Mihai Niculita

Machine learning algorithms are increasingly used in geosciences for detection and susceptibility modeling of certain landforms or processes. The increased availability of high-resolution data and the growing number of available machine learning algorithms open up the possibility of creating datasets for training models for automatic detection of specific landforms. In this study, we tested the use of LiDAR DEMs to create a dataset of labeled images representing shallow single-event landslides, in order to use them for the detection of other events. The R implementation of the Keras high-level neural networks API was used to build and test the proposed approach. A 5 m LiDAR DEM was cut into 25-by-25-pixel tiles; tiles that overlapped shallow single-event landslides were labeled accordingly, while tiles that did not contain landslides were randomly selected and labeled as non-landslides. The binary classification approach was tested with 255-grey-level elevation images and 255-grey-level shading images, with the shading approach giving better results. The presented case study shows the possibility of using machine learning for landslide detection on high-resolution DEMs.
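
The study used the R interface to Keras; an equivalent binary tile classifier is sketched here in Python, with random stand-in tiles in place of the LiDAR-derived images and an architecture chosen purely for illustration:

```python
# Small CNN for 25x25 single-channel tiles (shading or elevation, scaled to [0, 1]).
import numpy as np
import tensorflow as tf

tiles = np.random.rand(800, 25, 25, 1).astype("float32")  # stand-in DEM tiles
labels = np.random.randint(2, size=800)                   # 1 = landslide

model = tf.keras.Sequential([
    tf.keras.Input(shape=(25, 25, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),       # landslide vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tiles, labels, epochs=5, validation_split=0.2)
```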

