Learning machine learning

2018 ◽  
Vol 61 (12) ◽  
pp. 24-27 ◽  
Author(s):  
Ted G. Lewis ◽  
Peter J. Denning


2019 ◽  
Vol XVI (4) ◽  
pp. 95-113
Author(s):  
Muhammad Tariq ◽  
Tahir Mehmood

Accurate detection, classification, and mitigation of power quality (PQ) distortive events are of utmost importance for electrical utilities and corporations. An integrated mechanism is proposed in this paper for the identification of PQ distortive events. The proposed features are extracted from the waveforms of the distortive events using a modified form of Stockwell's transform. The categories of the distortive events are determined from these feature values by applying an extreme learning machine as an intelligent classifier. The proposed methodology was tested, in both noisy and noiseless environments, on a database of 7,500 simulated waveforms covering fifteen types of PQ events, such as impulses, interruptions, sags and swells, notches, oscillatory transients, harmonics, and flickering, both as single-stage events and in their possible combinations. The results of the analysis indicate satisfactory classification accuracy of the proposed method, in addition to its reduced sensitivity to various noisy environments.
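The abstract describes the pipeline but not an implementation. As a rough illustration of the extreme learning machine (ELM) classification step only, the following NumPy sketch trains an ELM on placeholder feature vectors; the Stockwell-transform feature extraction, the feature dimension, and the data are assumptions, not the authors' code.

```python
# Minimal extreme learning machine (ELM) classifier sketch in NumPy.
# The S-transform feature extraction is replaced by random placeholder features;
# only the ELM training/prediction logic is shown.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: 7,500 events x 12 features, 15 PQ event classes.
X = rng.normal(size=(7500, 12))
y = rng.integers(0, 15, size=7500)
Y = np.eye(15)[y]                             # one-hot targets

n_hidden = 200
W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (kept fixed)
b = rng.normal(size=n_hidden)                 # random biases (kept fixed)

H = np.tanh(X @ W + b)                        # hidden-layer activations
beta = np.linalg.pinv(H) @ Y                  # output weights via Moore-Penrose pseudoinverse

def predict(X_new):
    """Return predicted class indices for new feature vectors."""
    H_new = np.tanh(X_new @ W + b)
    return np.argmax(H_new @ beta, axis=1)

print(predict(X[:5]), y[:5])
```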


2021 ◽  
Author(s):  
Muhammad Sajid

Abstract Machine learning (ML) is proving successful in many fields, including medicine, automotive, planning, and engineering. In geoscience, ML has shown impressive results in seismic fault interpretation, advanced seismic attribute analysis, facies classification, and the extraction of geobodies such as channels, carbonates, and salt. One of the challenges faced in geoscience is the availability of labelled data, which is one of the most time-consuming requirements of supervised deep learning. In this paper, an advanced learning approach is proposed for geoscience in which the machine observes the seismic interpretation activities and learns simultaneously as the interpretation progresses. Initial testing showed that the proposed method, combined with transfer learning, is highly effective: the machine accurately predicts features that require only minor post-prediction filtering to be accepted as the optimal interpretation.
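A minimal sketch of the "learn while the interpreter works" idea, assuming seismic image patches are flattened into feature vectors and labels arrive in small batches as regions are interpreted; the patch size, batch size, and synthetic data are assumptions, and the authors' actual deep-learning and transfer-learning setup is not reproduced here.

```python
# Minimal sketch: a linear model updated incrementally as interpreted regions
# (and their labels) arrive, mimicking "learning while interpretation progresses".
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
classes = np.array([0, 1])                      # e.g. 0 = background, 1 = fault

model = SGDClassifier(random_state=0)           # online linear classifier

for step in range(10):                          # each step = a newly interpreted region
    X_batch = rng.normal(size=(64, 32 * 32))    # 64 flattened 32x32 seismic patches
    y_batch = rng.integers(0, 2, size=64)       # labels provided by the interpreter
    model.partial_fit(X_batch, y_batch, classes=classes)

# Score unseen patches; low-confidence picks could be filtered after prediction.
X_new = rng.normal(size=(5, 32 * 32))
print(model.predict(X_new), model.decision_function(X_new).round(2))
```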


2019 ◽  
Vol 29 (07) ◽  
pp. 1850058 ◽  
Author(s):  
Juan M. Górriz ◽  
Javier Ramírez ◽  
F. Segovia ◽  
Francisco J. Martínez ◽  
Meng-Chuan Lai ◽  
...  

Although much research has been undertaken, the spatial patterns, developmental course, and sexual dimorphism of brain structure associated with autism remain enigmatic. One of the difficulties in investigating differences between the sexes in autism is the small sample size of available mixed-sex imaging datasets. Thus, the majority of investigations have involved male samples, with females somewhat overlooked. This paper deploys machine learning on partial least squares feature extraction to reveal differences in regional brain structure between individuals with autism and typically developing participants. A four-class classification problem (sex and condition) is specified, with theoretical restrictions based on the evaluation of a novel upper bound on the resubstitution estimate. These restrictions are imposed on the classifier complexity and feature space dimension to ensure that results generalize from the training set to test samples. Accuracies above [Formula: see text] on gray and white matter tissues estimated from voxel-based morphometry (VBM) features are obtained in a sample of equal-sized high-functioning male and female adults with and without autism ([Formula: see text], [Formula: see text]/group). The proposed learning machine revealed how autism is modulated by biological sex using a low-dimensional feature space extracted from VBM. In addition, a spatial overlap analysis on reference maps partially corroborated predictions of the "extreme male brain" theory of autism in sexually dimorphic areas.
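As an illustration of pairing partial least squares (PLS) feature extraction with a simple classifier in a low-dimensional space, the sketch below uses scikit-learn on synthetic data; the subject count, voxel-feature count, number of components, and choice of a linear discriminant classifier are assumptions rather than the paper's exact setup.

```python
# Illustrative sketch, not the authors' code: PLS projects high-dimensional
# (VBM-like) features to a low-dimensional space, then a linear classifier
# separates the 4 classes (sex x diagnosis). All data here is synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 5000))       # 120 subjects, 5000 voxel features (made up)
y = rng.integers(0, 4, size=120)       # 4 classes: {male, female} x {autism, typical}
Y = np.eye(4)[y]                       # one-hot targets for PLS

pls = PLSRegression(n_components=3)    # low-dimensional feature space
Z = pls.fit(X, Y).transform(X)         # latent scores used as classifier features

clf = LinearDiscriminantAnalysis().fit(Z, y)
print("resubstitution accuracy:", clf.score(Z, y))
```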


2021 ◽  
Vol 2083 (3) ◽  
pp. 032058
Author(s):  
Ting Liu

Abstract With the development of water conservancy informatization, research on the integration of water information systems has emerged; it is both a current need of water conservancy informatization construction and an urgent problem to be solved. Based on machine learning algorithms and the actual needs of the water conservancy business domain, an overall framework of computer system integration for water conservancy engineering design is put forward. The framework comprises a resource layer, a comprehensive integration layer, and a user layer, which exchange data with the configuration monitoring software over a communication link. The analytic hierarchy process is used to construct the risk prediction index system, and the risk prediction indices and initial prediction results are taken as the input and output of an extreme learning machine. The simulation results show that the prediction accuracy of this method is 94.88%, so it can accurately predict the risks present in the computer system for hydraulic engineering design and improve system security.
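The sketch below illustrates only the analytic hierarchy process (AHP) step of deriving risk-index weights from a pairwise comparison matrix; the 4x4 comparison matrix is invented for illustration, and the downstream extreme learning machine stage is omitted.

```python
# Sketch of the AHP step: deriving risk-index weights from a pairwise
# comparison matrix via its principal eigenvector. The matrix is a made-up
# example, not taken from the paper.
import numpy as np

A = np.array([
    [1,   3,   5,   7],
    [1/3, 1,   3,   5],
    [1/5, 1/3, 1,   3],
    [1/7, 1/5, 1/3, 1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalized criterion weights

# Consistency ratio (CR < 0.1 is usually considered acceptable).
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # standard random-index values
print("weights:", weights.round(3), "CR:", round(ci / ri, 3))
```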


2016 ◽  
Vol 24 (1) ◽  
pp. 54-65 ◽  
Author(s):  
Stefano Parodi ◽  
Chiara Manneschi ◽  
Damiano Verda ◽  
Enrico Ferrari ◽  
Marco Muselli

This study evaluates the performance of a set of machine learning techniques in predicting the prognosis of Hodgkin's lymphoma using clinical factors and gene expression data. The analysed samples from 130 Hodgkin's lymphoma patients included a small set of clinical variables and more than 54,000 gene features. The machine learning classifiers included three black-box algorithms (k-nearest neighbour, Artificial Neural Network, and Support Vector Machine) and two methods based on intelligible rules (Decision Tree and the innovative Logic Learning Machine method). Support Vector Machine clearly outperformed all the other methods. Of the two rule-based algorithms, Logic Learning Machine performed better and identified a set of simple intelligible rules based on a combination of clinical variables and gene expressions. Decision Tree identified a non-coding gene (XIST), involved in the early phases of X chromosome inactivation, that was overexpressed in females and in non-relapsed patients. XIST expression might be responsible for the better prognosis of female Hodgkin's lymphoma patients.
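As a hedged illustration of how intelligible rules can be read out of a tree-based model, the sketch below uses a scikit-learn decision tree on synthetic data; Logic Learning Machine itself ships with a proprietary toolkit, so the decision tree stands in for the rule-based approach, and the feature names, patient count, and labels are placeholders.

```python
# Sketch of extracting human-readable rules with a decision tree (a stand-in
# for the study's rule-based methods). Features and labels are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
X = rng.normal(size=(130, 20))                 # 130 patients, 20 selected features
y = rng.integers(0, 2, size=130)               # 0 = non-relapsed, 1 = relapsed

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
feature_names = [f"gene_{i}" for i in range(20)]   # placeholder names (e.g. XIST)
print(export_text(tree, feature_names=feature_names))  # rules as nested if/else text
```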


Author(s):  
P. Priakanth ◽  
S. Gopikrishnan

The idea of an intelligent, independent learning machine has fascinated humans for decades. The philosophy behind machine learning is to automate the creation of analytical models so that algorithms can learn continuously from the available data. Since IoT will be among the major sources of new data, data science will make a great contribution to making IoT applications more intelligent. Machine learning can be applied in cases where the desired outcome is known (guided learning), where the data is not known beforehand (unguided learning), or where learning results from the interaction between a model and the environment (reinforcement learning). This chapter answers the questions: How can machine learning algorithms be applied to IoT smart data? What is the taxonomy of machine learning algorithms that can be adopted in IoT? And what are the characteristics of real-world IoT data that require data analytics?
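A toy contrast between guided (supervised) and unguided (unsupervised) learning on synthetic sensor-style data, in the spirit of the chapter's taxonomy; the features, labels, and model choices are illustrative assumptions, and reinforcement learning is omitted.

```python
# Toy contrast between guided (supervised) and unguided (unsupervised) learning
# on synthetic sensor readings; reinforcement learning is omitted for brevity.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))                   # e.g. temperature, humidity, vibration
y = (X[:, 0] + X[:, 2] > 0).astype(int)         # known outcome -> guided learning

guided = LogisticRegression().fit(X, y)         # learns from labelled data
unguided = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # no labels needed

print("guided accuracy:", guided.score(X, y))
print("cluster sizes:", np.bincount(unguided.labels_))
```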


Author(s):  
P. Priyanga ◽  
N. C. Naveen

This article describes how healthcare organizations are growing steadily and are potential beneficiaries of the data that is generated and gathered. From hospitals to clinics, data and analytics can be a powerful tool for improving patient care and satisfaction efficiently. In developing countries, cardiovascular diseases have a huge impact on death rates, a trend expected to continue through the end of 2020 in spite of the best clinical practices. Current machine learning (ML) algorithms are adapted to estimate heart disease risk in middle-aged patients. Hence, to predict heart disease, a detailed analysis is made in this research work by taking into account the angiographic heart disease status (i.e., ≥ 50% diameter narrowing). Deep Neural Network (DNN), Extreme Learning Machine (ELM), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM) learning algorithms (with linear and polynomial kernel functions) are considered in this work. The accuracy and results of these algorithms are analyzed by comparing their effectiveness.
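A minimal sketch of this kind of model comparison with cross-validation, limited to KNN and SVM with linear and polynomial kernels; the data is synthetic, and the feature count and fold count are assumptions rather than the study's actual protocol.

```python
# Sketch of the comparison setup: KNN and SVM (linear and polynomial kernels)
# evaluated with k-fold cross-validation on synthetic data. The real target is
# angiographic disease status (>= 50% diameter narrowing).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 13))                 # e.g. 13 clinical attributes (made up)
y = rng.integers(0, 2, size=300)               # 1 = >= 50% narrowing, 0 = otherwise

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM (linear)": SVC(kernel="linear"),
    "SVM (poly)": SVC(kernel="poly", degree=3),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```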


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Alessio Lugnan ◽  
Emmanuel Gooskens ◽  
Jeremy Vatin ◽  
Joni Dambre ◽  
Peter Bienstman

Abstract Machine learning offers promising solutions for high-throughput single-particle analysis in label-free imaging microflow cytometry. However, the throughput of online operations such as cell sorting is often limited by the large computational cost of the image analysis, while offline operations may require the storage of an exceedingly large amount of data. Moreover, the training of machine learning systems can easily be biased by slight drifts of the measurement conditions, giving rise to a significant but difficult-to-detect degradation of the learned operations. We propose a simple and versatile machine learning approach to perform microparticle classification at an extremely low computational cost, showing good generalization over large variations in particle position. We present proof-of-principle classification of interference patterns projected by flowing transparent PMMA microbeads with diameters of 15.2 μm and 18.6 μm. To this end, a simple, cheap and compact label-free microflow cytometer is employed. We also discuss in detail the detection and prevention of machine learning bias in training and testing due to slight drifts of the measurement conditions. Moreover, we investigate the implications of modifying the projected particle pattern by means of a diffraction grating, in the context of optical extreme learning machine implementations.
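As a rough sketch of the low-computational-cost idea, the example below trains a simple linear readout on a few sampled intensities per interference pattern; the number of sampled points, the synthetic data, and the ridge readout are assumptions and do not reproduce the authors' optical setup.

```python
# Sketch of a cheap linear readout trained on a handful of intensity samples
# taken from each interference pattern. Patterns here are synthetic stand-ins
# for the two bead diameters (15.2 um and 18.6 um).
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_samples, n_detectors = 1000, 8               # 8 sampled intensities per pattern
X = rng.normal(size=(n_samples, n_detectors))
y = rng.integers(0, 2, size=n_samples)         # 0 = 15.2 um bead, 1 = 18.6 um bead

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RidgeClassifier().fit(X_train, y_train)  # low-cost linear readout
print("test accuracy:", clf.score(X_test, y_test))
```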

