Research on Machine Learning Algorithm Optimization Based on 0-1 Coding

2021 ◽  
Vol 2083 (4) ◽  
pp. 042086
Author(s):  
Yuqi Qin

Abstract: Machine learning algorithms are the core of artificial intelligence and the fundamental way to make computers intelligent; they are applied across all fields of artificial intelligence. Aiming at the problems of existing algorithms in the discrete manufacturing industry, this paper proposes a new 0-1 coding method to optimize the learning algorithm, and finally proposes an "IG type learning only from the best" learning algorithm.
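
The abstract does not spell out the coding or update rule; the following is a minimal, hypothetical sketch of how a 0-1 (binary) encoding can drive a "learn only from the best" style optimizer, with a toy objective standing in for a discrete manufacturing problem.

```python
# Hedged sketch: binary (0-1) encoding in a simple "learn from the best"
# optimizer. The paper's actual "IG type" update rule is not reproduced here;
# this illustrates the general idea with a hypothetical bit-copy rule.
import numpy as np

rng = np.random.default_rng(0)

def fitness(bits):
    # Toy objective: maximize the number of 1s (OneMax). A real discrete
    # manufacturing objective would replace this function.
    return bits.sum()

pop = rng.integers(0, 2, size=(20, 16))      # population of 0-1 strings
for _ in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    best = pop[scores.argmax()]              # "learn only from the best"
    # Each individual copies each bit from the best with probability 0.3,
    # otherwise keeps its own bit (hypothetical learning rule).
    mask = rng.random(pop.shape) < 0.3
    pop = np.where(mask, best, pop)
    # A small mutation rate keeps diversity in the population.
    flip = rng.random(pop.shape) < 0.02
    pop = np.where(flip, 1 - pop, pop)

print("best fitness:", max(fitness(ind) for ind in pop))
```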

Author(s):  
Ladly Patel ◽  
Kumar Abhishek Gaurav

In today's world, a huge amount of data is available. The available data are analyzed to extract information, which is then used to train a machine learning algorithm. Machine learning is a subfield of artificial intelligence in which machines are trained on data and then predict results. It is used in healthcare, image processing, marketing, and many other domains. The aim of machine learning is to reduce the programmer's work on complex coding and to decrease human interaction with systems: the machine learns from past data and then predicts the desired output. This chapter describes machine learning in brief, presents different machine learning algorithms with examples, and covers machine learning frameworks such as TensorFlow and Keras. The limitations of machine learning and its various applications are discussed. The chapter also describes how to identify features in machine learning data.
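
As a concrete illustration of the train-then-predict workflow described above, here is a minimal sketch using Keras, one of the frameworks the chapter mentions; the data is synthetic and the architecture is purely illustrative.

```python
# Minimal sketch of the train-then-predict workflow: the model "learns from
# past data" (fit) and then "predicts the desired output" (predict).
import numpy as np
from tensorflow import keras

X = np.random.rand(200, 4)                 # 200 samples, 4 synthetic features
y = (X.sum(axis=1) > 2.0).astype(int)      # toy binary label

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)       # train on past data

print(model.predict(X[:3]))                # predict on new inputs
```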


2020 ◽  
Vol 41 (7) ◽  
pp. 826-830 ◽  
Author(s):  
Arni S. R. Srinivasa Rao ◽  
Jose A. Vazquez

Abstract: We propose the use of a machine learning algorithm to identify possible COVID-19 cases more quickly using a mobile phone-based web survey. This method could reduce the spread of the virus in susceptible populations under quarantine.
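
A hedged sketch of the kind of classifier such a survey could feed: binary symptom and exposure answers in, a case-likelihood score out. The features, labels, and model choice here are illustrative assumptions, not the authors' actual algorithm.

```python
# Illustrative survey-based case classifier. Feature columns and training
# labels are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: fever, cough, loss_of_smell, contact_with_case (0/1 answers)
X = np.array([[1, 1, 1, 1], [0, 1, 0, 0], [1, 0, 1, 1],
              [0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0]])
y = np.array([1, 0, 1, 0, 1, 0])           # 1 = likely case (training label)

clf = LogisticRegression().fit(X, y)
new_response = np.array([[1, 0, 1, 0]])    # one incoming survey response
print("case probability:", clf.predict_proba(new_response)[0, 1])
```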


2020 ◽  
pp. practneurol-2020-002688
Author(s):  
Stephen D Auger ◽  
Benjamin M Jacobs ◽  
Ruth Dobson ◽  
Charles R Marshall ◽  
Alastair J Noyce

Modern clinical practice requires the integration and interpretation of ever-expanding volumes of clinical data. There is, therefore, an imperative to develop efficient ways to process and understand these large amounts of data. Neurologists work to understand the function of biological neural networks, but artificial neural networks and other machine learning algorithms are likely to be increasingly encountered in clinical practice. As their use increases, clinicians will need to understand the basic principles and common types of algorithms. We aim to provide a coherent introduction to this jargon-heavy subject and equip neurologists with the tools to understand, critically appraise and apply insights from this burgeoning field.
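
For readers who want to see what such an algorithm looks like in code, here is a toy artificial neural network trained on synthetic "clinical" features; it is purely illustrative and not a clinically validated model.

```python
# Toy artificial neural network of the kind the article introduces: a small
# classifier mapping (synthetic) clinical features to a binary outcome.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.random.rand(100, 5)                 # 5 synthetic clinical features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # synthetic outcome label

net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
net.fit(X, y)
print("training accuracy:", net.score(X, y))
```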


Author(s):  
Otmar Hilliges

Sensing of user input lies at the core of HCI research. Deciding which input mechanisms to use, and how to implement them so that they are easy to use, robust to various environmental factors, and accurate in reconstructing the user's intent, is a tremendously challenging problem. The main difficulties stem from the complex nature of human behavior, which is highly non-linear, dynamic, and context-dependent, and can often only be observed partially. Due to these complexities, research has turned its attention to data-driven techniques in order to build sophisticated and robust input recognition mechanisms. In this chapter we discuss the most important aspects that constitute data-driven signal analysis approaches. The aim is to provide the reader with an overall understanding of the process, irrespective of the exact choice of sensor or machine learning algorithm.
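
A minimal sketch of the pipeline such data-driven approaches typically share: raw sensor stream, fixed-size windows, hand-crafted features, then a classifier. The sensor data, feature choices, and labels below are placeholders, not the chapter's specific method.

```python
# Illustrative input-recognition pipeline: stream -> windows -> features
# -> classifier. All data here is synthetic stand-in for a sensor stream.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, width=50):
    # Split a 1-D signal into fixed windows and compute simple per-window
    # features (mean, standard deviation, total absolute change).
    windows = signal[: len(signal) // width * width].reshape(-1, width)
    return np.column_stack([windows.mean(axis=1),
                            windows.std(axis=1),
                            np.abs(np.diff(windows, axis=1)).sum(axis=1)])

rng = np.random.default_rng(1)
stream = rng.normal(size=5000)             # stand-in for a raw sensor stream
X = window_features(stream)
y = rng.integers(0, 2, size=len(X))        # stand-in gesture/intent labels

clf = RandomForestClassifier(random_state=0).fit(X, y)
print("recognized classes for first 3 windows:", clf.predict(X[:3]))
```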


2021 ◽  
Author(s):  
Quentin Lenouvel ◽  
Vincent Génot ◽  
Philippe Garnier ◽  
Benoit Lavraud ◽  
Sergio Toledo

Our understanding of the physical processes of magnetic reconnection has improved considerably thanks to data from the Magnetospheric Multiscale (MMS) mission. However, much work remains to better characterize the core of the reconnection process: the electron diffusion region (EDR). We previously developed a machine learning algorithm to automatically detect EDR candidates, in order to extend the list of events identified in the literature. However, identifying the parameters most relevant to describing EDRs is complex, especially since some of the small-scale plasma/field parameters show limitations in certain configurations, such as low particle densities or large guide fields. In this study, we perform a statistical study of previously reported dayside EDRs as well as newly reported EDR candidates found using machine learning methods. We also show different single- and multi-spacecraft parameters that can be used to better identify dayside EDRs in time series from MMS data recorded at the magnetopause. Finally, we analyze the link between the guide field and the strength of the energy conversion around each EDR.
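
A hedged sketch of the general detection approach: label time intervals of spacecraft data as EDR or non-EDR and train a classifier on per-interval features. The feature proxies named in the comments follow the EDR literature, but the data and model below are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative EDR-candidate classifier over labeled time intervals.
# Data is synthetic, not MMS measurements.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 300
features = np.column_stack([
    rng.normal(size=n),    # proxy: electron agyrotropy
    rng.normal(size=n),    # proxy: J . E' (energy conversion rate)
    rng.normal(size=n),    # proxy: electron temperature anisotropy
])
labels = rng.integers(0, 2, size=n)        # 1 = EDR candidate interval

clf = GradientBoostingClassifier(random_state=0).fit(features, labels)
print("EDR-candidate flags:", clf.predict(features[:5]))
```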


Machine learning is a branch of artificial intelligence that is gaining importance in the 21st century: with increasing processing speeds and the miniaturization of sensors, the applications of artificial intelligence and cognitive technologies are growing rapidly. An array of ultrasonic sensors (HC-SR04) is placed in different directions, collecting data for a particular interval of time during a particular day. The acquired sensor values are subjected to pre-processing, data analytics, and visualization. The prepared data is then split into training and test sets. A prediction model is designed using logistic regression and linear regression, and the models are compared on accuracy, F1 score, and precision.
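
A minimal sketch of the evaluation pipeline just described, assuming synthetic stand-in readings for the HC-SR04 array: train/test split, logistic regression, and the stated metrics.

```python
# Illustrative version of the described pipeline. The readings and the
# obstacle label are synthetic placeholders for the sensor-array data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, f1_score

rng = np.random.default_rng(3)
X = rng.uniform(2, 400, size=(500, 4))     # 4 sensors, distances in cm
y = (X.min(axis=1) < 100).astype(int)      # toy label: obstacle nearby

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
pred = model.predict(X_test)

print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("F1 score :", f1_score(y_test, pred))
```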


2021 ◽  
Author(s):  
Arvind Thorat

This paper describes how machine learning algorithms can be applied to cyber security, for example to detect malware and botnets, and how to recognize strong passwords for a system. A detailed implementation of artificial intelligence and machine learning algorithms is presented.
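
As a hedged illustration of the malware-detection idea, here is a sketch that represents each file by hypothetical numeric features and trains a classifier; it is not the paper's implementation.

```python
# Illustrative malware classifier. Per-file features (entropy, imported-API
# count, file size) and labels are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
X = rng.uniform(size=(200, 3)) * [8.0, 300, 5000]  # entropy, APIs, size (KB)
y = rng.integers(0, 2, size=200)                   # 1 = malware (label)

clf = RandomForestClassifier(random_state=0).fit(X, y)
print("malware flags for 3 new files:", clf.predict(X[:3]))
```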


Author(s):  
Jingjing Hu

To explore the adoption of artificial intelligence (AI) technology in the field of teacher teaching evaluation, a machine learning algorithm is proposed to construct a teaching evaluation model that suits the current educational model and can help colleges and universities address existing problems in teaching. First, the problems in the current teaching evaluation system are set out and a novel teaching evaluation model is designed. Then, the relevant theories and techniques required to build the model are introduced. Finally, experiments are carried out to find an appropriate machine learning algorithm and to optimize the resulting weighted naive Bayes (WNB) algorithm, which is compared with the traditional naive Bayes (NB) algorithm and the back propagation (BP) algorithm. The results reveal that the average classification accuracy of the WNB algorithm is 0.817, compared with 0.751 for NB; against BP, the WNB algorithm achieves a classification accuracy of 0.800 versus 0.680 for BP. It is therefore shown that the WNB algorithm performs favorably in the teaching evaluation model.
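
Since the paper's exact weighting scheme is not given here, the following is a hedged sketch of the weighted naive Bayes idea: the usual Gaussian NB log-likelihood with a per-feature weight applied to each term. The weights and data are illustrative.

```python
# Illustrative weighted Gaussian naive Bayes:
#   log p(c|x) ~ log p(c) + sum_j w_j * log N(x_j; mu_cj, var_cj)
import numpy as np

class WeightedGaussianNB:
    def fit(self, X, y, weights):
        self.classes = np.unique(y)
        self.w = np.asarray(weights, dtype=float)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9
                             for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # Per-class, per-sample, per-feature Gaussian log-likelihoods,
        # weighted per feature before summing.
        ll = -0.5 * (np.log(2 * np.pi * self.var[:, None, :])
                     + (X[None] - self.mu[:, None, :]) ** 2
                     / self.var[:, None, :])
        scores = np.log(self.prior)[:, None] + (self.w * ll).sum(axis=2)
        return self.classes[scores.argmax(axis=0)]

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 3))
y = (X[:, 0] > 0).astype(int)              # toy labels
wnb = WeightedGaussianNB().fit(X, y, weights=[2.0, 1.0, 0.5])
print("training accuracy:", (wnb.predict(X) == y).mean())
```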

