The Use of Neural Networks and Genetic Algorithms to Control Low Rigidity Shafts Machining

Sensors, 2020, Vol 20 (17), pp. 4683
Author(s): Antoni Świć, Dariusz Wołos, Arkadiusz Gola, Grzegorz Kłosowski

The article presents an original machine-learning-based approach for automated control of the machining of low-rigidity shafts. Three models of hybrid controllers based on different types of neural networks and genetic algorithms were developed. In this study, an objective function optimized by a genetic algorithm was replaced with a neural network trained on real-life data. The task of the genetic algorithm is to select the optimal values of the input parameters of the neural network so as to ensure minimum deviation. Both the input vector values and the neural network's output values are real numbers, which means the problem under consideration is a regression problem. The performance of three types of neural networks was analyzed: a classic multilayer perceptron network, a nonlinear autoregressive network with exogenous input (NARX), and a deep recurrent long short-term memory (LSTM) network. Algorithmic machine learning methods were used to achieve a high level of automation of the control process. By training the network on data from real measurements, we were able to control the reliability of the turning process, taking into account many factors that are usually overlooked during mathematical modelling. Positive results of the experiments confirm the effectiveness of the proposed method for controlling low-rigidity shaft turning.
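As an illustration of the loop the abstract describes, the sketch below couples a simple genetic algorithm with a surrogate standing in for the trained network; the `deviation_model` function, the parameter names, and all bounds are hypothetical placeholders, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for the trained neural network: maps machining parameters
# to a predicted shaft deviation (the quantity to be minimized).
def deviation_model(x):
    return np.sum((x - np.array([0.12, 180.0, 0.8]))**2, axis=1)

BOUNDS = np.array([[0.05, 0.3],     # feed [mm/rev] (illustrative)
                   [100.0, 300.0],  # cutting speed [m/min] (illustrative)
                   [0.2, 2.0]])     # depth of cut [mm] (illustrative)

POP, GENS, MUT = 40, 60, 0.05

pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(POP, len(BOUNDS)))
for _ in range(GENS):
    fit = deviation_model(pop)                   # lower deviation = fitter
    parents = pop[np.argsort(fit)[:POP // 2]]    # truncation selection
    idx = rng.integers(0, len(parents), size=(POP, 2))
    alpha = rng.random((POP, 1))                 # blend crossover
    children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
    children += rng.normal(0, MUT, children.shape) * (BOUNDS[:, 1] - BOUNDS[:, 0])
    pop = np.clip(children, BOUNDS[:, 0], BOUNDS[:, 1])

best = pop[np.argmin(deviation_model(pop))]
print("parameters minimizing predicted deviation:", best)
```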

1997, Vol 1 (2), pp. 345-356
Author(s): Z. Rao, D. G. Jamieson

Abstract. The increasing incidence of groundwater pollution has led to recognition of a need to develop objective techniques for designing remediation schemes. This paper outlines one such possibility for determining how many abstraction/injection wells are required, where they should be located, etc., with regard to minimising the overall cost. To that end, an artificial neural network is used in association with a 2-D or 3-D groundwater simulation model to determine the performance of different combinations of abstraction/injection wells. Thereafter, a genetic algorithm is used to identify which of these combinations offers the least-cost solution to achieve the prescribed residual levels of pollutant within whatever timescale is specified. The resultant hybrid algorithm has been shown to be effective for a simplified but nevertheless representative problem; based on the results presented, it is expected the methodology developed will be equally applicable to large-scale, real-world situations.
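A minimal sketch of the hybrid idea, assuming a binary on/off chromosome per candidate well and a placeholder `residual_pollutant` surrogate in place of the trained ANN/groundwater model; the costs, target level, and penalty weight are illustrative, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)
N_WELLS = 12                                  # candidate well sites (illustrative)
WELL_COST = rng.uniform(1.0, 3.0, N_WELLS)    # cost per well (illustrative)

# Placeholder for the ANN trained on groundwater-model runs: maps a
# well on/off pattern to a predicted residual pollutant concentration.
def residual_pollutant(pattern):
    return 10.0 * np.exp(-pattern.sum() / 4.0)

TARGET = 1.0                                  # prescribed residual level

def fitness(pattern):
    cost = (pattern * WELL_COST).sum()
    excess = max(0.0, residual_pollutant(pattern) - TARGET)
    return cost + 100.0 * excess              # heavy penalty for missing target

pop = rng.integers(0, 2, (30, N_WELLS))
for _ in range(80):
    scores = np.array([fitness(p) for p in pop])
    pop = pop[np.argsort(scores)]             # best patterns first
    for i in range(15, 30):                   # replace the worst half
        a, b = pop[rng.integers(0, 15, 2)]
        cut = rng.integers(1, N_WELLS)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(N_WELLS) < 0.05            # bit-flip mutation
        pop[i] = np.where(flip, 1 - child, child)

print("least-cost pattern:", pop[0], "cost:", (pop[0] * WELL_COST).sum())
```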


2020
Author(s): Alisson Steffens Henrique, Vinicius Almeida dos Santos, Rodrigo Lyra

There are several challenges when modeling artificial intelligence methods for autonomous players (bots) in games. NEAT is one of the models that, combining genetic algorithms and neural networks, seeks to describe bot behavior more intelligently. In NEAT, a neural network is used for decision making, taking relevant inputs from the environment and producing decisions in real time. At a more abstract level, a genetic algorithm drives the learning of the neural network's weights, layers, and parameters. This paper proposes the use of relative position as the input of the neural network, based on the hypothesis that this will improve the bot's profit.
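The proposed input encoding can be illustrated independently of NEAT itself; in the sketch below the network topology and weights are fixed stand-ins (in NEAT both would be evolved), and all positions are hypothetical.

```python
import numpy as np

def relative_inputs(bot_pos, target_pos, obstacle_pos):
    """Encode the environment relative to the bot, as the paper proposes,
    instead of using absolute screen coordinates."""
    bot = np.asarray(bot_pos, dtype=float)
    return np.concatenate([np.asarray(target_pos) - bot,
                           np.asarray(obstacle_pos) - bot])

def decide(weights, inputs):
    # Tiny feedforward policy standing in for the NEAT-evolved network;
    # in NEAT both the topology and these weights would be evolved.
    hidden = np.tanh(weights["w1"] @ inputs)
    return np.tanh(weights["w2"] @ hidden)    # e.g., (move_x, move_y)

rng = np.random.default_rng(2)
weights = {"w1": rng.normal(0, 1, (6, 4)), "w2": rng.normal(0, 1, (2, 6))}
x = relative_inputs(bot_pos=(5, 5), target_pos=(9, 7), obstacle_pos=(4, 6))
print("action:", decide(weights, x))
```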


Author(s): Hang Wu, Jinwei Chen, Huisheng Zhang

Abstract. Monitoring and diagnosis of a gas turbine is a critical issue in the equipment maintenance field. Traditional diagnosis methods are established on the basis of physical models. However, the complexity and degradation of a gas turbine limit both the comprehensiveness and the accuracy of these physical models, making the diagnosis less effective. Therefore, data-driven models are introduced to supplement and revise the physical models. Benefitting from the prosperous development of machine learning, neural networks have been greatly improved and are widely used in various fields of data mining. Three neural networks, the Multilayer Perceptron, the Convolutional Neural Network and the Long Short-term Memory Network, are applied in establishing the data-driven model. Their training time and prediction accuracy are the two most important factors in judging effectiveness. An active real-time training scheme, in which the model trains and predicts simultaneously, is applied as the main modelling method for an on-line diagnosis system. Three periods are defined along the time line: the data preparation period, the model establishing period and the stable prediction period. From the three neural networks above, the most effective data-driven models corresponding to the last two periods are tested and selected, so as to ensure a high level of accuracy. When a high level of accuracy is demanded, neural networks typically need long computing times and large memory space during the data learning process. To avoid prediction delay and keep a rapid response to incoming faults, distributed training on a 1-master, 2-worker computer cluster is designed and applied in this system. Two types of data parallelism are realized on the cluster through Apache Spark and shell scripts for Linux. Compared with each other and with the local training mode, the results show that distributing the data first and averaging the parameters afterwards achieves a better outcome in both accuracy and training time.
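The parameter-averaging flavour of data parallelism described at the end ("distribute data first, average parameters last") can be sketched without Spark; the toy model, shard count, and synchronization schedule below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 8))               # sensor features (synthetic)
w_true = rng.normal(size=8)
y = X @ w_true + rng.normal(0, 0.1, 2000)    # monitored target (synthetic)

def local_sgd(Xs, ys, w, lr=0.01, epochs=5):
    # Each "worker" refines its own copy of the model on its data shard.
    for _ in range(epochs):
        for i in range(len(Xs)):
            grad = (Xs[i] @ w - ys[i]) * Xs[i]
            w = w - lr * grad
    return w

# Master: dispense the data to two workers first ...
shards = np.array_split(np.arange(2000), 2)
w = np.zeros(8)
for _ in range(3):                           # synchronization rounds
    locals_ = [local_sgd(X[s], y[s], w.copy()) for s in shards]
    w = np.mean(locals_, axis=0)             # ... then average the parameters

print("error vs true weights:", np.linalg.norm(w - w_true))
```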


2021, Vol 7 (2), pp. 113-121
Author(s): Firman Pradana Rachman

Everyone has an opinion about a product, a public figure, or a government policy, and these opinions spread across social media. Processing such opinion data is called sentiment analysis. Processing opinion data at this scale calls for more than classical machine learning alone; deep learning combined with NLP (Natural Language Processing) techniques can also be used. This study compares several deep learning models, such as CNN (Convolutional Neural Network), RNN (Recurrent Neural Networks), LSTM (Long Short-Term Memory) and several of their variants, for sentiment analysis of Amazon and Yelp product reviews.
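A minimal sketch of one of the compared variants, an LSTM sentiment classifier in Keras; the vocabulary size, sequence length, and random stand-in data replace the tokenized Amazon/Yelp reviews, whose preprocessing the abstract does not detail.

```python
import numpy as np
import tensorflow as tf

VOCAB, MAXLEN = 10000, 100
# Stand-ins for tokenized, padded reviews and 0/1 sentiment labels
x = np.random.randint(1, VOCAB, size=(500, MAXLEN))
y = np.random.randint(0, 2, size=(500,))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64),          # learn word embeddings
    tf.keras.layers.LSTM(64),                      # sequence encoder
    tf.keras.layers.Dense(1, activation="sigmoid") # positive/negative
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32, validation_split=0.2)
```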


2019, Vol 2019, pp. 1-9
Author(s): Young-Seob Jeong, Jiyoung Woo, Ah Reum Kang

With the increasing amount of data, the threat of malware keeps growing. Malicious actions embedded in nonexecutable documents (e.g., PDF files) can be especially dangerous, because they are difficult to detect and most users are not aware of this type of malicious attack. In this paper, we design a convolutional neural network to tackle malware detection in PDF files. We collect malicious and benign PDF files and manually label the byte sequences within the files. We intensively examine the structure of the input data and illustrate how we design the proposed network based on the characteristics of the data. The proposed network is designed to interpret high-level patterns among collectable spatial clues, thereby predicting whether the given byte sequence contains malicious actions or not. Through experimental results, we demonstrate that the proposed network outperforms several representative machine-learning models as well as other networks with different settings.
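A hedged sketch of a byte-level convolutional classifier of the general kind described, not the paper's architecture; the sequence length, filter sizes, and random stand-in data are all assumptions.

```python
import numpy as np
import tensorflow as tf

SEQ_LEN = 1024
# Stand-ins for labeled byte sequences extracted from PDF files
x = np.random.randint(0, 256, size=(400, SEQ_LEN))
y = np.random.randint(0, 2, size=(400,))          # 1 = malicious

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(256, 8),            # embed raw byte values
    tf.keras.layers.Conv1D(64, 7, activation="relu"),  # local byte patterns
    tf.keras.layers.GlobalMaxPooling1D(),         # strongest spatial clues
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32)
```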


2021, Vol 15
Author(s): Karun Thanjavur, Dionissios T. Hristopulos, Arif Babul, Kwang Moo Yi, Naznin Virji-Babul

Artificial neural networks (ANNs) are showing increasing promise as decision support tools in medicine, and particularly in neuroscience and neuroimaging. Recently, there has been increasing work on using neural networks to classify individuals with concussion using electroencephalography (EEG) data. However, to date, the need for research-grade equipment has limited applications in clinical environments. We recently developed a deep learning long short-term memory (LSTM) based recurrent neural network that classifies concussion from raw, resting-state data from 64 EEG channels, and achieved high classification accuracy. Here, we report on our efforts to develop a clinically practical system using a minimal subset of EEG sensors. EEG data from 23 athletes who had suffered a sport-related concussion and 35 non-concussed control athletes were used for this study. We tested and ranked each of the original 64 channels based on its contribution toward the concussion classification performed by the original LSTM network. The top-scoring channels were used to train and test a network with the same architecture as the previously trained network. We found that with only the six top-scoring channels, the classifier identified concussion with an accuracy of 94%. These results show that it is possible to classify concussion using raw, resting-state data from a small number of EEG sensors, constituting a first step toward developing portable, easy-to-use EEG systems that can be used in a clinical setting.
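The abstract does not state the exact ranking criterion; one common way to score per-channel contribution is ablation, sketched below with a placeholder classifier and synthetic data in place of the trained LSTM and the athletes' EEG.

```python
import numpy as np

rng = np.random.default_rng(4)
N_CH = 64
X = rng.normal(size=(58, N_CH, 256))     # 58 athletes x 64 channels x samples
y = rng.integers(0, 2, 58)               # 1 = concussed (synthetic labels)

# Placeholder for the trained classifier's accuracy on a dataset.
def accuracy(data, labels):
    score = data[:, :3, :].mean(axis=(1, 2)) > 0
    return float(np.mean(score.astype(int) == labels))

base = accuracy(X, y)
drops = []
for ch in range(N_CH):                   # ablate one channel at a time
    X_abl = X.copy()
    X_abl[:, ch, :] = 0.0
    drops.append(base - accuracy(X_abl, y))

top6 = np.argsort(drops)[::-1][:6]       # channels whose removal hurts most
print("top-6 channels to keep:", top6)
```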


Author(s): Taki Hasan Rafi

Recent advances in deep learning have elevated the multifaceted nature of various applications in this field. Artificial neural networks are by now a genuinely old technique in the vast area of computer science; the principal ideas and models are more than fifty years old. In this modern computing era, however, third-generation intelligent models have been introduced. In the biological neuron, membrane ion channels control the flow of ions across the membrane by opening and closing in response to voltage changes caused by intrinsic currents and externally applied signals. The Spiking Neural Network (SNN), a comprehensive third-generation model, is closing the distance between deep learning, machine learning, and neuroscience in a biologically inspired manner, connecting neuroscience and machine learning to establish highly efficient computing. Spiking neural networks operate using spikes, which are discrete events occurring at points in time, as opposed to continuous values. This paper is a review of the biologically inspired spiking neural network and its applications in different areas. The author aims to present a brief introduction to SNNs, covering their mathematical structure, applications, and implementation. The paper also gives an overview of machine learning, deep learning, and reinforcement learning. This review can help advanced artificial intelligence researchers gain a compact intuition of spiking neural networks.
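As a concrete instance of the spiking behaviour described, here is a minimal leaky integrate-and-fire neuron, the simplest common SNN unit; the time constants and input current are arbitrary illustrative values.

```python
import numpy as np

# Leaky integrate-and-fire neuron: the membrane potential integrates the
# input current, decays ("leaks"), and emits a discrete spike at threshold.
dt, T = 1e-3, 0.5                 # time step [s], duration [s]
tau, v_rest, v_th, v_reset = 20e-3, 0.0, 1.0, 0.0
steps = int(T / dt)
I = 1.2 * np.ones(steps)          # constant input current (arbitrary units)

v, spikes = v_rest, []
for t in range(steps):
    v += (-(v - v_rest) + I[t]) * (dt / tau)
    if v >= v_th:                 # threshold crossing -> spike event
        spikes.append(t * dt)
        v = v_reset               # reset after the spike

print(f"{len(spikes)} spikes; first at {spikes[0]:.3f}s" if spikes else "no spikes")
```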


Water, 2020, Vol 12 (3), pp. 912
Author(s): Tianyu Song, Wei Ding, Haixing Liu, Jian Wu, Huicheng Zhou, ...

As a revolutionary tool leading to substantial changes across many areas, Machine Learning (ML) techniques have obtained growing attention in the field of hydrology due to their potential to forecast time series. Deep Learning (DL), a subfield of ML, is more concerned with datasets, algorithms and layered structures. Despite numerous applications of novel ML/DL techniques in discharge simulation, the uncertainty involved in ML/DL modeling has not drawn much attention, although it is an important issue. In this study, a framework is proposed to quantify the uncertainty contributions of the sample set, ML approach, ML architecture and their interactions to multi-step time-series forecasting, based on analysis of variance (ANOVA) theory. A discharge simulation using Recurrent Neural Networks (RNNs) is then taken as an example. The Long Short-Term Memory (LSTM) network, a state-of-the-art DL approach, was selected due to its outstanding performance in time-series forecasting and compared with a simple RNN. In addition, a novel discharge forecasting architecture was designed by combining hydrological expertise with a stacked DL structure, and compared with a conventional design. Taking hourly discharge simulations of the Anhe (China) catchment as a case study, we constructed five sample sets, chose two RNN approaches and designed two ML architectures. The results indicate that none of the investigated uncertainty sources are negligible and that the influence of each source varies with lead time and discharge. LSTM demonstrates its superiority in discharge simulations, and the ML architecture is as important as the ML approach. In addition, some of the uncertainty is attributable to interactions rather than individual modeling components. The proposed framework can both reveal uncertainty quantification in ML/DL modeling and provide references for ML approach evaluation and architecture design in discharge simulations. This indicates that uncertainty quantification is an indispensable task for a successful application of ML/DL.
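The ANOVA-based attribution can be illustrated with a toy two-factor decomposition (approach x architecture, with sample sets as replicates); the paper treats the sample set as a factor in its own right, and the simulated errors below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
# Simulated forecast errors: 2 RNN approaches x 2 architectures x 5 sample sets
err = rng.gamma(2.0, 1.0, size=(2, 2, 5))

grand = err.mean()
a_eff = err.mean(axis=(1, 2)) - grand            # ML approach main effect
b_eff = err.mean(axis=(0, 2)) - grand            # architecture main effect
cell = err.mean(axis=2)                          # cell means over sample sets
inter = cell - grand - a_eff[:, None] - b_eff[None, :]

n_s = err.shape[2]
ss_a = err.shape[1] * n_s * (a_eff**2).sum()     # approach sum of squares
ss_b = err.shape[0] * n_s * (b_eff**2).sum()     # architecture sum of squares
ss_ab = n_s * (inter**2).sum()                   # interaction term
ss_tot = ((err - grand)**2).sum()
ss_res = ss_tot - ss_a - ss_b - ss_ab            # incl. sample-set variability

for name, ss in [("approach", ss_a), ("architecture", ss_b),
                 ("interaction", ss_ab), ("residual", ss_res)]:
    print(f"{name:>12}: {100 * ss / ss_tot:5.1f}% of total variance")
```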


Author(s): Emeric Sibieude, Akash Khandelwal, Pascal Girard, Jan S. Hesthaven, Nadia Terranova

Abstract. A fit-for-purpose structural and statistical model is the first major requirement in population pharmacometric model development. In this manuscript we discuss how this complex and computationally intensive task could benefit from supervised machine learning algorithms. We compared the classical pharmacometric approach with two machine learning methods, genetic algorithms and neural networks, in different scenarios based on simulated pharmacokinetic data. Genetic algorithm performance was assessed using a fitness function based on log-likelihood, whilst neural networks were trained using mean square error or binary cross-entropy loss. Machine learning provided a selection based only on statistical rules and achieved accurate results. The minimization process of the genetic algorithm successfully allowed the algorithm to select plausible models. Neural network classification tasks achieved the most accurate results. Neural network regression tasks were less precise than the neural network classification and genetic algorithm methods. The computational gain obtained by using machine learning was substantial, especially in the case of neural networks. We demonstrated that machine learning methods can greatly increase the efficiency of pharmacokinetic population model selection in the case of large datasets or complex models requiring long run times. Our results suggest that machine learning approaches can achieve a fast first selection of models, which can be followed by more conventional pharmacometric approaches.
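As a sketch of the neural-network classification task, the toy example below trains a classifier to choose between one- and two-compartment structural models from simulated concentration-time profiles; the kinetics, sampling times, and all parameters are hypothetical, not the manuscript's scenarios.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
t = np.linspace(0.25, 24, 12)                  # sampling times [h]

def profile(two_cpt):
    # Simulated concentration-time profile with lognormal noise
    if two_cpt:
        c = 8 * np.exp(-1.2 * t) + 2 * np.exp(-0.15 * t)
    else:
        c = 10 * np.exp(-0.3 * t)
    return c * np.exp(rng.normal(0, 0.15, t.size))

X = np.array([profile(two_cpt=i % 2 == 1) for i in range(600)])
y = np.arange(600) % 2                         # 0 = 1-cpt, 1 = 2-cpt

X_tr, X_te, y_tr, y_te = train_test_split(np.log(X), y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```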


1995, Vol 06 (03), pp. 299-316
Author(s): Peter G. Korning

In the neural network/genetic algorithm community, rather limited success in the training of neural networks by genetic algorithms has been reported. Whitley et al. (1991) claim that, due to “the multiple representations problem”, genetic algorithms will not be able to effectively train multilayer perceptrons whose chromosomal weight representation exceeds 300 bits. In the following paper, using a “real-life problem” known to be non-trivial, and through a comparison with “classic” neural net training methods, I try to show that the modest success of applying genetic algorithms to the training of perceptrons is caused not so much by the “multiple representations problem” as by the fact that available problem-specific knowledge is often ignored, making the problem unnecessarily tough for the genetic algorithm to solve. Particular success is obtained through a new fitness function, which takes into account the fact that the search performed by a genetic algorithm is holistic, and not local as is usually the case when perceptrons are trained by traditional methods.
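The setup can be sketched as follows: the genome directly encodes the perceptron's weights, and fitness is computed holistically over the whole training set. The task, network size, and mutation-only reproduction below are illustrative, and the paper's actual fitness function is more elaborate than this plain accuracy score.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, (200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)      # non-trivial, XOR-like task

def forward(genome, X):
    # Decode a flat 25-gene genome into a 2-6-1 perceptron's weights
    w1, b1 = genome[:12].reshape(2, 6), genome[12:18]
    w2, b2 = genome[18:24], genome[24]
    h = np.tanh(X @ w1 + b1)
    return 1 / (1 + np.exp(-(h @ w2 + b2)))

def fitness(genome):
    # Holistic: one score over the whole training set, not per-pattern errors
    return np.mean((forward(genome, X) > 0.5) == y)

pop = rng.normal(0, 1, (50, 25))
for _ in range(200):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-10:]]       # keep the 10 best genomes
    children = elite[rng.integers(0, 10, 40)] + rng.normal(0, 0.2, (40, 25))
    pop = np.vstack([elite, children])          # mutation-only reproduction

print("best training accuracy:", max(fitness(g) for g in pop))
```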

