Hyperparameter Optimization Techniques for Designing Software Sensors Based on Artificial Neural Networks

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8435
Author(s):  
Sebastian Blume ◽  
Tim Benedens ◽  
Dieter Schramm

Software sensors are playing an increasingly important role in current vehicle development. Such soft sensors can be based on both physical and data-based modeling. Data-driven modeling builds a model purely on captured data, which means that no system knowledge is required for the application. At the same time, hyperparameters have a particularly large influence on the quality of the model. These parameters influence the architecture and the training process of the machine learning algorithm. This paper compares different hyperparameter optimization methods for the design of a roll angle estimator based on an artificial neural network. The comparison is drawn on a pre-generated simulation data set created with ISO standard driving maneuvers. Four optimization methods are compared. Random Search and Hyperband are two similar methods based purely on randomness, whereas Bayesian Optimization and the genetic algorithm are knowledge-based methods, i.e., they process information from previous iterations. The objective function for all optimization methods is the root mean square error between the output of the training process and the reference data generated in the simulation. To guarantee a meaningful result, k-fold cross-validation is integrated into the training process. Finally, all methods are applied to the predefined parameter space. It is shown that the knowledge-based methods lead to better results. In particular, the genetic algorithm produces promising solutions in this application.
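The setup described above can be sketched in miniature: an objective function that returns the k-fold cross-validated RMSE for a candidate hyperparameter set, explored here by Random Search. This is an illustrative stand-in, not the paper's implementation: the "model" is a synthetic quadratic so the example runs on its own, and the `lr`/`width` names and ranges are hypothetical.

```python
import random
import statistics

# Hypothetical stand-in for a k-fold cross-validated RMSE objective: in the
# paper this would train the roll-angle ANN once per fold; here each "fold"
# evaluates a synthetic error surface with a little noise.
def cv_rmse(params, k=5, seed=0):
    rng = random.Random(seed)
    fold_errors = []
    for _ in range(k):
        noise = rng.gauss(0.0, 0.01)
        err = (params["lr"] - 0.01) ** 2 + 1e-4 * (params["width"] - 64) ** 2
        fold_errors.append((err + noise) ** 2)
    return statistics.mean(fold_errors) ** 0.5

def random_search(n_trials, seed=42):
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        # sample a candidate uniformly from the predefined parameter space
        candidate = {"lr": rng.uniform(1e-4, 0.1), "width": rng.randrange(8, 257)}
        score = cv_rmse(candidate)
        if score < best_score:
            best_params, best_score = candidate, score
    return best_params, best_score

best_params, best_score = random_search(200)
```

The knowledge-based methods compared in the paper would replace the uniform sampling step with a proposal informed by previous iterations.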

2014 ◽  
Vol 17 (1) ◽  
pp. 56-74 ◽  
Author(s):  
Gurjeet Singh ◽  
Rabindra K. Panda ◽  
Marc Lamers

The reported study was undertaken in a small agricultural watershed, namely Kapgari in Eastern India, with a drainage area of 973 ha. The watershed was subdivided into three sub-watersheds on the basis of drainage network and land topography. An attempt was made to relate the continuously monitored runoff data from the sub-watersheds and the whole watershed with the rainfall and temperature data using the artificial neural network (ANN) technique. The study also evaluated the bias in the prediction of daily runoff with a shorter training data set using different resampling techniques with the ANN modeling. A 10-fold cross-validation (CV) technique was used to find the optimum number of neurons in the hidden layer and to avoid neural network over-fitting during training with the shorter data set. The results illustrated that, using the 10-fold CV method, the ANN models developed with the shorter training data set avoid over-fitting during training. Moreover, the bias was investigated using a bootstrap-resampling-based ANN (BANN) for the short training data set. In comparison with the 10-fold CV technique, the BANN is more efficient at solving the problems of over-fitting and under-fitting when training models on the shorter data set.
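The bootstrap idea behind the BANN can be illustrated with a minimal sketch: draw resamples of a short training record, fit a model to each, and use the spread of predictions to gauge bias with limited data. Everything here is a hypothetical stand-in; the rainfall-runoff "model" is a trivial constant-fraction fit rather than an ANN.

```python
import random
import statistics

# Toy "model": runoff as a constant fraction of rainfall (stand-in for the ANN).
def fit_mean_ratio(pairs):
    return statistics.mean(r / p for p, r in pairs)

# Bootstrap: refit on resamples drawn with replacement from the short record,
# then summarize the resulting ensemble of predictions.
def bootstrap_predictions(pairs, rainfall, n_boot=200, seed=1):
    rng = random.Random(seed)
    preds = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        preds.append(fit_mean_ratio(sample) * rainfall)
    return statistics.mean(preds), statistics.stdev(preds)

# Made-up (rainfall, runoff) pairs standing in for the monitored record.
data = [(10, 3.1), (20, 6.4), (15, 4.2), (30, 9.7), (25, 7.9)]
mean_pred, spread = bootstrap_predictions(data, rainfall=18)
```

The spread of the bootstrap ensemble is what indicates how strongly a short training record biases the fitted model.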


2021 ◽  
Vol 11 (19) ◽  
pp. 8940
Author(s):  
Wonseok Choi ◽  
Wonseok Yang ◽  
Jaeyoung Na ◽  
Giuk Lee ◽  
Woochul Nam

For gait phase estimation, time-series data of lower-limb motion can be segmented according to time windows. Time-domain features can then be calculated from the signal enclosed in a time window. A set of time-domain features is used for gait phase estimation. In this approach, the components of the feature set and the length of the time window are influential parameters for gait phase estimation. However, optimal parameter values, which determine a feature set and its values, can vary across subjects. Previously, these parameters were determined empirically, which led to a degraded estimation performance. To address this problem, this paper proposes a new feature extraction approach. Specifically, the components of the feature set are selected using a binary genetic algorithm, and the length of the time window is determined through Bayesian optimization. In this approach, the two optimization techniques are integrated to conduct a dual optimization task. The proposed method is validated using data from five walking and five running motions. For walking, the proposed approach reduced the gait phase estimation error from 1.284% to 0.910%, while for running, the error decreased from 1.997% to 1.484%.
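The feature-subset half of the dual optimization above can be sketched with a minimal binary genetic algorithm; the Bayesian optimization of the window length is omitted here. The fitness function is a synthetic stand-in that rewards selecting the "useful" features of a toy problem, whereas in the paper it would be the gait phase estimation error for the selected time-domain features.

```python
import random

USEFUL = {0, 2, 5}   # hypothetical informative feature indices
N_FEATURES = 8

# Stand-in fitness: reward useful features, penalize superfluous ones.
def fitness(mask):
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & USEFUL) - 0.5 * len(chosen - USEFUL)

def binary_ga(pop_size=30, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:              # bit-flip mutation
                child[rng.randrange(N_FEATURES)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = binary_ga()
```

Each chromosome is a bit mask over the candidate feature set, which is what makes the binary encoding a natural fit for subset selection.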


2020 ◽  
Vol 10 (24) ◽  
pp. 9110 ◽  
Author(s):  
José Luis Olazagoitia ◽  
Jesus Angel Perez ◽  
Francisco Badea

Accurate modeling of tire characteristics is one of the most challenging tasks. Many mathematical models can be used to fit measured data. Identification of the parameters of these models usually relies on least squares optimization techniques. Different researchers have shown that the proper selection of an initial set of parameters is key to a successful fitting. Besides, the mathematical process to identify the right parameters is, in some cases, quite time-consuming and not adequate for fast computing. This paper investigates the possibility of using Artificial Neural Networks (ANN) to reliably identify tire model parameters. In this case, Pacejka's "Magic Formula" has been chosen for the identification due to its complex mathematical form, which, in principle, could make learning more difficult than for other formulations. The proposed methodology is based on the creation of a sufficiently large, error-free training dataset by randomly choosing the MF parameters within a range compatible with reality. The results obtained in this paper suggest that ANNs can directly identify parameters in tire models for real test data without the need for complicated cost functions, iterative fitting or the definition of an initial iteration point. The identification errors are normally very low for every parameter, and the fitting time is reduced to a few milliseconds for any new data set, which makes this methodology very appropriate for applications where computing time must be kept to a minimum.
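The training-set generation step described above can be sketched as follows: sample Magic Formula parameters within plausible ranges, evaluate the curve at fixed slip points, and pair (curve, parameters) as one training example for the ANN. The parameter ranges and slip grid below are illustrative assumptions, not the ones used in the paper.

```python
import math
import random

# Pacejka's Magic Formula in its basic form:
# y = D * sin(C * atan(B*x - E*(B*x - atan(B*x))))
def magic_formula(x, B, C, D, E):
    return D * math.sin(C * math.atan(B * x - E * (B * x - math.atan(B * x))))

def make_dataset(n_samples, n_points=32, seed=0):
    rng = random.Random(seed)
    slips = [i / (n_points - 1) * 0.3 for i in range(n_points)]  # 0..0.3 slip
    data = []
    for _ in range(n_samples):
        params = (rng.uniform(4.0, 12.0),   # B: stiffness factor (assumed range)
                  rng.uniform(1.2, 1.8),    # C: shape factor
                  rng.uniform(0.8, 1.2),    # D: peak factor
                  rng.uniform(-1.0, 1.0))   # E: curvature factor
        curve = [magic_formula(s, *params) for s in slips]
        data.append((curve, params))        # ANN input -> target parameters
    return data

dataset = make_dataset(1000)
```

The trained network then inverts this mapping, reading the parameters directly off a measured curve instead of running an iterative least squares fit.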


2017 ◽  
Vol 47 (1) ◽  
pp. 68-81
Author(s):  
Anierudh Vishwanathan

This paper suggests a novel design of a multi-cylinder internal combustion engine crankshaft which converts the excess torque provided by the engine into vehicle speed. Transmission gear design has been incorporated into the crankshaft design to enable the vehicle to attain the same speed and torque at lower RPM, resulting in improved fuel economy provided the operating power remains the same. The paper also depicts the reduction in the fuel consumption of the engine due to the proposed crankshaft design. To accommodate the wear and tear of the crankshaft due to the gearing action, design parameters such as crankpin diameter, journal bearing diameter, crankpin fillet radius and journal bearing fillet radius have been optimized for output parameters such as stress, calculated using finite element analysis with ANSYS Mechanical APDL, and minimum volume, using an integrated Artificial Neural Network-multi-objective genetic algorithm. The data set for the optimization process was generated using the Latin Hypercube Sampling technique.
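The Latin Hypercube Sampling step can be sketched minimally: each design variable's range is split into n equal strata, and every sample takes exactly one stratum per variable, so the design space is covered evenly with few points. The variable names and bounds below are hypothetical stand-ins for the crankshaft parameters, not values from the paper.

```python
import random

def latin_hypercube(n, bounds, seed=0):
    rng = random.Random(seed)
    # one independently shuffled list of stratum indices per dimension
    strata = [list(range(n)) for _ in bounds]
    for s in strata:
        rng.shuffle(s)
    samples = []
    for i in range(n):
        point = []
        for d, (lo, hi) in enumerate(bounds):
            u = (strata[d][i] + rng.random()) / n   # jitter within the stratum
            point.append(lo + u * (hi - lo))
        samples.append(point)
    return samples

bounds = [(40.0, 60.0),   # crankpin diameter, mm (hypothetical range)
          (50.0, 70.0),   # journal bearing diameter, mm
          (2.0, 5.0),     # crankpin fillet radius, mm
          (2.0, 5.0)]     # journal bearing fillet radius, mm
design_points = latin_hypercube(20, bounds)
```

Each of the 20 design points would then be evaluated by the finite element model to build the ANN training data.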


2015 ◽  
Vol 80 (2) ◽  
pp. 253-264 ◽  
Author(s):  
N. Anu ◽  
S. Rangabhashiyam ◽  
Antony Rahul ◽  
N. Selvaraju

The chemical mass balance (CMB) model has been extensively used to determine source contributions for particulate matter (aerodynamic diameters less than 10 μm and 2.5 μm) in air quality analysis. A comparison of the source contributions estimated from three CMB models (CMB 8.2, CMB-fmincon and CMB-GA) has been carried out through optimization techniques, namely 'fmincon' (CMB-fmincon) and the genetic algorithm (CMB-GA), using MATLAB. The proposed approach has been validated using San Joaquin Valley Air Quality Study (SJVAQS) California Fresno and Bakersfield PM10 and PM2.5 data, followed by Oregon PM10 data. The source contributions estimated from CMB-GA gave better source interpretation than CMB 8.2 and CMB-fmincon. The performance accuracy of the three CMB approaches was validated using R-square, reduced chi-square and percentage mass tests. The R-square (0.90, 0.67 and 0.81, 0.83), chi-square (0.36, 0.66 and 0.65, 0.43) and percentage mass (67.36%, 55.03% and 94.24%, 74.85%) of CMB-GA showed high correlation for the PM10 and PM2.5 Fresno and Bakersfield data, respectively. To complete the assessment, the proposed methodology was benchmarked against the Portland, Oregon PM10 data, with the best fit again obtained from CMB-GA: R-square (0.99), chi-square (1.6) and percentage mass (94.4%). The study therefore revealed that CMB with genetic algorithm optimization holds better stability in determining source contributions.
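The CMB fit being optimized can be sketched in a few lines: measured species concentrations are modeled as c_i = Σ_j f_ij s_j, where the source profiles f_ij are known and the non-negative contributions s_j are sought. A tiny mutation-only genetic algorithm (a simplification of the paper's CMB-GA) searches for s minimizing the squared error. The profiles and concentrations below are made up for illustration.

```python
import random

# Two hypothetical source profiles over three chemical species.
PROFILES = [[0.5, 0.1, 0.4],
            [0.2, 0.6, 0.2]]
TRUE_S = [10.0, 5.0]
MEASURED = [sum(PROFILES[j][i] * TRUE_S[j] for j in range(2)) for i in range(3)]

# Sum of squared residuals between measured and reconstructed concentrations.
def sse(s):
    return sum(
        (MEASURED[i] - sum(PROFILES[j][i] * s[j] for j in range(2))) ** 2
        for i in range(3))

def ga_fit(pop_size=40, generations=100, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 20.0), rng.uniform(0.0, 20.0)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sse)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = [
            [max(0.0, g + rng.gauss(0.0, 0.5))  # Gaussian mutation, clipped >= 0
             for g in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=sse)

s_est = ga_fit()
```

The clipping step enforces the physical constraint that a source cannot contribute negative mass, which is one reason constrained optimizers such as fmincon and GA suit this problem.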


2021 ◽  
Author(s):  
Slaven Conevski ◽  
Massimo Guerrero ◽  
Axel Winterscheid ◽  
Nils Ruther

<p>Measuring and assessing bedload data is crucial for successful and efficient river management. Information about bedload transport and characteristics helps to describe the dynamics of river morphology and to evaluate the impacts on boat navigation, hydropower production, ecological systems and aquatic habitat.</p><p>Although acoustic Doppler current profilers (ADCPs) are designed to measure water velocities and discharges, they have been successfully used to measure some bedload characteristics, such as the apparent bedload velocity. The correlation between the apparent bedload velocity and the bedload transport rates measured by physical bedload samplers (e.g., pressure-difference samplers) has been examined, and relatively high correlations have been reported. Moreover, laboratory experiments have proven that there is a strong correlation between the bedload concentration and particle size distribution and the corrected backscattering strength obtained from ADCPs.</p><p>The bedload transport rates yielded from ADCP outputs are usually derived by regression model-fitting of the measured apparent velocity and the physically collected bedload samples taken at the same time and position. Alternatively, a semi-empirical kinematic approach is used, where the apparent bedload velocity is the main component and the bedload concentration is empirically estimated. However, the heterogeneous and sporadic motion of the bedload particles often leads to high uncertainty and weak performance of these approaches.</p><p>Machine learning offers a relatively simple and robust method that has the potential to describe the nonlinearity of the complex bedload motion, and so far it has not been exploited for predicting transport rates. This study implements artificial neural network techniques to develop a model for predicting bedload transport rates by using only ADCP data outputs as training data. 
Data processing techniques are used to extract relevant features from the corrected backscattering strength and apparent velocity obtained from the ADCPs. More than 60 features were derived from the ADCP dataset, and the most relevant features were selected through neighborhood component analysis. These features are used as inputs to a conventional supervised neural network architecture consisting of two hidden layers and 35 neurons. This model is used to capture the distribution of the ADCP features for each output (e.g., physically measured transport rates and grain size from bedload samples) in the training sample. The back-propagation algorithm (BPA) is still one of the most widely used learning algorithms and was therefore applied in the training process. The learning rate, the number of neurons and the number of hidden layers were optimized using Bayesian optimization techniques. The network was trained with more than 60 bedload samples and the corresponding 5-10 min time series of preprocessed ADCP data. The remaining samples were used for validation of the model. The validation resulted in correlation coefficients higher than 0.9, a significantly higher value than those of previously developed methodologies. To develop a more robust and stable ANN model, further testing of different training algorithms must be performed, different ANN architectures should be tested, and more data should be included.</p>
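The supervised network trained by back-propagation can be sketched in miniature. This is a deliberately reduced stand-in for the abstract's model: one hidden layer instead of two, a handful of neurons, and a synthetic target (a weighted sum of two inputs) standing in for ADCP features mapped to bedload transport rate.

```python
import math
import random

# Minimal one-hidden-layer regression network trained by back-propagation
# (stochastic gradient descent on squared error), implemented from scratch.
def train(samples, n_hidden=6, epochs=2000, lr=0.05, seed=0):
    rng = random.Random(seed)
    n_in = len(samples[0][0])
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in samples:
            h = [math.tanh(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
                 for j in range(n_hidden)]
            out = sum(w2[j] * h[j] for j in range(n_hidden)) + b2
            err = out - y                      # d(loss)/d(out) for 0.5*err^2
            for j in range(n_hidden):
                grad_pre = err * w2[j] * (1.0 - h[j] ** 2)  # back-propagated
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_pre
                for i in range(n_in):
                    w1[j][i] -= lr * grad_pre * x[i]
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
             for j in range(n_hidden)]
        return sum(w2[j] * h[j] for j in range(n_hidden)) + b2
    return predict

rng = random.Random(1)
data = [([a, b], 0.7 * a + 0.3 * b)
        for a, b in ((rng.random(), rng.random()) for _ in range(40))]
predict = train(data)
```

In the study, the learning rate and layer sizes fixed here would themselves be tuned by Bayesian optimization.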


Tech-E ◽  
2017 ◽  
Vol 1 (1) ◽  
pp. 37
Author(s):  
Rino -

Cancer is a major challenge for mankind. It can affect various parts of the body and can be detected in people of all ages, although the risk of cancer increases with age. Breast cancer is the most common cancer among women and also their largest cause of cancer death. Problems in the detection of breast cancer can result in patients undergoing unnecessary treatment and cost. In similar studies, several methods have been used, but difficulties remain because the shapes of cancer cells are nonlinear. Neural networks can address these problems, but a neural network is weak at determining its parameter values, so it needs to be optimized. The genetic algorithm is a good optimization method; therefore, the parameter values of the neural network are optimized using a genetic algorithm to obtain the best values. The GA-based Neural Network algorithm achieves higher accuracy than the plain Neural Network algorithm: accuracy improves from 95.42% for the Neural Network model to 96.85% for the GA-based Neural Network, a difference of 1.43%. It can therefore be concluded that applying genetic algorithm optimization techniques improves the accuracy of the Neural Network algorithm.
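The GA-based tuning described above can be sketched as follows: each chromosome encodes neural network hyperparameters and the fitness is classification accuracy. The accuracy function here is a synthetic stand-in peaked at an assumed "best" setting; in the study it would come from training and evaluating the network on the breast cancer data.

```python
import random

# Hypothetical accuracy surface over (learning rate, hidden-layer size);
# a stand-in for training/evaluating the classifier on real data.
def accuracy(lr, n_hidden):
    return 0.97 - 2.0 * (lr - 0.05) ** 2 - 1e-5 * (n_hidden - 20) ** 2

def ga_tune(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.001, 0.3), rng.randrange(2, 64)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: accuracy(*c), reverse=True)
        elite = pop[: pop_size // 2]            # keep the best half
        pop = elite + [
            (max(0.001, a[0] + rng.gauss(0.0, 0.01)),   # lr from parent a, mutated
             max(2, b[1] + rng.randrange(-4, 5)))       # size from parent b, mutated
            for a, b in (rng.sample(elite, 2) for _ in range(pop_size // 2))
        ]
    return max(pop, key=lambda c: accuracy(*c))

lr, n_hidden = ga_tune()
```

Crossover here simply takes one gene from each parent before mutation; the accuracy gap reported in the abstract is the kind of improvement this loop is meant to recover.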


2021 ◽  
Author(s):  
Nicholas Farouk Ali

The field of optimization has been and continues to be an area of significant importance in industry. From the financial, industrial, social and any other sector conceivable, people are interested in improving existing methodologies and products and/or in creating new ideas. Due to the growing need for humans to improve their lives and add efficiency to systems, optimization has been and still is an area of active research. Typically, optimization methods seek to improve rather than create new ideas. However, the ability of optimization methods to mold new ideas should not be ruled out, since optimized solutions usually lead to new designs, which are in most cases unique. Combinatorial optimization is the term used to define the method of finding the best sequence or combination of variables or elements in a large complex system in order to attain a particular objective. This thesis provides a panoramic view of optimization in general before zooming into a specific artificial intelligence technique in optimization. Detailed information on optimization techniques commonly used in mechanical engineering is first provided to ensure a clear understanding of the thesis. Moreover, the thesis highlights the differences and similarities, advantages and disadvantages of these techniques. After a brief study of the techniques entailed in optimization, an artificial intelligence algorithm, namely the genetic algorithm, was selected, developed, improved and later applied to a wide variety of mechanical engineering problems. Ample examples from various fields of engineering are provided to illustrate the versatility of genetic algorithms. The major focus of this thesis is therefore the application of genetic algorithms to solve a broad range of engineering problems. The viability of the genetic algorithm (GA) as an optimization tool for mechanical engineering applications is assessed and discussed. 
Comparisons between GA-generated results and results found in the literature are presented where possible to underscore the power of GA to solve problems. Moreover, the disadvantages and advantages of genetic algorithms are discussed based on the results obtained. The mechanical engineering applications studied include conceptual aircraft design, the design of truss structures under various constraints and loading conditions, and armour design using established penetration analytical models. Results show that the genetic algorithm developed is capable of handling a wide range of problems, is an efficient, cost-effective tool, and often provides superior results compared to other optimization methods found in the literature.


Geophysics ◽  
1996 ◽  
Vol 61 (2) ◽  
pp. 422-436 ◽  
Author(s):  
Zehui Huang ◽  
John Shimeld ◽  
Mark Williamson ◽  
John Katsube

Estimating permeability from well log information in uncored borehole intervals is an important yet difficult task encountered in many earth science disciplines. Most commonly, permeability is estimated from various well log curves using either empirical relationships or some form of multiple linear regression (MLR). More sophisticated, multiple nonlinear regression (MNLR) techniques are not as common because of difficulties associated with choosing an appropriate mathematical model and with analyzing the sensitivity of the chosen model to the various input variables. However, the recent development of a class of nonlinear optimization techniques known as artificial neural networks (ANNs) does much to overcome these difficulties. We use a back‐propagation ANN (BP-ANN) to model the interrelationships between spatial position, six different well logs, and permeability. Data from four wells in the Venture gas field (offshore eastern Canada) are organized into training and supervising data sets for BP-ANN modeling. Data from a fifth well in the same field are retained as an independent data set for testing. When applied to this test data, the trained BP-ANN produces permeability values that compare well with measured values in the cored intervals. Permeability profiles calculated with the trained BP-ANN exhibit numerous low permeability horizons that are correlatable between the wells at Venture. These horizons likely represent important, intra‐reservoir barriers to fluid migration that are significant for future reservoir production plans at Venture. For discussion, we also derive predictive equations using conventional statistical methods (i.e., MLR, and MNLR) with the same data set used for BP-ANN modeling. These examples highlight the efficacy of BP-ANNs as a means of obtaining multivariate, nonlinear models for difficult problems such as permeability estimation.
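The distinctive part of the workflow above is the well-based data split: samples from four wells feed the training and supervising (early-stopping) sets, while a fifth well is held out entirely for testing. A minimal sketch, with hypothetical well names and an assumed supervising fraction:

```python
import random

def split_by_well(samples, test_well, supervise_frac=0.25, seed=0):
    rng = random.Random(seed)
    # the held-out well never appears in training or supervising data
    test = [s for s in samples if s["well"] == test_well]
    rest = [s for s in samples if s["well"] != test_well]
    rng.shuffle(rest)
    n_sup = int(len(rest) * supervise_frac)
    return rest[n_sup:], rest[:n_sup], test   # train, supervise, test

# Synthetic samples: 10 cored depths per well, six log values plus permeability.
samples = [{"well": w, "logs": [0.0] * 6, "perm": 1.0}
           for w in ("well-1", "well-2", "well-3", "well-4", "well-5")
           for _ in range(10)]
train, supervise, test = split_by_well(samples, test_well="well-5")
```

Splitting by well rather than by random sample is what makes the test a genuine check of spatial generalization between boreholes.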

