Waveform processing using neural network algorithms on the front-end electronics

2022 ◽  
Vol 17 (01) ◽  
pp. C01039
Author(s):  
S. Miryala ◽  
S. Mittal ◽  
Y. Ren ◽  
G. Carini ◽  
G. Deptuch ◽  
...  

In a multi-channel radiation detector readout system, waveform sampling, digitization, and raw data transmission to the data acquisition system constitute a conventional processing chain. The energy deposited on the sensor is estimated by extracting, from the raw data, peak amplitudes, areas under pulse envelopes, and signal start times or times of arrival. However, such quantities can also be estimated using machine learning algorithms running on the front-end Application-Specific Integrated Circuits (ASICs), often termed "edge computing". Edge computing offers enormous benefits, especially when the analytical forms are not fully known or the registered waveform suffers from noise and the imperfections of practical implementations. In this work, we aim to predict the peak amplitude from a single waveform snippet whose rising and falling edges contain only 3 to 4 samples. We thoroughly studied two well-accepted neural network algorithms, the Multi-Layer Perceptron (MLP) and the Convolutional Neural Network (CNN), by varying their model sizes. To better fit front-end electronics, neural network model reduction techniques, such as network pruning methods and variable-bit quantization approaches, were also studied. By combining pruning and quantization, our best-performing model has a size of 1.5 KB, down from the 16.6 KB of its full-model counterpart. It reaches a mean absolute error of 0.034, compared with 0.135 for a naive baseline. Such parameter-efficient and predictive neural network models establish the feasibility and practicality of deployment on front-end ASICs.
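
As an illustration of the compression techniques mentioned above, the sketch below prunes and quantizes a tiny peak-amplitude MLP in PyTorch; the snippet length, layer sizes, pruning ratio, and tooling are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): a tiny MLP that maps a short
# waveform snippet (assumed 8 samples) to a peak-amplitude estimate, then is
# compressed with magnitude pruning and dynamic int8 quantization.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class PeakMLP(nn.Module):
    def __init__(self, n_samples=8, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_samples, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),            # predicted peak amplitude
        )

    def forward(self, x):
        return self.net(x)

model = PeakMLP()

# Magnitude-based pruning: zero out 50% of the smallest weights per layer.
for module in model.net:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")       # make the pruning permanent

# Post-training dynamic quantization of the linear layers to int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Predict the peak amplitude of one 8-sample snippet.
snippet = torch.randn(1, 8)
print(quantized(snippet))
```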

Vibration ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 341-356
Author(s):  
Jessada Sresakoolchai ◽  
Sakdirat Kaewunruen

Various techniques have been developed to detect railway defects. One of the popular techniques is machine learning. This unprecedented study applies deep learning, a branch of machine learning, to detect and evaluate the severity of combined rail defects. The combined defects in the study are settlement and dipped joint. The features used to detect and evaluate the severity of combined defects are axle box accelerations simulated using a verified rolling stock dynamic behavior simulation called D-Track. A total of 1650 simulations are run to generate numerical data. The deep learning techniques used in the study are the deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN models. For the simplified data, features are extracted from the raw data: the weight of the rolling stock, the speed of the rolling stock, and three peak and three bottom accelerations from each of the two wheels of the rolling stock. In total, 14 features are used as simplified data for developing the DNN model. For the raw data, time-domain accelerations are used directly to develop the CNN and RNN models without processing or feature extraction. Hyperparameter tuning is performed, using grid search, to ensure that the performance of each model is optimized. To detect the combined defects, the study proposes two approaches. The first approach uses one model to detect settlement and dipped joint, and the second approach uses two models to detect settlement and dipped joint separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is sufficient to detect settlement and dipped joint. To evaluate the severity of the combined defects, the study applies classification and regression concepts. Classification is used to evaluate the severity by categorizing defects into light, medium, and severe classes, and regression is used to estimate the size of defects. From the study, the CNN model is suitable for evaluating dipped joint severity with an accuracy of 84% and a mean absolute error (MAE) of 1.25 mm, and the RNN model is suitable for evaluating settlement severity with an accuracy of 99% and an MAE of 1.58 mm.
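
A minimal sketch of the raw-data approach, assuming a 1D CNN in Keras over fixed-length axle-box-acceleration windows; the window length, layer sizes, and four-class labeling are assumptions, not the authors' D-Track-based setup.

```python
# Illustrative sketch (not the authors' pipeline): a 1D CNN that classifies a
# raw axle-box-acceleration window as {none, settlement, dipped joint, both}.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 1000   # acceleration samples per simulation window (assumed)
N_CLASSES = 4

model = keras.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.Conv1D(16, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for the simulated accelerations.
x = np.random.randn(32, WINDOW, 1).astype("float32")
y = np.random.randint(0, N_CLASSES, size=32)
model.fit(x, y, epochs=1, verbose=0)
```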


Water ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 2927
Author(s):  
Jiyeong Hong ◽  
Seoro Lee ◽  
Joo Hyun Bae ◽  
Jimin Lee ◽  
Woon Ji Park ◽  
...  

Predicting dam inflow is necessary for effective water management. This study developed machine learning models to predict the amount of inflow into the Soyang River Dam in South Korea, using 40 years of weather and dam inflow data. A total of six algorithms were used: decision tree (DT), multilayer perceptron (MLP), random forest (RF), gradient boosting (GB), recurrent neural network–long short-term memory (RNN–LSTM), and convolutional neural network–LSTM (CNN–LSTM). Among these models, the multilayer perceptron showed the best results in predicting dam inflow, with a Nash–Sutcliffe efficiency (NSE) of 0.812, root mean square error (RMSE) of 77.218 m³/s, mean absolute error (MAE) of 29.034 m³/s, correlation coefficient (R) of 0.924, and coefficient of determination (R²) of 0.817. However, when the dam inflow is below 100 m³/s, the ensemble models (random forest and gradient boosting) performed better than the MLP. Therefore, two combined machine learning (CombML) models (RF_MLP and GB_MLP) were developed, which predict dam inflow using the ensemble methods (RF and GB) when precipitation is below 16 mm and the MLP when precipitation is above 16 mm; 16 mm is the average daily precipitation at inflows of 100 m³/s or more. Verification gave NSE 0.857, RMSE 68.417 m³/s, MAE 18.063 m³/s, R 0.927, and R² 0.859 for RF_MLP, and NSE 0.829, RMSE 73.918 m³/s, MAE 18.093 m³/s, R 0.912, and R² 0.831 for GB_MLP, indicating that the combined models predict dam inflow most accurately. The CombML results show that inflow can be predicted by combining several machine learning algorithms while taking flow characteristics, such as flow regimes, into account.
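
The routing idea behind CombML can be sketched as below, assuming scikit-learn models and a hard 16 mm precipitation threshold; the feature set, model settings, and synthetic data are illustrative assumptions, not the study's configuration.

```python
# Illustrative sketch of the CombML routing idea: days with precipitation
# below 16 mm go to a random forest, wetter days to an MLP.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

THRESHOLD_MM = 16.0   # average daily precipitation at inflow >= 100 m^3/s

class RFMLPCombined:
    def __init__(self):
        self.rf = RandomForestRegressor(n_estimators=200, random_state=0)
        self.mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                                random_state=0)

    def fit(self, X, y, precip):
        low = precip < THRESHOLD_MM
        self.rf.fit(X[low], y[low])     # low-flow regime model
        self.mlp.fit(X[~low], y[~low])  # high-flow regime model
        return self

    def predict(self, X, precip):
        low = precip < THRESHOLD_MM
        out = np.empty(len(X))
        if low.any():
            out[low] = self.rf.predict(X[low])
        if (~low).any():
            out[~low] = self.mlp.predict(X[~low])
        return out

# Dummy weather features (precipitation, temperature, ...) and inflow targets.
X = np.random.rand(500, 4) * [40.0, 30.0, 100.0, 10.0]
precip, inflow = X[:, 0], 5.0 * X[:, 0] + np.random.randn(500)
model = RFMLPCombined().fit(X, inflow, precip)
print(model.predict(X[:5], precip[:5]))
```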


Symmetry ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 262
Author(s):  
Shaobo Li ◽  
Yabo Dan ◽  
Xiang Li ◽  
Tiantian Hu ◽  
Rongzhi Dong ◽  
...  

In this paper, a hybrid neural network (HNN) that combines a convolutional neural network (CNN) and a long short-term memory network (LSTM) is proposed to extract high-level characteristics of materials for critical temperature (Tc) prediction of superconductors. First, 73,452 inorganic compounds were obtained from the Materials Project (MP) database and used to build an atomic environment matrix, from which vector representations (atomic vectors) of 87 atoms were obtained by singular value decomposition (SVD). These atomic vectors were then used to encode each superconductor, following the order of the atoms in its chemical formula. The experimental results of the HNN model trained on 12,413 superconductors were compared with three benchmark neural network algorithms and with multiple machine learning algorithms using two commonly used material characterization methods. The experimental results show that the proposed HNN method can effectively extract the characteristic relationships between the atoms of superconductors and achieves high accuracy in predicting Tc.
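
The atomic-vector construction can be sketched as follows, assuming a toy atom-by-compound count matrix in place of the full MP-derived atomic environment matrix; the formulas and vector dimension are illustrative assumptions.

```python
# Illustrative sketch of the atom-vector idea: build an atom x environment
# count matrix from compound formulas and take a truncated SVD so each atom
# gets a dense vector.
import numpy as np

compounds = [{"Y": 1, "Ba": 2, "Cu": 3, "O": 7},   # YBa2Cu3O7
             {"Mg": 1, "B": 2},                     # MgB2
             {"La": 2, "Cu": 1, "O": 4}]            # La2CuO4
atoms = sorted({a for c in compounds for a in c})

# Atom-environment matrix: rows are atoms, columns are compounds,
# entries are how often the atom occurs in that compound.
M = np.zeros((len(atoms), len(compounds)))
for j, comp in enumerate(compounds):
    for a, count in comp.items():
        M[atoms.index(a), j] = count

# Truncated SVD: keep the top-k left singular vectors as atom embeddings.
k = 2
U, S, Vt = np.linalg.svd(M, full_matrices=False)
atom_vectors = U[:, :k] * S[:k]          # one k-dim vector per atom

for a, v in zip(atoms, atom_vectors):
    print(a, np.round(v, 3))
```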


2019 ◽  
Vol 8 (4) ◽  
pp. 3152-3158

With digitization, the importance of content writing is increasing. This is due to the huge improvement in accessibility and the major impact of digital content on human beings. Given concerns about the veracity of digital content and the huge demand for it, author profiling becomes necessary to identify the correct author of a particular piece of content. This paper develops deep neural network models to identify the gender of the author of a given piece of content. The analysis is performed on a corpus dataset using artificial neural networks with different numbers of layers, a long short-term memory (LSTM)-based recurrent neural network (RNN), a bidirectional LSTM-based RNN, and an attention-based RNN, with mean absolute error, root mean square error, accuracy, and loss as evaluation metrics. Results over different numbers of epochs show the relative performance of each model.
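
A minimal sketch of one of the studied model families, assuming a Keras bidirectional-LSTM classifier over tokenized documents; the vocabulary size, sequence length, and layer sizes are assumptions, not the paper's configuration.

```python
# Illustrative sketch (not the paper's models): a bidirectional-LSTM text
# classifier for author gender on already-tokenized documents.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB, SEQ_LEN = 20000, 300   # vocabulary size and document length (assumed)

model = keras.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB, 128),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability of one gender class
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Dummy tokenized documents standing in for the corpus.
x = np.random.randint(1, VOCAB, size=(16, SEQ_LEN))
y = np.random.randint(0, 2, size=16)
model.fit(x, y, epochs=1, verbose=0)
```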


Images are the fastest-growing type of content and contribute significantly to the amount of data generated on the internet every day. Image classification is a challenging problem that social media companies work on vigorously to enhance the user's experience with the interface. Recent advances in machine learning and computer vision enable personalized suggestions and automatic tagging of images. The convolutional neural network is an active research topic in machine learning. With the help of the immense amount of labelled data available on the internet, such networks can be trained to recognize the differentiating features among images under the same label. New neural network algorithms are developed frequently that outperform state-of-the-art machine learning algorithms; recent algorithms have achieved error rates as low as 3.1%. In this paper, the architectures of important CNN algorithms that have gained attention are discussed, analyzed, and compared, and the concept of transfer learning is used to classify different breeds of dogs.
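
A minimal sketch of the transfer-learning setup, assuming a frozen ImageNet-pretrained ResNet50 backbone in Keras with a new classification head; the backbone choice and the 120-breed label set are assumptions, not necessarily the architectures compared in the paper.

```python
# Illustrative sketch of transfer learning: reuse a pretrained backbone and
# train only a new classification head for dog breeds.
from tensorflow import keras
from tensorflow.keras import layers

N_BREEDS = 120   # e.g., the Stanford Dogs label set (assumed)

base = keras.applications.ResNet50(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False   # keep the pretrained features frozen

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(N_BREEDS, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_breed_labels, ...)  # supply the dog dataset
```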


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Mansi Gupta ◽  
Kumar Rajnish ◽  
Vandana Bhattacharjee

Deep neural network models built with appropriate design decisions are crucial to obtaining the desired classifier performance. This is especially important when predicting the fault proneness of software modules: when fault-prone modules are correctly identified, testing cost can be reduced by directing effort towards them. To build an efficient deep neural network model, it is important that parameters such as the number of hidden layers and the number of nodes in each layer, together with training details such as the learning rate and regularization methods, be investigated in detail. The objective of this paper is to show the importance of hyperparameter tuning in developing efficient deep neural network models for predicting the fault proneness of software modules and to compare the results with other machine learning algorithms. It is shown that the proposed model outperforms the other algorithms in most cases.
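
A minimal sketch of the kind of hyperparameter search described, assuming a scikit-learn MLP classifier and a grid over layer layout, learning rate, and L2 regularization on synthetic software-metric data; all grid values and data are illustrative assumptions.

```python
# Illustrative sketch (not the authors' setup): grid-searching the hidden
# layout, learning rate, and L2 regularization of an MLP fault-proneness
# classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for software-metric features with a fault/no-fault label.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)

param_grid = {
    "hidden_layer_sizes": [(32,), (64, 32), (64, 64, 32)],
    "learning_rate_init": [1e-3, 1e-2],
    "alpha": [1e-4, 1e-3],           # L2 regularization strength
}
search = GridSearchCV(MLPClassifier(max_iter=2000, random_state=0),
                      param_grid, cv=5, scoring="f1")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```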


2021 ◽  
Vol 6 (2) ◽  
pp. 128-133
Author(s):  
Ihor Koval

The problem of finding objects in images using modern computer vision algorithms is considered. The main types of algorithms and methods for object detection based on convolutional neural networks are described. A comparative analysis and modeling of neural network algorithms for object detection in images has been conducted. Results of testing neural network models with different architectures on the VOC2012 and COCO datasets are presented. The recognition accuracy obtained with different training hyperparameters is analyzed, and the variation in object-localization time across different neural network architectures is investigated.
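
A minimal sketch of the kind of evaluation described, assuming a pretrained torchvision detector run on a single image with inference time recorded; the model choice and score threshold are assumptions, not the architectures tested in the study.

```python
# Illustrative sketch of comparing detectors: load a pretrained model,
# run it on one image, and record inference time.
import time
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)     # stand-in for a VOC2012/COCO image
with torch.no_grad():
    start = time.perf_counter()
    output = model([image])[0]
    elapsed = time.perf_counter() - start

keep = output["scores"] > 0.5       # confidence threshold (assumed)
print(f"{keep.sum().item()} detections in {elapsed:.3f} s")
```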


2020 ◽  
pp. 193229682097134
Author(s):  
Simon Lebech Cichosz ◽  
Nicklas Højgaard Rasmussen ◽  
Peter Vestergaard ◽  
Ole Hejlesen

Background: Estimating body composition is relevant in diabetes disease management, such as drug administration and risk assessment of morbidity/mortality. It is unclear how machine learning algorithms could improve easily obtainable estimates of body muscle and fat. The objective was to develop and validate machine learning algorithms (neural networks) for precise prediction of body composition based on anthropometric and demographic data. Methods: Cross-sectional cohort study of 18,430 adults and children from the US population. Participants were examined with whole-body dual X-ray absorptiometry (DXA) scans and anthropometric assessment, and answered a demographic questionnaire. The primary outcomes were predicted total lean body mass (predLBM), total body fat mass (predFM), and trunk fat mass (predTFM) compared with reference values from DXA scans. Results: Participants were randomly partitioned into 70% training data (12,901) and 30% validation data (5,529). The prediction model for predLBM compared with DXA-measured lean body mass (DXALBM) had a Pearson's correlation coefficient of R = 0.99 with a standard error of the estimate (SEE) of 1.88 kg (P < .001). The prediction model for predFM compared with DXA-measured fat mass (DXAFM) had a Pearson's coefficient of R = 0.98 with an SEE of 1.91 kg (P < .001). The prediction model for predTFM compared with DXA-measured trunk fat mass had a Pearson's coefficient of R = 0.98 with an SEE of 1.13 kg (P < .001). Conclusions: In this study, neural network models based on anthropometric and demographic data could precisely predict body muscle and fat composition. Precise body composition estimates are relevant in a broad range of clinical diabetes applications, prevention, and epidemiological research.
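
A minimal sketch of the prediction task, assuming a scikit-learn MLP regressor mapping a handful of anthropometric/demographic features to lean body mass with the 70/30 split described above; the feature set and synthetic data are illustrative assumptions, not the study's network.

```python
# Illustrative sketch (not the study's model): an MLP that maps
# anthropometric/demographic features to lean body mass, with a 70/30 split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 2000
# Columns: age, sex (0/1), height (cm), weight (kg), waist (cm) -- assumed set.
X = np.column_stack([rng.uniform(8, 80, n), rng.integers(0, 2, n),
                     rng.uniform(120, 200, n), rng.uniform(25, 140, n),
                     rng.uniform(50, 130, n)])
lean_mass = 0.4 * X[:, 3] + 0.2 * X[:, 2] - 10 + rng.normal(0, 2, n)  # toy target

X_train, X_val, y_train, y_val = train_test_split(X, lean_mass, test_size=0.3,
                                                  random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=3000,
                                   random_state=0))
model.fit(X_train, y_train)

pred = model.predict(X_val)
see = np.sqrt(mean_squared_error(y_val, pred))   # standard error of the estimate
print(f"R = {np.corrcoef(y_val, pred)[0, 1]:.3f}, SEE = {see:.2f} kg")
```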


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Raheel Siddiqui ◽  
Hafeez Anwar ◽  
Farman Ullah ◽  
Rehmat Ullah ◽  
Muhammad Abdul Rehman ◽  
...  

Power prediction is important not only for the smooth and economic operation of a combined cycle power plant (CCPP) but also to avoid technical issues such as power outages. In this work, we propose to utilize machine learning algorithms to predict the hourly electrical power generated by a CCPP. For this, the generated power is considered a function of four fundamental parameters: relative humidity, atmospheric pressure, ambient temperature, and exhaust vacuum. The measurements of these parameters and the yielded output power are used to train and test the machine learning models. The dataset for the proposed research was gathered over a period of six years and taken from a standard, publicly available machine learning repository. The machine learning algorithms utilized are K-nearest neighbors (KNN), gradient-boosted regression trees (GBRT), linear regression (LR), artificial neural network (ANN), and deep neural network (DNN). We report state-of-the-art performance, where GBRT outperforms not only the other utilized algorithms but also all previous methods on the given CCPP dataset. It achieves the lowest root mean square error (RMSE) of 2.58 and absolute error (AE) of 1.85.
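
A minimal sketch of the best-performing setup, assuming scikit-learn's gradient-boosted regression trees over the four ambient parameters; the synthetic data and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Illustrative sketch: a gradient-boosted regression tree mapping the four
# ambient parameters to electrical power output. Synthetic data stands in
# for the public CCPP measurements.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
n = 5000
temp = rng.uniform(2, 37, n)           # ambient temperature (deg C)
vacuum = rng.uniform(25, 82, n)        # exhaust vacuum (cm Hg)
pressure = rng.uniform(993, 1034, n)   # atmospheric pressure (mbar)
humidity = rng.uniform(25, 100, n)     # relative humidity (%)
power = 480 - 1.8 * temp - 0.3 * vacuum + rng.normal(0, 4, n)  # MW (toy)

X = np.column_stack([humidity, pressure, temp, vacuum])
X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.2, random_state=0)

gbrt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                 max_depth=4, random_state=0)
gbrt.fit(X_tr, y_tr)
pred = gbrt.predict(X_te)
print(f"RMSE = {np.sqrt(mean_squared_error(y_te, pred)):.2f} MW, "
      f"AE = {mean_absolute_error(y_te, pred):.2f} MW")
```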


2021 ◽  
Vol 21 (5) ◽  
pp. 221-228
Author(s):  
Byungsik Lee

Neural network models based on deep learning algorithms are increasingly used for estimating pile load capacities as supplements to bearing capacity equations and field load tests. A series of hyperparameter tuning steps is required to improve the performance and reliability of the developed neural network model. In this study, the number of hidden layers and neurons, the activation functions, the optimizing algorithms of the gradient descent method, and the learning rates were tuned. The grid search method, a hyperparameter optimizer supplied by the development platform, was applied for the tuning. The cross-validation method was applied to enhance the reliability of model validation. An appropriate number of epochs was determined using the early stopping method to prevent overfitting of the model to the training data. The performance of the tuned optimum model, evaluated on the test data set, revealed that the model could estimate pile load capacities with an average absolute error of approximately 3,000 kN and a coefficient of determination of 0.5.
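
A minimal sketch of the tuning loop described, assuming a Keras regression network with a small manual grid over depth and learning rate plus early stopping on a validation split; the grid values, feature set, and synthetic data are illustrative assumptions, not the study's configuration.

```python
# Illustrative sketch (not the study's code): a small grid over hidden layers
# and learning rate for a pile-capacity regression network, with early
# stopping on a validation split.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                  # pile/soil features (synthetic)
y = X @ rng.normal(size=6) * 1000 + 5000       # pile capacity in kN (toy)

def build(hidden_layers, units, lr):
    model = keras.Sequential([layers.Input(shape=(6,))] +
                             [layers.Dense(units, activation="relu")
                              for _ in range(hidden_layers)] +
                             [layers.Dense(1)])
    model.compile(optimizer=keras.optimizers.Adam(lr), loss="mae")
    return model

stop = keras.callbacks.EarlyStopping(patience=20, restore_best_weights=True)
best = None
for hidden_layers in (1, 2, 3):
    for lr in (1e-3, 1e-2):
        model = build(hidden_layers, units=32, lr=lr)
        hist = model.fit(X, y, validation_split=0.2, epochs=300,
                         callbacks=[stop], verbose=0)
        val_mae = min(hist.history["val_loss"])
        if best is None or val_mae < best[0]:
            best = (val_mae, hidden_layers, lr)

print(f"best validation MAE {best[0]:.0f} kN with {best[1]} layers, lr={best[2]}")
```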

