Comparative Analysis of Steering Angle Prediction For Automated Object Using Deep Neural Network

2021 ◽  
Author(s):  
Md Khairul Islam ◽  
Mst. Nilufa Yeasmin ◽  
Chetna Kaushal ◽  
Md Al Amin ◽  
Md Rakibul Islam ◽  
...  

Deep learning's rapid gains in automation are making it increasingly popular in a variety of complex tasks. The self-driving object is an emerging technology with the potential to transform the entire planet. The steering control of an automated object is critical to ensuring a safe and secure journey. In this study, we therefore developed a methodology for predicting the steering angle solely from the front images of a vehicle. We used an Internet of Things-based system to collect the front images and steering angles: a Raspberry Pi (RP) camera mounted on the vehicle captures the images, and an RP processing unit records the steering angle associated with each image. We trained deep learning-based models, namely VGG16, ResNet-152, DenseNet-201, and Nvidia's model, on the labeled training data. Our models are end-to-end CNN models, which do not require extracting elements such as roads, lanes, or other objects from the data before predicting the steering angle. Our comparative investigation shows that the Nvidia model's performance was satisfactory, with a Mean Squared Error (MSE) value of 0, and that the Nvidia model outperforms the other pre-trained models even though those models also perform well.
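The abstract does not include implementation details, but the end-to-end setup it describes (front image in, steering angle out, trained with MSE) can be sketched roughly as below. The sketch follows the layer layout popularized by Nvidia's PilotNet; the input resolution, layer sizes, and optimizer settings are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class PilotNetStyle(nn.Module):
    """End-to-end steering regressor loosely following Nvidia's PilotNet.
    Input: normalized RGB front-camera frames of 3 x 66 x 200 (an assumption)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ELU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ELU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ELU(),
            nn.Linear(100, 50), nn.ELU(),
            nn.Linear(50, 10), nn.ELU(),
            nn.Linear(10, 1),          # single steering-angle output
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = PilotNetStyle()
criterion = nn.MSELoss()               # the abstract reports MSE
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

frames = torch.randn(8, 3, 66, 200)    # a batch of front-camera frames
angles = model(frames)                 # shape: (8, 1)
loss = criterion(angles, torch.zeros(8, 1))
loss.backward()
optimizer.step()
```

Because the network maps raw pixels directly to a steering command, no lane or road segmentation step is needed, which is the end-to-end property the abstract emphasizes.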

2020 ◽  
Author(s):  
Japheth E. Gado ◽  
Gregg T. Beckham ◽  
Christina M. Payne

Abstract Accurate prediction of the optimal catalytic temperature (Topt) of enzymes is vital in biotechnology, as enzymes with high Topt values are desired for enhanced reaction rates. Recently, a machine-learning method (TOME) for predicting Topt was developed. TOME was trained on a normally distributed dataset with a median Topt of 37°C and less than five percent of Topt values above 85°C, limiting the method's predictive capabilities for thermostable enzymes. Due to the distribution of the training data, the mean squared error on Topt values greater than 85°C is nearly an order of magnitude higher than the error on values between 30 and 50°C. In this study, we apply ensemble learning and resampling strategies that tackle the data imbalance to significantly decrease the error on high Topt values (>85°C) by 60% and increase the overall R2 value from 0.527 to 0.632. The revised method, TOMER, and the resampling strategies applied in this work are freely available to other researchers as a Python package on GitHub.
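TOMER itself ships as a Python package on GitHub; the minimal sketch below only illustrates the general recipe the abstract names (resampling a skewed regression target and fitting a bagged ensemble), with scikit-learn as a stand-in. The 65 °C cut-off, the oversampling factor, and the placeholder data are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def oversample_high_topt(X, y, threshold=65.0, factor=5):
    """Naive resampling for imbalanced regression: duplicate the rare
    high-Topt examples so the learner sees them more often.
    The 65 C cut-off and 5x factor are illustrative assumptions."""
    rare = np.where(y > threshold)[0]
    extra = rng.choice(rare, size=len(rare) * (factor - 1), replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx]

# placeholder data: X = enzyme features, y = Topt in degrees Celsius
X = rng.random((1000, 20))
y = rng.gamma(2.0, 15.0, 1000)          # right-skewed, like the real Topt data

X_res, y_res = oversample_high_topt(X, y)
model = BaggingRegressor(DecisionTreeRegressor(max_depth=8),
                         n_estimators=50, random_state=0)
model.fit(X_res, y_res)
print(model.predict(X[:5]))             # predicted Topt for the first 5 enzymes
```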


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Cédric Arisdakessian ◽  
Olivier Poirion ◽  
Breck Yunits ◽  
Xun Zhu ◽  
Lana X. Garmire

Abstract Single-cell RNA sequencing (scRNA-seq) offers new opportunities to study the gene expression of tens of thousands of single cells simultaneously. We present DeepImpute, a deep neural network-based imputation algorithm that uses dropout layers and loss functions to learn patterns in the data, allowing for accurate imputation. Overall, DeepImpute yields better accuracy than six other publicly available scRNA-seq imputation methods on experimental data, as measured by the mean squared error or Pearson's correlation coefficient. DeepImpute is an accurate, fast, and scalable imputation tool that is suited to handle the ever-increasing volume of scRNA-seq data, and is freely available at https://github.com/lanagarmire/DeepImpute.
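The actual tool lives at the linked repository; the toy sketch below only illustrates the ingredients the abstract names (a dense network with dropout layers trained against an MSE-style loss to reconstruct expression values). The layer sizes, activations, and data shapes are assumptions, not DeepImpute's real architecture.

```python
import torch
import torch.nn as nn

class ImputationNet(nn.Module):
    """Toy dense imputer: predicts a block of target genes from correlated
    predictor genes, with dropout for regularization.
    Sizes are illustrative, not DeepImpute's actual configuration."""
    def __init__(self, n_predictors, n_targets, hidden=256, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_predictors, hidden), nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, n_targets), nn.Softplus(),  # non-negative outputs
        )

    def forward(self, x):
        return self.net(x)

model = ImputationNet(n_predictors=512, n_targets=128)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 512)        # log-normalized expression of predictor genes
target = torch.rand(64, 128)   # observed (non-dropout) values to reconstruct
loss = criterion(model(x), target)
loss.backward()
optimizer.step()
```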


2021 ◽  
Vol 1 (2) ◽  
pp. 54-58
Author(s):  
Ninta Liana Br Sitepu

Backpropagation artificial neural networks are one of the artificial representations of the human brain, aiming to simulate its learning process. Backpropagation is a gradient descent method for minimizing the squared output error. It works through an iterative process on a set of sample data (training data), comparing the network's predicted value with each sample. In each iteration, the connection weights in the network are modified to minimize the Mean Squared Error between the value predicted by the network and the actual value. The purpose of this thesis is to help teachers at SMP Negeri 1 Salakaran predict students' learning outcomes. Using a maximum epoch of 10,000, a target error of 0.01, and a learning rate of 0.3, the calculation results show that ratio A has a value of 0.7517, which means that the value has decreased, and D has a value of 0.9202, which means that the value has increased.
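As a minimal illustration of the procedure the abstract describes (iterative gradient descent on the mean squared error), the sketch below trains a tiny sigmoid network on toy grade data. The learning rate of 0.3, the target error of 0.01, and the maximum of 10,000 epochs come from the abstract; the network shape and toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 3))            # toy inputs, e.g. prior assessment scores in [0, 1]
y = X.mean(axis=1, keepdims=True)  # toy target "learning outcome"

# one hidden layer with sigmoid activations (an assumed topology)
W1, b1 = rng.normal(size=(3, 5)), np.zeros((1, 5))
W2, b2 = rng.normal(size=(5, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr, target_error, max_epoch = 0.3, 0.01, 10_000   # values from the abstract
for epoch in range(max_epoch):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    mse = float(np.mean(err ** 2))
    if mse < target_error:
        break
    # backward pass: chain rule through the sigmoid layers
    d_out = 2 * err / len(X) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(epoch, mse)                  # iterations used and final training MSE
```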


Author(s):  
Lei Feng ◽  
Senlin Shu ◽  
Zhuoyi Lin ◽  
Fengmao Lv ◽  
Li Li ◽  
...  

Trained with the standard cross entropy loss, deep neural networks can achieve great performance on correctly labeled data. However, if the training data is corrupted with label noise, deep models tend to overfit the noisy labels, thereby achieving poor generalization performance. To remedy this issue, several loss functions have been proposed and demonstrated to be robust to label noise. Although most of the robust loss functions stem from Categorical Cross Entropy (CCE) loss, they fail to embody the intrinsic relationships between CCE and other loss functions. In this paper, we propose a general framework dubbed Taylor cross entropy loss to train deep models in the presence of label noise. Specifically, our framework allows weighting the extent of fitting the training labels by controlling the order of the Taylor series for CCE, and hence it can be robust to label noise. In addition, our framework clearly reveals the intrinsic relationships between CCE and other loss functions, such as Mean Absolute Error (MAE) and Mean Squared Error (MSE). Moreover, we present a detailed theoretical analysis to certify the robustness of this framework. Extensive experimental results on benchmark datasets demonstrate that our proposed approach significantly outperforms the state-of-the-art counterparts.
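The core observation is that the cross entropy term -log(p) equals the series Σ_{k≥1} (1 − p)^k / k, so truncating the series at a finite order interpolates between an MAE-like loss (order 1, where the loss reduces to 1 − p, an affine rescaling of the MAE between the one-hot label and the softmax output) and full CCE (order → ∞). A minimal PyTorch sketch of that truncated loss is shown below; it is an independent rendering of the idea, not the authors' code, and the order-2 setting in the usage example is arbitrary.

```python
import torch
import torch.nn.functional as F

def taylor_cross_entropy(logits, targets, order=2):
    """Taylor cross entropy: truncate the series
        -log(p) = sum_{k>=1} (1 - p)^k / k
    at a finite order. order=1 gives an MAE-like loss (robust to
    label noise); order -> infinity recovers the standard CCE."""
    p = F.softmax(logits, dim=1)
    p_true = p.gather(1, targets.unsqueeze(1)).squeeze(1)  # prob. of the labeled class
    loss = torch.zeros_like(p_true)
    for k in range(1, order + 1):
        loss = loss + (1.0 - p_true) ** k / k
    return loss.mean()

# usage with (possibly noisy) labels; shapes are illustrative
logits = torch.randn(16, 10, requires_grad=True)
labels = torch.randint(0, 10, (16,))
loss = taylor_cross_entropy(logits, labels, order=2)
loss.backward()
```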


Water ◽  
2019 ◽  
Vol 11 (9) ◽  
pp. 1879 ◽  
Author(s):  
Xin Huang ◽  
Lei Gao ◽  
Russell S. Crosbie ◽  
Nan Zhang ◽  
Guobin Fu ◽  
...  

As the largest freshwater storage in the world, groundwater plays an important role in maintaining ecosystems and helping humans adapt to climate change. However, groundwater dynamics, such as groundwater recharge, cannot be measured directly and are influenced by spatially and temporally complex processes; models are therefore required to capture these dynamics and provide scientific advice for decision-making. This paper developed, estimated and compared the performance of linear regression, multi-layer perceptron (MLP) and Long Short-Term Memory (LSTM) models in predicting groundwater recharge. The experimental dataset consists of time series of annual recharge from 1970 to 2012, based on water table fluctuation estimates from 465 bores in the states of South Australia and Victoria, Australia. We identified the factors that influence groundwater recharge and found that the correlation between rainfall and groundwater recharge was the strongest. The linear regression model had the poorest fitting performance, with a root mean squared error (RMSE) greater than 0.19 when various proportions of training data were considered. The MLP model outperformed linear regression in prediction capability, achieving an RMSE of 0.11 when 80% of the training data was considered. The LSTM model was found to have the best performance, with root mean squared errors of less than 0.12 when various proportions of training data were applied. The relative importance of the influential predictors was evaluated using the above three models.
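The LSTM variant regresses annual recharge from time series of predictors such as rainfall; a rough PyTorch sketch of that kind of model is given below. The window length, feature count, hidden size, and optimizer are assumptions, not the configuration used in the study.

```python
import torch
import torch.nn as nn

class RechargeLSTM(nn.Module):
    """Sequence regressor: a window of annual predictors (rainfall etc.)
    -> the next year's recharge. Sizes are illustrative assumptions."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, years, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # regress from the last time step

model = RechargeLSTM()
criterion = nn.MSELoss()               # the paper reports RMSE
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(16, 10, 4)             # 16 bores, 10-year windows, 4 predictors
y = torch.randn(16, 1)                 # observed recharge (normalized)
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(torch.sqrt(loss).item())         # RMSE on this batch
```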


2021 ◽  
Vol 6 (5) ◽  
pp. 171-176
Author(s):  
Jonah Sokipriala

Autonomous driving is one promising research area that would not only revolutionize the transportation industry but would also save thousands of lives. Accurate steering angle prediction plays a crucial role in the development of autonomous vehicles. This research attempts to design a model able to clone a driver's behavior using transfer learning from a pretrained VGG16. The results showed that the model used fewer training parameters and achieved a low mean squared error (MSE) of less than 2% without overfitting to the training set, and was thus able to drive on a new road it was not trained on.
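Transfer learning from a pretrained VGG16 for steering-angle regression is commonly set up as in the sketch below: freeze the convolutional base and train only a small regression head with an MSE loss, which keeps the number of trainable parameters low. The head size, dropout rate, and optimizer settings are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# load an ImageNet-pretrained VGG16 and freeze its convolutional base
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in vgg.features.parameters():
    param.requires_grad = False

# replace the classifier with a small steering-angle regression head
vgg.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 256), nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(256, 1),            # single steering-angle output
)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(vgg.classifier.parameters(), lr=1e-4)

frames = torch.randn(4, 3, 224, 224)   # batch of front-camera frames
angles = vgg(frames)                    # shape: (4, 1)
```

Only the head's parameters are updated here, which is one plausible reading of the "fewer training parameters" point made in the abstract.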


Author(s):  
Budi Raharjo ◽  
Nurul Farida ◽  
Purwo Subekti ◽  
Rima Herlina S Siburian ◽  
Putu Doddy Heka Ardana ◽  
...  

The purpose of this study was to evaluate the backpropagation model by optimizing its parameters for predicting broiler chicken populations by province in Indonesia. The parameter optimization consists of changing the learning rate (lr) of the backpropagation prediction model. The data were sourced from the Directorate General of Animal Husbandry and Animal Health and processed by the Central Statistics Agency (BPS); they comprise the broiler chicken population from 2017 to 2019 (34 records). The analysis was carried out with the help of RapidMiner software. The data are divided into two parts, namely training data (2017-2018) and testing data (2018-2019). The backpropagation models used are 1-2-1, 1-25-1 and 1-45-1, each with learning rates of 0.1, 0.01, 0.001, 0.2, 0.02, 0.002, 0.3, 0.03 and 0.003. Of the three models tested, the 1-45-1 model (lr = 0.3) is the best, with a Root Mean Squared Error of 0.028 on the training data. With this model, the predictions achieved an accuracy of 91% and a Root Mean Squared Error of 0.00555 on the testing data.
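The study was carried out in RapidMiner, so the sketch below is only a rough outside-the-tool approximation: a 1-45-1 backpropagation network with a learning rate of 0.3 (the best configuration reported), fitted with scikit-learn on placeholder data. The toy inputs and their scaling to [0, 1] are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# toy stand-in data: x = scaled population in year t, y = scaled population in year t+1
rng = np.random.default_rng(0)
x_train = rng.random((34, 1))                 # 34 records, as in the abstract
y_train = np.clip(x_train.ravel() * 1.05 + rng.normal(0, 0.02, 34), 0, 1)

# 1-45-1 topology with learning rate 0.3, mirroring the best model reported
model = MLPRegressor(hidden_layer_sizes=(45,), activation="logistic",
                     solver="sgd", learning_rate_init=0.3,
                     max_iter=10_000, random_state=0)
model.fit(x_train, y_train)

rmse = np.sqrt(mean_squared_error(y_train, model.predict(x_train)))
print(f"training RMSE: {rmse:.4f}")
```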


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4633
Author(s):  
Daniel Hung Kay Chow ◽  
Luc Tremblay ◽  
Chor Yin Lam ◽  
Adrian Wai Yin Yeung ◽  
Wilson Ho Wu Cheng ◽  
...  

Wearable sensors facilitate the analysis of joint kinematics in real running environments. The use of a few sensors or, ideally, a single inertial measurement unit (IMU) is preferable for accurate gait analysis. This study aimed to use a convolutional neural network (CNN) to predict level-ground running kinematics (measured by four IMUs on the lower extremities) using treadmill running kinematics training data measured with a single IMU on the anteromedial side of the right tibia, and to compare the performance of level-ground running kinematics predictions between raw accelerometer and gyroscope data. The CNN model performed regression for intraparticipant and interparticipant scenarios and predicted running kinematics. Ten recreational runners were recruited, and accelerometer and gyroscope data were collected. Intraparticipant and interparticipant R2 values of actual and predicted running kinematics ranged from 0.85 to 0.96 and from 0.7 to 0.92, respectively. Normalized root mean squared error values of actual and predicted running kinematics ranged from 3.6% to 10.8% and from 7.4% to 10.8% in intraparticipant and interparticipant tests, respectively. Kinematics predictions in the sagittal plane were found to be better for the knee joint than for the hip joint, and predictions using the gyroscope as the regressor were significantly better than those using the accelerometer.
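One plausible way to realize the described mapping (a window of gyroscope samples from a single tibia-mounted IMU in, lower-limb joint angles out) is a 1D convolutional regressor like the sketch below. The channel counts, window length, and the choice of two output angles are assumptions, not the network reported in the study.

```python
import torch
import torch.nn as nn

class GaitCNN(nn.Module):
    """1D CNN: a window of tri-axial gyroscope samples from a single tibia
    IMU -> sagittal-plane hip and knee angles at the window centre.
    All sizes are illustrative assumptions, not the study's network."""
    def __init__(self, in_channels=3, window=200, n_outputs=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (window // 4), 128), nn.ReLU(),
            nn.Linear(128, n_outputs),
        )

    def forward(self, x):              # x: (batch, 3, window)
        return self.head(self.conv(x))

model = GaitCNN()
x = torch.randn(8, 3, 200)             # 8 windows of gyroscope data
angles = model(x)                       # (8, 2): e.g. hip and knee angles
```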

