Simultaneous Estimation of Vehicle Roll and Sideslip Angles through a Deep Learning Approach

Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3679
Author(s):  
Lisardo Prieto González ◽  
Susana Sanz Sánchez ◽  
Javier Garcia-Guzman ◽  
María Jesús L. Boada ◽  
Beatriz L. Boada

Presently, autonomous vehicles are on the rise and are expected to be on the roads in the coming years. In this sense, it becomes necessary to have adequate knowledge about their states to design controllers capable of providing adequate performance in all driving scenarios. Sideslip and roll angles are critical parameters in vehicular lateral stability. The latter has a high impact on vehicles with an elevated center of gravity, such as trucks, buses, and industrial vehicles, among others, as they are prone to rollover. Due to the high cost of the current sensors used to measure these angles directly, much of the research is focused on estimating them. One of the drawbacks is that vehicles are strongly non-linear systems that require specific methods able to tackle this feature. The evolution of Artificial Intelligence models, such as the complex Artificial Neural Network architectures that compose the Deep Learning paradigm, has been shown to provide excellent performance for complex and non-linear control problems. In this paper, the authors propose an inexpensive but powerful model based on Deep Learning to estimate the roll and sideslip angles simultaneously in mass-production vehicles. The model uses input signals that can be obtained directly from onboard vehicle sensors, such as the longitudinal and lateral accelerations, the steering angle, and the roll and yaw rates. The model was trained using hundreds of thousands of data points provided by Trucksim® and validated using data captured from real driving maneuvers with a calibrated ground-truth device, the VBOX3i dual-antenna GPS from Racelogic®. Both the Trucksim® software and the VBOX measuring equipment are recognized and widely used in the automotive sector, providing robust data for the research shown in this article.
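To illustrate the kind of estimator described above, the following is a minimal sketch of a feedforward network that maps the five onboard signals to the two target angles. It is not the authors' architecture; the layer sizes, optimizer settings, and the synthetic training data are placeholder assumptions.

```python
# Minimal sketch (not the authors' exact architecture): a feedforward network
# mapping the five onboard signals to roll and sideslip angles.
# Layer sizes, optimizer settings and the synthetic data below are placeholders.
import numpy as np
import tensorflow as tf

n_samples = 1024                      # stand-in for the simulator-generated training set
X = np.random.randn(n_samples, 5)     # [long. accel, lat. accel, steering angle, roll rate, yaw rate]
y = np.random.randn(n_samples, 2)     # [roll angle, sideslip angle]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(5,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2)          # simultaneous estimation of both angles
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
```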

Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1405
Author(s):  
Jasjit S. Suri ◽  
Sushant Agarwal ◽  
Rajesh Pathak ◽  
Vedmanvitha Ketireddy ◽  
Marta Columbu ◽  
...  

Background: COVID-19 lung segmentation using Computed Tomography (CT) scans is important for the diagnosis of lung severity. The process of automated lung segmentation is challenging due to (a) CT radiation dosage and (b) ground-glass opacities caused by COVID-19. The lung segmentation methodologies proposed in 2020 were semi- or fully automated but not reliable, accurate, and user-friendly. The proposed study presents a COVID Lung Image Analysis System (COVLIAS 1.0, AtheroPoint™, Roseville, CA, USA) consisting of hybrid deep learning (HDL) models for lung segmentation. Methodology: COVLIAS 1.0 consists of three methods based on solo deep learning (SDL) or hybrid deep learning (HDL). SegNet is proposed in the SDL category, while VGG-SegNet and ResNet-SegNet are designed under the HDL paradigm. The three proposed AI approaches were benchmarked against the National Institutes of Health (NIH)-based conventional segmentation model using fuzzy connectedness. A cross-validation protocol with a 40:60 ratio between training and testing was designed, with 10% validation data. The ground truth (GT) was manually traced by radiologist-trained personnel. For performance evaluation, nine different criteria were selected to evaluate the SDL or HDL lung segmentation regions and the lungs' long axis against the GT. Results: Using a database of 5000 chest CT images (from 72 patients), COVLIAS 1.0 yielded AUCs of ~0.96, ~0.97, ~0.98, and ~0.96 (p-value < 0.001) within a 5% range of the GT area for SegNet, VGG-SegNet, ResNet-SegNet, and NIH, respectively. The mean Figure of Merit using the four models (left and right lung) was above 94%. Benchmarked against the NIH segmentation method, the proposed models demonstrated improvements of 58% and 44% for ResNet-SegNet and of 52% and 36% for VGG-SegNet in lung area and lung long axis, respectively. The PE statistics performance was in the following order: ResNet-SegNet > VGG-SegNet > NIH > SegNet. The HDL models run in <1 s per image on test data. Conclusions: The COVLIAS 1.0 system can be applied in real time in radiology-based clinical settings.
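As an illustration of the hybrid (HDL) idea, the sketch below couples a pretrained VGG16 encoder with a simple upsampling decoder to produce a per-pixel lung mask. It is not the COVLIAS 1.0 implementation; the input size, the tapped encoder layer, and the decoder depth are assumptions.

```python
# Minimal sketch of a VGG-SegNet-style hybrid: a pretrained VGG16 encoder with a
# simple upsampling decoder producing a binary lung mask. Illustration of the HDL
# idea only, not the COVLIAS 1.0 implementation; shapes and depths are assumed.
import tensorflow as tf

inputs = tf.keras.Input(shape=(256, 256, 3))
encoder = tf.keras.applications.VGG16(include_top=False, weights="imagenet", input_tensor=inputs)
x = encoder.get_layer("block3_pool").output       # feature map downsampled 8x vs. the input

for filters in (128, 64, 32):                      # decoder coarsely mirrors the encoder
    x = tf.keras.layers.UpSampling2D()(x)
    x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

mask = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel lung probability
model = tf.keras.Model(inputs, mask)
model.compile(optimizer="adam", loss="binary_crossentropy")
```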


2021 ◽  
pp. 152-152
Author(s):  
Aleksandra Sretenovic ◽  
Radisa Jovanovic ◽  
Vojislav Novakovic ◽  
Natasa Nord ◽  
Branislav Zivkovic

Currently, energy use in the building sector is increasing due to the growing demand for indoor thermal comfort. Proper energy planning based on real measurement data is a necessity. In this study, we developed and evaluated hybrid artificial intelligence models for the prediction of daily heating energy use. Building energy use is determined by a significant number of influencing factors, many of which are hard to define and quantify. For heating energy use modelling, the relationship between the input and output variables is neither strictly linear nor strictly non-linear. The main idea of this paper was to divide the heat demand prediction problem into a linear part and a non-linear part (the residuals), using different statistical methods for each. The expectation was that the joint hybrid model could outperform the individual predictors. Multiple Linear Regression (MLR) was selected for the linear modelling, while the non-linear part was predicted using Feedforward (FFNN) and Radial Basis Function (RBFN) neural networks. The hybrid model prediction was the sum of the outputs of the linear and the non-linear models. The results showed that the hybrid FFNN model and the hybrid RBFN model achieved better results than each of the individual FFNN and RBFN neural networks and MLR on the same dataset. It was shown that this hybrid approach improved the accuracy of the artificial intelligence models.
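A minimal sketch of the hybrid scheme described above: a linear model captures the linear part of the heat demand, a neural network is fitted to its residuals, and the final prediction is the sum of the two. The features, network size, and synthetic data are placeholders, not the study's configuration.

```python
# Sketch of the hybrid idea: a linear model (MLR) captures the linear part of the
# heat-demand relationship, a neural network is fitted to its residuals, and the
# final prediction is the sum of the two. Features and hyperparameters are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(365, 4))                  # e.g. outdoor temp, wind, solar, day type
y = 3.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=365)

linear = LinearRegression().fit(X, y)          # linear part (MLR)
residuals = y - linear.predict(X)              # what the linear model cannot explain

nonlinear = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, residuals)

y_hat = linear.predict(X) + nonlinear.predict(X)   # hybrid prediction = linear + residual model
```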


2020 ◽  
Vol 2 (2) ◽  
Author(s):  
Mangor Pedersen ◽  
Karin Verspoor ◽  
Mark Jenkinson ◽  
Meng Law ◽  
David F Abbott ◽  
...  

Artificial intelligence is one of the most exciting methodological shifts of our era. It holds the potential to transform healthcare as we know it into a system where humans and machines work together to provide better treatment for our patients. It is now clear that cutting-edge artificial intelligence models, in conjunction with high-quality clinical data, will lead to improved prognostic and diagnostic models in neurological disease, facilitating expert-level clinical decision tools across healthcare settings. Despite the clinical promise of artificial intelligence, machine- and deep-learning algorithms are not a one-size-fits-all solution for all types of clinical data and questions. In this article, we provide an overview of the core concepts of artificial intelligence, particularly contemporary deep-learning methods, to give clinicians and neuroscience researchers an appreciation of how artificial intelligence can be harnessed to support clinical decisions. We clarify and emphasize the data quality and the human expertise needed to build robust clinical artificial intelligence models in neurology. As artificial intelligence is a rapidly evolving field, we take the opportunity to iterate important ethical principles to guide the field of medicine as it moves into an artificial intelligence-enhanced future.


Author(s):  
Behzad Soleymanian ◽  
Razieh Solgi

Distortion of financial statements is recognized as one of the most important, and most common, issues in the field of accounting and auditing today. In this regard, the present research was conducted, in which stock exchange information was used to investigate, predict, and model accounting distortions. For this purpose, financial performance, non-financial metrics, market-based metrics, and commitment or selection items were reviewed over a six-year period. To collect data on distorting companies, the database of the Society of Certified Public Accountants in Iran was used, and the information was analyzed using data mining methods (decision tree, neural networks, and the Bayesian method). The results showed that analysis of financial statement information achieves high accuracy in identifying distorted financial statements. Using this information, it is possible to become better acquainted with the methods of document distortion and to take the necessary measures to control and prevent administrative violations at national and international levels. Given the frequent occurrence of these violations, artificial intelligence models can be used to identify these distorted documents.
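As a hedged illustration of the three data mining methods named above, the sketch below fits a decision tree, a neural network, and a naive Bayes classifier to tabular features with a distorted/not-distorted label. The feature set and labels are synthetic placeholders, not the study's data.

```python
# Sketch of the three data-mining methods named above applied to tabular financial
# features (synthetic placeholders) with a distorted / not-distorted label.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 8))                  # financial, non-financial and market-based metrics
y = (X[:, 0] + X[:, 3] > 0).astype(int)        # placeholder "distorted" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
for name, clf in [("decision tree", DecisionTreeClassifier()),
                  ("neural network", MLPClassifier(max_iter=2000)),
                  ("naive Bayes", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    print(name, round(clf.score(X_te, y_te), 3))   # held-out classification accuracy
```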


Author(s):  
Mahanazuddin Syed ◽  
Shorabuddin Syed ◽  
Kevin Sexton ◽  
Melody L. Greer ◽  
Meredith Zozus ◽  
...  

The ongoing COVID-19 pandemic has become the most impactful pandemic of the past century. The SARS-CoV-2 virus has spread rapidly across the globe, affecting and straining global health systems. More than 2 million people have died from COVID-19 (as of 30 January 2021). To lessen the pandemic's impact, advanced methods such as Artificial Intelligence models have been proposed to predict mortality, morbidity, disease severity, and other outcomes and sequelae. We performed a rapid scoping literature review to identify the deep learning techniques that have been applied to predict hospital mortality in COVID-19 patients. Our review findings provide insights into the important deep learning models, data types, and features that have been reported in the literature. These summary findings will help scientists build reliable and accurate models for predicting mortality and for designing better intervention strategies in current and future pandemic situations.


Author(s):  
Anand Koirala ◽  
Kerry Brian Walsh ◽  
Zhenglin Wang

Imaging systems mounted on ground vehicles are used to image fruit tree canopies for estimation of fruit load, but frequently need correction for fruit occluded by branches, foliage, or other fruit. This can be achieved using an orchard 'occlusion factor', estimated from a manual count of fruit load on a sample of trees (referred to as the reference method). It was hypothesised that canopy images could hold information related to the number of occluded fruit. Five approaches to correct for occluded fruit based on canopy images were compared using data from three mango orchards over two seasons. However, no attributes correlated with the number of hidden fruit were identified. Several image features obtained through segmentation of fruit and canopy areas, such as the proportion of fruit that were partly occluded, were used to train Random Forest and multi-layered perceptron (MLP) models for estimation of a correction factor per tree. In another approach, deep learning convolutional neural networks (CNNs) were trained directly against harvest fruit counts on trees. The supervised machine learning methods for direct estimation of fruit load per tree delivered an improved prediction outcome over the reference method for data of the season/orchard from which training data was acquired. For a set of 2017 season tree images (n = 98 trees), an R² of 0.98 was achieved for the correlation between the number of fruit predicted by a Random Forest model and the ground truth fruit count on the trees, compared to an R² of 0.68 for the reference method. The best prediction of whole-orchard (n = 880 trees) fruit load, in the season of the training data, was achieved by the MLP model, with an error relative to the packhouse count of 1.6%, compared to the reference method error of 13.6%. However, the performance of these models on new season data (test set images) was at best equivalent to, and generally poorer than, the reference method. This result indicates that training on one season of data was insufficient for the development of a robust model. This outcome was attributed to variability in tree architecture and foliage density between seasons and between orchards, such that the characters of the canopy visible from the inter-row that relate to the proportion of hidden fruit are not consistent. Training of these models across several seasons and orchards is recommended.
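A minimal sketch of the per-tree regression approach described above: image-derived canopy features are regressed against the harvest fruit count with a Random Forest. The feature columns and counts here are synthetic placeholders, not the orchard data.

```python
# Sketch of the per-tree regression described above: image-derived canopy features
# (placeholders here) regressed against harvest fruit count with a Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
# placeholder columns: detected fruit count, proportion partly occluded, canopy area fraction
features = rng.uniform(size=(98, 3))
harvest_count = 200 + 300 * features[:, 0] + rng.normal(scale=10, size=98)

rf = RandomForestRegressor(n_estimators=200, random_state=2).fit(features, harvest_count)
predicted_load = rf.predict(features)          # per-tree fruit load estimate
```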


2021 ◽  
Author(s):  
Hamed Jelodar

Background: Given the limitations of medical diagnosis of early signs of emotional change during the COVID-19 quarantine period, artificial intelligence models provide effective mechanisms for uncovering early signs, symptoms, and escalating trends. Objective: The main purpose of this project is to demonstrate the effectiveness of Artificial Intelligence, and in particular Natural Language Processing and Machine Learning, in detecting and analyzing emotions from tweets discussing COVID-19 social confinement. Methods: We developed a systematic framework that can be directly applied to COVID-19-related mood discovery, using eight types of emotional reaction and designing a deep learning model to uncover emotions based on the first-wave public health restriction of mandatory social segregation. We argue that the framework can discover semantic trends in COVID-19 tweets during the first wave of the pandemic to predict new concerns that may arise in later waves of COVID-19 quarantine orders and other related public health regulations. Results: Our findings revealed that stay-at-home restrictions led people to express both negative and positive reactions on Twitter, in terms of both emotional and semantic aspects. Moreover, the statistical results of the emotion classification show that our CNN-based deep learning framework predicted the emotion labels with a higher F1-score than the LSTM model (0.95 versus 0.93). These results have the potential to inform public health policy decisions through monitoring trends in the emotional state of those who are quarantined. Conclusions: The research shows that the framework is effective in capturing the emotion and semantic trends in social media messages during the pandemic. Moreover, the framework can be applied to uncover reactions to similar public health policies that affect people's well-being.
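As an illustration of the CNN side of the comparison, the sketch below defines a small 1D convolutional classifier over tokenized tweets. It is not the paper's exact model; the vocabulary size, sequence length, and the eight emotion classes are assumptions.

```python
# Minimal 1D-CNN tweet classifier sketch mirroring the CNN model compared against the
# LSTM above; vocabulary size, sequence length and the 8 emotion classes are assumed.
import tensorflow as tf

vocab_size, seq_len, n_emotions = 20000, 60, 8
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len,)),
    tf.keras.layers.Embedding(vocab_size, 128),
    tf.keras.layers.Conv1D(128, 5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_emotions, activation="softmax")
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(token_ids, emotion_labels, ...) on tokenized tweets would follow here
```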


2020 ◽  
Vol 34 (08) ◽  
pp. 13255-13260
Author(s):  
Mahdi Elhousni ◽  
Yecheng Lyu ◽  
Ziming Zhang ◽  
Xinming Huang

In a world where autonomous driving cars are becoming increasingly common, creating an adequate infrastructure for this new technology is essential. This includes building and labeling high-definition (HD) maps accurately and efficiently. Today, the process of creating HD maps requires a lot of human input, which takes time and is prone to errors. In this paper, we propose a novel method capable of generating labeled HD maps from raw sensor data. We implemented and tested our methods on several urban scenarios using data collected from our test vehicle. The results show that the proposed deep learning based method can produce highly accurate HD maps. This approach speeds up the process of building and labeling HD maps, which can make a meaningful contribution to the deployment of autonomous vehicles.


Author(s):  
Balasriram Kodi ◽  
Manimozhi M

In the field of autonomous vehicles, lane detection and control play an important role. In autonomous driving, the vehicle has to follow the lane path to avoid collisions. A deep learning technique is used to detect the curved path in autonomous vehicles. In this paper, a customized lane detection algorithm was implemented to detect the curvature of the lane. A ground-truth labelling toolbox for deep learning is used to detect the curved path in the autonomous vehicle. By mapping point to point in each frame, 80-90% computing efficiency and accuracy is achieved in detecting the path.
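As a hedged illustration of a common curvature-estimation step (not the paper's customized algorithm), the sketch below fits a second-order polynomial to detected lane pixels and evaluates the radius of curvature. The pixel coordinates are synthetic.

```python
# Sketch of a common curvature-estimation step (not the paper's customized algorithm):
# fit a second-order polynomial to detected lane pixels and evaluate the curvature radius.
import numpy as np

ys = np.linspace(0, 719, 50)                       # synthetic lane-pixel rows (image y)
xs = 0.0004 * (ys - 360) ** 2 + 640                # synthetic curved lane (image x)
a, b, c = np.polyfit(ys, xs, 2)                    # x = a*y^2 + b*y + c

y_eval = ys.max()                                  # curvature at the bottom of the frame
radius = (1 + (2 * a * y_eval + b) ** 2) ** 1.5 / abs(2 * a)
print(f"radius of curvature ≈ {radius:.0f} px")
```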

