Phenomenological Modelling of Camera Performance for Road Marking Detection

Energies ◽  
2021 ◽  
Vol 15 (1) ◽  
pp. 194
Author(s):  
Hexuan Li ◽  
Kanuric Tarik ◽  
Sadegh Arefnezhad ◽  
Zoltan Ferenc Magosi ◽  
Christoph Wellershaus ◽  
...  

With the development of autonomous driving technology, the requirements for machine perception have increased significantly. In particular, camera-based lane detection plays an essential role in autonomous vehicle trajectory planning. However, lane detection is a complex task and is sensitive to illumination variation as well as to the appearance and age of lane markings. In addition, the near-infinite number of test cases for highly automated vehicles requires an increasing portion of testing and validation to be performed in simulation and X-in-the-loop testing. To model the complexity of camera-based lane detection, physical models are often used, which consider the optical properties of the imager as well as the image processing itself. This complexity results in high simulation effort in terms of both modelling and computational cost. This paper presents a Phenomenological Lane Detection Model (PLDM) to simulate camera performance. The novelty of the approach is the modelling technique using a Multi-Layer Perceptron (MLP), a class of Neural Network (NN). To prepare input data for the neural network model, extensive driving tests were performed on the M86 highway in Hungary. The model's inputs include vehicle dynamics signals (such as speed and acceleration). The difference between the reference output from the digital-twin map of the highway and the camera's lane detection results serves as the target of the NN. The network consists of four hidden layers, and scaled conjugate gradient backpropagation is used for training. The results demonstrate that the PLDM replicates camera detection performance in simulation with sufficient accuracy.
The modelling approach improves the realism of camera sensor simulation while reducing computational effort for X-in-the-loop applications, thereby supporting safety validation of camera-based functionality in automated driving and, in turn, reducing vehicle energy consumption.
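The PLDM architecture described above — a four-hidden-layer MLP mapping vehicle-dynamics signals to the deviation between the digital-twin reference and the camera's detected lane — can be sketched as a plain-Python forward pass. The layer sizes, tanh activation, and random weights below are illustrative placeholders, not the trained parameters from the paper:

```python
import math
import random

random.seed(0)

def dense(x, w, b):
    # one fully connected layer: y = tanh(W x + b)
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

def make_layer(n_in, n_out):
    # small random weights stand in for the trained parameters
    w = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

# inputs: vehicle-dynamics signals, e.g. [speed, long. accel, lat. accel, yaw rate]
hidden = [make_layer(4, 16), make_layer(16, 16), make_layer(16, 16), make_layer(16, 8)]
w_out, b_out = make_layer(8, 1)

def pldm_error(x):
    # four hidden layers, then a linear output: predicted deviation between
    # the digital-twin reference lane and the camera-detected lane
    for w, b in hidden:
        x = dense(x, w, b)
    return sum(wi * xi for wi, xi in zip(w_out[0], x)) + b_out[0]

print(pldm_error([27.8, 0.1, -0.05, 0.01]))  # e.g. speed in m/s plus accelerations
```

In the paper's setting the weights would come from scaled-conjugate-gradient training against the digital-twin targets; here they are random, so only the structure of the computation is meaningful.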

Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 694-705
Author(s):  
T. Kirthiga Devi ◽  
Akshat Srivatsava ◽  
Kritesh Kumar Mudgal ◽  
Ranjnish Raj Jayanti ◽  
T. Karthick

The objective of this project is to automate the process of driving a car, which should reduce the number of road accidents that happen every day. Self-driving cars are on their way to consumers' doorsteps, but the big question remains whether people will accept a car that is fully automated and driverless. The idea is to create an autonomous vehicle that uses only a few sensors (collision detectors, temperature detectors, etc.) and a camera module to travel between destinations with minimal or no human intervention. The car uses a trained Convolutional Neural Network (CNN) to control the parameters required for driving smoothly. The network's output is connected to the main steering mechanism, so the deep learning model directly controls the steering angle of the vehicle. Algorithms such as lane detection and object detection are used in tandem to provide the necessary functionality.
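The camera-to-steering pipeline described above can be sketched end to end: a convolution over the frame, a nonlinearity, and a dense layer whose single output is the steering angle. The tiny kernel, hand-set weights, and ±25° clamp are assumptions for illustration; a real system would use a trained multi-layer CNN:

```python
def conv2d(img, kernel):
    # valid-mode 2-D cross-correlation over a single-channel image
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def predict_steering(frame, kernel, w):
    fmap = conv2d(frame, kernel)
    feats = [max(0.0, v) for row in fmap for v in row]  # ReLU + flatten
    angle = sum(wi * fi for wi, fi in zip(w, feats))    # dense output layer
    return max(-25.0, min(25.0, angle))                 # clamp to steering range (deg)

# toy 4x4 grayscale frame with a bright right half, and a vertical-edge kernel
frame = [[0, 0, 9, 9]] * 4
kernel = [[-1, 1], [-1, 1]]
w = [0.05] * 9  # hypothetical trained output weights
print(predict_steering(frame, kernel, w))  # → 2.7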


Author(s):  
Yuan Shi ◽  
Wenhui Huang ◽  
Federico Cheli ◽  
Monica Bordegoni ◽  
Giandomenico Caruso

Abstract The autonomous vehicle industry has produced a burst of achievements over the past decades, and various systems have been developed to make automated driving possible. Depending on the algorithm used in the autonomous vehicle system, vehicle performance differs from one system to another; however, very few studies have examined the influence of implementing different algorithms from a human-factors point of view. Two systems based on algorithms with different characteristics were used to generate two driving styles for the autonomous vehicle, which were implemented in a driving simulator to create the autonomous driving experience. Users' skin conductance (SC) data, which enable the evaluation of cognitive workload and mental stress, were recorded and analyzed. Subjective measures were collected with the Swedish Occupational Fatigue Inventory (SOFI-20) to obtain the users' self-reported view of their behavioral changes over the course of the experiments. The results showed that the users' states were affected by the driving styles of the different autonomous systems, especially during periods of speed variation. Analysis of the users' self-assessment data revealed a correlation between user "Sleepiness" and the driving style of the autonomous vehicle. These results are meaningful for the future development of autonomous vehicle systems, in terms of balancing vehicle performance against user experience.
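The reported link between self-assessed "Sleepiness" and driving style rests on a standard correlation analysis of paired measurements. A minimal sketch of a Pearson correlation, with entirely hypothetical per-session sleepiness scores and mean SC values, is:

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical paired data: SOFI-20 sleepiness score vs. mean SC per session
sleepiness = [2.0, 3.5, 1.0, 4.0, 2.5]
mean_sc    = [0.31, 0.42, 0.22, 0.47, 0.35]
print(pearson(sleepiness, mean_sc))
```

The study's actual statistics are not reproduced here; the sketch only shows the shape of the computation.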


Author(s):  
Andrey Azarchenkov ◽  
Maksim Lyubimov

The problem of creating a fully autonomous vehicle is one of the most pressing in the field of artificial intelligence. Many companies claim to sell such cars for certain operating conditions. Interacting with other road users requires detecting them, determining their physical properties, and predicting their future states. The result of this prediction is the trajectory of each road user over a given period of time in the near future. Based on such trajectories, the planning system determines the behavior of the autonomous vehicle. This paper demonstrates a multi-agent method for predicting the trajectories of road users from a road map of the surrounding area using convolutional neural networks. In addition, the neural network receives an agent state vector containing additional information about each object. A number of experiments are conducted on the selected neural architecture in order to evaluate the effect of its modifications on the prediction results. The results are evaluated using metrics that measure the spatial deviation of the predicted trajectory. The method is trained using the nuScenes test dataset and data obtained from the LGSVL Simulator.
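The "metrics showing the spatial deviation of the predicted trajectory" are conventionally the average and final displacement errors (ADE/FDE). A minimal sketch, assuming predicted and ground-truth waypoints sampled at matching timestamps:

```python
import math

def ade_fde(pred, truth):
    # pred, truth: lists of (x, y) waypoints at matching timestamps.
    # ADE: mean Euclidean deviation over the horizon; FDE: deviation at the endpoint.
    dists = [math.hypot(px - tx, py - ty)
             for (px, py), (tx, ty) in zip(pred, truth)]
    return sum(dists) / len(dists), dists[-1]

pred  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
truth = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
print(ade_fde(pred, truth))  # → (1.0, 2.0)
```

Whether the paper uses exactly these definitions or a multi-mode variant is not stated in the abstract; this is the common baseline formulation.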


Author(s):  
Rui Li ◽  
Weitian Wang ◽  
Yi Chen ◽  
Srivatsan Srinivasan ◽  
Venkat N. Krovi

Fully automatic parking (FAP) is a key step towards the age of autonomous vehicles. Motivated by the contribution of human vision to human parking, in this paper we propose a computer-vision-based FAP method for autonomous vehicles. Based on input images from a rear camera on the vehicle, a convolutional neural network (CNN) is trained to automatically output the steering and velocity commands for vehicle control. The CNN is trained with the Caffe deep learning framework. A 1/10th-scale autonomous vehicle research platform (1/10-SAVRP), configured with a vehicle controller unit, an automated driving processor, and a rear camera, is used to demonstrate the parking maneuver. The experimental results suggest that the proposed approach enables the vehicle to park independently, without human input, in different driving settings.
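The CNN's two outputs must still be converted into actuator commands. A minimal post-processing sketch; the normalized output convention, the ±30° and 2 m/s ranges, and the near-zero "parked" threshold are assumptions chosen for a 1/10th-scale platform, not values from the paper:

```python
def to_commands(net_out, parked_threshold=0.05):
    # net_out: (steering, velocity) in [-1, 1], as produced by the trained CNN
    steer_n, vel_n = net_out
    steering_deg = max(-30.0, min(30.0, steer_n * 30.0))  # clamp to servo range
    velocity_mps = max(0.0, vel_n * 2.0)                  # reverse handled elsewhere
    if velocity_mps < parked_threshold:
        velocity_mps = 0.0                                # near-zero command => parked
    return steering_deg, velocity_mps

print(to_commands((0.5, 0.5)))   # → (15.0, 1.0)
print(to_commands((2.0, 0.01)))  # → (30.0, 0.0): saturated steering, vehicle stopped
```

Clamping keeps an out-of-range network output from commanding a physically impossible steering angle during the maneuver.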


2020 ◽  
Vol 48 (4) ◽  
pp. 334-340 ◽  
Author(s):  
András Rövid ◽  
Viktor Remeli ◽  
Norbert Paufler ◽  
Henrietta Lengyel ◽  
Máté Zöldy ◽  
...  

Autonomous driving poses numerous challenging problems, one of which is perceiving and understanding the environment. Since self-driving is safety-critical and many actions taken during driving rely on the outcome of various perception algorithms (for instance, all traffic participants and infrastructural objects in the vehicle's surroundings must be reliably recognized and localized), perception can be considered one of the most critical subsystems in an autonomous vehicle. Perception itself can be further decomposed into various sub-problems, such as object detection, lane detection, traffic sign detection, and environment modeling. In this paper the focus is on fusion models in general (supporting multisensory data processing) and on related automotive applications such as object detection, traffic sign recognition, end-to-end driving models, and an example of decision-making in multi-criteria traffic situations that are complex both for human drivers and for self-driving vehicles.
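As one concrete instance of the fusion models discussed, a late-fusion step can combine per-object confidences from two perception pipelines (say, camera and lidar). The weights and the dictionary-based interface below are illustrative assumptions, not the paper's architecture:

```python
def late_fuse(det_a, det_b, w_a=0.6, w_b=0.4):
    # det_a, det_b: {object_id: detection confidence} from two sensor pipelines.
    # Weighted sum of confidences; an object seen by only one sensor keeps a
    # down-weighted score rather than being dropped.
    return {obj: w_a * det_a.get(obj, 0.0) + w_b * det_b.get(obj, 0.0)
            for obj in set(det_a) | set(det_b)}

camera = {"car1": 0.9}
lidar  = {"car1": 0.5, "ped1": 0.8}
print(late_fuse(camera, lidar))  # car1 confirmed by both sensors scores highest
```

Real systems also have to associate detections across sensors before fusing; here the shared `object_id` keys stand in for that association step.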


2018 ◽  
Vol 189 ◽  
pp. 03001
Author(s):  
Jie Liu ◽  
Xiang Cao ◽  
Diangang Wang ◽  
Kejia Pan ◽  
Cheng Zhang ◽  
...  

This paper tackles a new challenge in abnormal electricity-usage detection: how to promptly detect electricity-theft behavior from large-scale power-user data. The proposed scheme first builds a power consumption gradient model by extracting daily trend indicators of electricity consumption, which accurately reflect the short-term consumption trend of each user. Furthermore, we design a line-loss model by analyzing the difference between the power supplied and the actual power consumed. Finally, a hybrid deep neural network detection model is built by combining the power consumption gradient model and the line-loss model, which can quickly pinpoint abnormal electricity users. Comprehensive experiments are carried out on large-scale user samples from the State Grid Corporation using the TensorFlow framework. Extensive results show that, compared with the state of the art, the proposed scheme has superior detection performance and can therefore provide better guidance for abnormal electricity detection.
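The two hand-crafted feature models can be sketched before any neural network enters the picture: a daily consumption gradient per user and a feeder-level line-loss rate. The thresholds, data layout, and the rule combining them are hypothetical illustrations of the idea, not the paper's hybrid model:

```python
def daily_gradient(kwh):
    # short-term trend indicator: day-over-day change in a user's consumption
    return [b - a for a, b in zip(kwh, kwh[1:])]

def flag_suspects(users, supplied_kwh, drop=-5.0, max_loss=0.10):
    # users: {user_id: daily kWh readings}; supplied_kwh: energy fed into the line.
    # Flag users with a sharp downward trend, but only on lines whose
    # supply/consumption gap (line-loss rate) is abnormally high.
    consumed = sum(sum(series) for series in users.values())
    loss_rate = (supplied_kwh - consumed) / supplied_kwh
    if loss_rate <= max_loss:
        return []
    return [uid for uid, series in users.items()
            if min(daily_gradient(series)) < drop]

users = {"a": [10, 10, 2, 2], "b": [10, 10, 10, 10]}
print(flag_suspects(users, supplied_kwh=80.0))  # → ['a']
```

In the paper these two signals feed a hybrid deep network instead of a fixed threshold rule; the sketch only shows why combining them narrows the suspect set.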


Author(s):  
Balasriram Kodi ◽  
Manimozhi M

In the field of autonomous vehicles, lane detection and control play an important role: in autonomous driving the vehicle has to follow the path in order to avoid collisions. A deep learning technique is used to detect curved paths for autonomous vehicles. In this paper a customized lane detection algorithm was implemented to detect the curvature of the lane. A ground-truth labelling toolbox for deep learning is used to detect the curved path. By mapping point to point in each frame, 80-90% computing efficiency and accuracy is achieved in detecting the path.
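Per-frame curvature from detected lane points can be estimated in several ways; one simple sketch (not necessarily the paper's customized algorithm) uses the Menger curvature of three consecutive points, i.e. the reciprocal radius of the circle through them:

```python
import math

def menger_curvature(p1, p2, p3):
    # curvature (1/radius) of the circle through three detected lane points;
    # collinear points give curvature 0 (a straight lane)
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    area2 = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))  # 2 * triangle area
    if area2 == 0.0:
        return 0.0
    return 2.0 * area2 / (math.dist(p1, p2) * math.dist(p2, p3) * math.dist(p1, p3))

# three points on the unit circle: curvature 1.0 (radius 1)
print(menger_curvature((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)))  # → 1.0
```

Sliding this over successive lane points in each frame yields a curvature profile along the detected path.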


Insects ◽  
2020 ◽  
Vol 11 (9) ◽  
pp. 565
Author(s):  
Zhiliang Zhang ◽  
Wei Zhan ◽  
Zhangzhang He ◽  
Yafeng Zou

Statistical analysis of insect grooming behavior can lead to more effective methods of pest control. Traditional manual methods for collecting insect grooming statistics are time-consuming, labor-intensive, and error-prone. Based on computer vision technology, this paper uses spatio-temporal context to extract video features and a self-built Convolutional Neural Network (CNN) to train the detection model, and proposes a simple and effective method for detecting the grooming behavior of Bactrocera minax in which a computer program automatically detects the grooming behaviors of the flies and analyzes the results. Using the detection model trained with the proposed method, videos of 22 adult flies with a total of 1320 min of grooming behavior were detected and analyzed. The overall detection accuracy was over 95%, the standard error of the behavior-detection accuracy for each adult fly was less than 3%, and the difference from the results of manual observation was less than 15%. The experimental results show that the method greatly reduces manual observation time while ensuring the accuracy of behavior detection and analysis. It offers a new informatization-based analysis method for the behavioral statistics of Bactrocera minax and provides a new idea for related insect behavior recognition research.
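The core temporal idea — sustained localized motion between frames flagged as a grooming event — can be sketched with plain frame differencing. The threshold, run length, and toy 2x2 frames are illustrative stand-ins for the spatio-temporal-context features and CNN classifier used in the paper:

```python
def motion_score(prev, curr):
    # mean absolute pixel difference between consecutive frames (2-D int grids);
    # grooming shows up as sustained localized motion around the fly
    n = len(prev) * len(prev[0])
    return sum(abs(a - b)
               for prow, crow in zip(prev, curr)
               for a, b in zip(prow, crow)) / n

def detect_grooming(frames, thresh=5.0, min_run=3):
    # report the start frame of each span where the motion score stays
    # above `thresh` for at least `min_run` consecutive frame pairs
    scores = [motion_score(a, b) for a, b in zip(frames, frames[1:])]
    events, run = [], 0
    for i, s in enumerate(scores):
        run = run + 1 if s > thresh else 0
        if run == min_run:
            events.append(i - min_run + 2)  # frame where the motion began
    return events

flat = lambda v: [[v, v], [v, v]]  # toy uniform 2x2 "frame"
frames = [flat(0), flat(0), flat(10), flat(20), flat(30), flat(30)]
print(detect_grooming(frames))  # → [2]
```

The paper's pipeline classifies *which* behavior is occurring rather than just detecting motion; this sketch covers only the temporal-segmentation half of that task.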

