LED flicker measurement: Challenges, considerations, and updates from IEEE P2020 working group

2020 ◽  
Vol 2020 (16) ◽  
pp. 1-1-1-6
Author(s):  
Brian Michael Deegan

The introduction of pulse-width-modulated LED lighting in automotive applications has created the phenomenon of LED flicker. In essence, LED flicker is an imaging artifact whereby a light source appears to flicker when imaged by a camera system, even though it appears constant to a human observer. The implications of LED flicker vary with the imaging application: in some cases it merely degrades image quality by presenting annoying flicker to a human viewer, but it also has the potential to significantly impact the performance of critical autonomous driving functions. In this paper, the root cause of LED flicker is reviewed and its impact on automotive use cases is explored. Guidelines on the measurement and assessment of LED flicker are also provided.
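
The root cause is straightforward to reproduce numerically. Below is a minimal simulation sketch (the PWM frequency, duty cycle, frame rate, and exposure time are illustrative assumptions, not values from the paper) showing how a camera exposure that is short relative to the PWM period produces frames that alternate between bright and dark:

```python
import numpy as np

# A PWM LED looks steady to the eye, which integrates over many PWM periods,
# but a camera integrating over a short exposure can land entirely in the
# LED's off-phase. All values below are assumptions for illustration.
PWM_FREQ_HZ = 100.0    # assumed LED PWM frequency
DUTY_CYCLE = 0.25      # fraction of each PWM period the LED is on
FRAME_RATE_HZ = 30.0   # camera frame rate
EXPOSURE_S = 0.5e-3    # short exposure, e.g. a bright daytime scene

def led_on(t):
    """LED state (on/off) at time t for a simple PWM drive."""
    return (t * PWM_FREQ_HZ) % 1.0 < DUTY_CYCLE

def captured_brightness(frame_start, exposure, steps=1000):
    """Mean LED intensity integrated over one exposure window."""
    t = frame_start + np.linspace(0.0, exposure, steps)
    return led_on(t).mean()

frames = [captured_brightness(i / FRAME_RATE_HZ, EXPOSURE_S) for i in range(9)]
print([f"{b:.2f}" for b in frames])  # e.g. 1.00, 0.00, 0.00, 1.00, ...
# With EXPOSURE_S >= 1 / PWM_FREQ_HZ every frame would integrate at least one
# full PWM period and settle near the duty cycle (~0.25): no flicker.
```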

2018 ◽  
Author(s):  
Andrew Sabate ◽  
Rommel Estores

The advent of lock-in thermal imaging in semiconductor failure analysis added the capability to localize failures through the thermal activity (emission) of the die. When coupled with a creative electrical set-up and material preparation, lock-in thermography (LIT) [1, 2] opens more possibilities for exploring device failures at low power settings. This yields a higher probability of preserving the defect, which leads to a more conclusive root-cause determination.
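
The lock-in principle behind LIT can be sketched in a few lines (illustrative only; the excitation frequency, frame rate, noise level, and single-pixel framing are assumptions, not the authors' setup). The thermal signal of one pixel is correlated with sine/cosine references at the excitation frequency, so a weak periodic emission is recovered from much stronger noise:

```python
import numpy as np

rng = np.random.default_rng(0)
f_lock = 5.0                         # assumed lock-in (excitation) frequency, Hz
fs = 100.0                           # assumed thermal camera frame rate, Hz
t = np.arange(0.0, 600.0, 1.0 / fs)  # 10 minutes of frames for one pixel

signal = 0.05 * np.sin(2 * np.pi * f_lock * t + 0.3)  # weak thermal response
noise = rng.normal(scale=0.5, size=t.size)            # much stronger noise
measured = signal + noise

# Correlate with the references; averaging suppresses uncorrelated noise.
i_comp = 2.0 * np.mean(measured * np.sin(2 * np.pi * f_lock * t))
q_comp = 2.0 * np.mean(measured * np.cos(2 * np.pi * f_lock * t))
amplitude = np.hypot(i_comp, q_comp)  # ~0.05: emission recovered from noise
phase = np.arctan2(q_comp, i_comp)    # ~0.3 rad; in LIT the phase relates
print(amplitude, phase)               # to the depth of the heat source
```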


2020 ◽  
Vol 2020 (16) ◽  
pp. 149-1-149-8
Author(s):  
Patrick Mueller ◽  
Matthias Lehmann ◽  
Alexander Braun

Simulation is an established tool for developing and validating camera systems. The goal of autonomous driving is pushing simulation into a more important and fundamental role for safety, validation, and coverage of billions of miles. Realistic camera models are moving more and more into focus, as simulations need to be more than photo-realistic: they need to be physically realistic, representing the actual camera system onboard the self-driving vehicle in all relevant physical aspects, and this holds not only for cameras but also for radar and lidar. But as camera simulations become more and more realistic, how is this realism tested? Actual, physical camera samples are tested in laboratories following norms like ISO 12233, EMVA 1288, or the developing P2020, with test charts like dead leaves, slanted edge, or OECF charts. In this article we propose to validate the realism of camera simulations by simulating the physical test bench setup and then comparing the synthetic simulation result with physical results from the real-world test bench, using the established normative metrics and KPIs. While this procedure is used sporadically in industrial settings, we are not aware of a rigorous presentation of these ideas in the context of realistic camera models for autonomous driving. After describing the process, we give concrete examples for several different measurement setups using MTF and SFR, and show how these can be used to characterize the quality of different camera models.
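
The proposed validation loop can be sketched as follows (a hedged illustration: the Gaussian-blur camera stand-in, the frequency band, and the simple KPI are assumptions, not the article's exact procedure). An SFR/MTF curve is derived from an edge profile once for the real test-bench capture and once for the simulated capture, and the two curves are then compared:

```python
import numpy as np
from scipy.special import erf

def sfr_from_edge(edge_profile):
    """Edge spread function -> line spread function -> normalized SFR."""
    lsf = np.diff(edge_profile)           # differentiate the ESF
    lsf = lsf * np.hanning(lsf.size)      # window to limit spectral leakage
    sfr = np.abs(np.fft.rfft(lsf))
    return sfr / sfr[0]                   # normalize so MTF(0) = 1

def blurred_edge(n=256, sigma=2.0):
    """Ideal step edge through a Gaussian PSF (stand-in for a camera)."""
    x = np.arange(n) - n / 2
    return 0.5 * (1.0 + erf(x / (np.sqrt(2.0) * sigma)))

mtf_real = sfr_from_edge(blurred_edge(sigma=2.0))  # bench capture (assumed)
mtf_sim = sfr_from_edge(blurred_edge(sigma=2.3))   # camera model (assumed)
freqs = np.fft.rfftfreq(255)                       # cycles per pixel

band = freqs <= 0.25                               # compare up to Nyquist/2
gap = np.max(np.abs(mtf_sim[band] - mtf_real[band]))
print(f"worst-case MTF gap below 0.25 cy/px: {gap:.3f}")
```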


Author(s):  
Vivian Nguyen ◽  
Kevin McFall

End-to-end neural networks (EENNs) use machine learning to make predictions or decisions without being explicitly programmed for the task, by considering the inputs and outputs directly. In contrast, traditional hard-coded algorithmic autonomous robotics requires every possibility to be programmed. Existing research with EENNs and autonomous driving demonstrates level-two autonomy, where the vehicle can assist with acceleration, braking, and environment monitoring under a human observer: NVIDIA's DAVE-2 autonomous car system uses case-specific computing hardware, and DeepPiCar scales the technology down to a low-power embedded computer (Raspberry Pi). The goal of this study is to recreate previous findings on a different platform and in different environments by scaling up DeepPiCar with an NVIDIA Jetson TX2 computing board and hobbyist-grade parts (e.g. a 12 V DC motor and an Arduino) that represent 'off-the-shelf' components when compared to DAVE-2. This advancement validates that the concept scales to more generalized data, easing the training process for an EENN by avoiding dataset overfitting and producing a system with a level of 'common sense'. Training data are collected as camera input and the associated velocity and encoder values from a differential drive ground vehicle (DDGV) with quadrature motors, at 320x240 resolution, into a CSV database. The created datasets are fed into an EENN analogous to the DAVE-2 layered structure: one normalization, five convolutional, and three fully connected layers. The EENN is a convolutional neural network (it assumes the inputs are images and learns filters, e.g. edge detection, independently of a human programmer), and accuracy is measured by comparing the produced velocity values against actual values from a collected validation dataset. The expected result is that the DDGV navigates a human space and avoids obstacles with the EENN only taking sensor data as input and outputting velocities for each motor.
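
The described architecture can be sketched as follows (hedged: the layer widths, strides, and PyTorch framing are assumptions; only the one-normalization / five-convolutional / three-fully-connected layout, the 320x240 camera input, and the two wheel-velocity outputs come from the abstract):

```python
import torch
import torch.nn as nn

class EENN(nn.Module):
    """DAVE-2-style end-to-end network; widths/strides are assumed."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.BatchNorm2d(3),                         # normalization layer
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),  # five conv layers
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(                     # three FC layers
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 2),                          # left/right velocity
        )

    def forward(self, x):
        return self.head(self.features(x))

model = EENN()
batch = torch.randn(1, 3, 240, 320)  # one 320x240 RGB camera frame
print(model(batch).shape)            # torch.Size([1, 2])
```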


2021 ◽  
Vol 9 (5) ◽  
pp. 33-43
Author(s):  
Ashraf Nabil ◽  
Ayman Kassem

Autonomous driving is one of the difficult problems facing automotive applications. At present it is restricted by laws that prevent cars from being fully autonomous, for fear of accidents. Researchers try to improve the accuracy and safety of their models with the aim of pushing back against these restrictive laws. Autonomous driving is a sought-after capability that is not easily achieved with classical approaches. Deep learning is a strong artificial-intelligence paradigm that can teach machines how to behave in difficult situations. It has proven successful in many different domains, but it still has some way to go in automotive applications. The presented work uses end-to-end deep learning to work toward a fully autonomous driving vehicle that behaves correctly in different scenarios. The CARLA simulator is used to train and test the deep neural networks. The results show not only the performance of an end-to-end solution for autonomous driving in the CARLA simulator, but also how the same approach can be applied to one of the most popular real automotive datasets, which includes camera images with the corresponding driver control actions.
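
A closed loop of this kind in CARLA might look like the sketch below (hedged: DummyModel is a placeholder for the trained network, and the spawn and control details are illustrative, not the paper's implementation; the client calls follow the standard CARLA Python API):

```python
import numpy as np
import carla  # CARLA Python client; a simulator server must be running

class DummyModel:
    """Placeholder for the trained end-to-end network."""
    def predict(self, frame):
        return 0.0, 0.3  # [steer, throttle]; a real model infers these

model = DummyModel()

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()
bp_lib = world.get_blueprint_library()

vehicle = world.spawn_actor(bp_lib.filter('vehicle.*')[0],
                            world.get_map().get_spawn_points()[0])
camera = world.spawn_actor(bp_lib.find('sensor.camera.rgb'),
                           carla.Transform(carla.Location(x=1.5, z=2.0)),
                           attach_to=vehicle)

def on_frame(image):
    # BGRA byte buffer -> HxWx3 array, then the network predicts the controls.
    frame = np.frombuffer(image.raw_data, dtype=np.uint8).reshape(
        (image.height, image.width, 4))[:, :, :3]
    steer, throttle = model.predict(frame)
    vehicle.apply_control(carla.VehicleControl(steer=float(steer),
                                               throttle=float(throttle)))

camera.listen(on_frame)
while True:
    world.wait_for_tick()  # keep the script alive while the callback drives
```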


2020 ◽  
Vol 48 (4) ◽  
pp. 334-340 ◽  
Author(s):  
András Rövid ◽  
Viktor Remeli ◽  
Norbert Paufler ◽  
Henrietta Lengyel ◽  
Máté Zöldy ◽  
...  

Autonomous driving poses numerous challenging problems, one of which is perceiving and understanding the environment. Since self-driving is safety critical and many actions taken during driving rely on the outcome of various perception algorithms (for instance, all traffic participants and infrastructural objects in the vehicle's surroundings must be reliably recognized and localized), perception may be considered one of the most critical subsystems in an autonomous vehicle. Perception itself can be further decomposed into various sub-problems, such as object detection, lane detection, traffic sign detection, and environment modeling. In this paper the focus is on fusion models in general (supporting multisensory data processing) and on related automotive applications such as object detection, traffic sign recognition, and end-to-end driving models, along with an example of decision making in multi-criteria traffic situations that are complex both for human drivers and for self-driving vehicles.
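
As a toy illustration of the late-fusion idea (the IoU matching rule and confidence weighting are assumptions chosen for illustration; the paper treats fusion models more generally), detections from two sensors can be merged like this:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse(cam_dets, lidar_dets, iou_thresh=0.5):
    """Each detection is (box, confidence); merge overlapping pairs."""
    fused = []
    for box_c, conf_c in cam_dets:
        match = max(lidar_dets, key=lambda d: iou(box_c, d[0]), default=None)
        if match and iou(box_c, match[0]) >= iou_thresh:
            box_l, conf_l = match
            w = conf_c / (conf_c + conf_l)        # confidence-weighted merge
            box = [w * c + (1 - w) * l for c, l in zip(box_c, box_l)]
            fused.append((box, max(conf_c, conf_l)))
        else:
            fused.append((box_c, conf_c))         # camera-only detection
    return fused

cam = [([100, 80, 180, 200], 0.7)]
lidar = [([105, 85, 185, 195], 0.9)]
print(fuse(cam, lidar))  # one merged box with the higher confidence
```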


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5626
Author(s):  
Jie Chen ◽  
Tao Wu ◽  
Meiping Shi ◽  
Wei Jiang

Autonomous driving with artificial-intelligence technology has been viewed as promising for autonomous vehicles hitting the road in the near future. In recent years, considerable progress has been made with Deep Reinforcement Learning (DRL) toward realizing end-to-end autonomous driving. Still, driving safely and comfortably in real dynamic scenarios with DRL is nontrivial, because the reward functions are typically pre-defined with expert knowledge. This paper proposes a human-in-the-loop DRL algorithm for learning personalized autonomous driving behavior in a progressive learning way. Specifically, a progressively optimized reward function (PORF) learning model is built and integrated into the Deep Deterministic Policy Gradient (DDPG) framework, called PORF-DDPG in this paper. PORF consists of two parts: the first is a pre-defined typical reward function on the system state; the second is modeled as a Deep Neural Network (DNN) representing the driving adjustment intention of the human observer, which is the main contribution of this paper. The DNN-based reward model is progressively learned using front-view images as input, via active human supervision and intervention. The proposed approach is potentially useful for driving in dynamic constrained scenarios where dangerous collision events might occur frequently with classic DRL. The experimental results show that the proposed autonomous driving behavior learning method exhibits online learning capability and environmental adaptability.
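
The PORF composition can be sketched as follows (hedged: the network size, base-reward weights, and state fields are assumptions; only the structure of a pre-defined state reward plus a progressively learned, image-based DNN adjustment comes from the abstract):

```python
import torch
import torch.nn as nn

class RewardAdjustmentDNN(nn.Module):
    """Maps a front-view image to a scalar reward adjustment (assumed sizes)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, image):
        return self.net(image)

def base_reward(speed, lane_offset, collided):
    """Pre-defined reward on the system state (weights are assumptions)."""
    return speed - 2.0 * abs(lane_offset) - (100.0 if collided else 0.0)

adjust = RewardAdjustmentDNN()

def porf_reward(state, image):
    """PORF: pre-defined state reward + learned image-based adjustment."""
    r_base = base_reward(state['speed'], state['lane_offset'], state['collided'])
    r_adj = adjust(image.unsqueeze(0)).item()
    return r_base + r_adj

# The adjustment network would be refitted whenever the human intervenes,
# e.g. regressing toward a penalty on the frames preceding the takeover,
# and the combined reward is what the DDPG critic sees during training.
img = torch.rand(3, 120, 160)  # assumed front-view image size (C, H, W)
print(porf_reward({'speed': 8.0, 'lane_offset': 0.2, 'collided': False}, img))
```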

