Detection of Road Images Containing a Counterlight Using Multilevel Analysis

Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2210
Author(s):  
JongBae Kim

In this paper, a method is proposed for detecting, in real time, road images that contain counterlight produced by the sun. It applies a multistep analysis of the size, location, and distribution of bright areas in the image. In general, an image containing counterlight has symmetrically high brightness values concentrated at a specific location and spread over an extremely large region. In addition, the distribution of and change in brightness in that region differ markedly from those of other regions. Through a multistep analysis of these symmetrical features, the method determines whether counterlight is present in the image. The proposed method has a processing time of approximately 0.7 s and a detection accuracy of 88%, suggesting that the approach can be applied to safe-driving support systems for autonomous vehicles.
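The multistep analysis described above can be illustrated with a minimal sketch: a size check on the saturated region followed by a brightness-contrast check against the rest of the frame. The threshold values and function names here are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def has_counterlight(gray, bright_thresh=240, area_ratio=0.08, contrast_ratio=2.0):
    """Two-step check for a counterlight (sun glare) region in a grayscale image.

    Step 1 (size): the saturated mask must cover a large fraction of the frame.
    Step 2 (distribution): brightness inside the bright region must differ
    strongly from the rest of the image.
    All thresholds are illustrative, not the paper's tuned values.
    """
    mask = gray >= bright_thresh
    # Step 1: overall size of the bright area
    if mask.mean() < area_ratio:
        return False
    # Step 2: mean brightness inside vs. outside the bright area
    inside = gray[mask].mean()
    outside = gray[~mask].mean() if (~mask).any() else 0.0
    return inside > contrast_ratio * outside

# Synthetic example: a frame with one large saturated patch (glare-like)
frame = np.full((120, 160), 60, dtype=np.uint8)
frame[10:70, 40:120] = 250          # bright region covering ~25% of the frame
print(has_counterlight(frame))
```

On this synthetic frame the bright patch passes both checks; a uniformly dark frame fails the size check immediately.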

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2536
Author(s):  
Jason Nataprawira ◽  
Yanlei Gu ◽  
Igor Goncharenko ◽  
Shunsuke Kamijo

Pedestrian fatalities and injuries most often result from vehicle-pedestrian crashes. Engineers have tried to reduce the problem by developing pedestrian detection functions for Advanced Driver-Assistance Systems (ADAS) and autonomous vehicles, but these systems are still not perfect. A remaining problem in pedestrian detection is the performance degradation at nighttime, even though pedestrian detection should work well regardless of lighting conditions. This study evaluates pedestrian detection performance under different lighting conditions and then proposes adopting multispectral images and a deep neural network to improve detection accuracy. In the evaluation, different image sources, including RGB, thermal, and multispectral formats, are compared on the pedestrian detection task. In addition, the architecture of the deep neural network is optimized to achieve high accuracy and short processing time. The results imply that using multispectral images is the best solution for pedestrian detection across different lighting conditions. The proposed deep neural network achieves a 6.9% improvement in pedestrian detection accuracy over the baseline method. Moreover, the processing-time optimization indicates that processing time can be reduced by 22.76% while sacrificing only 2% of detection accuracy.
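A common way to form the multispectral input compared above is to stack an RGB frame with a co-registered thermal frame into a single four-channel array before feeding it to the detector. The sketch below assumes pre-aligned, same-size inputs; the function name and normalization are illustrative, not the paper's pipeline.

```python
import numpy as np

def to_multispectral(rgb, thermal):
    """Stack an RGB frame and an aligned thermal frame into one 4-channel
    multispectral array (H, W, 4), suitable as input to a single detection
    network. Both frames are assumed pre-registered and identically sized."""
    rgb = rgb.astype(np.float32) / 255.0          # normalize to [0, 1]
    thermal = thermal.astype(np.float32) / 255.0
    return np.dstack([rgb, thermal[..., None]])   # append thermal as channel 4

rgb = np.zeros((480, 640, 3), dtype=np.uint8)     # placeholder RGB frame
thermal = np.zeros((480, 640), dtype=np.uint8)    # placeholder thermal frame
x = to_multispectral(rgb, thermal)
print(x.shape)  # (480, 640, 4)
```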


Author(s):  
Chen Guoqiang ◽  
Yi Huailong ◽  
Mao Zhuangzhuang

Aims: Factors including light, weather, dynamic objects, seasonal effects, and road structures pose great challenges for autonomous driving algorithms in the real world. Autonomous vehicles must detect different obstacles in complex scenes to ensure safe driving. Background: The ability to detect vehicles and pedestrians is critical to the safe operation of autonomous vehicles, whose vision systems must handle extremely varied and challenging scenarios. Objective: The goal of this work is to design a robust detector for vehicles and pedestrians. The main contributions are the Multi-level Feature Fusion Block (MFFB) and the Detector Cascade Block (DCB); together, multi-level feature fusion and multi-step prediction greatly improve detection precision. Methods: The paper proposes a vehicle and pedestrian detector built as an end-to-end deep convolutional neural network, whose key components are the MFFB and the DCB. The former fuses contextual information with inherent multi-level features, combining high-resolution but low-semantics maps with low-resolution but high-semantics maps. The latter uses multi-step prediction, cascading a series of detectors and combining the predictions of multiple feature maps to handle objects of different sizes. Results: Experiments on the RobotCar and KITTI datasets show that the algorithm achieves high precision in real time: 84.61% mAP on RobotCar and 81.54% mAP on the well-known KITTI benchmark, with single-category vehicle detection accuracy reaching 90.02%.
Conclusion: The experimental results show that the proposed algorithm offers a good trade-off between detection accuracy and detection speed, surpassing the current state-of-the-art RefineDet algorithm. The proposed 2D object detector addresses vehicle and pedestrian detection and improves accuracy, robustness, and generalization in autonomous driving.
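The fusion of high-resolution/low-semantics and low-resolution/high-semantics maps described in the Methods can be sketched in its simplest form: upsample the deep map and concatenate along the channel axis. The function name, nearest-neighbour upsampling, and 2x scale factor are assumptions for illustration, not the exact MFFB design.

```python
import numpy as np

def fuse_levels(shallow, deep):
    """Minimal multi-level feature fusion: a low-resolution, high-semantics
    map is upsampled (nearest neighbour, 2x) to the resolution of a
    high-resolution, low-semantics map, then the two are concatenated along
    the channel axis. A sketch in the spirit of an MFFB, not the paper's block."""
    up = deep.repeat(2, axis=0).repeat(2, axis=1)  # nearest-neighbour 2x upsample
    return np.concatenate([shallow, up], axis=-1)

shallow = np.random.rand(64, 64, 128)   # high resolution, low semantics
deep = np.random.rand(32, 32, 256)      # low resolution, high semantics
fused = fuse_levels(shallow, deep)
print(fused.shape)  # (64, 64, 384)
```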


2021 ◽  
Vol 10 (6) ◽  
pp. 377
Author(s):  
Chiao-Ling Kuo ◽  
Ming-Hua Tsai

The importance of road characteristics has been highlighted, as they are fundamental structures that support many transportation-related services. However, there is still considerable room for improvement in both the types of road characteristics detected and the detection performance. Taking advantage of geographically tiled maps, with their high update rates, remarkable accessibility, and increasing availability, this paper proposes a simple novel deep-learning-based approach: joint convolutional neural networks (CNNs) adopting adaptive squares with combination rules to detect road characteristics from roadmap tiles. The joint CNNs perform foreground/background image classification and then classify the various types of road characteristics from the foreground images, raising detection accuracy. The adaptive squares with combination rules help focus efficiently on road characteristics, augmenting the ability to detect them and providing optimal detection results. Five types of road characteristics are considered: crossroads, T-junctions, Y-junctions, corners, and curves. Experimental results demonstrate successful outcomes with outstanding performance in practice. The location and type of the detected road characteristics are thus converted from human-readable to machine-readable form, and the results will benefit many applications, such as feature-point reminders, road-condition reports, and alert detection for users, drivers, and even autonomous vehicles. We believe this approach will also enable a new path for object detection and geospatial information extraction from valuable map tiles.


Energies ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 1788
Author(s):  
Gomatheeshwari Balasekaran ◽  
Selvakumar Jayakumar ◽  
Rocío Pérez de Prado

With the rapid development of the Internet of Things (IoT) and artificial intelligence, autonomous vehicles have received much attention in recent years. Safe driving is one of the essential concerns of self-driving cars, and providing it requires an efficient inference system for real-time task management and autonomous control. Due to limited battery life and computing power, reducing execution time and resource consumption can be a daunting process. This paper addresses these challenges and develops an intelligent task management system for IoT-based autonomous vehicles. For each task, a supervised resource predictor is invoked for optimal hardware cluster selection. Tasks are executed under an earliest hyper period first (EHF) scheduler to achieve optimal task error rate and schedule length. A single-layer feedforward neural network (SLFN) and lightweight learning approaches are designed to distribute each task to the appropriate processor based on its urgency and CPU utilization. We developed this intelligent task management module in Python and experimentally tested it on multicore SoCs (Odroid XU4 and NVIDIA Jetson embedded platforms). Connected Autonomous Vehicle (CAV) and Internet of Medical Things (IoMT) benchmarks are used for training and testing. The proposed modules are validated by observing task miss rate, resource utilization, and energy consumption against state-of-the-art heuristics. The SLFN-EHF task scheduler achieved an average accuracy of 98%, an average 20–27% reduction in execution time, and a 32–45% reduction in task miss rate compared with conventional methods.
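The notion of a hyperperiod underlying an EHF-style scheduler can be made concrete: the hyperperiod of a periodic task set is the least common multiple of the task periods, and job releases inside one hyperperiod can be ordered by release time and deadline. This is a simplified illustrative reading (implicit deadlines equal to periods, no resource predictor), not the paper's actual module.

```python
import math
from functools import reduce

def hyperperiod(periods):
    """Hyperperiod of a periodic task set = LCM of all task periods."""
    return reduce(math.lcm, periods)

def schedule_jobs(tasks):
    """Enumerate all job releases of periodic tasks inside one hyperperiod and
    order them by (release, deadline). `tasks` maps task name -> period;
    deadlines are assumed implicit (equal to the period). An illustrative
    sketch of EHF-style dispatching, not the paper's scheduler."""
    H = hyperperiod(list(tasks.values()))
    jobs = []
    for name, period in tasks.items():
        for release in range(0, H, period):
            jobs.append((release, release + period, name))  # (release, deadline, task)
    return sorted(jobs)

order = schedule_jobs({"lidar": 2, "camera": 3})
print(order[0])  # first dispatched job: released at t=0 with the earliest deadline
```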


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing an RGB image and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks applied to an RGB image and to a height map converted from the BEV representation of LiDAR point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite. The results demonstrate that detection accuracy with integrated PCD BEV representations is superior to that obtained with an RGB camera alone. In addition, robustness is improved by significantly enhancing detection accuracy even when target objects are partially occluded when viewed from the front, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
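The non-maximum suppression step used to merge the parallel RGB and BEV detections is standard; a minimal sketch over `[x1, y1, x2, y2]` boxes is shown below. Thresholds and data are illustrative, not the paper's configuration.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Plain non-maximum suppression: the highest-scoring box suppresses
    neighbours whose intersection-over-union with it exceeds `iou_thresh`.
    Boxes are rows of [x1, y1, x2, y2]; returns kept indices."""
    order = np.argsort(scores)[::-1]        # indices by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # IoU between box i and all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]     # drop overlapping neighbours
    return keep

# Two near-duplicate detections of one object (e.g. from the RGB and BEV
# branches) plus one distinct detection elsewhere
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]
```

The second box (IoU ≈ 0.68 with the first) is suppressed; the distant box survives.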


1995 ◽  
Vol 151 ◽  
pp. 32-35
Author(s):  
Meil Abada-Simon ◽  
Alain Lecacheux ◽  
Monique Aubier ◽  
Jay A. Bookbinder

AD Leonis is a very active, single dMe flare star. The similarities between this type of star and the Sun have led astronomers to study their radio emission, which originates in their coronae. The high brightness temperatures and other characteristics of most dMe radio bursts can be attributed to a non-thermal, coherent mechanism: plasma radiation and the cyclotron maser instability (CMI) are both plausible explanations. Even for the strongest burst of AD Leo, which reached 940 mJy at 21 cm, it was not possible to discriminate between these two mechanisms (Bastian et al. 1990). Here we present an intense burst from AD Leo exhibiting strong spikes, for which the CMI seems to be the only reasonable explanation. In Sect. 2 we describe the observations, and in Sect. 3 we give an interpretation of this event.


2012 ◽  
Vol 21 (4) ◽  
Author(s):  
D. A. Bezrukov ◽  
B. I. Ryabov ◽  
K. Shibasaki

Based on 17 GHz radio maps of the Sun taken with the Nobeyama Radioheliograph, we estimate plasma parameters in a specific region of the sunspot atmosphere in active region AR 11312. This region of the sunspot atmosphere is characterized by a depletion in coronal emission (soft X-ray and EUV lines) and reduced absorption in a chromospheric line (He I 1.083 μm). In the ordinary mode of the 17 GHz emission, the corresponding dark patch has its largest visibility near the central solar meridian. We infer that the reduced coronal plasma density of about ~ 5 × 10


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 95779-95792
Author(s):  
Yuan-Ying Wang ◽  
Hung-Yu Wei

2019 ◽  
Vol 2019 ◽  
pp. 1-9
Author(s):  
Hai Wang ◽  
Xinyu Lou ◽  
Yingfeng Cai ◽  
Yicheng Li ◽  
Long Chen

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar)-based methods are good at detecting obstacles but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information of lidar and the obstacle classification ability of vision, this work proposes a real-time vehicle detection algorithm that fuses vision and lidar point cloud information. Firstly, obstacles are detected by a grid projection method using the lidar point cloud. Then, the obstacles are mapped onto the image to obtain several separate regions of interest (ROIs). After that, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied to the ROI to detect vehicles. Experimental results on the KITTI dataset demonstrate that the proposed algorithm has high detection accuracy and good real-time performance. Compared with detection based only on YOLO deep learning, the mean average precision (mAP) is increased by 17%.
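The expand-and-merge step for lidar-derived ROIs can be sketched simply: each box is grown by a margin (a stand-in for the paper's dynamic threshold), and overlapping boxes are merged into one region for the YOLO pass. The fixed margin and function names are illustrative assumptions.

```python
def boxes_overlap(a, b):
    """True if two (x1, y1, x2, y2) boxes intersect or touch."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def merge_rois(rois, margin=5):
    """Expand each lidar-derived ROI (x1, y1, x2, y2) by a fixed margin
    (illustrative stand-in for the paper's dynamic threshold), then merge
    overlapping ROIs into single regions handed to the detector."""
    expanded = [(x1 - margin, y1 - margin, x2 + margin, y2 + margin)
                for x1, y1, x2, y2 in rois]
    merged = []
    for box in sorted(expanded):            # sweep left-to-right
        if merged and boxes_overlap(merged[-1], box):
            a = merged.pop()                # union with the previous region
            merged.append((min(a[0], box[0]), min(a[1], box[1]),
                           max(a[2], box[2]), max(a[3], box[3])))
        else:
            merged.append(box)
    return merged

# Two nearby obstacle projections plus one distant obstacle
rois = [(10, 10, 40, 40), (45, 12, 80, 50), (200, 20, 240, 60)]
print(merge_rois(rois))  # the first two ROIs merge after expansion; the third stays
```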


2019 ◽  
Vol 9 (15) ◽  
pp. 3174
Author(s):  
Zhou ◽  
Li ◽  
Shen

The in-vehicle controller area network (CAN) bus is one of the essential components of autonomous vehicles, and its safety will be one of the greatest challenges in the field of intelligent vehicles in the future. In this paper, we propose a novel system that uses a deep neural network (DNN) to detect anomalous CAN bus messages. We treat anomaly detection as a cross-domain modelling problem, in which three CAN bus data packets as a group are imported directly into the DNN architecture for parallel training with shared weights. The three data packets are then represented as three independent feature vectors, corresponding to three types of data sequences: anchor, positive, and negative. The proposed DNN architecture is an embedded triplet-loss network that optimizes the distance between the anchor and positive examples so that it becomes smaller than the distance between the anchor and negative examples, realizing the sample-similarity computation originally used in face recognition. Compared with traditional anomaly detection methods, learning the parameters with shared weights improves both detection efficiency and detection accuracy. The whole detection system consists of a front end and a back end, corresponding to the deep network and the triplet-loss network, respectively, and is trainable end-to-end. Experimental results demonstrate that the proposed technique responds in real time to anomalies and attacks on the CAN bus and significantly improves the detection ratio. To the best of our knowledge, this is the first method used for anomaly detection on the in-vehicle CAN bus.
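The triplet-loss objective described above has a compact standard form: pull the anchor-positive distance below the anchor-negative distance by at least a margin. The sketch below uses plain numpy vectors in place of the shared-weight network's packet embeddings; the embeddings and margin value are illustrative.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: penalize unless the squared anchor-positive
    distance is smaller than the squared anchor-negative distance by at
    least `margin`. Inputs stand in for the shared-weight network's
    embeddings of three CAN packets (anchor, positive, negative)."""
    d_pos = np.sum((anchor - positive) ** 2)   # anchor <-> normal packet
    d_neg = np.sum((anchor - negative) ** 2)   # anchor <-> anomalous packet
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])   # anchor embedding
p = np.array([0.1, 0.0])   # positive: a normal packet close to the anchor
n = np.array([3.0, 0.0])   # negative: an anomalous packet far away
print(triplet_loss(a, p, n))  # 0.0 -- this triplet already satisfies the margin
```

Swapping the positive and negative examples makes the loss positive, which is the gradient signal that pushes embeddings apart during training.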

