Visible-Thermal Image Object Detection via the Combination of Illumination Conditions and Temperature Information

2021, Vol. 13 (18), pp. 3656
Author(s):  
Hang Zhou ◽  
Min Sun ◽  
Xiang Ren ◽  
Xiuyuan Wang

Object detection plays an important role in autonomous driving, disaster rescue, robot navigation, intelligent video surveillance, and many other fields. Nonetheless, visible images degrade under weak illumination conditions, while thermal infrared images are noisy and have low resolution; consequently, neither data source yields satisfactory results when used alone. While some scholars have combined visible and thermal images for object detection, most did not consider the illumination conditions or the different contributions of the two data sources to the results. In addition, few studies have made use of the temperature characteristics of thermal images. Therefore, in the present study, paired visible and thermal images are utilized as the dataset, and RetinaNet is used as the baseline to fuse features from the different data sources for object detection. Moreover, a dynamic weight fusion method, based on channel attention under different illumination conditions, is used in the fusion component, and a channel attention and a priori temperature mask (CAPTM) module is proposed; the CAPTM can be applied to a deep learning network as a priori knowledge and maximizes the advantage of the temperature information in thermal images. The main innovations of the present research are: (1) the consideration of different illumination conditions, with different fusion parameters for each condition, in the feature fusion of visible and thermal images; (2) the dynamic fusion of the different data sources during feature fusion; (3) the use of temperature information as a priori knowledge (CAPTM) in feature extraction. To a certain extent, the proposed methods improve the accuracy of object detection at night or under other weak illumination conditions relative to using a single data source.
Compared with the state-of-the-art (SOTA) method, the proposed method is found to achieve superior detection accuracy with an overall mean average precision (mAP) improvement of 0.69%, including an AP improvement of 2.55% for the detection of the Person category. The results demonstrate the effectiveness of the research methods for object detection, especially temperature information-rich object detection.
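The illumination-dependent dynamic weighting described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the scalar illumination score, the softmax-style channel attention, and all array shapes are assumptions made for the example (the paper's module is learned end-to-end).

```python
import numpy as np

def channel_attention(feat):
    """Squeeze-and-excitation-style channel weights: global average pool
    followed by a softmax over channels (a simplification of a learned
    attention module)."""
    pooled = feat.mean(axis=(1, 2))              # (C,)
    expw = np.exp(pooled - pooled.max())
    return expw / expw.sum()                     # (C,) weights summing to 1

def fuse_features(feat_vis, feat_th, illumination):
    """Dynamically weight visible vs. thermal features by an illumination
    score in [0, 1] (1 = bright daylight), then apply channel attention."""
    w_vis = illumination
    w_th = 1.0 - illumination
    fused = w_vis * feat_vis + w_th * feat_th    # (C, H, W)
    ca = channel_attention(fused)                # (C,)
    return fused * ca[:, None, None]

rng = np.random.default_rng(0)
feat_vis = rng.standard_normal((8, 4, 4))
feat_th = rng.standard_normal((8, 4, 4))
night = fuse_features(feat_vis, feat_th, illumination=0.1)  # thermal dominates
day = fuse_features(feat_vis, feat_th, illumination=0.9)    # visible dominates
```

At night the thermal stream dominates the fused map, and in daylight the visible stream does, which is the intuition behind using different fusion parameters per illumination condition.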

2021, Vol. 2021, pp. 1-11
Author(s):  
Rui Wang ◽  
Ziyue Wang ◽  
Zhengwei Xu ◽  
Chi Wang ◽  
Qiang Li ◽  
...  

Object detection is an important part of autonomous driving technology. To ensure the safe running of vehicles at high speed, real-time and accurate detection of all objects on the road is required, and balancing detection speed and accuracy has been a hot research topic in recent years. This paper puts forward a one-stage object detection algorithm based on YOLOv4 that improves detection accuracy while supporting real-time operation. The backbone of the algorithm doubles the stacking times of the last residual block of CSPDarkNet53. The neck of the algorithm replaces the SPP with the RFB structure and improves the PAN structure of the feature fusion module; the attention mechanisms CBAM and CA are added to the backbone and neck; and finally the overall width of the network is reduced to 3/4 of the original, so as to reduce the model parameters and improve inference speed. Compared with YOLOv4, the algorithm in this paper improves the average accuracy on the KITTI dataset by 2.06% and on the BDD dataset by 2.95%. With detection accuracy almost unchanged, the inference speed of the algorithm is increased by 9.14%, enabling real-time detection at more than 58.47 FPS.
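As a rough illustration of the CBAM-style channel attention mentioned above, the following NumPy sketch applies the standard CBAM channel-attention recipe: average- and max-pooled channel descriptors pass through a shared two-layer MLP, and their sum is squashed by a sigmoid. The random weights and tensor shapes are placeholders, not the paper's trained parameters.

```python
import numpy as np

def cbam_channel_attention(feat, w1, w2):
    """CBAM channel attention on a (C, H, W) feature map: avg- and
    max-pooled descriptors share a bottleneck MLP; sigmoid gates channels."""
    avg = feat.mean(axis=(1, 2))                       # (C,)
    mx = feat.max(axis=(1, 2))                         # (C,)
    def mlp(x):
        return w2 @ np.maximum(w1 @ x, 0.0)           # ReLU bottleneck
    att = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx)))) # sigmoid, (C,)
    return feat * att[:, None, None]

C, r = 8, 2                                  # channels and reduction ratio
rng = np.random.default_rng(1)
w1 = rng.standard_normal((C // r, C))        # squeeze: C -> C/r
w2 = rng.standard_normal((C, C // r))        # excite: C/r -> C
feat = rng.standard_normal((C, 6, 6))
out = cbam_channel_attention(feat, w1, w2)   # channel-reweighted features
```

Because the gate lies in (0, 1), each channel is attenuated rather than amplified, letting the network emphasize informative channels cheaply; the full CBAM also adds a spatial-attention stage omitted here.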


Author(s):  
Mika Gustafsson ◽  
Michael Hörnquist

In this chapter we outline a methodology to reverse engineer GRNs from various data sources within an ODE framework. The methodology is generally applicable and is suitable for handling the broad error distribution present in microarrays. The main effort of this chapter is the exploration of a fully data-driven approach to the integration problem in a “soft evidence” based way. Integration is here seen as the process of incorporating uncertain a priori knowledge, which is therefore relied upon only if it lowers the prediction error. An efficient implementation is carried out by a linear programming formulation. This LP problem is solved repeatedly with small modifications, from which we benefit by restarting the primal simplex method from nearby solutions, enabling a computationally efficient execution. We perform a case study on data from the yeast cell cycle, where all verified genes are putative regulators and the a priori knowledge consists of several types of binding data, text-mining, and annotation knowledge.
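The soft-evidence idea, trusting a priori knowledge only when it lowers prediction error, can be sketched as follows. For brevity this sketch swaps the chapter's LP formulation for a closed-form ridge (L2) analogue of the penalty toward the prior; the toy data, prior vector, and penalty strengths are all illustrative.

```python
import numpy as np

def fit_row(X, xdot, a_prior, lam):
    """Fit one gene's regulator weights a in the linear ODE model
    xdot ≈ X a, shrinking a toward the a priori vector a_prior.
    (Ridge analogue of the chapter's L1/LP soft-evidence formulation.)"""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n),
                           X.T @ xdot + lam * a_prior)

rng = np.random.default_rng(2)
n_samples, n_genes = 12, 5
X = rng.standard_normal((n_samples, n_genes))        # expression levels
a_true = np.array([1.0, -0.5, 0.0, 0.0, 0.3])
xdot = X @ a_true + 0.1 * rng.standard_normal(n_samples)
a_prior = np.array([0.9, -0.4, 0.0, 0.0, 0.2])       # e.g. from binding data

# Soft evidence: keep the prior only if it lowers held-out prediction error.
Xtr, xtr, Xval, xval = X[:8], xdot[:8], X[8:], xdot[8:]
errs = {lam: np.mean((Xval @ fit_row(Xtr, xtr, a_prior, lam) - xval) ** 2)
        for lam in (0.0, 1.0)}
best_lam = min(errs, key=errs.get)   # prior is adopted only if it helps
```

The chapter's repeated LP solves with warm-started simplex play the role of the penalty-strength search here, scaled up to genome-wide networks.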


Author(s):  
Robert Audi

This book provides an overall theory of perception and an account of knowledge and justification concerning the physical, the abstract, and the normative. It has the rigor appropriate for professionals but explains its main points using concrete examples. It accounts for two important aspects of perception on which philosophers have said too little: its relevance to a priori knowledge—traditionally conceived as independent of perception—and its role in human action. Overall, the book provides a full-scale account of perception, presents a theory of the a priori, and explains how perception guides action. It also clarifies the relation between action and practical reasoning; the notion of rational action; and the relation between propositional and practical knowledge. Part One develops a theory of perception as experiential, representational, and causally connected with its objects: as a discriminative response to those objects, embodying phenomenally distinctive elements; and as yielding rich information that underlies human knowledge. Part Two presents a theory of self-evidence and the a priori. The theory is perceptualist in explicating the apprehension of a priori truths by articulating its parallels to perception. The theory unifies empirical and a priori knowledge by clarifying their reliable connections with their objects—connections many have thought impossible for a priori knowledge as about the abstract. Part Three explores how perception guides action; the relation between knowing how and knowing that; the nature of reasons for action; the role of inference in determining action; and the overall conditions for rational action.


Author(s):  
Donald C. Williams

This chapter begins with a systematic presentation of the doctrine of actualism. According to actualism, all that exists is actual, determinate, and of one way of being. There are no possible objects, nor is there any indeterminacy in the world. In addition, there are no degrees of being. It is proposed that actual entities stand in three fundamental relations: mereological, spatiotemporal, and resemblance relations. These relations govern the fundamental entities. Each fundamental entity stands in parthood relations, spatiotemporal relations, and resemblance relations to other entities. The resulting picture is one that represents the world as a four-dimensional manifold of actual ‘qualitied contents’—upon which all else supervenes. It is then explained how actualism accounts for classes, quantity, number, causation, laws, a priori knowledge, necessity, and induction.


Author(s):  
Keith DeRose

In this chapter, the contextualist Moorean account of how we know by ordinary standards that we are not brains in vats (BIVs), utilized in Chapter 1, is developed and defended, and the picture of knowledge and justification that emerges is explained. The account (a) is based on a double-safety picture of knowledge; (b) holds that our knowledge that we’re not BIVs is in an important way a priori; (c) holds that this knowledge is easily obtained, without any need for fancy philosophical arguments to the effect that we’re not BIVs; and (d) utilizes a conservative approach to epistemic justification. Special attention is devoted to defending the claim that we have a priori knowledge of the deeply contingent fact that we’re not BIVs, and to distinguishing this a prioritist account of this knowledge from the kind of “dogmatist” account prominently championed by James Pryor.


2021, Vol. 11 (8), pp. 3531
Author(s):  
Hesham M. Eraqi ◽  
Karim Soliman ◽  
Dalia Said ◽  
Omar R. Elezaby ◽  
Mohamed N. Moustafa ◽  
...  

Extensive research efforts have been devoted to identifying and improving roadway features that impact safety. Maintaining roadway safety features relies on costly manual operations of regular road surveying and data analysis. This paper introduces an automatic roadway safety features detection approach, which harnesses the potential of artificial intelligence (AI) computer vision to make the process more efficient and less costly. Given a front-facing camera and a global positioning system (GPS) sensor, the proposed system automatically evaluates ten roadway safety features. The system is composed of an oriented (or rotated) object detection model, which solves an orientation-encoding discontinuity problem to improve detection accuracy, and a rule-based roadway safety evaluation module. To train and validate the proposed model, a fully annotated dataset for roadway safety feature extraction was collected, covering 473 km of roads. The baseline results of the proposed method are encouraging when compared to state-of-the-art models. Different oriented object detection strategies are presented and discussed, and the developed model improves the mean average precision (mAP) by 16.9% compared with the literature. The average prediction accuracy over the roadway safety features is 84.39%, ranging from 63.12% to 91.11%. The introduced model can pervasively enable/disable autonomous driving (AD) based on the safety features of the road, and can empower connected vehicles (CVs) to send and receive estimated safety features, alerting drivers about black spots or relatively less-safe segments or roads.
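The abstract does not spell out how the orientation-encoding discontinuity is resolved; a common remedy, shown below purely as an assumed example, is to regress (sin 2θ, cos 2θ) instead of the raw angle, so that physically identical orientations on either side of the period boundary map to nearby regression targets.

```python
import numpy as np

def encode_angle(theta):
    """Encode a box orientation as (sin 2θ, cos 2θ): θ and θ+π (the same
    undirected orientation) map to one point, removing the jump that a
    raw-angle target has at the period boundary."""
    return np.array([np.sin(2 * theta), np.cos(2 * theta)])

def decode_angle(v):
    """Recover the orientation in (-π/2, π/2] from the encoded pair."""
    return 0.5 * np.arctan2(v[0], v[1])

# Two nearly identical orientations across the boundary encode to nearby
# targets, unlike their raw angle values.
a, b = np.deg2rad(89.9), np.deg2rad(-89.9)
gap_raw = abs(a - b)                                          # large jump
gap_enc = np.linalg.norm(encode_angle(a) - encode_angle(b))   # small
```

With this encoding the regression loss near ±90° stays smooth, which is the general motivation for continuity-aware angle representations in oriented detectors.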


2021, Vol. 11 (13), pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird’s-eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of LiDAR point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that the detection accuracy achieved by integrating PCD BEV representations is superior to that achieved with an RGB camera alone. In addition, robustness is improved by significantly enhancing detection accuracy even when the target objects are partially occluded when viewed from the front, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
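The non-maximum suppression step that merges the parallel RGB and BEV detections can be sketched as follows. The greedy IoU-based procedure below is the textbook version; the 0.5 overlap threshold and the toy boxes are assumptions for the example, not values from the paper.

```python
import numpy as np

def iou(box, boxes):
    """IoU of one [x1, y1, x2, y2] box against an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, thr=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop neighbors overlapping it above thr, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        order = order[1:][iou(boxes[i], boxes[order[1:]]) <= thr]
    return [int(i) for i in keep]

# Pooled detections from the two streams: two near-duplicates, one distinct.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)   # indices of surviving detections
```

Running NMS over the pooled box set is what lets the two independent detectors agree on a single region proposal per object instead of reporting duplicates.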


1995, Vol. 31 (22), pp. 1930-1931
Author(s):  
D. Anguita ◽  
S. Rovetta ◽  
S. Ridella ◽  
R. Zunino
