video sensors
Recently Published Documents


TOTAL DOCUMENTS

99
(FIVE YEARS 16)

H-INDEX

13
(FIVE YEARS 2)

2021 ◽  
Vol 1202 (1) ◽  
pp. 012033
Author(s):  
Gernot Sauter ◽  
Marcel Doring ◽  
Rik Nuyttens

Abstract It is well known that camera and video sensors have limitations in detecting pavement markings under certain conditions, e.g. glare from sunlight or other vehicles, rain, or fog. First-generation lane-keeping systems depend on visible light. Erroneous detection also results from irregular road surfaces such as glossy bitumen sealing strips, rain puddles, or simply worn asphalt. The role of higher-performing markings in improving visual camera detection has been studied together with Vedecom France. LiDAR (light detection and ranging) technology could help fill the remaining gaps, as it actively emits IR (infrared) light and returns reliable images of the road scene and pavement markings both day and night. To evaluate the opportunities LiDAR technology offers for the detection of road markings, 3M Company and the University of Applied Sciences in Dresden decided to work together in a joint research project. All-Weather Elements (AWE) are the latest development in high-performance optics, using high-index beads to provide reflectivity in both dry and wet conditions. It could be determined that high-performance markings help to increase the level of detection by both camera and LiDAR sensors. The AWE marking was detected from significantly longer distances, especially in wet and rainy conditions. In combination with common camera-based LKA and LDW systems, LiDAR sensors can increase the overall detection rate of pavement markings. This is especially important for vehicles at higher SAE levels of automated driving and can support overall vehicle safety. The research also evaluated the existing test methods for wet and rain reflectivity in EN 1436 and ASTM E 2832 and how the measured performance correlates with LiDAR detection.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3656
Author(s):  
Antonio Lazaro ◽  
Marc Lazaro ◽  
Ramon Villarino ◽  
David Girbau ◽  
Pedro de Paco

This work proposes the use of a modulated tag for direct communication between two vehicles, using as a carrier the wave emitted by an FMCW radar installed in the vehicle for advanced driver assistance. The system allows real-time detection and classification of signals such as the stop signal, turn signals, and emergency lights, adding redundancy to video sensors without incorporating additional communication systems. A proof-of-concept tag has been designed at the microwave frequency of 24 GHz, consisting of an amplifier connected between receiving and transmitting antennas. Modulation is performed by switching the power supply of the amplifier. The tag is installed on the rear of the car and answers when illuminated by the radar by modulating the backscattered field. The information is encoded in the modulation switching rate used. Simulated and experimental results are given, showing the feasibility of the proposed solution.
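The switching-rate encoding described above can be sketched in simulation. The signal states, switching rates, sampling rate, and FFT-peak decoder below are illustrative assumptions for the concept, not the parameters used in the paper:

```python
import numpy as np

# Hypothetical mapping of vehicle signal states to tag switching rates (Hz);
# the actual rates and receiver parameters of the paper are not reproduced here.
RATES = {"stop": 1000.0, "turn": 2000.0, "emergency": 4000.0}
FS = 50_000.0  # assumed baseband sampling rate at the radar receiver

def tag_waveform(state, duration=0.05):
    # On-off keying: the amplifier supply is toggled at the state's rate,
    # producing a 0/1 square-wave modulation of the backscattered field.
    t = np.arange(0.0, duration, 1.0 / FS)
    return (np.sign(np.sin(2 * np.pi * RATES[state] * t)) + 1.0) / 2.0

def decode(x):
    # Recover the state by locating the dominant switching rate
    # in the spectrum of the demodulated backscatter signal.
    spectrum = np.abs(np.fft.rfft(x - x.mean()))  # remove DC first
    freqs = np.fft.rfftfreq(len(x), 1.0 / FS)
    peak = freqs[np.argmax(spectrum)]
    return min(RATES, key=lambda s: abs(RATES[s] - peak))
```

Decoding by spectral peak keeps the receiver simple: each state only needs a distinguishable switching rate, not a synchronized data link.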


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3112
Author(s):  
Tan-Hsu Tan ◽  
Jin-Hao Hsu ◽  
Shing-Hong Liu ◽  
Yung-Fa Huang ◽  
Munkhjargal Gochoo

Research on human activity recognition can be applied to the monitoring of elderly people living alone to reduce the cost of home care. Video sensors can easily be deployed in different zones of a house to achieve this monitoring. The goal of this study is to employ a linear-map convolutional neural network (CNN) to perform action recognition on RGB videos. To reduce the amount of training data, the posture information is represented by skeleton data extracted from 300 frames of each video. The two-stream method was applied to increase recognition accuracy by using the spatial and motion features of the skeleton sequences. The relations of adjacent skeletal joints were employed to build the directed acyclic graph (DAG) matrices: a source matrix and a target matrix. The two features were transformed by the DAG matrices and expanded into color texture images. The linear-map CNN has a two-dimensional linear map at the beginning of each layer to adjust the number of channels, and a two-dimensional CNN is used to recognize the actions. We applied the RGB videos from the action recognition datasets of the NTU RGB+D database, established by the Rapid-Rich Object Search Lab, for model training and performance evaluation. The experimental results show that the obtained precision, recall, specificity, F1-score, and accuracy were 86.9%, 86.1%, 99.9%, 86.3%, and 99.5%, respectively, on the cross-subject split, and 94.8%, 94.7%, 99.9%, 94.7%, and 99.9%, respectively, on the cross-view split. An important contribution of this work is that, by using the skeleton sequences to produce the spatial and motion features and the DAG matrices to encode the relations of adjacent skeletal joints, computation is faster than in traditional schemes that convolve single-frame images. This work therefore demonstrates the practical potential of real-life action recognition.
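The source/target DAG matrices and the two feature streams can be sketched as follows. The 5-joint chain, the exact matrix construction, and the normalization are illustrative assumptions; the paper's NTU RGB+D skeletons have 25 joints and its details may differ:

```python
import numpy as np

# Hypothetical 5-joint kinematic chain (-1 marks the root joint);
# NTU RGB+D skeletons, as used in the paper, have 25 joints.
PARENTS = [-1, 0, 1, 2, 3]

def dag_matrices(parents):
    # Source matrix selects each joint's parent, target matrix selects the
    # joint itself, so (target - source) applied to joint coordinates
    # yields bone vectors between adjacent joints.
    n = len(parents)
    src = np.zeros((n, n))
    tgt = np.eye(n)
    for j, p in enumerate(parents):
        if p >= 0:
            src[j, p] = 1.0
    return src, tgt

def spatial_and_motion(seq, src, tgt):
    # seq: (T, J, 3) skeleton sequence of 3-D joint coordinates.
    spatial = np.einsum('ij,tjc->tic', tgt - src, seq)  # per-frame bone vectors
    motion = seq[1:] - seq[:-1]                         # frame-to-frame differences
    return spatial, motion

def to_texture(feat):
    # Scale a feature tensor to [0, 255] so it can be treated as a
    # color texture image and fed to a 2-D CNN.
    lo, hi = feat.min(), feat.max()
    return ((feat - lo) / (hi - lo + 1e-8) * 255.0).astype(np.uint8)
```

Encoding the joint relations once in fixed matrices means each frame's features come from a single matrix product, which is cheaper than convolving full RGB frames.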


2020 ◽  
pp. 3-20
Author(s):  
Oleksandr M. Golovin ◽  

Video analytics systems have recently been evolving rapidly, and their effectiveness depends primarily on the quality of the operations at the initial stage of the processing pipeline, namely the quality of segmentation and recognition of objects in the scene. Successful performance of these procedures depends above all on image quality, which in turn depends on many factors: the technical parameters of the video sensors, low or uneven lighting, changes in the illumination of the scene due to weather conditions, time-of-day changes in illumination, or changes of scenario in the scene. This paper presents a new, accurate, and practical method for automatically improving image quality. The method is based on a nonlinear transformation function, namely gamma correction, which reflects properties of the human visual system, effectively reduces the negative impact of changes in scene illumination, and, owing to its simple adjustment and efficient implementation, is widely used in practice. A technique is developed for automatically selecting the optimal value of the gamma parameter at which the corrected image reaches maximum quality.
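Automatic selection of the gamma parameter can be sketched as a search over candidate values scored by an image-quality metric. Histogram entropy is used below as an illustrative stand-in for a quality measure; the paper's actual metric and search procedure are not reproduced here:

```python
import numpy as np

def gamma_correct(img, gamma):
    # img: float array with intensities in [0, 1]; gamma < 1 brightens,
    # gamma > 1 darkens.
    return np.power(img, gamma)

def entropy(img, bins=256):
    # Shannon entropy of the intensity histogram: an illustrative proxy
    # for how well the corrected image uses the available dynamic range.
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def best_gamma(img, candidates=np.linspace(0.2, 3.0, 29)):
    # Exhaustive search: keep the gamma whose corrected image scores highest.
    return max(candidates, key=lambda g: entropy(gamma_correct(img, g)))
```

For an underexposed scene, with intensities concentrated near zero, the search settles on a gamma below 1, which spreads the dark tones across the histogram.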


Author(s):  
R Hofmeyr ◽  
A Elhouni

The use of advanced endoscopic airway equipment has become increasingly important to the provision of safe anaesthesia for patients with complex anatomical and pathological conditions. Fundamental to the correct selection and use of the equipment is an understanding of the physical properties underlying its construction and function. This relies primarily on conventional optics, fibreoptics, video sensors and light-emitting diode technology.


2020 ◽  
Vol 56 (4) ◽  
pp. 2910-2921
Author(s):  
Ehsan Taghavi ◽  
Dan Song ◽  
Ratnasingham Tharmarasa ◽  
Thiagalingam Kirubarajan ◽  
Mike McDonald ◽  
...  
Keyword(s):  

2019 ◽  
Vol 2 (2) ◽  
pp. 5-10
Author(s):  
Laurențiu-Bogdan Dudu ◽  
Florin Popescu ◽  
Petrică Ciotîrnae ◽  
...  
