A New High-Speed CMOS Camera for Real-Time Tracking Applications

Author(s):  
U. Muehlmann ◽  
M. Ribo ◽  
P. Lang ◽  
A. Pinz


2019 ◽
Vol 6 (1) ◽  
Author(s):  
Sushil Raut ◽  
Kohei Shimasaki ◽  
Sanjay Singh ◽  
Takeshi Takaki ◽  
Idaku Ishii

Abstract: In this study, a novel approach to real-time video stabilization using a high-frame-rate (HFR) jitter sensing device is demonstrated, realizing a computationally efficient technique of digital video stabilization for high-resolution image sequences. The system consists of a high-speed camera that extracts and tracks feature points in gray-level 512×496 image sequences at 1000 fps and a high-resolution CMOS camera that captures 2048×2048 image sequences, hybridized to achieve real-time stabilization. The high-speed camera functions as a real-time HFR jitter sensing device that measures the apparent jitter movement of the system using two forms of computational acceleration: (1) feature point extraction with a parallel processing circuit module for Harris corner detection, and (2) matching hundreds of feature points in the current frame to those within neighboring ranges in the previous frame, under the assumption of small frame-to-frame displacement in high-speed vision. The proposed hybrid-camera system can digitally stabilize the 2048×2048 images captured with the high-resolution CMOS camera by compensating for the sensed jitter displacement in real time for display on a computer monitor. Experiments were conducted to demonstrate the effectiveness of hybrid-camera-based digital video stabilization: (a) verification when the hybrid-camera system vibrated in the pan direction in front of a checkered pattern, (b) stabilization when shooting a photographic pattern while the system moved with a mixed motion of jitter and constant low velocity in the pan direction, and (c) stabilization when shooting a real-world outdoor scene while an operator held the hand-held hybrid-camera module and walked on stairs.
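The small frame-to-frame displacement assumption above can be sketched as a bounded nearest-neighbour matching step: at 1000 fps each feature point moves at most a few pixels between frames, so correspondence reduces to finding the closest point within a small radius. The sketch below (Python with NumPy; function names and the `max_disp` threshold are illustrative assumptions, not taken from the paper) estimates the global jitter displacement as the mean motion of the matched points:

```python
import numpy as np

def match_small_displacement(prev_pts, curr_pts, max_disp=8.0):
    """Match each previous-frame feature point to the nearest current-frame
    point within max_disp pixels -- the small frame-to-frame displacement
    assumption that high-frame-rate capture makes valid."""
    matches = []
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(curr_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            matches.append((i, j))
    return matches

def estimate_jitter(prev_pts, curr_pts, matches):
    """Estimate the global jitter displacement as the mean matched-point
    motion; the stabilizer would shift the high-resolution frame by the
    negative of this vector."""
    if not matches:
        return np.zeros(2)
    disp = np.array([curr_pts[j] - prev_pts[i] for i, j in matches])
    return disp.mean(axis=0)
```

A robust implementation would reject outlier matches (e.g. by a median or RANSAC step) before averaging, but the bounded search is what the high frame rate buys: no descriptor matching is needed.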


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Tao Hong ◽  
Qiye Yang ◽  
Peng Wang ◽  
Jinmeng Zhang ◽  
Wenbo Sun ◽  
...  

Unmanned aerial vehicles (UAVs) have increased the convenience of urban life. Reflecting the recent rapid development of drone technology, UAVs have been widely used with fifth-generation (5G) cellular networks and the Internet of Things (IoT), for applications such as drone aerial photography, express drone delivery, and drone traffic supervision. However, owing to their low altitude, low speed, and small size, drones can be monitored and detected only to a limited extent, resulting in frequent intrusions and collisions. Traditional methods of monitoring drone safety are mostly expensive and difficult to implement. In smart city construction, a large number of smart IoT cameras connected to 5G networks are installed throughout the city. Captured drone images are transmitted to the cloud via a high-speed, low-latency 5G network, and machine learning algorithms are used for target detection and tracking. In this study, we propose a method for real-time tracking of drone targets that uses the existing monitoring network to obtain drone images in real time and employs deep learning methods, so that drones in urban environments can be guided. To achieve real-time tracking of UAV targets, we employed the tracking-by-detection paradigm, with a network-modified YOLOv3 (You Only Look Once v3) as the target detector and Deep SORT as the target tracking association algorithm. We established a drone tracking dataset that contains four types of drones and 2800 images in different environments. The tracking model we trained achieved 94.4% tracking accuracy in real-time UAV target tracking at a tracking speed of 54 FPS. These results demonstrate that our tracking model achieves high-precision real-time UAV target tracking at a reduced cost.
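Deep SORT associates detections to tracks using a Kalman-filter motion model plus appearance embeddings; as a minimal illustration of the tracking-by-detection association step only (not the authors' implementation, and all names below are illustrative), the sketch greedily matches current-frame detections to existing tracks by bounding-box IoU:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_min=0.3):
    """Greedily pair detections with tracks, best IoU first; unmatched
    detections would start new tracks, unmatched tracks would age out."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    used_t, used_d, out = set(), set(), []
    for score, ti, di in pairs:
        if score < iou_min:
            break  # pairs are sorted, so nothing later can qualify
        if ti in used_t or di in used_d:
            continue
        used_t.add(ti)
        used_d.add(di)
        out.append((ti, di))
    return out
```

Deep SORT itself replaces the greedy step with Hungarian assignment over a combined motion-and-appearance cost, which is what lets it keep identities through occlusions.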


Author(s):  
Peter Gemeiner ◽  
Wolfgang Ponweiser ◽  
Peter Einramhof ◽  
Markus Vincze
Author(s):  
Hooman Farkhani ◽  
Mohammad Tohidi ◽  
Sadaf Farkhani ◽  
Jens Kargaard Madsen ◽  
Farshad Moradi

Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 731 ◽  
Author(s):  
Guanyu Piao ◽  
Jingbo Guo ◽  
Tiehua Hu ◽  
Yiming Deng

Real-time tracking of pipeline inspection gauges (PIGs) is an important aspect of ensuring the safety of oil and gas pipeline inline inspections (ILIs). Transmitting and receiving extremely low frequency (ELF) magnetic signals is one of the preferred tracking methods. Owing to increases in the pipeline's physical parameters, including transportation speed, wall thickness, and burial depth, the received ELF magnetic signals are short transients (about 1 s in duration) and very weak (on the order of 10 pT), making it difficult for existing above-ground-marker (AGM) systems to operate correctly. Based on the short-transient, very weak characteristics of ELF signals studied with a 2-D finite-element method (FEM) simulation, a data fusion model was derived to fuse the envelope decay rates of ELF signals under a least-squares (LS) criterion. A fast-decision-tree (FDT) method was then proposed to estimate the fused envelope decay rate and output the maximized orthogonal signal power for signal detection through a determined topology and a fast calculation process, which was demonstrated to have excellent real-time detection performance. Simulation and experimental results validated the effectiveness of the proposed FDT method, and we describe the high-sensitivity detection and real-time implementation of a high-speed PIG tracking system comprising a transmitter, a receiver, and a pair of orthogonal search coil sensors.
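The envelope decay rate at the heart of the fusion model can be estimated by ordinary least squares once the exponential envelope is log-linearized. The sketch below fits a single envelope of the form A·exp(−k·t); it is an illustration of the LS criterion only, not the paper's multi-signal fusion or FDT detector, and the function name is an assumption:

```python
import numpy as np

def ls_decay_rate(t, envelope):
    """Least-squares estimate of k in envelope = A * exp(-k * t).
    Taking logs gives log(envelope) = log(A) - k * t, a line whose
    slope is -k; fit it with an ordinary LS solve."""
    y = np.log(envelope)
    A = np.vstack([t, np.ones_like(t)]).T  # columns: t, intercept
    slope, _intercept = np.linalg.lstsq(A, y, rcond=None)[0]
    return -slope
```

With real 10 pT transients the envelope would first be recovered (e.g. via a Hilbert transform or rectify-and-lowpass) and noisy samples near zero excluded before taking the logarithm, since log-linearization amplifies noise at small amplitudes.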


2018 ◽  
Vol 11 (1) ◽  
Author(s):  
Pieter Blignaut

Following a patent owned by Tobii, the frame rate of a CMOS camera can be increased by reducing the size of the recording window so that it fits the eyes with minimal room to spare. The position of the recording window can be dynamically adjusted within the camera sensor area to follow the eyes as the participant moves the head. Since only a portion of the camera sensor data is communicated to the computer and processed, much higher frame rates can be achieved with the same CPU and camera.

Eye trackers are expected to deliver data at high speed, with good accuracy and precision, small latency, and minimal loss of data, while allowing participants to behave as normally as possible. In this study, the effect of real-time headbox adjustments on the above-mentioned parameters is investigated.

It was found that, for the specific camera model and tracking algorithm, one or two headbox adjustments per second, as would normally occur when recording human participants, could be tolerated in favour of a higher frame rate. The effect of adjusting the recording window can be reduced by using a larger recording window, at the cost of frame rate.
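The dynamic recording-window logic described above can be sketched as a recentring rule with a dead zone: since each repositioning has a cost (the study found one or two adjustments per second tolerable), the window moves only when the eyes approach its border. This is an illustrative assumption about the mechanism, not the Tobii patent's or the study's actual algorithm, and the names and `margin` value are hypothetical:

```python
def update_roi(roi, eye_center, sensor_w, sensor_h, margin=20):
    """Recenter the recording window (ROI) on the detected eye position,
    clamped to the sensor bounds, but only when the eyes drift within
    `margin` pixels of the window edge (a dead zone that limits how
    often the window is repositioned)."""
    x, y, w, h = roi
    cx, cy = eye_center
    inside_dead_zone = (x + margin <= cx <= x + w - margin and
                        y + margin <= cy <= y + h - margin)
    if inside_dead_zone:
        return roi  # eyes comfortably inside; leave the window alone
    nx = min(max(int(cx - w / 2), 0), sensor_w - w)
    ny = min(max(int(cy - h / 2), 0), sensor_h - h)
    return (nx, ny, w, h)
```

Enlarging `margin` (or the window itself) trades frame rate for fewer adjustments, which mirrors the trade-off reported in the abstract.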
