single camera
Recently Published Documents

TOTAL DOCUMENTS: 1013 (five years: 191)
H-INDEX: 38 (five years: 7)

2022 ◽  
pp. 9-18
Author(s):  
Luca Lonini ◽  
Yaejin Moon ◽  
Kyle Embry ◽  
R. James Cotton ◽  
Kelly McKenzie ◽  
...  

Recent advancements in deep learning have produced significant progress in markerless human pose estimation, making it possible to estimate human kinematics from single camera videos without the need for reflective markers and specialized labs equipped with motion capture systems. Such algorithms have the potential to enable the quantification of clinical metrics from videos recorded with a handheld camera. Here we used DeepLabCut, an open-source framework for markerless pose estimation, to fine-tune a deep network to track 5 body keypoints (hip, knee, ankle, heel, and toe) in 82 below-waist videos of 8 patients with stroke performing overground walking during clinical assessments. We trained the pose estimation model by labeling the keypoints in 2 frames per video and then trained a convolutional neural network to estimate 5 clinically relevant gait parameters (cadence, double support time, swing time, stance time, and walking speed) from the trajectories of these keypoints. These results were then compared to those obtained from a clinical system for gait analysis (GAITRite®, CIR Systems). Absolute accuracy (mean error) and precision (standard deviation of error) for swing, stance, and double support time were within 0.04 ± 0.11 s; Pearson's correlation with the reference system was moderate for swing time (r = 0.4–0.66) but stronger for stance and double support time (r = 0.93–0.95). Cadence mean error was −0.25 ± 3.9 steps/min (r = 0.97), while walking speed mean error was −0.02 ± 0.11 m/s (r = 0.92). These preliminary results suggest that single-camera videos and pose estimation models based on deep networks could be used to quantify clinically relevant gait metrics in individuals poststroke, even while using assistive devices in uncontrolled environments. Such a development opens the door to applications for gait analysis both inside and outside of clinical settings, without the need for sophisticated equipment.
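To make the timing step concrete, below is a minimal sketch of deriving gait-cycle events and parameters from heel/toe keypoint trajectories such as those a DeepLabCut model produces. The 25 fps frame rate and the peak-based event heuristic are illustrative assumptions, not the paper's trained CNN regressor.

```python
# A minimal sketch, assuming heel/toe y-trajectories from a pose estimator.
import numpy as np
from scipy.signal import find_peaks

FPS = 25.0  # assumed video frame rate

def gait_events(heel_y, toe_y):
    """Estimate heel-strike and toe-off frames from vertical keypoint
    trajectories (image y grows downward, so ground contact shows up
    as local maxima of y)."""
    heel_strikes, _ = find_peaks(heel_y, distance=int(0.5 * FPS))
    toe_offs, _ = find_peaks(toe_y, distance=int(0.5 * FPS))
    return heel_strikes, toe_offs

def gait_parameters(heel_strikes, toe_offs):
    stride_t = np.diff(heel_strikes) / FPS    # stride time in seconds
    cadence = 120.0 / stride_t.mean()         # steps/min (2 steps per stride)
    # Stance: heel strike -> next toe-off of the same foot.
    stance_t = np.mean([(toe_offs[toe_offs > hs][0] - hs) / FPS
                        for hs in heel_strikes if np.any(toe_offs > hs)])
    swing_t = stride_t.mean() - stance_t      # swing = stride - stance
    return cadence, stance_t, swing_t
```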


Author(s):  
D. Minola Davids ◽  
C. Seldev Christopher

The volume of visual data obtained from single-camera and multi-view surveillance camera networks is increasing exponentially every day. The major task in video summarization is identifying the important shots that faithfully represent the original video. For efficient video summarization in surveillance systems, this paper proposes an optimization algorithm, LFOB-COA. The proposed method consists of five steps: data collection, pre-processing, deep feature extraction (FE), shot segmentation with JSFCM, and classification using a Rectified Linear Unit-activated BLSTM tuned by LFOB-COA; a post-processing step is then applied. To demonstrate the proposed method's effectiveness, its results are compared with those of existing methods.
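As a sketch of the classification stage only, the snippet below shows a Rectified Linear Unit-activated bidirectional LSTM scoring per-shot deep features, assuming PyTorch and a 2048-dimensional CNN feature vector per shot; the JSFCM segmentation and the LFOB-COA optimization itself are not reproduced here.

```python
# Minimal sketch of a ReLU-activated BLSTM shot scorer (dimensions assumed).
import torch
import torch.nn as nn

class ShotBLSTM(nn.Module):
    def __init__(self, feat_dim=2048, hidden=256):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                             bidirectional=True)
        self.head = nn.Sequential(
            nn.ReLU(),                     # rectified linear activation
            nn.Linear(2 * hidden, 1),      # importance score per shot
        )

    def forward(self, x):                  # x: (batch, shots, feat_dim)
        out, _ = self.blstm(x)
        return self.head(out).squeeze(-1)  # (batch, shots) logits

scores = ShotBLSTM()(torch.randn(1, 12, 2048))  # 12 shots of CNN features
```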


2021 ◽  
Vol 13 (14) ◽  
pp. 20284-20287
Author(s):  
Bhuwan Singh Bist ◽  
Prashant Ghimire ◽  
Basant Sharma ◽  
Chiranjeevi Khanal ◽  
Anoj Subedi

Latrine sites are places used for urination and defecation, which mostly act as signaling agents for multiple purposes such as territorial marking, confrontation with intruders or potential predators, and delivering inter- and intra-specific communication messages. To understand latrine site visit patterns, a single camera trap was deployed for 91 trap nights at a latrine site of the Large Indian Civet during December 2016 and February–March 2017. The latrine site was found under a tree with abundant crown cover and bushes. At least two individuals were found to be using a single latrine site irregularly between 1800 h and 0600 h, with higher activity between 1800 h and 2300 h. Our results indicate an irregular latrine site visit pattern; hence, similar studies with a robust research design over larger areas are required to understand specific latrine use patterns.


2021 ◽  
Vol 147 ◽  
pp. 106743
Author(s):  
Han Tu ◽  
Zeren Gao ◽  
Chuanbiao Bai ◽  
Shihai Lan ◽  
Yaru Wang ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7879
Author(s):  
Jinyeong Heo ◽  
Yongjin (James) Kwon

The 3D vehicle trajectory in complex traffic conditions, such as crossroads and heavy traffic, is of great practical use in autonomous driving. To accurately extract 3D vehicle trajectories from a perspective camera at a crossroad, where a vehicle can face any heading over a full 360 degrees, problems such as the narrow visual angle of a single-camera scene, vehicle occlusion under low camera perspectives, and the lack of vehicle physical information must be solved. In this paper, we propose a method for estimating the 3D bounding boxes of vehicles and extracting their trajectories using deep convolutional neural networks (DCNNs) in an overlapping multi-camera crossroad scene. First, traffic data were collected using overlapping multi-cameras to obtain a wide range of trajectories around the crossroad. Then, 3D bounding boxes of vehicles were estimated and tracked in each single-camera scene through DCNN models (YOLOv4, multi-branch CNN) combined with camera calibration. Using this information, the 3D vehicle trajectories could be extracted on the ground plane of the crossroad by mapping the results from the overlapping multi-cameras with a homography matrix. Finally, in experiments, the errors of the extracted trajectories were corrected through simple linear interpolation and regression, and the accuracy of the proposed method was verified by computing the difference from ground-truth data. Compared with other previously reported methods, our approach is shown to be more accurate and more practical.
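The ground-plane mapping described above can be sketched with OpenCV's homography utilities. The point correspondences below are illustrative assumptions (e.g., surveyed road markings); in practice they would come from the actual camera calibration.

```python
# A sketch, under assumed correspondences, of projecting a tracked
# vehicle's pixel trajectory onto a common crossroad ground plane.
import cv2
import numpy as np

# Four known image<->ground correspondences (assumed values).
img_pts = np.float32([[320, 720], [960, 720], [880, 420], [400, 420]])
gnd_pts = np.float32([[0, 0], [7, 0], [7, 30], [0, 30]])  # metres

H, _ = cv2.findHomography(img_pts, gnd_pts)

def to_ground(track_px: np.ndarray) -> np.ndarray:
    """Map an (N, 2) pixel trajectory to ground-plane coordinates."""
    pts = track_px.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

trajectory_m = to_ground(np.array([[640.0, 700.0], [652.0, 690.0]]))
```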


2021 ◽  
Vol 22 (4) ◽  
pp. 461-470
Author(s):  
Jozsef Suto

Autonomous navigation is important not only in autonomous cars but also in other transportation systems. In many applications, an autonomous vehicle has to follow the curvature of a real or artificial road, in other words, its lane lines. In such applications, lane detection is the key. In this paper, we present a real-time lane line tracking algorithm designed mainly for mini vehicles with relatively low computational capacity and a single camera sensor. The proposed algorithm combines computer vision techniques with digital filtering. To demonstrate the performance of the method, experiments were conducted on an indoor, self-made test track where the effects of several external influencing factors could be observed. Experimental results show that the proposed algorithm works well regardless of shadows, bends, reflections, and lighting changes.
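A minimal sketch of this combination, assuming OpenCV: a classic edge-plus-Hough lane detector whose output is stabilized by a simple first-order digital filter between frames. The thresholds and smoothing factor are illustrative, not the paper's tuned values.

```python
# Lane-line sketch: Canny + probabilistic Hough, smoothed by an
# exponential (first-order IIR) filter across frames (values assumed).
import cv2
import numpy as np

ALPHA = 0.2    # exponential-smoothing factor (assumed)
state = None   # filtered (x1, y1, x2, y2) lane estimate

def detect_lane(frame_bgr: np.ndarray):
    global state
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    if lines is not None:
        measurement = lines[:, 0, :].mean(axis=0)  # crude consensus line
        state = (measurement if state is None
                 else (1 - ALPHA) * state + ALPHA * measurement)
    return state  # survives frames with no detection (shadows, glare)
```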


Measurement ◽  
2021 ◽  
pp. 110439
Author(s):  
Jie Bai ◽  
Pingjuan Niu ◽  
Shinan Cao
