A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

Author(s):  
M. Adduci ◽  
K. Amplianitis ◽  
R. Reulke

Human detection and tracking has been a prominent research area for researchers around the globe. State-of-the-art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still a largely unexplored research field. In both the 2D and 3D cases, introducing a multi-camera system can vastly improve the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi-RGB-D-camera indoor tracking system to examine how camera calibration and pose affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state-of-the-art single-camera pose estimators were evaluated to check how well poses can be estimated from planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single-camera pose estimators in a multi-camera configuration. Results show that the single-camera estimators provide high-accuracy poses, with reprojection errors of less than half a pixel, allowing the bundle adjustment to converge after very few iterations. As for ICP, the relative orientation between cloud pairs is largely preserved, yielding a low fitting error between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the point clouds generated by each sensor, and therefore in the accuracy of the produced 3D trajectories.
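As an illustration of the single-camera pose estimation step described above, here is a minimal sketch using OpenCV's chessboard detector and solvePnP. The board geometry, intrinsics and file name are placeholder assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Hypothetical board: 9x6 inner corners, 25 mm squares (not the paper's actual target).
PATTERN = (9, 6)
SQUARE_MM = 25.0

# 3D corner coordinates of the planar chessboard in its own frame (Z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

# Placeholder intrinsics, roughly matching a Kinect v1 colour camera.
K = np.array([[525.0, 0.0, 319.5], [0.0, 525.0, 239.5], [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume lens distortion was removed during sensor calibration

gray = cv2.cvtColor(cv2.imread("kinect_frame.png"), cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, PATTERN)
if found:
    # Sub-pixel refinement, needed to reach half-pixel reprojection accuracy.
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)  # single-camera pose
    # Reprojection RMS error in pixels, the quality measure discussed above.
    proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
    rms_px = np.sqrt(np.mean(np.sum((proj - corners) ** 2, axis=2)))
```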

2019 ◽  
Vol E102.B (4) ◽  
pp. 708-721
Author(s):  
Toshihiro KITAJIMA ◽  
Edwardo Arata Y. MURAKAMI ◽  
Shunsuke YOSHIMOTO ◽  
Yoshihiro KURODA ◽  
Osamu OSHIRO

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yipeng Zhu ◽  
Tao Wang ◽  
Shiqiang Zhu

Purpose: This paper aims to develop a robust person-tracking method for human-following robots. The tracking system adopts the multimodal fusion results of millimeter wave (MMW) radars and monocular cameras for perception. A prototype human-following robot was developed and evaluated using the proposed tracking system.

Design/methodology/approach: Limited by angular resolution, point clouds from MMW radars are too sparse to form features for human detection. Monocular cameras can provide semantic information about objects in view, but not their spatial locations. Considering the complementarity of the two sensors, a sensor fusion algorithm based on multimodal data combination is proposed to identify and localize the target person under challenging conditions. In addition, a closed-loop controller is designed for the robot to follow the target person at the expected distance.

Findings: A series of experiments under different circumstances was carried out to validate the fusion-based tracking method. Experimental results show that the average tracking errors are around 0.1 m. The robot can also handle different situations, overcome short-term interference, and continually track and follow the target person.

Originality/value: This paper proposes a robust tracking system based on the fusion of MMW radars and cameras. Interference such as occlusion and overlapping is handled well with the help of velocity information from the radars. Compared with other state-of-the-art solutions, the sensor fusion method is cost-effective and requires no additional tags on people. Its stable performance shows good application prospects for human-following robots.
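The abstract does not spell out the closed-loop controller, so the following is only an illustrative sketch of a proportional follower for a differential-drive base; the gains, setpoint and function name are hypothetical.

```python
import math

# Hypothetical gains and following distance; the paper's actual controller is not given.
K_LIN, K_ANG = 0.8, 1.5
DESIRED_DIST = 1.0  # expected following distance in metres

def follow_cmd(target_x, target_y):
    """Map the fused target position (robot frame, metres) to velocity commands.

    Returns (linear m/s, angular rad/s) for a differential-drive base.
    """
    dist = math.hypot(target_x, target_y)
    bearing = math.atan2(target_y, target_x)
    v = K_LIN * (dist - DESIRED_DIST)  # close the range error to the person
    w = K_ANG * bearing                # turn toward the person
    return v, w
```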


2019 ◽  
Vol 13 (3) ◽  
pp. 2998-3009 ◽  
Author(s):  
Apidet Booranawong ◽  
Nattha Jindapetch ◽  
Hiroshi Saito

2020 ◽  
Vol 4 (4) ◽  
pp. 27
Author(s):  
Liang Cheng Chang ◽  
Shreya Pare ◽  
Mahendra Singh Meena ◽  
Deepak Jain ◽  
Dong Lin Li ◽  
...  

At present, traditional visual-based surveillance systems are becoming impractical, inefficient, and time-consuming. Automation-based surveillance systems have emerged to overcome these limitations. However, automatic systems face challenges such as occlusion and the need to capture images smoothly and continuously. This research proposes a weighted resampling particle filter approach for human tracking to handle these challenges. The primary functions of the proposed system are human detection, human monitoring, and camera control. We used the codebook matching algorithm to define the human region as a target and track it, and the particle filter algorithm to follow the target and extract its information. The obtained information was then used to drive the camera control. Experiments were conducted in various environments to demonstrate the stability and performance of the proposed system with an active camera.
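A minimal sketch of the weighted-resampling step at the core of such a particle filter, shown here for a toy 1D constant-velocity tracker; the motion and measurement models below are placeholder assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def systematic_resample(particles, weights):
    """Systematic resampling: draw particles in proportion to their weights."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx]

# Toy tracking step; state per particle: [position, velocity].
particles = rng.normal(0.0, 1.0, size=(500, 2))
particles[:, 0] += particles[:, 1]                    # predict: constant velocity
particles += rng.normal(0.0, 0.05, particles.shape)   # add process noise

z = 1.2                                               # placeholder measurement
weights = np.exp(-0.5 * ((particles[:, 0] - z) / 0.3) ** 2)
weights /= weights.sum()
particles = systematic_resample(particles, weights)   # weighted resampling step
estimate = particles[:, 0].mean()
```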


Mekatronika ◽  
2020 ◽  
Vol 2 (2) ◽  
pp. 55-61
Author(s):  
Venketaramana Balachandran ◽  
Muhammad Nur Aiman Shapiee ◽  
Ahmad Fakhri Ab. Nasir ◽  
Mohd Azraai Mohd Razman ◽  
Anwar P.P. Abdul Majeed

Human detection and tracking are increasingly in demand across various industries. Concern over human safety has inhibited the deployment of advanced and collaborative robotics, mainly owing to the dimensional limitations of present safety sensing. This study entails developing a deep-learning-based human presence detector for deployment in smart factory environments to overcome these limitations. The objective is to develop a suitable human presence detector based on a state-of-the-art YOLO variant that achieves real-time detection with high inference accuracy for feasible deployment at TT Vision Holdings Berhad. The study covers the fundamentals of modern deep-learning-based object detectors and the methods used to accomplish the human presence detection task. The YOLO family of object detectors has revolutionized computer vision and object detection and has continuously evolved since its introduction; at present, the most recent variants are YOLOv4 and YOLOv4-Tiny. These models were acquired, pre-trained and benchmarked on the public CrowdHuman dataset at the preliminary stage. Trained on CrowdHuman for 4,000 iterations, YOLOv4 and YOLOv4-Tiny achieved a mean Average Precision (mAP) of 78.21% at 25 FPS and 55.59% at 80 FPS, respectively. The models were further fine-tuned on a custom CCTV dataset, improving precision to 88.08% at 25 FPS and 77.70% at 80 FPS, respectively. The final evaluation justified YOLOv4 as the most feasible model for deployment.
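For context, a hedged sketch of running a trained YOLOv4 model for person detection with OpenCV's Darknet importer; the file paths, input size and class index are placeholders, not the actual TT Vision deployment.

```python
import cv2

# Placeholder paths; cfg/weights would come from training on CrowdHuman/CCTV data.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(608, 608), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("cctv_frame.jpg")  # placeholder image
classes, confidences, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
for cls, conf, box in zip(classes, confidences, boxes):
    if int(cls) == 0:  # assume class 0 = person in this setup
        x, y, w, h = box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```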


2005 ◽  
Vol 24 (3) ◽  
pp. 201-206 ◽  
Author(s):  
Mario Plebani

External Quality Assurance (EQA) and Proficiency Testing (PT) programs are fundamental tools for quality evaluation and improvement in clinical laboratories. A growing body of evidence demonstrates the usefulness of these programs for reducing inter-laboratory variation and analytical errors, and for improving the "state of the art". The validity of EQA/PT programs is strongly affected by the quality of control materials, the design of the program, namely its ability to estimate analytical bias and imprecision, and the commitment of providers to assist in the education of participant laboratories. Future perspectives for EQA/PT include the evaluation of pre- and post-analytical steps, the use of the Internet for receiving results from and communicating with participant laboratories, and the accreditation/certification of the programs.
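To make the bias and imprecision estimates concrete, here is a minimal illustration with invented numbers; real EQA schemes use assigned target values and scheme-specific statistics.

```python
import statistics

# Illustrative EQA results for one analyte; all values are invented placeholders.
lab_results = [5.1, 5.3, 4.9, 5.2, 5.0]  # repeated measurements, mmol/L
target = 5.0                              # assigned/reference value

mean = statistics.mean(lab_results)
bias_pct = 100 * (mean - target) / target            # analytical bias (%)
cv_pct = 100 * statistics.stdev(lab_results) / mean  # imprecision as CV (%)
print(f"bias = {bias_pct:.1f}%, CV = {cv_pct:.1f}%")
```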


Author(s):  
Robert Niederheiser ◽  
Martin Mokroš ◽  
Julia Lange ◽  
Helene Petschko ◽  
Günther Prasicek ◽  
...  

Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D modelling. However, the important choice of which sensor and which software to use is not straightforward and needs consideration, as it will affect the resulting 3D point cloud and its derivatives.

We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of the vegetated rock face from identical positions with all sensors, then compared the results of the different software packages regarding ease of workflow, visual appeal, and similarity and quality of the point clouds.

While PhotoScan and Pix4D offer the most user-friendly workflows, they are also "black-box" programmes that give little insight into their processing; unsatisfactory results can only be addressed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software, as it is less user-friendly; however, it offers the most possibilities to influence the processing workflow. The resulting point clouds from PhotoScan and MicMac are the most visually appealing.
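The point-cloud similarity comparison could, for example, be scripted with Open3D's nearest-neighbour cloud-to-cloud distance (the measure CloudCompare's C2C tool reports); the file names below are placeholders.

```python
import numpy as np
import open3d as o3d

# Placeholder files: point clouds of the same rock face from two packages.
ref = o3d.io.read_point_cloud("photoscan.ply")
test = o3d.io.read_point_cloud("micmac.ply")

# Distance from each test point to its nearest neighbour in the reference cloud.
d = np.asarray(test.compute_point_cloud_distance(ref))
print(f"mean {d.mean():.4f}  median {np.median(d):.4f}  95th pct {np.percentile(d, 95):.4f}")
```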


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3122 ◽  
Author(s):  
Elizabeth Cabrera ◽  
Luis Ortiz ◽  
Bruno Silva ◽  
Esteban Clua ◽  
Luiz Gonçalves

We propose a versatile method for estimating the RMS error of depth data provided by generic 3D sensors capable of generating RGB and depth (D) data of the scene, i.e., those based on techniques such as structured light, time-of-flight and stereo. A common checkerboard is used: its corners are detected and two point clouds are created, one with the real coordinates of the pattern corners and one with the corner coordinates reported by the device. After registration of these two clouds, the RMS error is computed. Then, using curve-fitting methods, an equation is obtained that generalizes the RMS error as a function of the distance between the sensor and the checkerboard pattern. The depth errors estimated by our method are compared to those estimated by state-of-the-art approaches, validating its accuracy and utility. This method can be used to rapidly estimate the quality of RGB-D sensors, facilitating robotics applications such as SLAM and object recognition.
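A compact sketch of the pipeline described above: rigid (Kabsch) registration of the measured corner cloud onto the reference cloud, the RMS error, and a curve fit of error against range. All arrays below are illustrative placeholders, not the paper's data.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid alignment (Kabsch): returns src mapped onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return (src - mu_s) @ R.T + mu_d

def rms_error(measured, reference):
    """RMS distance between registered corner clouds, as in the described method."""
    aligned = rigid_register(measured, reference)
    return np.sqrt(np.mean(np.sum((aligned - reference) ** 2, axis=1)))

# Placeholder per-distance RMS values; the paper fits error as a function of range.
distances = np.array([0.8, 1.2, 1.6, 2.0, 2.4])      # metres
rms = np.array([0.003, 0.006, 0.011, 0.019, 0.028])  # metres (illustrative)
coeffs = np.polyfit(distances, rms, 2)               # quadratic error model
predict = np.poly1d(coeffs)                          # RMS error at any range
```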

