Construction of Multiple Camera Position Measurable Space

Author(s):  
N. Ando ◽  
K. Morioka ◽  
S. Takatsuka ◽  
J. Lee ◽  
H. Hashimoto

Author(s):  
Rolands Kromanis ◽  
Prakash Kripakaran

Abstract: Engineers can today capture high-resolution video recordings of bridge movements during routine visual inspections using modern smartphones and compile a historical archive over time. However, the recordings are likely to come from cameras of different makes, placed at varying positions, and previous studies have not explored whether such recordings can support monitoring of bridge condition. This study addresses that question by evaluating the feasibility of an imaging approach for condition assessment that is independent of the camera positions used for individual recordings. The proposed approach relies on the premise that spatial relationships between multiple structural features remain the same even when images of the structure are taken from different angles or camera positions. It employs coordinate transformation techniques, which use the identified features, to compute structural displacements from images. The approach is applied to a laboratory beam, subjected to static loading under various damage scenarios and recorded using multiple cameras in a range of positions. Results show that the response computed from the recordings is accurate, with a 5% discrepancy in computed displacements relative to the mean. The approach is also demonstrated on a full-scale pedestrian suspension bridge. Vertical bridge movements, induced by forced excitations, are collected with two smartphones and an action camera. Analysis of the images shows that the measurement discrepancy in computed displacements is 6%.
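The coordinate-transformation step described in the abstract can be illustrated with a planar homography estimated from reference features via the direct linear transform. This is a minimal sketch, not the study's exact formulation: the choice of a planar homography, the four-point DLT, and all point values below are illustrative assumptions.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 planar homography H mapping src -> dst from
    four or more point correspondences (direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, recovered via SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def to_structure_coords(H, px):
    """Map a pixel coordinate into the structure's coordinate frame."""
    p = H @ np.array([px[0], px[1], 1.0])
    return p[:2] / p[2]

# Hypothetical reference features: pixel positions of four identified
# structural features (src) and their known structure-frame coordinates (dst).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0), (2, 2), (0, 2)]
H = homography_dlt(src, dst)
```

Displacements would then follow by mapping a tracked feature's pixel location in each frame into the structure frame with `to_structure_coords` and differencing against its unloaded position, which is what makes the result independent of where each camera was placed.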


Author(s):  
Sunita Nadella ◽  
Lloyd A. Herman

Video traffic data were collected in 24 combinations of four camera position parameters. A machine vision processor was used to detect vehicle speeds and volumes from the videotapes, and the machine vision results were then compared with the actual vehicle volumes and speeds to give the percentage error in each case. The results of the study provide a procedure for establishing camera position parameters with specific reference points, helping machine vision users select suitable camera positions and develop appropriate measurement error expectations. For the specific site, field setup, and parameter ranges used in this study, the camera position parameters most likely to produce the least overall volume and speed errors were a low mounting height of approximately 7.6 m (25 ft), an upstream orientation (traffic moving toward the camera), a 50-mm (mid-angle) focal length, and a 15° vertical angle.
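The comparison against ground truth described above reduces to a signed percentage error per measurement; the counts in this sketch are hypothetical, not values from the study.

```python
def percentage_error(measured, actual):
    """Signed percentage error of a machine-vision measurement
    (vehicle count or speed) relative to the ground-truth value."""
    return 100.0 * (measured - actual) / actual

# Hypothetical example: ground-truth count of 100 vehicles in an
# interval, machine vision detected 95 -> -5% (undercount).
err = percentage_error(95, 100)
```

Aggregating the absolute values of these errors across intervals for each of the 24 parameter combinations is what lets the low-error configuration be identified.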


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Matthias Ivantsits ◽  
Lennart Tautz ◽  
Simon Sündermann ◽  
Isaac Wamala ◽  
Jörg Kempfert ◽  
...  

Abstract: Minimally invasive surgery is increasingly utilized for mitral valve repair and replacement. The intervention is performed with an endoscopic field of view on the arrested heart. Extracting the necessary information from the live endoscopic video stream is challenging due to the moving camera position, the high variability of defects, and the occlusion of structures by instruments. During such minimally invasive interventions there is no time to segment regions of interest manually. We propose a real-time-capable deep-learning-based approach to detect and segment the relevant anatomical structures and instruments. To support universal deployment of the proposed solution, we evaluate it on pixel accuracy as well as on distance measurements of the detected contours. The U-Net, Google's DeepLab v3, and Obelisk-Net models are cross-validated, with DeepLab showing superior results in both pixel accuracy and distance measurements.
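The two evaluation criteria named in the abstract, pixel accuracy and contour distance, can be sketched in plain NumPy for binary masks. This is a minimal sketch under stated assumptions: the 4-neighbourhood boundary definition and the symmetric nearest-neighbour distance are illustrative choices, not the paper's exact protocol.

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels where predicted and ground-truth labels agree."""
    return float((pred == gt).mean())

def _boundary_points(mask):
    """Boundary pixels of a binary mask (4-neighbourhood erosion residue)."""
    m = np.pad(mask.astype(bool), 1)
    interior = (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
                & m[1:-1, :-2] & m[1:-1, 2:])
    return np.argwhere(mask.astype(bool) & ~interior)

def mean_contour_distance(pred, gt):
    """Mean symmetric nearest-neighbour distance (in pixels) between the
    boundaries of a predicted and a ground-truth binary mask."""
    p, g = _boundary_points(pred), _boundary_points(gt)
    d = np.sqrt(((p[:, None, :] - g[None, :, :]) ** 2).sum(-1))
    return float(np.concatenate([d.min(1), d.min(0)]).mean())
```

A perfect segmentation scores an accuracy of 1.0 and a contour distance of 0; contour distance is the more clinically relevant of the two when downstream measurements (e.g. annulus dimensions) are taken from the detected outlines.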


1975 ◽  
Vol 84 (8) ◽  
pp. 596-599 ◽  
Author(s):  
Gunter Bevier

2014 ◽  
Vol 40 ◽  
pp. 206-213 ◽  
Author(s):  
Simone Moretti ◽  
Sergio Cicalò ◽  
Matteo Mazzotti ◽  
Velio Tralli ◽  
Marco Chiani

2021 ◽  
Author(s):  
Michela Zaccaria ◽  
Mikhail Giorgini ◽  
Riccardo Monica ◽  
Jacopo Aleotti
