Calibrated bubble depth determination using a single camera

2020 ◽  
Vol 164 ◽  
pp. 11-22
Author(s):  
Francois Noelle ◽  
Matthew R. Molteno ◽  
Robert W.M. Pott


2010 ◽  
Vol 1 (1) ◽  
pp. 51-62
Author(s):  
Marta Braun

Eadweard Muybridge's 1887 photographic atlas Animal Locomotion is a curious mixture of art and science, a polysemic text that has been subject to a number of readings. This paper focuses on Muybridge's technology. It seeks to understand his commitment to making photographs with a battery of cameras rather than a single camera. It suggests reasons for his choice of apparatus and shows how his final work, The Human Figure in Motion (1901), justifies the choices he made.


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2232
Author(s):  
Antonio Albiol ◽  
Alberto Albiol ◽  
Carlos Sánchez de Merás

Automated fruit inspection using cameras involves analyzing a collection of views of the same fruit, obtained by rotating the fruit while it is transported. Conventionally, each view is analyzed independently. However, to obtain a global score of fruit quality, defects must be matched between adjacent views, both to avoid counting them more than once and to verify that the whole surface has been examined. To accomplish this, this paper estimates the 3D rotation undergone by the fruit using a single camera. Estimating the rotation requires a 3D model of the fruit geometry, and the paper proposes modeling the fruit shape as a 3D spheroid. The spheroid size and pose in each view are estimated from the silhouettes of all views. Once the geometric model has been fitted, a single 3D rotation is estimated for each view transition. With all rotations estimated, defects can be propagated to neighboring views, or a topographic map of the whole fruit surface can be built, opening the possibility of analyzing a single image (the map) instead of a collection of individual views. A large effort was made to make the method as fast as possible: each 3D rotation is estimated in under 0.5 ms on a single core of a standard i7 CPU.
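
As a rough illustration of the geometry described above, the sketch below back-projects a defect detected in one view onto an axis-aligned spheroid and maps it into the next view using an inter-view rotation. It is a minimal Python/NumPy sketch under simplifying assumptions (an orthographic camera, a spheroid centred at the origin, and a known rotation axis); the function names and numbers are illustrative, not the authors' implementation.

```python
import numpy as np

def backproject_to_spheroid(u, v, semi_axes):
    """Lift an image point (u, v) onto the visible half of an axis-aligned
    spheroid centred at the origin (orthographic camera looking along -z)."""
    a, b, c = semi_axes
    # Solve (u/a)^2 + (v/b)^2 + (z/c)^2 = 1 for the camera-facing (z > 0) point.
    s = 1.0 - (u / a) ** 2 - (v / b) ** 2
    if s < 0:
        raise ValueError("point lies outside the spheroid silhouette")
    return np.array([u, v, c * np.sqrt(s)])

def rotation_about_axis(axis, angle_rad):
    """Rodrigues' formula: rotation matrix for a unit axis and an angle."""
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)

def propagate_defect(u, v, semi_axes, R):
    """Map a defect seen at (u, v) in view k to its expected image position
    in view k+1, given the inter-view rotation R."""
    p = backproject_to_spheroid(u, v, semi_axes)
    q = R @ p                # rotate the surface point together with the fruit
    visible = q[2] > 0       # still on the camera-facing hemisphere?
    return q[:2], visible    # orthographic projection simply drops z

# Hypothetical example: semi-axes of 40, 40, 35 mm and a 30-degree rotation
# about the transport (x) axis between consecutive views.
R = rotation_about_axis([1, 0, 0], np.deg2rad(30))
uv_next, visible = propagate_defect(10.0, 5.0, (40, 40, 35), R)
print(uv_next, visible)
```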


2020 ◽  
Vol 11 (1) ◽  
pp. 3
Author(s):  
Laura Gonçalves Ribeiro ◽  
Olli J. Suominen ◽  
Ahmed Durmush ◽  
Sari Peltonen ◽  
Emilio Ruiz Morales ◽  
...  

Visual technologies have an indispensable role in safety-critical applications, where tasks must often be performed through teleoperation. Due to the lack of stereoscopic and motion parallax depth cues in conventional images, alignment tasks pose a significant challenge to remote operation. In this context, machine vision can provide mission-critical information to augment the operator’s perception. In this paper, we propose a retro-reflector marker-based teleoperation aid to be used in hostile remote-handling environments. The system computes the remote manipulator’s position with respect to the target using a set of one or two low-resolution cameras attached to its wrist. We develop an end-to-end pipeline of calibration, marker detection, and pose estimation, and extensively study the performance of the overall system. The results demonstrate that we have successfully engineered a retro-reflective marker from materials that can withstand the extreme temperature and radiation levels of the environment. Furthermore, we demonstrate that the proposed marker-based approach provides robust and reliable estimates and significantly outperforms a previous stereo-matching-based approach, even with a single camera.
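
To give a concrete sense of the single-camera pose-estimation step, the sketch below recovers a square marker's pose in the camera frame from its four detected corner points using OpenCV's solvePnP. The marker size, camera intrinsics, and detected pixel coordinates are placeholder assumptions, not values from the paper, and the calibration and marker-detection stages of the pipeline are taken as given.

```python
import numpy as np
import cv2

# Hypothetical square retro-reflective marker, 40 mm wide, defined in its own
# coordinate frame (z = 0 plane), corners in the order solvePnP's
# SOLVEPNP_IPPE_SQUARE expects.
MARKER_SIZE = 0.040  # metres
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
], dtype=np.float32)

# Placeholder intrinsics for a low-resolution camera, as obtained from calibration.
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume lens distortion was removed during calibration

def estimate_marker_pose(image_points):
    """Return the marker's rotation matrix and translation vector in the
    camera frame from its four detected corner pixels."""
    image_points = np.asarray(image_points, dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec

# Example with made-up corner detections (pixel coordinates):
corners = [(300.0, 200.0), (360.0, 202.0), (358.0, 262.0), (298.0, 260.0)]
R, t = estimate_marker_pose(corners)
print("marker distance from camera (m):", float(np.linalg.norm(t)))
```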


Author(s):  
Heshan Fernando ◽  
Vedang Chauhan ◽  
Brian Surgenor

This paper presents the results of a comparative study that investigated the use of image-based and signal-based sensors for fault detection and fault isolation of visually-cued faults on an automated assembly machine. The machine assembles 8 mm circular parts from a bulk supply onto continuously moving carriers at a rate of over 100 assemblies per minute. Common faults on the machine include part jams and ejected parts, which occur at different locations on the machine. Two sensor systems are installed on the machine for detecting and isolating these faults: an image-based system consisting of a single camera, and a signal-based system consisting of multiple greyscale sensors and limit switches. The requirements and performance of both systems are compared for detecting six faults on the assembly machine. Both methods are found to detect the faults effectively, but they differ greatly in terms of cost, ease of implementation, detection time, and fault isolation capability. The conventional signal-based sensors are low in cost, simple to implement, and require little computing power, but their installation is intrusive to the machine and readings from multiple sensors are required for faster fault detection and isolation. The more sophisticated image-based system requires an expensive, high-resolution, high-speed camera and significantly more processing power to detect the same faults; however, it is not intrusive to the machine, fault isolation becomes a simpler problem with video data, and the single camera can detect multiple faults in its field of view.
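
For illustration only, the fragment below shows the general idea behind an image-based fault check of this kind: comparing a region of interest in each camera frame against a reference image of normal operation and flagging large deviations. It is a hedged sketch with placeholder thresholds, ROI coordinates, and file names, and it does not reproduce the detection method used in the study.

```python
import numpy as np
import cv2

ROI = (slice(100, 180), slice(220, 300))  # rows, cols covering one inspected station (placeholder)
DIFF_THRESHOLD = 25                       # per-pixel grey-level difference (placeholder)
FAULT_PIXEL_RATIO = 0.15                  # fraction of ROI pixels that must differ (placeholder)

# Reference image of the station during normal operation (placeholder file name).
reference = cv2.imread("normal_operation.png", cv2.IMREAD_GRAYSCALE)

def roi_fault(frame_gray):
    """Return True if the ROI deviates substantially from the reference image."""
    diff = cv2.absdiff(frame_gray[ROI], reference[ROI])
    changed = np.count_nonzero(diff > DIFF_THRESHOLD)
    return changed / diff.size > FAULT_PIXEL_RATIO

# Placeholder video source standing in for the machine's camera feed.
cap = cv2.VideoCapture("assembly_line.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if roi_fault(gray):
        print("possible part jam at inspected station")
cap.release()
```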

