Compact, low cost, large field-of-view self-referencing digital holographic interference microscope

Optik ◽  
2021 ◽  
pp. 167615
Author(s):  
Mugdha Joglekar ◽  
Vismay Trivedi ◽  
Ritu Bhatt ◽  
Vani Chhaniwal ◽  
Satish Dubey ◽  
...


Author(s):
L. Barazzetti ◽  
M. Previtali ◽  
F. Roncoroni

360° cameras capture the whole scene around the photographer in a single shot. Cheap 360° cameras are a new paradigm in photogrammetry: the camera can be pointed in any direction, and the large field of view reduces the number of photographs required. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which cost about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared against check points measured with a total station and against laser scanning point clouds. The paper summarizes some practical rules for image acquisition, as well as the importance of ground control points for removing possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (capturing the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where a 360° camera can be a better choice than a project based on central perspective cameras. In particular, 360° cameras are very useful in the survey of long and narrow spaces, as well as interior areas such as small rooms.
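
The paper itself does not include code; as background, the following minimal Python sketch (the function name and axis convention are assumptions, not from the paper) shows the equirectangular image model that single-shot 360° cameras typically produce, mapping a pixel to the viewing ray that image orientation and bundle adjustment operate on:

import numpy as np

def pixel_to_ray(u, v, width, height):
    # Map an equirectangular pixel (u, v) to a unit viewing ray in the
    # camera frame (x right, y down, z forward -- an assumed convention).
    lon = (u / width) * 2.0 * np.pi - np.pi       # longitude in [-pi, pi]
    lat = np.pi / 2.0 - (v / height) * np.pi      # latitude in [pi/2, -pi/2]
    x = np.cos(lat) * np.sin(lon)
    y = -np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

Given a camera pose (R, t) estimated by bundle adjustment, the corresponding world-frame ray is then simply R @ pixel_to_ray(u, v, width, height).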


Nanoscale ◽  
2017 ◽  
Vol 9 (37) ◽  
pp. 14172-14183 ◽  
Author(s):  
Astrid Gesper ◽  
Philipp Hagemann ◽  
Patrick Happel

We present an improved Scanning Ion Conductance Microscope that allows high-resolution studies of the interaction of nanoparticles with the cell membrane.


Proceedings ◽  
2018 ◽  
Vol 4 (1) ◽  
pp. 44 ◽  
Author(s):  
Ankit Ravankar ◽  
Abhijeet Ravankar ◽  
Yukinori Kobayashi ◽  
Takanori Emaru

Mapping and exploration are important tasks for mobile robots in applications such as search and rescue, inspection, and surveillance. Unmanned aerial vehicles (UAVs) are well suited for such tasks because they offer a large field of view compared to ground robots. Autonomous operation of UAVs is desirable for exploration in unknown environments. In such environments, the UAV must build a map of the environment and simultaneously localize itself within it, which is commonly known as the SLAM (simultaneous localization and mapping) problem. This is also required to navigate safely through open spaces and to make informed decisions about exploration targets. UAVs have physical constraints, including limited payload, and are generally equipped with low-spec embedded computing devices and sensors. It is therefore often challenging to achieve robust SLAM on UAVs, which in turn affects exploration. In this paper, we present autonomous exploration by UAVs in completely unknown environments using low-cost sensors such as a LIDAR and an RGB-D camera. A sensor fusion method is proposed to build a dense 3D map of the environment. Multiple images of the scene are geometrically aligned as the UAV explores the environment, and a frontier exploration technique is then used to select the next target in the mapped area so as to explore the maximum possible area. The results show that the proposed algorithm can build precise maps even with low-cost sensors and explore the environment efficiently.
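
The abstract does not detail the frontier criterion; the sketch below is a minimal occupancy-grid illustration of classic frontier-based exploration (free cells bordering unknown space), with the cell labels being assumptions rather than the authors' values:

import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1   # assumed cell labels

def find_frontiers(grid):
    # Frontier cells: free cells with at least one unknown neighbour
    # (the classic criterion of frontier-based exploration).
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            window = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (window == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers

An exploration planner would then typically cluster these cells and steer the UAV toward the nearest or most informative frontier until none remain.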


Author(s):  
C. Chen ◽  
B. S. Yang ◽  
S. Song

Driven by the miniaturization and lightweighting of positioning and remote sensing sensors, as well as the urgent need to fuse indoor and outdoor maps for next-generation navigation, 3D indoor mapping from mobile scanning is a hot research and application topic. Point clouds with auxiliary data such as colour and infrared images derived from a 3D indoor mobile mapping suite can be used in a variety of novel applications, including indoor scene visualization, automated floorplan generation, gaming, reverse engineering, navigation, and simulation. State-of-the-art 3D indoor mapping systems equipped with multiple laser scanners produce accurate point clouds of building interiors containing billions of points. However, these laser-scanner-based systems are mostly expensive and not portable. Low-cost consumer RGB-D cameras provide an alternative way to address the core challenge of indoor mapping: capturing the detailed underlying geometry of building interiors. Nevertheless, RGB-D cameras have a very limited field of view, resulting in low efficiency during data collection and incomplete datasets that miss major building structures (e.g. ceilings, walls). Attempting to collect a complete scene without data gaps using a single RGB-D camera is not technically sound because of the large amount of human labour required and the number of position parameters that must be solved. To find an efficient and low-cost solution for 3D indoor mapping, in this paper we present an indoor mapping suite prototype built upon a novel calibration method that calibrates the internal and external parameters of multiple RGB-D cameras. Three Kinect sensors are mounted on a rig with different view directions to form a large field of view. The calibration procedure is threefold: (1) the internal parameters of the colour and infrared camera inside each Kinect are calibrated using a chessboard pattern; (2) the external parameters between the colour and infrared camera inside each Kinect are calibrated using a chessboard pattern; (3) the external parameters between the Kinects are first calculated using a pre-set calibration field and then refined by an iterative closest point algorithm. Experiments are carried out to validate the proposed method on RGB-D datasets collected by the indoor mapping suite prototype. The effectiveness and accuracy of the proposed method are evaluated by comparing the point clouds derived from the prototype with ground-truth data collected by a commercial terrestrial laser scanner at ultra-high density. The overall analysis of the results shows that the proposed method achieves seamless integration of multiple point clouds from different RGB-D cameras collected at 30 frames per second.
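
As a hedged illustration of calibration steps (1) and (2) above, and not the authors' actual code, the following Python/OpenCV sketch estimates per-camera intrinsics from chessboard views and then the colour-to-infrared extrinsics of one Kinect; the board dimensions and square size are assumed values:

import cv2
import numpy as np

PATTERN = (9, 6)    # inner chessboard corners per row/column (assumed)
SQUARE = 0.025      # square size in metres (assumed)

# 3D corner coordinates of the board in its own frame.
OBJP = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
OBJP[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

def detect_corners(images):
    # Collect matched object/image points for frames where the board is found.
    obj_pts, img_pts = [], []
    for img in images:
        found, corners = cv2.findChessboardCorners(img, PATTERN)
        if found:
            obj_pts.append(OBJP)
            img_pts.append(corners)
    return obj_pts, img_pts

def calibrate_intrinsics(images):
    # Step (1): intrinsics of a single camera from its chessboard views.
    obj_pts, img_pts = detect_corners(images)
    size = (images[0].shape[1], images[0].shape[0])   # (width, height)
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist

def calibrate_pair(obj_pts, pts_colour, pts_ir, K_c, d_c, K_i, d_i, size):
    # Step (2): rigid transform (R, T) from the colour to the infrared
    # camera, keeping the already-estimated intrinsics fixed.
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, pts_colour, pts_ir, K_c, d_c, K_i, d_i, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return R, T

Step (3), the inter-Kinect extrinsics, can be refined with any iterative closest point implementation (e.g. Open3D's registration_icp), seeded by the calibration-field estimate.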


2021 ◽  
Author(s):  
Shuangjiu Fu ◽  
Michael Vronsky ◽  
Mohammad-Reza Alam

Accurately determining water surface elevation and wave shape in the hydraulic laboratory is critical for experimental research and for the physical understanding of ocean waves. Existing technologies such as wave gauges cannot capture a continuous wave profile across both space and time. This poses a problem, because nonlinear wave characteristics vary as a function of position and cannot be fully described by such point measurements. Furthermore, wave gauges are intrusive to the flow field. Alternative single-camera methods cannot properly capture wave characteristics over a large field of view without sacrificing resolution. In this paper, the authors propose an easy-to-use, low-cost method for measuring wave height and shape along the length of the flume over time. The method stitches the views of multiple web cameras and applies a Canny-based edge detection algorithm with experimentally determined thresholds and additional filters for maximum robustness and efficacy. Additionally, distortion correction is implemented in a computationally efficient manner. Video is acquired by three Logitech C920 PRO HD cameras recording at a resolution of 1280 × 720 at 24 fps. The wave generator can produce waves with frequencies between 0.1 Hz and 1 Hz. The experimental results show that wave height measurements can be obtained with a maximum resolution of 0.83 mm and a relative error of ±1.5% when compared with a reference wave gauge measurement. This work demonstrates the ability to arbitrarily extend the horizontal field of view while providing more accurate measurement results.
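
As an illustration of the processing chain described above, and not the authors' implementation, the following Python/OpenCV sketch undistorts a frame and extracts a per-column water-surface profile with Canny edge detection; the thresholds, blur kernel, and "topmost edge per column" rule are assumptions:

import cv2
import numpy as np

def waterline_profile(frame, K, dist, low=50, high=150):
    # Return the water-surface row for each image column (-1 if no edge).
    # Thresholds and the topmost-edge rule are assumed, not the authors'
    # tuned values.
    undistorted = cv2.undistort(frame, K, dist)       # lens distortion correction
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress sensor noise
    edges = cv2.Canny(gray, low, high)
    profile = np.full(edges.shape[1], -1, dtype=int)
    for col in range(edges.shape[1]):
        rows = np.flatnonzero(edges[:, col])
        if rows.size:
            profile[col] = rows[0]                    # topmost edge = surface
    return profile

Converting the pixel profile to millimetres then requires only the calibrated scale of each camera and the stitching offsets between adjacent views.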


Author(s):  
Jianheng Huang ◽  
Yaohu Lei ◽  
Xin Liu ◽  
Jinchuan Guo ◽  
Ji Li ◽  
...  
