Autonomous Navigation in Three-Dimensional Urban Environments Using Wide-Field Integration of Optic Flow

2010 · Vol 33 (1) · pp. 147-159
Author(s): Andrew M. Hyslop, J. Sean Humbert
2012 · Vol 107 (12) · pp. 3446-3457
Author(s): Pei Liang, Jochen Heitwerth, Roland Kern, Rafael Kurtz, Martin Egelhaaf

Three key motion-sensitive elements of a neural circuit, presumably involved in processing object and distance information, were analyzed with optic flow sequences as experienced by blowflies in a three-dimensional environment. This optic flow is largely shaped by the blowfly's saccadic flight and gaze strategy, which separates translational flight segments from fast saccadic rotations. By modifying this naturalistic optic flow, all three analyzed neurons could be shown to respond during the intersaccadic intervals not only to nearby objects but also to changes in the distance to background structures. In the presence of strong background motion, the three types of neuron differ in their sensitivity to object motion. Object-induced response increments are largest in FD1, a neuron long known to respond better to moving objects than to spatially extended motion patterns, but weakest in VCH, a neuron that integrates wide-field motion from both eyes and, by inhibiting the FD1 cell, is responsible for its object preference. Small but significant object-induced response increments are present in HS cells, which serve both as major input neurons of VCH and as output neurons of the visual system. In both HS and FD1, intersaccadic background responses decrease with increasing distance to the animal, although much more prominently in FD1. This strong dependence of FD1 on background distance is concluded to be a consequence of VCH, whose activity, and thus its inhibitory strength, increases dramatically with increasing distance.
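
As a reading aid, here is a minimal rate-model sketch of the circuit logic described above: FD1 excitation opposed by VCH, which pools binocular wide-field motion via HS-like inputs. All gains and the rectified-linear form are hypothetical illustrations, not values fitted to the recordings.

```python
import numpy as np

def hs_response(flow_ipsi):
    """Toy HS-like cell: rectified sum of ipsilateral wide-field motion."""
    return np.maximum(flow_ipsi.sum(), 0.0)

def vch_response(flow_left, flow_right, gain=0.8):
    """Toy VCH-like cell: integrates wide-field motion from both eyes."""
    return gain * (hs_response(flow_left) + hs_response(flow_right))

def fd1_response(object_motion, flow_left, flow_right, inhibition=0.5):
    """Toy FD1-like cell: object excitation minus VCH-mediated inhibition."""
    excitation = np.maximum(object_motion, 0.0)
    return max(excitation - inhibition * vch_response(flow_left, flow_right), 0.0)

# A nearby object against weak (distant) background flow yields a strong
# FD1 response; strong background motion suppresses it via VCH inhibition.
background_far = np.full(10, 0.05)   # weak background flow (distant walls)
background_near = np.full(10, 0.5)   # strong background flow (nearby walls)
print(fd1_response(1.0, background_far, background_far))    # -> 0.6
print(fd1_response(1.0, background_near, background_near))  # -> 0.0
```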


2021 · Vol ahead-of-print (ahead-of-print)
Author(s): Mechiel van Manen, Léon olde Scholtenhuis, Hans Voordijk

Purpose – This study aims to empirically validate five propositions about the benefits of three-dimensional (3D) visualizations for the management of subsurface utility projects. Specifically, the authors validate whether benefits of 3D reported in the building construction project management literature also apply to subsurface utility projects, and map them onto a taxonomy of project complexity levels.
Design/methodology/approach – A multiple case study of three utility construction projects was carried out, during which the first author was involved in the daily work practices at a utility contractor. 3D visualizations of existing project models were developed, and design and construction meetings were conducted. Practitioners' interactions with and reflections on these 3D visualizations were noted. Observational data from the three project types were matched with the five propositions to determine where the benefits of 3D visualizations manifested themselves.
Findings – Practitioners found that 3D visualizations had the most merit in crowded urban environments when constructing rigid pipelines. All propositions were validated and evaluated as beneficial in subsurface utility projects of complexity level C3. In urban projects with rigid pipelines (the projects with the highest complexity level), 3D visualization prevents misunderstandings or misinterpretations and increases the efficiency of coordination. It is recommended to implement 3D visualization approaches in such complex projects.
Originality/value – There is only limited evidence on the value of 3D visualizations in managing utility projects. This study contributes rich empirical evidence on this value, based on a six-month observation period at a subsurface contractor. The merit of 3D visualizations was assessed by associating 3D approaches with project complexity levels, which may help utility contractors strategically implement 3D applications.


2001 · Vol 85 (2) · pp. 724-734
Author(s): Holger G. Krapp, Roland Hengstenberg, Martin Egelhaaf

Integrating binocular motion information tunes wide-field direction-selective neurons in the fly optic lobe to respond preferentially to specific optic flow fields. This is shown by measuring the local preferred directions (LPDs) and local motion sensitivities (LMSs) at many positions within the receptive fields of three types of anatomically identifiable lobula plate tangential neurons: the three horizontal system (HS) neurons, the two centrifugal horizontal (CH) neurons, and three heterolateral connecting elements. The latter impart to two of the HS and to both CH neurons a sensitivity to motion from the contralateral visual field. Thus in two HS neurons and both CH neurons, the response field comprises part of the ipsi- and contralateral visual hemispheres. The distributions of LPDs within the binocular response fields of each neuron show marked similarities to the optic flow fields created by particular types of self-movements of the fly. Based on the characteristic distributions of local preferred directions and motion sensitivities within the response fields, the functional role of the respective neurons in the context of behaviorally relevant processing of visual wide-field motion is discussed.
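
A matched-filter reading of these response fields can be sketched in a few lines: the response of an idealised tangential cell is the sensitivity-weighted projection of the local optic flow onto the local preferred directions. The rotation-only flow model and the randomly sampled viewing directions below are illustrative assumptions, not the paper's measured fields.

```python
import numpy as np

def flow_field(directions, omega):
    """Optic flow on the unit sphere induced by a pure rotation omega:
    f(d) = -omega x d (translation omitted for brevity)."""
    return -np.cross(omega, directions)

def matched_filter_response(directions, lpds, lms, omega):
    """Sensitivity-weighted sum of local flow projected onto the local
    preferred directions: an idealised tangential-cell response."""
    flow = flow_field(directions, omega)
    return np.sum(lms * np.einsum('ij,ij->i', flow, lpds))

# hypothetical receptive field: 100 random viewing directions
rng = np.random.default_rng(0)
directions = rng.normal(size=(100, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# LPDs of a yaw-tuned cell: tangential to rotation about the z-axis
omega_pref = np.array([0.0, 0.0, 1.0])
lpds = flow_field(directions, omega_pref)
lpds /= np.linalg.norm(lpds, axis=1, keepdims=True) + 1e-12
lms = np.ones(len(directions))  # uniform local motion sensitivities

print(matched_filter_response(directions, lpds, lms, omega_pref))   # large
print(matched_filter_response(directions, lpds, lms, -omega_pref))  # negative
```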


Author(s): Masamune Oguri, Satoshi Miyazaki, Chiaki Hikage, Rachel Mandelbaum, Yousuke Utsumi, ...

Sensors · 2018 · Vol 18 (9) · pp. 2918
Author(s): Junseong Eom, Sangjun Moon

The digital in-line holographic microscope (DIHM) was developed as a 2D imaging technology and has recently been adapted to 3D imaging methods, providing new approaches to obtaining volumetric images with both high resolution and a wide field of view (FOV), overcoming the usual trade-off between the two. However, during the sectioning process of 3D image generation, the out-of-focus image of the object becomes a significant impediment to obtaining clear 3D features in the 2D sectioning plane of a thick biological sample. Based on phase-retrieved high-resolution holographic imaging and a 3D deconvolution technique, we demonstrate that a high-resolution 3D volumetric image can be achieved that significantly reduces wave-front reconstruction and out-of-focus artifacts. The results show a 3D volumetric image that is more finely focused than a conventional 3D stack of 2D reconstructed images for micron-sized polystyrene beads, a whole-blood smear, and a kidney tissue sample. We believe that this technology can be applicable to medical-grade imaging of smeared whole blood or optically cleared tissue samples for mobile pathological microscopy and laser sectioning microscopy.
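
For orientation, here is a minimal sketch of the numerical refocusing step that underlies such sectioning, using the angular spectrum method: a recorded hologram is propagated to a stack of depths and a simple sharpness metric selects the in-focus plane. The wavelength, pixel pitch, and random stand-in "hologram" are placeholders; the authors' full pipeline additionally involves phase retrieval and 3D deconvolution.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z using the angular
    spectrum method (scalar diffraction)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# reconstruct a focal stack from a (hypothetical) hologram
wavelength, dx = 532e-9, 1.1e-6          # illustrative values
hologram = np.random.rand(256, 256)      # stand-in for a recorded hologram
stack = [np.abs(angular_spectrum_propagate(hologram, wavelength, dx, z)) ** 2
         for z in np.linspace(100e-6, 500e-6, 9)]
# a simple sharpness metric (intensity variance) picks the in-focus plane
best = max(range(len(stack)), key=lambda i: stack[i].var())
print("sharpest plane index:", best)
```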


Sensors · 2019 · Vol 19 (19) · pp. 4236
Author(s): Woosik Lee, Hyojoo Cho, Seungho Hyeong, Woojin Chung

Autonomous navigation technology is used in various applications, such as agricultural robots and autonomous vehicles. The key technology for autonomous navigation is ego-motion estimation, which uses various sensors. Wheel encoders and global navigation satellite systems (GNSSs) are widely used in localization for autonomous vehicles, but there are few quantitative strategies for handling the information obtained from these sensors. In many cases, the modeling of uncertainty and sensor fusion depends on the experience of the researchers. In this study, we address the problem of quantitatively modeling the uncertainty of GNSS and wheel encoder data accumulated by vehicles in urban environments, and of utilizing those data in ego-motion estimation. Seven factors determine the magnitude of the uncertainty of a GNSS sensor. Because it is impossible to measure each of these factors, the uncertainty of the GNSS sensor is expressed through three variables, from which the uncertainty is calculated explicitly. Using the proposed method, the uncertainty of the sensor is quantitatively modeled and robust localization is performed in a real environment. The approach is validated through experiments in urban environments.
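
A hedged sketch of the kind of fusion this enables: a GNSS position covariance is built from a handful of quality indicators and combined with a dead-reckoned prediction in a Kalman-style update. The three indicators used here (HDOP, satellite count, SNR) and the scaling rule are illustrative stand-ins, not the three variables or the model from the study.

```python
import numpy as np

def gnss_covariance(hdop, num_sats, snr, base_sigma=2.0):
    """Hypothetical stand-in for a three-variable uncertainty model:
    inflate a base position sigma (metres) by quality indicators."""
    scale = hdop * max(1.0, 10.0 / num_sats) * max(1.0, 40.0 / snr)
    return (base_sigma * scale) ** 2 * np.eye(2)

def fuse(pred, pred_cov, gnss, gnss_cov):
    """Kalman-style fusion of a dead-reckoned position prediction
    (wheel odometry) with a GNSS position fix."""
    K = pred_cov @ np.linalg.inv(pred_cov + gnss_cov)  # Kalman gain
    fused = pred + K @ (gnss - pred)
    fused_cov = (np.eye(2) - K) @ pred_cov
    return fused, fused_cov

pred = np.array([10.0, 5.0]); pred_cov = 1.5 * np.eye(2)
gnss = np.array([11.0, 4.5]); gnss_cov = gnss_covariance(1.2, 7, 35.0)
pos, cov = fuse(pred, pred_cov, gnss, gnss_cov)
print(pos, np.sqrt(np.diag(cov)))  # fused position and its 1-sigma bounds
```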


Robotica · 2018 · Vol 36 (8) · pp. 1225-1243
Author(s): Jose-Pablo Sanchez-Rodriguez, Alejandro Aceves-Lopez

SUMMARY: This paper presents an overview of the most recent vision-based multi-rotor micro unmanned aerial vehicles (MUAVs) intended for autonomous navigation using a stereoscopic camera. Manual drone operation is difficult because pilots need expertise to fly safely: they have a limited field of view, and unfortunate situations, such as loss of line of sight or collision with objects such as wires and branches, can happen. Autonomous navigation is an even greater challenge than remote-controlled flight because drones must make decisions on their own in real time and simultaneously build maps of their surroundings if none is available. Moreover, MUAVs are limited in useful payload capacity and energy consumption, so a drone must be equipped with small, lightweight sensors, yet it requires a sufficiently powerful onboard computer to understand its surroundings and navigate accordingly to achieve its goal safely. A stereoscopic camera is considered a suitable sensor because of its three-dimensional (3D) capabilities: a drone can perform vision-based navigation through object recognition and self-localise inside a map if one is available; otherwise, autonomous navigation becomes a simultaneous localisation and mapping (SLAM) problem.
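
Since the stereoscopic camera's 3D capability rests on triangulation, a small sketch may help: for a rectified stereo pair, depth follows from disparity as Z = fB/d. The focal length and baseline below are assumed values for a MUAV-class rig, not figures from any surveyed platform.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: Z = f * B / d, valid for rectified pairs."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide='ignore'):
        return np.where(disparity_px > 0,
                        focal_px * baseline_m / disparity_px,
                        np.inf)  # zero disparity: point at infinity

# illustrative MUAV-class stereo rig (assumed values)
focal_px, baseline_m = 420.0, 0.10
print(depth_from_disparity([42, 21, 4], focal_px, baseline_m))
# -> [ 1.   2.  10.5] metres: small disparities mean distant obstacles
```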


2021
Author(s): Xuepeng Chen, Weihua Guo, Jiangcheng Feng, Yang Su, Yan Sun, ...

Located at a distance of about 300 pc, Perseus OB2 (Per OB2 for short) is one of the major OB associations in the solar vicinity [Zeeuw99, Belikov2002]. It has blown a supershell with a diameter of about 15 degrees, seen in atomic hydrogen line surveys [Sancisi1974, Heiles1984, Hartmann1997]. It was long considered that stellar feedback from the Per OB2 association had formed a superbubble that swept the surrounding interstellar medium up into the observed supershell [Bally2008]. Here we report the three-dimensional structure of the Per OB2 superbubble, based on wide-field atomic hydrogen and molecular gas (traced by CO) surveys. The measured diameter of the superbubble is roughly 330 pc. Multiple atomic hydrogen shells/loops with expansion velocities of about 10 km/s are revealed within the superbubble, suggesting a complicated evolution history. Furthermore, inspection of the morphology, kinematics, and timescales of the Taurus-Auriga, California, and Perseus molecular clouds shows that this cloud complex is a super molecular cloud loop circling around, and co-expanding with, the Per OB2 superbubble. We conclude that the Taurus-Auriga-California-Perseus loop, the largest star-forming molecular cloud complex in the solar neighborhood, was formed by feedback from the Per OB2 superbubble.
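
As a sanity check on these numbers, a rough kinematic age follows from dividing the superbubble radius by the expansion velocity. This is a back-of-envelope estimate under the crude assumption of constant expansion, not a figure derived in the study.

```python
# Back-of-envelope kinematic age from the numbers quoted above:
# radius ~ 165 pc (half the ~330 pc diameter), expansion ~ 10 km/s.
PC_IN_KM = 3.0857e13       # kilometres per parsec
SECONDS_PER_YEAR = 3.156e7

radius_km = 165 * PC_IN_KM
v_exp_km_s = 10.0          # quoted expansion velocity
age_yr = radius_km / v_exp_km_s / SECONDS_PER_YEAR
print(f"kinematic age ~ {age_yr / 1e6:.0f} Myr")  # -> ~16 Myr
```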


2021
Author(s): Sven Gastauer, Jeffrey S. Ellen, Mark D. Ohman

Zooglider is an autonomous buoyancy-driven ocean glider designed and built by the Instrument Development Group at Scripps. Zooglider includes a low-power camera with a telecentric lens for shadowgraph imaging and two custom active acoustics echosounders (operated at 200/1000 kHz). A passive acoustic hydrophone records vocalizations from marine mammals, fishes, and ambient noise. The imaging system (Zoocam) quantifies zooplankton and 'marine snow' as they flow through a sampling tunnel within a well-defined sampling volume. Other sensors include a pumped Conductivity-Temperature-Depth probe and a Chl-a fluorometer. An acoustic altimeter permits autonomous navigation across regions of abrupt seafloor topography, including submarine canyons and seamounts. Vertical sampling resolution is typically 5 cm, maximum operating depth is ~500 m, and mission duration is up to 50 days. Adaptive sampling is enabled by telemetry of measurements at each surfacing. Our post-deployment processing methodology classifies the optical images using advanced Deep Learning methods that utilize context metadata. Zooglider permits in situ measurements of mesozooplankton and marine snow, and their natural three-dimensional orientation, in relation to other biotic and physical properties of the ocean water column. Zooglider resolves micro-scale patches, which are important for predator-prey interactions and biogeochemical cycling.
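
To illustrate how imaging within a well-defined sampling volume yields quantitative abundances, here is a small sketch converting per-frame object counts into a concentration; the per-frame volume used below is illustrative, not Zoocam's calibrated value.

```python
# Minimal sketch of turning imaging-frame counts into a concentration.
def concentration_per_litre(counts, frames, volume_per_frame_ml):
    """Organisms per litre from object counts over a depth bin."""
    sampled_litres = frames * volume_per_frame_ml / 1000.0
    return counts / sampled_litres

# e.g. 120 copepods detected in 400 frames of 250 mL each (assumed values)
print(concentration_per_litre(120, 400, 250.0))  # -> 1.2 per litre
```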


2020 · Vol 10 (3) · pp. 1140
Author(s): Jorge L. Martínez, Mariano Morán, Jesús Morales, Alfredo Robles, Manuel Sánchez

Autonomous navigation of ground vehicles in natural environments requires continuously identifying traversable terrain. This paper develops traversability classifiers for the three-dimensional (3D) point clouds acquired by the mobile robot Andabata on non-slippery solid ground. To this end, different supervised learning techniques from the Python library Scikit-learn are employed. Training and validation are performed with synthetic 3D laser scans that were labelled point by point automatically with the robotic simulator Gazebo. Good prediction results are obtained for most of the developed classifiers, which have also been tested successfully on real 3D laser scans acquired by Andabata in motion.
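
A minimal Scikit-learn sketch of a point-wise traversability classifier in the spirit of the approach described above; the three geometric features and the synthetic labelling rule below are stand-ins for the real scan features and the Gazebo-generated labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 5000
# hypothetical per-point features: local slope, roughness, height step
X = rng.uniform(0, 1, size=(n, 3))
# synthetic labelling rule standing in for automatic simulator labels
y = ((X[:, 0] < 0.4) & (X[:, 2] < 0.5)).astype(int)  # 1 = traversable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("validation accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```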

