Spatio-temporal voxel layer: A view on robot perception for the dynamic world

2020 ◽  
Vol 17 (2) ◽  
pp. 172988142091053
Author(s):  
Steve Macenski ◽  
David Tsai ◽  
Max Feinberg

The spatio-temporal voxel grid is an actively maintained open-source project providing an improved three-dimensional environmental representation that has been garnering increased adoption in large, dynamic, and complex environments. We provide a voxel grid and the Costmap 2-D layer plug-in, Spatio-Temporal Voxel Layer, powered by a real-time sparse occupancy grid with constant-time access to voxels that does not scale with the environment's size. We replace ray-casting with a new clearing technique, which we dub frustum acceleration, that does not assume a static environment and, in practice, represents moving environments better. Our method operates at nearly 400% less CPU load on average while processing 9 QVGA-resolution depth cameras, as compared to the voxel layer. This technique also supports sensors such as three-dimensional laser scanners, radars, and additional modern sensors that were previously unsupported in the available ROS Navigation framework, which has become a staple in the roboticist's toolbox. These sensors are becoming more widely used in robotics as sensor prices are driven down and mobile compute capabilities improve. The Spatio-Temporal Voxel Layer was developed in the open with community feedback over its development life cycle and continues to have additional features and capabilities added by the community. As of February 2019, the Spatio-Temporal Voxel Layer is being used on over 600 robots worldwide in warehouses, factories, hospitals, hotels, stores, and libraries. The open-source software can be viewed and installed from its GitHub page at https://github.com/SteveMacenski/spatio_temporal_voxel_layer .
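The core data-structure idea above (a sparse occupancy grid with constant-time voxel access plus time-based clearing instead of dense ray-casting) can be sketched as follows. This is a minimal illustration, not the actual Spatio-Temporal Voxel Layer API; the class name, parameters, and decay model are assumptions for the example.

```python
import math
import time

class SparseVoxelGrid:
    """Sketch of a sparse, hash-backed voxel grid. Each occupied voxel
    stores the time it was last observed; voxels older than `decay_s`
    are treated as free, giving temporal clearing without casting rays
    through a dense grid. Memory scales with observed voxels, not with
    the environment's extent."""

    def __init__(self, resolution=0.05, decay_s=10.0):
        self.resolution = resolution   # voxel edge length in meters
        self.decay_s = decay_s         # seconds before a voxel decays to free
        self._voxels = {}              # (i, j, k) -> last-seen timestamp

    def _key(self, x, y, z):
        r = self.resolution
        return (math.floor(x / r), math.floor(y / r), math.floor(z / r))

    def mark(self, x, y, z, stamp=None):
        # O(1) insert/update regardless of environment size
        self._voxels[self._key(x, y, z)] = time.time() if stamp is None else stamp

    def is_occupied(self, x, y, z, now=None):
        # O(1) lookup; a stale or absent voxel reads as free space
        now = time.time() if now is None else now
        stamp = self._voxels.get(self._key(x, y, z))
        return stamp is not None and (now - stamp) <= self.decay_s
```

Because clearing is a property of each voxel's timestamp rather than of rays traced from the sensor, a moving obstacle simply stops being refreshed and fades out of the grid after `decay_s` seconds.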

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3493
Author(s):  
Gahyeon Lim ◽  
Nakju Doh

Remarkable progress in the development of modeling methods for indoor spaces has been made in recent years, with a focus on the reconstruction of complex environments such as multi-room and multi-level buildings. Existing methods represent indoor structure models as a combination of several sub-spaces, constructed by room segmentation or horizontal slicing approaches that divide multi-room or multi-level building environments into several segments. In this study, we propose an automatic reconstruction method for multi-level indoor spaces as unique models, including inter-room and inter-floor connections, from a point cloud and trajectory. We construct structural points from the registered point cloud and extract piece-wise planar segments from the structural points. Then, a three-dimensional space decomposition is conducted, and water-tight meshes are generated with energy minimization using a graph cut algorithm. The data term of the energy function is expressed as a difference in visibility between each decomposed space and the trajectory. The proposed method allows modeling of indoor spaces in complex environments, such as multi-room, room-less, and multi-level buildings. The performance of the proposed approach is evaluated on seven indoor space datasets.
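The energy-minimization step described above can be illustrated with a toy version: each decomposed cell is labeled inside (solid) or outside (free), a data term penalizes labels that contradict per-cell visibility from the trajectory, and a smoothness term penalizes disagreement between adjacent cells. The visibility scores and the exhaustive solver below are illustrative stand-ins, not the paper's exact formulation or its graph-cut solver.

```python
from itertools import product

def labeling_energy(labels, visibility, adjacency, lam=1.0):
    """Energy of an inside(0)/outside(1) labeling of decomposed cells.
    `visibility[i]` is a hypothetical score in [0, 1]: the fraction of
    trajectory rays that saw through cell i. Data term: labeling a
    well-seen cell 'inside' is expensive, and vice versa. Smoothness
    term: each adjacent pair with differing labels costs `lam`."""
    data = sum(visibility[i] if l == 0 else 1.0 - visibility[i]
               for i, l in enumerate(labels))
    smooth = sum(lam for a, b in adjacency if labels[a] != labels[b])
    return data + smooth

def min_energy_labeling(visibility, adjacency, lam=1.0):
    # Exhaustive search stands in for the graph-cut min-cut solver,
    # which finds the same optimum for this class of energies.
    n = len(visibility)
    return min(product((0, 1), repeat=n),
               key=lambda ls: labeling_energy(ls, visibility, adjacency, lam))
```

The surface between cells labeled inside and outside then yields the water-tight mesh.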


2021 ◽  
Vol 13 (3) ◽  
pp. 402
Author(s):  
Pablo Rodríguez-Gonzálvez ◽  
Manuel Rodríguez-Martín

Thermography as a methodology for quantitative data acquisition is not usually addressed in university degree programs. The present manuscript proposes a novel approach for the acquisition of advanced competences in engineering courses associated with the use of thermographic images via free/open-source software solutions. This strategy is built on statistical and three-dimensional visualization techniques applied to thermographic imagery to improve the interpretation and comprehension of the different sources of error affecting the measurements and, thereby, of the conclusions and analyses arising from them. The novelty lies in the detection of non-normalities in thermographic images, which is illustrated in the experimental section. Additionally, a specific workflow for the generation of learning material related to this aim is presented for asynchronous and e-learning programs. These virtual materials can be easily deployed in an institutional learning management system, allowing students to work with the models by means of free/open-source solutions. The present approach thus provides new tools to improve the application of professional techniques and sharpens students' critical sense of how to interpret uncertainties in thermography from a single thermographic image, so that they are better prepared to face future challenges with more critical thinking.
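One simple way to detect the non-normalities mentioned above is to test whether the distribution of pixel temperatures departs from a Gaussian, e.g. via sample skewness and excess kurtosis. The thresholds below are illustrative assumptions, not values from the paper.

```python
import statistics

def skewness_kurtosis(values):
    """Sample skewness and excess kurtosis of pixel temperatures.
    For a normal (Gaussian) distribution both are near zero; large
    values flag asymmetry or heavy/light tails in the thermogram."""
    n = len(values)
    mean = statistics.fmean(values)
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    m4 = sum((v - mean) ** 4 for v in values) / n
    skew = m3 / m2 ** 1.5
    ex_kurt = m4 / m2 ** 2 - 3.0
    return skew, ex_kurt

def looks_non_normal(values, skew_tol=0.5, kurt_tol=1.0):
    # Illustrative decision rule: flag the image region if either
    # moment strays beyond the (assumed) tolerance.
    skew, ex_kurt = skewness_kurtosis(values)
    return abs(skew) > skew_tol or abs(ex_kurt) > kurt_tol
```

In practice a reflection or hot spot in a thermographic image produces exactly this kind of skewed, heavy-tailed temperature histogram.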


Author(s):  
Cengiz Yeker ◽  
Ibrahim Zeid

Abstract A fully automatic three-dimensional mesh generation method is developed by modifying the well-known ray casting technique. The method is capable of meshing objects modeled using the CSG representation scheme. The input to the method consists of solid geometry information, and mesh attributes such as element size. The method starts by casting rays in 3D space to classify the empty and full parts of the solid. This information is then used to create a cell structure that closely models the solid object. The next step is to further process the cell structure to make it more succinct, so that the cells close to the boundary of the solid object can model the topology with enough fidelity. Moreover, neighborhood relations between cells in the structure are developed and implemented. These relations help produce better conforming meshes. Each cell in the cell structure is identified with respect to a set of pre-defined types of cells. After the identification process, a normalization process is developed and applied to the cell structure in order to ensure that the finite elements generated from each cell conform to each other and to other elements produced from neighboring cells. The last step is to mesh each cell in the structure with valid finite elements.
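The first stage described above, casting rays through the CSG solid to classify cells as empty or full, can be sketched as follows. The example solid (a sphere with a cubic hole) and the sampling scheme are illustrative assumptions standing in for a full CSG tree and the paper's modified ray-casting.

```python
def inside_solid(p):
    """Hypothetical CSG solid: the difference of a unit sphere and a
    cube, standing in for an arbitrary CSG expression tree."""
    x, y, z = p
    in_sphere = x * x + y * y + z * z <= 1.0
    in_cube = max(abs(x), abs(y), abs(z)) <= 0.4
    return in_sphere and not in_cube

def classify_cells(n, inside=inside_solid):
    """Cast axis-aligned rays through an n*n grid of (x, y) positions
    over [-1, 1]^2 and sample the solid along each +z ray, classifying
    each cell of an n^3 grid as full (True) or empty (False) at its
    centre. This empty/full map seeds the cell structure that is later
    refined near the boundary and normalized into finite elements."""
    h = 2.0 / n
    cells = {}
    for i in range(n):
        for j in range(n):
            x = -1.0 + (i + 0.5) * h
            y = -1.0 + (j + 0.5) * h
            for k in range(n):  # samples along the ray in z
                z = -1.0 + (k + 0.5) * h
                cells[(i, j, k)] = inside((x, y, z))
    return cells
```

A real implementation would intersect each ray with the CSG tree analytically rather than point-sample it, but the resulting empty/full cell classification is the same.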


Author(s):  
Kathryne M Allen ◽  
Angeles Salles ◽  
Sanwook Park ◽  
Mounya Elhilali ◽  
Cynthia F. Moss

The discrimination of complex sounds is a fundamental function of the auditory system. This operation must be robust in the presence of noise and acoustic clutter. Echolocating bats are auditory specialists that discriminate sonar objects in acoustically complex environments. Bats produce brief signals, interrupted by periods of silence, rendering echo snapshots of sonar objects. Sonar object discrimination requires that bats process spatially and temporally overlapping echoes to make split-second decisions. The mechanisms that enable this discrimination are not well understood, particularly in complex environments. We explored the neural underpinnings of sonar object discrimination in the presence of acoustic scattering caused by physical clutter. We performed electrophysiological recordings in the inferior colliculus (IC) of awake big brown bats in response to broadcasts of pre-recorded echoes from physical objects. We acquired single-unit responses to echoes and discovered a sub-population of IC neurons that encode acoustic features that can be used to discriminate between sonar objects. We further investigated the effects of environmental clutter on this population's encoding of acoustic features. We discovered that the effect of background clutter on sonar object discrimination is highly variable and depends on object properties and target-clutter spatio-temporal separation. In many conditions, clutter impaired discrimination of sonar objects. However, in some instances, clutter enhanced acoustic features of echo returns, enabling higher levels of discrimination. This finding suggests that environmental clutter may augment acoustic cues used for sonar target discrimination and provides further evidence, in a growing body of literature, that noise is not universally detrimental to sensory encoding.


2016 ◽  
Vol 9 (11) ◽  
pp. 4071-4085 ◽  
Author(s):  
Esteban Acevedo-Trejos ◽  
Gunnar Brandt ◽  
S. Lan Smith ◽  
Agostino Merico

Abstract. Biodiversity is one of the key mechanisms that facilitate the adaptive response of planktonic communities to a fluctuating environment. How to allow for such a flexible response in marine ecosystem models is, however, not entirely clear. One particular way is to resolve the natural complexity of phytoplankton communities by explicitly incorporating a large number of species or plankton functional types. Alternatively, models of aggregate community properties focus on macroecological quantities such as total biomass, mean trait, and trait variance (or functional trait diversity), thus reducing the observed natural complexity to a few mathematical expressions. We developed the PhytoSFDM modelling tool, which can resolve species discretely and can capture aggregate community properties. The tool also provides a set of methods for treating diversity under realistic oceanographic settings. This model is coded in Python and is distributed as open-source software. PhytoSFDM is implemented in a zero-dimensional physical scheme and can be applied to any location of the global ocean. We show that aggregate community models reduce computational complexity while preserving relevant macroecological features of phytoplankton communities. Compared to species-explicit models, aggregate models are more manageable in terms of the number of equations and have faster computational times. Further developments of this tool should address the caveats associated with the assumptions of aggregate community models and implementations in spatially resolved physical settings (one-dimensional and three-dimensional). With PhytoSFDM we embrace the idea of promoting open-source software and encourage scientists to build on this modelling tool to further improve our understanding of the role that biodiversity plays in shaping marine ecosystems.
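The aggregate-community idea above, tracking only total biomass, mean trait, and trait variance instead of many explicit species, can be sketched with a generic moment-closure time step. These equations are a common trait-diffusion closure used for illustration, not PhytoSFDM's exact model.

```python
def step_aggregate(biomass, mean_trait, variance, growth, dgrowth, d2growth, dt):
    """One Euler step of a moment-based aggregate community model:
    - biomass grows at the community-mean rate, corrected for the
      curvature of the growth function across the trait distribution;
    - the mean trait climbs the local fitness gradient at a speed set
      by the trait variance (more diversity -> faster adaptation);
    - the variance responds to the curvature of the growth function
      (stabilizing selection near a fitness peak erodes diversity)."""
    r = growth(mean_trait)
    b_new = biomass + dt * (r + 0.5 * variance * d2growth(mean_trait)) * biomass
    m_new = mean_trait + dt * variance * dgrowth(mean_trait)
    v_new = variance + dt * variance ** 2 * d2growth(mean_trait)
    return b_new, m_new, v_new
```

Three ordinary differential equations replace one equation per species, which is the source of the computational savings the abstract reports.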


Inventions ◽  
2018 ◽  
Vol 3 (4) ◽  
pp. 78 ◽  
Author(s):  
Aubrey Woern ◽  
Joshua Pearce

Although distributed additive manufacturing can provide high returns on investment, the current markup on commercial filament over base polymers limits deployment. These cost barriers can be surmounted by eliminating the entire process of fusing filament, by three-dimensional (3-D) printing products directly from polymer granules. Fused granular fabrication (FGF) (or fused particle fabrication (FPF)) is being held back in part by the limited availability of low-cost pelletizers and choppers. An open-source 3-D printable invention disclosed here allows for precisely controlled pelletizing of both single thermopolymers and composites for 3-D printing. The system is designed, built, and tested for its ability to provide high-tolerance thermopolymer pellets in a range of sizes capable of being used in an FGF printer. In addition, the chopping pelletizer is tested for its ability to chop multi-materials simultaneously for color mixing and composite fabrication, as well as for precise fractional measuring back to filament. The US$185 open-source 3-D printable pelletizer chopper system was successfully fabricated and has a 0.5 kg/h throughput with one motor and a 1.0 kg/h throughput with two motors, using only 0.24 kWh/kg during the chopping process. Pellets were successfully printed directly via FGF, as well as indirectly after being converted into high-tolerance filament in a recyclebot.


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Lauren Hazlett ◽  
Alexander K. Landauer ◽  
Mohak Patel ◽  
Hadley A. Witt ◽  
Jin Yang ◽  
...  

Abstract We introduce a novel method to compute three-dimensional (3D) displacements and both in-plane and out-of-plane tractions on nominally planar transparent materials using standard epifluorescence microscopy. Despite the importance of out-of-plane components to fully understanding cell behavior, epifluorescence images are generally not used for 3D traction force microscopy (TFM) experiments due to limitations in spatial resolution and in measuring out-of-plane motion. To extend an epifluorescence-based technique to 3D, we employ a topology-based single-particle tracking algorithm to reconstruct high spatial-frequency 3D motion fields from densely seeded single-particle layer images. Using an open-source finite element (FE) based solver, we then compute the full-field 3D stress and strain and the surface traction fields. We demonstrate this technique by measuring tractions generated by both single human neutrophils and multicellular monolayers of Madin–Darby canine kidney cells, highlighting its acuity in reconstructing both individual and collective cellular tractions. In summary, this represents a new, easily accessible method for calculating fully three-dimensional displacements and 3D surface tractions at high spatial frequency from epifluorescence images. We release and support the complete technique as a free and open-source code package.
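The first stage of such a pipeline, recovering per-bead 3D displacement vectors by matching particles between a reference and a deformed image, can be sketched with a mutual-nearest-neighbour matcher. This is a toy stand-in for the topology-based tracker the abstract describes; the function names and the matching rule are assumptions for illustration.

```python
def match_particles(ref, cur, max_disp=1.0):
    """Match each reference bead position in `ref` to its mutually
    nearest neighbour in the deformed frame `cur` (within `max_disp`),
    returning {ref_index: (dx, dy, dz)} displacement vectors.
    The mutual-nearest constraint rejects ambiguous matches; the real
    topology-based tracker additionally uses each bead's neighbourhood
    geometry to disambiguate dense seedings."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    nearest = {i: min(range(len(cur)), key=lambda j: d2(ref[i], cur[j]))
               for i in range(len(ref))}
    back = {j: min(range(len(ref)), key=lambda i: d2(ref[i], cur[j]))
            for j in range(len(cur))}
    matches = {}
    for i, j in nearest.items():
        if back[j] == i and d2(ref[i], cur[j]) <= max_disp ** 2:
            matches[i] = tuple(c - r for r, c in zip(ref[i], cur[j]))
    return matches
```

The resulting displacement field is then fed to the FE solver, which inverts it into stresses and surface tractions.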

