object localization
Recently Published Documents


TOTAL DOCUMENTS

733
(FIVE YEARS 219)

H-INDEX

41
(FIVE YEARS 5)

2022 ◽  
pp. 108523
Author(s):  
Zhengquan Piao ◽  
Junbo Wang ◽  
Linbo Tanga ◽  
Baojun Zhao ◽  
Wenzheng Wang

2021 ◽  
Author(s):  
Jing Zou ◽  
Simon Trinh ◽  
Andrew Erskine ◽  
Miao Jing ◽  
Jennifer Yao ◽  
...  

Numerous cognitive functions including attention, learning, and plasticity are influenced by the dynamic patterns of acetylcholine release across the brain. How acetylcholine mediates these functions in cortex remains unclear, as the spatiotemporal relationship between cortical acetylcholine and behavioral events has not been precisely measured across task learning. To dissect this relationship, we quantified motor behavior and sub-second acetylcholine dynamics in primary somatosensory cortex during acquisition and performance of a tactile-guided object localization task. We found that acetylcholine dynamics were spatially homogeneous and directly attributable to whisker motion and licking, rather than to sensory cues or reward delivery. As task performance improved across training, acetylcholine release to the first lick in a trial became dramatically and specifically potentiated, paralleling the emergence of a choice-signalling basis for this motor action. These results show that acetylcholine dynamics in sensory cortex are driven by directed motor actions to gather information and act upon it.


2021 ◽  
Vol 33 (6) ◽  
pp. 1326-1337
Author(s):  
Alfin Junaedy ◽  
◽  
Hiroyuki Masuta ◽  
Kei Sawai ◽  
Tatsuo Motoyoshi ◽  
...  

In this study, teleoperation control of a mobile robot with 2D SLAM and object localization over LPWAN is proposed. Mobile robots are gaining popularity owing to their flexibility and robustness across a variety of terrains. In search and rescue activities, mobile robots can carry out missions that assist and preserve human life; however, teleoperation control is a challenging problem in this setting. Robust wireless communication not only allows the operator to stay away from the dangerous area, but also increases the mobility of the mobile robot itself. Most teleoperated mobile robots use Wi-Fi, which offers high bandwidth but short communication range. LoRa, an LPWAN technology, offers much longer range at a low communication bandwidth. Combining the two therefore compensates for each technology's weaknesses, and a two-LoRa configuration further enhances the teleoperation capability. All information from the mobile robot can be sent to the PC controller quickly enough for real-time SLAM. The mobile robot is also capable of real-time object detection, object localization, and image transmission. Another problem with LoRa communication is timeouts; we apply timeout-recovery algorithms to handle this issue, resulting in more stable data transfer. All results have been confirmed in real-time trials, and the proposed method approaches Wi-Fi performance with a low waiting time (delay).
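The timeout-recovery idea described in the abstract can be sketched as a retry loop around an acknowledged send: when no ACK arrives within the timeout, the frame is resent until a retry budget is exhausted. This is a minimal illustrative sketch, not the paper's implementation; the class names and the `FlakyTransport` test double are assumptions.

```python
class TimeoutRecoveryLink:
    """Illustrative sketch of timeout recovery over a lossy link (e.g.
    LoRa): resend a frame until an acknowledgement arrives or the retry
    budget is exhausted. Not the paper's actual algorithm."""

    def __init__(self, transport, timeout_s=0.5, max_retries=3):
        self.transport = transport      # object with send(frame, timeout_s) -> ack or None
        self.timeout_s = timeout_s
        self.max_retries = max_retries

    def send_reliable(self, frame):
        for attempt in range(self.max_retries + 1):
            ack = self.transport.send(frame, self.timeout_s)
            if ack is not None:
                return attempt          # number of timeouts before success
        raise TimeoutError("link lost: no ACK after retries")


class FlakyTransport:
    """Test double: simulates a link that drops the first `drops` sends."""
    def __init__(self, drops):
        self.drops = drops

    def send(self, frame, timeout_s):
        if self.drops > 0:
            self.drops -= 1
            return None                 # simulated timeout (no ACK)
        return b"ACK"


link = TimeoutRecoveryLink(FlakyTransport(drops=2))
print(link.send_reliable(b"odometry,scan"))  # 2 (recovered after two timeouts)
```

In a real deployment the retry budget and timeout would be tuned to LoRa's duty-cycle limits, and the controller would fall back to the last known-good state while the link recovers.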


2021 ◽  
Vol 25 (4) ◽  
pp. 57-66
Author(s):  
Grzegorz Bieszczad ◽  
Tomasz Sosnowski ◽  
Krzysztof Sawicki ◽  
Sławomir Gogler ◽  
Andrzej Ligienza ◽  
...  

This paper presents a concept and implementation of an infrared imaging sensor network for object localization and tracking. The sensor network uses multiple low-resolution (80 × 80 pixels) microbolometric thermal cameras to detect, track, and locate an object within the area of observation. The network combines information acquired simultaneously from multiple sensors to detect the object and extract additional information about its location. Because the thermal-imaging systems respond to an object's natural infrared radiation, the system is resistant to external illumination and environmental conditions. At the same time, the use of infrared sensors requires specially designed image processing techniques appropriate for this kind of sensor. The paper describes the image processing techniques, the object localization method, accuracy measurements, a comparison with other known solutions, and final conclusions.
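Combining simultaneous observations from multiple fixed cameras to locate an object can be illustrated with classic bearing-only triangulation: each sensor contributes a ray toward the detected object, and the rays are intersected. This is a generic 2D sketch under an idealized noise-free assumption, not the paper's localization method; the function name and geometry are illustrative.

```python
import math

def triangulate(p1, b1, p2, b2):
    """Intersect two 2D bearing rays from sensors at p1 and p2 with
    world-frame bearings b1, b2 (radians). Returns the (x, y) estimate."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # Solve p1 + t*d1 = p2 + s*d2 for t via the 2x2 cross-product form.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel: no unique intersection")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two cameras at (0, 0) and (4, 0) both see a warm target at (2, 2).
est = triangulate((0, 0), math.atan2(2, 2), (4, 0), math.atan2(2, -2))
print([round(c, 6) for c in est])  # [2.0, 2.0]
```

With more than two cameras or noisy bearings, a least-squares intersection over all rays would replace the exact two-ray solve.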


2021 ◽  
Vol 40 (12-14) ◽  
pp. 1510-1546
Author(s):  
Antoni Rosinol ◽  
Andrew Violette ◽  
Marcus Abate ◽  
Nathan Hughes ◽  
Yun Chang ◽  
...  

Humans are able to form a complex mental model of the environment they move in. This mental model captures geometric and semantic aspects of the scene, describes the environment at multiple levels of abstraction (e.g., objects, rooms, buildings), and includes static and dynamic entities and their relations (e.g., a person is in a room at a given time). In contrast, current robots’ internal representations still provide a partial and fragmented understanding of the environment, either in the form of a sparse or dense set of geometric primitives (e.g., points, lines, planes, and voxels), or as a collection of objects. This article attempts to reduce the gap between robot and human perception by introducing a novel representation, a 3D dynamic scene graph (DSG), that seamlessly captures metric and semantic aspects of a dynamic environment. A DSG is a layered graph where nodes represent spatial concepts at different levels of abstraction, and edges represent spatiotemporal relations among nodes. Our second contribution is Kimera, the first fully automatic method to build a DSG from visual–inertial data. Kimera includes accurate algorithms for visual–inertial simultaneous localization and mapping (SLAM), metric–semantic 3D reconstruction, object localization, human pose and shape estimation, and scene parsing. Our third contribution is a comprehensive evaluation of Kimera in real-life datasets and photo-realistic simulations, including a newly released dataset, uHumans2, which simulates a collection of crowded indoor and outdoor scenes. Our evaluation shows that Kimera achieves competitive performance in visual–inertial SLAM, estimates an accurate 3D metric–semantic mesh model in real-time, and builds a DSG of a complex indoor environment with tens of objects and humans in minutes. Our final contribution is to showcase how to use a DSG for real-time hierarchical semantic path-planning. The core modules in Kimera have been released open source.
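The layered-graph structure of a DSG (nodes as spatial concepts at different abstraction levels, edges as relations that may cross layers) can be sketched as a small data structure. This is a minimal illustrative sketch of the representation only, not the Kimera implementation; the class, layer names, and relation labels are assumptions.

```python
from collections import defaultdict

class DynamicSceneGraph:
    """Minimal sketch of a layered scene graph: nodes live in named
    abstraction layers, and labeled edges may cross layers (e.g. an
    object or an agent is "in" a room). Illustrative only."""

    def __init__(self, layers):
        self.layers = {name: {} for name in layers}   # layer -> {node_id: attrs}
        self.edges = defaultdict(list)                # node_id -> [(relation, node_id)]

    def add_node(self, layer, node_id, **attrs):
        self.layers[layer][node_id] = attrs

    def add_edge(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def related(self, node_id, relation):
        """All nodes reachable from node_id via edges with this label."""
        return [dst for rel, dst in self.edges[node_id] if rel == relation]


dsg = DynamicSceneGraph(["building", "room", "object", "agent"])
dsg.add_node("room", "kitchen")
dsg.add_node("object", "mug", cls="cup")
dsg.add_node("agent", "person_1", t=12.4)   # dynamic entity with a timestamp
dsg.add_edge("mug", "in", "kitchen")
dsg.add_edge("person_1", "in", "kitchen")
print(dsg.related("person_1", "in"))  # ['kitchen']
```

Hierarchical queries (e.g., for semantic path-planning) would then walk these cross-layer edges from buildings down to rooms and objects.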


Author(s):  
Taekyeong Jeong ◽  
Janggon Yoo ◽  
Daegyoum Kim

Inspired by the lateral line systems of various aquatic organisms that are capable of hydrodynamic imaging using ambient flow information, this study develops a deep learning-based object localization model that can detect the location of objects using flow information measured from a moving sensor array. In numerical simulations with the assumption of a potential flow, a two-dimensional hydrofoil navigates around four stationary cylinders in a uniform flow and obtains two types of sensory data during a simulation, namely flow velocity and pressure, from an array of sensors located on the surface of the hydrofoil. Several neural network models are constructed using the flow velocity and pressure data, and these are used to detect the positions of the hydrofoil and surrounding objects. The model based on a long short-term memory network, which is capable of learning order dependence in sequence prediction problems, outperforms the other models. The number of sensors is then optimized using feature selection techniques. This sensor optimization leads to a new object localization model that achieves impressive accuracy in predicting the locations of the hydrofoil and objects with only 40% of the sensors used in the original model.
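The sequence-prediction setup behind the LSTM model can be sketched by showing how a sensor time series is windowed into the (samples, window, sensors) tensor such a network consumes. The helper name, window length, and sensor count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def make_sequences(readings, window):
    """Slice a (T, n_sensors) time series of surface flow readings
    (velocity/pressure) into overlapping (window, n_sensors) sequences,
    i.e. the input shape an LSTM regressor would consume. Illustrative
    preprocessing only, not the paper's pipeline."""
    T = readings.shape[0]
    return np.stack([readings[t:t + window] for t in range(T - window + 1)])

# 10 time steps from 8 hypothetical surface sensors, windowed in fives.
readings = np.random.default_rng(0).normal(size=(10, 8))
seqs = make_sequences(readings, window=5)
print(seqs.shape)  # (6, 5, 8)
```

Each sequence would be paired with the hydrofoil/object positions at its final time step as the regression target, and feature selection then amounts to dropping sensor columns before windowing.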

