2020 ◽  
Vol 12 ◽  
pp. 175682932092452
Author(s):  
Liang Lu ◽  
Alexander Yunda ◽  
Adrian Carrio ◽  
Pascual Campoy

This paper presents a novel collision-free navigation system for unmanned aerial vehicles (UAVs), based on point clouds, that outperforms baseline methods and enables high-speed flight in cluttered environments such as forests or indoor industrial plants. The algorithm takes point cloud data from physical sensors (e.g., lidar, depth camera) and converts it into an occupancy map using Voxblox, which a rapidly-exploring random tree (RRT) then uses to generate a finite set of candidate paths. A modified Covariant Hamiltonian Optimization for Motion Planning (CHOMP) objective function is used to select and update the best candidate. Finally, the best candidate trajectory is generated and sent to a Model Predictive Control (MPC) controller. The proposed navigation strategy is evaluated in four different simulation environments; the results show that the proposed method achieves a higher success rate and a shorter goal-reaching distance than the baseline method.
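A minimal sketch of the candidate-selection step is shown below, assuming a CHOMP-style cost built from an obstacle term (queried from a Euclidean signed distance field such as one produced by Voxblox) plus a smoothness term; the paper's modified objective, and its Voxblox/RRT interfaces, are not reproduced here.

```python
# Illustrative CHOMP-style candidate scoring (assumption: not the paper's exact objective).
import numpy as np

def smoothness_cost(traj):
    # Sum of squared finite-difference velocities along the trajectory.
    vel = np.diff(traj, axis=0)
    return 0.5 * np.sum(vel ** 2)

def obstacle_cost(traj, esdf, eps=0.5):
    # esdf: callable returning the signed distance from a point to the nearest
    # obstacle (e.g. looked up in a Voxblox ESDF). eps is the safety margin.
    cost = 0.0
    for p in traj:
        d = esdf(p)
        if d < 0.0:
            cost += -d + 0.5 * eps          # inside an obstacle
        elif d < eps:
            cost += 0.5 * (d - eps) ** 2 / eps  # within the margin
    return cost

def select_best_candidate(candidates, esdf, w_obs=10.0, w_smooth=1.0):
    # candidates: list of (N, 3) waypoint arrays produced by the RRT front end.
    costs = [w_obs * obstacle_cost(c, esdf) + w_smooth * smoothness_cost(c)
             for c in candidates]
    return candidates[int(np.argmin(costs))]
```

The selected trajectory would then be handed to the MPC controller for tracking; the weights and margin above are placeholder values.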


2021 ◽  
Vol 11 (8) ◽  
pp. 3439
Author(s):  
Debashis Das Chakladar ◽  
Pradeep Kumar ◽  
Shubham Mandal ◽  
Partha Pratim Roy ◽  
Masakazu Iwamura ◽  
...  

Sign language is a visual language used for communication by hearing-impaired people with the help of hand and finger movements. Indian Sign Language (ISL) is a well-developed and standard way of communication for hearing-impaired people living in India. However, people who use spoken language often face difficulty when communicating with a hearing-impaired person due to a lack of sign language knowledge. In this study, we have developed a 3D avatar-based sign language learning system that converts input speech/text into the corresponding ISL sign movements. The system consists of three modules. First, the input speech is converted into an English sentence. Then, that English sentence is converted into the corresponding ISL sentence using Natural Language Processing (NLP) techniques. Finally, the motion of the 3D avatar is defined based on the ISL sentence. The translation module achieves a Sign Error Rate (SER) of 10.50.
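The abstract does not spell out how SER is computed; a common convention, sketched below as an assumption, is to treat it like Word Error Rate over sign glosses, i.e. the edit distance between the predicted and reference gloss sequences divided by the reference length.

```python
# Hedged sketch: Sign Error Rate computed like Word Error Rate over sign glosses.
# The paper's exact SER definition may differ.
def sign_error_rate(reference, hypothesis):
    # reference, hypothesis: lists of sign glosses, e.g. ["YOU", "NAME", "WHAT"]
    n, m = len(reference), len(hypothesis)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[n][m] / max(n, 1)

# Example: one substituted sign out of three reference signs -> 33.3
print(sign_error_rate(["YOU", "NAME", "WHAT"], ["YOU", "AGE", "WHAT"]))
```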


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2144
Author(s):  
Stefan Reitmann ◽  
Lorenzo Neumann ◽  
Bernhard Jung

Common Machine-Learning (ML) approaches for scene classification require a large amount of training data. However, for the classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., influence of rain, dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
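To illustrate the general idea of ray-cast depth sensing inside Blender, the sketch below labels each hit point with the name of the object it struck. This is not the BLAINDER API; it is a minimal example using Blender's own Python API, assuming Blender >= 2.91 (where Scene.ray_cast takes a dependency graph), and it omits BLAINDER's sensor presets, weather models, and export formats.

```python
# Illustrative only: a minimal ray-cast "LiDAR" in Blender's Python API (bpy).
import math
import bpy
from mathutils import Vector

def simple_lidar_scan(origin, n_rays=360, max_range=100.0):
    scene = bpy.context.scene
    depsgraph = bpy.context.evaluated_depsgraph_get()
    labeled_points = []
    for k in range(n_rays):
        # One horizontal sweep of rays around the sensor origin.
        angle = 2.0 * math.pi * k / n_rays
        direction = Vector((math.cos(angle), math.sin(angle), 0.0))
        hit, location, normal, face_idx, obj, matrix = scene.ray_cast(
            depsgraph, Vector(origin), direction, distance=max_range)
        if hit:
            # Semantic label taken from the hit object's name; BLAINDER attaches
            # richer per-object annotations and noise/weather effects.
            labeled_points.append((tuple(location), obj.name))
    return labeled_points

points = simple_lidar_scan((0.0, 0.0, 1.0))
print(f"{len(points)} labeled returns")
```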


2015 ◽  
Vol 75 (22) ◽  
pp. 14991-15015 ◽  
Author(s):  
Giulio Marin ◽  
Fabio Dominio ◽  
Pietro Zanuttigh

2018 ◽  
Vol 75 (5) ◽  
pp. 797-812 ◽  
Author(s):  
Beau Doherty ◽  
Samuel D.N. Johnson ◽  
Sean P. Cox

Bottom longline hook and trap fishing gear can potentially damage sensitive benthic areas (SBAs) in the ocean; however, the large-scale risks to these habitats are poorly understood because of the difficulties in mapping SBAs and in measuring the bottom-contact area of longline gear. In this paper, we describe a collaborative academic–industry–government approach to obtaining direct presence–absence data for SBAs and to measuring gear interactions with seafloor habitats via a novel deepwater trap camera and motion-sensing systems on commercial longline traps for sablefish (Anoplopoma fimbria) within SGaan Kinghlas – Bowie Seamount Marine Protected Area. We obtained direct presence–absence observations of cold-water corals (Alcyonacea, Antipatharia, Pennatulacea, Stylasteridae) and sponges (Hexactinellida, Demospongiae) at 92 locations over three commercial fishing trips. Video, accelerometer, and depth sensor data were used to estimate a mean bottom footprint of 53 m2 for a standard sablefish trap, which translates to 3200 m2 (95% CI = 2400–3900 m2) for a 60-trap commercial sablefish longline set. Our successful collaboration demonstrates how research partnerships with commercial fisheries have potential for massive improvements in the quantity and quality of data needed for conducting SBA risk assessments over large spatial and temporal scales.
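A quick back-of-the-envelope check, sketched below, shows how the per-trap estimate scales to the reported set-level footprint; this is not the paper's statistical model, and the 95% confidence interval quoted above comes from the authors' sensor-based analysis rather than this simple multiplication.

```python
# Scaling the mean per-trap bottom footprint to a 60-trap longline set.
mean_trap_footprint_m2 = 53.0   # estimated mean bottom contact per sablefish trap
traps_per_set = 60

set_footprint_m2 = mean_trap_footprint_m2 * traps_per_set
print(f"Estimated set footprint: {set_footprint_m2:.0f} m^2")
# -> 3180 m^2, consistent with the reported ~3200 m^2 (95% CI 2400-3900 m^2).
```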

