Imaging laser radar simulation software and experiments for enhanced and synthetic vision system design

1997 ◽  
Author(s):  
Arno von der Fecht ◽  
Hendrik Rothe

2004 ◽  
Author(s):  
Michael D. Byrne ◽  
Alex Kirlik ◽  
Michael D. Fleetwood ◽  
David G. Huss ◽  
Alex Kosorukoff ◽  
...  

2021 ◽  
Author(s):  
Yiyu Chen ◽  
Abhinav Pandey ◽  
Zhiwei Deng ◽  
Anthony Nguyen ◽  
Ruiqi Wang ◽  
...  

Abstract The global COVID-19 pandemic has made disinfection a daily routine to ensure the safety of public and private spaces. However, existing disinfection procedures are time-consuming and require intensive human labor to apply chemical disinfectant to contaminated surfaces. This paper presents a robotic disinfection system that increases the automation of the disinfection task, assisting humans in performing routine disinfection safely and efficiently. The system is built around LASER-D, a semi-autonomous quadruped robot for disinfection in cluttered environments. The robot carries a spray-based disinfection unit, mounted on its back and controlled by the onboard computer, and leverages its body motion to direct the spray without an extra stabilization mechanism. The control architecture is based on force control, enabling the robot to traverse rough terrain and to flexibly control its body motion during standing and walking for the disinfection task. A vision system improves localization and maintains the desired distance to the disinfection surface. The system also incorporates image processing to evaluate disinfected regions with high accuracy; this feedback is used to adjust the disinfection plan and guarantee that all assigned areas are disinfected properly. Highly integrated simulation software supports the design, simulation, and evaluation of disinfection plans. With these capabilities, the robot successfully carried out effective disinfection experiments while safely traversing cluttered environments, climbing stairs and slopes, and navigating slippery surfaces.
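The image-based coverage feedback described above can be illustrated with a minimal sketch: compare before/after images of a surface, mark pixels that changed as disinfected, and return the grid cells that still need another pass. The function names, the intensity-difference threshold, and the grid partition are all assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def coverage_map(before, after, threshold=0.15):
    """Mark pixels whose intensity changed beyond `threshold` as disinfected.
    (Illustrative criterion; the real system may use a different cue.)"""
    diff = np.abs(after.astype(float) - before.astype(float))
    return diff > threshold

def remaining_cells(mask, grid=(4, 4), min_coverage=0.9):
    """Split the surface into grid cells; return cells still below min_coverage,
    i.e. the regions the disinfection plan should revisit."""
    h, w = mask.shape
    ch, cw = h // grid[0], w // grid[1]
    todo = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = mask[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            if cell.mean() < min_coverage:
                todo.append((i, j))
    return todo
```

A replanner would then append the returned cells back onto the robot's spray queue until the list comes back empty.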


Author(s):  
Miguel Lozano ◽  
Rafael Lucia ◽  
Fernando Barber ◽  
Fran Grimaldo ◽  
Antonio Lucas ◽  
...  

2008 ◽  
Vol 17 (5) ◽  
pp. 338-345
Author(s):  
Su-Woo Park ◽  
Yoon-Su Kim ◽  
Sang-Ok Lee ◽  
Byung-Hun Lim ◽  
Tae-Gyun Kim ◽  
...  

Author(s):  
Andrea Menegolo ◽  
Roberto Bussola ◽  
Diego Tosi

This study addresses the on-line motion planning of an innovative SCARA-like robot with unlimited joint rotations. The application field is the robotic interception of moving objects randomly distributed on a conveyor and detected by a vision system. A motion planning algorithm was developed to achieve satisfactory cycle time and energy consumption. The algorithm evaluates the inertial actions arising in the robot structure during pick-and-place motions and aims to keep the rotational velocity of the first joint constant throughout the motion, grasping, and discarding phases. Since the algorithm must run in real time and the number of reachable pieces can be large, particular care was taken to reduce the computational burden. Following an analytical study of the kinematic constraints and the definition of criteria for choosing which piece to grasp, dedicated simulation software was developed. The software allows the effects of all the main parameters on system behavior to be controlled and evaluated, and enables a comparison of cycle time and energy consumption between the proposed algorithm and a standard point-to-point motion strategy.
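The core geometric sub-problem above — when does a base joint sweeping at constant angular velocity line up with a piece drifting along the belt — can be sketched as a simple time scan that detects the zero crossing of the pointing error. This is a toy planar model under assumed names and parameters, not the paper's algorithm, which also accounts for inertial actions and piece-selection criteria.

```python
import math

def wrap(a):
    """Wrap an angle into (-pi, pi]."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def intercept_time(theta0, omega, piece_xy, belt_v, t_max=5.0, dt=1e-3):
    """Earliest time at which the first joint, starting at angle theta0 and
    sweeping at constant omega, points at a piece moving along +x at belt_v.
    Brute-force scan with zero-crossing detection (illustrative only)."""
    x0, y0 = piece_xy
    prev = None
    t = 0.0
    while t <= t_max:
        x = x0 + belt_v * t
        err = wrap(theta0 + omega * t - math.atan2(y0, x))
        # A sign change in the wrapped error (without a wrap jump) is a crossing.
        if prev is not None and prev * err <= 0 and abs(err - prev) < math.pi:
            return t
        prev = err
        t += dt
    return None  # piece not reachable within t_max at this joint speed
```

In a real planner this scan would be replaced by an analytic or Newton-based solve, since the run-time budget per piece is tight when many pieces are on the belt.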


2002 ◽  
Author(s):  
Norah K. Link ◽  
Ronald V. Kruk ◽  
David McKay ◽  
Sion A. Jennings ◽  
Greg Craig

1999 ◽  
Author(s):  
Andrew K. Barrows ◽  
Keith W. Alter ◽  
Chad W. Jennings ◽  
J. D. Powell

2011 ◽  
Author(s):  
Hiroka Tsuda ◽  
Kohei Funabiki ◽  
Tomoko Iijima ◽  
Kazuho Tawada ◽  
Takashi Yoshida

Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3802 ◽  
Author(s):  
Ahmed F. Fadhil ◽  
Raghuveer Kanneganti ◽  
Lalit Gupta ◽  
Henry Eberle ◽  
Ravi Vaidyanathan

Networked operation of unmanned air vehicles (UAVs) demands the fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting aircraft runways and horizons, as well as enhancing awareness of the surrounding terrain, is introduced based on the fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment; we address this with a registration step that aligns the EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and on pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility, and that different aspects of the EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned head-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
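The general shape of DWT sub-band fusion can be sketched with a one-level 2D Haar transform: decompose each registered image into an approximation band and three detail bands, average the approximations, keep the max-magnitude detail coefficients, and invert. This is one generic fusion rule for illustration; the paper formulates four specific rules, and the helper names below are assumptions.

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    a = img.astype(float)
    lo = (a[:, ::2] + a[:, 1::2]) / 2          # row-wise average
    hi = (a[:, ::2] - a[:, 1::2]) / 2          # row-wise difference
    LL = (lo[::2] + lo[1::2]) / 2              # approximation
    LH = (lo[::2] - lo[1::2]) / 2              # horizontal detail
    HL = (hi[::2] + hi[1::2]) / 2              # vertical detail
    HH = (hi[::2] - hi[1::2]) / 2              # diagonal detail
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2 (the /2 normalization makes it lossless)."""
    h, w = LL.shape
    lo = np.zeros((2 * h, w))
    hi = np.zeros((2 * h, w))
    lo[::2], lo[1::2] = LL + LH, LL - LH
    hi[::2], hi[1::2] = HL + HH, HL - HH
    out = np.zeros((2 * h, 2 * w))
    out[:, ::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def fuse(evs, svs):
    """Fuse two registered images: average low-frequency bands,
    keep the larger-magnitude detail coefficient at each position."""
    b0, b1 = haar2(evs), haar2(svs)
    LL = (b0[0] + b1[0]) / 2
    details = [np.where(np.abs(d0) >= np.abs(d1), d0, d1)
               for d0, d1 in zip(b0[1:], b1[1:])]
    return ihaar2(LL, *details)
```

The max-magnitude rule tends to preserve edges (e.g. runway outlines from the SVS image) while the averaged approximation retains overall scene brightness from both sensors.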

