Imitation Learning of Path-Planned Driving Using Disparity-Depth Images

Author(s):  
Sascha Hornauer ◽  
Karl Zipser ◽  
Stella Yu

2005 ◽  
Author(s):  
Frederick L. Crabbe ◽  
Rebecca Hwa

Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the primary means of communication for speech-impaired people, but most hearing people do not know it, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera cannot capture the 3D orientation or depth of a scene, whereas the Kinect can, and this makes classification more accurate. Result: The Kinect camera produces distinct images for the hand gestures ‘2’ and ‘V’, and similarly for ‘1’ and ‘I’, whereas a normal web camera cannot distinguish between these pairs. We used hand gestures from Indian Sign Language; our dataset contained 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we compare the performance of several machine learning models and find that a CNN trained on depth images gives the most accurate performance. All results were obtained on a PYNQ Z2 board.
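
The abstract does not specify the network architecture or framework, so the following is only a minimal PyTorch sketch of the kind of pipeline described: a small CNN classifying single-channel depth images into 36 classes (26 letters plus 10 digits). The 64x64 input resolution and all layer sizes are placeholder assumptions, not the paper's reported model.

```python
# Minimal sketch of a 36-class depth-image gesture classifier.
# Assumed: 64x64 single-channel depth inputs; illustrative layer sizes.
import torch
import torch.nn as nn

class DepthGestureCNN(nn.Module):
    def __init__(self, num_classes: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # depth image: 1 channel
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                 # 26 letters + 10 digits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = DepthGestureCNN()
batch = torch.randn(8, 1, 64, 64)  # stand-in for a batch of depth images
logits = model(batch)              # shape: (8, 36)
```

An 80/20 train/test split as described in the abstract could then be obtained with, e.g., `torch.utils.data.random_split` on the assembled dataset.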


2021 ◽  
Author(s):  
Markku Suomalainen ◽  
Fares J. Abu-dakka ◽  
Ville Kyrki

Abstract: We present a novel method for learning, from demonstration, 6-D tasks that can be modeled as a sequence of linear motions and compliances. The focus of this paper is the learning of a single linear primitive, many of which can be sequenced to perform more complex tasks. The presented method learns from demonstrations how to take advantage of mechanical gradients in in-contact tasks, such as assembly, both for translations and rotations, without any prior information. The method assumes there exists a desired linear direction in 6-D which, if followed by the manipulator, leads the robot’s end-effector to the goal area shown in the demonstration, either in free space or by leveraging contact through compliance. First, demonstrations are gathered in which the teacher explicitly shows the robot how the mechanical gradients can be used as guidance towards the goal. From the demonstrations, a set of directions is computed which would result in the observed motion at each timestep during a demonstration of a single primitive. By observing which direction is included in all these sets, we find a single desired direction which can reproduce the demonstrated motion. Finding the number of compliant axes and their directions in both rotation and translation is based on the assumption that, in the presence of a desired direction of motion, all other observed motion is caused by the contact force of the environment, signalling the need for compliance. We evaluate the method on a KUKA LWR4+ robot with test setups imitating typical tasks where a human would use compliance to cope with positional uncertainty. Results show that the method can successfully learn and reproduce compliant motions by taking advantage of the geometry of the task, therefore reducing the need for localization accuracy.
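
To make the direction-intersection idea concrete, here is a simplified sketch: each observed velocity is treated as admitting any desired direction within a fixed cone half-angle, and a candidate direction (the normalized mean) is accepted only if it lies inside every per-timestep cone. The cone model, the tolerance `alpha`, and the mean-direction candidate are illustrative assumptions, not the paper's exact 6-D formulation, which also covers rotations and the detection of compliant axes.

```python
# Simplified sketch: find one unit vector consistent with the set of
# feasible directions at every timestep of a single-primitive demonstration.
import numpy as np

def desired_direction(velocities: np.ndarray, alpha: float = np.pi / 3):
    """velocities: (T, 3) observed end-effector velocities.
    alpha: assumed cone half-angle defining each timestep's feasible set."""
    dirs = velocities / np.linalg.norm(velocities, axis=1, keepdims=True)
    candidate = dirs.mean(axis=0)
    candidate /= np.linalg.norm(candidate)
    # Accept the candidate only if it falls inside every timestep's cone.
    angles = np.arccos(np.clip(dirs @ candidate, -1.0, 1.0))
    return candidate if np.all(angles <= alpha) else None

v = np.array([[1.0, 0.1, 0.0], [0.9, -0.1, 0.1], [1.0, 0.0, -0.05]])
print(desired_direction(v))  # a unit vector consistent with all observations
```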


2021 ◽  
Vol 13 (5) ◽  
pp. 935
Author(s):  
Matthew Varnam ◽  
Mike Burton ◽  
Ben Esse ◽  
Giuseppe Salerno ◽  
Ryunosuke Kazahaya ◽  
...  

SO2 cameras are able to measure rapid changes in volcanic emission rate but require accurate calibrations and corrections to convert optical depth images into slant column densities. We conducted a test at Masaya volcano of two SO2 camera calibration approaches, calibration cells and a co-located spectrometer, and corrected both calibrations for light dilution, a process caused by light scattering between the plume and camera. We demonstrate an advancement on the image-based correction that allows the retrieval of the scattering efficiency across a 2D area of an SO2 camera image. When appropriately corrected for dilution, our two calibration approaches produce final calculated emission rates that agree with simultaneously measured traverse flux data and with each other, although the observed distribution of gas within the image differs. We demonstrate that traverses and SO2 camera techniques, when used together, generate better plume speed estimates for traverses and improved knowledge of wind direction for the camera, producing more reliable emission rates. We suggest that combined traverse and SO2 camera measurements should be adopted where possible.
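
As a concrete illustration of the calibration-and-dilution pipeline, the sketch below applies the standard single-scattering light-dilution model and a linear optical-depth-to-SCD calibration. The scattering coefficient, plume distance, and calibration slope are placeholder values, and this is a generic per-pixel correction, not the paper's 2D scattering-efficiency retrieval.

```python
# Minimal sketch: correct SO2 camera optical depths for light dilution,
# then convert to slant column densities via a linear calibration.
import numpy as np

def correct_light_dilution(tau_measured, epsilon, distance):
    """Recover true optical depth from measured optical depth.

    Assumed dilution model:
    I_meas = I_plume * exp(-eps*d) + I_sky * (1 - exp(-eps*d)).
    """
    dilution = 1.0 - np.exp(-epsilon * distance)  # fraction of scattered-in light
    transmitted = np.exp(-tau_measured) - dilution
    return -np.log(transmitted / (1.0 - dilution))

def optical_depth_to_scd(tau, slope, intercept=0.0):
    """Linear calibration (e.g., from SO2 cells): SCD = slope * tau + intercept."""
    return slope * tau + intercept

tau_image = np.array([[0.05, 0.10], [0.20, 0.15]])  # toy optical-depth image
tau_true = correct_light_dilution(tau_image, epsilon=1e-4, distance=5000.0)
scd = optical_depth_to_scd(tau_true, slope=2.6e18)  # slope is illustrative
```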


Author(s):  
Alireza Shamsoshoara ◽  
Fatemeh Afghah ◽  
Erik Blasch ◽  
Jonathan Ashdown ◽  
Mehdi Bennis