Coordinating robotic sensors in a complex environment for data collection and object recognition

1994 ◽  
Author(s):  
Kelly A. Korzeniowski


2019 ◽
Vol 1 (3) ◽  
pp. 883-903 ◽  
Author(s):  
Daulet Baimukashev ◽  
Alikhan Zhilisbayev ◽  
Askat Kuzdeuov ◽  
Artemiy Oleinikov ◽  
Denis Fadeyev ◽  
...  

Recognizing objects and estimating their poses have a wide range of applications in robotics. For instance, to grasp objects, robots need their position and orientation in 3D. The task becomes challenging in a cluttered environment containing different types of objects. A popular approach to this problem is to use a deep neural network for object recognition. However, deep learning-based object detection in cluttered environments requires a substantial amount of data, and collecting these data demands time and extensive human labor for manual labeling. In this study, our objective was the development and validation of a deep object recognition framework using a synthetic depth image dataset. We synthetically generated a depth image dataset of 22 objects randomly placed in a 0.5 m × 0.5 m × 0.1 m box and automatically labeled all objects with an occlusion rate below 70%. The Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture was trained on a dataset of 800,000 synthetic depth images, and its performance was tested on a real-world depth image dataset of 2,000 samples. The deep object recognizer achieved 40.96% detection accuracy on the real depth images and 93.5% on the synthetic depth images. Training the model with noise-added synthetic images improved the recognition accuracy on real images to 46.3%. The object detection framework can therefore be trained on synthetically generated depth data and then employed for object recognition on real depth data in a cluttered environment. Synthetic depth data-based deep object detection has the potential to substantially decrease the time and human effort required for extensive data collection and labeling.
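As an illustration of the training step described above, the sketch below fine-tunes a torchvision Faster R-CNN on single-channel depth images. It is not the authors' code: the 23-class head (22 objects plus background), the image size, and the Gaussian depth-noise augmentation standing in for the paper's noise-added synthetic images are assumptions for illustration only.

```python
# Minimal sketch (assumptions, not the authors' implementation) of fine-tuning
# a Faster R-CNN detector on depth images with torchvision.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 23  # 22 objects + background (assumption based on the abstract)

def build_depth_detector():
    # Start from a COCO-pretrained Faster R-CNN and replace the classifier head.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

def add_depth_noise(depth, sigma=0.005):
    # Hypothetical augmentation: additive Gaussian noise so that a detector
    # trained on clean synthetic depth transfers better to real sensor data.
    return depth + sigma * torch.randn_like(depth)

if __name__ == "__main__":
    model = build_depth_detector()
    model.train()
    # One synthetic depth image, replicated to 3 channels, with a dummy box label.
    depth = add_depth_noise(torch.rand(1, 480, 640))
    image = depth.repeat(3, 1, 1)
    target = {"boxes": torch.tensor([[50.0, 60.0, 120.0, 140.0]]),
              "labels": torch.tensor([5])}
    losses = model([image], [target])
    print({k: float(v) for k, v in losses.items()})
```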


2008 ◽  
Vol 392-394 ◽  
pp. 596-600 ◽  
Author(s):  
Hong Jun Wang ◽  
Xiang Jun Zou ◽  
D.J. Zou ◽  
J. Liu ◽  
Tian Hu Liu

In a picking-manipulator localization system, the key problem is accurately determining the positions of the target object and the picking manipulator in a complex environment. Based on a multi-sensor information fusion method, a data fusion system was presented that integrates a laser sensor for absolute location with an ultrasonic sensor for impediment inspection. Firstly, data collection and fusion were implemented with a two-level distributed system. Secondly, the method of data collection and fusion in a virtual environment was discussed; the fused data could drive a 3D model of the picking manipulator to move dynamically in real time using the event and route mechanisms provided by the virtual environment, simulating the process of accurately locating the picking manipulator. Finally, a location simulation system was developed with VC++ and the EON SDK.
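The sketch below illustrates the two-level idea in Python rather than the authors' VC++/EON implementation: a first level collects raw laser and ultrasonic readings, and a second level fuses the laser fixes by inverse-variance weighting and flags impediments from the ultrasonic range. The sensor models, variances, and the 0.3 m obstacle threshold are assumptions for illustration only.

```python
# Minimal sketch of two-level multi-sensor collection and fusion (assumed
# sensor models; not the system described in the abstract).
from dataclasses import dataclass

@dataclass
class LaserFix:
    x: float          # absolute position estimate (m)
    y: float
    variance: float   # measurement variance (m^2), assumed known

@dataclass
class UltrasonicReading:
    range_m: float    # distance to the nearest impediment (m)

class FusionNode:
    """Second-level node: fuses laser fixes by inverse-variance weighting and
    flags impediments reported by the ultrasonic sensor."""
    def __init__(self, obstacle_threshold_m=0.3):
        self.obstacle_threshold_m = obstacle_threshold_m

    def fuse(self, fixes, sonar):
        weights = [1.0 / f.variance for f in fixes]
        total = sum(weights)
        x = sum(w * f.x for w, f in zip(weights, fixes)) / total
        y = sum(w * f.y for w, f in zip(weights, fixes)) / total
        blocked = sonar.range_m < self.obstacle_threshold_m
        return {"x": x, "y": y, "impediment": blocked}

if __name__ == "__main__":
    node = FusionNode()
    fixes = [LaserFix(1.02, 0.48, 0.0004), LaserFix(1.05, 0.50, 0.0009)]
    print(node.fuse(fixes, UltrasonicReading(range_m=0.25)))
```

The fused state could then be fed to the virtual-environment model of the manipulator in the same way the abstract describes driving the 3D model in real time.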


Author(s):  
S.W. Hui ◽  
D.F. Parsons

The development of hydration stages for electron microscopes has opened up the application of electron diffraction to the study of biological membranes. Membrane specimens can now be observed without the artifacts introduced by drying, fixation, and staining. The advantages of the electron diffraction technique, such as the ability to observe small areas and thin specimens, to image and to screen impurities, to vary the camera length, and to reduce data collection time, are fully utilized. Here we report our pioneering work in this area.


Author(s):  
Weiping Liu ◽  
Jennifer Fung ◽  
W.J. de Ruijter ◽  
Hans Chen ◽  
John W. Sedat ◽  
...  

Electron tomography is a technique in which many projections of an object are collected with the transmission electron microscope (TEM) and then used to reconstruct the object in its entirety, allowing its internal structure to be viewed. As vital as the 3-D structural information is, and with no other 3-D imaging technique competing in its resolution range, electron tomography of amorphous structures has been exercised only sporadically over the last ten years. Its general lack of popularity can be attributed to the tediousness of the entire process, starting from data collection, through image processing for reconstruction, and extending to 3-D image analysis. We have been investing effort in automating all aspects of electron tomography. Our systems for data collection and tomographic image processing will be briefly described. To date, we have developed a second-generation automated data collection system based on an SGI workstation (Fig. 1); the previous version used a MicroVAX. The computer takes full control of the microscope operations through its graphical, menu-driven environment. This is made possible by direct digital recording of images using the CCD camera.
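To illustrate the reconstruction principle (not the authors' software), the sketch below simulates a tilt series with the Radon transform of a 2-D test slice and recovers the slice by filtered back-projection using scikit-image. The ±60° tilt range and 2° step are assumptions typical of a TEM tilt series, not values from the abstract.

```python
# Minimal 2-D sketch of tomographic reconstruction from a simulated tilt series.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Test object standing in for one slice of the specimen.
slice_true = resize(shepp_logan_phantom(), (128, 128))

# Simulated data collection: one projection per tilt angle over a limited range.
tilt_angles = np.arange(-60, 61, 2)  # degrees (assumed tilt scheme)
projections = radon(slice_true, theta=tilt_angles)

# Reconstruction by filtered back-projection.
slice_recon = iradon(projections, theta=tilt_angles, filter_name="ramp")

rms_error = np.sqrt(np.mean((slice_recon - slice_true) ** 2))
print(f"projections: {projections.shape}, RMS error: {rms_error:.3f}")
```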


1997 ◽  
Vol 6 (4) ◽  
pp. 34-47 ◽  
Author(s):  
Steven H. Long ◽  
Lesley B. Olswang ◽  
Julianne Brian ◽  
Philip S. Dale

This study investigated whether young children with specific expressive language impairment (SELI) learn to combine words according to general positional rules or specific, grammatic relation rules. The language of 20 children with SELI (4 females, 16 males, mean age of 33 months, mean MLU of 1.34) was sampled weekly for 9 weeks. Sixteen of these children also received treatment for two-word combinations (agent+action or possessor+possession). Two different metrics were used to determine the productivity of combinatorial utterances. One metric assessed productivity based on positional consistency alone; another assessed productivity based on positional and semantic consistency. Data were analyzed session-by-session as well as cumulatively. The results suggest that these children learned to combine words according to grammatic relation rules. Results of the session-by-session analysis were less informative than those of the cumulative analysis. For children with SELI ready to make the transition to multiword utterances, these findings support a cumulative method of data collection and a treatment approach that targets specific grammatic relation rules rather than general word combinations.


2019 ◽  
Vol 4 (2) ◽  
pp. 356-362
Author(s):  
Jennifer W. Means ◽  
Casey McCaffrey

Purpose The use of real-time recording technology for clinical instruction allows student clinicians to more easily collect data, self-reflect, and move toward independence while supervisors continue to provide support. This article discusses how the use of high-definition real-time recording, Bluetooth technology, and embedded annotation may enhance the supervisory process. It also reports results on graduate students' perception of the benefits of, and satisfaction with, the types of technology used. Method Survey data were collected from graduate students about their use and perceived benefits of advanced technology to support supervision during their first clinical experience. Results Survey results indicate that students found their video recordings useful for self-evaluation, data collection, and therapy preparation. The students also perceived an increase in self-confidence through the use of the Bluetooth headsets, as their supervisors could provide guidance and encouragement without interrupting the flow of therapy sessions by entering the room to redirect them. Conclusions Video recording technology can provide opportunities for students to review videos of prospective clients they will be treating, to review their own treatment videos for self-assessment, and to collect additional data. Bluetooth technology provides immediate communication between the clinical educator and the student. Students reported that this communication can improve their self-confidence, perceived performance, and subsequent shift toward independence.

