A New Approach to Enhance Artificial Intelligence for Robot Picking System Using Auto Picking Point Annotation

Author(s):  
Cheng-Han (Lance) Tsai ◽  
Jen-Yuan (James) Chang

Artificial Intelligence (AI) has been widely used in domains such as self-driving, automated optical inspection, and detection of object locations for robotic pick-and-place operations. Although the current results of using AI in these fields are good, the biggest bottleneck for AI is the need for a vast amount of data, together with labeling of the corresponding answers, for sufficient training. These efforts still require significant manpower, and if the quality of the labeling is unstable, the trained AI model becomes unstable and, as a consequence, so do the results. To resolve this issue, this paper proposes an auto annotation system with methods including (1) highly realistic model generation with real texture, (2) a domain randomization algorithm in the simulator to automatically generate abundant and diverse images, and (3) a visibility tracking algorithm to calculate the occlusion effect objects cause on each other for different picking-strategy labels. Our experiments show that 10,000 images can be generated per hour, each containing multiple objects and each object labeled in different classes based on its visibility. Instance segmentation AI models can also be trained with these methods to verify the gap between performance on synthetic training data and real test data, indicating that the mean average precision (mAP) can reach 70%.
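As an illustration of the visibility-based labelling step, the sketch below computes a per-object visibility ratio from instance masks and maps it to a picking-strategy class. The mask source, thresholds, and class names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of visibility-based auto-labelling, assuming the simulator
# can render (a) the full mask of each object alone and (b) its occluded
# instance mask in the composed scene. All names are illustrative.
import numpy as np

def visibility_ratio(full_mask: np.ndarray, visible_mask: np.ndarray) -> float:
    """Fraction of an object's pixels that remain visible in the scene."""
    full = np.count_nonzero(full_mask)
    if full == 0:
        return 0.0
    return np.count_nonzero(visible_mask) / full

def picking_class(ratio: float) -> str:
    """Map a visibility ratio to a hypothetical picking-strategy label."""
    if ratio > 0.9:
        return "pick_directly"
    if ratio > 0.5:
        return "pick_after_neighbour"
    return "do_not_pick"

# Toy 2x2 masks: one quarter of the object is occluded.
full = np.array([[1, 1], [1, 1]])
visible = np.array([[1, 1], [1, 0]])
print(picking_class(visibility_ratio(full, visible)))  # pick_after_neighbour
```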

2021 ◽  
Vol 15 (4) ◽  
pp. 1-20
Author(s):  
Georg Steinbuss ◽  
Klemens Böhm

Benchmarking unsupervised outlier detection is difficult. Outliers are rare, and existing benchmark data contains outliers with various and unknown characteristics. Fully synthetic data usually consists of outliers and regular instances with clear characteristics and thus, in principle, allows for a more meaningful evaluation of detection methods. Nonetheless, there have been only a few attempts to include synthetic data in benchmarks for outlier detection. This might be due to the imprecise notion of outliers or to the difficulty of arriving at a good coverage of different domains with synthetic data. In this work, we propose a generic process for generating datasets for such benchmarking. The core idea is to reconstruct regular instances from existing real-world benchmark data while generating outliers so that they exhibit insightful characteristics. We describe this generic process and then three instantiations of it that generate outliers with specific characteristics, such as local outliers. To validate the process, we perform a benchmark with state-of-the-art detection methods and carry out experiments to study the quality of the data reconstructed in this way. Next to showcasing the workflow, this confirms the usefulness of our proposed process; in particular, it yields regular instances close to the ones from real data. Summing up, we propose and validate a new and practical process for the benchmarking of unsupervised outlier detection.
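To make the core idea concrete, here is a minimal sketch under strong simplifying assumptions (a single Gaussian fitted to the real data, outliers placed on a distant shell); it illustrates the reconstruct-regulars-then-generate-outliers pattern, not the authors' exact process.

```python
# Sketch: reconstruct "regular" instances from real data and add synthetic
# outliers with a clear characteristic. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # stand-in for benchmark data

# Regular instances: resample from a Gaussian fitted to the real data.
mu, cov = real.mean(axis=0), np.cov(real, rowvar=False)
regular = rng.multivariate_normal(mu, cov, size=500)

# Outliers: sampled well outside the bulk, on a shell several
# standard deviations from the mean.
angles = rng.uniform(0, 2 * np.pi, size=25)
radii = rng.uniform(5, 8, size=25)
outliers = mu + np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)

dataset = np.vstack([regular, outliers])
labels = np.array([0] * len(regular) + [1] * len(outliers))  # 1 = outlier
print(dataset.shape, labels.sum())
```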


Author(s):  
Hoon Kim ◽  
Kangwook Lee ◽  
Gyeongjo Hwang ◽  
Changho Suh

Developing a computer vision-based algorithm for identifying dangerous vehicles requires a large amount of labeled accident data, which is difficult to collect in the real world. To tackle this challenge, we first develop a synthetic data generator built on top of a driving simulator. We then observe that the synthetic labels generated from simulation results are very noisy, resulting in poor classification performance. To improve the quality of the synthetic labels, we propose a new label adaptation technique that first extracts the internal states of vehicles from the underlying driving simulator and then refines the labels by predicting future vehicle paths with a well-studied motion model. Via real-data experiments, we show that our dangerous vehicle classifier can reduce the missed detection rate by at least 18.5% compared with classifiers trained on real data, when the time-to-collision is between 1.6 s and 1.8 s.
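The label-refinement idea can be sketched with a constant-velocity motion model standing in for the motion model used in the paper; the separation threshold, horizon, and time step below are illustrative assumptions.

```python
# Hedged sketch: roll a constant-velocity motion model forward and relabel
# a vehicle pair as dangerous if the predicted separation falls below a
# threshold within the time-to-collision window.
import numpy as np

def is_dangerous(p1, v1, p2, v2, horizon=1.8, dt=0.05, min_gap=2.0):
    """Check minimum predicted separation over the horizon (SI units)."""
    for t in np.arange(0.0, horizon, dt):
        gap = np.linalg.norm((p1 + v1 * t) - (p2 + v2 * t))
        if gap < min_gap:
            return True
    return False

# Two vehicles on a collision course (positions in m, velocities in m/s).
print(is_dangerous(np.array([0.0, 0.0]), np.array([10.0, 0.0]),
                   np.array([15.0, 0.0]), np.array([0.0, 0.0])))  # True
```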


1995 ◽  
Vol 167 ◽  
pp. 371-372
Author(s):  
Steve B. Howell ◽  
William J. Merline

We have constructed a computer model for simulating point sources imaged on CCDs. An attempt has been made to ensure that the model produces “data” that mimic real data taken with two-D detectors. To be realistic, such simulations must include randomly generated noise of the appropriate type from all sources. The synthetic data are output as simple one-D integrations, as two-D radial slices, and as three-D intensity plots. Each noise source can be turned on or off, so the sources can be studied independently as well as in combination, providing insight into the image components.
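In the same spirit, the toy sketch below renders a Gaussian point source on a CCD grid with independently switchable noise sources (sky background, photon shot noise, readout noise); all parameters are illustrative, not the authors' model.

```python
# Sketch of a point source on a CCD with toggleable noise sources.
import numpy as np

def simulate_star(size=64, flux=5e4, fwhm=4.0, sky=100.0, read_noise=5.0,
                  use_shot=True, use_sky=True, use_read=True, seed=0):
    rng = np.random.default_rng(seed)
    sigma = fwhm / 2.355
    y, x = np.mgrid[:size, :size]
    psf = np.exp(-((x - size / 2) ** 2 + (y - size / 2) ** 2) / (2 * sigma**2))
    image = flux * psf / psf.sum()                  # noiseless point source (e-)
    if use_sky:
        image += sky                                # uniform sky background
    if use_shot:
        image = rng.poisson(image).astype(float)    # photon shot noise
    if use_read:
        image += rng.normal(0.0, read_noise, image.shape)  # readout noise
    return image

frame = simulate_star()
print(frame.shape, frame.max())
```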


2020 ◽  
Vol 13 (3) ◽  
pp. 142-155
Author(s):  
Youness Madani ◽  
Mohammed Erritali ◽  
Jamaa Bengourram ◽  
Francoise Sailhan

Sentiment analysis has become an important field of scientific research in recent years. The goal is to extract opinions and sentiments from written text using artificial intelligence algorithms. In this article, we propose a new approach for classifying Twitter data into three classes (positive, negative, and neutral). The proposed method combines two approaches: a dictionary-based approach using the sentiment dictionary SentiWordNet, and an approach based on a fuzzy logic system (fuzzification, rule inference, and defuzzification). Experimental results show that our approach outperforms several other approaches in the literature and that using fuzzy logic improves the quality of the classification.
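The shape of such a pipeline can be sketched as follows: lexicon scores are fuzzified with triangular membership functions, simple rules fire, and a centroid-style defuzzification yields the class. The tiny lexicon stands in for SentiWordNet, and all membership bounds and thresholds are illustrative assumptions.

```python
# Hedged sketch of a dictionary + fuzzy-logic sentiment classifier.
LEXICON = {"good": 0.7, "great": 0.9, "bad": -0.6, "awful": -0.9}  # toy stand-in

def tweet_score(text):
    """Average lexicon polarity of the words in a tweet."""
    words = [LEXICON.get(w, 0.0) for w in text.lower().split()]
    return sum(words) / max(len(words), 1)

def triangular(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(text):
    s = tweet_score(text)                    # crisp input in [-1, 1]
    neg = triangular(s, -1.0, -0.5, 0.0)     # fuzzification
    neu = triangular(s, -0.5, 0.0, 0.5)
    pos = triangular(s, 0.0, 0.5, 1.0)
    # Rule inference + centroid defuzzification over class centres.
    den = neg + neu + pos or 1.0
    crisp = (-1.0 * neg + 0.0 * neu + 1.0 * pos) / den
    return "positive" if crisp > 0.1 else "negative" if crisp < -0.1 else "neutral"

print(classify("great good day"))  # positive
```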


2018 ◽  
Vol 226 ◽  
pp. 04042
Author(s):  
Marko Petkovic ◽  
Marija Blagojevic ◽  
Vladimir Mladenovic

In this paper, we introduce a new approach to food processing using artificial intelligence. The main focus is the simulation of the production of spreads and chocolate as representative confectionery products. This approach helps to speed up, model, optimize, and predict the parameters of food processing, with the aim of increasing the quality of the final products. Artificial intelligence is used in the form of neural networks and decision methods.


2021 ◽  
Vol 28 (1) ◽  
pp. 163-172
Author(s):  
Józef Lisowski

This paper presents a new approach to the existing training of marine control engineering professionals using artificial intelligence. We use optimisation strategies, neural networks and game theory to support optimal, safe ship control by applying the latest scientific achievements to the current process of educating students as future marine officers. Recent advances in shipbuilding, equipment for robotised ships, the high quality of shipboard planning, and the cost, dependability and repair of shipboard equipment, together with the demands of safe shipping and environmental protection, require marine officers to maintain up-to-date knowledge of recent equipment and software for computational intelligence. We carry out an analysis to determine which methods of artificial intelligence can eliminate human subjectivity and uncertainty from real navigational situations involving manoeuvring decisions made by marine officers. Trainees learn by using computer simulation methods to calculate the optimal safe trajectory of the ship in the event of a possible collision with other ships, which are mapped using neural networks that take into consideration the subjectivity of the navigator. The game-optimal safe trajectory for the ship also considers the uncertainty in the navigational situation, which is measured in terms of the risk of collision. The use of artificial intelligence methods in the final stage of training on ship automation can improve the practical education of marine officers and allow for safer and more effective ship operation.
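One ingredient named in the abstract, the risk of collision, can be sketched via the closest point of approach (CPA) between own ship and a target under straight-line motion; the units and encounter below are illustrative assumptions, not the paper's model.

```python
# Sketch: distance and time of closest point of approach (CPA) for two
# vessels moving with constant velocities.
import numpy as np

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Return (distance at CPA, time to CPA) for constant velocities."""
    dp = tgt_pos - own_pos
    dv = tgt_vel - own_vel
    dv2 = float(dv @ dv)
    t = 0.0 if dv2 == 0.0 else max(0.0, -float(dp @ dv) / dv2)
    return float(np.linalg.norm(dp + dv * t)), t

# Crossing encounter: positions in nautical miles, speeds in knots.
d_cpa, t_cpa = cpa(np.array([0.0, 0.0]), np.array([0.0, 12.0]),
                   np.array([4.0, 4.0]), np.array([-8.0, 0.0]))
print(f"CPA {d_cpa:.2f} nm in {t_cpa:.2f} h")
```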


Author(s):  
P. Agrafiotis ◽  
D. Skarlatos ◽  
T. Forbes ◽  
C. Poullis ◽  
M. Skamantzari ◽  
...  

In this paper, the main challenges of underwater photogrammetry in shallow waters are described and analysed. The very short camera-to-object distance in such cases, as well as buoyancy issues, wave effects and water turbidity, are challenges to be resolved. Additionally, the major challenge of all, caustics, is addressed by a new approach for caustics removal (Forbes et al., 2018), which is applied here in order to investigate its performance in terms of SfM-MVS and 3D reconstruction results. In the proposed approach, the complex problem of removing caustic effects is addressed by classifying them and then removing them from the images. We propose and test a novel solution based on two small and easily trainable Convolutional Neural Networks (CNNs). Because real ground truth for caustics is not easily available, we show how a small set of synthetic data can be used to train the network and then transfer the learning to real data, with robustness to intra-class variation. The proposed solution results in caustic-free images which can be further used for other tasks as needed.
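For flavour, the sketch below shows the kind of small, easily trainable CNN the paper describes, here emitting a per-pixel caustic probability map; the exact architecture is an assumption, not the authors' network.

```python
# Hedged sketch of a small CNN for per-pixel caustic classification.
import torch
import torch.nn as nn

class CausticNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),   # per-pixel caustic logit
        )

    def forward(self, x):
        return torch.sigmoid(self.body(x))    # caustic probability map

net = CausticNet()
fake_batch = torch.rand(2, 3, 64, 64)         # stand-in for synthetic images
mask = net(fake_batch)
print(mask.shape)                              # torch.Size([2, 1, 64, 64])
```

A network this small can plausibly be trained on synthetic caustic/no-caustic pairs first and then fine-tuned on the limited real data, mirroring the transfer step the abstract describes.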


2010 ◽  
Vol 36 ◽  
pp. 297-302 ◽  
Author(s):  
Rong Sheng Lu ◽  
Yan Qiong Shi ◽  
Qi Li ◽  
Qing Ping Yu

In recent years, automated optical inspection (AOI) has developed very rapidly alongside the emerging semiconductor, LCD, PCB, optical communication and precision assembly industries, and it is also widely used in the robotics, automotive, steel, textile, printing and medical industries. In this paper, we review AOI techniques used for defect inspection on large surfaces, such as inspecting the quality of TFT-LCD glass substrates and filters. An AOI system architecture with high inspection speed is illustrated, and some key techniques of light illumination, distributed image processing and conveying mechanisms are explained.


Geophysics ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. S33-S40 ◽  
Author(s):  
S. Rentsch ◽  
S. Buske ◽  
S. Lüth ◽  
S. A. Shapiro

We propose a new approach for the location of seismic sources using a technique inspired by Gaussian-beam migration of three-component data. This approach requires only the preliminary picking of time intervals around a detected event and is much less sensitive to the picking precision than standard location procedures. Furthermore, this approach is characterized by a high degree of automation. The polarization information of three-component data is estimated and used to perform initial-value ray tracing. By weighting the energy of the signal using Gaussian beams around these rays, the stacking is restricted to physically relevant regions only. Event locations correspond to regions of maximum energy in the resulting image. We have successfully applied the method to synthetic data examples with 20%–30% white noise and to real data of a hydraulic-fracturing experiment, where events with comparatively small magnitudes [Formula: see text] were recorded.
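The weighted-stacking idea can be sketched in two dimensions, assuming a homogeneous medium with straight rays and perfect polarization estimates; the beam width, geometry and energies below are illustrative, not the paper's parameterization.

```python
# Schematic 2-D sketch: each receiver shoots a ray along its measured
# polarization direction and smears its signal energy onto the grid with
# a Gaussian-beam weight; the stacked image peaks near the source.
import numpy as np

grid = np.zeros((100, 100))
ys, xs = np.mgrid[:100, :100].astype(float)
source = np.array([60.0, 40.0])
receivers = [np.array([0.0, 10.0]), np.array([0.0, 50.0]), np.array([0.0, 90.0])]
beam_width = 4.0

for r in receivers:
    direction = source - r
    direction /= np.linalg.norm(direction)          # polarization estimate
    # Perpendicular distance of each grid node from the ray through r.
    rel = np.stack([ys - r[0], xs - r[1]], axis=0)
    along = direction[0] * rel[0] + direction[1] * rel[1]
    perp2 = rel[0] ** 2 + rel[1] ** 2 - along**2
    energy = 1.0                                     # stand-in for signal energy
    grid += energy * np.exp(-perp2 / (2 * beam_width**2)) * (along > 0)

print(np.unravel_index(grid.argmax(), grid.shape))   # near (60, 40)
```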


Geophysics ◽  
1991 ◽  
Vol 56 (7) ◽  
pp. 1071-1080 ◽  
Author(s):  
Mark Sams

A long-spaced sonic survey may be thought of as a special case of ray-theoretical tomographic imaging. With such an approach, estimates of borehole properties at a resolution of 6 inches (0.15 m) have been obtained by inversion, compared with a resolution of 2 ft (0.6 m) from standard borehole-compensated (BHC) techniques. The inversion scheme employs the conjugate gradient technique, which is fast and efficient. Unlike BHC, the method compensates for variable refraction angles and provides estimates of errors in the measurements. Results from synthetic data show that these factors greatly improve the imaging of the properties of a finely layered medium, though amplitude decay and coupling are less well resolved than velocity and mud traveltime. Results from real data confirm the superior quality of logs obtained by inversion. Furthermore, they indicate that measured amplitudes can be dominated by errors that degrade BHC estimates of amplitude decay and coupling.
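The conjugate gradient engine at the core of such an inversion can be sketched on a tiny tomography-like system, solving the normal equations G^T G m = G^T d for slowness-like model parameters m from traveltime-like data d; the matrix and values below are illustrative only.

```python
# Minimal conjugate-gradient solver applied to a toy tomographic system.
import numpy as np

def conjugate_gradient(A, b, iters=50, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

G = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])  # ray coverage
m_true = np.array([0.4, 0.5, 0.6])                                  # slowness model
d = G @ m_true                                                       # traveltimes
m = conjugate_gradient(G.T @ G, G.T @ d)
print(np.round(m, 3))  # ~ [0.4, 0.5, 0.6]
```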

