HMM-based Address Parsing with Massive Synthetic Training Data Generation

Author(s):  
Xiang Li ◽  
Hakan Kardes ◽  
Xin Wang ◽  
Ang Sun

Procedia CIRP ◽  
2021 ◽  
Vol 104 ◽  
pp. 1257-1262

Author(s):  
Daniel Schoepflin ◽  
Dirk Holst ◽  
Martin Gomse ◽  
Thorsten Schüppstuhl

Author(s):  
Daniel Schoepflin ◽  
Karthik Iyer ◽  
Martin Gomse ◽  
Thorsten Schüppstuhl

Abstract: Obtaining annotated data for proper training of AI image classifiers remains a challenge for successful deployment in industrial settings. As a promising alternative to handcrafted annotations, synthetic training data generation has grown in popularity. However, in most cases the pipelines used to generate these data are not universally applicable and must be redesigned for different application domains. This requires a detailed formulation of the domain through a semantic scene grammar. We present such a grammar, based on domain knowledge, for the production-supplying transport of components in intralogistics settings. We present a use-case analysis for the domain of production-supplying logistics and derive a scene grammar that can be used to formulate similar problem statements in the domain for the purpose of data generation. We demonstrate the use of this grammar to feed a scene-generation pipeline and obtain training data for an AI-based image classifier.
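The idea of a semantic scene grammar driving a data-generation pipeline can be illustrated with a minimal sketch. The grammar below is a toy example with invented symbol names (Pallet, SmallLoadCarrier, etc.), not the grammar derived in the paper; it only shows the mechanism of sampling scene compositions from production rules.

```python
import random

# Toy scene grammar for an intralogistics setting: each non-terminal maps to a
# list of possible expansions. All symbol names here are illustrative
# assumptions, not the grammar from the paper.
GRAMMAR = {
    "Scene": [["Carrier", "Background"]],
    "Carrier": [["Pallet", "LoadStack"], ["SmallLoadCarrier", "Parts"]],
    "LoadStack": [["Box"], ["Box", "LoadStack"]],
    "Parts": [["Part"], ["Part", "Parts"]],
}
TERMINALS = {"Pallet", "SmallLoadCarrier", "Box", "Part", "Background"}

def expand(symbol, rng):
    """Recursively expand a symbol into a flat list of terminal scene objects."""
    if symbol in TERMINALS:
        return [symbol]
    expansion = rng.choice(GRAMMAR[symbol])
    out = []
    for s in expansion:
        out.extend(expand(s, rng))
    return out

# Each sampled expansion is one scene layout to hand to a renderer.
rng = random.Random(42)
scene_objects = expand("Scene", rng)
print(scene_objects)
```

In a full pipeline, each sampled object list would be passed to a 3D scene generator that places the corresponding assets and renders annotated images.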


2021 ◽  
Vol 18 (4) ◽  
pp. 378-381 ◽  
Author(s):  
Luis A. Bolaños ◽  
Dongsheng Xiao ◽  
Nancy L. Ford ◽  
Jeff M. LeDue ◽  
Pankaj K. Gupta ◽  
...  

IEEE Access ◽  
2021 ◽  
pp. 1-1

Author(s):  
Christine Dewi ◽  
Rung-Ching Chen ◽  
Yan-Ting Liu ◽  
Xiaoyi Jiang ◽  
Kristoko Dwi Hartomo

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2144
Author(s):  
Stefan Reitmann ◽  
Lorenzo Neumann ◽  
Bernhard Jung

Common machine-learning (ML) approaches for scene classification require a large amount of training data. However, for the classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and the manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables a largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
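The core principle behind such simulated depth sensing, casting rays into a virtual scene and labeling each hit with the semantic class of the object it struck, can be sketched in a few lines. This is a self-contained 2D toy example, not the BLAINDER API: the scene objects, labels, and function names are invented here for illustration.

```python
import math

# Toy labeled scene: two circles standing in for semantically tagged objects.
# Labels and geometry are illustrative assumptions.
SCENE = [
    {"label": "pillar", "center": (4.0, 0.0), "radius": 1.0},
    {"label": "crate",  "center": (0.0, 5.0), "radius": 1.5},
]

def ray_circle_hit(angle, center, radius):
    """Distance along the ray (cos a, sin a) from the origin to the nearest
    circle intersection, or None if the ray misses."""
    dx, dy = math.cos(angle), math.sin(angle)
    cx, cy = center
    # Solve |t*d - c|^2 = r^2 for the smallest positive t (a = 1 since |d| = 1).
    b = -2.0 * (dx * cx + dy * cy)
    c = cx * cx + cy * cy - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def scan(num_rays=360):
    """Simulate one 2D 'LiDAR' sweep; return (x, y, label) tuples for each
    ray's closest hit, i.e., a semantically labeled point cloud."""
    points = []
    for i in range(num_rays):
        angle = 2.0 * math.pi * i / num_rays
        best = None
        for obj in SCENE:
            t = ray_circle_hit(angle, obj["center"], obj["radius"])
            if t is not None and (best is None or t < best[0]):
                best = (t, obj["label"])
        if best is not None:
            t, label = best
            points.append((t * math.cos(angle), t * math.sin(angle), label))
    return points

cloud = scan()
```

A real add-on like BLAINDER performs the same hit-and-label loop in 3D against Blender scene geometry, adds sensor noise and environmental effects, and exports the labeled points to standard formats.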

