Synthesizing images using parameterized models for automated optical inspection (AOI)

2015 ◽  
Vol 82 (5) ◽  
Author(s):  
Max-Gerd Retzlaff ◽  
Josua Stabenow ◽  
Jürgen Beyerer ◽  
Carsten Dachsbacher

When designing or improving systems for automated optical inspection (AOI), systematic evaluation is an important but costly necessity to achieve and ensure high quality. Computer graphics methods can be used to quickly create large virtual sets of samples of test objects and to simulate image acquisition setups. We use procedural modeling techniques to generate virtual objects with varying appearance and properties, mimicking real objects and sample sets. Physical simulation of rigid bodies is deployed to simulate the placement of virtual objects, and using physically-based rendering techniques we create synthetic images. These are used as input to an AOI system instead of physically acquired images. This enables the development, optimization, and evaluation of the image processing and classification steps of an AOI system independently of a physical realization. We demonstrate this approach for shards of glass, as sorting glass is one challenging practical application for AOI.
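The procedural generation of varying virtual objects described in this abstract can be sketched minimally as seed-driven random geometry. The shard model below (a star-shaped polygon with jittered radii) and every parameter name are illustrative assumptions for this sketch, not the paper's actual model:

```python
import math
import random

def make_shard(seed, n_min=3, n_max=8, r_mean=1.0, r_jitter=0.4):
    """Generate one virtual glass shard as a star-shaped 2D polygon.

    Vertices sit at sorted random angles with jittered radii, mimicking
    the irregular outlines of real shards. A fixed seed makes each shard
    reproducible, so a virtual sample set is fully described by its seeds.
    """
    rng = random.Random(seed)
    n = rng.randint(n_min, n_max)
    angles = sorted(rng.uniform(0.0, 2.0 * math.pi) for _ in range(n))
    verts = []
    for a in angles:
        r = r_mean + rng.uniform(-r_jitter, r_jitter)
        verts.append((r * math.cos(a), r * math.sin(a)))
    return verts

# A large virtual sample set is then just a range of seeds:
sample_set = [make_shard(seed) for seed in range(100)]
```

In a full pipeline these outlines would be extruded to 3D, dropped onto a conveyor by a rigid-body simulator, and rendered with a physically-based renderer to produce the synthetic input images.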


2008 ◽  
Vol 02 (02) ◽  
pp. 207-233
Author(s):  
SATORU MEGA ◽  
YOUNES FADIL ◽  
ARATA HORIE ◽  
KUNIAKI UEHARA

Human-computer interaction systems have been developed in recent years. These systems use multimedia techniques to create Mixed-Reality environments in which users can train themselves. Although most of these systems rely strongly on interactivity with users and take users' states into account, they still do not consider users' preferences when providing help. In this paper, we introduce an Action Support System for Interactive Self-Training (ASSIST) in cooking. ASSIST focuses on recognizing users' cooking actions as well as the real objects related to these actions, so as to provide accurate and useful assistance. Before the recognition and instruction processes, it takes users' cooking preferences into account and, by collaborative filtering, suggests one or more recipes that are likely to satisfy those preferences. When the cooking process starts, ASSIST recognizes users' hand movements using a similarity measure algorithm called AMSS. When a recognized cooking action is correct, ASSIST instructs the user on the next cooking procedure through virtual objects. When a cooking action is incorrect, the cause of the failure is analyzed and ASSIST provides the user with support information according to that cause, to help correct the action. Furthermore, we construct parallel transition models from cooking recipes for more flexible instruction. This enables users to perform the necessary cooking actions in any order they want, allowing more flexible learning.
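The recipe-suggestion step described above is standard collaborative filtering. A minimal user-based sketch with cosine similarity follows; the user names, recipes, and ratings are invented for illustration and the paper's actual algorithm may differ:

```python
import math

# Hypothetical preference matrix: users x recipes, 0 = not yet rated.
ratings = {
    "alice": {"curry": 5, "omelette": 3, "ramen": 0},
    "bob":   {"curry": 4, "omelette": 0, "ramen": 5},
    "carol": {"curry": 5, "omelette": 4, "ramen": 4},
}

def cosine(u, v):
    """Cosine similarity between two rating vectors over the same keys."""
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def suggest(user, k_recipes=1):
    """Score each unrated recipe by similarity-weighted neighbour ratings."""
    scores = {}
    for other, prefs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], prefs)
        for recipe, r in prefs.items():
            if ratings[user][recipe] == 0 and r > 0:
                scores[recipe] = scores.get(recipe, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k_recipes]

print(suggest("alice"))  # -> ['ramen'] (her only unrated recipe)
```

Real systems would normalize for per-user rating bias and use far larger matrices, but the weighting scheme is the same.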



2021 ◽  
Vol 11 (13) ◽  
pp. 6017
Author(s):  
Gerivan Santos Junior ◽  
Janderson Ferreira ◽  
Cristian Millán-Arias ◽  
Ramiro Daniel ◽  
Alberto Casado Junior ◽  
...  

Cracks are pathologies whose appearance in ceramic tiles can cause various kinds of damage, as the coating system loses its water-tightness and impermeability functions. Moreover, a detached ceramic plate not only exposes the building structure but can also fall and injure people moving around the building. Manual inspection is the most common method for addressing this problem. However, it depends on the knowledge and experience of those who perform the analysis, and mapping the entire area demands a long time and a high cost. This work focuses on automated optical inspection to find faults in ceramic tiles by segmenting cracks in ceramic images with deep learning. We propose a deep-learning architecture for segmenting cracks in facades that includes an image pre-processing step. We also propose the Ceramic Crack Database, a set of images for segmenting defects in ceramic tiles. The proposed model can adequately identify a crack even when it is close to or within the grout.
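Segmentation models like the one proposed here are typically scored pixel-wise against ground-truth masks, most often with Intersection-over-Union. A minimal sketch of that metric follows; the toy masks are invented and are not samples from the Ceramic Crack Database:

```python
def iou(pred, truth):
    """Intersection-over-Union between two binary masks (nested lists).

    IoU = |pred AND truth| / |pred OR truth|; 1.0 means a perfect match.
    An empty union (both masks all zero) is treated as a perfect match.
    """
    inter = union = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            inter += 1 if (p and t) else 0
            union += 1 if (p or t) else 0
    return inter / union if union else 1.0

# Toy 2x3 crack masks: 2 pixels overlap, 4 pixels marked in total.
pred  = [[0, 1, 1],
         [0, 1, 0]]
truth = [[0, 1, 0],
         [0, 1, 1]]
print(iou(pred, truth))  # -> 0.5
```

In practice the masks are full-resolution arrays and IoU is averaged over a held-out test set.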





2003 ◽  
Vol 12 (6) ◽  
pp. 615-628 ◽  
Author(s):  
Benjamin Lok ◽  
Samir Naik ◽  
Mary Whitton ◽  
Frederick P. Brooks

Immersive virtual environments (VEs) provide participants with computer-generated environments filled with virtual objects to assist in learning, training, and practicing dangerous and/or expensive tasks. But does having every object be virtual inhibit interactivity and the level of immersion? If participants spend most of their time and cognitive load on learning and adapting to interacting with virtual objects, does this reduce the effectiveness of the VE? We conducted a study that investigated how handling real objects and self-avatar visual fidelity affect performance and sense of presence in a spatial cognitive manual task. We compared participants' performance of a block-arrangement task in a real-space environment and in several virtual and hybrid environments. The results showed that manipulating real objects in a VE brings task performance closer to that of real space, compared to manipulating virtual objects. There was no significant difference in reported sense of presence, regardless of the self-avatar's visual fidelity or the presence of real objects.



2013 ◽  
Vol 12 (1) ◽  
pp. 30-43
Author(s):  
Bruno Eduardo Madeira ◽  
Luiz Velho

We describe a new architecture, composed of software and hardware, for displaying stereoscopic images over a horizontal surface. It works as a "Virtual Table and Teleporter", in the sense that virtual objects depicted over a table have the appearance of real objects. This system can be used for visualization and interaction. We propose two basic configurations: the Virtual Table, consisting of a single display surface, and the Virtual Teleporter, consisting of a pair of tables for image capture and display. The Virtual Table displays either 3D computer-generated images or previously captured stereoscopic video and can be used for interactive applications. The Virtual Teleporter captures and transmits stereoscopic video from one table to the other and can be used for telepresence applications. In both configurations the images are properly deformed and displayed for horizontal 3D stereo. In the Virtual Teleporter, two cameras are pointed at the first table, capturing a stereoscopic image pair. These images are shown on the second table, which is, in fact, a stereoscopic display positioned horizontally. Many applications can benefit from this technology, such as virtual reality, games, teleconferencing, and distance learning. We present some interactive applications that we developed using this architecture.
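The "proper deformation" for horizontal stereo amounts to ray-plane intersection: each virtual 3D point is drawn where the line from the viewer's eye through the point meets the table plane. The sketch below makes that concrete; the coordinate convention (z up, table at z = 0) and the example numbers are assumptions for illustration, not the paper's calibration:

```python
def project_to_table(eye, point):
    """Where to draw a virtual 3D point on the table plane (z = 0) so it
    appears at its intended position from the given eye position.

    `eye` and `point` are (x, y, z) tuples with z measured upward from
    the table surface; the eye must be above the virtual point.
    """
    ex, ey, ez = eye
    px, py, pz = point
    t = ez / (ez - pz)  # ray parameter where eye -> point crosses z = 0
    return (ex + t * (px - ex), ey + t * (py - ey))

# A point floating 0.2 m above the table, viewed from 0.6 m up and
# 0.5 m back: it must be drawn 0.25 m beyond its footprint.
print(project_to_table((0.0, -0.5, 0.6), (0.0, 0.0, 0.2)))  # -> (0.0, 0.25)
```

Running this once per eye position (left and right) yields the stereoscopic image pair; head tracking simply updates `eye` every frame.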





Author(s):  
Gabriel Zachmann

Collision detection is one of the enabling technologies in many areas, such as virtual assembly simulation, physically-based simulation, serious games, and virtual-reality based medical training. This chapter will provide a number of techniques and algorithms that provide efficient, real-time collision detection for virtual objects. They are applicable to various kinds of objects and are easy to implement.
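One of the simplest techniques in this family, and the usual broad-phase first step, is the axis-aligned bounding-box (AABB) overlap test. The sketch below is a generic textbook version, not code from the chapter itself:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box: minimum and maximum (x, y, z) corners."""
    lo: tuple
    hi: tuple

def overlaps(a: AABB, b: AABB) -> bool:
    """Two AABBs intersect iff their extents overlap on every axis.

    This is the separating-axis test restricted to the coordinate axes:
    if any axis separates the boxes, they cannot collide.
    """
    return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i] for i in range(3))

box1 = AABB((0, 0, 0), (1, 1, 1))
box2 = AABB((0.5, 0.5, 0.5), (2, 2, 2))
box3 = AABB((2, 2, 2), (3, 3, 3))
print(overlaps(box1, box2))  # -> True
print(overlaps(box1, box3))  # -> False (separated on every axis)
```

Real-time systems run such cheap tests over a spatial hierarchy (e.g. a bounding-volume tree) and only hand the surviving pairs to exact, per-triangle narrow-phase tests.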



2020 ◽  
Vol 10 (16) ◽  
pp. 5436 ◽  
Author(s):  
Dong-Hyun Kim ◽  
Yong-Guk Go ◽  
Soo-Mi Choi

A drone must be able to fly without colliding, both to preserve its surroundings and for its own safety. In addition, it must incorporate numerous features of interest to drone users. In this paper, an aerial mixed-reality environment for first-person-view drone flying is proposed, providing an immersive experience and a safe environment for drone users by creating additional virtual obstacles when flying a drone in an open area. The proposed system is effective in perceiving the depth of obstacles and enables bidirectional interaction between the real and virtual worlds using a drone equipped with a stereo camera modeled on human binocular vision. In addition, it synchronizes the parameters of the real and virtual cameras to effectively and naturally create virtual objects in a real space. Based on user studies that included both general and expert users, we confirm that the proposed system successfully creates a mixed-reality environment with a flying drone by quickly recognizing real objects and stably combining them with virtual objects.
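Depth perception from such a stereo camera rests on the standard rectified-stereo relation Z = f · B / d. The sketch below illustrates it; the focal length, baseline, and disparity values are made-up examples, not the drone's actual calibration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a scene point from a rectified stereo pair: Z = f * B / d.

    focal_px:     focal length in pixels
    baseline_m:   horizontal separation of the two cameras in metres
                  (chosen near the human interpupillary distance)
    disparity_px: horizontal pixel shift of the point between the
                  left and right images
    """
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point effectively at infinity
    return focal_px * baseline_m / disparity_px

# A 65 mm baseline and 700 px focal length: a 9.1 px disparity
# corresponds to an obstacle about 5 m away.
print(depth_from_disparity(700.0, 0.065, 9.1))
```

With per-pixel disparities this gives a depth map of real obstacles, which is what lets the system place virtual obstacles at consistent depths in the shared scene.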


