Automatic Reel Editing in Chip on Film Quality Control With Computer Vision

Author(s):  
Shing Hwang Doong

Chip on film (COF) is a special packaging technology that packs integrated circuits in a flexible carrier tape. Chips packed with COF are primarily used in the display industry. Reel editing is a critical step in COF quality control to remove sections of congregating NG (not good) chips from a reel. Today, COF manufacturers hire workers to count consecutive NG chips on a rolling reel with the naked eye. When the count exceeds a preset number, the corresponding section is removed. A novel method using object detection and object tracking is proposed to solve this problem. Object detection techniques including convolutional neural networks (CNN), template matching (TM), and the scale-invariant feature transform (SIFT) were used to detect NG marks, and object tracking was used to track them with IDs so that congregating NG chips could be counted reliably. In experiments on simulation videos resembling worksite scenes, both CNN and TM detectors solved the reel editing problem, while SIFT detectors failed. Furthermore, TM outperformed CNN by yielding a real-time solution.
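The two building blocks of the pipeline above can be sketched in pure Python: template matching via normalized cross-correlation to detect NG marks, and a run-length count of consecutive NG chips to decide whether a reel section should be cut. This is a minimal stand-in, not the paper's implementation; frames and templates are small 2-D grayscale lists, and thresholds are illustrative.

```python
# Sketch of the TM-based reel-editing pipeline (illustrative, not the
# paper's code): detect NG marks by normalized cross-correlation, then
# find the longest run of consecutive NG chips.

def ncc(patch, template):
    """Normalized cross-correlation between two equal-size 2-D patches."""
    pa = [v for row in patch for v in row]
    ta = [v for row in template for v in row]
    mp, mt = sum(pa) / len(pa), sum(ta) / len(ta)
    num = sum((p - mp) * (t - mt) for p, t in zip(pa, ta))
    dp = sum((p - mp) ** 2 for p in pa) ** 0.5
    dt = sum((t - mt) ** 2 for t in ta) ** 0.5
    return num / (dp * dt) if dp and dt else 0.0

def detect_ng(frame, template, threshold=0.9):
    """Slide the template over the frame; return top-left corners of matches."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    hits = []
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = [row[x:x + tw] for row in frame[y:y + th]]
            if ncc(patch, template) >= threshold:
                hits.append((y, x))
    return hits

def longest_ng_run(chip_labels):
    """Longest run of consecutive NG chips; if it exceeds the preset
    number, the corresponding reel section would be removed."""
    best = run = 0
    for is_ng in chip_labels:
        run = run + 1 if is_ng else 0
        best = max(best, run)
    return best
```

In the full system, the per-frame detections would be linked by an object tracker assigning IDs, so the same NG mark is not double-counted as the reel rolls.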

2010 · Vol. 36 · pp. 413-421
Author(s):  
Hideaki Kawano ◽  
Hideaki Orii ◽  
Katsuaki Shiraishi ◽  
Hiroshi Maeda

Autonomous robots have reached an advanced stage in various fields, and they are expected to work autonomously in nursing care or medical care settings in the near future. In this paper, we focus on the task of counting objects in images. Since the number of objects is not a mere physical quantity, it is difficult for conventional physical sensors to measure, and intelligent sensing with higher-order recognition is required to accomplish such a counting task. We often count objects in various situations: with only a few objects, we can recognize the number at a glance, but with a dozen or more, counting becomes troublesome. Thus, a simple and easy way to enumerate objects automatically is desirable. In this study, we propose a method to recognize the number of objects in an image. In general, the target object to count varies according to the user's request. To accommodate these varied requests, the region belonging to the desired object in the image is selected as a template. The main process of the proposed method is to search for and count regions that resemble the template. To achieve robustness against spatial transformations such as translation, rotation, and scaling, the scale-invariant feature transform (SIFT) is employed as the feature. To show its effectiveness, the proposed method is applied to a few images containing everyday objects, e.g., binders and cans.
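The core of matching template regions with SIFT is comparing keypoint descriptors between the template and the scene, typically with Lowe's ratio test to discard ambiguous matches. A minimal sketch follows, assuming descriptors are plain lists of floats; real SIFT descriptors (128-D) would come from a feature extractor, which is not reproduced here.

```python
# Sketch: SIFT-style descriptor matching with Lowe's ratio test
# (illustrative; descriptors here are tiny toy vectors).

def dist(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ratio_match(template_desc, scene_desc, ratio=0.75):
    """Match each template descriptor to its nearest scene descriptor,
    keeping it only when the nearest is clearly closer than the second
    nearest (the ratio test rejects ambiguous matches)."""
    matches = []
    for i, td in enumerate(template_desc):
        ds = sorted((dist(td, sd), j) for j, sd in enumerate(scene_desc))
        if len(ds) >= 2 and ds[0][0] < ratio * ds[1][0]:
            matches.append((i, ds[0][1]))
    return matches
```

Counting would then amount to clustering the matched scene keypoints into spatially coherent groups, one group per detected object instance.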


2014 · Vol. 7 (3)
Author(s):  
Kentaro Takemura ◽  
Tomohisa Yamakawa ◽  
Jun Takamatsu ◽  
Tsukasa Ogasawara

Researchers are considering the use of eye tracking in head-mounted camera systems, such as Google's Project Glass. Typical methods require detailed calibration in advance, but long periods of use disrupt the calibration between the eye and the scene camera. In addition, the focused object might not be estimated even if the point-of-regard is estimated using a portable eye-tracker. Therefore, we propose a novel method for estimating the object that a user is focused upon, in which an eye camera captures the reflection on the corneal surface. Eye and environment information can be extracted from the corneal surface image simultaneously. We use inverse ray tracing to rectify the reflected image and the scale-invariant feature transform to estimate the object at which the point-of-regard is located. Unwarped images can also be generated continuously from corneal surface images. We consider that our proposed method could be applied to a guidance system, and we confirmed the feasibility of this application in experiments that estimated the focused object and the point-of-regard.
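The geometric core of inverse ray tracing over a corneal surface is reflecting rays off an (approximately spherical) cornea using the law of reflection, r = d − 2(d·n)n, with n the surface normal of the sphere. The sketch below shows only this core step under that spherical-cornea assumption; the paper's full unwarping pipeline is not reproduced.

```python
# Sketch: ray reflection off a spherical corneal surface (illustrative).

def reflect(d, n):
    """Reflect direction vector d about a unit surface normal n:
    r = d - 2 (d . n) n."""
    dot = sum(a * b for a, b in zip(d, n))
    return [a - 2 * dot * b for a, b in zip(d, n)]

def sphere_normal(p, c):
    """Outward unit normal of a sphere centered at c, at surface point p."""
    d = [a - b for a, b in zip(p, c)]
    norm = sum(x * x for x in d) ** 0.5
    return [x / norm for x in d]
```

Tracing such reflected rays backwards from each eye-camera pixel is what lets the reflected scene image be rectified into an unwarped view.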


Author(s):  
Sari Awwad ◽  
Bashar Igried ◽  
Mohammad Wedyan ◽  
Mohammad Alshira'H

<div>Object detection is considered a hot research topic in applications of artificial intelligence and computer vision. Historically, object detection has been widely used in various fields such as surveillance, fine-grained activity recognition, and robotics. Most studies focus on improving the accuracy of object detection in images, whether of indoor or outdoor scenes. This paper improves feature extraction and proposes a crossed sliding window approach using existing classifiers for object detection. The contribution has two parts: first, improving the local depth pattern feature alongside SIFT; second, a new crossed sliding window technique using two different types of images (color and depth). Two types of features, local depth patterns for detection (LDPD) and the scale-invariant feature transform (SIFT), were merged into one feature vector. The RGB-D Object dataset was used; it consists of 300 different objects and includes thousands of scenes. The proposed approach achieved higher results compared to the other or separated features used in this paper. All experiments and comparisons were performed on the same dataset with the same objective. Experimental results report high accuracy in terms of detection rate, recall, precision, and F1 score in RGB-D scenes.</div>
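Merging two descriptors into one feature vector is commonly done by normalizing each part and concatenating, so neither feature's scale dominates the fused vector. The sketch below shows this pattern with L2 normalization; the names and the exact normalization are illustrative assumptions, not the paper's specification.

```python
# Sketch: fusing two feature descriptors (e.g. LDPD and SIFT) into a
# single vector by per-part L2 normalization + concatenation
# (illustrative; the paper's exact fusion scheme may differ).

def l2_normalize(v):
    """Scale a vector to unit L2 norm (zero vectors pass through)."""
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v] if norm else list(v)

def fuse_features(ldpd_vec, sift_vec):
    """Concatenate the normalized parts into one merged feature vector."""
    return l2_normalize(ldpd_vec) + l2_normalize(sift_vec)
```

A classifier scanned over crossed sliding windows would then consume this merged vector at each window position in the color and depth images.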

