Autoregressive identification method for partially occluded industrial object recognition (Abstract Only)
1991
Author(s): Dan Ionescu ◽ Tayeb Damerji


2012 ◽ Vol 24 (11) ◽ pp. 2248-2261
Author(s): Dean Wyatte ◽ Tim Curran ◽ Randall O'Reilly

Everyday vision requires robustness to a myriad of environmental factors that degrade stimuli. Foreground clutter can occlude objects of interest, and complex lighting and shadows can decrease the contrast of items. How does the brain recognize visual objects despite these low-quality inputs? On the basis of predictions from a model of object recognition that contains excitatory feedback, we hypothesized that recurrent processing would promote robust recognition when objects were degraded, by strengthening bottom-up signals that were weakened because of occlusion and contrast reduction. To test this hypothesis, we used backward masking to interrupt the processing of partially occluded and contrast-reduced images during a categorization experiment. As predicted by the model, we found significant interactions between the mask and occlusion and between the mask and contrast, such that recognition of heavily degraded stimuli was differentially impaired by masking. The model provided a close fit to these results in an isomorphic version of the experiment with identical stimuli. The model also provided an intuitive explanation of the interactions between the mask and the degradations: masking interfered specifically with the extensive recurrent processing necessary to amplify and resolve highly degraded inputs, whereas less degraded inputs required little amplification, could be rapidly resolved, and were therefore less susceptible to masking. Together, the results of the experiment and the accompanying model simulations illustrate the limits of feedforward vision and suggest that object recognition is better characterized as a highly interactive, dynamic process that depends on the coordination of multiple brain areas.
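The abstract attributes the masking effects to recurrent amplification of weakened bottom-up signals. The toy simulation below is only an illustrative sketch (it is not the authors' model, and every parameter value is an assumption): a unit's activation is repeatedly amplified by excitatory feedback until it crosses a recognition threshold, and a backward mask simply cuts processing off after a few steps, which disproportionately hurts inputs whose bottom-up drive was weakened by occlusion or low contrast.

# Toy simulation (not the published model): recurrent amplification of a
# weakened bottom-up signal, interrupted by a backward mask. All numbers
# below are illustrative assumptions.

def steps_to_recognize(bottom_up, gain=0.3, threshold=1.0, max_steps=50):
    """Number of recurrent steps needed for activation to reach threshold."""
    activation = bottom_up
    for step in range(1, max_steps + 1):
        if activation >= threshold:
            return step
        activation += gain * activation  # excitatory feedback amplifies the signal
    return None  # never recognized within the allotted time

def recognized_despite_mask(bottom_up, mask_onset_steps):
    """A backward mask halts recurrent processing after mask_onset_steps."""
    steps = steps_to_recognize(bottom_up)
    return steps is not None and steps <= mask_onset_steps

for label, drive in [("intact", 0.9), ("mildly degraded", 0.6), ("heavily degraded", 0.3)]:
    print(f"{label:17s} steps needed: {steps_to_recognize(drive)}"
          f" | survives early mask: {recognized_despite_mask(drive, mask_onset_steps=3)}")

With these illustrative numbers, the intact and mildly degraded inputs cross threshold within three steps and survive the mask, whereas the heavily degraded input needs six steps and is lost, mirroring the interaction reported in the experiment.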


Perception ◽ 10.1068/p3441 ◽ 2002 ◽ Vol 31 (11) ◽ pp. 1299-1312
Author(s): Norma T DiPietro ◽ Edward A Wasserman ◽ Michael E Young

Casual observation suggests that pigeons and other animals can recognize occluded objects; yet laboratory research has thus far failed to show that pigeons can do so. In a series of experiments, we investigated pigeons' ability to ‘name’ shaded, textured stimuli by associating each with a different response. After first learning to recognize four unoccluded objects, pigeons had to recognize the objects when they were partially occluded by another surface or when they were placed on top of another surface; in each case, recognition was weak. Following training with the unoccluded stimuli and with the stimuli placed on top of the occluder, pigeons' recognition of occluded objects dramatically improved. Pigeons' improved recognition of occluded objects was not limited to the trained objects but transferred to novel objects as well. Evidently, the recognition of occluded objects requires pigeons to learn to discriminate the object from the occluder; once this discrimination is mastered, occluded objects can be better recognized.


2018 ◽ Vol 21 (4) ◽ pp. 1167-1183
Author(s): Oung Tak You ◽ Dong Sung Pae ◽ Sung Hee Kim ◽ Kyeong Eun Kim ◽ Myo Taeg Lim ◽ ...

2018 ◽ Vol 8 (10) ◽ pp. 1857
Author(s): Jing Yang ◽ Shaobo Li ◽ Zong Gao ◽ Zheng Wang ◽ Wei Liu

In complex industrial scenes, background clutter, variable illumination, the visual similarity between different types of precision parts, and the high-speed movement of conveyor belts pose immense challenges for the recognition of precision parts. This study presents a real-time object recognition method for 0.8 cm darning needles and KR22 bearing machine parts against a complex industrial background. First, we propose a data augmentation algorithm based on directional flipping and establish two datasets: real data and augmented data. To increase recognition accuracy and reduce computation time, we design a multilayer feature fusion network to extract feature information. Subsequently, we propose an accurate classification method for precision parts based on non-maximum suppression, yielding an improved You Only Look Once (YOLO) V3 network. We implement this method on our real-time industrial object detection experimental platform and compare it with other models. Finally, experiments on the real and augmented datasets show that the proposed method outperforms the original YOLO V3 algorithm in recognition accuracy and robustness.
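The abstract names non-maximum suppression as the basis of its classification step but gives no implementation details. The sketch below is just the standard NMS procedure with illustrative thresholds, box formats, and class labels; it is not the authors' exact method.

# Generic non-maximum suppression (NMS) sketch; thresholds, box format, and
# class labels are illustrative assumptions, not the paper's implementation.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(detections, iou_threshold=0.5):
    """Keep the highest-scoring box and drop overlapping lower-scoring ones.

    detections: list of (box, score, class_label) tuples from the detector.
    """
    kept = []
    for det in sorted(detections, key=lambda d: d[1], reverse=True):
        # Suppress a candidate only if it overlaps a kept box of the same class.
        if all(iou(det[0], k[0]) < iou_threshold
               for k in kept if k[2] == det[2]):
            kept.append(det)
    return kept

# Example: two overlapping candidates for the same needle; only one survives.
candidates = [((10, 10, 60, 60), 0.92, "darning_needle"),
              ((12, 12, 62, 62), 0.80, "darning_needle"),
              ((100, 40, 150, 90), 0.88, "kr22_bearing")]
print(nms(candidates))

In a detector such as YOLO V3, this per-class filtering is applied to the raw candidate boxes so that each part on the conveyor belt is reported only once.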


Perception ◽ 2020 ◽ pp. 030100662098372
Author(s): Eli Brenner ◽ Sergio Sánchez Hurtado ◽ Elena Alvarez Arias ◽ Jeroen B. J. Smeets ◽ Roland W. Fleming

Does recognizing the transformations that gave rise to an object’s retinal image contribute to early object recognition? It might, because finding a partially occluded object among similar objects that are not occluded is more difficult than finding an object that has the same retinal image shape without evident occlusion. If this is because the occlusion is recognized as such, we might see something similar for other transformations. We confirmed that it is difficult to find a cookie with a section missing when this was the result of occlusion. It is not more difficult to find a cookie from which a piece has been bitten off than to find one that was baked in a similar shape. On the contrary, the bite marks help detect the bitten cookie. Thus, biting off a part of a cookie has very different effects on visual search than occluding part of it. These findings do not support the idea that observers rapidly and automatically compensate for the ways in which objects’ shapes are transformed to give rise to the objects’ retinal images. They are easy to explain in terms of detecting characteristic features in the retinal image that such transformations may hide or create.

