A prominent problem in computer vision is occlusion, which occurs when
an object's key features temporarily disappear behind another crossing
body, causing the computer to struggle with object detection. While the
human brain can compensate for the invisible parts of the
blocked object, computers lack such scene-interpretation skills. Cloud
computing using convolutional neural networks is typically the method of
choice for handling such a scenario. However, for mobile applications
where energy consumption and computational costs are critical, cloud
computing should be minimized. In this regard, we propose a computer
vision sensor capable of efficiently detecting and tracking covered
objects without heavy reliance on occlusion-handling software. Our
edge-computing sensor accomplishes this task by self-learning the object
prior to the moment of occlusion and using this information to
"reconstruct" the blocked, invisible features. Furthermore, the sensor
is capable of tracking a moving object by predicting the path it will
most likely take while travelling out of sight behind an obstructing
body. Finally, sensor operation is demonstrated by exposing the device
to various simulated occlusion events.
Keywords: Computer vision, occlusion handling, edge computing,
object tracking, dye-sensitized solar cell.
Corresponding author email: [email protected]