Real-time 2D feature detection with low-level image processing algorithms on smart CCD/CMOS image sensors

Author(s):  
T. Spirig ◽  
P. Seitz ◽  
O. Vietze ◽  
F. Heitger ◽  
O. Kubler

2012 ◽  
Vol 40 (12) ◽  
pp. 3485-3492 ◽  
Author(s):  
Márcio Portes de Albuquerque ◽  
Marcelo Portes de Albuquerque ◽  
Germano T. Chacon ◽  
E. L. de Faria ◽  
Andrea Murari

Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1932
Author(s):  
Malik Haris ◽  
Adam Glowacz

Automated driving and vehicle safety systems depend on object detection, which must be accurate overall, robust to weather and environmental conditions, and able to run in real time. Consequently, these systems rely on image processing algorithms to inspect the contents of images. This article compares the accuracy of five major image processing algorithms: Region-based Fully Convolutional Network (R-FCN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot Multi-Box Detector (SSD), RetinaNet, and You Only Look Once v4 (YOLOv4). The comparative analysis uses the large-scale Berkeley Deep Drive (BDD100K) dataset. The strengths and limitations of each algorithm are analyzed in terms of accuracy (with and without occlusion and truncation), computation time, and precision-recall curves. The comparison given in this article is helpful for understanding the pros and cons of standard deep-learning-based algorithms operating under real-time deployment restrictions. We conclude that, in an identical testing environment, YOLOv4 is the most accurate at detecting difficult road target objects under complex road scenarios and weather conditions.
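As a minimal sketch of how detector accuracy comparisons like the one above are typically scored: each detection is matched to a ground-truth box by intersection-over-union (IoU), and the matches yield precision and recall. The box format, threshold value, and function names below are illustrative assumptions, not the paper's actual evaluation code.

```python
# Boxes are (x1, y1, x2, y2) tuples in pixel coordinates (assumption).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(detections, ground_truth, iou_thresh=0.5):
    """Greedy one-to-one matching of detections to ground-truth boxes.

    A detection counts as a true positive if it overlaps an unmatched
    ground-truth box with IoU >= iou_thresh.
    """
    matched = set()
    tp = 0
    for det in detections:
        best, best_iou = None, iou_thresh
        for i, gt in enumerate(ground_truth):
            if i in matched:
                continue
            overlap = iou(det, gt)
            if overlap >= best_iou:
                best, best_iou = i, overlap
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(detections) - tp          # unmatched detections
    fn = len(ground_truth) - tp        # missed ground-truth boxes
    precision = tp / (tp + fp) if detections else 0.0
    recall = tp / (tp + fn) if ground_truth else 0.0
    return precision, recall
```

Sweeping the detector's confidence threshold and recomputing these two numbers at each setting traces out the precision-recall curve the article uses for comparison.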


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Juan Mompeán ◽  
Juan L. Aragón ◽  
Pablo Artal

Abstract: A novel portable device has been developed and built to dynamically and automatically correct presbyopia by means of a pair of opto-electronic lenses driven by pupil tracking. The system is fully portable and provides a wide defocus-correction range of up to 10 D. The glasses are controlled and powered by a smartphone. To achieve a truly real-time response, the image processing algorithms were implemented in OpenCL and run on the smartphone's GPU. To validate the system, different visual experiments were carried out in presbyopic subjects. Visual acuity remained nearly constant over a range of distances from 5 m to 20 cm.
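A core step in pupil-tracking pipelines of this kind is locating the pupil in each camera frame, commonly by finding the centroid of the darkest region. The following is a minimal sketch of that idea in pure Python; the threshold value, frame layout, and function name are illustrative assumptions, not the authors' OpenCL implementation.

```python
def pupil_center(frame, threshold=50):
    """Centroid of pixels darker than `threshold` in a grayscale frame.

    `frame` is a list of rows of 0-255 intensity values (assumption).
    Returns (x, y) in pixel coordinates, or None if no pixel is dark
    enough to plausibly belong to the pupil.
    """
    xs, ys = 0.0, 0.0
    count = 0
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value < threshold:   # dark pixel: candidate pupil pixel
                xs += x
                ys += y
                count += 1
    if count == 0:
        return None
    return xs / count, ys / count   # average position of dark pixels
```

In a real system this per-frame centroid would be computed on the GPU (as the authors do with OpenCL) and fed to the lens driver, since the whole loop must finish within one frame interval to keep the correction real-time.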

