Mobile image sensors for object detection using color segmentation

2013 ◽  
Vol 16 (4) ◽  
pp. 757-763 ◽  
Author(s):  
Sang-Hoon Kim ◽  
Young-Sik Jeong

2020 ◽  
Vol 2020 (16) ◽  
pp. 41-1-41-7
Author(s):  
Orit Skorka ◽  
Paul J. Kane

Many of the metrics developed for informational imaging are useful in automotive imaging, since many of the tasks, such as object detection and identification, are similar. This work discusses sensor characterization parameters for the Ideal Observer SNR model and elaborates on the noise power spectrum. It presents cross-correlation analysis results for matched-filter detection of a tribar pattern in sets of resolution-target images captured with three image sensors over a range of illumination levels. Lastly, the work compares the cross-correlation data to predictions made by the Ideal Observer model and demonstrates good agreement between the two methods in the relative evaluation of detection capabilities.
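
The matched-filter detection described in this abstract can be illustrated with a short sketch: cross-correlate a zero-mean tribar template against a noisy scene and locate the response peak. The template, noise level, and scene below are synthetic stand-ins, not the paper's sensor captures or illumination sweep.

```python
# Minimal sketch of matched-filter detection via 2-D cross-correlation.
# The tribar template and noisy "capture" are synthetic placeholders.
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)

# Synthetic tribar template: three vertical bars on a dark background.
template = np.zeros((21, 21))
template[:, 3:6] = 1.0
template[:, 9:12] = 1.0
template[:, 15:18] = 1.0

# Synthetic scene: template embedded in Gaussian noise at a known offset.
scene = rng.normal(0.0, 0.5, size=(128, 128))
scene[40:61, 50:71] += template

# Zero-mean the template so the correlation behaves as a matched filter.
kernel = template - template.mean()
response = correlate2d(scene, kernel, mode="same")

peak = np.unravel_index(np.argmax(response), response.shape)
print("detected centre:", peak)  # expected near (50, 60)
```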


2020 ◽  
Vol 2020 (12) ◽  
pp. 172-1-172-7 ◽  
Author(s):  
Tejaswini Ananthanarayana ◽  
Raymond Ptucha ◽  
Sean C. Kelly

CMOS image sensors play a vital role in the exponentially growing field of Artificial Intelligence (AI). Applications such as image classification, object detection and tracking are just some of the many problems now solved with the help of AI, and specifically deep learning. In this work, we target image classification to discern between six categories of fruit: fresh/rotten apples, fresh/rotten oranges, and fresh/rotten bananas. Using images captured with high-speed CMOS sensors along with lightweight CNN architectures, we show results on various edge platforms. Specifically, we show results using ON Semiconductor’s global-shutter-based, 12 MP, 90 frames-per-second image sensor (XGS-12) and ON Semiconductor’s 13 MP AR1335 image sensor feeding into MobileNetV2, implemented on NVIDIA Jetson platforms. In addition to the data captured with these sensors, we utilize an open-source fruits dataset to increase the number of training images. For image classification, we train our model on approximately 30,000 RGB images from the six categories of fruit. The model achieves an accuracy of 97% on edge platforms using ON Semiconductor’s 13 MP camera with the AR1335 sensor. In addition to the image classification model, work is currently in progress to improve the accuracy of object detection using SSD and SSDLite with MobileNetV2 as the feature extractor. In this paper, we show preliminary results of the object detection model for the same six categories of fruit.
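
As a rough illustration of the classification pipeline described above, the sketch below fine-tunes a frozen ImageNet-pretrained MobileNetV2 backbone on a six-class fruit dataset with Keras. The dataset directory, input resolution, and training schedule are illustrative assumptions rather than the authors' exact configuration, and Jetson deployment is not shown.

```python
# Minimal transfer-learning sketch for six-class fruit classification
# with MobileNetV2. Paths and hyperparameters are placeholders.
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 6  # fresh/rotten apples, oranges, bananas

train_ds = tf.keras.utils.image_dataset_from_directory(
    "fruits/train", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone for initial training

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```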


Author(s):  
Iljoo Baek ◽  
Wei Chen ◽  
Asish Chakrapani Gumparthi Venkat ◽  
Ragunathan Raj Rajkumar

Author(s):  
Konstantin A. Elshin ◽  
Elena I. Molchanova ◽  
Marina V. Usoltseva ◽  
Yelena V. Likhoshway

Using the TensorFlow Object Detection API, an approach to identifying and registering the Baikal diatom species Synedra acus subsp. radians has been tested. A set of images was compiled and the model was trained on it. It is shown that after 15,000 training iterations the total loss reached 0.04, while the classification accuracy and the bounding-box localization accuracy both reached 95%.
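
For reference, a minimal inference sketch with a detector exported from the TensorFlow Object Detection API is shown below. The SavedModel path, input image, and 0.5 score threshold are placeholders; the trained Synedra acus subsp. radians model itself is not reproduced here.

```python
# Minimal inference sketch for a detector exported from the TensorFlow
# Object Detection API. Paths and the score threshold are placeholders.
import numpy as np
import tensorflow as tf
from PIL import Image

detect_fn = tf.saved_model.load("exported_model/saved_model")

image = np.array(Image.open("diatom_sample.jpg").convert("RGB"))
input_tensor = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.uint8)

detections = detect_fn(input_tensor)
scores = detections["detection_scores"][0].numpy()
boxes = detections["detection_boxes"][0].numpy()  # [ymin, xmin, ymax, xmax], normalized

# Keep detections above an arbitrary confidence threshold.
for box, score in zip(boxes, scores):
    if score >= 0.5:
        print(f"S. acus subsp. radians candidate, score={score:.2f}, box={box}")
```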


2010 ◽  
Vol 130 (9) ◽  
pp. 1572-1580
Author(s):  
Dipankar Das ◽  
Yoshinori Kobayashi ◽  
Yoshinori Kuno
