Object identification in computational ghost imaging based on deep learning

2020, Vol 126 (10). Author(s): Jianbo Li, Mingnan Le, Jun Wang, Wei Zhang, Bin Li, ...
2017, Vol 7 (1). Author(s): Meng Lyu, Wei Wang, Hao Wang, Haichao Wang, Guowei Li, ...

2021. Author(s): Yi-Yi Huang, Chen Ouyang, Ke Fang, Yu-Feng Dong, Jie Zhang, ...

Author(s): Chané Moodley, Bereneice Sephton, Valeria Rodriguez-Fajardo, Andrew Forbes

Author(s): Rachana Raut, Ritika Deore, Saloni Bobade, Shreyanka Suryawanshi, Sunita Jahirabadkar

2020, Vol 10 (1). Author(s): Saad Rizvi, Jie Cao, Kaiyu Zhang, Qun Hao

2019, Vol 109 (6), pp. 1083-1087. Author(s): Dor Oppenheim, Guy Shani, Orly Erlich, Leah Tsror

Many plant diseases have distinct visual symptoms that can be used to identify and classify them correctly. This article presents a potato disease classification algorithm that leverages these distinct appearances and recent advances in computer vision made possible by deep learning. The algorithm trains a deep convolutional neural network to classify tubers into five classes: four disease classes and a healthy class. The database of images used in this study, containing potato tubers of different cultivars, sizes, and diseases, was acquired, classified, and labeled manually by experts. The models were trained over different train-test splits to better understand how much image data is needed to apply deep learning to such classification tasks. They were then tested on a data set of expert-tagged images taken with standard low-cost RGB (red, green, and blue) sensors, demonstrating high classification accuracy. This is the first article to report the successful application of deep convolutional networks, popular in object identification, to disease identification in potato tubers, showing the potential of deep learning techniques in agricultural tasks.
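As a rough illustration of the pipeline this abstract describes, the following is a minimal sketch of a five-class tuber classifier (four diseases plus healthy) in Keras/TensorFlow. The directory layout, image size, architecture, and hyperparameters are assumptions; the paper's actual model and training setup are not reproduced here.

```python
# Minimal sketch of a five-class potato tuber classifier (four diseases + healthy),
# assuming a Keras/TensorFlow stack and a data/<class_name>/*.jpg folder layout.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5          # four disease classes + one healthy class (from the abstract)
IMAGE_SIZE = (224, 224)  # assumed input resolution

# Load RGB images with an assumed 80/20 train-validation split.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="training", seed=42,
    image_size=IMAGE_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="validation", seed=42,
    image_size=IMAGE_SIZE, batch_size=32)

# Small convolutional network; the study's actual deep CNN configuration is not given here.
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMAGE_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Varying the `validation_split` value (or regenerating the splits with different seeds) is one simple way to reproduce the train-test-split experiments the abstract mentions.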


Energies, 2020, Vol 13 (22), pp. 6104. Author(s): Bernardo Calabrese, Ramiro Velázquez, Carolina Del-Valle-Soto, Roberto de Fazio, Nicola Ivan Giannoccaro, ...

This paper introduces a novel low-cost solar-powered wearable assistive technology (AT) device that provides continuous, real-time object recognition to help visually impaired (VI) people find objects in daily life. The system consists of three major components: a miniature low-cost camera, a system on module (SoM) computing unit, and an ultrasonic sensor. The first is worn on the user’s eyeglasses and acquires real-time video of the nearby space. The second is worn as a belt and runs deep learning-based methods and spatial algorithms that process the incoming video to detect and recognize objects. The third assists in positioning the objects found in the surrounding space. The device provides audible descriptive sentences as feedback to the user, describing the recognized objects and their position relative to the user’s gaze. Following a power consumption analysis, a wearable solar harvesting system integrated with the AT device was designed and tested to extend its energy autonomy across the different operating modes and scenarios. Experimental results obtained with the developed low-cost AT device demonstrate accurate and reliable real-time object identification, with an 86% correct recognition rate and a 215 ms average image-processing time in the high-speed SoM operating mode. The proposed system can recognize the 91 object classes of the Microsoft Common Objects in Context (COCO) dataset plus several custom objects and human faces. In addition, a simple and scalable methodology for assembling image datasets and training Convolutional Neural Networks (CNNs) is introduced to add objects to the system and expand its repertory. It is also demonstrated that comprehensive training with 100 images per target object achieves an 89% recognition rate, while fast training with only 12 images still achieves an acceptable 55%.
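As a hedged sketch of the detection-and-feedback loop described above (not the authors' actual implementation), the snippet below runs a pretrained MobileNet-based COCO detector from torchvision on a single camera frame and emits a descriptive sentence. The camera index, the partial class-name table, the confidence threshold, and the speak() audio placeholder are all assumptions.

```python
# Sketch of one detect-and-describe cycle: grab a frame from the eyeglass camera,
# detect COCO objects with a MobileNet-based SSD, and describe each detection.
import cv2
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

model = ssdlite320_mobilenet_v3_large(weights="DEFAULT").eval()

# Partial COCO id-to-name table, for illustration only.
COCO_NAMES = {1: "person", 44: "bottle", 62: "chair", 84: "book"}

def speak(sentence: str) -> None:
    # Placeholder for the device's audio feedback channel (hypothetical).
    print(sentence)

cap = cv2.VideoCapture(0)  # eyeglass-mounted camera, assumed at index 0
ok, frame = cap.read()
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]
    width = frame.shape[1]
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        if score < 0.5:
            continue
        name = COCO_NAMES.get(int(label), "object")
        # Coarse horizontal position relative to the user's gaze.
        cx = float(box[0] + box[2]) / 2.0
        side = "left" if cx < width / 3 else "right" if cx > 2 * width / 3 else "ahead"
        speak(f"A {name} is {side} of you.")
cap.release()
```

In the actual device the ultrasonic sensor would refine distance, and the loop would run continuously on the SoM rather than on a single frame.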


2020, Vol 134, pp. 106183. Author(s): Heng Wu, Ruizhou Wang, Genping Zhao, Huapan Xiao, Jian Liang, ...

Author(s): S Gopi Naik

Abstract: The aim is to establish an integrated system that can manage high-quality visual information and detect weapons quickly and efficiently. This is achieved by integrating ARM-based computer vision and optimization algorithms with deep neural networks capable of detecting the presence of a threat. The whole system is connected to a Raspberry Pi module, which captures the live broadcast and evaluates it using a deep convolutional neural network. Because object identification is tightly coupled with real-time video and image analysis, approaches that build sophisticated ensembles combining low-level image features with high-level information from object detectors and scene classifiers quickly reach a performance plateau. Deep learning models, which can learn semantic, high-level, deeper features, have been developed to overcome the limitations of such optimization-based pipelines. This work reviews deep learning-based object detection frameworks that use convolutional neural network layers to provide a better understanding of object detection. The Mobile-Net SSD model differs from other detectors in network design, training methods, and optimization functions, among other things. Weapon detection has helped reduce the crime rate in suspicious areas; nevertheless, security remains a major concern in human life, and Raspberry Pi-based computer vision has been widely used for the detection and monitoring of weapons. With the growing need for human safety, privacy, and live-broadcast systems that can detect and analyse images, the monitoring of suspicious areas is becoming indispensable in intelligence work. This system uses a Mobile-Net SSD algorithm to achieve automatic weapon and object detection. Keywords: Computer Vision, Weapon and Object Detection, Raspberry Pi Camera, RTSP, SMTP, Mobile-Net SSD, CNN, Artificial Intelligence.
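To make the processing chain concrete, here is a minimal sketch, under stated assumptions, of a Mobile-Net SSD detection step with an SMTP alert, using OpenCV's DNN module on an RTSP stream. The model files, stream URL, weapon class index, and e-mail addresses are placeholders, not the paper's actual configuration.

```python
# Sketch: read one frame from an RTSP stream, run a Caffe Mobile-Net SSD model
# through OpenCV's DNN module, and e-mail an alert if a weapon class is detected.
import smtplib
from email.message import EmailMessage

import cv2

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")  # assumed custom-trained files
WEAPON_CLASS_ID = 1          # hypothetical index of the weapon class in the custom model
CONFIDENCE_THRESHOLD = 0.6

cap = cv2.VideoCapture("rtsp://raspberrypi.local:8554/stream")  # assumed RTSP source

def send_alert(confidence: float) -> None:
    # Hypothetical SMTP alert; server and addresses are placeholders.
    msg = EmailMessage()
    msg["Subject"] = f"Weapon detected (confidence {confidence:.2f})"
    msg["From"] = "camera@example.com"
    msg["To"] = "security@example.com"
    with smtplib.SMTP("localhost") as server:
        server.send_message(msg)

ok, frame = cap.read()
if ok:
    # Standard Mobile-Net SSD preprocessing: 300x300 input, scaled to roughly [-1, 1].
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 scalefactor=0.007843, size=(300, 300),
                                 mean=127.5)
    net.setInput(blob)
    detections = net.forward()  # shape: (1, 1, N, 7) = [image_id, class_id, conf, x1, y1, x2, y2]
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        class_id = int(detections[0, 0, i, 1])
        if class_id == WEAPON_CLASS_ID and confidence > CONFIDENCE_THRESHOLD:
            send_alert(confidence)
cap.release()
```

In a deployed system this would run in a continuous loop on the Raspberry Pi, with rate-limiting on the alerts so a single event does not flood the SMTP recipient.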

