REAL TIME OBJECT DETECTION USING YOLO ALGORITHM

Author(s):  
Bobburi Taralathasri ◽  
Dammati Vidya Sri ◽  
Gadidammalla Narendra Kumar ◽  
Annam Subbarao ◽  
Palli R Krishna Prasad

Wide-ranging applications such as driverless cars, robots, and image surveillance have become prominent in computer vision. Computer vision is the core of all these applications, as it is responsible for image detection. The proposed "Object Detection System using Deep Learning Technique" detects objects efficiently based on the YOLO algorithm, applying it to image data to detect objects.
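YOLO predicts many overlapping candidate boxes per image, which are conventionally pruned with non-maximum suppression before results are reported. The abstract includes no code, so the following is a minimal, illustrative pure-Python sketch of that post-processing step (the function names and the (x1, y1, x2, y2) box format are assumptions, not from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop overlapping lower-scoring ones,
    and repeat; returns the indices of the surviving boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

Production detectors apply the same idea per class, usually with vectorized implementations.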

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2611
Author(s):  
Andrew Shepley ◽  
Greg Falzon ◽  
Christopher Lawson ◽  
Paul Meek ◽  
Paul Kwan

Image data is one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is time- and resource-expensive, particularly in the context of camera trapping. Deep learning models have been used to achieve this task but are often not suited to specific applications due to their inability to generalise to new environments and inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to achieve this are a key barrier to the accessibility of this technology to ecologists. Thus, there is a strong need to democratise access to deep learning technologies by providing an easy-to-use software application that allows non-technical users to train custom object detectors. U-Infuse addresses this issue by giving ecologists the ability to train customised models using publicly available images and/or their own images without specific technical expertise. Auto-annotation and annotation-editing functionalities minimise the burden of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multi-class and single-class training and object detection, allowing ecologists to access deep learning technologies usually only available to computer scientists, on their own device, customised for their application, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily achieve object detection within a user-friendly GUI, generating a species distribution report and other useful statistics, (ii) custom-train deep learning models using publicly available and custom training data, and (iii) achieve supervised auto-annotation of images for further training, with the benefit of editing annotations to ensure quality datasets.
Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and the use of transfer learning mean that domain-specific models can be trained rapidly and frequently updated without the need for computer science expertise or data sharing, protecting intellectual property and privacy.


Author(s):  
Jesus Benito-Picazo ◽  
Enrique Dominguez ◽  
Esteban J. Palomo ◽  
Ezequiel Lopez-Rubio ◽  
Juan Miguel Ortiz-de-Lazcano-Lobato

2020 ◽  
Vol 10 (7) ◽  
pp. 2511
Author(s):  
Young-Joo Han ◽  
Ha-Jin Yu

As defect detection using machine vision diversifies and expands, approaches using deep learning are increasing. Recently, there has been much research on detecting and classifying defects using image segmentation, image detection, and image classification. These methods are effective but require a large amount of actual defect data, which is very difficult to obtain in industrial settings. To overcome this problem, we propose a method for defect detection using stacked convolutional autoencoders. The proposed autoencoders are trained using only non-defect data and synthetic defect data generated from the characteristics of defects based on expert knowledge. A key advantage of our approach is that actual defect data are not required, and we verified that its performance is comparable to that of systems trained using real defect data.
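The core idea can be sketched as a reconstruction-error test: an autoencoder trained only on non-defect images reconstructs normal samples well, so a large reconstruction error suggests a defect. A minimal, hypothetical sketch of that scoring step (the `reconstruct` callable stands in for the trained stacked convolutional autoencoder; the names and the threshold are illustrative, not from the paper):

```python
def reconstruction_error(original, reconstructed):
    """Mean squared error between a flattened image and its autoencoder output."""
    n = len(original)
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n

def flag_defects(images, reconstruct, threshold):
    """Label each image defective when its reconstruction error
    exceeds the chosen threshold."""
    return [reconstruction_error(img, reconstruct(img)) > threshold
            for img in images]
```

In practice the threshold would be calibrated on held-out non-defect data, e.g. from the distribution of their reconstruction errors.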


Electronics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 583 ◽  
Author(s):  
Khang Nguyen ◽  
Nhut T. Huynh ◽  
Phat C. Nguyen ◽  
Khanh-Duy Nguyen ◽  
Nguyen D. Vo ◽  
...  

Unmanned aircraft systems, or drones, enable us to record or capture many scenes from a bird's-eye view, and they have been rapidly deployed across a wide range of practical domains, e.g., agriculture, aerial photography, fast delivery, and surveillance. Object detection is one of the core steps in understanding videos collected from drones. However, this task is very challenging due to the unconstrained viewpoints and low resolution of the captured videos. While modern deep-learning object detectors have recently achieved great success on general benchmarks, e.g., PASCAL-VOC and MS-COCO, the robustness of these detectors on aerial images captured by drones is not well studied. In this paper, we present an evaluation of state-of-the-art deep-learning detectors, including Faster R-CNN (Faster Region-based CNN), R-FCN (Region-based Fully Convolutional Networks), SNIPER (Scale Normalization for Image Pyramids with Efficient Resampling), Single-Shot Detector (SSD), YOLO (You Only Look Once), RetinaNet, and CenterNet, for object detection in videos captured by drones. We conduct experiments on the VisDrone2019 dataset, which contains 96 videos with 39,988 annotated frames, and provide insights into efficient object detectors for aerial images.
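Benchmark evaluations like this one typically match score-ranked detections to ground-truth boxes at an IoU threshold (commonly 0.5) to label each detection a true or false positive. A minimal sketch of that matching step, assuming an (x1, y1, x2, y2) box format; this is illustrative, not the authors' evaluation code:

```python
def box_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def match_detections(detections, ground_truth, iou_thr=0.5):
    """Greedily match score-ranked detections to ground-truth boxes;
    return a True/False (TP/FP) flag per detection in score order."""
    unmatched = list(range(len(ground_truth)))
    flags = []
    for det in sorted(detections, key=lambda d: d["score"], reverse=True):
        best, best_iou = None, iou_thr
        for g in unmatched:
            v = box_iou(det["box"], ground_truth[g])
            if v >= best_iou:
                best, best_iou = g, v
        if best is not None:
            unmatched.remove(best)  # each ground-truth box matches at most once
            flags.append(True)
        else:
            flags.append(False)
    return flags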


Nanoscale ◽  
2019 ◽  
Vol 11 (44) ◽  
pp. 21266-21274 ◽  
Author(s):  
Omid Hemmatyar ◽  
Sajjad Abdollahramezani ◽  
Yashar Kiarashinejad ◽  
Mohammadreza Zandehshahvar ◽  
Ali Adibi

Here, for the first time to our knowledge, a Fano resonance metasurface made of HfO2 is experimentally demonstrated to generate a wide range of colors. We use a novel deep-learning technique to design and optimize the metasurface.


Author(s):  
Shideh Saraeian ◽  
Mahya Mohammadi Golchi

The comprehensive development of computer networks has caused an increase in Distributed Denial of Service (DDoS) attacks. These attacks can easily restrict communication and computing. Among previous studies, the accuracy of attack detection has not been properly addressed. In this study, a deep learning technique is used in a hybrid network-based Intrusion Detection System (IDS) to detect network intrusions. The performance of the proposed technique is evaluated on the NSL-KDD and ISCXIDS 2012 datasets. We performed visual traffic analysis using the Wireshark tool and conducted experiments to demonstrate the superiority of the proposed method. The results show that our proposed method achieves higher accuracy than other widely used machine learning techniques.
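Detection accuracy of the kind compared here is derived from the confusion matrix of predicted versus actual labels; for intrusion detection, precision and recall matter as much as raw accuracy because attack traffic is usually rare. A small, generic sketch (not the authors' code) for binary intrusion labels:

```python
def detection_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for binary labels (1 = attack)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }
```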


Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1174
Author(s):  
Ashish Kumar Gupta ◽  
Ayan Seal ◽  
Mukesh Prasad ◽  
Pritee Khanna

Detection and localization of image regions that attract immediate human visual attention is currently an intensive area of research in computer vision. The capability to automatically identify and segment such salient image regions has immediate consequences for applications in computer vision, computer graphics, and multimedia. A large number of salient object detection (SOD) methods have been devised to effectively mimic the capability of the human visual system to detect salient regions in images. These methods can be broadly categorized into two groups based on their feature-engineering mechanism: conventional or deep learning-based. In this survey, most of the influential advances in image-based SOD from both the conventional and deep learning-based categories are reviewed in detail. Relevant saliency-modeling trends, with key issues, core techniques, and the scope for future research, are discussed in the context of difficulties often faced in salient object detection. Results are presented for various challenging cases on some large-scale public datasets. Different metrics considered for assessing the performance of state-of-the-art salient object detection models are also covered. Some future directions for SOD are presented towards the end.
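One of the simplest metrics used in SOD benchmarking is the mean absolute error (MAE) between a predicted saliency map and the binary ground-truth mask. A minimal illustrative sketch, assuming both maps are flattened to values in [0, 1]:

```python
def saliency_mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    binary ground-truth mask, both given as flat lists in [0, 1]."""
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)
```

Lower is better; surveys typically report MAE alongside threshold-based measures such as the F-measure.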


2018 ◽  
Vol 155 ◽  
pp. 01016 ◽  
Author(s):  
Cuong Nguyen The ◽  
Dmitry Shashev

Video files store motion pictures and sound as they occur in real life. Today, the need for automated processing of the information in video files is increasing. Automated processing has a wide range of applications, including office and home surveillance cameras, traffic control, sports applications, remote object detection, and others. In particular, detecting and tracking object movement in video files plays an important role. This article describes methods for detecting objects in video files, a problem in computer vision that is currently being studied worldwide.
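A classic baseline among the detection methods such an article covers is frame differencing: motion is flagged when enough pixels change between consecutive frames. A deliberately simplified sketch (the thresholds and the flat grayscale-frame format are assumptions for illustration):

```python
def detect_motion(prev_frame, curr_frame, diff_thr=25, min_changed=5):
    """Flag motion when enough pixels change between two consecutive
    grayscale frames, each given as a flat list of 0-255 intensities."""
    changed = sum(1 for a, b in zip(prev_frame, curr_frame)
                  if abs(a - b) > diff_thr)
    return changed >= min_changed
```

Real systems add background modelling and morphological filtering on top of this idea to suppress noise.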


2020 ◽  
Vol 63 (6) ◽  
pp. 1969-1980
Author(s):  
Ali Hamidisepehr ◽  
Seyed V. Mirnezami ◽  
Jason K. Ward

Highlights: Corn damage detection was possible using advanced deep learning and computer vision techniques trained with images of simulated corn lodging. RetinaNet and YOLOv2 both worked well at identifying regions of lodged corn. Automating crop damage identification could provide useful information to producers and other stakeholders from visual-band UAS imagery.

Abstract. Severe weather events can cause large financial losses to farmers. Detailed information on the location and severity of damage will assist farmers, insurance companies, and disaster response agencies in making wise post-damage decisions. The goal of this study was a proof-of-concept to detect areas of damaged corn from aerial imagery using computer vision and deep learning techniques. A specific objective was to compare existing object detection algorithms to determine which is best suited for corn damage detection. Simulated corn lodging was used to create a training and analysis data set. An unmanned aerial system equipped with an RGB camera was used for image acquisition. Three popular object detectors (Faster R-CNN, YOLOv2, and RetinaNet) were assessed for their ability to detect damaged areas. Average precision (AP) was used to compare the object detectors. RetinaNet and YOLOv2 demonstrated robust capability for corn damage identification, with AP ranging from 73.24% to 98.43% and from 55.99% to 97.0%, respectively, across all conditions. Faster R-CNN did not perform as well as the other two models, with AP between 14.47% and 77.29% for all conditions. Detecting corn damage at later growth stages was more difficult for all three object detectors. Keywords: Computer vision, Faster R-CNN, RetinaNet, Severe weather, Smart farming, YOLO.
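Average precision summarizes the precision-recall curve built from score-ranked detections. A simplified sketch of that computation (it sums raw precision over recall steps and omits the monotonic-precision interpolation used in protocols such as PASCAL VOC, so it is an approximation, not the study's exact procedure):

```python
def average_precision(tp_flags, num_ground_truth):
    """Approximate AP from score-ranked detections: tp_flags[i] is True
    when the i-th highest-scoring detection matched a ground-truth box."""
    tp = fp = 0
    points = []  # (recall, precision) after each successive detection
    for flag in tp_flags:
        tp += flag
        fp += not flag
        points.append((tp / num_ground_truth, tp / (tp + fp)))
    # Area under the precision-recall curve, accumulated over recall steps.
    ap, prev_recall = 0.0, 0.0
    for recall, precision in points:
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap
```

Comparing detectors by AP, as done here, rewards models that rank true positives ahead of false positives rather than just counting hits.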

