A dual neural network for object detection in UAV images

2021 ◽  
Vol 443 ◽  
pp. 292-301
Author(s):  
Gangyi Tian ◽  
Jianran Liu ◽  
Wenyuan Yang
2019 ◽  
Vol 17 (1) ◽  
pp. 69-76
Author(s):  
Mohammad Shiddiq Ghozali

Information and communication technology is developing rapidly today, and the field of Artificial Intelligence (AI) is advancing with it. In Indonesia, AI is not yet widely known among the general public, but IT companies are competing to create AI innovations and to apply AI to every aspect of life. Automated Teller Machines (ATMs) are one example: crimes such as PIN spying, skimming, the Lebanese loop, and others occur frequently at ATMs. Although ATMs are equipped with CCTV, criminals cover their faces with aids such as helmets, caps, masks, and sunglasses. A notice at the ATM entrance usually prohibits wearing helmets, caps, masks, and sunglasses and bringing cigarettes, yet these rules are still violated, because there is no follow-up when someone carries a prohibited item into the ATM booth. The author therefore built an object detection system, in the field of Artificial Intelligence, to detect items whose use is prohibited inside an ATM booth. One method for building object detection is You Only Look Once (YOLO), an implementation of which is available in Darknet (an open-source neural network framework). YOLO works by looking at the entire image once and passing it through the neural network once to directly detect the objects present, hence the name You Only Look Once (YOLO). The system built in this study is still under development, so it is run from the command prompt. Keywords: Automated Teller Machine (ATM), Artificial Intelligence, Object Detection, You Only Look Once (YOLO)
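As a hedged illustration of the single-pass idea described above, the sketch below runs a Darknet-format YOLO model through OpenCV's DNN module in Python; the config, weights, image file name, and the 0.5 confidence cut-off are placeholder assumptions, not the authors' artifacts.

```python
import cv2
import numpy as np

# Hypothetical file names; the study's trained weights are not public.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

image = cv2.imread("atm_entrance.jpg")
h, w = image.shape[:2]

# YOLO sees the whole image once: a single forward pass yields all detections.
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(layer_names)

for output in outputs:
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = scores[class_id]
        if confidence > 0.5:  # keep confident detections of prohibited items
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            print(class_id, confidence, (cx - bw / 2, cy - bh / 2, bw, bh))
```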


2021 ◽  
Vol 18 (1) ◽  
pp. 172988142199332
Author(s):  
Xintao Ding ◽  
Boquan Li ◽  
Jinbao Wang

Indoor object detection is a demanding and important task for robot applications. Object knowledge, such as two-dimensional (2D) shape and depth information, may be helpful for detection. In this article, we focus on region-based convolutional neural network (CNN) detectors and propose a geometric property-based Faster R-CNN method (GP-Faster) for indoor object detection. GP-Faster incorporates geometric properties into Faster R-CNN to improve detection performance. In detail, we first use mesh grids, formed by the intersections of direct and inverse proportion functions, to generate appropriate anchors for indoor objects. After the anchors are regressed to the regions of interest produced by a region proposal network (RPN-RoIs), we use 2D geometric constraints to refine the RPN-RoIs, where the 2D constraint for each class is the convex hull enclosing the width and height coordinates of the ground-truth boxes on the training set. Comparison experiments are conducted on two indoor datasets, SUN2012 and NYUv2. Since depth information is available in NYUv2, we add depth constraints to GP-Faster and propose a 3D geometric property-based Faster R-CNN (DGP-Faster) on NYUv2. The experimental results show that both GP-Faster and DGP-Faster improve mean average precision.
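A minimal sketch of the convex-hull constraint as described, under assumed toy data: ground-truth (width, height) pairs for one class define a hull, and RoIs whose (width, height) falls outside it are discarded. The function names and numbers are illustrative, not from the paper.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def build_hull(gt_wh):
    """gt_wh: (N, 2) array of ground-truth box widths/heights for one class."""
    # Triangulating the hull vertices gives a cheap point-in-hull test.
    return Delaunay(gt_wh[ConvexHull(gt_wh).vertices])

def refine_rois(rois, hull):
    """rois: (M, 4) boxes as (x1, y1, x2, y2); drop RoIs outside the hull."""
    wh = np.stack([rois[:, 2] - rois[:, 0], rois[:, 3] - rois[:, 1]], axis=1)
    inside = hull.find_simplex(wh) >= 0  # -1 means outside the hull
    return rois[inside]

# Toy usage with made-up numbers:
gt_wh = np.array([[30, 60], [40, 80], [35, 90], [50, 70], [45, 95]], dtype=float)
hull = build_hull(gt_wh)
rois = np.array([[0, 0, 38, 78], [0, 0, 200, 20]], dtype=float)
print(refine_rois(rois, hull))  # the implausibly wide second RoI is rejected
```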


Drones ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 66
Author(s):  
Rahee Walambe ◽  
Aboli Marathe ◽  
Ketan Kotecha

Object detection in uncrewed aerial vehicle (UAV) images has been a longstanding challenge in the field of computer vision. Specifically, object detection in drone images is a complex task due to objects of various scales such as humans, buildings, water bodies, and hills. In this paper, we present an implementation of ensemble transfer learning to enhance the performance of the base models for multiscale object detection in drone imagery. Combined with a test-time augmentation pipeline, the algorithm combines different models and applies voting strategies to detect objects of various scales in UAV images. The data augmentation also presents a solution to the deficiency of drone image datasets. We experimented with two specific datasets in the open domain: the VisDrone dataset and the AU-AIR dataset. Our approach is more practical and efficient due to the use of transfer learning and a two-level voting-strategy ensemble instead of training custom models on entire datasets. The experiments show significant improvement in mAP for both the VisDrone and AU-AIR datasets when the ensemble transfer learning method is employed. Furthermore, the utilization of voting strategies further increases the reliability of the ensemble, as the end user can select and trace the effects of the voting mechanism on bounding-box predictions.
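The paper describes two-level voting over model ensembles without detailing an implementation here; one widely used realization of box-level voting is weighted boxes fusion, sketched below with the open-source ensemble-boxes package. The models, weights, and thresholds are assumptions for illustration, not necessarily the authors' setup.

```python
import numpy as np
from ensemble_boxes import weighted_boxes_fusion  # pip install ensemble-boxes

# Boxes are normalized to [0, 1] as (x1, y1, x2, y2); two hypothetical models
# vote on the same UAV image.
boxes_list = [
    [[0.10, 0.10, 0.40, 0.40], [0.50, 0.50, 0.90, 0.90]],  # model A
    [[0.12, 0.08, 0.42, 0.38], [0.52, 0.51, 0.88, 0.91]],  # model B
]
scores_list = [[0.9, 0.6], [0.8, 0.7]]
labels_list = [[0, 1], [0, 1]]

# Fuse overlapping predictions; per-model weights act as voting strength.
boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=[2, 1], iou_thr=0.55, skip_box_thr=0.1,
)
print(np.round(boxes, 3), scores, labels)
```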


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1737
Author(s):  
Wooseop Lee ◽  
Min-Hee Kang ◽  
Jaein Song ◽  
Keeyeon Hwang

As automated vehicles have become one of the important trends in intelligent transportation systems, various research is being conducted to enhance their safety. In particular, technologies for the design of preventive automated driving systems, such as detection of surrounding objects and estimation of inter-vehicle distance, have grown in importance. Object detection is mainly performed with cameras and LiDAR, but because of LiDAR's cost and limited recognition range, there is a growing need to improve camera-based recognition, which is comparatively convenient to commercialize. To improve the recognition capability of vehicle-mounted monocular cameras for preventive automated driving systems, this study trained two convolutional neural network (CNN)-based detectors, Faster R-CNN (faster regions with CNN) and You Only Look Once (YOLO) v2, to recognize surrounding vehicles in black-box highway driving videos and to estimate distances to them with the model better suited to automated driving. For model comparison, the detectors were also trained on the PASCAL Visual Object Classes (VOC) dataset. Faster R-CNN achieved accuracy similar to YOLO v2, with a mean average precision (mAP) of 76.4 versus 78.6, but ran at only 5 frames per second (FPS) compared with 40 FPS for YOLO v2. As a result, YOLO v2, which shows better overall performance in accuracy and processing speed, was judged the more suitable model for automated driving systems and was carried forward to distance estimation. For distance estimation, we converted coordinate values through camera calibration and a perspective transform, set the detection threshold to 0.7, and performed object detection and distance estimation, achieving more than 80% accuracy for near-distance vehicles. We believe this study can help prevent accidents involving automated vehicles, and we expect follow-up research to provide further accident-prevention alternatives, such as calculating and securing appropriate safety distances depending on vehicle type.
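As a hedged sketch of the calibration and perspective-transform step, the code below maps an assumed set of four road-plane reference pixels to metric coordinates and reads off the longitudinal distance at a detection's ground-contact point; all numbers are made up, and the study's actual calibration procedure may differ.

```python
import cv2
import numpy as np

# Four image pixels of known road positions (e.g. lane-marking corners),
# matched to a top-down metric plane: x across the lane, y along the road.
src = np.float32([[560, 460], [720, 460], [1100, 680], [180, 680]])  # pixels
dst = np.float32([[0, 30.0], [3.5, 30.0], [3.5, 0], [0, 0]])         # metres
H = cv2.getPerspectiveTransform(src, dst)

def distance_to_vehicle(box):
    """box: (x1, y1, x2, y2) detection; use the bottom-center contact point."""
    foot = np.float32([[[(box[0] + box[2]) / 2, box[3]]]])
    ground = cv2.perspectiveTransform(foot, H)[0, 0]
    return float(ground[1])  # longitudinal distance along the road, metres

print(distance_to_vehicle((600, 400, 700, 470)))
```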


Author(s):  
Zhiyong Gao ◽  
Jianhong Xiang

Background: When detecting objects directly from a 3D point cloud, the natural 3D patterns and invariances of the data are often obscured. Objective: In this work, we aimed to study 3D object detection from discrete, disordered, and sparse 3D point clouds. Methods: The CNN is composed of a frustum sequence module, a 3D instance segmentation module (S-NET), a 3D point cloud transformation module (T-NET), and a 3D bounding box estimation module (E-NET). The search space of the object is determined by the frustum sequence module. Instance segmentation of the point cloud is performed by the 3D instance segmentation module. The 3D coordinates of the object are confirmed by the transformation module and the 3D bounding box estimation module. Results: Evaluated on the KITTI benchmark dataset, our method outperforms the state of the art by remarkable margins while retaining real-time capability. Conclusion: We achieve real-time 3D object detection with an improved convolutional neural network (CNN) based on image-driven point clouds.
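A schematic Python sketch of the four-stage data flow described above; the callables are placeholders for the paper's frustum sequence, S-NET, T-NET, and E-NET modules, and no architecture details from the paper are reproduced.

```python
# Placeholder pipeline: each argument is a callable standing in for one of the
# modules named in the abstract; nothing here reflects the paper's internals.
def detect_3d(image, point_cloud, detector_2d, frustum_sequence, s_net, t_net, e_net):
    # 1. Image-driven search space: 2D detections cut frustums from the cloud.
    boxes_2d = detector_2d(image)
    frustums = [frustum_sequence(point_cloud, box) for box in boxes_2d]
    results = []
    for pts in frustums:
        obj_pts = s_net(pts)              # 2. segment the object's points
        centered = t_net(obj_pts)         # 3. transform to an object-centric frame
        results.append(e_net(centered))   # 4. estimate the 3D bounding box
    return results
```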


2021 ◽  
Author(s):  
Alexis Koulidis ◽  
Mohamed Abdullatif ◽  
Ahmed Galal Abdel-Kader ◽  
Mohammed-ilies Ayachi ◽  
Shehab Ahmed ◽  
...  

Abstract Surface data measurement and analysis are an established means of detecting drillstring low-frequency torsional vibration, or stick-slip. The industry has also developed models that link surface torque and downhole drill-bit rotational speed. Cameras provide a noninvasive alternative to the existing wired and wireless sensors used to gather such surface data. This work presents the results of a preliminary field assessment of drilling dynamics utilizing camera-based drillstring monitoring. Events in the video are detected and timed using computer vision techniques and object detection algorithms. A real-time interest point tracker utilizing homography estimation and sparse optical flow point tracking is deployed. We use a fully convolutional deep neural network trained to detect interest points and compute their accompanying descriptors. The detected points and descriptors are matched across video sequences and used for drillstring rotation detection and speed estimation. When the drillstring's vibration is invisible to the naked eye, the point tracking algorithm is preceded by a motion amplification function based on another deep convolutional neural network. We have clearly demonstrated the potential of camera-based noninvasive approaches to surface drillstring dynamics data acquisition and analysis. Through the application of real-time object detection algorithms to the rig video feed, surface events were detected and timed, and we were able to estimate drillstring rotary speed and motion profile. Torsional drillstring modes can be identified and correlated with drilling parameters and bottomhole assembly design. A novel vibration array sensing approach based on a multi-point tracking algorithm is also proposed, in which a vibration threshold triggers the additional motion amplification function, providing seamless multi-scale vibration measurement. Cameras have typically been devices for acquiring images and videos, mainly for online manual monitoring and, more recently, for offline automated assessment; this work shows how fog/edge computing makes it possible for these cameras to be "conscious" and "intelligent" and hence to play a critical role in the automation and digitalization of drilling rigs. We showcase their preliminary application as drilling dynamics and rig operations sensors. Cameras are an ideal sensor for a drilling environment, since they can be installed anywhere on a rig to perform large-scale live video analytics on drilling processes.
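As a simplified, classical-CV analogue of the tracker described above (the paper uses a learned interest-point detector and descriptors rather than hand-crafted features), the sketch below tracks corners with sparse Lucas-Kanade optical flow, fits a frame-to-frame homography, and converts the in-plane rotation angle to a crude rotary-speed estimate. The video path and all parameters are hypothetical.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("rig_feed.mp4")  # hypothetical rig video feed
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_old = pts[status.ravel() == 1]
    good_new = nxt[status.ravel() == 1]
    if len(good_new) >= 4:
        # Fit a frame-to-frame homography; its upper-left 2x2 block carries
        # the in-plane rotation of the tracked drillstring surface.
        H, _ = cv2.findHomography(good_old, good_new, cv2.RANSAC, 3.0)
        angle = np.degrees(np.arctan2(H[1, 0], H[0, 0]))  # degrees per frame
        rpm = angle / 360.0 * fps * 60.0                  # crude rotary speed
        print(f"rotation: {angle:+.2f} deg/frame, ~{rpm:+.1f} RPM")
        pts = good_new.reshape(-1, 1, 2)
    else:
        # Re-seed when too few points survive tracking.
        pts = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
    prev_gray = gray
```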

