The Implementation of Target Recognition to Determine Enemy Coordinates Using Unmanned Aerial Vehicle (UAV) GALAK-24 Aircraft Cameras with Object Detection Method

2020
Vol 2 (October)
pp. 20-28
Author(s):
Mehmek Ali Akza Arsyad
Isa Mahfudi
Bambang Purwanto

Abstract – In this era of increasingly advanced camera technology, it has become easier for the military to carry out attacks and defenses aimed at destroying embattled opponents. This requires camera technology that can detect objects and, at the same time, clearly determine the coordinates or position of those objects, helping troops to maximize attacks and maneuvers in war. This research aims to develop the GALAK-24 aircraft equipped with an enemy-detection camera that simultaneously determines the position of enemy coordinates in real time, supporting intelligence on the battlefield and thereby facilitating decision-making in warfare. The detection system uses object detection methods to detect objects on the ground surface crossed by the aircraft. The detection camera works through the Python programming language, which is connected to a PC and to the camera: when the aircraft flies across enemy territory, the camera captures the entire area so that vehicle objects are recorded as well, and the detected targets are reported in order to estimate the enemy's strength and position. As a security procedure, the aircraft is flown at an altitude of 500 m to avoid fire from enemy personnel and to reduce noise so that it is not heard by the enemy, while the condition of enemy territory and enemy forces is reported to the operator in real time.
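A minimal sketch, assuming an OpenCV-based Python detector running on the ground-station PC, of the capture-and-detect loop this abstract describes; the model files, camera index, and report format are illustrative assumptions, not the GALAK-24 implementation:

```python
# Sketch (not the authors' code) of the capture-and-detect loop: a Python
# program reads frames from the aircraft camera stream, runs an object
# detector, and reports detections to the operator.
import cv2

CONF_THRESHOLD = 0.5

# Hypothetical YOLO weights/config; any OpenCV-DNN-compatible detector works.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

cap = cv2.VideoCapture(0)  # onboard camera stream (assumed index)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = model.detect(frame, confThreshold=CONF_THRESHOLD)
    for cls, score, (x, y, w, h) in zip(class_ids, scores, boxes):
        # In the described system the pixel position would be combined with
        # the aircraft's GPS/attitude data to estimate target coordinates.
        print(f"target class={cls} conf={float(score):.2f} "
              f"at pixel ({x + w // 2}, {y + h // 2})")
cap.release()
```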

Sensors
2020
Vol 20 (8)
pp. 2238
Author(s):
Mingjie Liu
Xianhao Wang
Anjian Zhou
Xiuyuan Fu
Yiwei Ma
...

Object detection, as a fundamental task in computer vision, has developed enormously but remains challenging, especially from the Unmanned Aerial Vehicle (UAV) perspective, due to the small scale of the targets. In this study, the authors develop a dedicated detection method for small objects in the UAV perspective. Based on YOLOv3, the Resblock in darknet is first optimized by concatenating two ResNet units that have the same width and height. Then, the entire darknet structure is improved by adding convolution operations at an early layer to enrich spatial information. Both of these optimizations enlarge the receptive field. Furthermore, a UAV-viewed dataset is collected for UAV-perspective small object detection, and an optimized training method is proposed based on this dataset. The experimental results on a public dataset and our collected UAV-viewed dataset show a distinct performance improvement on small object detection while keeping the same level of performance on the normal dataset, which means the proposed method adapts to different kinds of conditions.
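A hedged PyTorch sketch of the Resblock modification described above: two residual units of identical width and height are run and their outputs concatenated along the channel axis, then fused back to the original width. The layer widths, the exact wiring, and the 1x1 fusion convolution are assumptions, not the authors' released architecture.

```python
import torch
import torch.nn as nn

class ResUnit(nn.Module):
    """Darknet-style residual unit: 1x1 bottleneck followed by a 3x3 conv."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 1, bias=False),
            nn.BatchNorm2d(channels // 2),
            nn.LeakyReLU(0.1),
            nn.Conv2d(channels // 2, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return x + self.block(x)

class ConcatResblock(nn.Module):
    """Two residual units of identical spatial size, concatenated on channels
    and fused back to the original width with a 1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.unit1 = ResUnit(channels)
        self.unit2 = ResUnit(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, 1, bias=False)

    def forward(self, x):
        u1 = self.unit1(x)
        u2 = self.unit2(u1)            # stacking the units grows the receptive field
        y = torch.cat([u1, u2], dim=1) # same H x W, doubled channels
        return self.fuse(y)

x = torch.randn(1, 256, 52, 52)       # feature map from an early darknet stage
print(ConcatResblock(256)(x).shape)   # torch.Size([1, 256, 52, 52])
```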


2020
Vol 12 (1)
pp. 182
Author(s):
Lingxuan Meng
Zhixing Peng
Ji Zhou
Jirong Zhang
Zhenyu Lu
...

Unmanned aerial vehicle (UAV) remote sensing and deep learning provide a practical approach to object detection. However, most current approaches for processing UAV remote-sensing data cannot carry out object detection in real time for emergencies, such as firefighting. This study proposes a new approach for integrating UAV remote sensing and deep learning for the real-time detection of ground objects. Excavators, which usually threaten pipeline safety, are selected as the target object. A widely used deep-learning algorithm, namely You Only Look Once V3 (YOLOv3), is first used to train the excavator detection model on a workstation, and the model is then deployed on an embedded board carried by a UAV. The recall rate of the trained excavator detection model is 99.4%, demonstrating very high accuracy. A UAV-based excavator detection system (UAV-ED) is then constructed for operational application. UAV-ED is composed of a UAV Control Module, a UAV Module, and a Warning Module. A UAV experiment with different scenarios was conducted to evaluate the performance of UAV-ED. The whole process, from the UAV observing an excavator to the Warning Module (350 km away from the testing area) receiving the detection results, lasted only about 1.15 s. Thus, the UAV-ED system has good performance and would benefit the management of pipeline safety.
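A hedged sketch of the onboard workflow this abstract describes: the embedded board runs the trained detector on each frame and pushes a warning, with the UAV's position, to the remote Warning Module. The endpoint URL, message format, and the detect()/get_gps() helpers are hypothetical, not the UAV-ED interfaces.

```python
# Minimal sketch of the detect-and-warn step; assumptions throughout.
import json
import time
import urllib.request

WARNING_MODULE_URL = "http://warning.example.com/alerts"  # hypothetical endpoint

def send_warning(detection, lat, lon):
    """POST a small JSON alert to the remote Warning Module."""
    payload = json.dumps({
        "object": "excavator",
        "confidence": detection["confidence"],
        "lat": lat,
        "lon": lon,
        "timestamp": time.time(),
    }).encode("utf-8")
    req = urllib.request.Request(
        WARNING_MODULE_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

# Onboard loop (pseudocode): detect() stands in for YOLOv3 inference on the
# embedded board, get_gps() for the flight controller's position feed.
# for frame in camera_frames():
#     for det in detect(frame):
#         if det["label"] == "excavator" and det["confidence"] > 0.5:
#             send_warning(det, *get_gps())
```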


2018
Vol 4 (9 (94))
pp. 19-26
Author(s):
Vyacheslav Moskalenko
Anatoliy Dovbysh
Igor Naumenko
Alyona Moskalenko
Artem Korobov

2021
Vol 13 (18)
pp. 3652
Author(s):
Duo Xu
Yixin Zhao
Yaodong Jiang
Cun Zhang
Bo Sun
...

Information on the ground fissures induced by coal mining is important to the safety of coal mine production and to environmental management in the mining area. In order to identify these fissures in a timely and accurate manner, a new method is proposed in the present paper, based on an unmanned aerial vehicle (UAV) equipped with a visible-light camera and an infrared camera. With this equipment, edge detection technology was used to detect mining-induced ground fissures. Field experiments show the high efficiency of the UAV in monitoring mining-induced ground fissures. Furthermore, under the studied conditions, a time window between 3:00 a.m. and 5:00 a.m. is preferable for identifying fissures with UAV infrared remote sensing. The Roberts, Sobel, Prewitt, Canny and Laplacian operators were tested to detect the fissures in the visible, infrared and fused images. An improved edge detection method is proposed, based on the Laplacian of Gaussian, Canny and mathematical morphology operators. The peak signal-to-noise ratio, effective edge rate, Pratt's figure of merit and F-measure indicate that the proposed method is superior to the other methods. In addition, the fissures in infrared images at different times can be accurately detected by the proposed method, except at 7:00 a.m., 1:00 p.m. and 3:00 p.m.
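A hedged OpenCV sketch of the kind of combined detector described above: a Laplacian-of-Gaussian response is fused with a Canny edge map and cleaned with mathematical morphology. The thresholds, kernel sizes, and fusion rule are illustrative assumptions, not the values used in the paper.

```python
import cv2
import numpy as np

def detect_fissures(gray):
    # Laplacian of Gaussian: smooth first, then take the Laplacian.
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)
    log = cv2.convertScaleAbs(cv2.Laplacian(blurred, cv2.CV_16S, ksize=3))
    _, log_edges = cv2.threshold(log, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    # Canny edge map on the same smoothed image.
    canny_edges = cv2.Canny(blurred, 50, 150)

    # Fuse the two edge maps, then apply morphological closing and opening:
    # closing bridges small gaps along fissure traces, opening removes
    # isolated noise pixels.
    fused = cv2.bitwise_or(log_edges, canny_edges)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    closed = cv2.morphologyEx(fused, cv2.MORPH_CLOSE, kernel)
    return cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)

# Usage (hypothetical file name):
# edges = detect_fissures(cv2.imread("infrared_frame.png", cv2.IMREAD_GRAYSCALE))
```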


Sensors
2019
Vol 19 (7)
pp. 1651
Author(s):
Suk-Ju Hong
Yunhyeok Han
Sang-Yeon Kim
Ah-Yeong Lee
Ghiseok Kim

Wild birds are monitored with the important objectives of identifying their habitats and estimating the size of their populations. Migratory birds, in particular, are recorded intensively during specific periods of time to forecast any possible spread of animal diseases such as avian influenza. This study constructed deep-learning-based object-detection models with the aid of aerial photographs collected by an unmanned aerial vehicle (UAV). The dataset of aerial photographs includes diverse images of birds in various habitats, in the vicinity of lakes and on farmland. In addition, aerial images of bird decoys were captured to obtain various bird patterns and more accurate bird information. Bird detection models such as Faster Region-based Convolutional Neural Network (R-CNN), Region-based Fully Convolutional Network (R-FCN), Single Shot MultiBox Detector (SSD), RetinaNet, and You Only Look Once (YOLO) were created, and the performance of all models was estimated by comparing their computing speed and average precision. The test results show Faster R-CNN to be the most accurate and YOLO to be the fastest among the models. The combined results demonstrate that the use of deep-learning-based detection methods in combination with UAV aerial imagery is fairly suitable for bird detection in various environments.
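A hedged sketch of the evaluation described above: detections from each model are matched to ground-truth boxes by IoU and scored with average precision (11-point interpolation here for brevity), while wall-clock time per image gives the speed comparison. The (x1, y1, x2, y2) box format and the 0.5 IoU threshold are assumptions.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def average_precision(detections, gt_boxes, iou_thr=0.5):
    """detections: list of (score, box); gt_boxes: list of boxes."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched = set()
    tp = np.zeros(len(detections))
    for i, (_, box) in enumerate(detections):
        best_j = max(range(len(gt_boxes)),
                     key=lambda j: iou(box, gt_boxes[j]), default=None)
        if (best_j is not None and best_j not in matched
                and iou(box, gt_boxes[best_j]) >= iou_thr):
            tp[i] = 1
            matched.add(best_j)
    cum_tp = np.cumsum(tp)
    recall = cum_tp / max(len(gt_boxes), 1)
    precision = cum_tp / (np.arange(len(detections)) + 1)
    # 11-point interpolated average precision
    return float(np.mean([precision[recall >= r].max() if np.any(recall >= r) else 0.0
                          for r in np.linspace(0, 1, 11)]))
```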

