aerial images
Recently Published Documents

TOTAL DOCUMENTS: 2270 (five years: 987)
H-INDEX: 54 (five years: 13)

2022 ◽  
Vol 9 (2) ◽  
pp. 87-93
Author(s):  
Muhammed Enes ATİK ◽  
Zaide DURAN ◽  
Roni ÖZGÜNLÜK

2022 ◽  
Vol 14 (2) ◽  
pp. 354
Author(s):  
Jan Kavan ◽  
Guy D. Tallentire ◽  
Mihail Demidionov ◽  
Justyna Dudek ◽  
Mateusz C. Strzelecki

Tidewater glaciers on the east coast of Svalbard were examined for surface elevation changes and retreat rates. An archival digital elevation model (DEM) from 1970 (generated from aerial images by the Norwegian Polar Institute) was combined with the recent ArcticDEM to compare the surface elevation changes of eleven glaciers. This approach was complemented by a retreat-rate estimation based on the analysis of Landsat and Sentinel-2 images. In total, four of the 11 tidewater glaciers became land-based due to the retreat of their termini. The remaining tidewater glaciers retreated at an average rate of 48 m year⁻¹, with a range of 10–150 m year⁻¹. All the glaciers studied experienced thinning in their frontal zones, with maximum surface elevation losses exceeding 100 m in the ablation areas of three glaciers. In contrast to the massive retreat and thinning of the frontal zones, a minor increase in ice thickness was recorded in some accumulation areas, exceeding 10 m on three glaciers. The change in glacier geometry suggests an important shift in glacier dynamics over the last 50 years, which very likely reflects the overall trend of increasing air temperatures. Such changes in glacier geometry are common at surging glaciers in their quiescent phase. Surging was detected on two of the glaciers studied, documented by glacier front re-advance and massive surface thinning in high-elevation areas.
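The elevation-change and retreat-rate figures above reduce to simple raster differencing and front-position arithmetic. Below is a minimal sketch, not the authors' processing chain, assuming two co-registered DEM arrays and a digitised front retreat distance; the array names and the 50-year interval are illustrative.

```python
# Minimal sketch: surface elevation change from two co-registered DEMs and a
# mean annual retreat rate from a digitised front-position change.
# The inputs and the 50-year interval are illustrative assumptions.
import numpy as np

def elevation_change(dem_old: np.ndarray, dem_new: np.ndarray) -> np.ndarray:
    """Per-cell surface elevation change (m); NaNs mark no-data cells."""
    diff = dem_new - dem_old
    return np.where(np.isnan(dem_old) | np.isnan(dem_new), np.nan, diff)

def mean_annual_retreat(front_retreat_m: float, years: float) -> float:
    """Average terminus retreat rate in metres per year."""
    return front_retreat_m / years

# Example: a terminus that retreated 2400 m between 1970 and 2020
print(mean_annual_retreat(2400.0, 50.0))  # 48.0 m per year
```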


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 604
Author(s):  
Carlos A. M. Correia ◽  
Fabio A. A. Andrade ◽  
Agnar Sivertsen ◽  
Ihannah Pinto Guedes ◽  
Milena Faria Pinto ◽  
...  

Optical image sensors are the most common remote sensing data acquisition devices on Unmanned Aerial Systems (UAS). In this context, assigning a location in a geographic frame of reference to the acquired image is a necessary task in the majority of applications. When ground control points are not used, this process is called direct georeferencing. Although it rests on simple mathematical fundamentals, the complete direct georeferencing process involves a great deal of information, such as camera sensor characteristics, mounting measurements, and the attitude and position of the UAS. In addition, there are many rotations and translations between the different reference frames, among many other details, which makes the whole process a considerably complex operation. Another problem is that manufacturers and software tools may use different reference frames, posing additional difficulty when implementing direct georeferencing. As this information is spread among many sources, researchers may find it hard to obtain a complete view of the method. In fact, no paper in the literature explains this process in a comprehensive way. To meet this implicit demand, this paper presents a comprehensive method for the direct georeferencing of aerial images acquired by cameras mounted on UAS, in which all required information, mathematical operations, and implementation steps are explained in detail. Finally, to demonstrate the practical use of the method and to verify its accuracy, both simulated and real flights were performed, in which objects in the acquired images were georeferenced.
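As a rough illustration of the geometry involved, the sketch below projects a single pixel to flat terrain from the camera intrinsics and the UAS position and attitude. It is not the paper's method; the frame conventions (NED world frame, camera optical axis aligned with the body z-axis, no gimbal mounting offset, flat ground) are simplifying assumptions.

```python
# Hedged sketch of the core geometry behind direct georeferencing: intersect the
# viewing ray of a pixel with a flat ground plane. Frame conventions are assumptions.
import numpy as np

def rotation_zyx(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Body-to-NED rotation from roll/pitch/yaw in radians (ZYX convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def georeference_pixel(u, v, K, cam_pos_ned, roll, pitch, yaw, ground_z=0.0):
    """Map pixel (u, v) to the plane z = ground_z in NED coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])    # viewing ray in the camera frame
    # assumed mounting: camera optical axis aligned with the body z-axis (nadir at level flight)
    ray_world = rotation_zyx(roll, pitch, yaw) @ ray_cam   # rotate the ray into NED
    t = (ground_z - cam_pos_ned[2]) / ray_world[2]         # scale until the ray hits the ground plane
    return cam_pos_ned + t * ray_world                     # ground point in NED

# Example: image centre pixel, nadir-looking camera at 120 m altitude
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
print(georeference_pixel(960, 540, K, np.array([0.0, 0.0, -120.0]), 0.0, 0.0, 0.0))
```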


Drones ◽  
2022 ◽  
Vol 6 (1) ◽  
pp. 19
Author(s):  
Mirela Kundid Vasić ◽  
Vladan Papić

Recent results in person detection using deep learning methods applied to aerial images gathered by Unmanned Aerial Vehicles (UAVs) have demonstrated the applicability of this approach in scenarios such as Search and Rescue (SAR) operations. In this paper, a continuation of our previous research is presented. The main goal is to further improve detection results, especially by reducing the number of false positive detections and consequently increasing the precision value. We present a new approach that uses sequences of consecutive images, instead of a single static image, as input to the multimodel neural network architecture. Since successive images overlap, the same object of interest should be detected in more than one image. The correlation between successive images was calculated, and regions detected in one image were translated to the other images based on the displacement vector. The assumption is that an object detected in more than one image has a higher probability of being a true positive detection, because it is unlikely that the detection model will produce the same false positive detections in multiple images. Based on this information, three different algorithms for rejecting detections and for adding detections from one image to the other images in the sequence are proposed. All of them achieved a precision of about 80%, an increase of almost 20% compared with current state-of-the-art methods.
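A minimal sketch of the consensus idea described above follows: detections are translated between consecutive frames by the estimated displacement vector, and only detections confirmed in a minimum number of frames are kept. The box format, IoU threshold and helper names are assumptions, not the published algorithms.

```python
# Illustrative sketch (not the published model): keep only detections that are
# re-found in later, overlapping frames after shifting by the displacement vector.
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def shift(box, d):
    """Translate a box by a displacement vector d = (dx, dy)."""
    return (box[0] + d[0], box[1] + d[1], box[2] + d[0], box[3] + d[1])

def confirmed_detections(frames, displacements, iou_thr=0.3, min_hits=2):
    """frames: list of box lists per image; displacements[i] maps frame i into frame i+1."""
    kept = []
    for i, boxes in enumerate(frames):
        for box in boxes:
            hits, moved = 1, box
            for j in range(i + 1, len(frames)):
                moved = shift(moved, displacements[j - 1])           # carry the box forward
                if any(iou(moved, other) > iou_thr for other in frames[j]):
                    hits += 1                                        # confirmed in frame j
            if hits >= min_hits:
                kept.append((i, box))                                # likely a true positive
    return kept
```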


2022 ◽  
Vol 14 (2) ◽  
pp. 339
Author(s):  
Paul Berg ◽  
Deise Santana Maia ◽  
Minh-Tan Pham ◽  
Sébastien Lefèvre

Human activities at sea, such as intensive fishing and the exploitation of offshore wind farms, may negatively impact marine megafauna. In an attempt to control such impacts, surveying and tracking of marine animals are often performed at the sites where those activities take place. Nowadays, thanks to high-resolution cameras and the development of machine learning techniques, wild animals can be tracked remotely and the analysis of the acquired images can be automated using state-of-the-art object detection models. However, most state-of-the-art detection methods require large amounts of annotated data to provide satisfactory results. Since analyzing thousands of images acquired during a flight survey can be a cumbersome and time-consuming task, we focus in this article on the weakly supervised detection of marine animals. We propose a modification of the patch distribution modeling method (PaDiM), currently one of the state-of-the-art approaches for anomaly detection and localization in visual industrial inspection. To show its effectiveness and suitability for marine animal detection, we conduct a comparative evaluation of the proposed method against the original version, as well as other state-of-the-art approaches, on two high-resolution marine animal image datasets. On both tested datasets, the proposed method yielded better F1 and recall scores (75% recall/41% precision and 57% recall/60% precision, respectively) when trained on images known to contain no object of interest. This shows the great potential of the proposed approach to speed up marine animal discovery in new flight surveys. Additionally, such a method could be adopted for bounding box proposals to enable faster and cheaper annotation within a fully supervised detection framework.
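For readers unfamiliar with PaDiM, the sketch below shows its core mechanism: a Gaussian fitted per patch position on embeddings of object-free images, with the Mahalanobis distance as the anomaly score. It is a plain numpy illustration and does not reproduce the modification proposed in the paper; the embedding shapes are assumptions.

```python
# Minimal numpy sketch of the PaDiM idea: per-position Gaussian of patch embeddings
# from images with no objects of interest, Mahalanobis distance as the anomaly score.
import numpy as np

def fit_patch_gaussians(embeddings: np.ndarray, eps: float = 0.01):
    """embeddings: (n_images, n_patches, dim) from object-free training images."""
    n_patches, dim = embeddings.shape[1], embeddings.shape[2]
    mean = embeddings.mean(axis=0)                                   # (n_patches, dim)
    cov = np.empty((n_patches, dim, dim))
    for p in range(n_patches):
        # regularised sample covariance for each patch position
        cov[p] = np.cov(embeddings[:, p, :], rowvar=False) + eps * np.eye(dim)
    return mean, cov

def anomaly_map(test_embedding: np.ndarray, mean: np.ndarray, cov: np.ndarray):
    """test_embedding: (n_patches, dim) -> per-patch Mahalanobis anomaly score."""
    scores = np.empty(test_embedding.shape[0])
    for p in range(test_embedding.shape[0]):
        d = test_embedding[p] - mean[p]
        scores[p] = np.sqrt(d @ np.linalg.inv(cov[p]) @ d)
    return scores
```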


2022 ◽  
Vol 12 ◽  
Author(s):  
Ryo Fujiwara ◽  
Hiroyuki Nashida ◽  
Midori Fukushima ◽  
Naoya Suzuki ◽  
Hiroko Sato ◽  
...  

Evaluation of the legume proportion in grass-legume mixed swards is necessary for forage breeding and cultivation research. For objective and time-efficient estimation of the legume proportion, convolutional neural network (CNN) models were trained by fine-tuning GoogLeNet to estimate the coverage of timothy (TY), white clover (WC), and background (Bg) in unmanned aerial vehicle (UAV) images. The accuracies of the CNN models trained on different datasets were compared using the mean bias error and the mean average error. The models predicted the coverage with small errors when the plots in the training datasets were similar to the target plots in terms of coverage rate. Models trained on datasets from multiple plots had smaller errors than those trained on a single plot. The CNN models estimated the WC coverage more precisely than the TY and Bg coverages. The correlation coefficients (r) between the coverage measured from aerial images and the estimated coverage were 0.92–0.96, whereas those between the coverage scored by a breeder and the estimated coverage were 0.76–0.93. These results indicate that CNN models are helpful for effectively estimating legume coverage.
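A hedged PyTorch sketch of this kind of fine-tuning setup follows: GoogLeNet with its classifier replaced by a three-way head for TY, WC and Bg coverage. The loss, optimizer, input size and hyperparameters are assumptions rather than the authors' settings.

```python
# Sketch, not the authors' code: fine-tune GoogLeNet to regress three coverage
# fractions (TY, WC, Bg). Loss and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(1024, 3)                      # replace the 1000-class head with TY/WC/Bg

criterion = nn.MSELoss()                           # regress coverage fractions (assumption)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images: torch.Tensor, coverage: torch.Tensor) -> float:
    """images: (B, 3, 224, 224); coverage: (B, 3) fractions summing to 1."""
    model.train()
    optimizer.zero_grad()
    out = model(images)
    # in training mode GoogLeNet returns a namedtuple with auxiliary outputs
    logits = out.logits if hasattr(out, "logits") else out
    pred = torch.softmax(logits, dim=1)            # predicted coverage proportions
    loss = criterion(pred, coverage)
    loss.backward()
    optimizer.step()
    return loss.item()
```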


2022 ◽  
Vol 14 (2) ◽  
pp. 305
Author(s):  
Qi Diao ◽  
Yaping Dai ◽  
Ce Zhang ◽  
Yan Wu ◽  
Xiaoxue Feng ◽  
...  

Semantic segmentation is one of the key tasks in understanding aerial images with high spatial resolution. Recently, Graph Neural Networks (GNNs) and attention mechanisms have achieved excellent performance in semantic segmentation of general images and have been applied to aerial images. In this paper, we propose a novel Superpixel-based Attention Graph Neural Network (SAGNN) for semantic segmentation of high-spatial-resolution aerial images. A K-Nearest Neighbor (KNN) graph is constructed by our network for each image, where each node corresponds to a superpixel in the image and is associated with a hidden representation vector. The hidden representation vector is initialized with the appearance feature extracted from the image by a unary Convolutional Neural Network (CNN). Relying on the attention mechanism and recursive functions, each node then updates its hidden representation according to its current state and the incoming information from its neighbors. The final representation of each node is used to predict the semantic class of its superpixel. The attention mechanism enables graph nodes to aggregate neighbor information differentially, which extracts higher-quality features. Furthermore, the superpixels not only save computational resources but also preserve object boundaries, yielding more accurate predictions. The accuracy of our model on the Potsdam and Vaihingen public datasets exceeds all benchmark approaches, reaching 90.23% and 89.32%, respectively.
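The sketch below illustrates the graph-construction step in a simplified form: SLIC superpixels as nodes, a KNN graph over their centroids, and one attention-weighted aggregation step. Mean superpixel colour stands in for the unary CNN feature, and all parameter values are assumptions; this is not the SAGNN implementation.

```python
# Simplified sketch of a superpixel KNN graph with one attention-weighted update.
# Mean colour replaces the CNN appearance feature; k and n_segments are assumptions.
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import kneighbors_graph

def superpixel_knn_graph(image: np.ndarray, n_segments: int = 500, k: int = 8):
    """image: (H, W, 3) float array in [0, 1]. Returns labels, node features, KNN adjacency."""
    labels = slic(image, n_segments=n_segments, start_label=0)
    n = labels.max() + 1
    feats = np.array([image[labels == i].mean(axis=0) for i in range(n)])          # node features
    centroids = np.array([np.argwhere(labels == i).mean(axis=0) for i in range(n)])
    adj = kneighbors_graph(centroids, n_neighbors=k, mode="connectivity")           # sparse KNN graph
    return labels, feats, adj

def attention_update(feats: np.ndarray, adj) -> np.ndarray:
    """One aggregation step: neighbours weighted by dot-product attention scores."""
    out = feats.copy()
    for i in range(feats.shape[0]):
        nbrs = adj[i].indices
        if len(nbrs) == 0:
            continue
        scores = feats[nbrs] @ feats[i]                   # similarity to the centre node
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                          # softmax attention weights
        out[i] = 0.5 * feats[i] + 0.5 * (weights[:, None] * feats[nbrs]).sum(axis=0)
    return out
```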


Author(s):  
Jiajia Liao ◽  
Yujun Liu ◽  
Yingchao Piao ◽  
Jinhe Su ◽  
Guorong Cai ◽  
...  

Recent advances in camera-equipped drone applications have increased the demand for deep learning-based visual object detection algorithms for aerial images. A single deep learning model has several limitations in accuracy. Inspired by the fact that ensemble learning can significantly improve the generalization ability of a model in the machine learning field, we introduce a novel integration strategy that combines the inference results of two different methods without non-maximum suppression. In this paper, a global and local ensemble network (GLE-Net) is proposed to increase the quality of predictions by considering global weights for the different models and adjusting local weights for the bounding boxes. Specifically, the global module assigns different weights to the models. In the local module, we group the bounding boxes that correspond to the same object into a cluster. Each cluster generates a final predicted box and takes the highest score in the cluster as the score of that box. Experiments on the VisDrone2019 benchmark show the promising performance of GLE-Net compared with the baseline network.
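The local-module idea can be illustrated with the short sketch below: boxes from different models are grouped into clusters by IoU, each cluster's coordinates are averaged using (assumed) global model weights, and the cluster keeps its highest score. The thresholds and weighting scheme are assumptions, not GLE-Net itself.

```python
# Illustrative sketch of NMS-free box fusion: cluster boxes across models by IoU,
# weight-average the coordinates, keep the cluster's highest score.
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse_detections(detections, global_weights, iou_thr=0.55):
    """detections: list per model of (box, score) tuples; returns fused (box, score) list."""
    pool = [(box, score * global_weights[m])                    # apply the global model weight
            for m, dets in enumerate(detections) for box, score in dets]
    clusters = []
    for box, w in pool:
        for cluster in clusters:
            if iou(box, cluster[0][0]) > iou_thr:               # same object -> same cluster
                cluster.append((box, w))
                break
        else:
            clusters.append([(box, w)])                         # start a new cluster
    fused = []
    for cluster in clusters:
        total = sum(w for _, w in cluster)
        coords = tuple(sum(b[i] * w for b, w in cluster) / total for i in range(4))
        fused.append((coords, max(w for _, w in cluster)))      # keep the highest cluster score
    return fused
```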


2022 ◽  
Vol 40 (2) ◽  
pp. 607-618
Author(s):  
Yahia Said ◽  
Mohammad Barr ◽  
Taoufik Saidani ◽  
Mohamed Atri
