A comparison of photogrammetric software packages for mosaicking unmanned aerial vehicle (UAV) images in agricultural application

2020 ◽ Vol 46 (7) ◽ pp. 1112-1119 ◽ Author(s): Peng-Fei CHEN, Xin-Gang XU
Sensors ◽ 2021 ◽ Vol 21 (13) ◽ pp. 4442 ◽ Author(s): Zijie Niu, Juntao Deng, Xu Zhang, Jun Zhang, Shijia Pan, ...

Accurate information about kiwifruit vines is important for monitoring their physiological states and undertaking precise orchard operations. However, because vines are small, cling to trellises, and have branches lying on the ground, acquiring accurate data for kiwifruit vines poses numerous challenges. In this paper, a kiwifruit canopy distribution prediction model is proposed on the basis of low-altitude unmanned aerial vehicle (UAV) images and deep learning techniques. First, the locations of the kiwifruit plants and the vine distribution are extracted from high-precision images collected by UAV. Canopy gradient distribution maps with different noise-reduction and distribution effects are then generated by modifying the threshold and sampling size using a resampling normalization method. The results showed that the accuracies of vine segmentation using PSPNet, support vector machine, and random forest classification were 71.2%, 85.8%, and 75.26%, respectively. However, the segmentation obtained using deep semantic segmentation had a higher signal-to-noise ratio and was closer to the real situation. The average intersection over union of the deep semantic segmentation was at least 80% in the distribution maps, whereas for traditional machine learning it was between 20% and 60%. This indicates that the proposed model can quickly extract the vine distribution and plant positions, and is thus able to perform dynamic monitoring of orchards to provide real-time operation guidance.
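The intersection-over-union figures above compare predicted and reference segmentation maps. A minimal NumPy sketch (toy label maps, not the study's data) shows how a mean IoU over classes is computed:

```python
import numpy as np

def mean_iou(pred, truth, classes):
    """Mean intersection over union between a predicted and a
    reference segmentation map (2-D arrays of class labels)."""
    ious = []
    for c in classes:
        p, t = pred == c, truth == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 label maps: 0 = background, 1 = vine canopy.
truth = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0]])
pred  = np.array([[0, 0, 1, 1], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
print(round(mean_iou(pred, truth, [0, 1]), 3))  # → 0.879
```

The same metric works for any number of classes; classes absent from both maps are simply skipped.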


2021 ◽ Vol 173 ◽ pp. 95-121 ◽ Author(s): Juepeng Zheng, Haohuan Fu, Weijia Li, Wenzhao Wu, Le Yu, ...

Author(s): Veronika Kopačková-Strnadová, Lucie Koucká, Jan Jelenek, Zuzana Lhotakova, Filip Oulehle

Remote sensing is one of the modern methods that have developed significantly over the last two decades and nowadays provides new means for forest monitoring. High spatial and temporal resolutions are demanded for accurate and timely monitoring of forests. In this study, multi-spectral Unmanned Aerial Vehicle (UAV) images were used to estimate canopy parameters (crown extent, top and height, as well as photosynthetic pigment contents). The UAV images in Green, Red, Red-Edge, and NIR bands were acquired by a Parrot Sequoia camera over selected sites in two small catchments (Czech Republic) covered dominantly by Norway spruce monocultures. Individual tree extents, together with tree tops and heights, were derived from the Canopy Height Model (CHM). In addition, the following were tested: (i) to what extent a linear relationship can be established between selected vegetation indices (NDVI and NDVIred-edge) derived for individual trees and the corresponding ground truth (e.g., biochemically assessed needle photosynthetic pigment contents), and (ii) whether needle-age selection as a ground truth and crown light conditions affect the validity of the linear models. The conducted statistical analysis shows that the two vegetation indices tested here (NDVI and NDVIred-edge) have the potential to assess photosynthetic pigments in Norway spruce forests at a semi-quantitative level; however, needle-age selection as a ground truth was revealed to be a very important factor. Usable linear models were obtained only when using the second-year needle pigment contents as a ground truth. On the other hand, the illumination conditions of the crown proved to have very little effect on model validity. No study was found that directly compares these results on coniferous forest stands. This shows that there is a further need for studies dealing with quantitative estimation of the biochemical variables of mature coniferous forests when employing spectral data acquired by a UAV platform at very high spatial resolution.
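Both indices discussed above are normalized differences of the NIR band against one other Sequoia band. A minimal sketch (reflectance values hypothetical, not the study's measurements) shows the computation:

```python
def ndvi(nir, other):
    """Normalized difference of the NIR band against another band:
    with the Red band this is NDVI, with the Red-Edge band NDVIred-edge."""
    return (nir - other) / (nir + other)

# Hypothetical per-crown mean reflectances for three Sequoia bands.
nir, red, red_edge = 0.45, 0.05, 0.25
print(round(ndvi(nir, red), 3))       # NDVI → 0.8
print(round(ndvi(nir, red_edge), 3))  # NDVI red-edge → 0.286
```

In practice the inputs would be per-crown mean reflectances extracted from the orthomosaic, and the function applies elementwise to whole band rasters as well.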


2019 ◽ Vol 11 (10) ◽ pp. 1226 ◽ Author(s): Jianqing Zhao, Xiaohu Zhang, Chenxi Gao, Xiaolei Qiu, Yongchao Tian, ...

To improve the efficiency and effectiveness of mosaicking unmanned aerial vehicle (UAV) images, we propose in this paper a rapid mosaicking method based on the scale-invariant feature transform (SIFT) for UAV images used in crop growth monitoring. The proposed method dynamically sets an appropriate contrast threshold in the difference-of-Gaussians (DOG) scale space according to the contrast characteristics of the images, thereby adjusting and optimizing the number of matched feature point pairs and increasing mosaicking efficiency. Meanwhile, based on the relative location relationships of the images, the random sample consensus (RANSAC) algorithm is integrated to eliminate the influence of mismatched point pairs on mosaicking and to preserve mosaicking accuracy and quality. Mosaicking experiments were conducted on three types of UAV images used in crop growth monitoring: visible, near-infrared, and thermal infrared. The results indicate that, compared to the standard SIFT algorithm and frequently used commercial mosaicking software, the method proposed here significantly improves the applicability, efficiency, and accuracy of mosaicking UAV images in crop growth monitoring. In comparison with image mosaicking based on the standard SIFT algorithm, the proposed method is about 30% faster, and its structural similarity index of mosaicking accuracy is about 0.9. Moreover, the approach successfully mosaics low-resolution UAV images and improves the applicability of the SIFT algorithm, providing a technical reference for UAV applications in crop growth and phenotypic monitoring.
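The geometric-verification step described above — rejecting mismatched feature pairs with RANSAC before stitching — can be sketched in simplified form. The snippet below estimates a pure 2-D translation rather than a full homography, and all point data are synthetic; it illustrates the RANSAC idea, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_translation(src, dst, iters=200, tol=2.0):
    """Robustly estimate a 2-D translation between matched keypoint
    pairs, rejecting mismatches as RANSAC outliers."""
    best_t, best_inliers = None, 0
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]  # minimal model from a single pair
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best_inliers:
            best_inliers = int(inliers.sum())
            # refit the translation on all inliers of this model
            best_t = (dst[inliers] - src[inliers]).mean(axis=0)
    return best_t, best_inliers

# 30 correct matches offset by (40, -12) plus 10 gross mismatches.
src = rng.uniform(0, 500, (40, 2))
dst = src + np.array([40.0, -12.0])
dst[30:] = rng.uniform(0, 500, (10, 2))
t, n = ransac_translation(src, dst)
print(np.round(t, 1), n)
```

A real mosaicking pipeline would fit an 8-parameter homography per image pair (e.g., via OpenCV's `cv2.findHomography` with the RANSAC flag), but the inlier-counting logic is the same.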


2018 ◽ Vol 10 (8) ◽ pp. 1246 ◽ Author(s): San Jiang, Wanshou Jiang

Accurate orientation is required for applications of unmanned aerial vehicle (UAV) images. In this study, an integrated Structure from Motion (SfM) solution is proposed that addresses three issues to ensure efficient and reliable orientation of oblique UAV images: match-pair selection for large image sets with high overlap, reliable feature matching of images captured from varying directions, and efficient geometric verification of initial matches. Using four datasets captured with different oblique imaging systems, the proposed SfM solution is comprehensively compared and analyzed. The results demonstrate that linear computational costs can be achieved in feature extraction and matching; although the set of candidate image pairs is greatly reduced, reliable orientation results are still obtained in both relative and absolute bundle adjustment (BA) tests when compared with other software packages. For the orientation of oblique UAV images, the proposed method is thus an efficient and reliable solution.
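Match-pair selection, the first of the three issues above, can be illustrated with a naive distance-based preselection over camera positions (positions and threshold hypothetical; production SfM pipelines typically use image footprints or vocabulary-tree retrieval instead):

```python
import numpy as np

def candidate_pairs(positions, max_dist):
    """Match-pair preselection: only images whose camera positions lie
    within max_dist of each other are passed on to feature matching,
    keeping matching cost roughly linear in the number of images."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if np.linalg.norm(positions[i] - positions[j]) <= max_dist:
                pairs.append((i, j))
    return pairs

# Hypothetical camera positions (metres) along two parallel flight strips.
pos = np.array([[0, 0], [30, 0], [60, 0], [0, 25], [30, 25], [60, 25]], float)
print(candidate_pairs(pos, 40.0))  # 11 of the 15 possible pairs survive
```

Even on this toy layout the preselection removes the long-baseline pairs that would be matched in vain; the saving grows quadratically with the number of images.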


Sensors ◽ 2021 ◽ Vol 21 (19) ◽ pp. 6540 ◽ Author(s): Qian Pan, Maofang Gao, Pingbo Wu, Jingwen Yan, Shilei Li

Yellow rust is a widespread disease that causes severe damage to wheat. The traditional method of manually identifying wheat yellow rust is very inefficient. To improve this situation, this study proposed a deep-learning-based method for identifying wheat yellow rust from unmanned aerial vehicle (UAV) images. The method uses the pyramid scene parsing network (PSPNet) semantic segmentation model to classify healthy wheat, yellow rust wheat, and bare soil in small-scale UAV images, and investigates the spatial generalization of the model. In addition, it was proposed to use the high-accuracy classification results of traditional algorithms as weak samples for wheat yellow rust identification. The recognition accuracy of the PSPNet model in this study reached 98%. On this basis, the trained semantic segmentation model was used to recognize another wheat field; the results showed that the method has a certain generalization ability, with accuracy again reaching 98%. In addition, the high-accuracy classification result of a support vector machine was used as a weak label under weak supervision, which better solved the labeling problem for large images, and the final recognition accuracy reached 94%. The method presented in this study therefore facilitates timely control measures to reduce economic losses.
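The recognition accuracies above are pixel-wise classification accuracies against a labeled reference map. A minimal sketch (toy label maps, not the study's data) shows the metric:

```python
import numpy as np

def pixel_accuracy(pred, truth):
    """Fraction of pixels assigned the correct class label."""
    return float((pred == truth).mean())

# Toy 3x3 label maps: 0 = bare soil, 1 = healthy wheat, 2 = yellow rust.
truth = np.array([[0, 1, 1], [2, 2, 1], [0, 1, 2]])
pred  = np.array([[0, 1, 1], [2, 1, 1], [0, 1, 2]])
print(round(pixel_accuracy(pred, truth), 3))  # → 0.889 (8 of 9 pixels)
```

For weak supervision, `truth` would itself be the output of the high-accuracy traditional classifier (e.g., the SVM) rather than a hand-labeled map.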


2019 ◽ Vol 11 (14) ◽ pp. 1708 ◽ Author(s): Shuang Cao, Yongtao Yu, Haiyan Guan, Daifeng Peng, Wanqian Yan

Vehicle detection from remote sensing images plays a significant role in transportation-related applications. However, scale variations, orientation variations, illumination variations, and partial occlusions of vehicles, as well as variable image quality, pose great challenges for accurate vehicle detection. In this paper, we present an affine-function transformation-based object matching framework for vehicle detection from unmanned aerial vehicle (UAV) images. First, meaningful and non-redundant patches are generated through a superpixel segmentation strategy. Then, the affine-function transformation-based object matching framework is applied to a vehicle template and each of the patches to estimate vehicle existence. Finally, vehicles are detected and located after matching-cost thresholding, vehicle location estimation, and multiple-response elimination. Quantitative evaluations on two UAV image datasets show that the proposed method achieves an average completeness, correctness, quality, and F1-measure of 0.909, 0.969, 0.883, and 0.938, respectively. Comparative studies also demonstrate that the proposed method achieves performance comparable with Faster R-CNN and outperforms eight other existing methods in accurately detecting vehicles under various conditions.
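The four reported evaluation metrics follow directly from the true-positive, false-positive, and false-negative counts of a detection run. The helper below (counts hypothetical, chosen only to land near the reported averages) shows the standard formulas:

```python
def detection_metrics(tp, fp, fn):
    """Completeness (recall), correctness (precision), quality, and
    F1-measure, as commonly used for object-based detection evaluation."""
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    quality = tp / (tp + fp + fn)
    f1 = 2 * completeness * correctness / (completeness + correctness)
    return completeness, correctness, quality, f1

# Hypothetical counts: 90 true detections, 3 false alarms, 9 missed vehicles.
print([round(m, 3) for m in detection_metrics(90, 3, 9)])
```

Note that quality, unlike F1, penalizes false positives and false negatives in a single ratio, which is why it is always the lowest of the four values.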

