Precision management of winter wheat based on aerial images and hyperspectral data obtained by unmanned aircraft

Author(s): Han Yunxia, Li Minzan, Zhang Xijie, Jia Liangliang, Chen Xingping, et al.
2005
Author(s): Yunxia Han, Minzan Li, Liangliang Jia, Xijie Zhang, Fusuo Zhang

2006
Author(s): Wolfgang Koppe, Rainer Laudien, Martin L. Gnyp, Liangliang Jia, Fei Li, et al.

2021, Vol 267, pp. 112724
Author(s): Yao Zhang, Jian Hui, Qiming Qin, Yuanheng Sun, Tianyuan Zhang, et al.

2017, Vol 3 (2), pp. 207-218
Author(s): Julia M. Hildebrand

Abstract Consumer drones are entering everyday spaces with increasing frequency and impact as more and more hobbyists use the aerial tool for recreational photography and videography. In this article, I seek to expand the common reference to drones as “unmanned aircraft systems” by conceptualising the hobby drone practice more broadly as a heterogeneous, mobile assemblage of virtual and physical practices and human and non-human actors. Drawing on initial ethnographic fieldwork and interviews with drone hobbyists, as well as ongoing cyber-ethnographic research on social networking sites, this article gives an overview of how the mobile drone practice needs to be situated alongside people, things, and data in physical and virtual spheres. As drone hobbyists set out to fly their devices at a given time and place, a number of relations reaching across atmospheric (e.g. weather conditions, daylight hours, GPS availability), geographic (e.g. volumetric obstacles), mobile (e.g. flight restrictions, ground traffic), and social (e.g. bystanders) dimensions demand attention. Furthermore, when drone operators share their aerial images online, visual (e.g. live stream) and cyber-social relations (e.g. comments, scrutiny) come into play, which may similarly impact the drone practice in terms of the pilot’s performance. While drone hobbyists appear to be interested in keeping a “low profile” in the physical space, many pilots manage a comparatively “high profile” in the virtual sphere with respect to the sharing of their images. Since the recreational trend brings together elements of convergence, location-awareness, and real-time feedback, I suggest approaching consumer drones as what Scott McQuire (2016) terms “geomedia.” Moreover, consumer drones open up different “cybermobilities” (Adey/Bevan 2006), understood as connected movement that flows through and shapes both physical and virtual spaces simultaneously. The way that many drone hobbyists appear to navigate these different environments, sometimes at the same time, has methodological implications for ethnographic research on consumer drones. Ultimately, the assemblage perspective brings together aviation-related and socio-cultural concerns relevant in the context of consumer drones as a digital communication technology and visual production tool.


2020, Vol 11 (11), pp. 1032-1041
Author(s): Lin Wang, Qinhong Liao, Xiaobin Xu, Zhenhai Li, Hongchun Zhu

Electronics, 2020, Vol 9 (4), pp. 583
Author(s): Khang Nguyen, Nhut T. Huynh, Phat C. Nguyen, Khanh-Duy Nguyen, Nguyen D. Vo, et al.

Unmanned aircraft systems, or drones, enable us to record or capture many scenes from a bird’s-eye view, and they have been rapidly deployed across a wide range of practical domains, e.g., agriculture, aerial photography, fast delivery, and surveillance. Object detection is one of the core steps in understanding videos collected by drones. However, this task is very challenging due to the unconstrained viewpoints and low resolution of the captured videos. While modern deep-learning object detectors have recently achieved great success on general benchmarks, e.g., PASCAL-VOC and MS-COCO, the robustness of these detectors on aerial images captured by drones is not well studied. In this paper, we present an evaluation of state-of-the-art deep-learning detectors, including Faster R-CNN (Faster Region-based CNN), R-FCN (Region-based Fully Convolutional Networks), SNIPER (Scale Normalization for Image Pyramids with Efficient Resampling), the Single-Shot Detector (SSD), YOLO (You Only Look Once), RetinaNet, and CenterNet, for object detection in videos captured by drones. We conduct experiments on the VisDrone2019 dataset, which contains 96 videos with 39,988 annotated frames, and provide insights into efficient object detectors for aerial images.
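The detector comparisons described in this abstract all rest on the same matching criterion: a predicted box counts as a true positive only if its intersection-over-union (IoU) with a ground-truth box exceeds a threshold (commonly 0.5). A minimal sketch of that computation, assuming axis-aligned boxes in (x1, y1, x2, y2) pixel coordinates (a generic illustration, not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def is_true_positive(pred, gt, threshold=0.5):
    """A prediction matches a ground-truth box if IoU exceeds the threshold."""
    return iou(pred, gt) >= threshold
```

Benchmark toolkits such as the COCO evaluation API implement the same idea with greedy matching across confidence-sorted predictions, averaged over multiple IoU thresholds to produce mAP.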

