The Framework of Novel k-means Embedded Cloud Computing Platform for Real-time Unmanned Aerial Vehicle (UAV) Remote Sensing Images Processing

2013 ◽  
Vol 50 (3) ◽  
pp. 322-336 ◽  
Author(s):  
Feng-Cheng Lin ◽  
Lan-Kun Chung ◽  
Chun-Ju Wang ◽  
Wen-Yuan Ku ◽  
Tien-Yin Chou

Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4115 ◽  
Author(s):  
Yuxia Li ◽  
Bo Peng ◽  
Lei He ◽  
Kunlong Fan ◽  
Zhenxu Li ◽  
...  

Roads are vital components of infrastructure, and their extraction has become a topic of significant interest in remote sensing. Because deep learning has become a popular approach to image processing and information extraction, researchers have paid increasing attention to extracting roads with neural networks. This article proposes improvements to a neural network for extracting roads from Unmanned Aerial Vehicle (UAV) remote sensing images. D-LinkNet was first considered for its high performance; however, the large scale of the network reduces computational efficiency. To address this, the article makes three improvements: (1) replace the initial block with a stem block; (2) rebuild the entire network from restructured ResNet units, yielding the improved network D-LinkNetPlus; (3) add a 1 × 1 convolution layer before the DBlock to reduce the number of input feature maps, cutting parameters and improving computational efficiency, and another 1 × 1 convolution layer after the DBlock to restore the required number of output channels, yielding a further improved network, B-D-LinkNetPlus (sketched below). The networks were compared and verified on the Massachusetts Roads Dataset. The results show that the improved networks reduce network size while improving the precision of road extraction.
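For illustration, here is a minimal PyTorch sketch of improvement (3): a 1 × 1 convolution squeezes the channels entering a dilated center block, and a second 1 × 1 convolution restores them afterwards. The channel counts and the structure of the stand-in DBlock are assumptions made for the sketch, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class BottleneckedDBlock(nn.Module):
    """1x1 convs squeeze the channels entering a dilated block
    and restore them afterwards, cutting parameter count."""
    def __init__(self, channels, squeezed):
        super().__init__()
        self.squeeze = nn.Conv2d(channels, squeezed, kernel_size=1)
        # Stand-in for D-LinkNet's dilated center block (DBlock):
        # stacked 3x3 convs with growing dilation enlarge the
        # receptive field without shrinking the feature map.
        self.dblock = nn.Sequential(
            nn.Conv2d(squeezed, squeezed, 3, padding=1, dilation=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(squeezed, squeezed, 3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(squeezed, squeezed, 3, padding=4, dilation=4),
            nn.ReLU(inplace=True),
        )
        self.restore = nn.Conv2d(squeezed, channels, kernel_size=1)

    def forward(self, x):
        return self.restore(self.dblock(self.squeeze(x)))

# With a 4x squeeze, each inner 3x3 conv has ~16x fewer parameters.
block = BottleneckedDBlock(channels=512, squeezed=128)
out = block(torch.randn(1, 512, 32, 32))  # shape preserved: (1, 512, 32, 32)
```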


2020 ◽  
pp. 002029402092226 ◽
Author(s):  
Cheng Xu ◽  
Chanjuan Yin ◽  
Daqing Huang ◽  
Wei Han ◽  
Dongzhen Wang

Three-dimensional positions of ground targets measured from optical remote-sensing images taken by an unmanned aerial vehicle play an important role in military and civil applications. The weakness of such a system is that, with a single unmanned aerial vehicle, localization accuracy is unstable and efficiency is low. In this paper, a novel multi-unmanned-aerial-vehicle cooperative target localization measurement method is proposed to overcome these issues. In the target localization measurement stage, three or more unmanned aerial vehicles simultaneously observe the same ground target and acquire multiple remote-sensing images. By the principle of perspective projection, the target point, its image point, and the camera's optical center are collinear, which yields nonlinear observation equations. These equations are then linearized using a Taylor expansion. Robust weighted least-squares estimation solves the equations with the objective of minimizing the weighted sum of squared reprojection errors from the target point to the multiple images, which makes the best use of the effective information and suppresses interference from gross errors in the observation data. An automatic weight-update strategy is designed: the weight matrix and the target-position coordinates are updated in each iteration until the stopping condition is satisfied (see the sketch below). Compared with the stereo-image-pair cross-target localization method, the multi-UAV cooperative method uses more observation information, which results in higher rendezvous accuracy and improved performance. Finally, the effectiveness and robustness of the method are verified by numerical simulation and flight testing. The results show that the proposed method effectively improves the precision of target localization and shows great potential for more accurate target localization in engineering applications.
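The iterative scheme lends itself to a compact sketch. The following numpy code shows one plausible realization under stated assumptions: a linear (DLT) initialization from the collinearity constraint, Gauss-Newton linearization of the reprojection error, and a Huber-style weight update each iteration. The paper's actual weight design and stopping rule may differ.

```python
import numpy as np

def triangulate_irls(Ps, uvs, iters=20, delta=1.0):
    """Estimate a 3-D ground point from >=3 UAV views by robust
    weighted least squares on reprojection error (Huber weights).
    Ps: list of 3x4 camera projection matrices; uvs: Nx2 pixel points."""
    # Linear (DLT) initialization from the collinearity constraint.
    A = []
    for P, (u, v) in zip(Ps, uvs):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1][:3] / Vt[-1][3]

    for _ in range(iters):
        J, r, w = [], [], []
        for P, (u, v) in zip(Ps, uvs):
            p = P @ np.append(X, 1.0)            # projected homogeneous point
            res = np.array([p[0] / p[2] - u, p[1] / p[2] - v])
            # Jacobian of the reprojection error w.r.t. X (chain rule).
            Jp = np.array([
                (P[0, :3] * p[2] - P[2, :3] * p[0]) / p[2] ** 2,
                (P[1, :3] * p[2] - P[2, :3] * p[1]) / p[2] ** 2,
            ])
            # Huber weight: views with large residuals are downweighted.
            n = np.linalg.norm(res)
            wi = 1.0 if n <= delta else delta / n
            J.append(Jp); r.append(res); w.append(wi * np.ones(2))
        J = np.vstack(J); r = np.hstack(r); W = np.diag(np.hstack(w))
        dX = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)  # Gauss-Newton step
        X += dX
        if np.linalg.norm(dX) < 1e-8:            # iteration stopping condition
            break
    return X
```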


2021 ◽  
Vol 2065 (1) ◽  
pp. 012020 ◽
Author(s):  
Nver Ren ◽  
Rong Jiang ◽  
Dongze Zhang

Abstract A cloud computing platform for autonomous-driving simulation, based on a B/S architecture and Docker container technology, is established in this paper. The platform's map editor module lets users design 3D scenes for simulating and testing automated driving systems. Once a customized roadway scene has been created, it is saved in OpenDRIVE format both for the cloud platform's server and for CarMaker's TestRun, in which all parameters of the virtual environment (vehicle, road, tires, etc.) are fully defined. Then, based on CarMaker's Application Online (APO) communication protocol, a local APO agent service is created. When the 27 vehicle-dynamics parameters are received from the CarMaker server, the agent forwards them to the cloud platform in real time over the UDP protocol; this data communication is handled entirely by the APO agent (a sketch follows below). With this in place, a seventeen-degree-of-freedom co-simulation between the cloud platform and CarMaker is successfully established for autonomous driving. The co-simulation experiment shows that the real-time data sampling frequency is 70 Hz, achieving synchronous simulation between CarMaker and the cloud platform.
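A minimal sketch of the forwarding half of such an agent, assuming the 27 values have already been pulled from CarMaker; the real agent would use IPG's proprietary APO client library, represented here only by a placeholder, and the hostname, port, and packing format are invented for illustration.

```python
import socket
import struct
import time

CLOUD_ADDR = ("cloud.example.com", 9000)   # hypothetical cloud endpoint
N_PARAMS = 27                              # vehicle-dynamics values per sample
RATE_HZ = 70                               # sampling rate reported in the paper

def read_apo_sample():
    """Placeholder for the agent's read from the CarMaker server.
    A real agent would call IPG's APO client library here."""
    return [0.0] * N_PARAMS

def run_agent():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    fmt = "<d%dd" % N_PARAMS               # timestamp + 27 doubles, little-endian
    period = 1.0 / RATE_HZ
    while True:
        sample = read_apo_sample()
        packet = struct.pack(fmt, time.time(), *sample)
        sock.sendto(packet, CLOUD_ADDR)    # fire-and-forget: UDP favours latency
        time.sleep(period)
```

UDP suits this link because a lost sample is immediately superseded by the next one at 70 Hz, so retransmission would only add latency.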

