Road Detection Based on Edge Feature with GAC Model in Aerial Image

Author(s):  
Linying Zhou ◽  
Zhou Zhou ◽  
Hang Ning

Road detection from aerial images remains a challenging task, since results are heavily influenced by spectral reflectance, shadows, and occlusions. To increase road detection accuracy, this paper studies a road detection method based on a geodesic active contour (GAC) model with edge feature extraction and segmentation. First, an edge feature is extracted using the proposed gradient magnitude with the Canny operator. Then, a reconstructed gradient map is segmented by the watershed transform to obtain the initial contour. Last, combining the edge feature and the initial contour, a boundary stopping function is applied in the GAC model to produce the final road boundary. Experimental results, compared against other methods under the F-measure, show that the proposed method achieves satisfying results.
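As an illustration of the boundary stopping function a GAC model relies on, the sketch below computes a gradient-magnitude edge indicator in plain NumPy. The paper's Canny-based edge feature is more elaborate; this is only a minimal stand-in showing the classic form g = 1 / (1 + |∇I|²), which is near 1 on flat pavement and near 0 at road edges.

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude (a simple stand-in for the
    Canny-based edge map described in the paper)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def boundary_stopping(img, alpha=1.0):
    """Classic GAC edge indicator g = 1 / (1 + alpha * |grad I|^2):
    close to 1 in flat regions, close to 0 on strong edges, so the
    evolving contour slows down and stops at road boundaries."""
    return 1.0 / (1.0 + alpha * gradient_magnitude(img) ** 2)
```

In a full pipeline, g would be evaluated on the reconstructed gradient map and fed to the level-set evolution together with the watershed-derived initial contour.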

2019 ◽  
Vol 11 (18) ◽  
pp. 2176 ◽  
Author(s):  
Chen ◽  
Zhong ◽  
Tan

Detecting objects in aerial images is a challenging task due to the multiple orientations and relatively small size of the objects. Although many traditional detection models have demonstrated acceptable performance by using an image pyramid and multiple templates in a sliding-window manner, such techniques are inefficient and costly. Recently, convolutional neural networks (CNNs) have successfully been used for object detection, demonstrating considerably better performance than traditional detection methods; however, this success has not been extended to aerial images. To overcome these problems, we propose a detection model based on two CNNs. One CNN is designed to propose many object-like regions, generated from feature maps at multiple scales and hierarchies together with orientation information. With this design, small objects are positioned more accurately, and the generated regions with orientation information better fit objects arranged at arbitrary orientations. The other CNN is designed for object recognition; it first extracts the features of each generated region and then makes the final decisions. The results of extensive experiments on the vehicle detection in aerial imagery (VEDAI) and overhead imagery research data set (OIRDS) datasets indicate that the proposed model performs well in terms of both detection accuracy and detection speed.
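The paper does not spell out its oriented-region encoding, but regions with orientation information are conventionally parameterized as (cx, cy, w, h, θ). The hypothetical sketch below converts that parameterization into corner coordinates, the form needed to crop a rotated proposal for the recognition CNN.

```python
import numpy as np

def oriented_box(cx, cy, w, h, theta):
    """Corners of an oriented region proposal, given center (cx, cy),
    size (w, h), and rotation angle theta in radians. Returns a (4, 2)
    array of (x, y) corners in counter-clockwise order."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])          # 2-D rotation matrix
    half = np.array([[-w, -h], [w, -h], [w, h], [-w, h]]) / 2.0
    return half @ R.T + np.array([cx, cy])
```

An axis-aligned box is the special case theta = 0; arbitrary theta lets a proposal hug a vehicle parked at any angle, which is the benefit the abstract describes.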


Author(s):  
Y. Wang ◽  
G. Wang ◽  
Y. Li ◽  
Y. Huang

Vehicle detection from high-resolution aerial images facilitates the study of public traveling behavior on a large scale. In the context of roads, a simple and effective algorithm is proposed to extract texture-salient vehicles from the pavement surface. Texturally speaking, most of the pavement surface changes little, except in the neighborhood of vehicles and edges. Within a certain distance of the given road-network vector, the aerial image is decomposed into a smoothly varying cartoon part and an oscillatory textural part. A variational model with a Total Variation regularization term and an L1 fidelity term (TV-L1) is adopted to obtain the salient texture of vehicles and the cartoon surface of the pavement. To eliminate noise from the texture decomposition, pavement-surface regions are refined by seed growing and morphological operations. Based on shape-saliency analysis of the central objects in those regions, vehicles are detected as objects with rectangular shape saliency. The proposed algorithm is tested on a diverse set of aerial images acquired at various resolutions and in various scenarios around China. Experimental results demonstrate that the proposed algorithm detects vehicles at a rate of 71.5% with a false alarm rate of 21.5%, and that processing takes 39.13 seconds for a 4656 × 3496 aerial image. It is promising for large-scale transportation management and planning.
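To make the cartoon–texture split concrete, here is a toy explicit gradient-descent sketch of the TV-L1 model, minimizing TV(u) + λ·|f − u|₁. Production TV-L1 solvers use primal-dual schemes rather than this naive iteration, so treat it purely as an illustration of how the cartoon part u absorbs the pavement while f − u keeps the oscillatory vehicle texture.

```python
import numpy as np

def tv_l1_cartoon(f, lam=1.0, step=0.1, iters=200, eps=1e-6):
    """Toy explicit gradient descent on the TV-L1 energy.
    Returns the cartoon part u; the texture part is f - u."""
    u = f.astype(float).copy()
    for _ in range(iters):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux ** 2 + uy ** 2) + eps
        # mean-curvature term: div(grad u / |grad u|), the TV gradient
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        # subgradient of the L1 fidelity term lam * |f - u|_1
        u += step * (div + lam * np.sign(f - u))
    return u
```

Small λ removes small-scale bright blobs (vehicles) from u while leaving the large flat pavement intact, which is exactly why TV-L1 suits this detection task.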


2013 ◽  
Vol 765-767 ◽  
pp. 2189-2194
Author(s):  
Chun Guang Duan ◽  
Shu Yi Pang ◽  
Hsin Guan

Research on vehicle dynamics performance on flat roads is already well developed. Worldwide, for lack of a suitable simulation environment, the analysis of vehicle driving and handling performance on non-level roads is still at an exploratory stage. This paper establishes a tire–road detection model using the open-source collision detection library OPCODE. A ray cast from the wheel center intersects the road model, yielding the precise contact point and the road normal vector. A computer program based on this detection model was then embedded into a complex vehicle model to simulate driving on longitudinal and lateral slopes. The simulation results show that each detection takes on the order of microseconds; within the 1 ms vehicle dynamics calculation step, the road detection model meets real-time simulation requirements, and its detection accuracy satisfies the requirements of full-vehicle simulation.
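The core query OPCODE answers here is a ray/triangle-mesh intersection. The sketch below implements the standard Möller–Trumbore test for a single triangle in NumPy, returning the contact point and surface normal, as an illustration of what each wheel-center ray computes against the road mesh (the triangle data is hypothetical).

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.
    Returns (hit_point, unit_normal) or None when the ray misses."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None                      # outside first barycentric bound
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None                      # outside second barycentric bound
    t = np.dot(e2, q) * inv
    if t < 0.0:
        return None                      # triangle is behind the ray
    normal = np.cross(e1, e2)
    return origin + t * direction, normal / np.linalg.norm(normal)
```

A real road model runs this test against many triangles per wheel (accelerated by a bounding-volume hierarchy, which is what OPCODE provides), keeping the nearest hit as the contact point.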


2021 ◽  
Author(s):  
◽  
Pooparat Plodpradista

The revised unpaved road detection system (RURD) is a novel method for detecting unpaved roads in an arid environment from color imagery collected by a forward-looking camera mounted on a moving platform. The objective is to develop and validate a novel system that can detect an unpaved road at a look-ahead distance of up to 40 meters without an expensive sensor such as LIDAR, using instead a low-cost color camera. The RURD system is composed of two stages: road region estimation (RRE) and road model formation (RMF). The RRE stage classifies image patches selected at a 20-meter distance from the camera and labels each as road or non-road. The classification result serves as a high-confidence road area in the image, which is used in the RMF stage. The RMF stage uses a log-Gabor filter bank to extract road pixels connected to the high-confidence road region and fits a 3rd-degree polynomial curve to represent the road model in a given image. The road model allows the system to extend the detection range from 20 meters to a farther look-ahead distance. The RURD system is evaluated on two years' worth of collected data, measuring both spatial and temporal precision. The system is also benchmarked against Rasmussen's algorithm "Grouping Dominant Orientations for Ill-Structured Road Following", over which it shows an average detection accuracy improvement of more than 30%.
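The road-model step reduces to fitting a 3rd-degree polynomial to the extracted road pixels. A minimal sketch with `np.polyfit`, assuming the road is modeled as image column as a function of row (the abstract does not specify the parameterization):

```python
import numpy as np

def fit_road_model(rows, cols, degree=3):
    """Fit the 3rd-degree polynomial road model col = f(row) to road
    pixel coordinates, e.g. those passed by the log-Gabor filter bank
    and connected to the high-confidence road region."""
    coeffs = np.polyfit(rows, cols, degree)
    return np.poly1d(coeffs)
```

Evaluating the fitted polynomial at image rows beyond the 20-meter patch band is what extends the detection range toward the 40-meter look-ahead distance.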


2019 ◽  
Vol 11 (8) ◽  
pp. 930 ◽  
Author(s):  
Xiangrong Zhang ◽  
Xiao Han ◽  
Chen Li ◽  
Xu Tang ◽  
Huiyu Zhou ◽  
...  

Aerial photographs and satellite images are among the resources used for earth observation. In practice, automated detection of roads in aerial images is of significant value for applications such as car navigation, law enforcement, and fire services. In this paper, we present a novel road extraction method for aerial images based on an improved generative adversarial network, an end-to-end framework requiring only a few samples for training. Experimental results on the Massachusetts Roads Dataset show that the proposed method outperforms several state-of-the-art techniques in terms of detection accuracy, recall, precision, and F1-score.


2019 ◽  
Vol 4 (1) ◽  
pp. 9
Author(s):  
Takuro Oki ◽  
Ryusuke Miyamoto ◽  
Hiroyuki Yomo ◽  
Shinsuke Hara

In the fields of professional and amateur sports, players' health and physical and physiological conditions during exercise should be properly monitored and managed. The authors of this paper previously proposed a real-time vital-sign monitoring system for players using a wireless multi-hop sensor network that transmits their vital data. However, existing routing schemes based on the received signal strength indicator or the global positioning system do not work well, because of the high speeds and density of the sensor nodes attached to players. To solve this problem, we proposed a novel scheme, image-assisted routing (IAR), which estimates the locations of sensor nodes using images captured by cameras mounted on unmanned aerial vehicles. However, it is not clear where the best viewpoints for aerial player detection are. In this study, the authors investigated detection accuracy from several viewpoints using an aerial-image dataset generated with computer graphics. Experimental results show that detection accuracy was best when the viewpoints were slightly offset from directly above the center of the field. In the best case, detection accuracy was very good: a 0.005524 miss rate at 0.01 false positives per image. These results are informative for player detection using aerial images and can help realize IAR.
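The reported figure, miss rate at a fixed false-positives-per-image (FPPI) budget, is a standard detection metric. A sketch of how it is computed from scored detections (the data here is illustrative, not from the paper):

```python
import numpy as np

def miss_rate_at_fppi(scores, is_tp, n_gt, n_images, target_fppi=0.01):
    """Miss rate at a target FPPI. `scores` are detection confidences,
    `is_tp` flags each detection as a true positive, `n_gt` is the total
    number of ground-truth objects, `n_images` the number of test images."""
    order = np.argsort(-np.asarray(scores, float))     # descending score
    hits = np.asarray(is_tp, float)[order]
    tp = np.cumsum(hits)
    fp = np.cumsum(1.0 - hits)
    fppi = fp / n_images
    ok = fppi <= target_fppi                           # thresholds in budget
    if not ok.any():
        return 1.0                                     # no threshold qualifies
    return 1.0 - tp[ok].max() / n_gt
```

Sweeping the confidence threshold trades missed players against false alarms; the metric reports the best achievable miss rate while staying within the FPPI budget.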



2020 ◽  
Vol 12 (22) ◽  
pp. 3750
Author(s):  
Wei Guo ◽  
Weihong Li ◽  
Zhenghao Li ◽  
Weiguo Gong ◽  
Jinkai Cui ◽  
...  

Object detection is one of the core technologies in aerial image processing and analysis. Although existing deep-learning-based aerial image object detection methods have made progress, some problems remain: (1) most existing methods fail to simultaneously consider the multi-scale and multi-shape characteristics of objects in aerial images, which may lead to missed or false detections; (2) high-precision detection generally requires a large and complex network structure, which makes it difficult to achieve high detection efficiency and to deploy the network on resource-constrained devices for practical applications. To solve these problems, we propose a slimmer network for more efficient object detection in aerial images. First, we design a polymorphic module (PM) that simultaneously learns multi-scale and multi-shape object features, so as to better detect the widely varying objects in aerial images. Then, we design a group attention module (GAM) to better utilize the diverse concatenated features in the network. Combining multiple detection heads with adaptive anchors and the above two modules, we propose a one-stage network called PG-YOLO that achieves higher detection accuracy. Based on the proposed network, we further propose a more efficient channel pruning method that slims the network from 63.7 million (M) parameters to 3.3M, a 94.8% reduction in parameter size, significantly improving efficiency for real-time detection. Finally, comparative experiments on three public aerial datasets show that the proposed method outperforms state-of-the-art methods.
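The paper's channel pruning method is more elaborate than this, but the basic mechanism behind slimming a network can be sketched as magnitude-based channel pruning: rank a convolution layer's output channels by L1 norm and keep only the strongest fraction.

```python
import numpy as np

def prune_channels(weights, keep_ratio):
    """Magnitude-based channel pruning sketch. `weights` is a conv
    weight tensor of shape (out_channels, in_channels, kh, kw).
    Keeps the top `keep_ratio` fraction of output channels by L1 norm
    and returns (pruned_weights, kept_channel_indices)."""
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(weights.shape[0] * keep_ratio)))
    keep = np.sort(np.argsort(-norms)[:n_keep])   # indices, original order
    return weights[keep], keep
```

In practice the kept indices must also be propagated to the next layer's input channels, and the slimmed network is fine-tuned to recover accuracy; that follow-up is what makes a 94.8% parameter reduction viable.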


2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown under difficult open-field circumstances: complex lighting conditions and the non-ideal crop maintenance practices of local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, together with the aerial image dataset and a hand-made, pixel-precise ground truth segmentation, to facilitate comparison among different algorithms.
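The abstract does not define how its mean accuracy is computed; a common convention for binary segmentation is the mean of the per-class pixel accuracies (crop and non-crop scored separately, then averaged), sketched here as an assumption:

```python
import numpy as np

def mean_accuracy(pred, truth):
    """Mean of per-class pixel accuracies for a binary (crop / non-crop)
    segmentation. `pred` and `truth` are boolean masks of equal shape."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    accs = []
    for cls in (False, True):
        mask = truth == cls
        if mask.any():                       # skip classes absent from truth
            accs.append((pred[mask] == cls).mean())
    return float(np.mean(accs))
```

Averaging per class keeps a dominant background from inflating the score, which matters when crop pixels are the minority of the image.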


2021 ◽  
Vol 11 (14) ◽  
pp. 6269
Author(s):  
Wang Jing ◽  
Wang Leqi ◽  
Han Yanling ◽  
Zhang Yun ◽  
Zhou Ruyan

For fast detection and recognition of apple fruit targets, this paper builds on the real-time DeepSnake deep-learning instance segmentation model to provide an algorithmic basis for the practical application and promotion of apple-picking robots. Since the initial detection results strongly affect the subsequent edge prediction, this paper proposes an automatic detection method for apple fruit targets in natural environments based on saliency detection and traditional color-difference methods. Combined with the original image, the histogram backprojection algorithm is used to further optimize the saliency-map results. In view of possible overlapping fruit regions in the saliency map, a dynamic adaptive overlapping-target separation algorithm is proposed to locate each single target fruit and to determine the initial contour for DeepSnake. Finally, the target fruit is labeled based on the instance segmentation results. In the experiment, 300 training images were used to train the DeepSnake model, and a self-built dataset containing 1036 pictures of apples in various situations under natural environments was used for testing. The detection accuracies for non-overlapping shaded fruits, overlapping fruits, fruits shaded by branches and leaves, and fruits under poor illumination were 99.12%, 94.78%, 90.71%, and 94.46%, respectively. The overall detection accuracy was 95.66%, and the average processing time over the 1036 test images was 0.42 s, showing that the proposed algorithm can effectively separate overlapping fruits from a relatively small training set and achieve rapid, accurate detection of apple targets.
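Histogram backprojection, the optimization step named above, scores each image pixel by how frequent its value is in a model region's histogram. A minimal single-channel NumPy sketch (the paper likely applies it to a color channel such as hue; the bin count here is an assumption):

```python
import numpy as np

def backproject(image, model_patch, bins=16):
    """Single-channel histogram backprojection: each pixel in `image`
    receives the normalized frequency of its intensity bin within the
    histogram of `model_patch` (e.g. a known apple-colored region).
    Both inputs are uint8 arrays with values in [0, 255]."""
    hist, _ = np.histogram(model_patch, bins=bins, range=(0, 256))
    hist = hist / max(hist.sum(), 1)                   # normalize to [0, 1]
    idx = np.clip((image.astype(int) * bins) // 256, 0, bins - 1)
    return hist[idx]                                   # per-pixel likelihood
```

Pixels matching the model's color distribution score near 1 and stand out in the resulting saliency map, which is then refined before contour initialization.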

