An Oil Well Dataset Derived from Satellite-Based Remote Sensing

2021 ◽  
Vol 13 (6) ◽  
pp. 1132
Author(s):  
Zhibao Wang ◽  
Lu Bai ◽  
Guangfu Song ◽  
Jie Zhang ◽  
Jinhua Tao ◽  
...  

Estimation of the number and geo-location of oil wells is important for policymakers, considering their impact on energy resource planning. With recent developments in optical remote sensing, it is possible to identify oil wells from satellite images. Moreover, recent advances in deep learning frameworks for object detection in remote sensing make it possible to detect oil wells automatically from remote sensing images. In this paper, we collected a dataset named Northeast Petroleum University–Oil Well Object Detection Version 1.0 (NEPU–OWOD V1.0) based on high-resolution remote sensing images from Google Earth Imagery. Our database includes 1192 oil wells in 432 images from Daqing City, which hosts the largest oilfield in China. In this study, we compared nine state-of-the-art deep learning object detection models on these optical remote sensing images. Experimental results show that the state-of-the-art deep learning models achieve high precision on our collected dataset, which demonstrates their great potential for oil well detection in remote sensing.
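For illustration only: one common way to score such detections is to match predicted boxes to ground-truth wells at an IoU threshold and report the fraction of correct predictions. The sketch below (box format, threshold, and greedy matching are assumptions, not the authors' evaluation code) shows that computation in Python.

```python
# Minimal sketch of IoU-based precision scoring for axis-aligned boxes.
# Box format (x1, y1, x2, y2) and the 0.5 threshold are assumptions.

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision(pred_boxes, gt_boxes, thr=0.5):
    """Fraction of predictions matching an unused ground-truth box at IoU >= thr."""
    matched, tp = set(), 0
    for p in pred_boxes:
        best_j, best_iou = -1, 0.0
        for j, g in enumerate(gt_boxes):
            if j in matched:
                continue
            v = iou(p, g)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_iou >= thr:
            tp += 1
            matched.add(best_j)
    return tp / max(len(pred_boxes), 1)
```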

2021 ◽  
Vol 13 (13) ◽  
pp. 2524
Author(s):  
Ziyi Chen ◽  
Dilong Li ◽  
Wentao Fan ◽  
Haiyan Guan ◽  
Cheng Wang ◽  
...  

Deep learning models have brought great breakthroughs in building extraction from high-resolution optical remote-sensing images. Among recent research, the self-attention module has attracted considerable attention in many fields, including building extraction. However, most current deep learning models equipped with a self-attention module still lose sight of the effectiveness of reconstruction bias. By tipping the balance between the encoding and decoding abilities, i.e., making the decoding network considerably more complex than the encoding network, the semantic segmentation ability can be reinforced. To remedy the research gap in combining self-attention and reconstruction-bias modules for building extraction, this paper presents a U-Net architecture that combines both. In the encoding part, a self-attention module is added to learn attention weights over the inputs, so the network pays more attention to positions that may contain salient regions. In the decoding part, multiple large convolutional up-sampling operations are used to increase the reconstruction ability. We test our model on two openly available datasets, the WHU and Massachusetts Building datasets, achieving IoU scores of 89.39% and 73.49%, respectively. Compared with several recent well-known semantic segmentation methods and representative building extraction methods, our method achieves satisfactory results.
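As a rough illustration of the reconstruction-bias idea, the PyTorch sketch below pairs a shallow encoder carrying a self-attention block with a deliberately heavier decoder; all channel counts, depths, and module names are assumptions for exposition, not the paper's exact architecture.

```python
# Illustrative PyTorch sketch: light encoder with self-attention, heavy decoder.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Simple non-local self-attention over spatial positions (residual form)."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)            # B x HW x C'
        k = self.k(x).flatten(2)                             # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)                  # B x HW x HW
        v = self.v(x).flatten(2).transpose(1, 2)             # B x HW x C
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out

def conv_block(cin, cout, n=1):
    layers = []
    for i in range(n):
        layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                   nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class ReconstructionBiasedNet(nn.Module):
    """Encoder kept shallow; decoder deliberately heavier (more convolutions)."""
    def __init__(self, num_classes=1):
        super().__init__()
        self.enc1 = conv_block(3, 32, n=1)
        self.enc2 = conv_block(32, 64, n=1)
        self.attn = SelfAttention2d(64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)
        self.dec = conv_block(64, 32, n=3)   # heavier decoding stage
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                              # full resolution
        e2 = self.attn(self.enc2(self.pool(e1)))       # half resolution + attention
        d = self.up(e2)                                # back to full resolution
        d = self.dec(torch.cat([d, e1], dim=1))        # skip connection
        return self.head(d)

# mask_logits = ReconstructionBiasedNet()(torch.randn(1, 3, 64, 64))
```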


2021 ◽  
Vol 13 (10) ◽  
pp. 1921
Author(s):  
Xu He ◽  
Shiping Ma ◽  
Linyuan He ◽  
Le Ru ◽  
Chen Wang

Oriented object detection in optical remote sensing images (ORSIs) is a challenging task, since targets in ORSIs appear at arbitrary orientations and small scales and are densely packed. Current state-of-the-art oriented object detection models for ORSIs primarily evolved from anchor-based and direct regression-based detection paradigms. Nevertheless, they still suffer from the design difficulty of handcrafted anchor definitions and the learning complexity of direct localization regression. To tackle these issues, in this paper we propose a novel multi-sector oriented object detection framework called MSO2-Det, which quantizes the scale and orientation prediction of targets in ORSIs via an anchor-free classification-to-regression approach. Specifically, we first represent the arbitrarily oriented bounding box as four scale offsets and angles in the four quadrant sectors of the corresponding Cartesian coordinate system. Then, we divide the scale and angle space into multiple discrete sectors and obtain more accurate localization information through coarse-grained classification followed by fine-grained regression. In addition, to decrease the angular-sector classification loss and accelerate the network's convergence, we design a smooth angular-sector label (SASL) that smoothly distributes label values within a definite tolerance radius. Finally, we propose a localization-aided detection score (LADS) that better represents the confidence of a detected box by combining the category-classification score and the sector-selection score. The proposed MSO2-Det achieves state-of-the-art results on three widely used benchmarks: the DOTA, HRSC2016, and UCAS-AOD datasets.
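To make the smooth angular-sector label concrete, the sketch below builds a soft label over discrete angle sectors that peaks at the true sector and decays to zero within a tolerance radius, and combines a classification score with a sector-selection score into a single detection score. The Gaussian window, the radius of 4 sectors, and the geometric-mean combination are assumptions for illustration, not necessarily the paper's exact formulas.

```python
# Illustrative sketch of a smooth angular-sector label (SASL) and a simple
# localization-aided detection score; constants and the combination rule are assumptions.
import numpy as np

def smooth_sector_label(true_sector, num_sectors=36, radius=4):
    """Soft label over angular sectors using circular distance to the true sector."""
    idx = np.arange(num_sectors)
    dist = np.minimum(np.abs(idx - true_sector),
                      num_sectors - np.abs(idx - true_sector))
    label = np.exp(-(dist ** 2) / (2 * (radius / 2) ** 2))  # Gaussian window
    label[dist > radius] = 0.0                               # zero outside tolerance
    return label

def lads(cls_score, sector_score):
    """One plausible combination of category-classification and sector-selection
    scores (geometric mean); the actual LADS formula may differ."""
    return float(np.sqrt(cls_score * sector_score))

# soft = smooth_sector_label(true_sector=3)     # peaks at sector 3, decays to 0
# score = lads(cls_score=0.9, sector_score=0.8)
```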


Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2271
Author(s):  
Fukun Bi ◽  
Jinyuan Hou ◽  
Liang Chen ◽  
Zhihua Yang ◽  
Yanping Wang

Ship detection plays a significant role in military and civil fields. Although some state-of-the-art detection methods based on convolutional neural networks (CNNs) have certain advantages, they still cannot handle the challenges well, including the large size of the images, complex scene structure, heavy false-alarm interference, and inshore ships. This paper proposes a ship detection method for optical remote sensing images based on a visual attention enhanced network. To effectively reduce false alarms in non-ship areas and improve detection efficiency, we developed a lightweight local candidate scene network (L2CSN) to extract local candidate scenes containing ships. Then, for the selected local candidate scenes, we propose a ship detection method based on visual attention DSOD (VA-DSOD). Here, to enhance the detection performance and positioning accuracy for inshore ships, we both extract semantic features based on DSOD and embed a visual attention enhanced network in DSOD to extract visual features. We test the detection method on a large set of typical remote sensing data consisting of Google Earth and GaoFen-2 images. Taking the state-of-the-art sliding window DSOD (SW+DSOD) as a baseline, which achieves an average precision (AP) of 82.33%, the proposed method increases AP by 7.53%. The detection and localization performance of our proposed method outperforms the baseline in complex remote sensing scenes.
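The two-stage idea, filtering scenes cheaply before running the detector, can be sketched as below; the tile size, threshold, and the score_scene / detect_ships callables are placeholders, not the authors' actual networks or API.

```python
# Schematic two-stage inference pipeline in the spirit of the abstract: a lightweight
# scene scorer first selects candidate sub-scenes likely to contain ships, and the
# detector runs only on those. All function names here are hypothetical placeholders.

def sliding_windows(image, size=512, stride=384):
    """Yield (x, y, crop) tiles covering a large remote sensing image."""
    h, w = image.shape[:2]
    for y in range(0, max(h - size, 0) + 1, stride):
        for x in range(0, max(w - size, 0) + 1, stride):
            yield x, y, image[y:y + size, x:x + size]

def detect_large_image(image, score_scene, detect_ships, scene_thr=0.5):
    """Run the detector only on tiles accepted by the candidate-scene scorer."""
    detections = []
    for x, y, crop in sliding_windows(image):
        if score_scene(crop) < scene_thr:      # cheap filter: skip non-ship scenes
            continue
        for (bx1, by1, bx2, by2, score) in detect_ships(crop):
            # shift box coordinates back to the full-image frame
            detections.append((bx1 + x, by1 + y, bx2 + x, by2 + y, score))
    return detections
```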


2021 ◽  
Vol 30 ◽  
pp. 1305-1317
Author(s):  
Qijian Zhang ◽  
Runmin Cong ◽  
Chongyi Li ◽  
Ming-Ming Cheng ◽  
Yuming Fang ◽  
...  

2021 ◽  
Vol 13 (4) ◽  
pp. 747
Author(s):  
Yanghua Di ◽  
Zhiguo Jiang ◽  
Haopeng Zhang

Fine-grained visual categorization (FGVC) is an important and challenging problem due to large intra-class differences and small inter-class differences caused by deformation, illumination, viewing angles, etc. Although major advances have been achieved for natural images in the past few years, thanks to popular datasets such as CUB-200-2011, Stanford Cars, and Aircraft, fine-grained ship classification in remote sensing images has rarely been studied because of the relative scarcity of publicly available datasets. In this paper, we investigate a large amount of remote sensing image data of sea ships and determine the 42 most common categories for fine-grained visual categorization. Based on our previous DSCR dataset for ship classification in remote sensing images, we collect additional remote sensing images containing warships and civilian ships of various scales from Google Earth and from other popular remote sensing image datasets, including DOTA, HRSC2016, and NWPU VHR-10. We call our dataset FGSCR-42, a dataset for Fine-Grained Ship Classification in Remote sensing images with 42 categories. The whole FGSCR-42 dataset contains 9320 images of the most common types of ships. We evaluate popular object classification algorithms and fine-grained visual categorization algorithms to build a benchmark. Our FGSCR-42 dataset is publicly available on our webpage.
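A minimal classification baseline on such a dataset might look like the sketch below, assuming an ImageFolder-style layout with one sub-folder per category; the ResNet-50 backbone, directory name, and hyper-parameters are illustrative choices, not the paper's benchmark configuration.

```python
# Minimal fine-grained classification baseline sketch (layout, backbone, and
# hyper-parameters are assumptions, not the FGSCR-42 benchmark setup).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

def build_baseline(data_dir="FGSCR-42/train", num_classes=42, batch_size=32):
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    ds = datasets.ImageFolder(data_dir, transform=tfm)     # one folder per category
    loader = torch.utils.data.DataLoader(ds, batch_size=batch_size, shuffle=True)

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # 42-way classifier head
    return model, loader

def train_one_epoch(model, loader, lr=1e-3, device="cpu"):
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images.to(device)), labels.to(device))
        loss.backward()
        opt.step()
```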


2017 ◽  
Vol 12 ◽  
pp. 05012
Author(s):  
Ying Liu ◽  
Hong-Yuan Cui ◽  
Zheng Kuang ◽  
Guo-Qing Li
