Mapping Multi-Temporal Population Distribution in China from 1985 to 2010 Using Landsat Images via Deep Learning

2021 ◽  
Vol 13 (17) ◽  
pp. 3533
Author(s):  
Haoming Zhuang ◽  
Xiaoping Liu ◽  
Yuchao Yan ◽  
Jinpei Ou ◽  
Jialyu He ◽  
...  

Fine knowledge of the spatiotemporal distribution of the population is fundamental in a wide range of fields, including resource management, disaster response, public health, and urban planning. The United Nations’ Sustainable Development Goals also require the accurate and timely assessment of where people live to formulate, implement, and monitor sustainable development policies. However, due to the lack of appropriate auxiliary datasets and effective methodological frameworks, continuous multi-temporal gridded population data over long historical periods are rarely available to aid our understanding of the spatiotemporal evolution of the population. In this study, we developed a framework that integrates a ResNet-N deep learning architecture, which accounts for neighborhood effects, with a large volume of Landsat-5 images from Google Earth Engine, overcoming both the data and methodology obstacles to rapid multi-temporal population mapping over a long historical period at a large scale. Using this framework, we produced fine-scale multi-temporal gridded population data (1 km × 1 km) for China at 5-year intervals over the 1985–2010 period. The produced multi-temporal population data were validated with available census data and achieved comparable performance. By analyzing the multi-temporal population grids, we revealed the spatiotemporal evolution of population distribution in China from 1985 to 2010, characterized by the concentration of population in large cities and the contraction of small and medium-sized cities. The proposed framework demonstrates the feasibility of mapping multi-temporal gridded population distribution at a large scale over a long period in a timely and low-cost manner, which is particularly useful in low-income and data-poor areas.
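The abstract gives no implementation details; the following is a minimal sketch, in the spirit of the approach described, of a ResNet-style regressor that reads a Landsat neighborhood window and predicts the population count of the central 1 km cell. The patch size, band count, and layer widths are assumptions, not the authors' ResNet-N configuration.

```python
# Minimal sketch of a residual CNN that regresses a 1 km population count from a
# stack of Landsat-5 bands covering a cell and its neighborhood (assumed sizes).
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    # Two 3x3 convolutions with an identity shortcut.
    shortcut = x
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    x = layers.Add()([x, shortcut])
    return layers.Activation("relu")(x)

def build_population_regressor(patch_size=33, n_bands=6):
    # Input: Landsat reflectance for the target cell plus its neighborhood.
    inputs = layers.Input(shape=(patch_size, patch_size, n_bands))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    for filters in (32, 64, 64):
        x = residual_block(x, filters)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(64, activation="relu")(x)
    # Non-negative population count for the central 1 km grid cell.
    outputs = layers.Dense(1, activation="relu")(x)
    return tf.keras.Model(inputs, outputs)

model = build_population_regressor()
model.compile(optimizer="adam", loss="mse")
```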

2018 ◽  
Vol 7 (10) ◽  
pp. 389 ◽  
Author(s):  
Wei He ◽  
Naoto Yokoya

In this paper, we present optical image simulation from synthetic aperture radar (SAR) data using deep learning-based methods. Two models are proposed to test the possibilities: optical image simulation directly from SAR data, and simulation from multi-temporal SAR-optical data. The deep learning methods chosen to realize the models are a convolutional neural network (CNN) with a residual architecture and a conditional generative adversarial network (cGAN). We validate our models using the Sentinel-1 and -2 datasets. The experiments demonstrate that the model with multi-temporal SAR-optical data can successfully simulate the optical image, whereas the state-of-the-art model with SAR data alone as input fails. The optical image simulation results indicate the potential of SAR-optical information blending for subsequent applications such as large-scale cloud removal and temporal super-resolution of optical data. We also investigate the sensitivity of the proposed models to the training samples and discuss possible future directions.
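As a rough illustration of the two components named above, the sketch below shows a pix2pix-style setup: a residual CNN generator that maps a multi-temporal SAR-optical stack to an optical image, and a conditional discriminator. Channel counts and depths are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a residual generator and conditional (PatchGAN) discriminator
# for SAR-to-optical image simulation; sizes and channel counts are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(in_channels=2 + 13, out_channels=13, size=256):
    # Input: SAR at the target date stacked with an earlier optical image.
    inp = layers.Input(shape=(size, size, in_channels))
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
    for _ in range(6):                      # residual blocks
        skip = x
        x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(64, 3, padding="same")(x)
        x = layers.Add()([x, skip])
    out = layers.Conv2D(out_channels, 3, padding="same", activation="tanh")(x)
    return tf.keras.Model(inp, out)

def build_discriminator(in_channels=2 + 13, out_channels=13, size=256):
    # Conditional discriminator: judges (input, optical) pairs patch-wise.
    cond = layers.Input(shape=(size, size, in_channels))
    img = layers.Input(shape=(size, size, out_channels))
    x = layers.Concatenate()([cond, img])
    for filters in (64, 128, 256):
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    out = layers.Conv2D(1, 4, padding="same")(x)  # patch-level real/fake logits
    return tf.keras.Model([cond, img], out)
```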


2021 ◽  
Vol 13 (9) ◽  
pp. 1740
Author(s):  
Chenxi Lin ◽  
Zhenong Jin ◽  
David Mulla ◽  
Rahul Ghosh ◽  
Kaiyu Guan ◽  
...  

Timely and accurate monitoring of tree crop extent and productivity is necessary for informing policy-making and investments. However, except for a very few tree species (e.g., oil palms) with distinctive canopies and extensive planting, most small-crown tree crops are understudied in the remote sensing domain. To conduct large-scale small-crown tree mapping, several key questions remain to be answered, such as the choice of satellite imagery with different spatial and temporal resolutions and model generalizability. In this study, we use olive trees in Morocco as an example to explore these two questions in mapping small-crown orchard trees using 0.5 m DigitalGlobe (DG) and 3 m Planet imagery and deep learning (DL) techniques. Results show that, compared to DG imagery, whose mean overall accuracy (OA) reaches 0.94 and 0.92 in two climatic regions, Planet imagery has limited capacity to detect olive orchards even with multi-temporal information. The temporal information of Planet helps only when enough spatial features can be captured, e.g., when olive trees have large crowns (e.g., >3 m) and small tree spacings (e.g., <3 m). Regarding model generalizability, experiments with DG imagery show a decrease of up to 5% in F1-score and 4% in OA when transferring models to new regions with a distribution shift in the feature space. Findings from this study can serve as a practical reference for many other similar mapping tasks (e.g., nuts and citrus) around the world.


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6936
Author(s):  
Remis Balaniuk ◽  
Olga Isupova ◽  
Steven Reece

This work explores the combination of free cloud computing, free open-source software, and deep learning methods to analyze a real, large-scale problem: the automatic country-wide identification and classification of surface mines and mining tailings dams in Brazil. Locations of officially registered mines and dams were obtained from the Brazilian government's open data resource. Multispectral Sentinel-2 satellite imagery, obtained and processed on the Google Earth Engine platform, was used to train and test deep neural networks using the TensorFlow 2 application programming interface (API) and the Google Colaboratory (Colab) platform. Fully convolutional neural networks were used in an innovative way to search for unregistered ore mines and tailings dams across large areas of the Brazilian territory. The efficacy of the approach is demonstrated by the discovery of 263 mines that do not have an official mining concession. This exploratory work highlights the potential of a set of freely available new technologies for building low-cost data science tools with high social impact. At the same time, it discusses and suggests practical solutions for the complex and serious problem of illegal mining and the proliferation of tailings dams, which pose high risks to the population and the environment, especially in developing countries.
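The data-access step described above can be sketched with the Earth Engine Python API as below. The area of interest, date range, cloud threshold, and band selection are hypothetical placeholders, not the study's actual parameters.

```python
# Minimal sketch: build a cloud-filtered Sentinel-2 median composite in Google
# Earth Engine and export it for training in TensorFlow 2 / Colab.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([-48.0, -20.0, -43.0, -16.0])  # hypothetical AOI
composite = (
    ee.ImageCollection("COPERNICUS/S2_SR")
    .filterBounds(region)
    .filterDate("2019-01-01", "2019-12-31")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
    .median()
    .select(["B4", "B3", "B2", "B8"])      # red, green, blue, NIR
)

# Export the composite to Google Drive as GeoTIFF tiles for model training.
task = ee.batch.Export.image.toDrive(
    image=composite, description="s2_composite", region=region,
    scale=10, maxPixels=1e13)
task.start()
```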


2020 ◽  
Vol 12 (18) ◽  
pp. 2997 ◽  
Author(s):  
Tianwen Zhang ◽  
Xiaoling Zhang ◽  
Xiao Ke ◽  
Xu Zhan ◽  
Jun Shi ◽  
...  

Ship detection in synthetic aperture radar (SAR) images is becoming a research hotspot. In recent years, with the rise of artificial intelligence, deep learning has come to dominate the SAR ship detection community owing to its higher accuracy, faster speed, and reduced human intervention. However, there is still a lack of a reliable deep learning SAR ship detection dataset suitable for the practical application of ship detection in large-scene spaceborne SAR images. To address this problem, this paper releases a Large-Scale SAR Ship Detection Dataset-v1.0 (LS-SSDD-v1.0), built from Sentinel-1 imagery, for small ship detection against large-scale backgrounds. LS-SSDD-v1.0 contains 15 large-scale SAR images whose ground truths were labeled by SAR experts with support from the Automatic Identification System (AIS) and Google Earth. To facilitate network training, the large-scale images are directly cut into 9000 sub-images, which also makes it convenient to present detection results on the full large-scale SAR images. Notably, LS-SSDD-v1.0 has five advantages: (1) large-scale backgrounds, (2) small ship detection, (3) abundant pure backgrounds, (4) a fully automatic detection flow, and (5) numerous standardized research baselines. Finally, exploiting the abundant pure backgrounds, we also propose a Pure Background Hybrid Training mechanism (PBHT-mechanism) to suppress false alarms from land in large-scale SAR images. Experimental results of an ablation study verify the effectiveness of the PBHT-mechanism. LS-SSDD-v1.0 can inspire related scholars to conduct extensive research into SAR ship detection methods with engineering application value, which is conducive to the progress of SAR intelligent interpretation technology.
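The tiling step mentioned above (cutting large scenes into sub-images while keeping track of their offsets so detections can be mapped back to the full scene) can be sketched as follows; the 800 × 800 tile size and scene dimensions are assumptions for illustration.

```python
# Minimal sketch of cutting a large-scene SAR image into fixed-size sub-images.
import numpy as np

def tile_image(image: np.ndarray, tile: int = 800):
    """Split a 2-D SAR amplitude array into non-overlapping tile x tile chips."""
    rows, cols = image.shape
    chips, origins = [], []
    for r in range(0, rows - tile + 1, tile):
        for c in range(0, cols - tile + 1, tile):
            chips.append(image[r:r + tile, c:c + tile])
            origins.append((r, c))          # offsets let detections be mapped
    return np.stack(chips), origins         # back onto the full scene

# Example: an assumed 24000 x 16000 scene yields 30 x 20 = 600 sub-images.
scene = np.zeros((24000, 16000), dtype=np.uint8)
chips, origins = tile_image(scene)
print(chips.shape)  # (600, 800, 800)
```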


2021 ◽  
Vol 13 (6) ◽  
pp. 1171
Author(s):  
Mohammed Alahmadi ◽  
Shawky Mansour ◽  
David Martin ◽  
Peter Atkinson

Knowledge of the spatial pattern of the population is important. Census population data provide insufficient spatial information because they are released only for large geographic areas. Nighttime light (NTL) data have been used widely as an effective proxy for population mapping. However, the well-documented challenges of pixel overglow and saturation limit the applicability of the Defense Meteorological Satellite Program Operational Linescan System (DMSP-OLS) for accurate population mapping. This paper integrates three remotely sensed information sources, DMSP-OLS, vegetation, and bare land areas, to develop a novel index, the Vegetation-Bare Adjusted NTL Index (VBANTLI), that overcomes the uncertainties in the DMSP-OLS data. The VBANTLI was applied to Riyadh province to downscale governorate-level census populations for 2004 and 2010 to a gridded surface at 1 km resolution. The experimental results confirmed that the VBANTLI significantly reduces the overglow and saturation effects compared to widely applied indices such as the Human Settlement Index (HSI), Vegetation Adjusted NTL Urban Index (VANUI), and radiance-calibrated NTL (RCNTL). The correlation coefficient between the census population and the RCNTL (R = 0.99) and VBANTLI (R = 0.98) was larger than for the HSI (R = 0.14) and VANUI (R = 0.81) products. In addition, Model 5 (VBANTLI) was the most accurate model, with R2 and mean relative error (MRE) values of 0.95 and 37%, respectively.
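The downscaling step described above is a dasymetric allocation: each governorate's census total is spread over its 1 km cells in proportion to a weight layer such as the VBANTLI. The sketch below illustrates that allocation with toy arrays; the VBANTLI formula itself is not reproduced, and the numbers are illustrative only.

```python
# Minimal sketch of dasymetric downscaling of zonal census totals by a weight grid.
import numpy as np

def dasymetric_downscale(weights, zone_ids, zone_population):
    """Distribute each zone's census total over its grid cells by weight share."""
    population = np.zeros_like(weights, dtype=float)
    for zone, total in zone_population.items():
        mask = zone_ids == zone
        zone_weights = weights[mask]
        if zone_weights.sum() > 0:
            population[mask] = total * zone_weights / zone_weights.sum()
        else:                               # no signal: spread the total evenly
            population[mask] = total / mask.sum()
    return population

# Toy example: two governorates on a 4 x 4 grid.
weights = np.array([[5, 1, 0, 0],
                    [3, 1, 0, 2],
                    [0, 0, 4, 4],
                    [0, 0, 1, 1]], dtype=float)
zone_ids = np.array([[1, 1, 2, 2],
                     [1, 1, 2, 2],
                     [1, 1, 2, 2],
                     [1, 1, 2, 2]])
grid = dasymetric_downscale(weights, zone_ids, {1: 1000, 2: 600})
print(grid.sum())  # 1600.0 -- zone totals are preserved
```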


2021 ◽  
Vol 13 (16) ◽  
pp. 3158
Author(s):  
Bo Yu ◽  
Fang Chen ◽  
Chong Xu ◽  
Lei Wang ◽  
Ning Wang

Practical landslide inventory maps covering large-scale areas are essential in emergency response and geohazard analysis. Recently proposed landslide detection techniques have generally focused on landslides against pure vegetation backgrounds and on radiometrically corrected images. Robust methods that automatically detect landslides from images acquired by multiple platforms and without radiometric correction remain a challenge, which is a significant issue in practical applications. In order to detect landslides over different large-scale areas with different spatial resolutions, this paper proposes a two-branch Matrix SegNet that semantically segments the input images via change detection. The Matrix SegNet learns landslide features at multiple scales and aspect ratios. The pre- and post-event images are captured directly from Google Earth, without radiometric correction. To evaluate the proposed framework, we conducted landslide detection in four study areas at two different spatial resolutions. Moreover, two other widely used frameworks, U-Net and SegNet, were adapted to detect landslides from the same data via change detection. The experiments show that our model substantially improves performance in terms of recall, precision, F1-score, and IoU. It is a good starting point for developing a practical deep learning landslide detection framework for large-scale application, using images from different areas and with different spatial resolutions.
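The two-branch change-detection idea described above can be sketched as a Siamese encoder-decoder: pre- and post-event images pass through a shared encoder, their features are fused, and a decoder predicts a per-pixel landslide mask. This is a generic sketch, not the Matrix SegNet architecture; the input size and channel widths are assumptions.

```python
# Minimal sketch of a two-branch (Siamese) change-detection segmenter.
import tensorflow as tf
from tensorflow.keras import layers

def build_encoder(size=256, channels=3):
    inp = layers.Input(shape=(size, size, channels))
    x = inp
    for filters in (32, 64, 128):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    return tf.keras.Model(inp, x)

def build_change_detector(size=256, channels=3):
    pre = layers.Input(shape=(size, size, channels))
    post = layers.Input(shape=(size, size, channels))
    encoder = build_encoder(size, channels)          # shared weights (Siamese)
    fused = layers.Concatenate()([encoder(pre), encoder(post)])
    x = fused
    for filters in (128, 64, 32):
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    mask = layers.Conv2D(1, 1, activation="sigmoid")(x)  # landslide probability
    return tf.keras.Model([pre, post], mask)

model = build_change_detector()
model.compile(optimizer="adam", loss="binary_crossentropy")
```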


2020 ◽  
Vol 2 (1) ◽  
pp. 123-130
Author(s):  
Ahmad Syazili ◽  
Ahmad Mutatkin Bakti

Population data are important to stakeholders and are often used for purposes such as aid delivery or government activities that rely on population information. The information most frequently needed concerns where residents actually live, for example to conduct a census or to serve other social interests. For this reason, this study developed a population mapping information system for Tugumulyo sub-district as an effort to provide the public with information on population data. The system was built using the web engineering method, a method for developing web-based information systems. The result is a population distribution information system whose main features present population data in the form of tables, graphs, and maps. In addition, the displayed information can be viewed by sub-district and by village within the sub-district.


2021 ◽  
Vol 13 (6) ◽  
pp. 1142
Author(s):  
Daniela Palacios-Lopez ◽  
Felix Bachofer ◽  
Thomas Esch ◽  
Mattia Marconcini ◽  
Kytt MacManus ◽  
...  

The field of human population mapping is constantly evolving, leveraging the increasing availability of high-resolution satellite imagery and the advancements in the field of machine learning. In recent years, the emergence of global built-area datasets that accurately describe the extent, location, and characteristics of human settlements has facilitated the production of new population grids, with improved quality, accuracy, and spatial resolution. In this research, we explore the capabilities of the novel World Settlement Footprint 2019 Imperviousness layer (WSF2019-Imp), as a single proxy in the production of a new high-resolution population distribution dataset for all of Africa—the WSF2019-Population dataset (WSF2019-Pop). Results of a comprehensive qualitative and quantitative assessment indicate that the WSF2019-Imp layer has the potential to overcome the complexities and limitations of top-down binary and multi-layer approaches of large-scale population mapping, by delivering a weighting framework which is spatially consistent and free of applicability restrictions. The increased thematic detail and spatial resolution (~10 m at the Equator) of the WSF2019-Imp layer improve the spatial distribution of populations at local scales, where fully built-up settlement pixels are clearly differentiated from settlement pixels that share a proportion of their area with green spaces, such as parks or gardens. Overall, eighty percent of the African countries reported estimation accuracies with percentage mean absolute errors between ~15% and ~32%, and 50% of the validation units in more than half of the countries reported relative errors below 20%. However, the lack of information on the vertical dimension and on the functional characterisation of the built-up environment remains a limitation affecting the quality and accuracy of the final population datasets.
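The accuracy figures quoted above correspond to two standard checks: the percentage mean absolute error aggregated per country, and the relative error per validation unit. A minimal sketch of both metrics is given below; the toy numbers are illustrative, not taken from the WSF2019-Pop assessment.

```python
# Minimal sketch of the population-estimation accuracy metrics quoted above.
import numpy as np

def percentage_mae(estimated, reference):
    """Mean absolute error expressed as a percentage of the mean reference count."""
    estimated, reference = np.asarray(estimated, float), np.asarray(reference, float)
    return 100.0 * np.mean(np.abs(estimated - reference)) / np.mean(reference)

def relative_errors(estimated, reference):
    """Per-unit relative error |est - ref| / ref, as a percentage."""
    estimated, reference = np.asarray(estimated, float), np.asarray(reference, float)
    return 100.0 * np.abs(estimated - reference) / reference

est = [1200, 540, 3100, 80]   # toy estimated populations per validation unit
ref = [1000, 600, 3000, 100]  # toy census reference counts
print(percentage_mae(est, ref))                     # overall %MAE
print((relative_errors(est, ref) < 20).mean())      # share of units below 20% error
```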


Author(s):  
M. Schmitt ◽  
L. H. Hughes ◽  
C. Qiu ◽  
X. X. Zhu

The availability of curated large-scale training data is a crucial factor for the development of well-generalizing deep learning methods for the extraction of geoinformation from multi-sensor remote sensing imagery. While several datasets have already been published by the community, most of them suffer from rather strong limitations, e.g. regarding spatial coverage, diversity, or simply the number of available samples. Exploiting the freely available data acquired by the Sentinel satellites of the Copernicus program implemented by the European Space Agency, as well as the cloud computing facilities of Google Earth Engine, we provide a dataset consisting of 180,662 triplets of dual-pol synthetic aperture radar (SAR) image patches, multi-spectral Sentinel-2 image patches, and MODIS land cover maps. With all patches fully georeferenced at a 10 m ground sampling distance and covering all inhabited continents during all meteorological seasons, we expect the dataset to support the community in developing sophisticated deep learning-based approaches for common tasks such as scene classification or semantic segmentation for land cover mapping.
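A minimal sketch of how such co-registered patch triplets might be read and paired for training is shown below; the file naming and directory layout are assumptions, not the dataset's actual structure.

```python
# Minimal sketch: load one SAR / Sentinel-2 / land-cover patch triplet with rasterio.
import numpy as np
import rasterio

def load_triplet(sar_path, optical_path, lc_path):
    """Read one co-registered SAR / Sentinel-2 / land-cover patch triplet."""
    def read(path):
        with rasterio.open(path) as src:
            return np.transpose(src.read(), (1, 2, 0))  # to (H, W, bands)
    sar = read(sar_path)          # dual-pol SAR patch
    optical = read(optical_path)  # multi-spectral Sentinel-2 patch
    labels = read(lc_path)        # MODIS-derived land cover map
    return sar, optical, labels

# Example usage with hypothetical file names:
# sar, opt, lc = load_triplet("s1_0001.tif", "s2_0001.tif", "lc_0001.tif")
```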


2021 ◽  
Author(s):  
Nae-Chyun Chen ◽  
Alexey Kolesnikov ◽  
Sidharth Goel ◽  
Taedong Yun ◽  
Pi-Chuan Chang ◽  
...  

Large-scale population variant data are often used to filter and aid the interpretation of variant calls in a single sample. These approaches do not incorporate population information directly into the process of variant calling and are often limited to filtering, which trades recall for precision. In this study, we modify DeepVariant to add a new channel encoding population allele frequencies from the 1000 Genomes Project. We show that this model reduces variant calling errors, improving both precision and recall. We assess the impact of using population-specific or diverse reference panels. We achieve the greatest accuracy with diverse panels, suggesting that large, diverse panels are preferable to individual populations, even when the population matches the sample's ancestry. Finally, we show that this benefit generalizes to samples whose ancestry differs from that of the training data, even when that ancestry is also excluded from the reference panel.
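The core idea above, encoding a panel allele frequency as an extra channel alongside the read-level channels of a pileup tensor, can be illustrated as follows. This is a sketch of the encoding only, not DeepVariant's actual implementation; the tensor dimensions are assumed for illustration.

```python
# Minimal sketch: append a population allele-frequency channel to a pileup tensor.
import numpy as np

def add_allele_frequency_channel(pileup, allele_frequency):
    """Append a constant-valued channel holding the panel allele frequency.

    pileup: array of shape (height, width, channels) for one candidate site.
    allele_frequency: frequency of the alternate allele in the reference panel.
    """
    height, width, _ = pileup.shape
    # Scale the frequency into the same 0-255 range as the other channels.
    af_channel = np.full((height, width, 1), int(round(allele_frequency * 255)),
                         dtype=pileup.dtype)
    return np.concatenate([pileup, af_channel], axis=-1)

pileup = np.zeros((100, 221, 6), dtype=np.uint8)   # assumed pileup dimensions
augmented = add_allele_frequency_channel(pileup, allele_frequency=0.12)
print(augmented.shape)  # (100, 221, 7)
```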

