Optical Imagery
Recently Published Documents

TOTAL DOCUMENTS: 344 (five years: 111)
H-INDEX: 30 (five years: 6)

2021, Vol 2. Author(s): Nadja den Besten, Susan Steele-Dunne, Benjamin Aouizerats, Ariel Zajdband, Richard de Jeu, ...

In this study, the impact of sucrose accumulation on Sentinel-1 backscatter observations is presented and compared to Planet optical observations. Sugarcane yield data from a plantation in Xinavane, Mozambique, are used; the database contains yields for 387 fields over two seasons (2018-2019 and 2019-2020). The relation between sugarcane yield and Sentinel-1 VV and VH backscatter observations is analyzed using Normalized Difference Vegetation Index (NDVI) data derived from PlanetScope optical imagery as a benchmark. The different satellite observations were compared to sugarcane yield over time to understand how the relation between the observations and yield evolves during the growing season. A negative correlation between yield and the Cross Ratio (CR) of Sentinel-1 backscatter was found, while a positive correlation between yield and Planet NDVI was observed. An additional modeling study of the dielectric properties of the crop revealed how the CR could be affected by sucrose accumulation during the growing season and supported the opposing correlations. The results show that the CR contains information on the sucrose content of the sugarcane plant. This sets a basis for further development of sucrose monitoring and prediction using a combination of radar and optical imagery.
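The two indices involved are simple to state. A minimal sketch, assuming backscatter is given in dB and per-field means are held in NumPy arrays (all variable names and values below are illustrative, not taken from the study):

```python
import numpy as np

def cross_ratio_db(vh_db: np.ndarray, vv_db: np.ndarray) -> np.ndarray:
    """Cross Ratio (CR) of Sentinel-1 backscatter; in dB this is VH - VV."""
    return vh_db - vv_db

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from optical reflectances."""
    return (nir - red) / (nir + red)

# Illustrative per-field means at one observation date, plus end-of-season
# yield (t/ha); the study works with 387 fields over two seasons.
cr_fields = np.array([-6.2, -7.1, -5.3, -5.9])
ndvi_fields = np.array([0.71, 0.78, 0.62, 0.67])
yield_t_ha = np.array([95.0, 110.0, 80.0, 88.0])

print(np.corrcoef(cr_fields, yield_t_ha)[0, 1])    # negative, as reported
print(np.corrcoef(ndvi_fields, yield_t_ha)[0, 1])  # positive, as reported
```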


2021, Vol 13 (23), pp. 4928. Author(s): Yanming Chen, Xiaoqiang Liu, Yijia Xiao, Qiqi Zhao, Sida Wan

The heterogeneity of the urban landscape in the vertical direction should not be neglected in urban ecology research, which requires transforming urban land cover products from two dimensions to three dimensions using light detection and ranging (LiDAR) point clouds. Previous studies have demonstrated that the performance of two-dimensional land cover classification can be improved by fusing optical imagery and LiDAR data using several strategies. However, few studies have focused on fusing LiDAR point clouds and optical imagery for three-dimensional land cover classification, especially within a deep learning framework. In this study, we proposed a novel prior-level fusion strategy and compared it with the no-fusion strategy (baseline) and three other commonly used fusion strategies (point-level, feature-level, and decision-level). The proposed prior-level fusion strategy uses two-dimensional land cover derived from optical imagery as the prior knowledge for three-dimensional classification. A LiDAR point cloud is then linked to the prior information using the nearest neighbor method and classified by a deep neural network. Our proposed prior-level fusion strategy achieves higher overall accuracy (82.47%) on data from the International Society for Photogrammetry and Remote Sensing than the baseline (74.62%), point-level (79.86%), feature-level (76.22%), and decision-level (81.12%) strategies. The improved accuracy reflects two findings: (1) fusing optical imagery with LiDAR point clouds improves the performance of three-dimensional urban land cover classification, and (2) the proposed prior-level strategy directly uses the semantic information provided by the two-dimensional land cover classification rather than the original spectral information of the optical imagery. Furthermore, the proposed prior-level fusion strategy fills the gap between two- and three-dimensional land cover classification.
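The linking step can be sketched compactly. A minimal sketch, in which a nearest-pixel lookup stands in for the paper's nearest neighbor association and the land cover raster's georeferencing is assumed known (all names here are illustrative):

```python
import numpy as np

def attach_prior_labels(points_xyz: np.ndarray,
                        landcover: np.ndarray,
                        origin_xy: tuple,
                        pixel_size: float) -> np.ndarray:
    """Attach each LiDAR point to the nearest 2D land cover pixel.

    points_xyz : (N, 3) array of x, y, z coordinates.
    landcover  : (rows, cols) array of class labels from optical imagery.
    origin_xy  : (x_min, y_max) of the raster's upper-left corner.
    pixel_size : raster resolution in the same units as the points.
    """
    cols = ((points_xyz[:, 0] - origin_xy[0]) / pixel_size).astype(int)
    rows = ((origin_xy[1] - points_xyz[:, 1]) / pixel_size).astype(int)
    cols = np.clip(cols, 0, landcover.shape[1] - 1)
    rows = np.clip(rows, 0, landcover.shape[0] - 1)
    return landcover[rows, cols]

# The prior label can then be appended as an extra per-point feature before
# the point cloud is fed to the 3D deep network.
```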


2021, Vol 13 (23), pp. 4836. Author(s): Chunjing Yao, Hongchao Ma, Wenjun Luo, Haichi Ma

The registration of optical imagery and 3D Light Detection and Ranging (LiDAR) point data continues to be a challenge for various applications in photogrammetry and remote sensing. This paper presents a framework that employs a new registration primitive called the virtual point (VP), which can be generated from the linear features within a LiDAR dataset, including straight lines (SL) and curved lines (CL). An auxiliary parameter (λ) makes it easy to take advantage of the accurate and fast calculation of the one-step registration transformation model: the transformation model parameters and the λs can be calculated simultaneously by applying the least-squares method recursively. Urban areas contain many buildings with different shapes, so building boundaries provide a large number of SL and CL features; selecting suitable linear features and transforming them into VPs can reduce the errors caused by the semi-discrete random characteristics of the LiDAR points. According to the results shown in the paper, the registration precision can reach the 1-2 pixel level of the optical images.
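One plausible reading of the VP construction, offered only as a schematic sketch (the paper's actual formulation may differ): a point on a line segment between endpoints A and B can be written as A + λ(B − A), so each λ enters the one-step adjustment as one extra unknown alongside the transformation parameters.

```python
import numpy as np

def virtual_point(a: np.ndarray, b: np.ndarray, lam: float) -> np.ndarray:
    """Virtual point on the straight line through endpoints a and b."""
    return a + lam * (b - a)

# Schematically, the one-step adjustment stacks the transformation
# parameters and all lambdas into one unknown vector x and solves the
# linearized observation equations A @ x = y by least squares, repeating
# until the corrections become negligible:
#   x, *_ = np.linalg.lstsq(A, y, rcond=None)
```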


2021, Vol 13 (21), pp. 4435. Author(s): Nicolas Le Moine, Mounir Mahdade

Bathymetry is a key element in the modeling of river systems for flood mapping, geomorphology, or stream habitat characterization. Standard practices rely on the interpolation of in situ depth measurements obtained with differential GPS or total station surveys, while more advanced techniques involve bathymetric LiDAR or acoustic soundings. However, these high-resolution active techniques are not so easily applied over large areas. Alternative methods using passive optical imagery present an interesting trade-off: they rely on the fact that the wavelengths composing solar radiation are not attenuated at the same rates in water. Under certain assumptions, the logarithm of the ratio of radiances in two spectral bands is linearly correlated with depth. In this study, we go beyond these ratio methods by defining a multispectral hue that retains all spectral information. Given n coregistered bands, this spectral invariant lies on the (n−2)-sphere embedded in ℝ^(n−1), denoted S^(n−2) and tagged the 'hue hypersphere'. It can be seen as a generalization of the RGB 'color wheel' (S^1) to higher dimensions. We use this mapping to identify a hue-depth relation in a 35 km reach of the Garonne River, using high-resolution (0.50 m) airborne imagery in four bands and data from 120 surveyed cross-sections. The distribution of multispectral hue over river pixels is modeled as a mixture of two components: one represents the distribution of substrate hue, while the other represents the distribution of 'deep water' hue; the parameters are fitted such that the membership probability for the 'deep' component correlates with depth.
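The ratio method that the study generalizes fits a linear relation between depth and the log band ratio. A minimal sketch of that baseline, assuming radiance (or reflectance) arrays over surveyed pixels (the band pairing and names are illustrative):

```python
import numpy as np

def fit_log_ratio_depth(band1: np.ndarray, band2: np.ndarray,
                        depths: np.ndarray):
    """Fit depth = a * ln(band1/band2) + c on surveyed pixels."""
    x = np.log(band1 / band2)
    A = np.column_stack([x, np.ones_like(x)])
    (a, c), *_ = np.linalg.lstsq(A, depths, rcond=None)
    return a, c

def predict_depth(band1: np.ndarray, band2: np.ndarray,
                  a: float, c: float) -> np.ndarray:
    """Apply the fitted linear relation to unsurveyed pixels."""
    return a * np.log(band1 / band2) + c
```

The multispectral hue approach replaces the single log ratio with a spectral invariant that keeps all n bands, which is what allows the mixture model over substrate and 'deep water' hues.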


2021, Vol 13 (21), pp. 4394. Author(s): Zainoolabadien Karim, Terence L. van Zyl

Differential interferometric synthetic aperture radar (DInSAR) coherence, phase, and displacement products are derived from processing SAR images to monitor geological phenomena and urban change. Previously, Sentinel-1 SAR data combined with Sentinel-2 optical imagery have improved classification accuracy in various domains. However, the fusion of Sentinel-1 DInSAR-processed imagery with Sentinel-2 optical imagery has not been thoroughly investigated. Thus, we explored this fusion for urban change detection by creating a verified, balanced, binary classification dataset comprising 1440 blobs. Machine learning models using feature descriptors and non-deep-learning classifiers, as well as a two-layer convolutional neural network (ConvNet2), were used as baselines. Transfer learning by feature extraction (TLFE) using various pre-trained models, deep learning from random initialization, and transfer learning by fine-tuning (TLFT) were all evaluated. We introduce a feature-space ensemble family (FeatSpaceEnsNet), an average ensemble family (AvgEnsNet), and a hybrid ensemble family (HybridEnsNet) of TLFE neural networks. The FeatSpaceEnsNets combine TLFE features directly in the feature space using logistic regression. AvgEnsNets combine TLFEs at the decision level by aggregation. HybridEnsNets are a combination of FeatSpaceEnsNets and AvgEnsNets. Several FeatSpaceEnsNets, AvgEnsNets, and HybridEnsNets, comprising heterogeneous mixtures of models of different depths and architectures, are defined and evaluated. We show that, in general, TLFE outperforms both TLFT and classic deep learning for the small dataset used, and that larger ensembles of TLFE models do not always improve accuracy. The best-performing ensemble is an AvgEnsNet (84.862%) comprising a ResNet50, a ResNeXt50, and an EfficientNet B4. It was matched by a similarly composed FeatSpaceEnsNet, with an F1 score 0.001 lower and a variance 0.266 lower. The best-performing HybridEnsNet had an accuracy of 84.775%. All of the evaluated ensembles outperform the best-performing single model, ResNet50 with TLFE (83.751%), except for AvgEnsNet 3, AvgEnsNet 6, and FeatSpaceEnsNet 5. Five of the seven similarly composed FeatSpaceEnsNets outperform the corresponding AvgEnsNet.
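A minimal sketch of TLFE with feature-space combination in the spirit of a FeatSpaceEnsNet, assuming torchvision (≥ 0.13 for the weights enum) and scikit-learn; dataset loading and preprocessing are omitted and the batch below is a placeholder:

```python
import numpy as np
import torch
from torchvision import models
from sklearn.linear_model import LogisticRegression

def extract_features(backbone: torch.nn.Module,
                     images: torch.Tensor) -> np.ndarray:
    """Run a frozen backbone (classifier head removed) over a batch."""
    backbone.eval()
    with torch.no_grad():
        return backbone(images).flatten(1).numpy()

# Pre-trained backbone with its classification head dropped.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
resnet.fc = torch.nn.Identity()

# Placeholder blobs and binary change labels.
images = torch.randn(8, 3, 224, 224)
labels = np.array([0, 1, 0, 1, 1, 0, 1, 0])

feats = extract_features(resnet, images)
# Features from several backbones (e.g. ResNeXt50, EfficientNet B4) could
# be concatenated here before fitting: that is the feature-space ensembling.
clf = LogisticRegression(max_iter=1000).fit(feats, labels)
```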


2021, Vol 22 (11). Author(s): Anggita Kartikasari, Todhi Pristianto, Rizki Hanintyo, Eghbert Elvan Ampou, Teja Arief Wibawa, ...

Abstract. Kartikasari A, Pristianto T, Hanintyo R, Ampou EE, Wibawa TA, Borneo BB. 2021. Representative benthic habitat mapping on Lovina coral reefs in Northern Bali, Indonesia. Biodiversitas 22: 4766-4774. Satellite optical imagery datasets integrated with in situ measurements are widely used to derive the spatial distribution of various benthic habitats in coral reef ecosystems. In this study, an approach to estimating the spatial coverage of these habitats, based on observations derived from Sentinel-2 optical imagery and a field survey, is presented. The study focused on the Lovina coral reef ecosystem of Northern Bali, Indonesia, to support the deployment of artificial reefs within the Indonesian Coral Reef Garden (ICRG) programme. Three locations were explored: the Temukus, Tukad Mungga, and Baktiseraga waters. The spatial benthic habitat coverage of these three waters was estimated using supervised classification of the 10 m bands of Sentinel-2 imagery and the medium scale approach (MSA) transect method for the in situ measurements. The study indicates that the total benthic habitat coverage is 61.34 ha, 25.17 ha, and 27.88 ha for the Temukus, Tukad Mungga, and Baktiseraga waters, respectively. The dominant benthic habitats of these three waters consist of sand, seagrass, coral, rubble, reef slope, and intertidal zone. The coral reef coverage is 29.48 ha (48%) for Temukus, covered by the genera Acropora, Isopora, Porites, Montipora, and Pocillopora; 8.69 ha (35%) for Tukad Mungga, covered by the genera Acropora, Montipora, Favia, Psammocora, and Porites; and 11.37 ha (41%) for Baktiseraga, covered by the genera Montipora, Goniastrea, Pavona, Platygyra, Pocillopora, Porites, Acropora, Leptoseris, and Fungia. The results are expected to serve as supporting data for restoring coral reef ecosystems in the northern part of Bali, especially in Buleleng District.
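A minimal sketch of the per-pixel supervised classification and area accounting, with a random forest standing in for the unspecified classifier; the band choice, class codes, and arrays are illustrative placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X_train: (n_samples, 4) reflectances from the 10 m bands (B2, B3, B4, B8)
# at surveyed transect pixels; y_train: habitat class per pixel.
X_train = np.random.rand(200, 4)        # placeholder training pixels
y_train = np.random.randint(0, 4, 200)  # e.g. sand/seagrass/coral/rubble

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# scene: (rows, cols, 4) band stack; classify every pixel, then convert
# pixel counts to hectares (one 10 m x 10 m pixel = 100 m^2).
scene = np.random.rand(50, 60, 4)
labels = clf.predict(scene.reshape(-1, 4)).reshape(50, 60)
area_ha = {c: (labels == c).sum() * 100 / 10_000 for c in np.unique(labels)}
```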


Forests, 2021, Vol 12 (9), pp. 1214. Author(s): Qingfan Zhang, Bo Wan, Zhenxiu Cao, Quanfa Zhang, Dezhi Wang

Mapping the plucking areas of tea plantations is essential for tea plantation management and production estimation. However, on-ground survey methods are time-consuming and labor-intensive, and satellite-based remotely sensed data are not fine enough to map plucking areas, which are 0.5–1.5 m in width. Unmanned aerial vehicle (UAV) remote sensing can provide an alternative. This paper explores the potential of UAV-derived remotely sensed data for identifying the plucking areas of tea plantations. In particular, four classification models were built based on different UAV data (optical imagery, digital aerial photogrammetry, and lidar data). The results indicated that the integration of optical imagery and lidar data produced the highest overall accuracy with the random forest algorithm (94.39%), while the digital aerial photogrammetry data could be an alternative to lidar point clouds with only a ~3% accuracy loss. The plucking area of tea plantations in the Huashan Tea Garden was accurately measured for the first time, with a total area of 6.41 ha, which accounts for 57.47% of the tea garden land. The most important features for tea plantation mapping were the canopy height, the variance of heights, the blue band, and the red band. Furthermore, a cost-benefit analysis was conducted. The novelty of this study is that it is the first specific exploration of UAV remote sensing for mapping the plucking areas of tea plantations, demonstrating it to be an accurate and cost-effective method, and hence it represents an advance in the remote sensing of tea plantations.
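A minimal sketch of how feature importances like those reported (canopy height, variance of heights, blue band, red band) can be read from a fitted random forest, assuming a per-pixel table of fused optical and lidar features (names and data are illustrative placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

feature_names = ["canopy_height", "height_variance", "blue", "red",
                 "green", "ndvi"]
X = np.random.rand(500, len(feature_names))  # placeholder fused features
y = np.random.randint(0, 2, 500)             # plucking area vs. other

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Rank features by impurity-based importance, highest first.
for name, imp in sorted(zip(feature_names, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```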


Author(s): L. Boudinaud, S. A. Orenstein

Abstract. The proposed analysis, based on Sentinel-2 imagery, provides evidence of the impacts of the conflict in the Mopti region (central Mali), which has led to wide-scale cropland abandonment. This area has been characterized by rapidly rising levels of violence since 2018, due to the presence of armed groups and the proliferation of self-defence militias. This study investigates how high-resolution optical imagery can help evaluate the linkages between violence and land cover/land use (LCLU) change. The Google Earth Engine processing environment was used to generate the so-called 3-Period TimeScan (3PTS) product, an RGB composite combining the maximum NDVI values at the beginning, middle, and end of the growing season, used to single out cultivated land for each year of interest. Theoretically, the period between June 15th and October 15th covers an annual agricultural cycle for the considered area; consequently, images acquired during that period were used to generate the 3PTS composites for the year of interest (2019) and for pre-conflict years. By comparing the situations before and after the start of the crisis, each populated site was categorized according to the degree of cropland change detected in its surroundings. The resulting overview map enables a regional-scale interpretation of farming activities in 2019, clearly highlighting localized areas of cropland abandonment in the region and showing a strong spatial correlation with the incidence of conflict.
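A minimal sketch of the 3PTS idea in the Earth Engine Python API, assuming Sentinel-2 surface reflectance; the sub-period boundaries and the omission of cloud masking and region filtering are simplifications not taken from the paper:

```python
import ee
ee.Initialize()

def max_ndvi(start: str, end: str) -> ee.Image:
    """Per-pixel maximum NDVI over one sub-period of the growing season."""
    def add_ndvi(img):
        return img.normalizedDifference(["B8", "B4"]).rename("NDVI")
    return (ee.ImageCollection("COPERNICUS/S2_SR")
            .filterDate(start, end)
            .map(add_ndvi)
            .max())

# Three sub-periods of the 2019 season -> one RGB composite (3PTS).
early = max_ndvi("2019-06-15", "2019-07-25")
mid = max_ndvi("2019-07-25", "2019-09-05")
late = max_ndvi("2019-09-05", "2019-10-15")
pts3 = ee.Image.cat([early, mid, late]).rename(["early", "mid", "late"])

# Cultivated pixels show a characteristic rise-and-fall signature across
# the three channels; comparing composites across years flags abandonment.
```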

