DSM Accuracy Evaluation for the ISPRS Commission I Image Matching Benchmark

Author(s):  
G. Kuschk ◽  
P. d'Angelo ◽  
R. Qin ◽  
D. Poli ◽  
P. Reinartz ◽  
...  

To improve the quality of algorithms for the automatic generation of Digital Surface Models (DSM) from optical stereo data in the remote sensing community, Working Group 4 of Commission I: Geometric and Radiometric Modeling of Optical Airborne and Spaceborne Sensors provides a benchmark dataset on its website (http://www2.isprs.org/commissions/comm1/wg4/benchmark-test.html) for measuring and comparing the accuracy of dense stereo algorithms. The data provided consist of several optical spaceborne stereo images together with ground truth data produced by aerial laser scanning. In this paper we present our latest work on this benchmark, building upon previous work.

As a first point, we noticed that providing the abovementioned test data as geo-referenced satellite images together with their corresponding RPC camera model imposes too high a burden for wide use by other researchers, as considerable effort is still required to integrate the test data's camera model into a researcher's local stereo reconstruction framework. To bypass this problem, we now also provide rectified input images, which enable stereo algorithms to work out of the box without the need to implement special camera models. Care was taken to minimize the errors resulting from the rectification transformation and the associated image resampling.

We further improved the robustness of the evaluation method against errors in the orientation of the satellite images (with respect to the LiDAR ground truth). To this end we implemented a point cloud alignment of the DSM and the LiDAR reference points using an Iterative Closest Point (ICP) algorithm and an estimation of the best-fitting transformation. This way, we concentrate on the errors from the stereo reconstruction and make sure that the result is not biased by errors in the absolute orientation of the satellite images.
The evaluation of the stereo algorithms is done by triangulating the resulting (filled) DSMs and computing, for each LiDAR point, the nearest Euclidean distance to the DSM surface. We implemented an adaptive triangulation method minimizing the second-order derivative of the surface in a local neighborhood, which captures the real surface more accurately than a fixed triangulation. As a further advantage, using our point-to-surface evaluation we are also able to evaluate non-uniformly sampled DSMs, or triangulated 3D models in general. The latter is needed, for example, when evaluating building extraction and data reduction algorithms.

As a practical example, we compare results from three different matching methods applied to the data available within the benchmark data sets. These results are analyzed using the above-mentioned methodology and show advantages and disadvantages of the different methods, also depending on the land cover classes.
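The point-to-surface evaluation above can be illustrated with a stripped-down sketch. As a simplification, distances are taken to the nearest DSM grid point rather than to the triangulated surface (an upper bound on the true point-to-surface distance); the function and variable names are illustrative, not part of the benchmark code.

```python
import numpy as np

def dsm_point_distances(dsm_points, lidar_points):
    """For each LiDAR point, the Euclidean distance to the nearest DSM point.

    Brute-force nearest-neighbour search; a k-d tree (or the true triangulated
    surface) would be used at benchmark scale.
    """
    diffs = lidar_points[:, None, :] - dsm_points[None, :, :]
    return np.linalg.norm(diffs, axis=2).min(axis=1)

# toy example: a flat 10 m x 10 m DSM grid at z = 0 and LiDAR points 0.5 m above it
xx, yy = np.meshgrid(np.arange(10.0), np.arange(10.0))
dsm = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
lidar = np.array([[4.0, 4.0, 0.5], [2.0, 7.0, 0.5]])
print(dsm_point_distances(dsm, lidar))  # → [0.5 0.5]
```

Error statistics (RMSE, median, percentiles) of these per-point distances then summarize the DSM accuracy.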

Author(s):  
C. Weidinger ◽  
T. Kadiofsky ◽  
P. Glira ◽  
C. Zinner ◽  
W. Kubinger

Abstract. Environmental perception is one of the core requirements in autonomous vehicle navigation. If exposed to harsh conditions, commonly deployed sensors like cameras or lidars deliver poor sensing performance. Millimeter wave radars enable robust sensing of the environment, but suffer from specular reflections and large beamwidths. To incorporate the sensor noise and lateral uncertainty, a new probabilistic, voxel-based recursive mapping method is presented to enable online terrain mapping using scanning radar sensors. For map accuracy evaluation, test measurements are performed with a scanning radar sensor in an off-road area. The voxel map is used to derive a digital terrain model, which can be compared with ground-truth data from an image-based photogrammetric reconstruction of the terrain. The method evaluation shows promising results for terrain mapping solely performed with radar scanners. However, small terrain structures still pose a problem due to larger beamwidths in comparison to lidar sensors.
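A minimal sketch of a recursive, probabilistic voxel update in the standard log-odds form. The increments and clamping bounds are assumptions for illustration; the paper's radar sensor model (including the lateral beam uncertainty) is not reproduced here.

```python
import math

L_HIT, L_MISS = 0.85, -0.4   # assumed log-odds increments per radar return
L_MIN, L_MAX = -2.0, 3.5     # clamping bounds so the voxel stays updatable

def update_voxel(logodds, hit):
    """One recursive Bayesian update of a voxel's occupancy log-odds."""
    logodds += L_HIT if hit else L_MISS
    return max(L_MIN, min(L_MAX, logodds))

def occupancy(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-logodds))

l = 0.0
for _ in range(3):           # three consecutive radar hits in the same voxel
    l = update_voxel(l, hit=True)
print(round(occupancy(l), 2))  # → 0.93
```

The log-odds form makes the recursion a simple addition per measurement, which is what allows the map to be built online.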


Drones ◽  
2018 ◽  
Vol 3 (1) ◽  
pp. 3 ◽  
Author(s):  
Ricardo Díaz-Delgado ◽  
Constantin Cazacu ◽  
Mihai Adamescu

Long-term ecological research (LTER) sites need a periodic assessment of the state of their ecosystems and services in order to monitor trends and prevent irreversible changes. The ecological integrity (EI) framework opens the door to evaluating any ecosystem in a comparable way, by measuring indicators of ecosystem structure and processes. Such an approach also makes it possible to gauge the sustainability of conservation management actions in the case of protected areas. Remote sensing (RS), provided by satellite, airborne, or drone-borne sensors, is a synoptic and valuable tool for quickly mapping isolated and inaccessible areas such as wetlands. However, few practical RS indicators have been proposed that relate to EI indicators for wetlands. In this work, we suggest several RS wetland indicators to be used for EI assessment in wetlands, especially with unmanned aerial vehicles (UAVs). We also assess the applicability of multispectral images captured by UAVs over two long-term socio-ecological research (LTSER) wetland sites to provide detailed mapping of inundation levels, water turbidity and depth, as well as aquatic plant cover. We followed an empirical approach to find linear relationships between UAV spectral reflectance and the RS indicators over the Doñana LTSER platform in SW Spain. The method assessment was carried out using ground-truth data collected along transects. The resulting empirical models were implemented for the Doñana marshes and can be applied to the Braila LTSER platform in Romania. The resulting maps are a very valuable input for assessing habitat diversity, wetland dynamics, and ecosystem productivity as frequently as desired by managers or scientists. Finally, we also examined the feasibility of upscaling the information obtained from the collected ground-truth data to satellite images from Sentinel-2 MSI using segments from the UAV multispectral orthomosaic. We found a close multispectral relationship between Parrot Sequoia and Sentinel-2 bands, which made it possible to extend the ground truth to map inundation in satellite images.
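The empirical approach described above reduces to an ordinary least-squares fit per band and indicator. A sketch with synthetic numbers (the reflectance and depth values are invented for illustration, not Doñana field measurements):

```python
import numpy as np

# hypothetical NIR reflectance values and matching ground-truth water depths
reflectance = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
depth_cm    = np.array([40.0, 31.0, 19.0, 11.0,  1.0])

# linear model: depth = slope * reflectance + intercept
slope, intercept = np.polyfit(reflectance, depth_cm, 1)

def predict_depth(r):
    """Predicted water depth (cm) for a given reflectance value."""
    return slope * r + intercept

print(round(predict_depth(0.12), 2))  # → 26.28
```

Applying `predict_depth` pixel-wise to an orthomosaic band then yields the water-depth map.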


2020 ◽  
Vol 12 (15) ◽  
pp. 2345 ◽  
Author(s):  
Ahram Song ◽  
Yongil Kim ◽  
Youkyung Han

Object-based image analysis (OBIA) outperforms pixel-based analysis for change detection (CD) in very high-resolution (VHR) remote sensing images. Although the effectiveness of deep learning approaches has recently been proved, few studies have investigated combining OBIA and deep learning for CD. Previously proposed methods use object information obtained in the preprocessing and postprocessing phases of deep learning. In general, they assign to all pixels inside an object the dominant or most frequent label, without any quantitative criterion for integrating the deep learning network and the object information. In this study, we developed an object-based CD method for VHR satellite images that uses a deep learning network to quantify the uncertainty associated with an object and effectively detect changes in an area without ground truth data. The method mainly comprises two phases. Initially, CD objects were generated by unsupervised CD methods, and these objects were used to train a CD network comprising three-dimensional convolutional layers and convolutional long short-term memory layers. After the learning process was completed, the CD objects were updated according to their uncertainty level, and the updated CD objects were used as training data for the CD network. This process was repeated until the entire area was classified into two classes, change and no-change, in object units, or until a defined number of epochs was reached. Experiments conducted on two different VHR satellite images confirmed that the proposed method achieved the best performance compared with traditional CD approaches. The method was less affected by salt-and-pepper noise and could effectively extract regions of change in object units without ground truth data. Furthermore, the proposed method combines the advantages of unsupervised CD methods and of a postprocessed CD network by effectively utilizing the deep learning technique and object information.
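Schematically, the iterative labelling loop described above can be sketched as follows. `predict_with_uncertainty` is a hypothetical stand-in for the paper's 3D-convolution/ConvLSTM network, and the thresholding scheme is an assumption made for illustration.

```python
def iterative_object_cd(objects, predict_with_uncertainty, tau=0.2, max_epochs=10):
    """Iteratively fix low-uncertainty objects as pseudo-labels until all are labelled."""
    labels = {}                                    # object -> 'change' / 'no-change'
    for _ in range(max_epochs):
        pending = [o for o in objects if o not in labels]
        if not pending:
            break                                  # whole area classified
        for obj in pending:
            label, uncertainty = predict_with_uncertainty(obj, labels)
            if uncertainty < tau:                  # confident: accept as training label
                labels[obj] = label
        tau += 0.1                                 # relax the threshold so the loop terminates
    return labels

# toy predictor: odd object ids are 'change'; uncertainty grows with the id
toy = lambda obj, labels: (("change" if obj % 2 else "no-change"), 0.05 * obj)
print(iterative_object_cd(range(5), toy))
```

In the real method, each accepted pseudo-label also retrains the network before the next pass, which is omitted here for brevity.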


2021 ◽  
Vol 13 (8) ◽  
pp. 1520
Author(s):  
Emon Kumar Dey ◽  
Fayez Tarsha Kurdi ◽  
Mohammad Awrangjeb ◽  
Bela Stantic

Existing approaches that extract buildings from point cloud data do not select an appropriate neighbourhood for estimating the normals at individual points, yet their success depends on the correct estimation of the normal vector. In most cases, a fixed neighbourhood is selected without considering the geometric structure of the object and the distribution of the input point cloud. Considering the object structure and the heterogeneous distribution of the point cloud, this paper therefore proposes a new, effective approach for selecting a minimal neighbourhood, which can vary for each input point. For each point, a minimal number of neighbouring points is selected iteratively. At each iteration, a decision about the neighbourhood is made adaptively, based on the standard deviation of the selected points from a fitted 3D line. The selected minimal neighbouring points make the calculation of the normal vector accurate. The direction of the normal vector is then used to calculate the inside fold feature points. In addition, the Euclidean distance from a point to the calculated mean of its neighbouring points is used to decide whether it is a boundary point. In terms of accuracy, the experimental results confirm the competitive performance of the proposed neighbourhood selection approach over state-of-the-art methods. On our generated ground truth data, the proposed fold and boundary point extraction techniques achieve F1-scores above 90%.
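The adaptive selection described above can be sketched as follows: neighbours are added nearest-first, a 3D line is fitted via PCA (SVD), and growth stops once the residual spread exceeds a threshold. The starting size `k_start` and threshold `sigma_max` are illustrative assumptions, not the paper's values.

```python
import numpy as np

def minimal_neighbourhood(points, idx, k_start=4, sigma_max=0.05):
    """Indices of a minimal, adaptively grown neighbourhood around points[idx]."""
    order = np.argsort(np.linalg.norm(points - points[idx], axis=1))  # nearest first
    k = k_start
    while k < len(points):
        nbrs = points[order[:k]]
        centred = nbrs - nbrs.mean(axis=0)
        direction = np.linalg.svd(centred, full_matrices=False)[2][0]  # best-fit 3D line
        residuals = centred - np.outer(centred @ direction, direction)
        if np.linalg.norm(residuals, axis=1).std() > sigma_max:
            break                       # points no longer line-like: stop growing
        k += 1
    return order[:k]
```

On perfectly collinear points the neighbourhood grows to the whole cloud; noise or surface curvature stops the growth early, keeping the neighbourhood minimal.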


2020 ◽  
Author(s):  
Nika Abdollahi ◽  
Anne de Septenville ◽  
Frédéric Davi ◽  
Juliana S. Bernardes

Motivation
The adaptive B-cell response is driven by the expansion, somatic hypermutation, and selection of B-cell clones. Their number, size, and sequence diversity are essential characteristics of B-cell populations. Identifying clones in B-cell populations is central to several repertoire studies such as statistical analysis, repertoire comparisons, and clonal tracking. Several clonal grouping methods have been developed to group sequences from B-cell immune repertoires. Such methods have been evaluated principally on simulated benchmarks, since experimental data containing clonally related sequences can be difficult to obtain. However, experimental data might contain multiple sources of sequence variability that hamper their artificial reproduction. Therefore, the generation of high-precision ground truth data that preserves real repertoire distributions is necessary to accurately evaluate clonal grouping methods.

Results
We propose a novel methodology to generate ground truth data sets from real repertoires. Our procedure requires V(D)J annotations to obtain the initial clones and iteratively applies an optimisation step that moves sequences among clones to increase their cohesion and separation. We first show that our method is able to accurately identify clonally related sequences in simulated repertoires with higher mutation rates. Next, we demonstrate how real benchmarks generated by our method constitute a challenge for clonal grouping methods, by comparing the performance of a widely used clonal grouping algorithm on several generated benchmarks. Our method can be used to generate a large number of benchmarks and contribute to the construction of more accurate clonal grouping tools.

Availability and implementation
The source code and generated data sets are freely available at github.com/NikaAb/BCR_GTG
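To illustrate, the move step of the optimisation can be caricatured with a toy criterion: a sequence is (re)assigned to the clone whose members it is closest to on average. Plain Hamming distance on equal-length toy sequences is an assumption here; the actual procedure works on annotated V(D)J sequences.

```python
def hamming(a, b):
    """Number of mismatching positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def mean_dist(seq, clone):
    """Average Hamming distance from seq to the other members of a clone."""
    others = [s for s in clone if s != seq]
    return sum(hamming(seq, s) for s in others) / len(others) if others else 0.0

def best_clone(seq, clones):
    """Index of the clone with the highest cohesion for seq (lowest mean distance)."""
    return min(range(len(clones)), key=lambda i: mean_dist(seq, clones[i]))

clones = [["AAAA", "AAAT"], ["GGGG", "GGGC"]]
print(best_clone("AAAC", clones))  # → 0
```

Repeating such moves until no sequence can improve its assignment increases both within-clone cohesion and between-clone separation.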


2011 ◽  
Vol 05 (01) ◽  
pp. 1-18 ◽  
Author(s):  
ABDELGHANI MESLEM ◽  
FUMIO YAMAZAKI ◽  
YOSHIHISA MARUYAMA

Using QuickBird satellite images of Boumerdes city obtained following the 21 May 2003 Algeria earthquake, our study examined the applicability of high-resolution optical imagery for the visual detection of building damage grade, based on ground-truth data on the urban character and typology of a total of 2,794 buildings and the real damage observed. The results are presented as geographical information system (GIS) damage maps of buildings obtained from field surveys and QuickBird images. In general, totally collapsed buildings, partially collapsed buildings, and buildings surrounded by debris can be identified using only post-event pan-sharpened images. However, due to the nature of the damage observed, some buildings may be judged incorrectly even if pre-event images are employed as a reference to evaluate the damage status. Hence, in this study, we clarify the limitations of high-resolution optical satellite imagery for building damage-level mapping.


2021 ◽  
Vol 13 (10) ◽  
pp. 1966
Author(s):  
Christopher W Smith ◽  
Santosh K Panda ◽  
Uma S Bhatt ◽  
Franz J Meyer ◽  
Anushree Badola ◽  
...  

In recent years, there have been rapid improvements in both remote sensing methods and satellite image availability that have the potential to massively improve burn severity assessments of the Alaskan boreal forest. In this study, we utilized recent pre- and post-fire Sentinel-2 satellite imagery of the 2019 Nugget Creek and Shovel Creek burn scars located in Interior Alaska both to assess burn severity across the burn scars and to test the effectiveness of several remote sensing methods for generating accurate map products: the Normalized Difference Vegetation Index (NDVI), the Normalized Burn Ratio (NBR), and Random Forest (RF) and Support Vector Machine (SVM) supervised classification. We used 52 Composite Burn Index (CBI) plots from the Shovel Creek burn scar and 28 from the Nugget Creek burn scar for training the classifiers and validating the products. For the Shovel Creek burn scar, the RF and SVM machine learning (ML) classification methods outperformed the traditional spectral indices, which use linear regression to separate burn severity classes (RF and SVM accuracy, 83.33%, versus NBR accuracy, 73.08%). However, for the Nugget Creek burn scar, the NDVI product (accuracy: 96%) outperformed the other indices and the ML classifiers. We demonstrated that the ML classifiers can be very effective for reliable mapping of burn severity in the Alaskan boreal forest when sufficient ground truth data are available; since their performance depends on the quantity of ground truth data, the traditional spectral indices are better suited when ground truth data are limited. We also examined the relationship between burn severity, fuel type, and topography (aspect and slope) and found that the relationship is site-dependent.
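The two spectral indices named above are simple band ratios. Their standard forms are sketched below; the Sentinel-2 band numbers in the docstrings are the conventional choices, not taken from the study's processing chain.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index, e.g. Sentinel-2 B8 (NIR) and B4 (red)."""
    return (nir - red) / (nir + red)

def nbr(nir, swir):
    """Normalized Burn Ratio, e.g. Sentinel-2 B8 (NIR) and B12 (SWIR)."""
    return (nir - swir) / (nir + swir)

def dnbr(nbr_pre, nbr_post):
    """Differenced NBR: pre-fire minus post-fire NBR; higher = more severe burn."""
    return nbr_pre - nbr_post

# healthy vegetation before the fire, bare burned ground after
print(round(dnbr(nbr(0.45, 0.10), nbr(0.20, 0.35)), 3))  # → 0.909
```

Burn severity classes are then obtained by thresholding (or regressing CBI values against) the index maps.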


2020 ◽  
Vol 13 (1) ◽  
pp. 26
Author(s):  
Wen-Hao Su ◽  
Jiajing Zhang ◽  
Ce Yang ◽  
Rae Page ◽  
Tamas Szinyei ◽  
...  

In many regions of the world, wheat is vulnerable to severe yield and quality losses from the fungal disease Fusarium head blight (FHB). The development of resistant cultivars is one means of ameliorating the devastating effects of this disease, but the breeding process requires the evaluation of hundreds of lines each year for their reaction to the disease. These field evaluations are laborious, expensive, time-consuming, and prone to rater error. A phenotyping cart that can quickly capture images of the spikes of wheat lines and their level of FHB infection would greatly benefit wheat breeding programs. In this study, a mask region-based convolutional neural network (Mask-RCNN) allowed reliable identification of the symptom location and the disease severity of wheat spikes. Within a wheat line planted in the field, color images of individual wheat spikes and their corresponding diseased areas were labeled and segmented into sub-images. Images with annotated spikes and sub-images of individual spikes with labeled diseased areas were used as ground truth data to train Mask-RCNN models for the automatic image segmentation of wheat spikes and FHB diseased areas, respectively. A feature pyramid network (FPN) based on the ResNet-101 network was used as the backbone of Mask-RCNN for constructing the feature pyramid and extracting features. After generating mask images of wheat spikes from full-size images, Mask-RCNN was applied to predict the diseased areas on each individual spike. This protocol enabled the rapid recognition of wheat spikes and diseased areas with detection rates of 77.76% and 98.81%, respectively. A prediction accuracy of 77.19% was achieved, calculated as the ratio of the predicted wheat FHB severity value to the ground truth. This study demonstrates the feasibility of rapidly determining levels of FHB in wheat spikes, which will greatly facilitate the breeding of resistant cultivars.
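Once the spike and diseased-area masks are predicted, a per-spike severity value reduces to a pixel ratio. A minimal sketch (binary masks assumed; `fhb_severity` is an illustrative name, not taken from the paper):

```python
import numpy as np

def fhb_severity(spike_mask, disease_mask):
    """Fraction of a spike's mask pixels that fall inside the diseased-area mask."""
    spike = spike_mask.astype(bool)
    diseased = disease_mask.astype(bool) & spike   # only count disease on the spike
    return diseased.sum() / spike.sum()

spike = np.zeros((4, 4), dtype=bool); spike[1:3, 1:3] = True   # 4-pixel spike
disease = np.zeros((4, 4), dtype=bool); disease[1, 1] = True   # 1 diseased pixel
print(fhb_severity(spike, disease))  # → 0.25
```

Comparing this predicted ratio against the same ratio computed from annotated masks yields the kind of prediction accuracy reported above.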

