Optimal parameter selection for intensity-based multi-sensor data registration

Author(s):  
E. G. Parmehr ◽  
C. S. Fraser ◽  
C. Zhang ◽  
J. Leach

Accurate co-registration of multi-sensor data is a primary step in data integration for photogrammetric and remote sensing applications. A proven intensity-based registration approach is Mutual Information (MI). However, the effectiveness of MI for automated registration of multi-sensor remote sensing data can be impacted, to the point of failure, by its non-monotonic convergence surface. Since MI-based methods rely on joint probability density functions (PDFs) of the datasets, errors in PDF estimation directly affect the MI value. Certain PDF parameters, such as the bin size of the joint histogram and the width of the smoothing kernel, must be assigned in advance, and they play a key role in shaping the convergence surface. The lack of a general approach to assigning these parameter values across data types reduces both the automation level and the robustness of registration. This paper proposes a new approach for selecting optimal parameter values for PDF estimation in MI-based registration of optical imagery to LiDAR point clouds. The proposed method determines the best PDF-estimation parameters by analysing the relationship between the similarity measure values of the data and the adopted geometric transformation, so as to achieve optimal registration reliability. The performance of the proposed parameter selection method is evaluated experimentally, and the results are compared with those achieved by a feature-based registration method.
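For intuition, the sketch below computes MI from a smoothed joint histogram; the bin count and Gaussian kernel width are exactly the PDF-estimation parameters the paper tunes, and the default values shown are illustrative, not the paper's selections.

```python
# Minimal MI-from-joint-histogram sketch, assuming two co-sized 2D arrays
# (e.g., optical intensity and a rasterized LiDAR attribute). bins and sigma
# are the tunable PDF parameters; the defaults here are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def mutual_information(img_a, img_b, bins=64, sigma=1.0):
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    hist = gaussian_filter(hist, sigma=sigma)   # Parzen-style kernel smoothing
    pxy = hist / hist.sum()                     # joint PDF estimate
    px = pxy.sum(axis=1, keepdims=True)         # marginal PDFs
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
```

In an MI-based workflow this value is evaluated over candidate transformations; a smoother, more monotonic convergence surface around the optimum is what well-chosen parameter values buy.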

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Ryan B. Patterson-Cross ◽  
Ariel J. Levine ◽  
Vilas Menon

Abstract Background Generating and analysing single-cell data has become a widespread approach to examining tissue heterogeneity, and numerous algorithms exist for clustering these datasets to identify putative cell types with shared transcriptomic signatures. However, many of these clustering workflows rely on user-tuned parameter values, tailored to each dataset, to identify a set of biologically relevant clusters. Although users often develop their own intuition about the optimal parameter range for clustering each dataset, the lack of systematic approaches to identifying this range can be daunting for new users of any given workflow. In addition, an optimal parameter set does not guarantee that all clusters are equally well resolved, given the heterogeneity of transcriptomic signatures in most biological systems. Results Here, we illustrate a subsampling-based approach (chooseR) that simultaneously guides parameter selection and characterizes cluster robustness. Through bootstrapped iterative clustering across a range of parameters, chooseR was used to select parameter values for two distinct clustering workflows (Seurat and scVI). In each case, chooseR identified parameters that produced biologically relevant clusters from both well-characterized (human PBMC) and complex (mouse spinal cord) datasets. Moreover, it provided a simple "robustness score" for each cluster, facilitating the assessment of cluster quality. Conclusion chooseR is a simple, conceptually understandable tool that can be used flexibly across clustering algorithms, workflows, and datasets to guide clustering parameter selection and characterize cluster robustness.
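As a rough illustration of the bootstrapped-subsampling idea, the sketch below estimates how often pairs of cells co-cluster across subsamples, with scikit-learn's KMeans standing in for the Seurat/scVI workflows; the function name and the simple pairwise score are illustrative assumptions, not chooseR's exact definitions.

```python
# Conceptual subsampling sketch: a generic clusterer (KMeans) stands in for
# Seurat/scVI, and the pairwise co-clustering rate is a simplified stand-in
# for chooseR's robustness score.
import numpy as np
from sklearn.cluster import KMeans

def cocluster_frequency(X, n_clusters, n_iter=20, frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    together = np.zeros((n, n))   # times a pair landed in the same cluster
    sampled = np.zeros((n, n))    # times a pair was subsampled together
    for _ in range(n_iter):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X[idx])
        same = labels[:, None] == labels[None, :]
        sampled[np.ix_(idx, idx)] += 1
        together[np.ix_(idx, idx)] += same
    return together / np.maximum(sampled, 1)   # pairwise co-clustering rate
```

A cluster's robustness can then be summarized, for example, as the median within-cluster co-clustering rate of a full-data clustering, screened across candidate parameter values.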


2021 ◽  
Vol 13 (17) ◽  
pp. 3425
Author(s):  
Xin Zhao ◽  
Hui Li ◽  
Ping Wang ◽  
Linhai Jing

Accurate registration of multisource high-resolution remote sensing images is an essential step for various remote sensing applications. Due to the complexity of the feature and texture information in high-resolution remote sensing images, especially images covering earthquake disasters, feature-based image registration methods need a more discriminative feature descriptor to improve accuracy. Traditional image registration methods that use only low-level local features have difficulty representing the features of matching points. To improve the accuracy of feature matching for multisource high-resolution remote sensing images, an image registration method based on a deep residual network (ResNet) and the scale-invariant feature transform (SIFT) is proposed. It fuses SIFT features with ResNet features on the basis of the traditional algorithm to achieve image registration. The proposed method consists of two parts: model construction and training, and image registration using a combination of SIFT and ResNet34 features. First, a registration sample set constructed from high-resolution satellite remote sensing images is used to fine-tune the network and obtain the ResNet model. Then, for the images to be registered, the Shi-Tomasi algorithm and the combined SIFT and ResNet features are used for feature extraction to complete the registration. Considering differences in image size and scene, five pairs of images were used in experiments to verify the effectiveness of the method in different practical applications. The experimental results showed that the proposed method achieves higher accuracy and more tie points than traditional feature-based methods.
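A hedged sketch of the fusion step is given below: SIFT descriptors and ResNet34 patch features are computed at Shi-Tomasi corners and concatenated. An off-the-shelf ImageNet-pretrained ResNet34 is used in place of the paper's fine-tuned model, and the patch size and L2 normalization are illustrative choices.

```python
# Sketch of SIFT + ResNet34 feature fusion at Shi-Tomasi corners, assuming a
# uint8 grayscale image. The ImageNet weights, patch size, and plain
# concatenation are illustrative stand-ins for the paper's fine-tuned setup.
import cv2
import numpy as np
import torch
from torchvision.models import resnet34

model = resnet34(weights="IMAGENET1K_V1")
model.fc = torch.nn.Identity()   # expose the 512-d pooled feature
model.eval()

def fused_descriptors(gray, patch=32):
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=8)
    kps = [cv2.KeyPoint(float(x), float(y), float(patch)) for x, y in pts[:, 0]]
    kps, sift_desc = cv2.SIFT_create().compute(gray, kps)
    h, w = gray.shape
    feats = []
    for kp in kps:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        x0 = int(np.clip(x - patch // 2, 0, w - patch))
        y0 = int(np.clip(y - patch // 2, 0, h - patch))
        p = gray[y0:y0 + patch, x0:x0 + patch].astype(np.float32) / 255.0
        t = torch.from_numpy(p).expand(3, -1, -1).unsqueeze(0)  # grey -> 3-ch
        with torch.no_grad():
            feats.append(model(t).squeeze(0).numpy())
    resnet_desc = np.stack(feats)
    # L2-normalize each block so the two feature scales are comparable.
    sift_desc /= np.linalg.norm(sift_desc, axis=1, keepdims=True) + 1e-8
    resnet_desc /= np.linalg.norm(resnet_desc, axis=1, keepdims=True) + 1e-8
    return kps, np.hstack([sift_desc, resnet_desc])
```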


2021 ◽  
Vol 13 (5) ◽  
pp. 928
Author(s):  
Yuanxin Ye ◽  
Chao Yang ◽  
Bai Zhu ◽  
Liang Zhou ◽  
Youquan He ◽  
...  

Co-registering the Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 optical data of the European Space Agency (ESA) is of great importance for many remote sensing applications. However, we find evident misregistration shifts between Sentinel-1 SAR and Sentinel-2 optical images downloaded directly from the official website. To address this, this paper presents a fast and effective registration method for the two types of images. In the proposed method, a block-based scheme is first designed to extract evenly distributed interest points. Then, correspondences are detected using the similarity of structural features between the SAR and optical images, with the three-dimensional (3D) phase correlation (PC) used as the similarity measure to accelerate image matching. Lastly, the obtained correspondences are employed to measure the misregistration shifts between the images. Moreover, to eliminate the misregistration, we apply representative geometric transformation models, such as polynomial, projective, and rational function models, for the co-registration of the two types of images, and we compare and analyze their registration accuracy under different numbers of control points and different terrains. Six pairs of Sentinel-1 SAR L1 and Sentinel-2 optical L1C images covering three different terrains are tested in our experiments. The results show that the proposed method achieves precise correspondences between the images, and that the third-order polynomial yields the most satisfactory registration results: an accuracy (in 10 m pixels) of less than 1.0 pixel for flat areas, about 1.5 pixels for hilly areas, and between 1.7 and 2.3 pixels for mountainous areas, significantly improving the co-registration accuracy of the Sentinel-1 SAR and Sentinel-2 optical images.
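For readers unfamiliar with phase correlation, the following is a minimal 2D version that recovers a translation from the cross-power spectrum; the paper's 3D PC over structural-feature descriptors follows the same principle and is not reproduced here.

```python
# Minimal 2D phase-correlation sketch: the inverse FFT of the normalized
# cross-power spectrum peaks at the translational offset between two patches.
import numpy as np

def phase_correlation_shift(a, b):
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cps = Fa * np.conj(Fb)
    cps /= np.abs(cps) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(cps).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the patch to negative offsets.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```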


2019 ◽  
Vol 85 (10) ◽  
pp. 725-736 ◽  
Author(s):  
Ming Hao ◽  
Jian Jin ◽  
Mengchao Zhou ◽  
Yi Tian ◽  
Wenzhong Shi

Image registration is an indispensable component of remote sensing applications such as disaster monitoring, change detection, and classification. Grayscale differences and geometric distortions often occur among multisource images due to their different imaging mechanisms, making it difficult to acquire feature points and match corresponding points. This article proposes a scene shape similarity feature (SSSF) descriptor based on scene shape features and the shape-context algorithm. A new similarity measure, called SSSFncc, is then defined by computing the normalized correlation coefficient of the SSSF descriptors between multisource remote sensing images. Furthermore, tie points between the reference and sensed images are extracted via a template-matching strategy, and a global consistency check is used to remove mismatched tie points. Finally, a piecewise linear transform model is selected to rectify the sensed image. The proposed SSSFncc aims to capture the scene shape similarity between multisource images. Its accuracy is evaluated using five pairs of experimental images from optical, synthetic aperture radar, and map data. Registration results demonstrate that the SSSFncc similarity measure is robust to complex nonlinear grayscale differences among multisource remote sensing images, and the proposed method achieves more reliable registration outcomes than other popular methods.
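As a sketch of the SSSFncc idea, the normalized correlation coefficient between two descriptors and a naive template-matching loop might look like the following; the descriptors are plain NumPy vectors here, and the SSSF descriptor itself is not reproduced.

```python
# Normalized correlation coefficient (NCC) between descriptor vectors, plus a
# naive template-matching pass; a simplified stand-in for SSSFncc matching.
import numpy as np

def ncc(d_ref, d_sen):
    a = d_ref - d_ref.mean()
    b = d_sen - d_sen.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match(ref_desc, sensed_descs):
    # Template-matching strategy: pick the sensed descriptor maximizing NCC.
    scores = [ncc(ref_desc, d) for d in sensed_descs]
    return int(np.argmax(scores)), max(scores)
```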


2021 ◽  
Vol 13 (17) ◽  
pp. 3443
Author(s):  
Yuan Chen ◽  
Jie Jiang

The registration of multi-temporal remote sensing images with abundant information and complex changes is an important preprocessing step for subsequent applications. This paper presents a novel two-stage deep learning registration method based on sub-image matching. Unlike the conventional registration framework, the proposed network learns the mapping between matched sub-images and the geometric transformation parameters directly. In the first stage, matching of sub-images (MSI), sub-images cropped from the input images are matched through heatmaps built from the predicted similarity of each sub-image pair. In the second stage, estimation of transformation parameters (ETP), a network with weight structure and position embedding estimates the global transformation parameters from the matched pairs. The network can handle an uncertain number of matched sub-image inputs and reduces the impact of outliers. Furthermore, a sample-sharing training strategy and an augmentation based on the bounding rectangle are introduced. We evaluated our method against conventional and deep learning methods, qualitatively and quantitatively, on the Google Earth, ISPRS, and WHU Building datasets. The experiments showed that our method obtained a probability of correct keypoints (PCK) of over 99% at α = 0.05 (α: the normalized distance threshold) and achieved a maximum improvement of 16.8% at α = 0.01 compared with the latest method. The results demonstrate that our method is robust and improves precision in the registration of optical remote sensing images with great variation.
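A conceptual sketch of the first stage (MSI) is given below: pairwise similarities of sub-image embeddings form a heatmap, and mutual best matches above a threshold are kept. The embedding network, score threshold, and mutual-nearest-neighbour filter are placeholders, not the paper's architecture.

```python
# Heatmap-based sub-image matching sketch, assuming L2-normalized sub-image
# embeddings from some network. The threshold and mutual-NN filter are
# illustrative outlier-rejection choices.
import numpy as np

def match_sub_images(emb_a, emb_b, thresh=0.8):
    # emb_a: (Na, d), emb_b: (Nb, d) L2-normalized embeddings.
    heatmap = emb_a @ emb_b.T               # cosine-similarity heatmap
    matches = []
    for i in range(heatmap.shape[0]):
        j = int(np.argmax(heatmap[i]))
        # Keep mutual best matches above the score threshold.
        if np.argmax(heatmap[:, j]) == i and heatmap[i, j] >= thresh:
            matches.append((i, j, float(heatmap[i, j])))
    return matches
```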


2019 ◽  
Vol 11 (12) ◽  
pp. 1443 ◽  
Author(s):  
Huang Yao ◽  
Rongjun Qin ◽  
Xiaoyu Chen

Unmanned aerial vehicle (UAV) sensors and platforms are nowadays used in almost every application (e.g., agriculture, forestry, and mining) that needs observed information from top-down or oblique views. While they are intended to serve as general remote sensing (RS) tools, the relevant RS data processing and analysis methods remain largely ad hoc to specific applications. Although the obvious advantages of UAV data are their high spatial resolution and flexibility in acquisition and sensor integration, there is in general a lack of systematic analysis of how these characteristics alter solutions for typical RS tasks such as land-cover classification, change detection, and thematic mapping. For instance, ultra-high-resolution data (Ground Sampling Distance (GSD) below 10 cm) introduce more unwanted classes of objects (e.g., pedestrians and cars) in land-cover classification, and the often-available 3D data generated from photogrammetric images call for more advanced techniques for geometric and spectral analysis. In this paper, we perform a critical review of RS tasks that use UAV data and their derived products as their main sources, including raw perspective images, digital surface models, and orthophotos. In particular, we focus on solutions that address the "new" aspects of UAV data: (1) ultra-high resolution; (2) availability of coherent geometric and spectral data; and (3) capability of simultaneously using multi-sensor data for fusion. Based on these solutions, we briefly summarize existing examples of UAV-based RS in agricultural, environmental, urban, and hazard-assessment applications, and, by discussing their practical potential, we share our views on future research directions and draw concluding remarks.


Author(s):  
M. Khoshboresh Masouleh ◽  
M. R. Saradjian

Abstract. Building footprint extraction (BFE) from multi-sensor data, such as optical images and light detection and ranging (LiDAR) point clouds, is widely used in various remote sensing applications. However, it remains a challenging research topic due to the relative inefficiency of building extraction techniques across the variety of complex scenes in multi-sensor data. In this study, we develop and evaluate a deep competition network (DCN) that fuses very high spatial resolution optical remote sensing images with LiDAR data for robust BFE. DCN is a deep superpixel-wise convolutional encoder-decoder architecture using encoder vector quantization with a classified structure. It consists of five encoding-decoding blocks with convolutional weights for robust binary representation (superpixel) learning. DCN is trained and tested on a large multi-sensor dataset, covering multiple building scenes, obtained from the state of Indiana in the United States. Accuracy assessment showed that DCN delivers competitive BFE performance compared with other deep semantic binary segmentation architectures. We therefore conclude that the proposed model is a suitable solution for robust BFE from big multi-sensor data.
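To make the fusion idea concrete, here is a toy PyTorch encoder-decoder that concatenates optical and LiDAR channels to predict a binary building mask. It is not the paper's DCN (no vector quantization or superpixel-wise learning); it only illustrates early multi-sensor fusion under those simplifying assumptions.

```python
# Toy early-fusion encoder-decoder: optical (3-band) and LiDAR-derived
# (1-band) rasters are concatenated on the channel axis and mapped to a
# per-pixel building/background logit. Illustrative only, not the DCN.
import torch
import torch.nn as nn

class TinyFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 1),   # building / background logit per pixel
        )

    def forward(self, optical, lidar):
        x = torch.cat([optical, lidar], dim=1)   # early fusion on channels
        return self.decoder(self.encoder(x))

# Example: logits = TinyFusionNet()(torch.rand(1, 3, 128, 128),
#                                   torch.rand(1, 1, 128, 128))
```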

