Seamless Mosaicking of UAV-Based Push-Broom Hyperspectral Images for Environment Monitoring

2021 ◽  
Vol 13 (22) ◽  
pp. 4720
Author(s):  
Lina Yi ◽  
Jing M. Chen ◽  
Guifeng Zhang ◽  
Xiao Xu ◽  
Xing Ming ◽  
...  

This paper proposes a systematic image mosaicking methodology to produce hyperspectral image mosaics for environment monitoring using an emerging UAV-based push-broom hyperspectral imager. The suitability of alternative methods in each step is assessed by experiments on an urban scape, a river course and a forest study area. First, the hyperspectral image strips were acquired by sequentially stitching the UAV images acquired by push-broom scanning along each flight line. Next, direct geo-referencing was applied to each image strip to obtain an initial geo-rectified result. Then, with ground control points (GCPs), the curved surface spline function was used to transform the initial geo-rectified image strips to improve their geometrical accuracy. To further remove the displacement between pairs of image strips, an improved phase correlation (IPC) method and a SIFT and RANSAC-based method (SR) were used for image registration. Finally, the weighted average and the best-stitching image fusion methods were used to remove the spectral differences between image strips and obtain a seamless mosaic. Experimental results showed that as the number of GCPs increases, the geometrical accuracy of the mosaicked image increases. In image registration, obvious edge information can be accurately extracted from the urban scape and river course areas, so the IPC method achieves comparable results at a lower time cost. However, for ground objects with complex texture such as forest, the extracted edges are prone to be inaccurate and cause the IPC method to fail, and only the SR method gives a good result. In image fusion, the best-stitching fusion method produced seamless results for all three study areas, whereas the weighted average fusion method eliminated the stitching line only for the river course and forest areas and failed for the urban scape area due to the spectral heterogeneity of different ground objects. For different environment monitoring applications, the proposed methodology provides a practical solution to seamlessly mosaic UAV-based push-broom hyperspectral images with high geometrical accuracy and spectral fidelity.
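As an illustration of the SR registration step described above, the sketch below estimates a homography between two overlapping image strips using SIFT features and RANSAC. It assumes OpenCV is available; the function name and parameter values are illustrative and not taken from the paper.

```python
# Hypothetical sketch of an SR (SIFT + RANSAC) registration step between two
# overlapping image strips, using OpenCV; names are illustrative.
import cv2
import numpy as np

def register_strips_sr(strip_a_gray: np.ndarray, strip_b_gray: np.ndarray) -> np.ndarray:
    """Estimate a homography mapping strip_b onto strip_a with SIFT + RANSAC."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(strip_a_gray, None)
    kp_b, des_b = sift.detectAndCompute(strip_b_gray, None)

    # Match descriptors and keep only distinctive matches (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_b, des_a, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched keypoints before fitting the transform.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    return H
```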

2021 ◽  
Vol 11 (16) ◽  
pp. 7365
Author(s):  
Jian Long ◽  
Yuanxi Peng ◽  
Tong Zhou ◽  
Liyuan Zhao ◽  
Jun Li

Fusing low-resolution hyperspectral images (LR-HSI) with high-resolution multispectral images (HR-MSI) is an important way to obtain high-resolution hyperspectral images (HR-HSI). Some hyperspectral image fusion applications have strong real-time requirements, so a fast fusion method is urgently needed. This paper proposes a fast and stable fusion method (FSF) based on matrix factorization, which greatly reduces the computational workload of image fusion and thus achieves fast and efficient fusion. FSF introduces the Moore–Penrose inverse into the fusion model to simplify the estimation of the coefficient matrix and uses singular value decomposition (SVD) to simplify the estimation of the spectral basis, significantly reducing the computational effort of model solving. Meanwhile, FSF introduces two multiplicative iterative processes to optimize the spectral basis and the coefficient matrix to achieve stable and high-quality fusion. We tested the method on remote sensing and ground-based datasets. The experiments show that the proposed method achieves the performance of several state-of-the-art algorithms while reducing execution time to less than 1% of theirs.
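The core linear-algebra idea can be sketched as follows: a spectral basis is taken from a truncated SVD of the LR-HSI, and the coefficient matrix is estimated with a Moore–Penrose pseudoinverse applied to the HR-MSI. This is a minimal sketch of the general approach, not the authors' FSF implementation; the matrix shapes, the spectral response matrix srf, and the rank are assumptions, and the multiplicative refinement iterations are omitted.

```python
# Illustrative sketch of matrix-factorization fusion: SVD for the spectral
# basis, pseudoinverse for the coefficients. Not the authors' FSF code.
import numpy as np

def fuse_fsf_like(lr_hsi: np.ndarray, hr_msi: np.ndarray, srf: np.ndarray, rank: int = 10) -> np.ndarray:
    """lr_hsi: (bands, n_lr) pixels; hr_msi: (msi_bands, n_hr); srf: (msi_bands, bands)."""
    # Spectral basis from the truncated SVD of the LR hyperspectral data.
    U, _, _ = np.linalg.svd(lr_hsi, full_matrices=False)
    E = U[:, :rank]                            # (bands, rank) spectral basis

    # Coefficients for every HR pixel via the pseudoinverse of the projected basis.
    A = np.linalg.pinv(srf @ E) @ hr_msi       # (rank, n_hr) coefficient matrix

    return E @ A                               # (bands, n_hr) fused HR-HSI
```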


2013 ◽  
Vol 448-453 ◽  
pp. 3621-3624 ◽  
Author(s):  
Ming Jing Li ◽  
Yu Bing Dong ◽  
Xiao Li Wang

Non-multi-scale image fusion methods take the original images as the object of study and fuse them with various fusion rules, without decomposing or transforming the source images first. They can therefore also be called simple multi-sensor image fusion methods, and their advantages are low computational complexity and a simple principle. Non-multi-scale methods are currently the most widely used image fusion methods. Their basic principle is to combine corresponding pixels of the source images directly, for example by selecting the larger gray value, selecting the smaller gray value, or taking a weighted average, to form a new fused image. Simple pixel-level fusion methods mainly include averaging or weighted averaging of pixel gray values, selecting the larger gray value, and selecting the smaller gray value. This paper introduces the basic principle of the fusion process in detail and summarizes current pixel-level fusion algorithms. Simulation results are presented to illustrate the fusion schemes. In practice, the fusion algorithm is selected according to the imaging characteristics to be retained.
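A minimal sketch of these pixel-level rules (weighted average, select-larger, select-smaller) for two registered grayscale images is given below; the function name and the clipping to the 8-bit range are illustrative.

```python
# Minimal sketch of simple pixel-level fusion rules; names are illustrative.
import numpy as np

def fuse_pixelwise(img_a: np.ndarray, img_b: np.ndarray, rule: str = "weighted", w: float = 0.5) -> np.ndarray:
    """Fuse two registered, same-size grayscale images pixel by pixel."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    if rule == "weighted":        # weighted average of gray values
        fused = w * a + (1.0 - w) * b
    elif rule == "max":           # select the larger gray value
        fused = np.maximum(a, b)
    elif rule == "min":           # select the smaller gray value
        fused = np.minimum(a, b)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return np.clip(fused, 0, 255).astype(np.uint8)
```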


2014 ◽  
Vol 530-531 ◽  
pp. 394-402
Author(s):  
Ze Tao Jiang ◽  
Li Wen Zhang ◽  
Le Zhou

At present, image fusion commonly suffers from blurred edges and sparse texture. To address this problem, this study proposes an image fusion method based on the combination of the Lifting Wavelet and the Median Filter. The method adopts different fusion rules for the low- and high-frequency coefficients. For the low-frequency coefficients, the low-frequency scale coefficients are convolved and squared, respectively, to enhance the edges of the fused image, and the detail information of the original images is then extracted by measuring regional characteristics. For the high-frequency coefficients, the high-frequency parts are denoised by the Median Filter, and a fusion rule based on neighborhood spatial frequency and consistency verification is then applied to fuse the detail sub-images. Compared with the Weighted Average and Regional Energy methods, experimental results show that the proposed method retains the most edge and texture information. The method alleviates blurred edges and sparse texture to a certain degree and has strong practical value in image fusion.
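The sketch below illustrates a wavelet-domain fusion pipeline in this spirit: both images are decomposed, the approximation band is fused by averaging, and median-filtered detail bands are fused by choosing the larger magnitude. A standard wavelet stands in for the lifting scheme, and the paper's region-measure and consistency-verification rules are replaced by simpler stand-ins; pywt and scipy are assumed to be available.

```python
# Simplified sketch of wavelet-domain fusion with median-filtered detail bands;
# the exact fusion rules of the paper are replaced by simpler stand-ins.
import numpy as np
import pywt
from scipy.ndimage import median_filter

def fuse_wavelet_median(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db2", level: int = 2) -> np.ndarray:
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)

    fused = [0.5 * (ca[0] + cb[0])]            # average the low-frequency band
    for da, db in zip(ca[1:], cb[1:]):         # each level has (cH, cV, cD) details
        bands = []
        for ha, hb in zip(da, db):
            ha_f = median_filter(ha, size=3)   # denoise high-frequency parts
            hb_f = median_filter(hb, size=3)
            bands.append(np.where(np.abs(ha_f) >= np.abs(hb_f), ha, hb))
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```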


Image fusion is the process in which two or more images are combined into a single image that retains the important features of each source image. The resulting image is enhanced in overall content and is preferable to the base images. Certain tasks in image processing need both high spatial and high spectral information in a single image, which is crucial in remote sensing. The image fusion procedure incorporates enhancing, filtering, and shaping the images for better results. Efficient and important approaches for image fusion are applied here. The fusion method operates on two distinct types of images: a visible image and an infrared image. Single Scale Retinex (SSR) is applied to the visible image to obtain an enhanced image, while Principal Component Analysis (PCA) is applied to the infrared image to obtain an image with superior contrast and colour. These processed images are then decomposed into multilayer representations using the Laplacian pyramid algorithm. Finally, a weighted average fusion method fuses the layers to produce the enhanced fused image.
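A minimal sketch of the Laplacian-pyramid decomposition and weighted average fusion stages described above is given below (the SSR and PCA preprocessing steps are omitted); it assumes OpenCV, and the function names and level count are illustrative.

```python
# Sketch of Laplacian-pyramid fusion with a per-level weighted average.
import cv2
import numpy as np

def laplacian_pyramid(img: np.ndarray, levels: int = 4) -> list:
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)                   # band-pass (Laplacian) layer
        cur = down
    pyr.append(cur)                            # residual low-pass layer
    return pyr

def fuse_pyramids(img_a: np.ndarray, img_b: np.ndarray, w: float = 0.5, levels: int = 4) -> np.ndarray:
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [w * la + (1.0 - w) * lb for la, lb in zip(pa, pb)]
    out = fused[-1]                            # reconstruct from the coarsest level
    for layer in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(layer.shape[1], layer.shape[0])) + layer
    return np.clip(out, 0, 255).astype(np.uint8)
```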


2015 ◽  
Vol 2015 ◽  
pp. 1-7
Author(s):  
Feng Zhu ◽  
Yingkun Hou ◽  
Jingyu Yang

A new multifocus image fusion method is proposed. Two image blocks are selected by sliding a window over the two source images at the same position, the discrete cosine transform (DCT) is applied to each block, and the alternating component (AC) energy of the blocks is then calculated to decide which one is well focused. In addition, block matching is used to determine a group of image blocks that are all similar to the well-focused reference block. Finally, all the blocks are returned to their original positions through weighted averaging, with the weight determined by the AC energy of the well-focused block. Experimental results demonstrate that, unlike other spatial methods, the proposed method effectively avoids block artifacts. It also significantly improves on the objective evaluation results obtained by some transform-domain methods.
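The focus measure can be sketched as follows: the AC energy of a block is the total energy of its DCT coefficients minus the DC term, and the block with the larger AC energy is taken as the better-focused one. Block extraction, block matching, and the weighted reassembly are omitted; scipy is assumed, and the names are illustrative.

```python
# Sketch of the DCT AC-energy focus measure used to pick the well-focused block.
import numpy as np
from scipy.fft import dctn

def ac_energy(block: np.ndarray) -> float:
    """AC energy of an image block: total DCT energy minus the DC component."""
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    return float(np.sum(coeffs ** 2) - coeffs[0, 0] ** 2)

def pick_focused(block_a: np.ndarray, block_b: np.ndarray) -> np.ndarray:
    """Return the block with the larger AC energy, taken as the better-focused one."""
    return block_a if ac_energy(block_a) >= ac_energy(block_b) else block_b
```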


2004 ◽  
Vol 101 (Supplement 3) ◽  
pp. 351-355 ◽  
Author(s):  
Javad Rahimian ◽  
Joseph C. Chen ◽  
Ajay A. Rao ◽  
Michael R. Girvigian ◽  
Michael J. Miller ◽  
...  

Object. Stringent geometrical accuracy and precision are required in the stereotactic radiosurgical treatment of patients. Accurate targeting is especially important when treating a patient in a single fraction of a very high radiation dose (90 Gy) to a small target such as that used in the treatment of trigeminal neuralgia (3- to 4-mm diameter). The purpose of this study was to determine the inaccuracies in each step of the procedure including imaging, fusion, treatment planning, and finally the treatment. The authors implemented a detailed quality-assurance program.

Methods. Overall geometrical accuracy of the Novalis stereotactic system was evaluated using a Radionics Geometric Phantom Chamber. The phantom has several magnetic resonance (MR) and computerized tomography (CT) imaging-friendly objects of various shapes and sizes. Axial 1-mm-thick MR and CT images of the phantom were acquired using a T1-weighted three-dimensional spoiled gradient recalled pulse sequence and the CT scanning protocols used clinically in patients. The absolute errors due to MR image distortion, CT scan resolution, and the image fusion inaccuracies were measured knowing the exact physical dimensions of the objects in the phantom. The isocentric accuracy of the Novalis gantry and the patient support system was measured using the Winston-Lutz test. Because inaccuracies are cumulative, to calculate the system's overall spatial accuracy, the root mean square (RMS) of all the errors was calculated. To validate the accuracy of the technique, a 1.5-mm-diameter spherical marker taped on top of a radiochromic film was fixed parallel to the x–z plane of the stereotactic coordinate system inside the phantom. The marker was defined as a target on the CT images, and seven noncoplanar circular arcs were used to treat the target on the film. The calculated system RMS value was then correlated with the position of the target and the highest density on the radiochromic film.

The mean spatial errors due to image fusion and MR imaging were 0.41 ± 0.3 and 0.22 ± 0.1 mm, respectively. Gantry and couch isocentricities were 0.3 ± 0.1 and 0.6 ± 0.15 mm, respectively. The system overall RMS values were 0.9 and 0.6 mm with and without the couch errors included, respectively (isocenter variations due to couch rotation are microadjusted between couch positions). The positional verification of the marker was within 0.7 ± 0.1 mm of the highest optical density on the radiochromic film, correlating well with the system's overall RMS value. The overall mean system deviation was 0.32 ± 0.42 mm.

Conclusions. The highest spatial errors were caused by image fusion and gantry rotation. A comprehensive quality-assurance program was developed for the authors' stereotactic radiosurgery program that includes medical imaging, linear accelerator mechanical isocentricity, and treatment delivery. For a successful treatment of trigeminal neuralgia with a 4-mm cone, an overall RMS value of 1 mm or less must be guaranteed.
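As a back-of-the-envelope illustration of combining independent error sources into an overall spatial error, the sketch below sums the quoted error terms in quadrature. Only the terms quoted numerically in the abstract are included (CT-resolution and other measured errors are not), so the result approximates rather than reproduces the reported 0.6 and 0.9 mm values.

```python
# Back-of-the-envelope quadrature (root-sum-square) combination of the error
# terms quoted in the abstract; unquoted terms are omitted, so the totals are
# only approximations of the reported values.
import math

errors_mm = {
    "image_fusion": 0.41,
    "mr_distortion": 0.22,
    "gantry_isocenter": 0.30,
    "couch_isocenter": 0.60,
}

def rss(values):
    return math.sqrt(sum(v ** 2 for v in values))

without_couch = rss(v for k, v in errors_mm.items() if k != "couch_isocenter")
with_couch = rss(errors_mm.values())
print(f"overall error without couch: {without_couch:.2f} mm")  # ~0.55 mm
print(f"overall error with couch:    {with_couch:.2f} mm")     # ~0.82 mm
```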


Author(s):  
Liu Xian-Hong ◽  
Chen Zhi-Bin

Background: A multi-scale, multidirectional image fusion method is proposed, which introduces the Nonsubsampled Directional Filter Bank (NSDFB) into a multi-scale edge-preserving decomposition based on the fast guided filter. Methods: The proposed method has the advantage of preserving edges and extracting directional information simultaneously. To obtain better-fused sub-band coefficients, a Convolutional Sparse Representation (CSR) based fusion rule is introduced for the approximation sub-bands, and a Pulse Coupled Neural Network (PCNN) based fusion strategy, with the New Sum of Modified Laplacian (NSML) as its external input, is presented for the detail sub-bands. Results: Experimental results have demonstrated the superiority of the proposed method over conventional methods in terms of visual effects and objective evaluations. Conclusion: Combining the fast guided filter and the nonsubsampled directional filter bank, a multi-scale directional edge-preserving image fusion method is proposed, which preserves edges and extracts directional information.
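As an illustration of the first stage only, the sketch below builds a multi-scale edge-preserving decomposition by repeated guided filtering; the NSDFB, CSR, and PCNN fusion stages are not shown. It assumes opencv-contrib-python (cv2.ximgproc), and the radius, eps, and level values are illustrative.

```python
# Illustrative multi-scale edge-preserving decomposition via repeated guided
# filtering; later fusion stages of the pipeline are not shown.
import cv2
import numpy as np

def guided_decompose(img: np.ndarray, levels: int = 3, radius: int = 8, eps: float = 0.04) -> list:
    """Split an image into detail layers plus a base layer using repeated guided filtering."""
    layers, cur = [], img.astype(np.float32) / 255.0
    for _ in range(levels):
        smoothed = cv2.ximgproc.guidedFilter(cur, cur, radius, eps)  # self-guided smoothing
        layers.append(cur - smoothed)          # edge-aware detail layer
        cur = smoothed
    layers.append(cur)                         # remaining base (approximation) layer
    return layers
```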


2021 ◽  
Vol 13 (2) ◽  
pp. 268
Author(s):  
Xiaochen Lv ◽  
Wenhong Wang ◽  
Hongfu Liu

Hyperspectral unmixing is an important technique for analyzing remote sensing images, which aims to obtain a collection of endmembers and their corresponding abundances. In recent years, non-negative matrix factorization (NMF) has received extensive attention due to its good adaptability to data with different degrees of mixing. The majority of existing NMF-based unmixing methods are developed by incorporating additional constraints into the standard NMF based on the spectral and spatial information of hyperspectral images. However, they neglect to exploit the imbalanced pixels included in the data, which may cause pixels mixed with imbalanced endmembers to be ignored; consequently, the imbalanced endmembers generally cannot be accurately estimated due to the statistical properties of NMF. To exploit the information of imbalanced samples in hyperspectral data during the unmixing procedure, this paper proposes a cluster-wise weighted NMF (CW-NMF) method for the unmixing of hyperspectral images with imbalanced data. Specifically, based on the result of clustering conducted on the hyperspectral image, we construct a weight matrix and introduce it into the standard NMF model. The proposed weight matrix provides an appropriate weight value for the reconstruction error between each original pixel and the reconstructed pixel in the unmixing procedure. In this way, the adverse effect of imbalanced samples on the statistical accuracy of NMF is expected to be reduced by assigning larger weight values to the pixels concerning imbalanced endmembers and smaller weight values to the pixels mixed by majority endmembers. In addition, we extend the proposed CW-NMF by introducing sparsity constraints on the abundance and graph-based regularization, respectively. Experimental results on both synthetic and real hyperspectral data are reported, and comparisons with several state-of-the-art methods demonstrate the effectiveness of the proposed methods.
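A simplified sketch of weighted NMF with multiplicative updates, in which each pixel (column) carries a weight on its reconstruction error, is given below. The way the paper derives the weights from the clustering result is not reproduced, and the update form, initialization, and names are assumptions.

```python
# Simplified weighted NMF with multiplicative updates; per-pixel weights scale
# the reconstruction error, in the spirit of cluster-wise weighting.
import numpy as np

def weighted_nmf(V: np.ndarray, weights: np.ndarray, rank: int, iters: int = 200, eps: float = 1e-9):
    """V: (bands, pixels) non-negative data; weights: (pixels,) per-pixel weights."""
    bands, pixels = V.shape
    rng = np.random.default_rng(0)
    E = rng.random((bands, rank))              # endmember (spectral) matrix
    H = rng.random((rank, pixels))             # abundance (coefficient) matrix
    W = np.tile(weights, (bands, 1))           # expand per-pixel weights element-wise

    for _ in range(iters):
        WV, WEH = W * V, W * (E @ H)
        H *= (E.T @ WV) / (E.T @ WEH + eps)    # multiplicative update for abundances
        WEH = W * (E @ H)
        E *= (WV @ H.T) / (WEH @ H.T + eps)    # multiplicative update for endmembers
    return E, H
```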

