MINDFLOW BASED DENSE MATCHING BETWEEN TIR AND RGB IMAGES

Author(s):  
J. Zhu ◽  
Z. Ye ◽  
Y. Xu ◽  
L. Hoegner ◽  
U. Stilla

Abstract. Image registration is a fundamental task in photogrammetry and remote sensing that aims to find the alignment between different images. Recently, the registration of images from different sensors has become a topic of great interest. Registered images from different sensors offer complementary information that supports tasks such as segmentation, classification, and even emergency analysis. In this paper, we propose a registration strategy that first calculates the dominant orientation difference and then achieves dense alignment of Thermal Infrared (TIR) and RGB images with MINDflow. Firstly, the orientation difference between TIR and RGB images is calculated by finding the dominant image orientations based on phase congruency. Then, the modality independent neighborhood descriptor (MIND), combined with a global optical flow algorithm, is adopted as MINDflow for dense matching. Our method is tested on image sets containing TIR and RGB images captured separately but over the same construction site areas. The results show that it achieves optimal alignment and retains significant features even under dramatic radiometric differences between TIR and RGB images. Compared with other descriptors, our method is more robust and better preserves the features of objects in the images.
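Conceptually, MIND encodes local self-similarity rather than raw intensity, which is what makes it usable across modalities such as TIR and RGB. The following minimal sketch (our own illustration, not the authors' code; the patch size, offset neighborhood, and NumPy/SciPy helpers are assumptions) computes a MIND-like descriptor for a single-band image:

```python
import numpy as np
from scipy.ndimage import uniform_filter, shift

def mind_descriptor(img, offsets=((0, 1), (1, 0), (0, -1), (-1, 0)), patch=3):
    """MIND-like self-similarity descriptor (illustrative sketch).

    For each pixel, compute patch-wise squared differences to shifted
    copies of the image, normalize by a local variance estimate, and map
    through exp(-d/v). Returns an array of shape (len(offsets), H, W).
    """
    img = img.astype(np.float64)
    dists = []
    for dy, dx in offsets:
        diff2 = (img - shift(img, (dy, dx), mode='nearest')) ** 2
        dists.append(uniform_filter(diff2, size=patch))  # patch average
    dists = np.stack(dists)                       # (K, H, W)
    variance = dists.mean(axis=0) + 1e-6          # local noise estimate
    desc = np.exp(-dists / variance)
    return desc / desc.max(axis=0, keepdims=True)  # per-pixel normalization
```

Because the descriptor depends only on local self-similarity, corresponding TIR and RGB structures map to similar descriptor vectors, so an optical flow data term can be evaluated on descriptor distances instead of raw intensities.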

1995 ◽  
Vol 32 (2) ◽  
pp. 77-83
Author(s):  
Y. Yüksel ◽  
D. Maktav ◽  
S. Kapdasli

Submarine pipelines must be designed to resist wave- and current-induced hydrodynamic forces, especially in and near the surf zone. They are buried as protection against forces in the surf zone; however, this procedure is not always feasible, particularly on a movable sea bed. For this reason, the characteristics of sediment transport at the construction site of beaches should be investigated. In this investigation, the application of the remote sensing method is introduced in order to determine and observe the coastal morphology, so that submarine pipelines may be protected against undesirable seabed movement.


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2407
Author(s):  
Hojun You ◽  
Dongsu Kim

Fluvial remote sensing has been used to monitor diverse riverine properties, such as river bathymetry and the visual detection of suspended sediment, algal blooms, and bed materials, more efficiently than laborious and expensive in-situ measurements. Red-green-blue (RGB) optical sensors have been widely used in traditional fluvial remote sensing. However, owing to their three confined bands, they rely on visual inspection for qualitative assessments and are limited in performing quantitative and accurate monitoring. Recent advances in hyperspectral imaging in the fluvial domain have enabled hyperspectral images equipped with more than 150 spectral bands. Thus, various riverine properties can be quantitatively characterized using sensors on low-altitude unmanned aerial vehicles (UAVs) with high spatial resolution. Many efforts are ongoing to take full advantage of hyperspectral band information in fluvial research. Although geo-referenced hyperspectral images can be acquired from satellites and manned airplanes, few attempts have been made using UAVs. This is mainly because synthesizing line-scanned images on top of image registration is more difficult for UAVs, owing to the large, motion-sensitive images produced by their dense spatial resolution. Therefore, in this study, we propose a practical technique for achieving high spatial accuracy in UAV-based fluvial hyperspectral imaging through efficient image registration using an optical flow algorithm. Template matching algorithms are the most common image registration technique in RGB-based remote sensing; however, they require many calculations and can be error-prone depending on the user, as decisions regarding various parameters are required. Furthermore, the spatial accuracy of this technique needs to be verified, as it has not been widely applied to hyperspectral imagery. The proposed technique reduced spatial errors by an average of 91.9% compared to the case where no image registration was applied, and by 78.7% compared to template matching.
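As a rough illustration of flow-based registration (our own minimal sketch with OpenCV, not the authors' pipeline; the Farneback parameters are assumptions), one band can be warped onto a reference band using a dense displacement field:

```python
import cv2
import numpy as np

def register_band(reference, moving):
    """Warp `moving` onto `reference` with dense optical flow (Farneback).

    Both inputs are single-band 8-bit images of identical shape.
    """
    flow = cv2.calcOpticalFlowFarneback(
        reference, moving, None,
        pyr_scale=0.5, levels=4, winsize=21,
        iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
    h, w = reference.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sample the moving image at the flow-displaced coordinates
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(moving, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```

Displacements estimated this way against a reference band could then be accumulated across the line-scanned hyperspectral cube.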


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 970
Author(s):  
Miguel Ángel Martínez-Domingo ◽  
Juan Luis Nieves ◽  
Eva M. Valero

Saliency prediction is a very important and challenging task in the computer vision community. Many models exist that try to predict the salient regions of a scene from its RGB image values. New models continue to be developed, and spectral imaging techniques may potentially overcome the limitations found when using RGB images. However, the experimental study of such models based on spectral images is difficult because of the lack of available data to work with. This article presents the first eight-channel multispectral image database of outdoor urban scenes, together with gaze data recorded with an eye tracker from several observers performing different visualization tasks. In addition, the information from this database is used to study whether the complexity of the images has an impact on the saliency maps retrieved from the observers. Results show that more complex images do not correlate with higher differences in the saliency maps obtained.
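One common way to quantify agreement between two fixation-derived saliency maps (a plausible metric for such a study, not necessarily the one the authors used) is the linear correlation coefficient between the normalized maps:

```python
import numpy as np

def saliency_cc(map_a, map_b):
    """Pearson correlation coefficient (CC) between two saliency maps.

    Both maps are 2-D arrays over the same scene; higher CC means more
    similar spatial distributions of observed saliency.
    """
    a = (map_a - map_a.mean()) / (map_a.std() + 1e-12)
    b = (map_b - map_b.mean()) / (map_b.std() + 1e-12)
    return float((a * b).mean())
```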


Forests ◽  
2019 ◽  
Vol 10 (11) ◽  
pp. 1047
Author(s):  
Ying Sun ◽  
Jianfeng Huang ◽  
Zurui Ao ◽  
Dazhao Lao ◽  
Qinchuan Xin

Monitoring tree species diversity is important for maintaining forest and wetland ecosystem services and for resource management. Remote sensing is an efficient alternative to traditional field work for mapping tree species diversity over large areas. Previous studies have used light detection and ranging (LiDAR) and imaging spectroscopy (hyperspectral or multispectral remote sensing) for species richness prediction. The recent development of very high spatial resolution (VHR) RGB images has enabled detailed characterization of canopies and forest structures. In this study, we developed a three-step workflow for mapping tree species diversity, the aim of which was to increase knowledge of tree species diversity assessment using deep learning in a tropical wetland (Haizhu Wetland) in South China based on VHR-RGB images and LiDAR points. Firstly, individual trees were detected based on a canopy height model (CHM, derived from LiDAR points) by the local-maxima-based method in the FUSION software (Version 3.70, Seattle, USA). Then, tree species were identified at the individual tree level via a patch-based image input method, which cropped the RGB images into small patches (the individually detected trees) based on the detected tree apexes. Three different deep learning methods (i.e., AlexNet, VGG16, and ResNet50) were modified to classify the tree species, as they can make good use of spatial context information. Finally, four diversity indices, namely, the Margalef richness index, the Shannon–Wiener diversity index, the Simpson diversity index, and the Pielou evenness index, were calculated from fixed subsets with a size of 30 × 30 m for assessment. In the classification phase, VGG16 had the best performance, with an overall accuracy of 73.25% for 18 tree species. Based on the classification results, the mapped tree species diversity showed reasonable agreement with field survey data (Margalef: R² = 0.4562, root-mean-square error (RMSE) = 0.5629; Shannon–Wiener: R² = 0.7948, RMSE = 0.7202; Simpson: R² = 0.7907, RMSE = 0.1038; Pielou: R² = 0.5875, RMSE = 0.3053). While challenges remain for individual tree detection and species classification, the deep-learning-based solution shows potential for mapping tree species diversity.
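The four indices are standard functions of per-plot species counts. The sketch below (our own illustration from the textbook formulas, applied here to the kind of per-subset species labels the classification step would produce; the Gini-Simpson form of the Simpson index is an assumption) computes them for one plot:

```python
import numpy as np
from collections import Counter

def diversity_indices(labels):
    """Margalef, Shannon-Wiener, Simpson (Gini-Simpson form assumed), and
    Pielou indices for one plot, given the species label of every tree."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    n = counts.sum()          # number of individuals
    s = len(counts)           # number of species
    p = counts / n            # relative abundances
    shannon = -np.sum(p * np.log(p))
    return {
        'margalef': (s - 1) / np.log(n) if n > 1 else 0.0,
        'shannon_wiener': shannon,
        'simpson': 1.0 - np.sum(p ** 2),
        'pielou': shannon / np.log(s) if s > 1 else 0.0,
    }

# e.g. diversity_indices(['Ficus', 'Ficus', 'Bombax', 'Cinnamomum'])
```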


2019 ◽  
Author(s):  
Jeffrey Chambers ◽  
Caralyn Gorman ◽  
Yanlei Feng ◽  
Margaret Torn ◽  
Jared Stapp

The Camp Fire rapidly spread across a landscape in Butte County, California, toward the city of Paradise in the early morning hours of 8 November 2018. Here we provide a set of initial tools and analyses that are useful to a variety of stakeholders, including: (1) a visualization app for GOES-16 data and the surrounding landscape showing the rapid spread of the fire from 6:37-10:47 a.m. local time; (2) processed Landsat 8 images for before, during, and after the fire, along with a tool for visualizing regional impacts; (3) a timeline of fire spread from ignition over the first four hours; and (4) a description of a potential early warning app that could make use of existing data, visualization, and analysis tools to provide additional information for the effective evacuation of communities threatened by rapidly moving wildfires. Using these tools, we estimate that, over the first hour, the Camp Fire was consuming ~200 ha/minute of vegetation, with a linear spread of 14 km over the fire's first 25 minutes, or ~33 km/h. We briefly discuss the broader use of remote sensing and geospatial analysis for fire research and risk management.
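The quoted spread rate follows directly from the distance-time estimate; a one-line check (our own arithmetic on the figures given in the text):

```python
# Linear spread: 14 km covered in the fire's first 25 minutes
distance_km, minutes = 14.0, 25.0
print(distance_km / minutes * 60)  # ≈ 33.6 km/h, matching the ~33 km/h cited
```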


Author(s):  
Ali Cam ◽  
Hüseyin Topan ◽  
Murat Oruç ◽  
Mustafa Özendi ◽  
Çağlar Bayık

RASAT, the second remote sensing satellite of Turkey, was designed and assembled, and is being operated, by the TÜBİTAK Uzay (Space) Technologies Research Institute (Ankara). RASAT images at various processing levels are available free of charge via the Gezgin portal for Turkish citizens. In this paper, images in the panchromatic (7.5 m GSD) and RGB (15 m GSD) bands at various levels were investigated with respect to their geometric and radiometric characteristics. The first geometric analysis is the estimation of the effective GSD, found to be less than 1 pixel for the radiometrically processed level (L1R) of both panchromatic and RGB images. Secondly, 2D georeferencing accuracy is estimated by various non-physical transformation models (similarity, 2D affine, polynomial, affine projection, projective, DLT, and GCP-based RFM), reaching sub-pixel accuracy using a minimum of 39 and a maximum of 52 GCPs. The radiometric characteristics are also investigated for 8-bit data, estimating an SNR between 21.8 and 42.2 and noise between 0.0 and 3.5 for the panchromatic and MS images at L1R, with the sea masked to obtain results for land areas. The analyses show that RASAT images satisfy the requirements of various applications. The research was carried out in the Zonguldak test site, which is mountainous and partly covered by dense forest and urban areas.
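As an illustration of how such non-physical models are fitted, the sketch below (our own example, not the authors' code) estimates a 2D affine transformation from ground control points by linear least squares; the RMS of the residuals is the usual georeferencing accuracy figure:

```python
import numpy as np

def fit_affine_2d(image_xy, ground_xy):
    """Least-squares 2D affine transform mapping image to ground coords.

    image_xy, ground_xy: (N, 2) arrays of corresponding GCPs, N >= 3.
    Returns the 2x3 parameter matrix [[a, b, tx], [c, d, ty]].
    """
    n = len(image_xy)
    A = np.hstack([image_xy, np.ones((n, 1))])              # design matrix
    params, *_ = np.linalg.lstsq(A, ground_xy, rcond=None)  # (3, 2)
    return params.T                                         # (2, 3)

def residuals(params, image_xy, ground_xy):
    """Per-GCP residuals; their RMS gives the georeferencing accuracy."""
    A = np.hstack([image_xy, np.ones((len(image_xy), 1))])
    return ground_xy - A @ params.T
```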


2020 ◽  
Author(s):  
Guoliang Liu

Full-resolution depth is required in many real-world engineering applications. However, existing depth sensors, e.g., LiDARs, only offer sparse depth sample points with limited resolution and noise. We here propose a deep-learning-based full-resolution depth recovery method from monocular images and corresponding sparse depth measurements of the target environment. The novelty of our idea is that the structural similarity information between the RGB image and the depth image is used to refine the dense depth estimation result. This important similar-structure information can be found using a correlation layer in the regression neural network. We show that the proposed method can achieve higher estimation accuracy compared to state-of-the-art methods. Experiments conducted on the NYU Depth V2 dataset validate our idea.
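A correlation layer of the kind described compares feature vectors across small displacements; a minimal PyTorch sketch (our own rendition under assumed tensor shapes, not the paper's implementation) looks like this:

```python
import torch
import torch.nn.functional as F

def correlation_layer(feat_rgb, feat_depth, max_disp=3):
    """Dense correlation between RGB-branch and depth-branch feature maps.

    feat_rgb, feat_depth: (B, C, H, W) tensors from the two encoders.
    Returns a (B, (2*max_disp + 1)**2, H, W) cost volume whose channels
    score the similarity of each pixel to its displaced neighbours.
    """
    b, c, h, w = feat_rgb.shape
    padded = F.pad(feat_depth, [max_disp] * 4)  # pad left/right/top/bottom
    scores = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = padded[:, :, dy:dy + h, dx:dx + w]
            scores.append((feat_rgb * shifted).mean(dim=1, keepdim=True))
    return torch.cat(scores, dim=1)
```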


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7032
Author(s):  
Jifa Chen ◽  
Gang Chen ◽  
Lizhe Wang ◽  
Bo Fang ◽  
Ping Zhou ◽  
...  

Low inter-class variance and complex spatial details characterize the ground objects of the coastal zone, which makes coastal land cover classification (CLCC) from high-resolution remote sensing images a challenging task. Recently, fully convolutional neural networks have been widely used in CLCC. However, the inherent structure of the convolutional operator limits the receptive field, so only local context is captured. Additionally, complex decoders bring additional information redundancy and computational burden. Therefore, this paper proposes a novel attention-driven context encoding network to solve these problems. Lightweight global feature attention modules are employed to aggregate multi-scale spatial details in the decoding stage. Meanwhile, position and channel attention modules with long-range dependencies are embedded to enhance the feature representations of specific categories by capturing the multi-dimensional global context. Additionally, multiple objective functions are introduced to supervise and optimize feature information at specific scales. We apply the proposed method to CLCC tasks in two study areas and compare it with other state-of-the-art approaches. Experimental results indicate that the proposed method achieves the best performance in encoding long-range context and recognizing spatial details, and obtains the best representations on the evaluation indexes.
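For readers unfamiliar with the building blocks, a channel attention module of the general kind mentioned (a generic squeeze-and-excitation style sketch in PyTorch, not the paper's exact architecture) re-weights feature channels using globally pooled statistics:

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (generic sketch)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # global average pool -> (B, C)
        return x * w.view(b, c, 1, 1)     # re-weight each channel
```

Position attention modules work analogously but re-weight spatial locations by pairwise similarity, giving each pixel access to long-range context.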

