Study of the Effects of Target Geometry on Synthetic Aperture Radar Images using Simulation Studies

Author(s):  
K. Tummala ◽  
A. K. Jha ◽  
S. Kumar

Synthetic aperture radar technology has revolutionized earth observation with very high resolutions below 5 m, making it possible to distinguish individual urban features like buildings and even cars on the surface of the earth. However, the difficulty of interpreting these images has hindered their use. The geometry of target objects and their orientation with respect to the SAR sensor contribute enormously to unexpected signatures on SAR images. The geometry of objects can cause single, double or multiple reflections which, in turn, affect the brightness values on SAR images. Occlusion, shadow and layover effects appear in SAR images as a result of the orientation of target objects with respect to the incident microwaves. Simulation of SAR images is the best and easiest way to study and understand these anomalies. This paper discusses synthetic aperture radar image simulation, with the study of the effect of target geometry as the main aim. The simulation algorithm has been developed in the time domain to provide greater modularity and ease of implementation. This algorithm takes into account the sensor and target characteristics, their locations with respect to the earth, a 3-dimensional model of the target, the sensor velocity, and the SAR parameters. Two methods have been discussed to obtain the position and velocity vectors of the SAR sensor: the first from the metadata of the real SAR image used to verify the simulation algorithm, and the second from satellite orbital parameters. Using these inputs, the SAR image coordinates and backscatter coefficients for each point on the target are calculated. The backscatter coefficients at target points are calculated from the local incidence angles using Muhleman's backscatter model. The present algorithm has been successfully implemented on a RADARSAT-2 image of the San Francisco Bay area. Digital elevation models (DEMs) of the area under consideration are used as the 3D models of the target area. DEMs of different resolutions have been used to simulate SAR images in order to study how the target models affect the accuracy of the simulation algorithm. The simulated images have been compared with RADARSAT-2 images to assess how accurately the simulation algorithm represents the locations and extents of different objects in the target area. The simulation algorithm implemented in this paper has given satisfactory results, as the simulated images accurately show the different features present in the DEM of the target area.
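The abstract does not reproduce the backscatter formula; a minimal sketch of Muhleman's empirical model as it is commonly written in the SAR simulation literature (the constants k1 and k2 are typical assumed values, not taken from the paper) could look like this:

```python
import numpy as np

def muhleman_backscatter(incidence_angle_rad, k1=0.0133, k2=0.1):
    """Muhleman's empirical backscatter model.

    Returns the backscatter coefficient for a given local incidence angle.
    k1 and k2 are commonly quoted simulation constants (assumptions here).
    """
    theta = np.asarray(incidence_angle_rad)
    return k1 * np.cos(theta) / (np.sin(theta) + k2 * np.cos(theta)) ** 3

# Example: backscatter (in dB) over a range of local incidence angles
angles = np.deg2rad(np.linspace(10, 70, 7))
sigma0_db = 10 * np.log10(muhleman_backscatter(angles))
print(np.round(sigma0_db, 2))
```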

Author(s):  
Ayoub Karine ◽  
Abdelmalek Toumi ◽  
Ali Khenchaf ◽  
Mohammed El Hassouni

In this paper, we propose a novel approach to recognize radar targets in inverse synthetic aperture radar (ISAR) and synthetic aperture radar (SAR) images. This approach is based on multiple salient keypoint descriptors (MSKD) and multitask sparse representation based classification (MSRC). To characterize the targets in the radar images, we combine the scale-invariant feature transform (SIFT) and the saliency map. The goal of this combination is to reduce the number of SIFT keypoints and their computation time by retaining only those located in the target area (the salient region). Then, we compute the feature vectors of the resulting salient SIFT keypoints (MSKD). This methodology is applied to both training and test images. The MSKD of the training images is used to construct the dictionary of a sparse convex optimization problem. To achieve recognition, we adopt the MSRC, taking each vector in the MSKD as a task. This classifier solves the sparse representation problem for each task over the dictionary and determines the class of the radar image according to all sparse reconstruction errors (residuals). The effectiveness of the proposed approach has been demonstrated by a set of extensive empirical results on ISAR and SAR image databases. The results show the ability of our method to adequately recognize both aircraft and ground targets.
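A much-simplified sketch of the salient-keypoint selection and residual-based classification described above, assuming OpenCV's SIFT implementation and a precomputed saliency map, and substituting ordinary least squares for the paper's multitask sparse solver:

```python
import numpy as np
import cv2  # OpenCV >= 4.4 with SIFT support

def salient_sift_descriptors(image_gray, saliency_map, thresh=0.5):
    """Keep only SIFT descriptors whose keypoints fall in the salient region."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image_gray, None)
    if descriptors is None:
        return np.empty((0, 128))
    keep = [saliency_map[int(kp.pt[1]), int(kp.pt[0])] >= thresh
            for kp in keypoints]
    return descriptors[np.array(keep, dtype=bool)]

def classify_by_residuals(query_descriptors, class_dictionaries):
    """Assign the class whose dictionary reconstructs the descriptors best.

    class_dictionaries maps a class label to a (128 x n_atoms) array.
    Least squares stands in for the sparse solver used in the paper.
    """
    residuals = {}
    for label, D in class_dictionaries.items():
        coeffs, *_ = np.linalg.lstsq(D, query_descriptors.T, rcond=None)
        residuals[label] = np.linalg.norm(query_descriptors.T - D @ coeffs)
    return min(residuals, key=residuals.get)
```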


2021 ◽  
Vol 58 (1) ◽  
pp. 4289-4295
Author(s):  
Dr. D. Suresh et al.

Noise is unavoidable in the image acquisition process, and denoising is an essential step to recover image quality. Synthetic Aperture Radar (SAR) images are inherently corrupted by speckle noise, which arises from the coherent nature of the scattering phenomena. Denoising SAR images aims at removing speckle while preserving image features such as texture, edges, and point targets. The combination of nonlocal grouping and transform-domain filtering underlies current state-of-the-art denoising methods. However, this approach makes a strong assumption that the image patch itself provides a good approximation of the true signal, which leads to bias problems, predominantly under real speckle noise. Another limitation is that the commonly used patch pre-selection techniques cannot efficiently exclude outliers and tend to damage edges. In the proposed method, the SAR image is injected with speckle noise, and then edge-based marker-controlled watershed segmentation is applied to identify the homogeneous regions in the SAR image. For each region, the neighboring pixels are identified using an Intensity Coherence Vector (ICV) and are denoised independently using a hybrid filter, which combines improved versions of the Frost, median and mean filters. The experimental results show that the proposed method outperforms other techniques, such as patch-based filtering, non-local methods, wavelets and classical speckle filters, in terms of higher signal-to-noise and edge-preservation ratios.
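A rough sketch of the segment-then-filter-per-region idea, assuming scikit-image's watershed and standard median/mean filters; the improved Frost filter and the Intensity Coherence Vector step of the paper are not reproduced here:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def hybrid_despeckle(sar_image, low_q=0.2, high_q=0.8):
    """Edge-based marker-controlled watershed followed by per-region filtering.

    The quantile markers and the simple median/mean blend are illustrative
    choices, not the paper's calibrated hybrid filter.
    """
    sar = np.asarray(sar_image, dtype=float)
    edges = sobel(sar)                           # edge map used as elevation
    markers = np.zeros(sar.shape, dtype=int)
    markers[sar < np.quantile(sar, low_q)] = 1   # dark (homogeneous) seeds
    markers[sar > np.quantile(sar, high_q)] = 2  # bright (target) seeds
    labels = watershed(edges, markers)

    median = ndi.median_filter(sar, size=3)
    mean = ndi.uniform_filter(sar, size=3)
    out = np.empty_like(sar)
    for region in np.unique(labels):
        mask = labels == region
        out[mask] = 0.5 * median[mask] + 0.5 * mean[mask]  # simplified hybrid
    return out
```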


2020 ◽  
Author(s):  
Aron Sommer

Radar images of the open sea taken by airborne synthetic aperture radar (SAR) typically show several smeared ships. Due to their non-linear motions on a rough sea, these ships are smeared beyond recognition, such that their images are useless for classification or identification tasks. The ship imaging algorithm presented in this thesis consists of a fast image reconstruction using the fast factorized backprojection algorithm and an extended autofocus algorithm for large moving ships. This thesis analyses the factorization parameters of the fast factorized backprojection algorithm and describes how to choose them near-optimally in order to reconstruct SAR images with minimal computational cost and without any loss of quality. Furthermore, this thesis shows how to estimate and compensate for the translation, the rotation and the deformation of a large, arbitrarily moving ship in order to reconstruct a sharp image of the ship. The proposed autofocus technique generates images in which the ...
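The factorized variant and the autofocus step are beyond an abstract-level sketch, but the plain time-domain backprojection that fast factorized backprojection accelerates can be outlined as follows (array shapes and variable names are illustrative assumptions):

```python
import numpy as np

def backprojection(range_profiles, antenna_pos, range_axis, grid_x, grid_y, wavelength):
    """Plain (non-factorized) time-domain backprojection.

    range_profiles : (n_pulses, n_samples) complex range-compressed pulses
    antenna_pos    : (n_pulses, 3) antenna position for each pulse
    range_axis     : (n_samples,) range in metres per sample, increasing
    grid_x, grid_y : 2-D image pixel coordinates on the z = 0 plane
    """
    image = np.zeros(grid_x.shape, dtype=complex)
    for pulse, pos in zip(range_profiles, antenna_pos):
        # distance from this antenna position to every image pixel
        r = np.sqrt((grid_x - pos[0]) ** 2 + (grid_y - pos[1]) ** 2 + pos[2] ** 2)
        # sample the range profile at each pixel's range (linear interpolation)
        sample = (np.interp(r, range_axis, pulse.real)
                  + 1j * np.interp(r, range_axis, pulse.imag))
        # remove the two-way propagation phase, then accumulate coherently
        image += sample * np.exp(4j * np.pi * r / wavelength)
    return image
```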


2021 ◽  
Vol 13 (24) ◽  
pp. 5091
Author(s):  
Jinxiao Wang ◽  
Fang Chen ◽  
Meimei Zhang ◽  
Bo Yu

Glacial lake extraction is essential for studying the response of glacial lakes to climate change and assessing the risks of glacial lake outburst floods. Most methods for glacial lake extraction are based on either optical images or synthetic aperture radar (SAR) images. Although deep learning methods can extract features of optical and SAR images well, efficiently fusing the two modalities' features for glacial lake extraction with high accuracy is challenging. In this study, to make full use of the spectral characteristics of optical images and the geometric characteristics of SAR images, we propose an atrous convolution fusion network (ACFNet) to extract glacial lakes based on Landsat 8 optical images and Sentinel-1 SAR images. ACFNet adequately fuses high-level features of optical and SAR data in different receptive fields using atrous convolution. Compared with four fusion models in which data fusion occurs at the input, encoder, decoder, and output stages, two classical semantic segmentation models (SegNet and DeepLabV3+), and a recently proposed model based on U-Net, our model achieves the best results with an intersection-over-union of 0.8278. The experiments show that fully extracting the characteristics of optical and SAR data and appropriately fusing them are vital to a network's performance in glacial lake extraction.
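As a loose illustration of fusing optical and SAR feature maps with atrous (dilated) convolutions at several rates, here is a PyTorch sketch; the module name, channel counts, and dilation rates are assumptions and not the published ACFNet configuration:

```python
import torch
import torch.nn as nn

class AtrousFusion(nn.Module):
    """Fuse optical and SAR feature maps with atrous convolutions at
    several dilation rates (illustrative, not the published ACFNet block)."""

    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(2 * channels, channels, kernel_size=3,
                      padding=r, dilation=r)
            for r in rates
        )
        self.project = nn.Conv2d(len(rates) * channels, channels, kernel_size=1)

    def forward(self, optical_feat, sar_feat):
        fused = torch.cat([optical_feat, sar_feat], dim=1)   # channel concat
        multi_scale = [torch.relu(b(fused)) for b in self.branches]
        return self.project(torch.cat(multi_scale, dim=1))

# usage: fuse 64-channel features from parallel optical and SAR encoders
# block = AtrousFusion(64)
# out = block(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
```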


2010 ◽  
Vol 138 (2) ◽  
pp. 475-496 ◽  
Author(s):  
Werner Alpers ◽  
Jen-Ping Chen ◽  
Chia-Jung Pi ◽  
I-I. Lin

Abstract Frontal lines having offshore distances typically between 40 and 80 km are often visible on synthetic aperture radar (SAR) images acquired over the east coast of Taiwan by the European Remote Sensing Satellites 1 and 2 (ERS-1 and ERS-2) and Envisat. In a previous paper the authors showed that they are of atmospheric and not of oceanic origin; however, in that paper they did not give a definite answer to the question of which physical mechanism causes them. In this paper the authors present simulations carried out with the fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model, which show that the frontal lines are associated with a quasi-stationary low-level convergence zone generated by the dynamic interaction of onshore airflow of the synoptic-scale wind with the coastal mountain range of the island of Taiwan. Reversed airflow collides with the onshore-flowing air, leading to an uplift of air, which is often accompanied by the formation of bands of increased cloud density and of rainbands. The physical mechanism causing the generation of the frontal lines is similar to the one responsible for the formation of cloud bands off the Island of Hawaii as described by Smolarkiewicz et al. Four SAR images are shown, one acquired by ERS-2 and three by Envisat, showing frontal lines at the east coast of Taiwan caused by this generation mechanism. For these events the recirculation pattern, as well as the frontal (or convective) lines observed, were reproduced quite well with the meteorological model. It is therefore argued that the observed frontal lines are not seaward boundaries of (classical) barrier jets or of katabatic wind fields, which have characteristics that are quite different from the flow patterns around the east coast of Taiwan as indicated by the SAR images.


2021 ◽  
Vol 13 (22) ◽  
pp. 4637
Author(s):  
Runzhi Jiao ◽  
Qingsong Wang ◽  
Tao Lai ◽  
Haifeng Huang

The dramatic undulations of mountainous terrain introduce large geometric distortions in each Synthetic Aperture Radar (SAR) image acquired with a different look angle, resulting in poor registration performance. To this end, this paper proposes a multi-hypothesis topological isomorphism matching method for SAR images with large geometric distortions. The method includes Ridge-Line Keypoint Detection (RLKD) and Multi-Hypothesis Topological Isomorphism Matching (MHTIM). Firstly, based on an analysis of the ridge structure, a ridge keypoint detection module and a keypoint similarity description method are designed, which aim to quickly produce a small number of stable matching keypoint pairs under large look angle differences and large terrain undulations. The keypoint pairs are then fed into the MHTIM module. Subsequently, the MHTIM method is proposed, which uses the stability and isomorphism of the topological structure of the keypoint set under different perspectives to generate a variety of matching hypotheses, and iteratively achieves keypoint matching. This method uses both local and global geometric relationships between keypoints, hence achieving better performance than traditional methods. We tested our approach on both simulated and real mountain SAR images with different look angles and different elevation ranges. The experimental results demonstrate the effectiveness and stable matching performance of our approach.
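A heavily simplified stand-in for scoring whether a hypothesized set of keypoint correspondences preserves pairwise geometry (the tolerance and scoring rule are assumptions; the paper's topological isomorphism test and its iteration are not reproduced):

```python
import numpy as np
from itertools import combinations

def hypothesis_consistency(src_pts, dst_pts, ratio_tol=0.2):
    """Fraction of keypoint pairs whose mutual distance is preserved.

    src_pts, dst_pts : (n, 2) arrays of matched keypoint coordinates
    under one matching hypothesis.
    """
    consistent, total = 0, 0
    for i, j in combinations(range(len(src_pts)), 2):
        d_src = np.linalg.norm(src_pts[i] - src_pts[j])
        d_dst = np.linalg.norm(dst_pts[i] - dst_pts[j])
        if d_src == 0:
            continue
        total += 1
        if abs(d_dst - d_src) / d_src < ratio_tol:
            consistent += 1
    return consistent / max(total, 1)
```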


2021 ◽  
Vol 7 (3) ◽  
pp. 267
Author(s):  
Pollen Chakma ◽  
Aysha Akter

Floods are triggered by water overflowing into drylands from several sources, including rivers, lakes, oceans, or heavy rainfall. Near real-time (NRT) flood mapping plays an important role in taking strategic measures to reduce flood damage after a flood event. Many satellite-imagery-based remote sensing techniques are widely used to generate flood maps. Synthetic aperture radar (SAR) images have proven to be particularly effective in flood mapping due to their high spatial resolution and cloud penetration capacity. This case study focuses on the super cyclone Amphan, which struck the West Bengal-Bangladesh coast across the Sundarbans on 20 May 2020, with wind speeds between 155 and 165 km/h gusting up to 185 km/h. The flooding extent is determined by analyzing pre- and post-event synthetic aperture radar images using the change detection and thresholding (CDAT) method. The results showed an inundated landmass of 2146 km² on 22 May 2020, excluding the Sundarbans. About a week after the event, on 28 May 2020, an area of 1425 km² remained inundated. This persistence, caused by broken embankments, made the flood more severe and intense. Furthermore, 13 out of 19 coastal districts were affected by the flooding, while 8 were highly inundated, including Bagerhat, Pirojpur, Satkhira, Khulna, Barisal, Jhalokati, Patuakhali and Barguna. These findings were subsequently compared with an inundation map created from a validation survey immediately after the event and with locations identified using a machine learning-based image classification technique. The comparison showed a close similarity between the inundation scenario and the flood reports from the secondary sources. This underlines the significant role of the CDAT method in providing relevant information for an effective decision support system.
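A minimal sketch of the change detection and thresholding (CDAT) idea on co-registered pre- and post-event backscatter images; the threshold values below are illustrative assumptions, not the ones calibrated in this case study:

```python
import numpy as np

def cdat_flood_map(pre_db, post_db, change_thresh_db=-3.0, water_thresh_db=-15.0):
    """Change detection and thresholding (CDAT) sketch for flood mapping.

    pre_db, post_db : co-registered pre- and post-event SAR backscatter in dB.
    """
    difference = post_db - pre_db
    # Open water lowers backscatter, so flooded pixels show a strong drop in
    # the difference image and low absolute backscatter after the event.
    return (difference < change_thresh_db) & (post_db < water_thresh_db)

# inundated area: flood_mask.sum() * pixel_area_km2 for the sensor's pixel size
```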


2021 ◽  
Vol 13 (21) ◽  
pp. 4383
Author(s):  
Gang Zhang ◽  
Zhi Li ◽  
Xuewei Li ◽  
Sitong Liu

Self-supervised methods have proven to be a suitable approach for despeckling synthetic aperture radar (SAR) images. However, most self-supervised despeckling methods are trained on noisy-noisy image pairs constructed from natural images with simulated speckle noise, time series of real-world SAR images, or generative adversarial networks, which limits the practicability of these methods on real-world SAR images. Therefore, in this paper, a novel self-supervised despeckling algorithm with an enhanced U-Net is proposed for real-world SAR images. Firstly, unlike previous self-supervised despeckling works, the noisy-noisy image pairs are generated from real-world SAR images through a novel training-pair generation module, which makes it possible to train deep convolutional neural networks using real-world SAR images. Secondly, an enhanced U-Net is designed to improve the feature extraction and fusion capabilities of the network. Thirdly, a self-supervised training loss function with a regularization loss is proposed to address the difference in target pixel values between neighbors in the original SAR images. Finally, visual and quantitative experiments on simulated and real-world SAR images show that the proposed algorithm notably removes speckle noise while better preserving features, outperforming several state-of-the-art despeckling methods.
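A sketch of what a noisy-to-noisy training loss with a simple regularizer could look like in PyTorch; the total-variation term, its weight, and the pair-generation step are placeholders, not the paper's exact formulation:

```python
import torch.nn.functional as F

def self_supervised_despeckle_loss(model, noisy_a, noisy_b, reg_weight=0.1):
    """Noisy-to-noisy data term plus a total-variation regularizer.

    noisy_a, noisy_b : two noisy views of the same scene, shaped (N, C, H, W).
    """
    pred = model(noisy_a)
    data_term = F.mse_loss(pred, noisy_b)        # predict the second noisy view
    # anisotropic total variation penalizing differences between neighbors
    tv = (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean() + \
         (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean()
    return data_term + reg_weight * tv
```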


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4133 ◽  
Author(s):  
Bing Sun ◽  
Chuying Fang ◽  
Hailun Xu ◽  
Anqi Gao

In general, synthetic aperture radar (SAR) imaging and image processing are two sequential steps in a SAR processing chain. Due to the large size of SAR images, most image processing algorithms require image segmentation before processing. However, the presence of speckle noise in SAR images, together with poor contrast and the uneven distribution of gray values within the same target, makes SAR images difficult to segment. In order to facilitate the subsequent processing of SAR images, this paper proposes a new method that combines the back-projection algorithm (BPA) and a first-order gradient operator to enhance the edges of SAR images and overcome image segmentation problems. For complex-valued signals, the gradient operator is applied directly within the imaging process. Experimental results on simulated and real images validate the proposed method. For the simulated scene, the supervised image segmentation evaluation indexes of our method show improvements of more than 1.18%, 11.2% and 11.72% in the probabilistic Rand index (PRI), variability index (VI), and global consistency error (GCE), respectively. The proposed imaging method will make SAR image segmentation and related applications easier.
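As a loose illustration of combining a first-order gradient with a complex-valued SAR image to emphasize edges (applied here to a finished image rather than inside the back-projection loop, which is a simplification of the paper's approach):

```python
import numpy as np

def gradient_enhanced_image(complex_image):
    """Add a first-order gradient magnitude to the image magnitude.

    complex_image : 2-D complex-valued SAR image.
    """
    gy, gx = np.gradient(complex_image)           # gradients of the complex field
    gradient_magnitude = np.sqrt(np.abs(gx) ** 2 + np.abs(gy) ** 2)
    return np.abs(complex_image) + gradient_magnitude
```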


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1154 ◽  
Author(s):  
Xiangli Huang ◽  
Kefeng Ji ◽  
Xiangguang Leng ◽  
Ganggang Dong ◽  
Xiangwei Xing

Moving ship targets appear blurred and defocused in synthetic aperture radar (SAR) images due to their translational motion during coherent processing. Motion compensation is required to refocus moving ship targets in SAR scenes. A novel refocusing method for moving ships is developed in this paper. The method exploits the inverse synthetic aperture radar (ISAR) technique to refocus ship targets in SAR images. Generally, most refocusing approaches operate on raw echo data rather than on SAR images. Taking advantage of working in the image domain, the data processed in this paper are SAR images rather than raw echo data. The ISAR processing is based on a fast minimum-entropy phase compensation method, an iterative approach to estimating the phase error. The proposed method has been tested using spaceborne TerraSAR-X and Gaofen-3 images as well as airborne SAR images of maritime targets.
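A slow but self-contained sketch of minimum-entropy phase compensation on a complex ship chip, using greedy coordinate descent over per-pulse phases; the paper's fast iterative estimator converges far more efficiently, and the axis convention here is an assumption:

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized image intensity."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    return -(p * np.log(p + 1e-12)).sum()

def minimum_entropy_autofocus(sar_patch, n_iter=10, step=0.1):
    """Greedy per-pulse phase adjustment that lowers image entropy.

    sar_patch : 2-D complex image chip of the ship, azimuth along axis 0.
    """
    spectrum = np.fft.fft(sar_patch, axis=0)      # azimuth phase-history domain
    phase = np.zeros(sar_patch.shape[0])

    def focus(ph):
        return np.fft.ifft(spectrum * np.exp(-1j * ph)[:, None], axis=0)

    best = image_entropy(focus(phase))
    for _ in range(n_iter):
        for k in range(len(phase)):
            for delta in (step, -step):
                trial = phase.copy()
                trial[k] += delta
                e = image_entropy(focus(trial))
                if e < best:                      # keep the change if sharper
                    phase, best = trial, e
    return focus(phase)
```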

