Two-Dimensional l1-Norm Minimization in SAR Image Reconstruction

2015
Vol 15 (7)
pp. 77-87
Author(s):  
A. Lazarov
D. Minchev

Abstract A nonconventional imaging algorithm based on compressed sensing and l1-norm minimization for Synthetic Aperture Radar (SAR) applications is discussed. A discrete model of the earth-surface relief and a mathematical model of SAR signal formation are described analytically. Sparse decomposition in the Fourier basis is used to pose the SAR image reconstruction problem. In contrast to the classical one-dimensional formulation of l1-norm minimization in SAR image reconstruction, which is applied to a vectorized image, the present work proposes a two-dimensional formulation of l1-norm minimization applied directly to the image matrix. Results of numerical experiments are presented to verify the correctness of the algorithm.
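A minimal sketch of the idea (not the paper's algorithm): an ISTA-style iteration that recovers a 2-D scene from undersampled Fourier-domain samples and applies the l1 proximal step (soft thresholding) to the whole image matrix at once, rather than to a vectorized slice. The sampling mask, step size, and threshold below are illustrative assumptions.

```python
import numpy as np

def ista_sar_2d(y, mask, n_iter=200, lam=0.05):
    """Recover a 2-D reflectivity map x from undersampled Fourier samples
    y (zero-filled array, same shape as x), assuming the scene is sparse.
    The soft-thresholding (l1 proximal) step acts on the full 2-D array."""
    x = np.zeros(y.shape)
    for _ in range(n_iter):
        # gradient step on the data-fidelity term ||mask * F(x) - y||^2
        residual = mask * np.fft.fft2(x, norm="ortho") - y
        x = x - np.real(np.fft.ifft2(residual, norm="ortho"))
        # two-dimensional soft thresholding: the l1 penalty acts on the image matrix
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    return x

# illustrative usage on a synthetic sparse scene
rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[rng.integers(0, 64, 10), rng.integers(0, 64, 10)] = 1.0
mask = rng.random((64, 64)) < 0.3          # keep roughly 30% of the frequencies
y = mask * np.fft.fft2(scene, norm="ortho")
recon = ista_sar_2d(y, mask)
```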

Author(s):  
Krzysztof Podgórski
Igor Rychlik

The envelope process is a useful analytical tool which is often used to study wave groups. Most research on the statistical properties of the envelope, and thus of wave groups, has focused on one-dimensional records. However, for marine applications an appropriate concept should be two-dimensional in space and variable in time. Although a generalization to higher dimensions was introduced by Adler (1978), little work has been done to investigate its features. Since the envelope is not defined uniquely and its properties depend on the chosen version, we discuss the definition of the envelope field for a two-dimensional random field evolving in time, which serves as a model of an irregular sea surface. Assuming a Gaussian distribution for this field, we derive sampling properties of the height of the envelope field as well as of its velocity. The latter is important because the velocity of the envelope is related to the rate at which energy is transported by propagating waves. We also study how the statistical distributions of wave groups differ from the corresponding ones for individual waves and how the choice of a version of the envelope affects its sampling distributions. Analyzing the latter problem helps in determining the version that is appropriate for the application at hand.
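For reference, the classical one-dimensional envelope that this work generalizes is the modulus of the analytic signal, i.e. the record plus i times its Hilbert transform; the choice of quadrature component is exactly the kind of "version" ambiguity discussed above. A minimal sketch with an illustrative narrow-band Gaussian record (all parameters are assumptions):

```python
import numpy as np
from scipy.signal import hilbert

# synthetic narrow-band Gaussian record built from random-phase components
rng = np.random.default_rng(1)
t = np.linspace(0.0, 200.0, 4000)
freqs = rng.normal(1.0, 0.08, 50)              # frequencies clustered near 1 rad/s
phases = rng.uniform(0.0, 2.0 * np.pi, 50)
record = np.cos(np.outer(t, freqs) + phases).sum(axis=1) / np.sqrt(50)

# classical envelope: modulus of the analytic signal record + i * H(record)
envelope = np.abs(hilbert(record))
```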


2021
Vol 1 (73)
pp. 59-61
Author(s):  
M. Ulyanov

The article considers the problem of reconstructing two-dimensional words from a given multiset of subwords, under the hypothesis that this multiset is generated by sliding a two-dimensional window of fixed size over an unknown two-dimensional word with shift 1. A combinatorial solution of this reconstruction problem is proposed, based on a two-fold application of the one-dimensional word reconstruction method that searches for Eulerian paths or cycles in a de Bruijn multidigraph. The efficiency of the method is discussed for the case of a square two-dimensional sliding window of large linear size.
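As an illustration of the one-dimensional building block that the method applies twice, the sketch below reconstructs a word from the multiset of its length-k subwords (window shift 1) by walking an Eulerian path in the de Bruijn multidigraph whose vertices are the (k-1)-grams and whose edges are the subwords; Hierholzer's algorithm is used, and a single Eulerian path is assumed to exist.

```python
from collections import defaultdict

def reconstruct_word(subwords):
    """Reconstruct a word from the multiset of its length-k subwords (shift 1)."""
    graph = defaultdict(list)                 # (k-1)-gram -> multiset of successors
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    for w in subwords:
        u, v = w[:-1], w[1:]
        graph[u].append(v)
        out_deg[u] += 1
        in_deg[v] += 1
    # start where out-degree exceeds in-degree (Eulerian path), else anywhere (cycle)
    start = next((v for v in graph if out_deg[v] - in_deg[v] == 1), next(iter(graph)))
    stack, path = [start], []
    while stack:                              # Hierholzer's algorithm
        v = stack[-1]
        if graph[v]:
            stack.append(graph[v].pop())
        else:
            path.append(stack.pop())
    path.reverse()
    return path[0] + "".join(v[-1] for v in path[1:])

# e.g. the length-3 subwords of "abacaba" yield a word with the same subword multiset
print(reconstruct_word(["aba", "bac", "aca", "cab", "aba"]))   # -> abacaba
```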


2017
Vol 24 (3)
pp. 609-614
Author(s):  
V. G. Kohn

A new definition of the effective aperture of the X-ray compound refractive lens (CRL) is proposed. Both linear (one-dimensional) and circular (two-dimensional) CRLs are considered. It is shown that for a strongly absorbing CRL the real aperture does not influence the focusing properties, and the effective aperture is determined by absorption. However, for transparent CRLs there are three ways to determine the effective aperture. In the papers by Kohn [(2002). JETP Lett. 76, 600–603; (2003). J. Exp. Theor. Phys. 97, 204–215; (2009). J. Surface Investig. 3, 358–364; (2012). J. Synchrotron Rad. 19, 84–92; Kohn et al. (2003). Opt. Commun. 216, 247–260; (2003). J. Phys. IV Fr, 104, 217–220], the FWHM of the X-ray beam intensity just behind the CRL was used. In the papers by Lengeler et al. [(1999). J. Synchrotron Rad. 6, 1153–1167; (1998). J. Appl. Phys. 84, 5855–5861], the maximum intensity value at the focus was used. Numerically, these two definitions differ by 50%. The new definition is based on the integral intensity of the beam behind the CRL over the real aperture. The integral intensity is the most physical value and is independent of distance. The new definition gives a value that is greater than that of the Kohn definition by 6% and less than that of the Lengeler definition by 41%. A new approximation for the aperture function of a two-dimensional CRL is proposed, which allows one to calculate the two-dimensional CRL through the one-dimensional CRL and to obtain an analytical solution for a complex system of many CRLs.
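A toy numerical illustration (not the paper's derivation): assume that for a strongly absorbing one-dimensional CRL the intensity just behind the lens is Gaussian, T(x) = exp(-x^2/a^2), with a set by absorption. Reading the effective aperture off the FWHM of this profile versus off its integral intensity (the width of an equal-energy top hat) already gives values differing by about 6%, in line with the gap quoted above.

```python
import numpy as np

a = 50e-6                                   # illustrative absorption scale, metres
x = np.linspace(-10 * a, 10 * a, 20001)
T = np.exp(-(x / a) ** 2)                   # assumed intensity transmission profile

fwhm_aperture = 2.0 * a * np.sqrt(np.log(2.0))           # FWHM-based reading
integral_aperture = T.sum() * (x[1] - x[0]) / T.max()    # equal-integral top-hat width

print(f"FWHM-based aperture     : {fwhm_aperture * 1e6:.1f} um")
print(f"integral-based aperture : {integral_aperture * 1e6:.1f} um")   # a * sqrt(pi)
```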


1992
Vol 2 (1)
pp. 1-28
Author(s):  
A. J. Power
Charles Wells

A type of higher-order two-dimensional sketch is defined which has models in suitable 2-categories. It has as special cases the ordinary sketches of Ehresmann and certain previously defined generalizations of one-dimensional sketches. These sketches allow the specification of constructions in 2-categories such as weighted limits, as well as higher-order constructions such as exponential objects and subobject classifiers, that cannot be sketched by limits and colimits. These sketches are designed to be the basis of a category-based methodology for the description of functional programming languages, complete with rewrite rules giving the operational semantics, that is independent of the usual specification methods based on formal languages and symbolic logic. A definition of ‘path grammar’, generalizing the usual notion of grammar, is given as a step towards this goal.


2021
Vol 13 (14)
pp. 2812
Author(s):  
Changyu Hu
Ling Wang
Daiyin Zhu
Otmar Loffeld

Sparse imaging relies on sparse representations of the target scenes to be imaged. Predefined dictionaries have long been used to transform radar target scenes into sparse domains, but the performance is limited by the artificially designed or existing transforms, e.g., the Fourier transform and the wavelet transform, which are not optimal for the target scenes to be sparsified. The dictionary learning (DL) technique has been exploited to obtain sparse transforms optimized jointly with the radar imaging problem. Nevertheless, DL is usually implemented in a patch-wise manner, which ignores the relationships between patches, so some feature information is omitted while the sparse transforms are learned. To capture the feature information of the target scenes more accurately, we adopt image patch groups (IPGs) instead of individual patches in DL. An IPG is constructed from patches with similar structures. DL is performed with respect to each IPG, which is termed group dictionary learning (GDL). The group-oriented sparse representation (GOSR) and the target image reconstruction are then jointly optimized by solving an l1-norm minimization problem exploiting the GOSR, during which a generalized Gaussian distribution hypothesis for the radar image reconstruction error is introduced to make the imaging problem tractable. Imaging results using real ISAR data show that the GDL-based imaging method outperforms the original DL-based imaging method in both imaging quality and computational speed.
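A minimal sketch of the patch-grouping step (not the paper's GOSR optimization): for a few reference patches of a real-valued magnitude image, the most similar patches are gathered into an image patch group and a small dictionary is learned per group, here with scikit-learn's MiniBatchDictionaryLearning as a generic stand-in for the dictionary update. Patch size, group size, and atom count are assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def group_dictionaries(image, patch=8, group_size=32, n_atoms=16):
    """Learn one small dictionary per image patch group (IPG)."""
    patches = extract_patches_2d(image, (patch, patch)).reshape(-1, patch * patch)
    patches = patches - patches.mean(axis=1, keepdims=True)
    dictionaries = []
    for ref in patches[:: max(1, len(patches) // 16)]:    # a few reference patches
        # IPG = the patches structurally closest to the reference patch
        dist = np.linalg.norm(patches - ref, axis=1)
        group = patches[np.argsort(dist)[:group_size]]
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0)
        dictionaries.append(dl.fit(group).components_)     # one dictionary per IPG
    return dictionaries
```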


2021
Vol 2021
pp. 1-13
Author(s):  
Xueyan Liu
Limei Zhang
Yining Zhang
Lishan Qiao

The recently emerging technique of sparse reconstruction has received much attention in the field of photoacoustic imaging (PAI). Compressed sensing (CS) has great potential for efficiently reconstructing high-quality PAI images from sparsely sampled signals. In this article, we propose a CS-based error-tolerant regularized smooth L0 (ReSL0) algorithm for PAI image reconstruction, which has the same computational advantages as the SL0 algorithm while having a higher degree of immunity to the inaccuracy caused by noise. To evaluate the performance of the ReSL0 algorithm, we reconstruct simulated datasets obtained from three phantoms. In addition, a real experimental dataset from an agar phantom is used to verify the effectiveness of the ReSL0 algorithm. Compared with three CS algorithms based on the L0 norm, L1 norm, and TV norm for signal recovery and image reconstruction, the experiments demonstrate that the ReSL0 algorithm provides a good balance between the quality and efficiency of the reconstructions. Furthermore, the PSNR of the image reconstructed by the introduced method is better than that of the other three methods. In particular, the method notably improves reconstruction quality in the case of noisy measurements.
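A sketch in the spirit of ReSL0 (the paper's exact parameterization is not reproduced): the standard smoothed-L0 iteration approximates the L0 norm with a Gaussian surrogate of decreasing width and projects back onto the measurement-consistent set; here the exact pseudo-inverse in that projection is replaced by a Tikhonov-regularized one so that noisy measurements are tolerated. The sigma schedule, step size, regularization weight, and inner-loop count are assumptions.

```python
import numpy as np

def resl0(A, y, sigma_decrease=0.7, mu=2.0, eps=1e-2, inner=3, sigma_min=1e-3):
    """Smoothed-L0 recovery of a sparse x from y = A @ x + noise."""
    # regularized pseudo-inverse: A^T (A A^T + eps*I)^-1, tolerant to noise
    A_reg_pinv = A.T @ np.linalg.inv(A @ A.T + eps * np.eye(A.shape[0]))
    x = A_reg_pinv @ y                      # minimum-norm-like initial guess
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            # gradient step on the Gaussian surrogate of ||x||_0
            x = x - mu * x * np.exp(-x**2 / (2.0 * sigma**2))
            # error-tolerant projection back towards {x : A x ~ y}
            x = x - A_reg_pinv @ (A @ x - y)
        sigma *= sigma_decrease
    return x

# illustrative usage on a random Gaussian sensing matrix and a sparse vector
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = resl0(A, y)
```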


2004
Vol 49 (11-12)
pp. 169-176
Author(s):  
D.R. Noguera
C. Picioreanu

In addition to the one-dimensional solutions of a multi-species benchmark problem (BM3) presented earlier (Rittmann et al., 2004), we offer solutions using two-dimensional (2-D) models. Both 2-D models (called here DN and CP) used numerical solutions to BM3 based on a mathematical framework similar to that of the one-dimensional AQUASIM-built models submitted by Wanner (model W) and Morgenroth (model M1), described in detail elsewhere (Rittmann et al., 2004). The CP model used differential equations to simulate substrate gradients and biomass growth and a particle-based approach to describe biomass division and biofilm growth. The DN model simulated substrate and biomass using a cellular automaton approach. For several conditions stipulated in BM3, the multidimensional models provided results very similar to those of the 1-D models in terms of bulk substrate concentrations and fluxes into the biofilm. The similarity can be attributed to the definition of BM3, which restricted the problem to a flat biofilm in contact with a completely mixed liquid phase and therefore without any salient features to be captured in a multidimensional domain. On the other hand, the models predicted significantly different accumulations of the different types of biomass, likely reflecting differences in the way the spreading of biomass within the biofilm is simulated.
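For readers unfamiliar with the cellular automaton approach used by the DN model, the sketch below shows a generic biomass-spreading rule of that kind (an illustration only, not the BM3 solution): when a grid cell exceeds a maximum biomass density, half of its biomass is pushed into the least-filled neighbouring cell, which is how the biofilm front advances on the grid.

```python
import numpy as np

def spread_biomass(biomass, max_density=1.0):
    """One cellular-automaton redistribution sweep over a 2-D biomass grid."""
    nx, ny = biomass.shape
    for i, j in zip(*np.where(biomass > max_density)):
        # candidate 4-neighbours inside the domain
        nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < nx and 0 <= j + dj < ny]
        # push half of the overfull cell's biomass into its least-filled neighbour
        k, l = min(nbrs, key=lambda p: biomass[p])
        biomass[k, l] += biomass[i, j] / 2.0
        biomass[i, j] /= 2.0
    return biomass
```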

