Fast sparse image reconstruction method in through-the-wall radars using limited memory Broyden–Fletcher–Goldfarb–Shanno algorithm

Author(s):  
Candida Mwisomba ◽  
Abdi T. Abdalla ◽  
Idrissa Amour ◽  
Florian Mkemwa ◽  
Baraka Maiseli

Abstract Compressed sensing allows recovery of image signals using a portion of the data – a technique that has revolutionized the field of through-the-wall radar imaging (TWRI). This technique can be accomplished through nonlinear methods, including convex programming and greedy iterative algorithms. However, such (nonlinear) methods increase the computational cost at the sensing and reconstruction stages, thus limiting the application of TWRI in delicate practical tasks (e.g. military operations and rescue missions) that demand fast response times. Motivated by this limitation, the current work introduces a numerical optimization algorithm, the Limited Memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) algorithm, into the TWRI framework to lower the image reconstruction time. LBFGS, a well-known quasi-Newton algorithm, has traditionally been applied to solve large-scale optimization problems. Despite its potential, this algorithm has not been extensively applied in TWRI. Therefore, guided by LBFGS and using the Euclidean norm, we employed the regularized least-squares method to solve the cost function of the TWRI problem. Simulation results show that our method reduces the computational time by 87% relative to the classical method, even as the number of targets or the data volume increases. Moreover, the results show that the proposed method remains robust when applied to noisy environments.
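
As a sketch of this kind of formulation (not the authors' code), a regularized least-squares cost with a Euclidean-norm penalty can be minimized with an off-the-shelf L-BFGS routine; the sensing matrix, measurements and regularization weight below are illustrative stand-ins:

```python
# Minimal sketch: regularized least-squares reconstruction with L-BFGS
# via SciPy. A, y and lam are stand-ins for the TWRI sensing matrix,
# radar returns and penalty weight; sizes are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, n = 200, 1000                  # measurements, image pixels
A = rng.standard_normal((m, n))   # stand-in sensing matrix
s_true = np.zeros(n)
s_true[rng.choice(n, 5, replace=False)] = 1.0   # sparse scene
y = A @ s_true + 0.01 * rng.standard_normal(m)
lam = 0.1

def cost_and_grad(s):
    r = A @ s - y
    f = 0.5 * r @ r + 0.5 * lam * s @ s   # Euclidean (Tikhonov) penalty
    g = A.T @ r + lam * s                 # analytic gradient keeps L-BFGS fast
    return f, g

res = minimize(cost_and_grad, np.zeros(n), jac=True, method="L-BFGS-B")
print(res.nit, np.linalg.norm(res.x - s_true))
```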

2017 ◽  
Vol 89 (4) ◽  
pp. 609-619 ◽  
Author(s):  
Witold Artur Klimczyk ◽  
Zdobyslaw Jan Goraj

Purpose This paper aims to address the issue of designing an aerodynamically robust empennage. Aircraft design optimization, often narrowed to the analysis of cruise conditions, does not take into account other flight phases (manoeuvres). These, especially in the unmanned air vehicle sector, can constitute a significant part of the whole flight. The empennage is the part of the aircraft with a crucial function during manoeuvres, so robustness must be considered to achieve the highest performance. Design/methodology/approach A methodology for robust wing design is presented. Surrogate modelling using kriging is used to reduce the cost of optimization with high-fidelity aerodynamic calculations. Varying flight conditions (angle of attack) are analysed to assess the robustness of a design for a particular mission. Two cases are compared: a global optimization of 11 parameters and an optimization divided into two consecutive sub-optimizations. Findings Surrogate modelling proves its usefulness in cutting computational time. Splitting the problem into sub-optimizations finds a better design at lower computational cost. Practical implications It is demonstrated how surrogate modelling can be used for the analysis of robustness, and why it is important to consider it. An intuitive split of wing design into airfoil and planform sub-optimizations brings promising savings in optimization cost. Originality/value The methodology presented in this paper can be used in various optimization problems, especially those involving expensive computations and requiring top-quality designs.
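
A minimal sketch of a kriging-based surrogate loop of this kind, assuming scikit-learn; the one-dimensional objective is a cheap stand-in for an expensive high-fidelity aerodynamic computation:

```python
# Kriging (Gaussian-process) surrogate optimization sketch. The
# objective f, bounds and infill rule are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                                   # hypothetical expensive objective
    return np.sin(3 * x) + 0.5 * x**2

X = np.linspace(-2, 2, 6).reshape(-1, 1)    # small initial design of experiments
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(10):                         # surrogate-based refinement loop
    gp.fit(X, y)
    cand = np.linspace(-2, 2, 401).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    x_new = cand[np.argmin(mu - 1.96 * sd)]  # lower-confidence-bound infill
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new)[0])

print(X[np.argmin(y)], y.min())             # best design found so far
```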


2014 ◽  
Vol 118 (1204) ◽  
pp. 601-624
Author(s):  
G. Guglieri ◽  
P. Marguerettaz ◽  
G. Simioni

Abstract The present work evaluates the performance of different optimisation techniques on a parameter identification problem of aeronautical interest. In particular, the focus is on the classical Least Squares (LS) and Maximum Likelihood (ML) methods and on the CMA-ES (Covariance Matrix Adaptation Evolution Strategy), DE (Differential Evolution), GA (Genetic Algorithm) and PSO (Particle Swarm Optimisation) meta-heuristic methods. The test problem is the reconstruction, from flight test data, of the aerodynamic parameters of an external store jettisoned from a helicopter. Different initial conditions and the presence of measurement noise are considered. This case is representative of a class of problems that are difficult to solve because of nonlinearity, ill-conditioning, multidimensionality, non-separability and fitness-function dispersion. Only reference algorithm implementations found in the literature are used. The performance of each algorithm is defined in terms of fitness function value, sum of absolute errors of the estimated coefficients, computational time and number of function evaluations. The results show the efficiency of CMA-ES in finding the best estimates at the least computational cost. Moreover, the tests reveal that traditional methods depend heavily on problem characteristics and lose accuracy as the number of unknowns increases.
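
As an illustration of this style of meta-heuristic parameter identification, here is a sketch using SciPy's differential evolution (one of the methods compared); the toy trajectory model and noisy data are stand-ins, not the flight-test problem:

```python
# Differential evolution fitting stand-in "store" model parameters to
# noisy synthetic data by minimizing a least-squares fitness function.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0, 2, 50)
true = np.array([1.2, 0.3])                 # hypothetical coefficients

def model(p):                               # toy trajectory depending on p
    return p[0] * t - p[1] * t**2

rng = np.random.default_rng(1)
data = model(true) + 0.05 * rng.standard_normal(t.size)  # noisy "flight data"

def fitness(p):                             # least-squares cost to minimize
    return np.sum((model(p) - data) ** 2)

res = differential_evolution(fitness, bounds=[(0, 5), (0, 5)], seed=1)
print(res.x, res.fun)                       # estimated coefficients, residual
```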


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Ziang Lei

3D reconstruction techniques for animated images and animation techniques for faces are important research topics in computer graphics and related fields. Traditional 3D reconstruction techniques for animated images rely mainly on expensive 3D scanning equipment and extensive, time-consuming manual postprocessing, and they require the scanned subject to remain in a fixed pose for a considerable period. In recent years, the growth of large-scale computing power in computer hardware, especially distributed computing, has made real-time and efficient solutions possible. In this paper, we propose a 3D reconstruction method for multi-view animated images based on Poisson's equation theory. Calibration theory is used to calibrate the multi-view animated images and obtain the internal and external parameters of the camera calibration module; feature points are extracted from the animated images of each viewpoint using a corner detection operator; the extracted feature points are then matched and corrected using the least median of squares method, completing the 3D reconstruction of the multi-view animated images. The experimental results show that the proposed method obtains the 3D reconstruction results of multi-view animated images quickly and accurately and offers real-time performance and reliability.
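
A sketch of the corner-detection and least-median-of-squares matching steps, assuming OpenCV; the image paths, ORB detector and brute-force matcher are illustrative choices, not the authors' pipeline:

```python
# Feature extraction, matching, and least-median-of-squares outlier
# rejection between two hypothetical viewpoints.
import cv2
import numpy as np

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical views
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()                      # FAST-corner-based detector
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# Least-median-of-squares rejects mismatched feature points before any
# 3D structure is triangulated from the calibrated views.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)
print(int(inlier_mask.sum()), "inliers of", len(matches))
```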


Author(s):  
Quanhu Zhang ◽  
Weihua Hui ◽  
Feng Li ◽  
Zhongmao Gu ◽  
Shaojun Qian

The Tomographic Gamma Scanning (TGS) method is one of the most advanced non-destructive assay (NDA) methods. For measuring heterogeneously distributed media of medium and high density, however, three main problems remain: experimental calibration of the TGS detection efficiency is difficult and complicated because of the large voxels; the "point-to-point" and average models cannot reconstruct high-density samples accurately in transmission imaging; and the computational cost of the correction factors in emission imaging is very large. Calibrating the detection efficiency with the Monte Carlo method greatly shortens the calibration cycle. A new Monte Carlo statistical iteration method for TGS transmission image reconstruction, based on MC calculation and numerical analysis, is presented, which makes it possible to measure high-density samples. Division and pre-calculation methods are used in reconstructing the TGS emission image, which saves a great deal of computation time and provides a fast reconstruction algorithm for the emission image. When the above methods were applied to a TGS experimental device, the relative errors between the experimental and MC-calibrated efficiencies were less than 5%; the relative errors between the reconstructed values and the reference values in the transmission image were less than 4%; and the corrected experimental results deviated from the standard values by 7%. It took no more than one hour to complete the reconstruction of the TGS emission image for a sample model with 3×3×3 voxels on a 2.0 GHz computer.
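
For orientation, the transmission model underlying such reconstructions is the Beer–Lambert line integral. Here is a minimal sketch (not the authors' Monte Carlo statistical iteration) that linearizes it with a logarithm and solves for per-voxel attenuation by nonnegative least squares; the geometry is hypothetical:

```python
# Transmission-imaging sketch: recover per-voxel attenuation mu from
# gamma intensities I = I0 * exp(-L @ mu), where L holds chord lengths.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_vox, n_rays = 9, 24                       # one 3x3 slice, 24 scan positions
L = rng.uniform(0, 1, (n_rays, n_vox))      # stand-in chord lengths (cm)
mu_true = rng.uniform(0.1, 0.8, n_vox)      # true attenuation (1/cm)

I0 = 1e5
I = I0 * np.exp(-L @ mu_true)               # transmitted gamma intensities

b = np.log(I0 / I)                          # linearized measurements
mu_est, _ = nnls(L, b)                      # attenuation must be nonnegative
print(np.max(np.abs(mu_est - mu_true)))     # near-exact in this noiseless toy
```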


Author(s):  
Emily Earl ◽  
Hadi Mohammadi

Finite element analysis is a well-established computational tool that can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, currently available finite element models do not adequately represent the leaflet tissue. One way of addressing this issue is to implement computationally expensive finite element models characterized by precise constitutive models, including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse-mesh finite element models to provide accuracy comparable to that of fine-mesh models while maintaining a relatively low computational cost. The method reduces the computational expense required to solve the linear and nonlinear constitutive models commonly used in heart valve mechanics simulations while continuing to account for both large and infinitesimal deformations. This continuum model is developed from a least-squares procedure coupled with the finite difference method, under the assumption that the components of the strain tensor are available at all nodes of the finite element mesh. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time than currently available commercial finite element packages such as ANSYS and/or ABAQUS.
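
A minimal sketch of the underlying idea, under simplifying assumptions (one dimension, a quadratic basis; not the authors' formulation): nodal strains from a coarse mesh are fitted by least squares and then evaluated on a finer grid:

```python
# Least-squares enhancement of coarse-mesh nodal strain values.
import numpy as np

x_coarse = np.linspace(0, 1, 6)             # coarse-mesh node positions
eps = 0.3 * x_coarse**2 + 0.1 * x_coarse    # strain component at the nodes

# Quadratic least-squares fit: Vandermonde columns are [x^2, x, 1].
V = np.vander(x_coarse, 3)
coef, *_ = np.linalg.lstsq(V, eps, rcond=None)

x_fine = np.linspace(0, 1, 51)              # "fine mesh" evaluation points
eps_fine = np.vander(x_fine, 3) @ coef      # enhanced strain field
print(coef, eps_fine[:3])
```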


Author(s):  
Martin Buhmann ◽  
Dirk Siegel

Abstract We consider Broyden class updates for large-scale optimization problems in n dimensions, restricting attention to the case when the initial second derivative approximation is the identity matrix. Under this assumption we present an implementation of the Broyden class based on a coordinate transformation on each iteration. It requires only $$2nk + O(k^{2}) + O(n)$$ multiplications on the kth iteration and stores $$nK + O(K^{2}) + O(n)$$ numbers, where K is the total number of iterations. We investigate a modification of this algorithm by a scaling approach and show a substantial improvement in performance over the BFGS method. We also study several adaptations of the new implementation to the limited memory situation, presenting algorithms that work with a fixed amount of storage independent of the number of iterations. We show that one such algorithm retains the property of quadratic termination. The practical performance of the new methods is compared with that of Nocedal's (Math Comput 35:773–782, 1980) method, which is considered the benchmark in limited memory algorithms. The tests show that the new algorithms can be significantly more efficient than Nocedal's method. Finally, we show how a scaling technique can significantly improve both Nocedal's method and the new generalized conjugate gradient algorithm.
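
For reference, the BFGS member of the Broyden class (the textbook formula, not the paper's transformed implementation) updates the inverse Hessian approximation as

$$H_{k+1} = \Bigl(I - \frac{s_k y_k^{\top}}{y_k^{\top} s_k}\Bigr) H_k \Bigl(I - \frac{y_k s_k^{\top}}{y_k^{\top} s_k}\Bigr) + \frac{s_k s_k^{\top}}{y_k^{\top} s_k}, \qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k).$$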


2014 ◽  
Vol 530-531 ◽  
pp. 367-371
Author(s):  
Ting Feng Li ◽  
Yu Ting Zhang ◽  
Sheng Hui Yan

In this paper, a modified limited-memory BFGS method for solving large-scale unconstrained optimization problems is proposed. A remarkable feature of the proposed method is that it possesses a global convergence property even without a convexity assumption on the objective function. Implementations of the algorithm on the CUTE test problems are reported and suggest that a slight improvement has been achieved.
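
A minimal sketch of the classical two-loop recursion (Nocedal, 1980) on which limited-memory BFGS variants such as this one build; the convex test objective, memory size m and fixed step length are simplifying assumptions (a practical code would use a line search):

```python
import numpy as np

def f_grad(x):
    """Smooth convex test objective 0.5*||x||^2 + sum(sin(x)), gradient."""
    return 0.5 * x @ x + np.sin(x).sum(), x + np.cos(x)

def two_loop(g, S, Y):
    """Apply the implicit inverse-Hessian built from (s, y) pairs to g."""
    q, alphas = g.copy(), []
    for s, y in zip(reversed(S), reversed(Y)):        # newest pair first
        a = (s @ q) / (y @ s); alphas.append(a); q -= a * y
    if S:                                             # H0 = gamma * I scaling
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):  # oldest pair first
        q += (a - (y @ q) / (y @ s)) * s
    return q

x, m, S, Y = np.full(50, 2.0), 5, [], []
f, g = f_grad(x)
for _ in range(100):
    d = -two_loop(g, S, Y)                            # quasi-Newton direction
    x_new = x + 0.5 * d                               # fixed damped step
    f_new, g_new = f_grad(x_new)
    s_k, y_k = x_new - x, g_new - g
    if s_k @ y_k > 1e-12:                             # curvature guard
        S.append(s_k); Y.append(y_k)
        S, Y = S[-m:], Y[-m:]                         # keep only the last m pairs
    x, f, g = x_new, f_new, g_new
print(f, np.linalg.norm(g))                           # gradient norm near zero
```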


2021 ◽  
Author(s):  
Ruilin Li ◽  
Christopher Chang ◽  
Yosuke Tanigawa ◽  
Balasubramanian Narasimhan ◽  
Trevor Hastie ◽  
...  

Abstract We develop two efficient solvers for optimization problems arising from large-scale regularized regressions on millions of genetic variants sequenced from hundreds of thousands of individuals. These genetic variants are encoded by values in the set {0, 1, 2, NA}. We take advantage of this fact and use two bits to represent each entry in a genetic matrix, which reduces the memory requirement by a factor of 32 compared to a double-precision floating point representation. Using this representation, we implemented an iteratively reweighted least squares algorithm to solve Lasso regressions on genetic matrices, which we name snpnet-2.0. When the dataset contains many rare variants, the predictors can be encoded in a sparse matrix. We utilize the sparsity of the predictor matrix to further reduce the memory requirement and the computation time. Our sparse genetic matrix implementation uses both the compact 2-bit representation and a simplified version of the compressed sparse block format, so that matrix-vector multiplications can be effectively parallelized on multiple CPU cores. To demonstrate the effectiveness of this representation, we implement an accelerated proximal gradient method to solve group Lasso on these sparse genetic matrices. This solver is named sparse-snpnet and will also be included as part of the snpnet R package. Our implementation is able to solve group Lasso problems on sparse genetic matrices with more than 1,000,000 columns and almost 100,000 rows within 10 minutes, using less than 32 GB of memory.
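
A minimal sketch of the 2-bit encoding idea, packing four {0, 1, 2, NA} entries per byte (a factor-of-32 saving over float64); the NA code of 3 and the byte layout are illustrative assumptions, not necessarily snpnet-2.0's:

```python
import numpy as np

rng = np.random.default_rng(3)
geno = rng.integers(0, 4, size=20).astype(np.uint8)  # 3 encodes NA

def pack(g):
    """Pack four 2-bit genotype codes into each byte."""
    if g.size % 4:                           # pad to a multiple of 4
        g = np.concatenate([g, np.zeros(4 - g.size % 4, np.uint8)])
    g = g.reshape(-1, 4)
    return g[:, 0] | (g[:, 1] << 2) | (g[:, 2] << 4) | (g[:, 3] << 6)

def unpack(p, n):
    """Recover the first n genotype codes from the packed bytes."""
    out = np.stack([(p >> k) & 3 for k in (0, 2, 4, 6)], axis=1)
    return out.reshape(-1)[:n]

packed = pack(geno)
assert np.array_equal(unpack(packed, geno.size), geno)
print(geno.size, "entries in", packed.nbytes, "bytes")
```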


2021 ◽  
Author(s):  
Changyu Deng ◽  
Yizhou Wang ◽  
Can Qin ◽  
Wei Lu

Abstract Topology optimization, which optimally distributes material in a given domain, requires gradient-free optimizers to solve highly complicated problems. However, with hundreds of design variables or more involved, solving such problems would require millions of Finite Element Method (FEM) calculations, whose computational cost is huge and impractical. Here we report a Self-directed Online Learning Optimization (SOLO) that integrates a Deep Neural Network (DNN) with FEM calculations. The DNN learns and substitutes the objective as a function of the design variables. A small amount of training data is generated dynamically based on the DNN's prediction of the global optimum. The DNN adapts to the new training data and gives better predictions in the region of interest until convergence. Our algorithm was tested on compliance minimization problems and fluid-structure optimization problems. It reduced the computational time by two to five orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments. This approach enables solving large multi-dimensional optimization problems.
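
A minimal sketch of a self-directed online learning loop of this kind, with a small scikit-learn MLP standing in for the DNN and a cheap analytic function standing in for the FEM solver (both are assumptions, not the authors' setup):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fem(x):                                   # stand-in for an FEM evaluation
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(4)
dim = 4
X = rng.uniform(0, 1, (20, dim))              # initial designs
y = fem(X)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
for _ in range(8):                            # self-directed refinement
    net.fit(X, y)                             # surrogate learns the objective
    cand = rng.uniform(0, 1, (2000, dim))     # cheap candidate pool
    best = cand[np.argsort(net.predict(cand))[:5]]  # surrogate's predicted optima
    X = np.vstack([X, best])                  # query the "FEM" only there
    y = np.append(y, fem(best))

i = np.argmin(y)
print(X[i], y[i])                             # best design found
```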


2005 ◽  
Author(s):  
Manuchehr Soleimani

Electrical resistance tomography (ERT) has great potential for multi-phase flow monitoring. Image reconstruction in ERT is computationally costly, so online monitoring is a difficult task. Linear reconstruction methods are currently used as fast methods, but image reconstruction is a nonlinear inverse problem and linear methods are not sufficient in many cases. The application of a recently proposed non-iterative inversion method for two-phase materials has been studied. The method is based on the monotonicity property of the resistance matrix in ERT and requires modest computational cost. In this paper we explain the application of this inversion method and demonstrate its capabilities and drawbacks using 2D test examples. A major contribution of this paper is to optimize the software program for the inversion (by doing most of the computations offline), so that it can be used for online applications.
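
A rough sketch of the monotonicity sign test that such methods build on (the random stand-in matrices replace FEM-computed resistance matrices; this is not the optimized solver described above): a voxel is flagged when the difference between its precomputed test matrix and the measured resistance matrix passes a positive-semidefiniteness check.

```python
import numpy as np

rng = np.random.default_rng(5)
n_elec, n_vox = 8, 16

def spd():
    """Stand-in symmetric positive-definite resistance matrix."""
    a = rng.standard_normal((n_elec, n_elec))
    return a @ a.T

R_meas = spd()                               # measured resistance matrix
R_test = [spd() for _ in range(n_vox)]       # precomputed offline, per voxel

# Eigenvalue sign test: flag voxel k if R_test[k] - R_meas is PSD.
flags = [np.all(np.linalg.eigvalsh(Rk - R_meas) >= -1e-9) for Rk in R_test]
print([k for k, f in enumerate(flags) if f])
```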

