MODERN METHODS OF BUNDLE ADJUSTMENT ON THE GPU

Author(s):  
R. Hänsch ◽  
I. Drude ◽  
O. Hellwich

The task of computing 3D reconstructions from large amounts of data has become an active field of research in recent years. Based on an initial estimate provided by structure from motion, bundle adjustment seeks to find a solution that is optimal for all cameras and 3D points. The corresponding nonlinear optimization problem is usually solved by the Levenberg-Marquardt algorithm combined with conjugate gradient descent. While many adaptations and extensions to the classical bundle adjustment approach have been proposed, only a few works consider the acceleration potential of GPU systems. This paper explores the time and space savings attainable by fitting the implementation strategy to the requirements of realizing a bundler on heterogeneous CPU-GPU systems. Instead of focusing on the standard approach of Levenberg-Marquardt optimization alone, nonlinear conjugate gradient descent and alternating resection-intersection are studied as two alternatives. The experiments show that alternating resection-intersection in particular reaches low error rates very fast, but converges to larger error rates than Levenberg-Marquardt. PBA, one of the current state-of-the-art bundlers, converges more slowly in 50% of the test cases and needs 1.5-2 times more memory than the Levenberg-Marquardt implementation.
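
The Levenberg-Marquardt scheme mentioned above can be sketched for a generic small least-squares problem. The following Python fragment is an illustrative damped Gauss-Newton loop on a synthetic exponential fit; the function names and data are invented for the example and it is not the paper's GPU bundler.

```python
# Minimal Levenberg-Marquardt sketch for a generic least-squares problem
# (illustrative only; a real bundler exploits the sparse block structure
# of the bundle-adjustment Jacobian instead of dense normal equations).
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, iters=50, lam=1e-3):
    """Minimise 0.5*||residual(x)||^2 with adaptively damped steps."""
    x = x0.astype(float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Damped normal equations: (J^T J + lam*I) dx = -J^T r
        A = J.T @ J + lam * np.eye(x.size)
        dx = np.linalg.solve(A, -J.T @ r)
        if np.linalg.norm(residual(x + dx)) < np.linalg.norm(r):
            x, lam = x + dx, lam * 0.5   # accept step, trust the model more
        else:
            lam *= 10.0                  # reject step, increase damping
    return x

# Fit y = a*exp(b*t) to noiseless synthetic data.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, np.array([1.0, 0.0]))
```

The adaptive damping parameter interpolates between Gauss-Newton (small `lam`) and gradient descent (large `lam`), which is what makes the method robust far from the optimum.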


2010 ◽  
Vol 163-167 ◽  
pp. 2756-2760 ◽  
Author(s):  
Goh Lyn Dee ◽  
Norhisham Bakhary ◽  
Azlan Abdul Rahman ◽  
Baderul Hisham Ahmad

This paper investigates the performance of Artificial Neural Network (ANN) learning algorithms for vibration-based damage detection. The capabilities of six different learning algorithms in detecting damage are studied and their performances are compared. The algorithms are the Levenberg-Marquardt (LM), Resilient Backpropagation (RP), Scaled Conjugate Gradient (SCG), Conjugate Gradient with Powell-Beale Restarts (CGB), Polak-Ribière Conjugate Gradient (CGP) and Fletcher-Reeves Conjugate Gradient (CGF) algorithms. The performances of these algorithms are assessed based on their generalisation capability in relating the vibration parameters (frequencies and mode shapes) to damage locations and severities under various numbers of input and output variables. The results show that the Levenberg-Marquardt algorithm provides the best generalisation performance.


Filomat ◽  
2021 ◽  
Vol 35 (3) ◽  
pp. 737-758
Author(s):  
Yue Hao ◽  
Shouqiang Du ◽  
Yuanyuan Chen

In this paper, we consider a method for solving finite minimax problems. By using the exponential penalty function to smooth the finite minimax problem, a new three-term nonlinear conjugate gradient method is proposed that generates a sufficient descent direction at each iteration. Under standard assumptions, the global convergence of the proposed three-term nonlinear conjugate gradient method with Armijo-type line search is established. Numerical results are given to illustrate that the proposed method can efficiently solve several kinds of optimization problems, including the finite minimax problem, the finite minimax problem with tensor structure, the constrained optimization problem and the constrained optimization problem with tensor structure.
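
As a point of reference, a classical nonlinear conjugate gradient iteration with Armijo backtracking can be sketched as follows. This is a generic textbook Polak-Ribière variant on a small convex quadratic, not the three-term method proposed in the paper; all names are chosen for the example.

```python
# Polak-Ribiere nonlinear conjugate gradient with Armijo line search
# (generic sketch, not the paper's three-term method).
import numpy as np

def armijo(f, g, x, d, alpha=1.0, beta=0.5, sigma=1e-4):
    """Backtrack until the Armijo sufficient-decrease condition holds."""
    fx, slope = f(x), g(x) @ d
    while f(x + alpha * d) > fx + sigma * alpha * slope:
        alpha *= beta
    return alpha

def nlcg(f, g, x0, iters=200, tol=1e-8):
    x = x0.astype(float)
    grad = g(x)
    d = -grad
    for _ in range(iters):
        if np.linalg.norm(grad) < tol:
            break
        if grad @ d >= 0:          # safeguard: restart with steepest descent
            d = -grad
        alpha = armijo(f, g, x, d)
        x_new = x + alpha * d
        grad_new = g(x_new)
        # Polak-Ribiere coefficient, clipped at zero (automatic restart)
        beta_pr = max(0.0, grad_new @ (grad_new - grad) / (grad @ grad))
        d = -grad_new + beta_pr * d
        x, grad = x_new, grad_new
    return x

# Minimise the convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
g = lambda x: A @ x - b
x_star = nlcg(f, g, np.zeros(2))
```

Clipping the Polak-Ribière coefficient at zero is a standard restart device that keeps the search direction a descent direction in practice.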


Author(s):  
V Baiju ◽  
C Muraleedharan

This article analyses the adsorbent bed in an adsorption refrigeration system. After establishing the similarity to the compression process in a vapour compression system, thermodynamic analysis of the adsorbent bed in a vapour adsorption system is carried out to evaluate the performance index, exergy destruction, uptake efficiency and exergetic efficiency of the adsorbent bed in a typical solar adsorption refrigeration system. This article also presents isothermal and isobaric modelling of methanol on highly porous activated carbon. The experimental data have been fitted with the Dubinin–Astakhov and Dubinin–Radushkevich equations. The isosteric heat of adsorption is also extracted from the present experimental data. The use of an artificial neural network model is proposed to predict the performance of the adsorbent bed. The back propagation algorithm with three different variants, namely scaled conjugate gradient, Polak–Ribière conjugate gradient and Levenberg–Marquardt, and a logistic sigmoid transfer function are used, so that the best approach can be found. After training, it is found that the Levenberg–Marquardt algorithm with 14 neurons is the most suitable for modelling the adsorbent bed in a solar adsorption refrigeration system. The artificial neural network predictions of the performance parameters agree well with experimental values, with correlation coefficient (R²) values close to 1 and a maximum percentage error of less than 5%. The root mean square and covariance values are also found to be within acceptable limits.


Author(s):  
Salim Lahmiri

This chapter focuses on comparing the forecasting ability of the backpropagation neural network (BPNN) and the nonlinear autoregressive moving average with exogenous inputs (NARX) network trained with different algorithms; namely quasi-Newton (Broyden-Fletcher-Goldfarb-Shanno, BFGS), conjugate gradient (Fletcher-Reeves update, Polak-Ribière update, Powell-Beale restart), and the Levenberg-Marquardt algorithm. Three synthetic signals are generated to conduct experiments. The simulation results showed that, in general, the NARX network, which is a dynamic system, outperforms the popular BPNN. In addition, conjugate gradient algorithms provide better prediction accuracy than the Levenberg-Marquardt algorithm widely used in the literature when modeling the exponential signal. However, LM performed best when used for forecasting the Moroccan and South African stock price indices under both the BPNN and NARX systems.


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Tatiana Novikova ◽  
Pavel Bulkin

Abstract. The inverse problem of Mueller polarimetry is defined as the determination of geometrical features of metrological structures (e.g. 1D diffraction gratings) from their experimental Mueller polarimetric signatures. This nonlinear problem was treated as an optimization problem in a multi-parametric space using the least squares criterion and the Levenberg–Marquardt algorithm. We demonstrated that solving the optimization problem with experimental Mueller matrix spectra taken in a conical diffraction configuration helps find a global minimum and results in smaller variance values of the reconstructed dimensions of the grating profile.


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Mohammad Subhi Al-batah ◽  
Mutasem Sh. Alkhasawneh ◽  
Lea Tien Tay ◽  
Umi Kalthum Ngah ◽  
Habibah Hj Lateh ◽  
...  

Landslides are one of the dangerous natural phenomena that hinder development on Penang Island, Malaysia. Therefore, finding a reliable method to predict the occurrence of landslides remains a topic of research interest. In this paper, two artificial neural network models, namely, the Multilayer Perceptron (MLP) and the Cascade Forward Neural Network (CFNN), are introduced to predict the landslide hazard map of Penang Island. These two models were tested and compared using eleven learning algorithms, namely, Levenberg-Marquardt, Broyden-Fletcher-Goldfarb-Shanno, Resilient Back Propagation, Scaled Conjugate Gradient, Conjugate Gradient with Powell-Beale restarts, Conjugate Gradient with Fletcher-Reeves updates, Conjugate Gradient with Polak-Ribière updates, One Step Secant, Gradient Descent, Gradient Descent with Momentum and Adaptive Learning Rate, and Gradient Descent with Momentum. Often, the performance of landslide prediction depends on the input factors besides the prediction method. In this research work, 14 input factors were used. The prediction accuracies of the networks were verified using the Area under the Curve method for the Receiver Operating Characteristics. The results indicated that the best prediction accuracy of 82.89% was achieved using the CFNN network with the Levenberg-Marquardt learning algorithm for the training data set, and 81.62% for the testing data set.


2020 ◽  
Vol 20 (1) ◽  
pp. 20-33
Author(s):  
C. K. Arthur ◽  
V. A. Temeng ◽  
Y. Y. Ziggah

Abstract. Backpropagation Neural Network (BPNN) is an artificial intelligence technique that has seen several applications in many fields of science and engineering. It is well known that the critical task in developing an effective and accurate BPNN model depends on an appropriate training algorithm, transfer function, number of hidden layers and number of hidden neurons. Despite the numerous factors contributing to the development of a BPNN model, the training algorithm is key to achieving optimum BPNN model performance. This study is focused on evaluating and comparing the performance of 13 training algorithms in BPNN for the prediction of blast-induced ground vibration. The training algorithms considered include: Levenberg-Marquardt, Bayesian Regularisation, Broyden-Fletcher-Goldfarb-Shanno (BFGS) Quasi-Newton, Resilient Backpropagation, Scaled Conjugate Gradient, Conjugate Gradient with Powell/Beale Restarts, Fletcher-Powell Conjugate Gradient, Polak-Ribière Conjugate Gradient, One Step Secant, Gradient Descent with Adaptive Learning Rate, Gradient Descent with Momentum, Gradient Descent, and Gradient Descent with Momentum and Adaptive Learning Rate. Using ranking values for the performance indicators of Mean Squared Error (MSE), correlation coefficient (R), number of training epochs (iterations) and the duration for convergence, the performance of the various training algorithms used to build the BPNN models was evaluated. The obtained overall ranking results showed that the BFGS Quasi-Newton algorithm outperformed the other training algorithms, even though the Levenberg-Marquardt algorithm was found to have the best computational speed and utilised the smallest number of epochs.
Keywords: Artificial Intelligence, Blast-induced Ground Vibration, Backpropagation Training Algorithms
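
The kind of comparison this study performs can be illustrated on a toy scale: train the same tiny one-hidden-layer network with two of the listed algorithms (plain gradient descent and gradient descent with momentum) and compare the final MSE. The data, architecture, and hyperparameters below are synthetic stand-ins, not the blast-vibration dataset or the study's MATLAB setup.

```python
# Toy comparison of two backpropagation training algorithms on the same
# network and data (synthetic stand-in for the study's setup).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]            # smooth synthetic target

def forward(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)                     # one hidden layer, tanh
    return h, h @ W2 + b2

def grads(params, X, y):
    W1, b1, W2, b2 = params
    h, pred = forward(params, X)
    e = 2.0 * (pred - y) / len(y)                # dMSE/dpred
    dh = np.outer(e, W2) * (1.0 - h ** 2)        # backprop through tanh
    return [X.T @ dh, dh.sum(axis=0), h.T @ e, np.atleast_1d(e.sum())]

def train(momentum, lr=0.05, epochs=3000):
    r = np.random.default_rng(42)                # identical init for both runs
    params = [r.normal(scale=0.5, size=s)
              for s in ((2, 8), (8,), (8,), (1,))]
    vel = [np.zeros_like(p) for p in params]
    for _ in range(epochs):
        g = grads(params, X, y)
        for i in range(4):
            vel[i] = momentum * vel[i] - lr * g[i]
            params[i] = params[i] + vel[i]
    _, pred = forward(params, X)
    return np.mean((pred - y) ** 2)

mse_gd = train(momentum=0.0)                     # plain gradient descent
mse_mom = train(momentum=0.7)                    # with momentum
```

Ranking the final MSE (and, in the study, also R, epoch count and wall-clock time) across all candidate algorithms gives the overall ordering the authors report.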


2020 ◽  
Vol 71 (6) ◽  
pp. 66-74
Author(s):  
Younis M. Younis ◽  
Salman H. Abbas ◽  
Farqad T. Najim ◽  
Firas Hashim Kamar ◽  
Gheorghe Nechifor

Artificial neural network (ANN) and multiple linear regression (MLR) models were compared for predicting the heat of combustion, and the gross and net heat values, of a diesel fuel engine, based on the chemical composition of the diesel fuel. One hundred and fifty samples of Iraqi diesel provided data from chromatographic analysis. Eight parameters were applied as inputs in order to predict the gross and net heat of combustion of the diesel fuel. A trial-and-error method was used to determine the shape of the individual ANN. The results showed that the prediction accuracy of the ANN model was greater than that of the MLR model in predicting the gross heat value. The best neural network for predicting the gross heating value was a back-propagation network (8-8-1), using the Levenberg–Marquardt algorithm for the second step of network training, with R = 0.98502 for the test data. In the same way, the best neural network for predicting the net heating value was a back-propagation network (8-5-1), using the Levenberg–Marquardt algorithm for the second step of network training, with R = 0.95112 for the test data.
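
The MLR baseline used for comparison in studies like this one amounts to an ordinary least-squares fit with an intercept. The sketch below uses synthetic stand-in data with eight predictors (the diesel-fuel dataset is not reproduced here), and reports the correlation coefficient R between fitted and observed values.

```python
# Multiple linear regression baseline via ordinary least squares
# (synthetic stand-in data; coefficients and intercept are invented).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 8))                    # 150 samples, 8 predictors
true_w = np.arange(1.0, 9.0)
y = X @ true_w + 3.0                             # noiseless linear target

A = np.column_stack([X, np.ones(len(X))])        # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r = np.corrcoef(A @ coef, y)[0, 1]               # correlation coefficient R
```

On real, noisy data R falls below 1, and the gap between this linear baseline and the ANN reflects the nonlinearity the network is able to capture.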

