Unified Mathematical Formulation of Monogenic Phase Congruency

Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3080
Author(s):  
Manuel G. Forero ◽  
Carlos A. Jacanamejoy

Phase congruency is a technique that has been used for edge, corner and symmetry detection. Its implementation by means of monogenic filters has reduced its computational cost. Several such implementations have been published, but they do not share a common notation, which makes them difficult to understand. This paper therefore presents a unified mathematical formulation that allows a general understanding of the concepts behind monogenic phase congruency and establishes criteria for its use. A new protocol for parameter tuning is also described, allowing better practical results to be obtained with this technique. Examples are presented that illustrate the effect of the parameter tuning and support the validity of the proposed criteria.
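For orientation, a minimal sketch of the quantities that a monogenic phase congruency formulation typically involves, following the monogenic signal of Felsberg and Sommer and a Kovesi-style congruency measure; the notation, weighting terms and sign conventions below are assumptions and may differ from the paper's unified formulation:

```latex
% f_s: image band-pass filtered (e.g. log-Gabor) at scale s
% (R_1 f_s, R_2 f_s): Riesz transform pair, with frequency response
% proportional to (i u / |u|, i v / |u|) (sign convention varies)
\begin{align}
  A_s(\mathbf{x}) &= \sqrt{f_s^2 + (R_1 f_s)^2 + (R_2 f_s)^2}
    && \text{local amplitude at scale } s, \\
  \varphi_s(\mathbf{x}) &= \operatorname{atan2}\!\Big(\sqrt{(R_1 f_s)^2 + (R_2 f_s)^2},\; f_s\Big)
    && \text{local phase}, \\
  \theta_s(\mathbf{x}) &= \operatorname{atan2}\big(R_2 f_s,\; R_1 f_s\big)
    && \text{local orientation}, \\
  PC(\mathbf{x}) &= \frac{\sum_s W(\mathbf{x})\,\big\lfloor A_s(\mathbf{x})\,\Delta\Phi_s(\mathbf{x}) - T \big\rfloor_{+}}
      {\sum_s A_s(\mathbf{x}) + \varepsilon}
    && \text{phase congruency},
\end{align}
```

where $\Delta\Phi_s$ is a phase-deviation measure, $W$ a frequency-spread weight, $T$ a noise threshold, $\lfloor\cdot\rfloor_{+}$ denotes clamping at zero, and $\varepsilon$ avoids division by zero; these last quantities are precisely the kind of parameters whose tuning the paper addresses.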


2015 ◽  
Vol 769 ◽  
pp. 369-386 ◽  
Author(s):  
A. Lefebvre-Lepot ◽  
B. Merlet ◽  
T. N. Nguyen

We address the problem of computing the hydrodynamic forces and torques among $N$ solid spherical particles moving with given rotational and translational velocities in Stokes flow. We consider the original fluid–particle model without introducing new hypotheses or models. Our method includes the singular lubrication interactions which may occur when some particles come close to one another. The main new feature is that short-range interactions are propagated to the whole flow, including accurately the many-body lubrication interactions. The method builds on a pre-existing fluid solver and is flexible with respect to the choice of this solver. The error is the error generated by the fluid solver when computing non-singular flows (i.e. with negligible short-range interactions). Therefore, only a small number of degrees of freedom are required and we obtain very accurate simulations within a reasonable computational cost. Our method is closely related to a method proposed by Sangani & Mo (Phys. Fluids, vol. 6, 1994, pp. 1653–1662) but, in contrast with the latter, it does not require parameter tuning. We compare our method with the Stokesian dynamics of Durlofsky et al. (J. Fluid Mech., vol. 180, 1987, pp. 21–49) and show the higher accuracy of the former (both by analysis and by numerical experiments).
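For context, the resistance problem being solved can be written compactly; the Stokesian-dynamics decomposition shown on the right follows Durlofsky et al. and is given only to situate the comparison, not to describe the authors' scheme:

```latex
% Given translational/rotational velocities (U_i, Omega_i) of N spheres in
% Stokes flow, forces and torques follow from the grand resistance matrix R:
\begin{equation}
  \begin{pmatrix} \mathbf{F} \\ \mathbf{T} \end{pmatrix}
  = -\,\mathcal{R}
  \begin{pmatrix} \mathbf{U} - \mathbf{u}^{\infty} \\ \boldsymbol{\Omega} - \boldsymbol{\omega}^{\infty} \end{pmatrix},
  \qquad
  \mathcal{R}^{\mathrm{SD}} \approx \big(\mathcal{M}^{\mathrm{ff}}\big)^{-1}
  + \mathcal{R}^{\mathrm{2B}}_{\mathrm{exact}} - \mathcal{R}^{\mathrm{2B}}_{\mathrm{ff}},
\end{equation}
```

where $\mathcal{M}^{\mathrm{ff}}$ is a truncated far-field mobility matrix and the last two terms add exact two-body (lubrication) resistances while removing their far-field contribution. The method of the paper instead keeps the original fluid–particle model and lets the underlying fluid solver propagate the short-range interactions to the whole flow.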



Geophysics ◽  
2021 ◽  
pp. 1-64
Author(s):  
Claudia Haindl ◽  
Kuangdai Leng ◽  
Tarje Nissen-Meyer

We present an adaptive approach to seismic modeling by which the computational cost of a 3D simulation can be reduced while retaining resolution and accuracy. This Azimuthal Complexity Adaptation (ACA) approach relies upon the inherent smoothness of wavefields around the azimuth of a source-centered cylindrical coordinate system. Azimuthal oversampling is thereby detected and eliminated. The ACA method has recently been introduced as part of AxiSEM3D, an open-source solver for global seismology. We employ a generalization of this solver which can handle local-scale Cartesian models, and which features a combination of an absorbing boundary condition and a sponge boundary with automated parameter tuning. The ACA method is benchmarked against an established 3D method using a model featuring bathymetry and a salt body. We obtain a close fit where the models are implemented equally in both solvers and an expectedly poor fit otherwise, with the ACA method running an order of magnitude faster than the classic 3D method. Further, we present maps of maximum azimuthal wavenumbers that are created to facilitate azimuthal complexity adaptation. We show how these maps can be interpreted in terms of the 3D complexity of the wavefield and in terms of seismic resolution. The expected performance limits of the ACA method for complex 3D structures are tested on the SEG/EAGE salt model. In this case, ACA still reduces the overall degrees of freedom by 92% compared to a complexity-blind AxiSEM3D simulation. In comparison with the reference 3D method, we again find a close fit and a speed-up of a factor 7. We explore how the performance of ACA is affected by model smoothness by subjecting the SEG/EAGE salt model to Gaussian smoothing. This results in a doubling of the speed-up. ACA thus represents a convergent, versatile and efficient method for a variety of complex settings and scales.
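The underlying idea can be summarized by the azimuthal Fourier expansion used in AxiSEM3D; the notation below is a simplified sketch rather than the paper's exact formulation:

```latex
% In source-centered cylindrical coordinates (s, phi, z), the wavefield is
% expanded as a Fourier series in azimuth phi with a spatially varying
% truncation order:
\begin{equation}
  \mathbf{u}(s,\phi,z,t) \;=\; \sum_{m=-N(s,z)}^{N(s,z)} \mathbf{u}_m(s,z,t)\, e^{\mathrm{i} m \phi},
\end{equation}
```

where the local truncation order $N(s,z)$ (the "maximum azimuthal wavenumber" mapped in the paper) is chosen just large enough to resolve the azimuthal complexity of the wavefield at that location, which is what eliminates azimuthal oversampling.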



Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2616 ◽  
Author(s):  
Photis Patonis ◽  
Petros Patias ◽  
Ilias N. Tziavos ◽  
Dimitrios Rossikopoulos ◽  
Konstantinos G. Margaritis

This paper presents a fusion method for combining outputs acquired by low-cost inertial measurement units and electronic magnetic compasses. Specifically, measurements from inertial accelerometer and gyroscope sensors are combined with non-inertial magnetometer measurements to provide the optimal three-dimensional (3D) orientation of the sensors’ axis systems in real time. The method combines Euler–Cardan angles and the rotation matrix for attitude and heading estimation and deals with the “gimbal lock” problem. The mathematical formulation of the method is based on the Kalman filter and takes into account the computational cost required for operation on mobile devices as well as the characteristics of low-cost microelectromechanical sensors. The method was implemented, debugged, and evaluated in a desktop software utility using a low-cost sensor system, and it was tested in an augmented reality application on an Android mobile device, while its efficiency was evaluated experimentally.
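To make the idea concrete, here is a deliberately simplified, per-axis Kalman-style orientation filter that fuses gyroscope rates with accelerometer/magnetometer attitude observations. It is a sketch under assumed axis conventions (the function and class names are hypothetical) and is not the authors' implementation, which works with full rotation matrices to avoid gimbal lock:

```python
import numpy as np

def accel_mag_to_euler(acc, mag):
    """Roll/pitch from gravity, tilt-compensated yaw from the magnetometer.
    Assumes a right-handed body frame; signs depend on the sensor frame."""
    ax, ay, az = acc
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    mx, my, mz = mag
    # Rotate the magnetic field back to the horizontal plane.
    mxh = mx * np.cos(pitch) + mz * np.sin(pitch)
    myh = (mx * np.sin(roll) * np.sin(pitch) + my * np.cos(roll)
           - mz * np.sin(roll) * np.cos(pitch))
    yaw = np.arctan2(-myh, mxh)
    return np.array([roll, pitch, yaw])

def wrap(angle):
    """Wrap an angle difference into [-pi, pi)."""
    return (angle + np.pi) % (2.0 * np.pi) - np.pi

class EulerKalman:
    """Three independent scalar Kalman filters on roll, pitch and yaw.
    Prediction integrates gyro rates (ignoring Euler-rate coupling, a
    simplification); the update uses accel/mag angles as measurements."""
    def __init__(self, q=1e-4, r=1e-2):
        self.x = np.zeros(3)          # state: [roll, pitch, yaw]
        self.P = np.ones(3)           # per-angle variance
        self.q, self.r = q, r         # process / measurement noise

    def step(self, gyro, acc, mag, dt):
        # Predict: integrate body rates over the time step.
        self.x = self.x + gyro * dt
        self.P = self.P + self.q
        # Update: accel/mag measurement of the same three angles.
        z = accel_mag_to_euler(acc, mag)
        k = self.P / (self.P + self.r)          # Kalman gain per angle
        self.x = self.x + k * wrap(z - self.x)  # wrapped innovation
        self.P = (1.0 - k) * self.P
        return self.x

# Example: stationary sensor with gravity on +z and the field along +x.
f = EulerKalman()
angles = f.step(gyro=np.zeros(3), acc=np.array([0.0, 0.0, 9.81]),
                mag=np.array([22.0, 0.0, 40.0]), dt=0.01)
print(np.degrees(angles))  # roughly [0, 0, 0]
```

A full implementation would also estimate gyroscope biases and, as in the paper, switch to a rotation-matrix representation near the singular Euler-angle configurations.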



2020 ◽  
Vol 34 (04) ◽  
pp. 5053-5060
Author(s):  
Linjian Ma ◽  
Gabe Montague ◽  
Jiayu Ye ◽  
Zhewei Yao ◽  
Amir Gholami ◽  
...  

Several recent works have claimed record times for ImageNet training. This is achieved by using large batch sizes during training to leverage parallel resources and produce faster wall-clock training times per epoch. However, these solutions often require massive hyper-parameter tuning, an important cost that is frequently ignored. In this work, we perform an extensive analysis of large batch size training for two popular methods, Stochastic Gradient Descent (SGD) and the Kronecker-Factored Approximate Curvature (K-FAC) method. We evaluate the performance of these methods in terms of both wall-clock time and aggregate computational cost, and study their hyper-parameter sensitivity by performing more than 512 experiments per batch size for each method. We perform experiments on multiple models on two datasets, CIFAR-10 and SVHN. The results show that beyond a critical batch size both K-FAC and SGD significantly deviate from ideal strong scaling behaviour, and that, contrary to common belief, K-FAC does not exhibit improved large-batch scalability compared to SGD.
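One common way to make "ideal strong scaling" and "critical batch size" precise is the empirical model below (a generic illustration in the spirit of later large-batch studies, not a result of this paper; the symbols are assumptions):

```latex
% S(B): optimization steps needed to reach a target accuracy at batch size B.
% For B << B_crit, doubling B roughly halves the step count (ideal strong
% scaling of wall-clock time); for B >> B_crit the step count saturates while
% the aggregate computational cost E(B) keeps growing.
\begin{equation}
  S(B) \;\approx\; S_{\min}\!\left(1 + \frac{B_{\mathrm{crit}}}{B}\right),
  \qquad
  E(B) \;=\; B\,S(B) \;\approx\; E_{\min}\!\left(1 + \frac{B}{B_{\mathrm{crit}}}\right).
\end{equation}
```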



2021 ◽  
Author(s):  
Evan Baker ◽  
Anna Harper ◽  
Daniel Williamson ◽  
Peter Challenor

Abstract. Land surface models are typically integrated into global climate projections, but as their spatial resolution increases, the prospect of using them to aid local policy decisions becomes more appealing. If these complex models are to be used to make local decisions, then a full quantification of uncertainty is necessary, but the computational cost of running even one simulation at high resolution can hinder proper analysis. Statistical emulation is an increasingly common technique for developing fast approximate models in a way that maintains accuracy while also providing comprehensive uncertainty bounds for the approximation. In this work, we develop a statistical emulation framework for land surface models which acknowledges the forcing data fed into the model, providing predictions at a high resolution. We use the Joint UK Land Environment Simulator (JULES) as a case study for this strategy, and perform initial sensitivity analysis and parameter tuning to showcase its capabilities. JULES is among the most complex land surface models, and so our success here suggests that substantial gains can be made for all types of land surface model.
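As a toy illustration of statistical emulation in general, here is a minimal Gaussian-process emulator on hypothetical, synthetic inputs; the paper's framework additionally conditions on the forcing data and is built specifically for JULES:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical training design: n simulator runs over d tunable parameters,
# each run returning a scalar output (a stand-in for a land-surface quantity).
n, d = 40, 3
X_train = rng.uniform(0.0, 1.0, size=(n, d))            # normalized parameters
y_train = (np.sin(3 * X_train[:, 0]) + X_train[:, 1] ** 2
           + 0.1 * rng.standard_normal(n))               # stand-in for model output

# GP emulator: smooth RBF trend plus a nugget for simulator noise.
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.3] * d) \
         + WhiteKernel(noise_level=1e-2)
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
emulator.fit(X_train, y_train)

# Fast approximate predictions with uncertainty, usable for sensitivity
# analysis or parameter tuning without re-running the simulator.
X_new = rng.uniform(0.0, 1.0, size=(5, d))
mean, std = emulator.predict(X_new, return_std=True)
print(np.c_[mean, std])
```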



Author(s):  
Viviana Gómez-Orozco ◽  
Iván De La Pava Panche ◽  
Andrés Marino Álvarez-Meza ◽  
Mauricio Alexander Álvarez-López ◽  
Álvaro Ángel Orozco-Gutiérrez

Adjusting the stimulation parameters is a challenge in deep brain stimulation (DBS) therapy due to the vast number of available configurations. As a result, systems based on visualization of the volume of tissue activated (VTA) produced by a particular stimulation setting have been developed. However, the medical specialist still has to search, by trial and error, for a DBS set-up that generates the desired VTA. Our goal is therefore to develop a DBS parameter tuning strategy for current clinical devices that allows a target VTA to be defined under biophysically viable constraints. We propose a machine learning approach for estimating the DBS parameter values that produce a given VTA, which comprises two main stages: i) a K-nearest-neighbors-based deformation to define a target VTA that preserves biophysically viable constraints; and ii) a parameter estimation stage consisting of a data projection using metric learning, to highlight relevant VTA properties, followed by a regression/classification algorithm to estimate the DBS parameters that generate the target VTA. Our methodology allows a biophysically compliant target VTA to be set and accurately predicts the required configuration of stimulation parameters. The performance of our approach is stable for both isotropic and anisotropic tissue conductivities, and the computational cost of the trained system is acceptable for real-world implementations.
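A loose sketch of the second (parameter-estimation) stage, using scikit-learn stand-ins (neighborhood components analysis for the metric-learning projection, nearest-neighbor models for the regression/classification step) on synthetic data; the VTA feature representation and the actual learners used in the paper may differ:

```python
import numpy as np
from sklearn.neighbors import (NeighborhoodComponentsAnalysis,
                               KNeighborsClassifier, KNeighborsRegressor)

rng = np.random.default_rng(1)

# Synthetic stand-in data: each row is a feature vector describing a VTA
# (e.g. volume, centroid, spread); targets are the DBS settings behind it.
n, d = 300, 6
X = rng.standard_normal((n, d))                      # VTA descriptors
contact = rng.integers(0, 4, size=n)                 # active contact (categorical)
amplitude = 1.0 + 3.0 * rng.random(n)                # stimulation amplitude in mA

# Metric learning: project VTA features so that VTAs generated by the same
# contact lie close together (NCA is supervised by the contact label).
nca = NeighborhoodComponentsAnalysis(n_components=3, random_state=0)
Z = nca.fit_transform(X, contact)

# In the learned space, estimate the categorical and continuous DBS parameters
# of a new (target) VTA from its nearest neighbors.
clf = KNeighborsClassifier(n_neighbors=5).fit(Z, contact)
reg = KNeighborsRegressor(n_neighbors=5).fit(Z, amplitude)

z_target = nca.transform(rng.standard_normal((1, d)))
print("predicted contact:", clf.predict(z_target)[0])
print("predicted amplitude (mA):", float(reg.predict(z_target)[0]))
```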



Author(s):  
A P Shuravin ◽  
S V Vologdin

The article substantiates the relevance of research on optimization algorithms, both for solving various applied problems and for artificial intelligence, and explains the need to optimize the thermal-hydraulic modes of buildings (as part of the "Smart City" project). The paper presents a mathematical formulation of the problem of optimizing the temperature regime of rooms by means of adjustable devices. Existing work provides two methods for solving the posed problem: the coordinate search method and the genetic algorithm. The article describes both algorithms, including the mathematical apparatus used, and presents the results of a computational experiment comparing the two optimization methods. The experimental results show that the genetic algorithm provides better optimization results than the coordinate search method, but at a higher computational cost. The hypothesis is confirmed that, to increase the efficiency of solving this class of problems, the genetic algorithm and the coordinate search method should be combined.
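To illustrate one of the two classes of algorithm compared, the sketch below shows a generic cyclic coordinate search with step halving on a toy objective; the objective, variable names and constants are hypothetical and do not represent the paper's thermal-hydraulic model:

```python
import numpy as np

def coordinate_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Cyclic coordinate search: try +/- step along each coordinate in turn,
    keep any improvement, and halve the step when no coordinate improves."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# Toy stand-in objective: squared deviation of room "temperatures" from their
# set points as a function of two valve settings (purely illustrative).
target = np.array([21.0, 23.0])
temps = lambda v: np.array([18.0 + 2.0 * v[0], 18.0 + 1.5 * v[1]])
objective = lambda v: float(np.sum((temps(v) - target) ** 2))

x_opt, f_opt = coordinate_search(objective, x0=[0.0, 0.0])
print(x_opt, f_opt)   # approx. [1.5, 3.333] with near-zero residual
```

A genetic algorithm would instead evolve a population of candidate valve settings via selection, crossover and mutation, trading the larger computational cost noted in the article for a broader search of the configuration space.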



2018 ◽  
Vol 6 (1) ◽  
pp. 49-59 ◽  
Author(s):  
Ali Kaveh ◽  
Vahid Reza Mahdavi

Abstract This article presents a new population-based optimization algorithm to solve multi-objective optimization problems of truss structures. The method is based on the recently developed single-objective algorithm proposed by the present authors, the so-called colliding bodies optimization (CBO), in which each agent solution is considered an object, or body, with mass. In the proposed multi-objective colliding bodies optimization (MOCBO) algorithm, the collision-theory strategy is used as the search process and the maximin fitness procedure is incorporated into CBO for sorting the agents. A series of well-known test functions with different characteristics and numbers of objective functions are studied. To measure the accuracy and efficiency of the proposed algorithm, its results are compared with those of previous methods available in the literature, such as the SPEA2, NSGA-II and MOPSO algorithms. Thereafter, two truss structural examples with bi-objective functions are optimized. The proposed algorithm is more accurate and requires a lower computational cost than the other algorithms considered. In addition, the present methodology uses a simple formulation and does not require internal parameter tuning.
Highlights
A new population-based algorithm is presented for multi-objective optimization.
The algorithm is based on the recently developed single-objective colliding bodies optimization (CBO).
The proposed multi-objective colliding bodies optimization is abbreviated as MOCBO.
MOCBO utilizes the maximin fitness procedure for sorting the agents.
A series of well-known test functions and numbers of objective functions are studied.
MOCBO is more accurate and requires a lower computational cost.
The MOCBO method uses a simple formulation and requires no internal parameter tuning.
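The maximin fitness used for sorting the agents has a compact definition (Balling's criterion); a small numpy sketch, independent of the CBO-specific collision mechanics (the function name is illustrative):

```python
import numpy as np

def maximin_fitness(F):
    """Maximin fitness for a population.
    F has shape (n_agents, n_objectives); smaller objective values are better.
    fitness_i = max_{j != i} min_k ( F[i, k] - F[j, k] ).
    Negative fitness -> agent i is non-dominated;
    positive fitness -> agent i is dominated by at least one other agent."""
    diff = F[:, None, :] - F[None, :, :]        # diff[i, j, k] = F[i, k] - F[j, k]
    min_over_obj = diff.min(axis=2)             # minimum over objectives k
    np.fill_diagonal(min_over_obj, -np.inf)     # exclude j == i from the max
    return min_over_obj.max(axis=1)

# Example with two objectives: the first three points are non-dominated,
# the last one is dominated by (1, 4).
F = np.array([[1.0, 4.0],
              [2.0, 2.0],
              [4.0, 1.0],
              [3.0, 5.0]])
print(maximin_fitness(F))   # first three values negative, last one positive
```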



1976 ◽  
Vol 32 ◽  
pp. 577-588
Author(s):  
C. Mégessier ◽  
V. Khokhlova ◽  
T. Ryabchikova

My talk will be on the oblique rotator model, which was first proposed by Stibbs (1950) and has since met with success and further development. I shall present two different attempts at describing a star according to this model, together with the first results obtained in the framework of a Russian-French collaboration aimed at testing the precision of the two methods. The aim is to give the best possible representation of the element distributions on Ap stellar surfaces. The first method is the mathematical formulation proposed by Deutsch (1958-1970), applied by Deutsch (1958) to HD 125248, by Pyper (1969) to α² CVn and by Mégessier (1975) to 108 Aqr. The other was proposed by Khokhlova (1974) and used by her group.



2012 ◽  
Author(s):  
Todd Wareham ◽  
Robert Robere ◽  
Iris van Rooij

