Unstructured Grid-Based Fuzzy Cooperative Resistivity Tomography for Electrical and Electromagnetic Data

Author(s):  
Anand Singh

Many inversion algorithms have been developed in the literature to obtain the resistivity distribution of the subsurface. The recovered resistivity values are usually lower or higher than the actual resistivity as a consequence of the inversion algorithm, so geologic units can only be identified from the resistivity distribution on a relative scale. In general, identification of the different geologic units is a post-inversion step based on the resistivity distribution in the study region. I present a technique to enhance the resistivity image using cooperative inversion, termed fuzzy cooperative resistivity tomography (FCRT), in which two additional input parameters are supplied: the number of geologic units in the model (i.e., the number of clusters) and the cluster-center values of the geologic units (the mean resistivity value of each geologic unit). FCRT fulfills three needs: (1) it obtains a resistivity model that satisfies the fit between measured and modeled data, (2) the recovered resistivity model is guided by additional a priori parametric information, and (3) resistivity estimation and geologic separation are accomplished simultaneously (i.e., no post-inversion step is needed). FCRT is based on the fuzzy c-means clustering technique, an unsupervised machine learning algorithm. The highest membership value, which is a direct outcome of FCRT, corresponds to the geologic separation result; to obtain it, I adopt a defuzzification step that assigns a single geologic unit to each model cell based on the membership values. Various synthetic and field data examples show that FCRT is an effective approach to differentiate between geologic units. However, I have also noted that the approach is only effective when the measured data are sensitive to the particular geologic units; this is the limitation of the presented FCRT.
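For readers who want the mechanics, the following is a minimal sketch of the fuzzy c-means membership computation and the maximum-membership defuzzification that attaches a geologic unit to each model cell. The resistivity values, cluster centers, fuzziness exponent and the use of log-resistivity are illustrative assumptions, not the published FCRT implementation.

```python
import numpy as np

def fcm_memberships(log_res, centers, m=2.0):
    """Fuzzy c-means memberships of each model cell to each cluster center.
    log_res : (n_cells,) log-resistivity of the model cells
    centers : (n_clusters,) log-resistivity cluster centers (a priori means)
    m       : fuzziness exponent (> 1)"""
    d = np.abs(log_res[:, None] - centers[None, :]) + 1e-12   # distance to each center
    w = d ** (-2.0 / (m - 1.0))
    return w / w.sum(axis=1, keepdims=True)                   # rows sum to 1

def defuzzify(memberships):
    """Assign each model cell the geologic unit with the highest membership."""
    return np.argmax(memberships, axis=1)

# Illustrative example: three geologic units with mean resistivities 10, 100, 1000 ohm-m
centers = np.log10([10.0, 100.0, 1000.0])
model = np.log10([12.0, 85.0, 300.0, 950.0])                  # recovered cell resistivities
U = fcm_memberships(model, centers)
print(defuzzify(U))                                           # -> [0 1 1 2]
```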

Geophysics
2018
Vol 83 (1)
pp. E11-E24
Author(s):
Anand Singh
Shashi Prakash Sharma
İrfan Akca
Vikas Chand Baranwal

We evaluate the use of a fuzzy c-means clustering procedure to improve an inverted 2D resistivity model within the iterative error-minimization procedure. The algorithm is coded in the MATLAB language for the Lp-norm inversion of 2D direct current resistivity data and is referred to as fuzzy constrained inversion (FCI). Two additional input parameters are required to be provided by the interpreter: (1) the number of geologic units in the model (i.e., the number of clusters) and (2) the mean resistivity values of each geologic unit (i.e., the cluster-center values of the geologic units). The efficacy of our approach is evaluated by tests carried out on synthetic and field electrical resistivity tomography (ERT) data. Inversion results from the FCI algorithm are presented for conventional L1- and L2-norm minimization techniques. FCI shows improvement over conventional inversion approaches in differentiating the geologic units if a proper number of geologic units is provided to the algorithm. Inappropriate clustering information will affect the resulting resistivity models, particularly for conductive geologic units in the model. We also determine that FCI is only effective when the observed ERT data are sensitive to the particular geologic units.
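As an illustration of how clustering information can enter an inversion of this kind, the sketch below evaluates the gradient of a simple fuzzy-clustering penalty, the sum over cells and clusters of u^m (model - center)^2, which nudges each model cell toward the supplied cluster centers in proportion to its memberships. This is a generic constrained-inversion ingredient under stated assumptions, not the published FCI code.

```python
import numpy as np

def fuzzy_constraint_gradient(model, centers, memberships, m=2.0):
    """Gradient, with respect to the model values, of the penalty
    sum_k sum_i memberships[k, i]**m * (model[k] - centers[i])**2,
    which pulls each cell toward the a priori cluster centers in proportion
    to its (here fixed) memberships. A sketch only, not the published FCI code."""
    u_m = memberships ** m                                     # (n_cells, n_clusters)
    return 2.0 * (u_m * (model[:, None] - centers[None, :])).sum(axis=1)
```

In a damped least-squares scheme, a term of this form would simply be added, with a trade-off weight, to the data-misfit gradient at each iteration.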


Genes
2021
Vol 12 (4)
pp. 527
Author(s):
Eran Elhaik
Dan Graur

In the last 15 years or so, soft selective sweep mechanisms have been catapulted from a curiosity of little evolutionary importance to a ubiquitous mechanism claimed to explain most adaptive evolution and, in some cases, most evolution. This transformation was aided by a series of articles by Daniel Schrider and Andrew Kern. Within this series, a paper entitled “Soft sweeps are the dominant mode of adaptation in the human genome” (Schrider and Kern, Mol. Biol. Evolut. 2017, 34(8), 1863–1877) attracted a great deal of attention, in particular in conjunction with another paper (Kern and Hahn, Mol. Biol. Evolut. 2018, 35(6), 1366–1371), for purporting to discredit the Neutral Theory of Molecular Evolution (Kimura 1968). Here, we address an alleged novelty in Schrider and Kern’s paper, i.e., the claim that their study involved an artificial intelligence technique called supervised machine learning (SML). SML is predicated upon the existence of a training dataset in which the correspondence between the input and output is known empirically to be true. Curiously, Schrider and Kern did not possess a training dataset of genomic segments known a priori to have evolved either neutrally or through soft or hard selective sweeps. Thus, their claim of using SML is thoroughly and utterly misleading. In the absence of legitimate training datasets, Schrider and Kern used: (1) simulations that employ many manipulatable variables and (2) a system of data cherry-picking rivaling the worst excesses in the literature. These two factors, in addition to the lack of negative controls and the irreproducibility of their results due to incomplete methodological detail, lead us to conclude that all evolutionary inferences derived from so-called SML algorithms (e.g., S/HIC) should be taken with a huge shovel of salt.


2017
Author(s):
Liang Feng
Paul I. Palmer
Robyn Butler
Stephen J. Andrews
Elliot L. Atlas
...  

Abstract. We infer surface fluxes of bromoform (CHBr3) and dibromomethane (CH2Br2) from aircraft observations over the western Pacific using a tagged version of the GEOS-Chem global 3-D atmospheric chemistry model and a maximum a posteriori inverse model. The distributions of a priori ocean emissions of these gases are reasonably consistent with observed atmospheric mole fractions of CHBr3 (r = 0.62) and CH2Br2 (r = 0.38). These a priori emissions result in a positive model bias in CHBr3 peaking in the marine boundary layer, but capture observed values of CH2Br2 with no significant bias by virtue of its longer atmospheric lifetime. Using GEOS-Chem, we find that observed variations in atmospheric CHBr3 are determined equally by sources over the western Pacific and those outside the study region, but observed variations in CH2Br2 are determined mainly by sources outside the western Pacific. Numerical closed-loop experiments show that the spatial and temporal distribution of boundary layer aircraft data has the potential to substantially improve current knowledge of these fluxes, with improvements related to data density. Using the aircraft data, we estimate aggregated regional fluxes of 3.6 ± 0.3 × 10^8 g/month and 0.7 ± 0.1 × 10^8 g/month for CHBr3 and CH2Br2 over 130°–155° E and 0°–12° N, respectively, which represent reductions of 20–40 % and substantial spatial deviations from the a priori inventory. We find no evidence to support a robust linear relationship between CHBr3 and CH2Br2 oceanic emissions, as used by previous studies.
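For context, the sketch below shows the standard linear Gaussian maximum a posteriori update that underlies inverse models of this type: the prior flux estimate is corrected by the data misfit, weighted by the prior and observation error covariances. The function and variable names are illustrative; the study's actual Jacobian and covariances are not reproduced here.

```python
import numpy as np

def map_flux_estimate(x_a, B, H, y, R):
    """Linear Gaussian maximum a posteriori update:
        x_hat = x_a + B H^T (H B H^T + R)^-1 (y - H x_a)
    x_a : a priori emissions,  B : prior error covariance,
    H   : Jacobian from emissions to observed mole fractions,
    y   : observations,        R : observation error covariance."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    x_hat = x_a + K @ (y - H @ x_a)                # posterior (MAP) emissions
    A = (np.eye(len(x_a)) - K @ H) @ B             # posterior error covariance
    return x_hat, A
```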


2013
Vol 765-767
pp. 670-673
Author(s):  
Li Bo Hou

The fuzzy c-means (FCM) clustering algorithm is one of the most widely applied algorithms in unsupervised pattern recognition. However, the FCM iteration requires a large amount of computation, especially when the feature vectors are high-dimensional: applying the clustering algorithm directly is inefficient, may suffer from the "curse of dimensionality", and locating the optimal cluster centers is an NP-hard problem. To improve the effectiveness and real-time performance of FCM in high-dimensional feature analysis, this paper combines it with the landmark isometric mapping (L-ISOMAP) algorithm and proposes an improved algorithm, FCM-LI. The samples are first analyzed preliminarily; then, using the clustering results and the correlation of the sample data, L-ISOMAP reduces the dimensionality, and the final result is obtained by further analysis of the reduced data. Experimental results demonstrate the effectiveness and real-time performance of the FCM-LI algorithm in high-dimensional feature analysis.
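A minimal sketch of the FCM-LI idea follows: reduce dimensionality first, then run fuzzy c-means in the low-dimensional embedding. scikit-learn's standard Isomap stands in for the landmark variant (L-ISOMAP), and the data, neighbour count and cluster number are toy assumptions.

```python
import numpy as np
from sklearn.manifold import Isomap

def fcm(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: alternate membership and center updates."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))        # random initial memberships
    for _ in range(n_iter):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]               # membership-weighted centers
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return U, C

# Reduce dimensionality first (standard Isomap stands in for landmark ISOMAP),
# then cluster in the embedding.
X_high = np.random.default_rng(1).normal(size=(300, 50))       # toy high-dimensional data
X_low = Isomap(n_neighbors=10, n_components=3).fit_transform(X_high)
U, C = fcm(X_low, n_clusters=4)
labels = U.argmax(axis=1)                                      # hard labels by defuzzification
```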


Author(s):  
David Garcia
Antonio Gonzalez
Raul Perez

In the system identification process, a predetermined set of features is often used. However, in many cases it is difficult to know a priori whether the selected features are really the most appropriate ones. This is why feature construction techniques have attracted interest in many applications. Thus, the current proposal introduces the use of these techniques in order to improve the description of fuzzy rule-based systems. In particular, the idea is to include feature construction in a genetic learning algorithm. The construction of attributes in this study is restricted to the inclusion of functions defined on the initial attributes of the system. Since the number of functions and the number of attributes can be very large, a filter model based on the use of information measures is introduced. In this way, the genetic algorithm only needs to explore the particular new features that may be of greater interest to the final identification of the system. In order to manage the knowledge provided by the new attributes based on the use of functions, we propose a new model of rule by extending a basic learning fuzzy rule-based model. Finally, we show the experimental study associated with this work.
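The sketch below illustrates the filter idea under stated assumptions: construct candidate features as simple functions of the original attributes and keep only those with the highest estimated mutual information with the class label, so that a genetic learner would only need to explore a short list of promising constructions. The specific functions (products and ratios) are illustrative, not the paper's operator set.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def construct_and_filter(X, y, top_k=5):
    """Build pairwise product/ratio features and keep the top_k by mutual information."""
    n = X.shape[1]
    candidates, names = [], []
    for i in range(n):
        for j in range(i + 1, n):
            candidates.append(X[:, i] * X[:, j])
            names.append(f"x{i}*x{j}")
            candidates.append(X[:, i] / (X[:, j] + 1e-9))
            names.append(f"x{i}/x{j}")
    C = np.column_stack(candidates)
    mi = mutual_info_classif(C, y, random_state=0)             # information-measure filter
    best = np.argsort(mi)[::-1][:top_k]
    return C[:, best], [names[b] for b in best]

# toy usage on random data with 4 original attributes
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)                        # class depends on a product feature
Z, kept = construct_and_filter(X, y, top_k=3)
print(kept)                                                    # "x0*x1" should rank highly
```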


Author(s):  
Franco Spettu
Simone Teruggi
Francesco Canali
Cristiana Achille
Francesco Fassi

Cultural Heritage (CH) 3D digitisation is receiving increasing attention and importance. Advanced survey techniques provide as output a 3D point cloud, wholly and accurately describing even the most complex architectural geometry with an a priori established accuracy. These 3D point models are generally used as the base for the realisation of 2D technical drawings and advanced 3D representations. During the last 12 years, the 3DSurveyGroup (3DSG, Politecnico di Milano) conducted a comprehensive, multi-technique survey, obtaining the full point cloud of Milan Cathedral, from which the 2D technical drawings and the 3D model of the Main Spire were produced and used by the Veneranda Fabbrica del Duomo di Milano (VF) to plan its periodic maintenance and inspection activities on the Cathedral. Using the survey product directly to plan VF activities would make it possible to skip a long, uneconomical and manual process of 2D and 3D technical elaboration extraction. In order to do so, the unstructured point cloud data must be enriched with semantics, providing a hierarchical structure that can communicate with a powerful, flexible information system able to effectively manage both point clouds and 3D geometries as hybrid models. For this purpose, the point cloud was segmented using a machine learning algorithm with a multi-level multi-resolution (MLMR) approach in order to obtain a manageable, reliable and repeatable dataset. This reverse engineering process made it possible to identify directly on the point cloud the main architectonic elements, which are then re-organised in a logical structure inserted into the information system built within the 3DExperience environment developed by Dassault Systèmes.
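As a hedged illustration of one level of such a point-cloud classification (not the actual MLMR pipeline and not the Milan Cathedral data), the sketch below computes common covariance-eigenvalue features per point and trains a random forest on manually labelled points; the toy cloud, labels and class count are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

def covariance_features(points, k=20):
    """Per-point eigenvalue features (linearity, planarity, sphericity) from the
    local neighbourhood covariance, a common input to point-cloud classifiers."""
    nn = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nn.kneighbors(points)
    feats = []
    for neighbourhood in points[idx]:                          # (k, 3) neighbours per point
        w = np.linalg.eigvalsh(np.cov(neighbourhood.T))[::-1]  # eigenvalues, descending
        l1, l2, l3 = np.maximum(w, 1e-12)
        feats.append([(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1])
    return np.asarray(feats)

# One level of a hierarchical classification: train on labelled points, predict the rest.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3))                              # toy stand-in for a point cloud
labels = rng.integers(0, 3, size=500)                          # toy stand-in for manual labels
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(covariance_features(cloud), labels)
pred = clf.predict(covariance_features(cloud))                 # per-point architectural class
```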


2007
Vol 19 (1)
pp. 80-110
Author(s):
Colin Molter
Utku Salihoglu
Hugues Bersini

This letter aims at studying the impact of iterative Hebbian learning algorithms on the recurrent neural network's underlying dynamics. First, an iterative supervised learning algorithm is discussed. An essential improvement of this algorithm consists of indexing the attractor information items by means of external stimuli rather than by using only initial conditions, as Hopfield originally proposed. Modifying the stimuli mainly results in a change of the entire internal dynamics, leading to an enlargement of the set of attractors and potential memory bags. The impact of the learning on the network's dynamics is the following: the more information to be stored as limit cycle attractors of the neural network, the more chaos prevails as the background dynamical regime of the network. In fact, the background chaos spreads widely and adopts a very unstructured shape similar to white noise. Next, we introduce a new form of supervised learning that is more plausible from a biological point of view: the network has to learn to react to an external stimulus by cycling through a sequence that is no longer specified a priori. Based on its spontaneous dynamics, the network decides “on its own” the dynamical patterns to be associated with the stimuli. Compared with classical supervised learning, huge enhancements in storing capacity and computational cost have been observed. Moreover, this new form of supervised learning, by being more “respectful” of the network intrinsic dynamics, maintains much more structure in the obtained chaos. It is still possible to observe the traces of the learned attractors in the chaotic regime. This complex but still very informative regime is referred to as “frustrated chaos.”
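A rough, generic sketch of the stimulus-indexing idea (Hebbian storage plus dynamics driven by a constant external stimulus, so that retrieval does not depend on the initial condition) is given below. It is a toy Hopfield-style network with assumed sizes and gain, not the authors' iterative learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))          # items to store
W = (patterns.T @ patterns) / N                      # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def run(stimulus, steps=3000, gain=0.5):
    """Asynchronous dynamics under a constant external stimulus; the state starts
    from noise, so retrieval is indexed by the stimulus, not the initial condition."""
    s = rng.choice([-1, 1], size=N)
    for _ in range(steps):
        i = rng.integers(N)
        h = W[i] @ s + gain * stimulus[i]            # local field plus stimulus drive
        s[i] = 1 if h >= 0 else -1
    return s

cue = patterns[1] * (rng.random(N) < 0.7)            # partial/noisy external stimulus
overlaps = (run(cue) @ patterns.T) / N               # overlap with each stored item
print(overlaps)                                      # largest overlap -> pattern 1
```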


1996
Vol 8 (7)
pp. 1391-1420
Author(s):  
David H. Wolpert

This is the second of two papers that use off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms. The first paper discusses a particular set of ways to compare learning algorithms, according to which there are no distinctions between learning algorithms. This second paper concentrates on different ways of comparing learning algorithms from those used in the first paper. In particular, this second paper discusses the associated a priori distinctions that do exist between learning algorithms. It is shown, loosely speaking, that for loss functions other than zero-one (e.g., quadratic loss), there are a priori distinctions between algorithms. However, even for such loss functions, it is shown here that any algorithm is equivalent on average to its "randomized" version, and in this sense still has no first-principles justification in terms of average error. Nonetheless, as this paper discusses, it may be that (for example) cross-validation has better head-to-head minimax properties than "anti-cross-validation" (choose the learning algorithm with the largest cross-validation error). This may be true even for zero-one loss, a loss function for which the notion of "randomization" would not be relevant. This paper also analyzes averages over hypotheses rather than targets. Such analyses hold for all possible priors over targets. Accordingly they prove, as a particular example, that cross-validation cannot be justified as a Bayesian procedure. In fact, for a very natural restriction of the class of learning algorithms, one should use anti-cross-validation rather than cross-validation (!).
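To make the two selection rules concrete, the toy sketch below contrasts cross-validation (pick the algorithm with the smallest cross-validation error) with "anti-cross-validation" (pick the one with the largest). The models and data are arbitrary placeholders; nothing here reproduces the paper's analysis.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def pick_by_cv(models, X, y, anti=False):
    """Cross-validation picks the model with the smallest CV error;
    'anti-cross-validation' perversely picks the one with the largest."""
    errors = [1.0 - cross_val_score(m, X, y, cv=5).mean() for m in models]
    idx = int(np.argmax(errors)) if anti else int(np.argmin(errors))
    return models[idx], errors

# toy comparison on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
models = [KNeighborsClassifier(3), DecisionTreeClassifier(random_state=0)]
best_cv, errs = pick_by_cv(models, X, y)
best_anti, _ = pick_by_cv(models, X, y, anti=True)
```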


2004
Vol 16 (8)
pp. 1721-1762
Author(s):
De-Shuang Huang
Horace H.S. Ip
Zheru Chi

This letter proposes a novel neural root finder based on the root moment method (RMM) to find the arbitrary roots (including complex ones) of arbitrary polynomials. This neural root finder (NRF) was designed based on feedforward neural networks (FNN) and trained with a constrained learning algorithm (CLA). Specifically, we have incorporated a priori information about the root moments of polynomials into the conventional backpropagation algorithm (BPA) to construct a new CLA. The resulting NRF is shown to be able to rapidly estimate the distributions of roots of polynomials. We study and compare the advantages of the RMM-based NRF over the previous root-coefficient-method-based NRF, the traditional Muller and Laguerre methods, and the Mathematica Roots function, as well as the behaviors, accuracies and training speeds of two specific structures of this FNN root finder: the log-σ and the σ FNN. We also analyze, theoretically and experimentally, the effects of the three controlling parameters {δP0, θp, η} of the CLA on the two NRFs. Finally, we present computer simulation results to support our claims.
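The root moments that the CLA incorporates can be computed from the polynomial coefficients alone. The sketch below does this via Newton's identities and checks the values against explicit roots; it only illustrates where the a priori information comes from, not the neural root finder itself.

```python
import numpy as np

def root_moments(coeffs, kmax):
    """Power sums s_k = sum_i r_i^k of the roots of a monic polynomial
    x^n + a1*x^(n-1) + ... + an, computed from the coefficients via
    Newton's identities: s_k = -k*a_k - sum_{j=1}^{k-1} a_j * s_{k-j}  (k <= n)."""
    a = list(coeffs)                                   # [a1, ..., an]
    n = len(a)
    s = []
    for k in range(1, kmax + 1):
        if k > n:
            raise ValueError("sketch only covers k <= degree")
        sk = -k * a[k - 1] - sum(a[j - 1] * s[k - j - 1] for j in range(1, k))
        s.append(sk)
    return s

# check against explicit roots: p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
a = [-6.0, 11.0, -6.0]
print(root_moments(a, 3))                                      # [6.0, 14.0, 36.0]
print([np.sum(np.roots([1.0] + a) ** k) for k in (1, 2, 3)])   # same values
```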


Author(s):  
Ajith Muralidharan
Roberto Horowitz

We present an adaptive iterative learning based flow imputation algorithm to estimate missing flow profiles in on ramps and off ramps using a freeway traffic flow model. We use the Link-Node Cell Transmission Model to describe the traffic state evolution in freeways, with on ramp demand profiles and off ramp split ratios (which are derived from flows) as inputs. The model-based imputation algorithm estimates the missing flow profiles that match observed freeway mainline detector data. It is carried out in two steps: (1) adaptive iterative learning of an "effective demand" parameter, which is a function of ramp demands and off ramp flows/split ratios; (2) estimation of on ramp demands/off ramp split ratios from the effective demand profile using a linear program. This paper concentrates on the design and analysis of the adaptive iterative learning algorithm, which is based on a multi-mode (piecewise non-linear) equivalent model of the Link-Node Cell Transmission Model. The parameter learning update procedure is decentralized, with different update equations depending on the local a priori state estimate and demand estimate. We present a detailed convergence analysis of our approach and finally demonstrate some examples of its application.
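The sketch below shows a generic proportional-type iterative learning update of the kind described, applied to a toy stand-in for the traffic model; the smoothing operator, gain and demand profile are illustrative assumptions, not the paper's multi-mode decentralized law.

```python
import numpy as np

def iterative_learning_update(theta, error, gamma=0.5):
    """Proportional-type iterative learning update, theta_{k+1} = theta_k + gamma * e_k.
    A generic sketch of the idea, not the paper's multi-mode decentralized law."""
    return theta + gamma * error

# Toy closed loop: the "measured" mainline data respond to the unknown effective
# demand through a simple smoothing operator standing in for the traffic model.
rng = np.random.default_rng(0)
true_demand = np.clip(rng.normal(1000.0, 200.0, size=288), 0.0, None)   # veh/h, 5-min bins

def model_response(demand):                          # toy stand-in for the LN-CTM
    return np.convolve(demand, [0.2, 0.6, 0.2], mode="same")

measured = model_response(true_demand)
theta = np.zeros_like(true_demand)                   # initial effective-demand guess
for k in range(50):
    error = measured - model_response(theta)         # model-vs-detector mismatch
    theta = iterative_learning_update(theta, error)
# after the learning iterations, theta closely approximates true_demand
```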

