Facing the Truth: Some Advantages of Direct Interpretation

Author(s):  
W. Coene ◽  
A. Thust ◽  
M. Op de Beeck ◽  
D. Van Dyck

Compared to conventional electron sources, the use of a highly coherent field-emission gun (FEG) in TEM considerably improves the information resolution. A direct interpretation of this extra information, however, is hampered because the amplitude and phase of the electron wave are scrambled in a complicated way upon transfer from the specimen exit plane through the objective lens to the image plane. In order to make the additional high-resolution information interpretable, a phase retrieval procedure is applied, which yields the aberration-corrected electron wave from a focal series of HRTEM images (Coene et al., 1992). Kirkland (1984) tackled non-linear image reconstruction using a recursive least-squares formalism in which the electron wave is modified stepwise towards the solution that optimally matches the contrast features in the experimental through-focus series. The original algorithm suffers from two major drawbacks: first, the result depends strongly on the quality of the initial guess; second, the processing time is impractically high.
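To make the flavor of such focal-series reconstruction concrete, the sketch below runs a Gerchberg–Saxton-style amplitude-replacement loop over a through-focus series. It is a minimal stand-in, not Kirkland's recursive least-squares scheme or the procedure of Coene et al.: the transfer model is pure defocus (no spherical aberration or damping envelopes), and the flat initial guess and averaging update are illustrative assumptions.

```python
# Minimal focal-series exit-wave reconstruction sketch (Gerchberg-Saxton
# style). Placeholder physics: pure-defocus transfer only.

import numpy as np

def defocus_propagate(wave, defocus, wavelength, pixel_size):
    """Apply the paraxial defocus phase plate exp(-i*chi) in Fourier space,
    with chi = pi * wavelength * defocus * q^2."""
    n = wave.shape[0]
    q = np.fft.fftfreq(n, d=pixel_size)
    qx, qy = np.meshgrid(q, q)
    chi = np.pi * wavelength * defocus * (qx**2 + qy**2)
    return np.fft.ifft2(np.fft.fft2(wave) * np.exp(-1j * chi))

def reconstruct(images, defoci, wavelength, pixel_size, n_iter=50):
    """Propagate the current wave estimate to each image plane, impose the
    measured amplitude there, propagate back, and average the updates."""
    wave = np.ones_like(images[0], dtype=complex)   # flat initial guess
    for _ in range(n_iter):
        updates = []
        for img, df in zip(images, defoci):
            w = defocus_propagate(wave, df, wavelength, pixel_size)
            w = np.sqrt(np.maximum(img, 0.0)) * np.exp(1j * np.angle(w))
            updates.append(defocus_propagate(w, -df, wavelength, pixel_size))
        wave = np.mean(updates, axis=0)
    return wave
```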


Author(s):  
A. Thust ◽  
K. Urban

At temperatures below 860 °C, the alloy of composition Ni4Mo develops an ordered D1a structure based on the fcc lattice. This alloy has been widely investigated with respect to its physical properties and its ordering behaviour. High-resolution studies are rare and have concentrated mainly on its short-range-order structure. The aim of the present work was to develop a detailed understanding of image contrast and to apply the results to antiphase-boundary studies in ordered Ni4Mo by means of a JEOL 4000 EX electron microscope. In high-resolution electron microscopy, a large variety of different images is obtained depending on defocus and foil thickness. Only a few of these allow a direct interpretation concerning the location and the type of the atoms. By computing a through-focus/through-thickness map (TFTT map) before starting experimental work, it is possible to determine the proper conditions under which images closely related to the projected potential can be obtained.
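A TFTT map is conceptually just a grid of simulated images indexed by defocus and thickness. The sketch below assembles such a grid with a toy weak-phase image simulator standing in for a proper multislice calculation of Ni4Mo; the scaling constants and the random "potential" are placeholders.

```python
# Sketch of assembling a through-focus/through-thickness (TFTT) map.
# Each panel here comes from a trivial weak-phase stand-in simulator.

import numpy as np

def simulate_image(projected_potential, thickness_nm, defocus_nm):
    """Toy stand-in: phase object scaled by thickness, imaged with a
    pure-defocus transfer function (wavelength for 200 kV electrons)."""
    wavelength = 0.00251                      # nm
    phase = 0.01 * thickness_nm * projected_potential
    wave = np.exp(1j * phase)
    n = wave.shape[0]
    q = np.fft.fftfreq(n, d=0.02)             # 0.02 nm pixels
    qx, qy = np.meshgrid(q, q)
    chi = np.pi * wavelength * defocus_nm * (qx**2 + qy**2)
    img_wave = np.fft.ifft2(np.fft.fft2(wave) * np.exp(-1j * chi))
    return np.abs(img_wave) ** 2

pot = np.random.default_rng(0).random((64, 64))   # placeholder potential
defoci = np.arange(-100, 101, 25)                 # nm
thicknesses = np.arange(2, 21, 2)                 # nm
tftt = np.array([[simulate_image(pot, t, df) for df in defoci]
                 for t in thicknesses])
print(tftt.shape)   # (n_thickness, n_defocus, ny, nx): the TFTT grid
```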


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Surendra Kumar ◽  
Mi-hyun Kim

In drug discovery, rapid and accurate prediction of protein–ligand binding affinities is a pivotal task for lead optimization with acceptable on-target potency as well as pharmacological efficacy. Furthermore, researchers hope for a high correlation between docking score and pose with key interactive residues, although scoring functions, as free-energy surrogates of protein–ligand complexes, have failed to provide such collinearity. Recently, various machine learning and deep learning methods have been proposed to overcome the drawbacks of scoring functions. Despite being highly accurate, their featurization process is complex, and the meaning of the embedded features cannot be directly interpreted without an additional feature analysis. Here, we propose SMPLIP-Score (Substructural Molecular and Protein–Ligand Interaction Pattern Score), a directly interpretable predictor of absolute binding affinity. Our simple featurization embeds the interaction fingerprint pattern of the ligand-binding-site environment and the molecular fragments of ligands into an input vectorized matrix for learning layers (random forest or deep neural network). Despite having less complex features than other state-of-the-art models, SMPLIP-Score achieved comparable performance, with a Pearson's correlation coefficient of up to 0.80 and a root mean square error of up to 1.18 pK units, on several benchmark datasets (PDBbind v.2015, Astex Diverse Set, CSAR NRC HiQ, FEP, PDBbind NMR, and CASF-2016). For this model, generality, predictive power, ranking power, and robustness were examined by directly interpreting the feature matrices for specific targets.
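A minimal sketch of the pipeline's shape, with stub featurizers in place of the paper's structure-derived interaction fingerprints and fragment counts: concatenate the two blocks into one matrix, fit a random forest, and score with Pearson's r and RMSE. All data below are random placeholders, so the printed metrics are meaningless except as a demonstration of the workflow.

```python
# Conceptual SMPLIP-Score-style pipeline: interaction fingerprint bits +
# ligand fragment counts -> random forest -> absolute affinity in pK units.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

n_complexes = 500
interaction_fp = rng.integers(0, 2, size=(n_complexes, 128))   # stub IFP bits
fragment_counts = rng.integers(0, 5, size=(n_complexes, 64))   # stub fragments
X = np.hstack([interaction_fp, fragment_counts])
y = rng.normal(6.0, 1.5, size=n_complexes)                     # stub pK values

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X[:400], y[:400])
pred = model.predict(X[400:])

r, _ = pearsonr(y[400:], pred)
rmse = float(np.sqrt(mean_squared_error(y[400:], pred)))
print(f"Pearson r = {r:.2f}, RMSE = {rmse:.2f} pK")
# Because every column corresponds to a named substructure or interaction
# pattern, model.feature_importances_ can be read off directly, which is
# the sense in which this featurization is directly interpretable.
```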


2021 ◽  
pp. 147592172110339
Author(s):  
Guoqiang Liu ◽  
Binwen Wang ◽  
Li Wang ◽  
Yu Yang ◽  
Xiaguang Wang

Because it requires no direct interpretation of the guided-wave signal, the probability-based diagnostic imaging (PDI) algorithm is especially suitable for damage identification in complex composite structures. However, the weight distribution function of the PDI algorithm is relatively inaccurate, which can reduce the damage localization accuracy. In order to improve the damage localization accuracy, an improved PDI algorithm is proposed. In the proposed algorithm, the weight distribution function is corrected using the acquired relative distances from defects to all actuator–sensor pairs and a reduction of the weight distribution areas. The validity of the proposed algorithm is assessed by identifying damage at different locations on a stiffened composite panel. The results show that the proposed algorithm can accurately identify damage in a stiffened composite panel.
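For reference, a minimal PDI implementation using the common elliptical weight distribution function, i.e. the baseline form the paper improves upon, not its corrected version; the plate geometry and damage index below are invented for illustration.

```python
# Baseline probability-based diagnostic imaging (PDI): each actuator-sensor
# pair spreads its damage index over the image via a distance-based weight;
# pixels accumulating high weighted indices indicate probable damage.

import numpy as np

def pdi_image(grid_x, grid_y, pairs, damage_indices, beta=1.05):
    image = np.zeros((grid_y.size, grid_x.size))
    X, Y = np.meshgrid(grid_x, grid_y)
    for (ax, ay, sx, sy), di in zip(pairs, damage_indices):
        d_as = np.hypot(ax - sx, ay - sy)   # actuator-sensor distance
        # Relative distance: (d(P, actuator) + d(P, sensor)) / d(a, s);
        # equals 1 on the direct path, grows away from it.
        rd = (np.hypot(X - ax, Y - ay) + np.hypot(X - sx, Y - sy)) / d_as
        w = np.where(rd < beta, (beta - rd) / (beta - 1.0), 0.0)
        image += di * w
    return image

# One actuator-sensor pair across a 500 mm plate, damage index 0.8:
gx = gy = np.linspace(0.0, 500.0, 101)
img = pdi_image(gx, gy, [(100.0, 250.0, 400.0, 250.0)], [0.8])
print("peak pixel:", np.unravel_index(img.argmax(), img.shape))
```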


Materials ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 2891
Author(s):  
Elena Fomenko ◽  
Igor Altman ◽  
Igor E. Agranovski

This paper attempts to demonstrate the importance of nanoparticle charge in the synthesis flame for the mechanism of particle evolution during formation. MgO nanoparticles formed during the combustion of magnesium particles were investigated. Because the nanoparticles in an unaffected flame are cubic, results on external flame charging by a continuous unipolar emission of ions can be interpreted directly. It was found that the emission of negative ions into the flame strongly affects the nanoparticle shape, while positive ions do not lead to any noticeable change. The demonstrated effect emphasizes the need to take into account all of the phenomena responsible for the particle charge when modeling nanoparticle formation in flames.


1981 ◽  
Vol 4 (1) ◽  
pp. 151-172
Author(s):  
Pierangelo Miglioli ◽  
Mario Ornaghi

The aim of this paper is to provide a general explanation of the "algorithmic content" of proofs, from a point of view adequate to computer science. Unlike the more usual approach of program synthesis, where the "algorithmic content" is captured by translating proofs into standard algorithmic languages, here we propose a "direct" interpretation of "proofs as programs". To do this, a clear explanation is needed of what is meant by "proof-execution", a concept which must generalize the usual "program-execution". In the first part of the paper we discuss the general conditions to be satisfied by executions of proofs and consider, as a first example of proof-execution, Prawitz's normalization. According to our analysis, simple normalization is not fully adequate to the goals of the theory of programs; so, in the second section we present an execution procedure based on ideas more oriented to computer science than Prawitz's. We provide a soundness theorem stating that our executions satisfy an appropriate adequacy condition, and discuss the sense in which our "proof-algorithms" inherently involve parallelism and non-determinism. The properties of our computation model are analyzed, and a completeness theorem involving a notion of "uniform evaluation" of open formulas is stated. Finally, an "algorithmic completeness" theorem is given, which essentially states that every flow-chart program proved to be totally correct can be simulated by an appropriate "purely logical proof".
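Under the Curry–Howard correspondence, Prawitz-style normalization of a natural-deduction proof is beta reduction of the corresponding lambda term, so "executing" a proof of an implication means normalizing its term. The toy normalizer below illustrates only this baseline notion of proof-execution, not the paper's richer procedure with parallelism and non-determinism; its capture-naive substitution is adequate only for closed example terms like the one shown.

```python
# Proofs as programs, baseline version: proof normalization = beta reduction.

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:        # use of a hypothesis
    name: str

@dataclass(frozen=True)
class Lam:        # implication introduction: \x. body proves A -> B
    param: str
    body: object

@dataclass(frozen=True)
class App:        # implication elimination (modus ponens)
    fn: object
    arg: object

def substitute(term, name, value):
    """Capture-naive substitution; fine for the closed terms used here."""
    if isinstance(term, Var):
        return value if term.name == name else term
    if isinstance(term, Lam):
        if term.param == name:
            return term
        return Lam(term.param, substitute(term.body, name, value))
    return App(substitute(term.fn, name, value),
               substitute(term.arg, name, value))

def normalize(term):
    """Contract redexes App(Lam(...), arg) until none remain: this is the
    computational content of Prawitz's proof normalization."""
    if isinstance(term, App):
        fn, arg = normalize(term.fn), normalize(term.arg)
        if isinstance(fn, Lam):
            return normalize(substitute(fn.body, fn.param, arg))
        return App(fn, arg)
    if isinstance(term, Lam):
        return Lam(term.param, normalize(term.body))
    return term

# "Executing" the proof of A -> A applied to a hypothesis a:
identity_proof = Lam("x", Var("x"))
print(normalize(App(identity_proof, Var("a"))))   # -> Var(name='a')
```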


Geophysics ◽  
1973 ◽  
Vol 38 (4) ◽  
pp. 762-770 ◽  
Author(s):  
Terry Lee ◽  
Ronald Green

The potential function for a point electrode in the vicinity of a vertical fault or dike may be expressed as an infinite integral involving Bessel functions. Beginning with such an expression, two methods are presented for the direct analysis of resistivity data measured both normal and parallel to dikes or faults. The first method is based on the asymptotic expansion of the Hankel transform of the field data and is suitable for surveys done parallel to the strike of the dike or fault. The second method is based on a successive approximation technique which starts from an initial approximate solution and iterates until a solution with prescribed accuracy is found. Both methods are suitable for programming on a digital computer and some illustrative numerical results are presented. These examples show the limitations of the methods. In addition, the application of resistivity data to the interpretation of induced‐polarization data is pointed out.
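To illustrate the kind of infinite Bessel-function integral the analysis starts from, the sketch below numerically evaluates a point-electrode potential of that form. The kernel is a placeholder chosen because its integral also has a closed form (via the Lipschitz integral), which lets the quadrature be checked; it is not the paper's fault or dike kernel, and a production code would use the paper's asymptotic-expansion method or Hankel-transform digital filters rather than brute-force quadrature.

```python
# Illustrative potential of the form
#   V(r) = (rho*I / 2*pi) * Int_0^inf (1 + k*exp(-2*lam*h)) * J0(lam*r) dlam,
# with a single-image placeholder kernel (one buried contrast, reflection
# coefficient k at effective depth h).

import numpy as np
from scipy.integrate import quad
from scipy.special import j0

rho, current, k, h = 100.0, 1.0, 0.5, 10.0   # ohm-m, A, refl. coeff., m

def potential(r):
    # The constant part of the kernel integrates analytically to 1/r;
    # only the exponentially decaying part needs numerical quadrature.
    decaying = quad(lambda lam: np.exp(-2.0 * lam * h) * j0(lam * r),
                    0.0, np.inf)[0]
    return rho * current / (2.0 * np.pi) * (1.0 / r + k * decaying)

for r in (1.0, 10.0, 100.0):
    # Lipschitz integral check: Int_0^inf e^(-a*lam) J0(b*lam) dlam
    #   = 1 / sqrt(a^2 + b^2), here a = 2h, b = r.
    closed = rho * current / (2 * np.pi) * (1 / r + k / np.hypot(r, 2 * h))
    print(f"r={r:6.1f} m  numeric={potential(r):.5f} V  closed={closed:.5f} V")
```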


2014 ◽  
Vol 11 (2) ◽  
pp. 339-350
Author(s):  
Khadidja Bouali ◽  
Fatima Kadid ◽  
Rachid Abdessemed

In this paper, a design methodology for a magnetohydrodynamic (MHD) pump is proposed. The methodology is based on a direct interpretation of the design problem as an optimization problem. The simulated annealing method is used for the optimal design of a DC MHD pump. The optimization procedure uses an objective function, here the minimization of the mass, subject to constraints that are both geometric and electromagnetic in type. The obtained results are reported.
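A minimal sketch of the approach, assuming an invented mass model and a single penalized electromagnetic constraint (the paper's actual objective and constraints are not reproduced here); scipy's dual_annealing stands in for the authors' simulated annealing routine.

```python
# Design-as-optimization sketch: minimize pump mass by simulated annealing
# under a penalized electromagnetic constraint. All models are placeholders.

import numpy as np
from scipy.optimize import dual_annealing

def pump_mass(x):
    """Hypothetical mass model: x = (channel_width_m, channel_height_m,
    magnet_length_m); mass grows with all three dimensions."""
    w, h, L = x
    return 7800.0 * w * h * L * 25.0        # crude steel-dominated estimate, kg

def penalized_objective(x):
    w, h, L = x
    mass = pump_mass(x)
    # Placeholder electromagnetic constraint: developed pressure
    # p = B * J * L must reach p_min (B and J held fixed here).
    pressure = 0.8 * 2.0e6 * L              # B [T] * J [A/m^2] * L [m]
    shortfall = max(0.0, 5.0e5 - pressure)  # Pa below the requirement
    return mass + 1.0e3 * shortfall         # penalty for violation

bounds = [(0.01, 0.10), (0.005, 0.05), (0.1, 1.0)]  # geometric constraints
result = dual_annealing(penalized_objective, bounds, seed=0)
print("optimal (w, h, L):", result.x, " mass [kg]:", pump_mass(result.x))
```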

