Determination of site occupancies by the intermeasurement minimization method. I. Anomalous scattering usage for noncentrosymmetric crystals

2008, Vol 41 (1), pp. 83-95
Author(s): Alexander Dudka

New methods for the determination of site occupancy factors are described. The methods are based on the analysis of differences between the intensities of Friedel reflections in noncentrosymmetric crystals. In the first method (Anomalous-Expert), the site occupancy factor is determined by the condition that it must be identical for two data sets: (1) the initial data without averaging of Friedel intensities and (2) data averaged over Friedel pairs after reduction of the anomalous scattering contribution. In the second method (the anomalous anisotropic intermeasurement minimization method, Anomalous-AniMMM), the site occupancy factor is refined to satisfy the condition that the differences between the intensities of Friedel reflections, after reduction of the anomalous scattering contribution, must be minimal. The methods were checked on three samples of RbTi1−xZrxOPO4 crystals (A, B and C) with the KTiOPO4 structure, at 295 and 105 K (five experimental data sets). Microprobe measurements yield compositions xA,B = 0.034 (5) and xC = 0.022 (4); the corresponding site occupancy factors are QA,B = 0.932 (10) and QC = 0.956 (8). Using Anomalous-AniMMM and three independent refinements for the first and second samples, the initial occupancy factor of QA,B = 0.963 (15) was improved to QA,B = 0.938 (7). For one of the three room-temperature data sets, it was further improved to QA,B = 0.934 (2). For the third sample and one data set, the initial occupancy factor of QC = 1.000 was improved to QC = 0.956 (1). The methods also improve the Hirshfeld rigid-bond test. The influence of the description of chemical bonding on the site occupancy factor is discussed.
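
The core of the Anomalous-AniMMM refinement can be pictured as a one-parameter minimization: for a single anomalously scattering site, the Bijvoet (Friedel) intensity difference is linear in the occupancy Q, so the Q that minimizes the residual Friedel differences after removing the anomalous contribution follows from a least-squares fit through the origin. The sketch below illustrates only this principle on synthetic data; the variable names and the toy noise model are assumptions, not the author's code.

```python
# Minimal numerical sketch (synthetic data, illustrative only): refine the
# occupancy Q so that Friedel differences reduced by the modelled anomalous
# contribution become minimal, exploiting the linearity of the Bijvoet
# difference in Q for a single anomalously scattering site.
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 500
delta_q1 = rng.normal(0.0, 30.0, n_pairs)   # model Bijvoet differences at Q = 1
q_true = 0.93
delta_obs = q_true * delta_q1 + rng.normal(0.0, 2.0, n_pairs)  # noisy "data"

# Least squares through the origin: Q minimizing ||delta_obs - Q * delta_q1||
q_hat = (delta_q1 @ delta_obs) / (delta_q1 @ delta_q1)
residual = delta_obs - q_hat * delta_q1
sigma_q = np.sqrt(residual @ residual / (n_pairs - 1) / (delta_q1 @ delta_q1))
print(f"refined occupancy Q = {q_hat:.3f} +/- {sigma_q:.3f}")
```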

2014, Vol 7 (11), pp. 4009-4022
Author(s): H. Diémoz, A. M. Siani, A. Redondas, V. Savastiouk, C. T. McElroy, et al.

Abstract. A new algorithm to retrieve nitrogen dioxide (NO2) column densities using MKIV ("Mark IV") Brewer spectrophotometers is described. The method includes several improvements, such as a more recent spectroscopic data set, reduced measurement noise, corrections for interference by other atmospheric species and for instrumental settings, and a better determination of the zenith-sky air mass factor. The technique was tested during an ad hoc calibration campaign at the high-altitude site of Izaña (Tenerife, Spain), and the results of the direct-sun and zenith-sky geometries were compared with those obtained by two reference instruments from the Network for the Detection of Atmospheric Composition Change (NDACC): a Fourier-transform infrared (FTIR) spectrometer and an advanced visible spectrograph (RASAS-II) based on the differential optical absorption spectroscopy (DOAS) technique. To determine the extraterrestrial constant, an easily implementable extension of the standard Langley technique for very clean sites without tropospheric NO2 was developed, which takes into account the daytime linear drift of stratospheric nitrogen dioxide due to photochemistry. The measurement uncertainty was thoroughly characterized using a Monte Carlo technique. Poisson noise and wavelength misalignments were found to be the largest contributors to the overall uncertainty, and possible solutions are proposed for future improvements. The new algorithm is backward-compatible, allowing the reprocessing of historical data sets.
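
The extended Langley technique lends itself to a compact illustration: if the stratospheric vertical column drifts linearly through the day, the measured slant column S at time t and air mass m can be modelled as S(t, m) = ETC + (c0 + c1·t)·m, and the extraterrestrial constant (ETC) drops out of an ordinary linear least-squares fit. The sketch below uses synthetic data and assumed variable names; the actual Brewer processing chain is considerably more involved.

```python
# Hedged sketch of a Langley fit with a linear daytime drift of the
# stratospheric NO2 column (synthetic data; illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-4.0, 4.0, 80)                       # hours from local noon
m = 1.0 / np.cos(np.radians(10.0 * np.abs(t) + 20))  # toy air-mass factors
etc_true, c0_true, c1_true = 120.0, 2.5, 0.05
s = etc_true + (c0_true + c1_true * t) * m + rng.normal(0.0, 0.3, t.size)

# Linear least squares for [ETC, column at noon, drift rate]:
design = np.column_stack([np.ones_like(t), m, t * m])
coef, *_ = np.linalg.lstsq(design, s, rcond=None)
print("ETC = %.2f, noon column = %.2f, drift = %.3f per hour" % tuple(coef))
```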


Entropy, 2020, Vol 22 (9), p. 1050
Author(s): Masengo Ilunga

This study mainly assesses the uncertainty of the mean annual runoff (MAR) for quaternary catchments (QCs), considered as metastable nonextensive systems (in the sense of Tsallis entropy), in the Middle Vaal catchment. The study is applied to the surface water resources (WR) of South Africa 1990 (WR90), 2005 (WR2005) and 2012 (WR2012) data sets. The q-information index (from the Tsallis entropy) is used here as a deviation indicator for the spatial evolution of uncertainty for the different QCs, with the Shannon entropy as a baseline. It enables the determination of a (virtual) convergence point, zones of positive and negative uncertainty deviation, a zone of null deviation and a chaotic zone for each data set. Such a determination is not possible on the basis of the Shannon entropy alone as a measure of the MAR uncertainty of QCs, i.e., when they are viewed as extensive systems. Finally, the spatial distributions of the zones of q-uncertainty deviation (gain or loss of information) of the MAR are derived and lead to iso-q-uncertainty-deviation maps.
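
The deviation indicator itself is simple to compute: the Tsallis q-entropy of a discrete distribution, S_q = (1 − Σ p_i^q)/(q − 1), is compared against the Shannon entropy baseline, which it recovers in the limit q → 1. A minimal sketch follows; the MAR values are synthetic placeholders, not WR90/WR2005/WR2012 data.

```python
# Tsallis q-entropy versus the Shannon baseline for a toy MAR distribution.
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def tsallis(p, q):
    return (1.0 - np.sum(p ** q)) / (q - 1.0)   # q != 1; -> Shannon as q -> 1

mar = np.array([12.0, 30.0, 7.5, 55.0, 21.0])   # toy mean annual runoff values
p = mar / mar.sum()                              # normalise to a distribution

for q in (0.5, 0.9, 1.1, 2.0):
    deviation = tsallis(p, q) - shannon(p)       # q-deviation from the baseline
    print(f"q = {q:4.1f}: S_q = {tsallis(p, q):.4f}, deviation = {deviation:+.4f}")
```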


Geophysics, 1993, Vol 58 (3), pp. 408-418
Author(s): L. R. Jannaud, P. M. Adler, C. G. Jacquin

A method for determining the characteristic lengths of a heterogeneous medium from the spectral analysis of codas is developed, based on an extension of Aki's theory to anisotropic elastic media. An equivalent Gaussian model is obtained and seems to be in good agreement with the two experimental data sets that illustrate the method. The first set was obtained in a laboratory experiment on an isotropic marble sample. This sample is characterized by a submillimetric length scale that can be observed directly on a thin section. The spectral analysis of the codas and their inversion yields an equivalent correlation length in good agreement with the observed one. The second data set was obtained in a crosshole experiment at the usual scale of a seismic survey. The codas were recorded, analysed and inverted. The analysis yields a vertical characteristic length for the studied subsurface that compares well with the characteristic length measured by seismic and stratigraphic logs.
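
The equivalent Gaussian model reduces the heterogeneity description to a single parameter: a correlation function R(r) = exp(−(r/a)²) whose scale a is the characteristic length. As a hedged illustration of that final step only (not of the coda inversion itself), a can be recovered from sampled correlation data by a least-squares fit:

```python
# Fit a Gaussian correlation model R(r) = exp(-(r/a)^2) to synthetic data
# to recover the characteristic length a (illustrative, not the coda method).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
r = np.linspace(0.0, 5.0, 50)                    # lag distance, arbitrary units
a_true = 1.3
corr = np.exp(-(r / a_true) ** 2) + rng.normal(0.0, 0.02, r.size)

gauss = lambda r, a: np.exp(-(r / a) ** 2)
(a_hat,), _ = curve_fit(gauss, r, corr, p0=[1.0])
print(f"fitted correlation length a = {a_hat:.3f} (true value {a_true})")
```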


2021, Vol 2021, pp. 1-12
Author(s): Guoyu Du, Xuehua Li, Lanjie Zhang, Libo Liu, Chaohua Zhao

The K-means algorithm has been extensively investigated in the field of text clustering because of its linear time complexity and its suitability for sparse matrix data. However, it has two main problems: determining the number of clusters and locating the initial cluster centres. In this study, we propose an improved K-means++ algorithm, called SDK-means++, based on the Davies-Bouldin index (DBI) and the largest sum of distances. Firstly, we represent the data set using term frequency-inverse document frequency (TF-IDF). Secondly, we measure the distance between objects by cosine similarity. Thirdly, the initial cluster centres are selected by jointly considering the distance to the existing initial cluster centres and the maximum density. Fourthly, clustering results are obtained using the K-means++ method. Lastly, the DBI is used to obtain the optimal clustering result automatically. Experimental results on real bank transaction volume data sets show that the SDK-means++ algorithm is more effective and efficient than two other algorithms in organising large financial text data sets. The F-measure value of the proposed algorithm is 0.97. The running time of the SDK-means++ algorithm is reduced by 42.9% and 22.4% compared with the K-means and K-means++ algorithms, respectively.
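
The backbone of this pipeline maps directly onto stock scikit-learn components, as in the condensed sketch below: TF-IDF representation, unit-normalised vectors so that Euclidean K-means approximates cosine similarity, K-means++ initialisation, and the DBI to select the number of clusters automatically. The paper's density-based seeding refinement is not reproduced here, and the toy corpus is an invention for illustration.

```python
# Condensed sketch: TF-IDF + K-means++ with DBI-based model selection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

docs = ["loan approved for the customer", "mortgage loan payment received",
        "credit card payment overdue", "card transaction declined",
        "new savings account opened", "savings deposit confirmed"]

X = normalize(TfidfVectorizer().fit_transform(docs))  # unit vectors: Euclidean ~ cosine

best_k, best_dbi = None, float("inf")
for k in range(2, 5):
    labels = KMeans(n_clusters=k, init="k-means++", n_init=10,
                    random_state=0).fit_predict(X)
    dbi = davies_bouldin_score(X.toarray(), labels)   # lower is better
    if dbi < best_dbi:
        best_k, best_dbi = k, dbi

print(f"selected k = {best_k} with DBI = {best_dbi:.3f}")
```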


Author(s): Tushar, Shibendu Shekhar Roy, Dilip Kumar Pratihar

Clustering is a powerful tool of data mining. A clustering method analyzes the pattern of a data set and groups the data into several clusters based on the similarity among the data points. Clusters may be either crisp or fuzzy in nature. The present chapter deals with clustering of some data sets using the Fuzzy C-Means (FCM) algorithm and the Entropy-based Fuzzy Clustering (EFC) algorithm. In the FCM algorithm, the nature and quality of the clusters depend on the pre-defined number of clusters, the level of cluster fuzziness and a threshold value used for determining the number of outliers (if any). In the EFC algorithm, on the other hand, the quality of the clusters depends on a constant used to establish the relationship between the distance and the similarity of two data points, a threshold value of similarity and another threshold value used for determining the number of outliers. The clusters should ideally be distinct and, at the same time, compact. Moreover, the number of outliers should be as small as possible. The above problem may therefore be posed as an optimization problem, which is solved using a Genetic Algorithm (GA). The best set of multi-dimensional clusters is then mapped into 2-D for visualization using a Self-Organizing Map (SOM).
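
To make the chapter's first ingredient concrete, the sketch below implements plain Fuzzy C-Means on synthetic 2-D data: memberships and cluster centres are updated alternately until they stabilise. The fuzziness level m and the cluster count c are exactly the kind of pre-defined quantities the chapter's GA-based formulation would tune; this is a generic textbook FCM, not the chapter's code.

```python
# Plain Fuzzy C-Means (FCM): alternate membership and centre updates.
import numpy as np

def fcm(x, c=2, m=2.0, iters=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)                 # random fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centres = um.T @ x / um.sum(axis=0)[:, None]  # membership-weighted centroids
        d = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        u_new = 1.0 / (d ** (2.0 / (m - 1.0)) *
                       np.sum(d ** (-2.0 / (m - 1.0)), axis=1, keepdims=True))
        converged = np.abs(u_new - u).max() < tol
        u = u_new
        if converged:
            break
    return centres, u

rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(2.0, 0.3, (30, 2))])
centres, u = fcm(x)
print(np.round(centres, 2))   # two centres near (0, 0) and (2, 2)
```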


Author(s): Gavin B. M. Vaughan, Soeren Schmidt, Henning F. Poulsen

Abstract. We present a method in which the contributions from the individual crystallites in a polycrystalline sample are separated and treated as essentially single-crystal data sets. The process involves the simultaneous determination of the orientation matrices of the individual crystallites in the sample, the subsequent integration of the individual peaks, and the filtering and summing of the integrated intensities, in order to arrive at a single-crystal-like data set that may be treated normally. To demonstrate the method, we consider as a test case a small-molecule structure, cupric acetate monohydrate. We show that it is possible to obtain a structure solution and refinement of single-crystal quality, in which accurate anisotropic thermal parameters and hydrogen-atom positions are obtained.
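
The filtering-and-summing step can be pictured, very schematically, as merging repeated observations of each reflection across grains while rejecting overlap-spoiled outliers before averaging. The toy sketch below conveys only that last step; real multigrain indexing (orientation-matrix determination and peak integration) is far more involved, and the data and tolerance below are invented.

```python
# Schematic merge of per-grain intensities into a single-crystal-like set.
import numpy as np
from collections import defaultdict

# (hkl, grain, intensity): the same reflection observed in several grains.
obs = [((1, 1, 1), 0, 102.0), ((1, 1, 1), 1, 98.5), ((1, 1, 1), 2, 240.0),
       ((2, 0, 0), 0, 55.2), ((2, 0, 0), 2, 54.1)]

groups = defaultdict(list)
for hkl, grain, intensity in obs:
    groups[hkl].append(intensity)

merged = {}
for hkl, values in groups.items():
    v = np.asarray(values)
    keep = v[np.abs(v - np.median(v)) < 0.5 * np.median(v)]  # crude outlier filter
    merged[hkl] = keep.mean()

print(merged)   # one filtered, averaged intensity per reflection
```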


Radiocarbon, 2010, Vol 52 (1), pp. 165-170
Author(s): Ugo Zoppi

Radiocarbon accelerator mass spectrometry (AMS) measurements are always carried out relative to internationally accepted standards with known 14C activities. The determination of accurate 14C concentrations relies on the fact that standards and unknown samples must be measured under the same conditions. When this is not the case, data reduction is performed either by splitting the collected data set into subsets with consistent measurement conditions or by applying correction factors. This paper introduces a mathematical framework that exploits the intrinsic variability of an AMS system by combining arbitrary measurement parameters into a normalization function. This novel approach allows the en masse reduction of large data sets by providing individual normalization factors for each data point. Both the general features of the approach and the practicalities necessary for its efficient application are discussed.
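
One way to picture such a normalization function: regress the measured ratios of the standards on a set of recorded measurement parameters, then evaluate the fitted function at every data point to obtain its individual normalization factor. The linear model, the parameter names and the synthetic data below are illustrative assumptions, not the paper's actual functional form.

```python
# Hedged sketch: fit a normalization function on standards, apply per point.
import numpy as np

rng = np.random.default_rng(3)
n = 200
temp = rng.normal(25.0, 1.0, n)              # e.g. ion-source temperature
current = rng.normal(40.0, 2.0, n)           # e.g. beam current
drift = 1.0 + 0.01 * (temp - 25.0) - 0.005 * (current - 40.0)

is_std = rng.random(n) < 0.3                 # ~30% of targets are standards
true_ratio = np.where(is_std, 1.0, rng.uniform(0.2, 1.2, n))
measured = drift * true_ratio * (1.0 + rng.normal(0.0, 0.002, n))

# Fit the normalization function on the standards only (nominal ratio = 1):
design = np.column_stack([np.ones(n), temp - 25.0, current - 40.0])
coef, *_ = np.linalg.lstsq(design[is_std], measured[is_std], rcond=None)

norm_factors = design @ coef                 # one factor per data point
corrected = measured / norm_factors          # unknowns on the standard scale
print("fitted coefficients:", np.round(coef, 4))
```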


Author(s): Derrick S. Boone

The accuracy of "stopping rules" for determining the number of clusters in a data set is examined as a function of the underlying clustering algorithm. Using a Monte Carlo study, various stopping rules, used in conjunction with six clustering algorithms, are compared to determine which rule/algorithm combinations best recover the true number of clusters. The rules and algorithms are tested on disparately sized, artificially generated data sets containing varying numbers and levels of clusters, variables, noise, outliers, and elongated and unequally sized clusters. The results indicate that stopping rule accuracy depends on the underlying clustering algorithm. The cubic clustering criterion (CCC), when used in conjunction with mixture models or Ward's method, recovers the true number of clusters more accurately than other rules and algorithms. However, the CCC was more likely than other stopping rules to report more clusters than are actually present. Implications are discussed.
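
A miniature of this Monte Carlo design is easy to set up: generate data sets with a known number of clusters, apply a stopping rule on top of a clustering algorithm, and count how often the true number is recovered. In the sketch below, the Calinski-Harabasz index stands in for the CCC (which is specific to SAS) and Ward's method is the underlying algorithm; both substitutions are named plainly.

```python
# Miniature Monte Carlo: recovery rate of a stopping rule over Ward clustering.
from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import calinski_harabasz_score

true_k, hits, trials = 4, 0, 30
for seed in range(trials):
    X, _ = make_blobs(n_samples=200, centers=true_k, cluster_std=1.0,
                      random_state=seed)
    scores = {k: calinski_harabasz_score(
                    X, AgglomerativeClustering(n_clusters=k,
                                               linkage="ward").fit_predict(X))
              for k in range(2, 8)}
    hits += max(scores, key=scores.get) == true_k   # rule's chosen k vs truth

print(f"recovered the true number of clusters in {hits}/{trials} runs")
```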


2014, Vol 70 (4), pp. o413-o414
Author(s): Alastair J. Nielson, Joyce M. Waters

In the title solvated salt, C9H14N+·Cl−·C19H24O2·0.5C7H8, two molecules of 4,4′-(propane-2,2-diyl)bis(2,6-dimethylphenol) are linked via O—H...Cl hydrogen bonds to two chloride ions, each of which is also engaged in N—H...Cl hydrogen bonding to a 4-tert-butylpyridinium cation, giving a cyclic hydrogen-bonded entity centred at (1/2, 1/2, 1/2). The toluene solvent molecule resides in the lattice on an inversion centre; the resulting disorder requires the methyl group to have a site-occupancy factor of 0.5. No crystal-packing channels are observed.


1993, Vol 8 (2), pp. 122-126
Author(s): Paul Predecki

A direct method is described for determining depth profiles (z-profiles) of diffraction data from experimentally determined τ-profiles, where z is the depth beneath the sample surface and τ is the 1/e penetration depth of the X-ray beam. With certain assumptions, the relation between these two profile functions can be expressed in the form of a Laplace transform. The criteria for fitting experimental τ-data to functions that can be utilized by the method are described. The method was applied to two τ-data sets taken from the literature: (1) residual strain in an Al thin film and (2) residual stress in a surface-ground Al2O3/5 vol% TiC composite. For each data set, the z-profiles obtained were of two types: oscillatory and nonoscillatory. The nonoscillatory profiles appeared to be qualitatively consistent for a given data set, whereas the oscillatory profiles were considered not to be physically realistic. For the data sets considered, the nonoscillatory z-profiles were found to lie consistently above the corresponding τ-profiles and to approach them at large z, as expected from the relation between the two.
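
For polynomial fits, the Laplace-transform relation admits a closed-form inversion, which makes the direct method easy to demonstrate. Assuming the measured profile takes the common form F(τ) = (1/τ)·L[f](1/τ), a fit F(τ) = Σ aₙτⁿ yields the depth profile f(z) = Σ (aₙ/n!)·zⁿ, since L[zⁿ](s) = n!/s^(n+1). The sketch below checks this on synthetic data standing in for the literature τ-data sets.

```python
# Direct inversion for a polynomial fit: F(tau) = sum a_n tau^n implies
# f(z) = sum (a_n / n!) z^n under F(tau) = (1/tau) * L[f](1/tau).
import numpy as np
from math import factorial

rng = np.random.default_rng(4)
b_true = np.array([100.0, -20.0, 1.5])        # "true" z-profile coefficients
tau = np.linspace(0.5, 10.0, 40)              # 1/e penetration depths
F = sum(b_true[n] * factorial(n) * tau ** n for n in range(3))
F = F + rng.normal(0.0, 1.0, tau.size)        # measurement noise

a = np.polynomial.polynomial.polyfit(tau, F, 2)            # fit the tau-profile
b_hat = np.array([a[n] / factorial(n) for n in range(3)])  # invert to z-profile
print("recovered z-profile coefficients:", np.round(b_hat, 2))
```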

