Uniqueness of principal points with respect to p-order distance for a class of univariate continuous distributions

2021 ◽  
pp. 109341
Author(s):  
Feng Yu

Author(s):  
M.A. O'Keefe ◽  
Sumio Iijima

We have extended the multi-slice method of computing many-beam lattice images of perfect crystals to calculations for imperfect crystals using the artificial superlattice approach. Electron waves scattered from faulted regions of crystals are distributed continuously in reciprocal space, and all these waves interact dynamically with each other to give diffuse scattering patterns. In the computation, this continuous distribution can be sampled only at a finite number of regularly spaced points in reciprocal space, and thus finer sampling gives an improved approximation. The larger cell also allows us to defocus the objective lens further before adjacent defect images overlap, producing spurious computational Fourier images. However, smaller cells allow us to sample the direct-space cell more finely; since the two-dimensional arrays in our program are limited to 128×128 and the sampling interval should be less than 1/2 Å (and preferably only 1/4 Å), superlattice sizes are limited to 40 to 60 Å. Apart from finding a compromise superlattice cell size, computing time must be conserved.
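The arithmetic behind the quoted superlattice limit can be checked directly; a minimal sketch using only the figures given in the text:

```python
# Sketch: why the superlattice size is capped by the array dimension and the
# required real-space sampling interval (numbers taken from the text above).
N = 128          # array dimension available in the program (128 x 128)
dx_max = 0.5     # maximum acceptable real-space sampling interval, angstrom
dx_pref = 0.25   # preferred sampling interval, angstrom

a_max = N * dx_max    # largest superlattice edge at the coarse limit
a_pref = N * dx_pref  # superlattice edge at the preferred sampling
print(a_max, a_pref)  # 64.0 and 32.0 angstrom, bracketing the quoted 40-60 A
```

The stated 40-60 Å working range sits between these two bounds, trading sampling fineness against cell size.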


Author(s):  
Peter Rez

In high resolution microscopy the image amplitude is given by the convolution of the specimen exit surface wave function and the microscope objective lens transfer function. This is usually done by multiplying the wave function and the transfer function in reciprocal space and integrating over the effective aperture. For very thin specimens the scattering can be represented by a weak phase object, and the amplitude observed in the image plane is given by Eq. (1), where fe(Θ) is the electron scattering factor, r is a position variable, Θ a scattering angle and χ(Θ) the lens transfer function. χ(Θ) is given by Eq. (2), where Cs is the objective lens spherical aberration coefficient, λ the wavelength, and Δf the defocus. We shall consider one-dimensional scattering that might arise from a cross-sectional specimen containing disordered planes of a heavy element stacked in a regular sequence among planes of lighter elements. In a direction parallel to the disordered planes there will be a continuous distribution of scattering angle.
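The numbered equations did not survive extraction, but the transfer function can still be sketched from the symbols the text defines (Cs, λ, Δf) using its standard form; the sign convention and the operating parameters below are assumptions for illustration, not values from the original:

```python
import numpy as np

# Sketch of the aberration (transfer) function chi(Theta) in its standard
# form chi = (2*pi/lam) * (Cs*Theta**4/4 - df*Theta**2/2); sign convention
# and numerical parameters are illustrative assumptions.
lam = 3.7e-12                 # m, electron wavelength (~100 kV, assumed)
Cs = 1.0e-3                   # m, spherical aberration coefficient (assumed)
df = np.sqrt(1.5 * Cs * lam)  # m, Scherzer defocus for these parameters

theta = np.linspace(0.0, 15e-3, 500)  # scattering angle, rad
chi = (2 * np.pi / lam) * (Cs * theta**4 / 4 - df * theta**2 / 2)
# sin(chi) is the phase-contrast transfer applied to the weak-phase signal
ctf = np.sin(chi)
```

At small angles the defocus term dominates and χ is negative; at large angles the spherical aberration term takes over, which is what limits the usable aperture.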


Author(s):  
Roger H. Stuewer

Serious contradictions to the existence of electrons in nuclei impinged in one way or another on the theory of beta decay and became acute when Charles Ellis and William Wooster proved, in an experimental tour de force in 1927, that beta particles are emitted from a radioactive nucleus with a continuous distribution of energies. Bohr concluded that energy is not conserved in the nucleus, an idea that Wolfgang Pauli vigorously opposed. Another puzzle arose in alpha-particle experiments. Walther Bothe and his co-workers used his coincidence method in 1928–30 and concluded that energetic gamma rays are produced when polonium alpha particles bombard beryllium and other light nuclei. That stimulated Frédéric Joliot and Irène Curie to carry out related experiments. These experimental results were thoroughly discussed at a conference that Enrico Fermi organized in Rome in October 1931, whose proceedings included the first publication of Pauli’s neutrino hypothesis.


2021 ◽  
pp. 014459872098811
Author(s):  
Yuanyuan Zhang ◽  
Zhanli Ren ◽  
Youlu Jiang ◽  
Jingdong Liu

To clarify the characteristics and enrichment rules of Paleogene tight sandstone reservoirs inside the rifted basins of Eastern China, the third member of the Shahejie Formation (abbreviated as Es3) in the Wendong area of the Dongpu Depression was selected as the research object. The study not only clarifies the geochemical characteristics of oil and natural gas in the Es3 of the Wendong area through testing and analysis of crude oil biomarkers, natural gas components, carbon isotopes, etc., but also compares and explains the types and geneses of oil and gas reservoirs in the slope zone and the sub-sag zone through the matching relationship between the porosity evolution of the tight reservoirs and the hydrocarbon charging process. Significant differences were found between the properties and the enrichment rules of hydrocarbon reservoirs in the different structural areas of the Wendong area. The study shows that the Paleogene hydrocarbon resources have a quasi-continuous distribution in the Wendong area. Late kerogen pyrolysis gas, light crude oil, medium crude oil, oil-cracked gas and early kerogen pyrolysis gas are distributed successively in a semicircle from the center of the sub-sag zone to the uplift belt, the result of two discrete episodes of hydrocarbon charging. Among them, the slope zone is dominated by early conventional filling of an oil-gas mixture (at the late deposition period of the Dongying Formation, about 31–27 Ma ago), while the reservoirs were gradually densified in the late stage without large-scale hydrocarbon charging (since the deposition stage of the Minghuazhen Formation, about 6–0 Ma). In contrast, the sub-sag zone lacks oil reservoirs but is enriched in late kerogen pyrolysis gas reservoirs, with reservoir densification and hydrocarbon filling occurring in both the early and late stages.


Energies ◽  
2021 ◽  
Vol 14 (7) ◽  
pp. 1909
Author(s):  
Konstantin Osintsev ◽  
Sergei Aliukov ◽  
Yuri Prikhodko

A method for evaluating the thermophysical characteristics of the torch is developed. The temperature at the end of the zone of active combustion is calculated mathematically from continuous distribution functions of solid-fuel particles, in particular coal dust. The particles have different average sizes, which are usually grouped and expressed as fractions of the total mass of the fuel. The authors suggest taking into account the sequential nature of the entry of particles of different masses into the chemical reactions of combustion. In addition, applying the developed methodology requires dividing the furnace volume into zones and sections: the initial section of the torch, the zone of intense burning and the zone of afterburning. Taking into account all the thermophysical characteristics of the torch, it is then possible to draw up a thermal balance of the zone of intense burning and to determine the outflow velocity of the fuel-air mixture, the combustion time of particles of different masses and the temperature at the end of the zone of intensive combustion. The temperature of the torch, the speed of flame propagation and the degree of particle burnout must be controlled. The authors propose an algorithm for controlling the thermophysical properties of the torch based on neural network algorithms: the system collects data for a certain time and transmits the information to a server, where the data are processed and a forecast of the combustion modes is made using neural networks. This makes it possible to increase the reliability and efficiency of the combustion process. The authors present experimental data and compare them with the results of the analytical calculation. In addition, data for certain modes are given, taking into account the system's operation based on neural network algorithms.
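The grouping of particle sizes into mass fractions via a continuous distribution can be sketched as follows; the Rosin-Rammler form and its parameters are assumptions here (a common choice for pulverized coal), not taken from the paper:

```python
import numpy as np

# Sketch: represent the coal-dust size continuum by a Rosin-Rammler
# distribution, then regroup it into discrete mass fractions as the
# method described above does. Parameters are illustrative assumptions.
d_mean = 50.0   # um, characteristic particle size (assumed)
n = 1.2         # spread parameter (assumed)

def mass_fraction_oversize(d):
    """Rosin-Rammler: mass fraction of particles coarser than d (um)."""
    return np.exp(-(d / d_mean) ** n)

# Bin edges for the discrete size fractions (um); last bin is open-ended
edges = np.array([0.0, 25.0, 50.0, 75.0, 100.0, np.inf])
fractions = mass_fraction_oversize(edges[:-1]) - mass_fraction_oversize(edges[1:])
# Each fraction can then be assigned its own ignition delay and burnout
# time in the zone-by-zone heat balance of the intense-burning zone.
```

Because the distribution is continuous, the fractions sum exactly to the total fuel mass, which keeps the zone heat balance closed.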


ZooKeys ◽  
2018 ◽  
Vol 772 ◽  
pp. 153-163 ◽  
Author(s):  
Atsunobu Murase ◽  
Ryohei Miki ◽  
Masaaki Wada ◽  
Masahide Itou ◽  
Hiroyuki Motomura ◽  
...  

The Potato Grouper, Epinephelus tukula, is relatively rare worldwide. Records from the northernmost part of its range (Japan) have been few, resulting in a “Critically Endangered” listing on the Red List for Japan. The Japanese records were revised by examining literature, new specimens, photographs, and the internet, and a continuous distribution pattern from the tropical Ryukyu Islands (including adult individuals) to temperate regions affected by the Kuroshio Current was delineated; this suggests the species inhabits tropical Japan and can spread to temperate regions via the warm current. The species possibly reproduces in Japanese waters, but further research on its reproductive ecology is required.


Author(s):  
Nils Damaschke ◽  
Volker Kühn ◽  
Holger Nobach

Abstract The prediction and correction of systematic errors in direct spectral estimation from irregularly sampled data taken from a stochastic process is investigated. Different sampling schemes that lead to such an irregular sampling of the observed process are examined. Both kinds of sampling schemes are considered: on the one hand, stochastic sampling with non-equidistant sampling intervals from a continuous distribution and, on the other hand, nominally equidistant sampling with missing individual samples, yielding a discrete distribution of sampling intervals. For both distributions of sampling intervals, continuous and discrete, different sampling rules are investigated. First, purely random and independent sampling times are considered. This holds only in those cases where the occurrence of one sample at a certain time has no influence on other samples in the sequence; it excludes any preferred delay intervals or external selection processes that introduce correlations between the sampling instances. Second, sampling schemes with interdependency, and thus correlation between the individual sampling instances, are investigated. This is the case whenever the occurrence of one sample in any way influences further sampling instances, e.g., through recovery times after one instance, preferences for certain sampling intervals (including sampling jitter), or any external source with correlation influencing the validity of samples. A bias-free estimation of the spectral content of the observed random process from such irregularly sampled data is the goal of this investigation.
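A direct spectral estimator of the kind discussed above can be sketched on purely random, independent sampling times; the test signal, sampling scheme and frequency grid below are illustrative assumptions:

```python
import numpy as np

# Sketch of direct spectral estimation from irregularly sampled data:
# S(f) = |sum_k x_k * exp(-2*pi*i*f*t_k)|^2 / N, evaluated on an arbitrary
# frequency grid. Signal and sampling parameters are assumptions.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 1000))   # irregular sampling times, s
f0 = 1.0                                     # Hz, tone to recover (assumed)
x = np.sin(2.0 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)
x = x - x.mean()                             # remove the DC component first

freqs = np.linspace(0.1, 3.0, 300)           # Hz, evaluation grid
S = np.abs(np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) @ x) ** 2 / t.size
f_peak = freqs[np.argmax(S)]                 # should land close to f0
```

With independent sampling times like these the estimator recovers the tone; the correlated sampling schemes discussed in the abstract (recovery times, jitter, external selection) are exactly where such an estimator becomes biased and corrections are needed.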


2021 ◽  
Vol 16 (1) ◽  
Author(s):  
M. A. Spitz ◽  
F. Severac ◽  
C. Obringer ◽  
S. Baer ◽  
N. Le May ◽  
...  

Abstract Background Cockayne syndrome (CS) is a progressive multisystem genetic disorder linked to defective DNA repair and transcription. This rare condition encompasses a very wide spectrum of clinical severity levels, ranging from severe prenatal-onset to mild adult-onset subtypes. The rarity, complexity and variability of the disease make early diagnosis and severity assessment difficult. Based on similar approaches in other neurodegenerative disorders, we propose to validate diagnostic and severity scores for Cockayne syndrome. Methods Clinical, imaging and genetic data were retrospectively collected from 69 molecularly confirmed CS patients. A clinical diagnostic score and a clinical-radiological diagnostic score for CS were built using a multivariable logistic regression model with a stepwise variable selection procedure. A severity score for CS was designed on five items (head circumference, growth failure, neurosensorial signs, motor autonomy, communication skills) and validated by comparison with classical predefined severity subtypes of CS. Results Short stature, enophthalmos, hearing loss, cataracts, cutaneous photosensitivity, frequent dental caries, enamel hypoplasia, morphological abnormalities of the teeth, areflexia and spasticity were included in the clinical diagnostic score as the most statistically relevant criteria. Appropriate weights and thresholds were assigned to obtain optimal sensitivity and specificity (95.7% and 86.4%, respectively). The severity score was shown to quantitatively differentiate classical predefined subtypes of CS and confirmed the continuous distribution of clinical presentations in CS. Longitudinal follow-up of the severity score reflected the natural course of the disease. Conclusion The diagnostic and severity scores for CS will facilitate early diagnosis and longitudinal evaluation of future therapeutic interventions. Prospective studies will be needed to confirm these findings.
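How a weighted item score yields sensitivity and specificity at a cutoff can be illustrated with a minimal sketch; the weights, threshold and patient profiles below are hypothetical and are not the published CS score:

```python
import numpy as np

# Hypothetical sketch of a weighted clinical score evaluated at a cutoff.
# Weights, threshold and the example profiles are illustrative only.
weights = np.array([2, 2, 1, 1, 3])   # assumed weights for five binary items
threshold = 4                          # assumed diagnostic cutoff

def score(features):
    """features: 0/1 vector, one entry per clinical item; weighted sum."""
    return int(np.dot(weights, features))

# Toy cohorts: rows are patients, columns are the five items
cases = np.array([[1, 1, 0, 1, 1], [1, 0, 1, 1, 1], [0, 1, 1, 0, 1]])
controls = np.array([[0, 0, 1, 0, 0], [1, 0, 0, 1, 0], [0, 1, 0, 0, 0]])

sens = np.mean([score(f) >= threshold for f in cases])     # true-positive rate
spec = np.mean([score(f) < threshold for f in controls])   # true-negative rate
```

Sweeping the threshold over the score's range traces the sensitivity-specificity trade-off from which an optimal cutoff, like the 95.7%/86.4% operating point reported above, is chosen.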

