large deviation
Recently Published Documents

TOTAL DOCUMENTS: 1558 (299 in the last five years)
H-INDEX: 39 (7 in the last five years)
Author(s):  
S Sumedha ◽  
Mustansir Barma

Abstract: We use large deviation theory to obtain the free energy of the XY model on a fully connected graph on each site of which there is a randomly oriented field of magnitude $h$. The phase diagram is obtained for two symmetric distributions of the random orientations: (a) a uniform distribution and (b) a distribution with cubic symmetry. In both cases, the ordered state reflects the symmetry of the underlying disorder distribution. The phase boundary has a multicritical point which separates a locus of continuous transitions (for small values of $h$) from a locus of first order transitions (for large $h$). The free energy is a function of a single variable in case (a) and a function of two variables in case (b), leading to different characters of the multicritical points in the two cases.


Author(s):  
G. Gouraud ◽  
Pierre Le Doussal ◽  
Gregory Schehr

Abstract: The hole probability, i.e., the probability that a region is void of particles, is a benchmark of correlations in many-body systems. We compute this probability $P(R)$ analytically for a sphere of radius $R$ in the case of $N$ noninteracting fermions in their ground state in a $d$-dimensional trapping potential. Using a connection to the Laguerre–Wishart ensembles of random matrices, we show that, for large $N$ and in the bulk of the Fermi gas, $P(R)$ is described by a universal scaling function of $k_F R$, for which we obtain an exact formula ($k_F$ being the local Fermi wave-vector). It exhibits a super-exponential tail $P(R) \propto e^{-\kappa_d (k_F R)^{d+1}}$, where $\kappa_d$ is a universal amplitude, in good agreement with existing numerical simulations. When $R$ is of the order of the radius of the Fermi gas, the hole probability is described by a large deviation form which is not universal and which we compute exactly for the harmonic potential. Similar results also hold in momentum space.


Author(s):  
Trifce Sandev ◽  
Viktor Domazetoski ◽  
Ljupco Kocarev ◽  
Ralf Metzler ◽  
Alexei Chechkin

Abstract: We study a heterogeneous diffusion process with position-dependent diffusion coefficient and Poissonian stochastic resetting. We find exact results for the mean squared displacement and the probability density function, and study the nonequilibrium steady state reached in the long-time limit. We also analyze the transition to the nonequilibrium steady state by finding the large deviation function. We find that, in analogy with normal diffusion, where the diffusion length grows like $t^{1/2}$ while the length scale ξ(t) of the inner core region of the nonequilibrium steady state grows linearly with time t, in the heterogeneous diffusion process with diffusion length increasing like $t^{p/2}$ the length scale ξ(t) grows like $t^{p}$. The obtained results are verified by numerical solutions of the corresponding Langevin equation.
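The Langevin verification mentioned in the abstract can be sketched with a minimal Euler–Maruyama scheme; the regularized diffusivity D(x) = D0(1 + |x|)^α and all parameter values below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def simulate_hdp_reset(n_paths=2000, t_max=5.0, dt=1e-3, D0=1.0, alpha=0.5,
                       r=1.0, seed=0):
    """Euler-Maruyama for dx = sqrt(2 D(x)) dW with Poissonian resetting.

    D(x) = D0 * (1 + |x|)**alpha is an illustrative regularized
    position-dependent diffusivity; each path resets to the origin
    with rate r. Returns the mean squared displacement at t_max.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    n_steps = int(t_max / dt)
    for _ in range(n_steps):
        D = D0 * (1.0 + np.abs(x)) ** alpha
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_paths)
        if r > 0:
            reset = rng.random(n_paths) < r * dt  # Poissonian resetting
            x[reset] = 0.0
    return np.mean(x ** 2)
```

With resetting switched on (r > 0) the mean squared displacement saturates toward its nonequilibrium steady-state value, whereas without resetting it keeps growing, which is the qualitative behavior the abstract describes.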


Author(s):  
Karol Jaśkiewicz ◽  
Mateusz Skwarski ◽  
Paweł Kaczyński ◽  
Zbigniew Gronostajski ◽  
Sławomir Polak ◽  
...  

Abstract: The article covers experimental research on the forming of products made of 7075 aluminum alloy. This alloy grade is characterized by high strength, but its low formability in the T6 temper limits its use in stamping processes for complex structural elements. The authors manufactured a U-shaped element at elevated temperature and determined the optimal process parameters. Conventional heating of the sheet and forming at 100 and 150 °C yielded a product with strength similar to the T6 state, above 540 MPa. However, owing to excessive springback of the sheet during forming, these products showed a large deviation of the shape geometry, exceeding the allowable values of ±1 mm. Only an alternative method of heating the sheet to 200 and 240 °C (between plates at 350 °C, heating time 2 min, heating rate 1.8 °C/s) produced a product that meets both the strength and the geometric requirements. The determined optimal process parameters were then transferred to the stamping of an element of more complex shape (the lower part of a B-pillar). The sheet was heated and formed in pre-heated tools. In the subsequent series of tests, the heating method and the blank's temperature were analyzed. In the case of the foot of the B-pillar, it was necessary to lower the initial blank temperature to 200 °C (heating in a furnace at 340 °C, heating rate 0.5 °C/s). The appropriate combination of process parameters resulted in a satisfactory shape deviation and a product strength comparable to that of the material in the as-delivered T6 temper. Electron microscopy confirmed that the structure of the finished product contained MgZn2 particles, which strongly strengthen the alloy. The obtained results complement the data on the possibility of using 7075 aluminum alloy to produce energy-absorbing elements of motor vehicles.


Author(s):  
Xi Chen ◽  
Yunxiao Chen ◽  
Xiaoou Li

A sequential design problem for rank aggregation is commonly encountered in psychology, politics, marketing, sports, etc. In this problem, a decision maker is responsible for ranking K items by sequentially collecting noisy pairwise comparisons from judges. The decision maker needs to choose a pair of items for comparison in each step, decide when to stop data collection, and make a final decision after stopping based on a sequential flow of information. Because of the complex ranking structure, existing sequential analysis methods are not suitable. In this paper, we formulate the problem under a Bayesian decision framework and propose sequential procedures that are asymptotically optimal. These procedures achieve asymptotic optimality by seeking a balance between exploration (i.e., finding the most indistinguishable pair of items) and exploitation (i.e., comparing the most indistinguishable pair based on the current information). New analytical tools are developed for proving the asymptotic results, combining advanced change of measure techniques for handling the level crossing of likelihood ratios and classic large deviation results for martingales, which are of separate theoretical interest in solving complex sequential design problems. A mirror-descent algorithm is developed for the computation of the proposed sequential procedures.
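The exploration step described above (find the most indistinguishable pair) can be sketched as a toy rule on empirical win fractions; the win-count bookkeeping and the default for unseen pairs are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def next_pair(wins):
    """Pick the currently most indistinguishable pair: the pair whose
    empirical win fraction is closest to 1/2.

    wins[i, j] = number of times item i has beaten item j so far.
    Unseen pairs default to 1/2, so they are explored first.
    """
    K = wins.shape[0]
    best, best_gap = (0, 1), np.inf
    for i in range(K):
        for j in range(i + 1, K):
            n = wins[i, j] + wins[j, i]
            p = wins[i, j] / n if n > 0 else 0.5
            gap = abs(p - 0.5)
            if gap < best_gap:
                best, best_gap = (i, j), gap
    return best
```

In the paper the comparison choice, the stopping rule, and the final ranking are derived jointly from the Bayesian decision framework; this sketch only illustrates the "most indistinguishable pair" criterion in isolation.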


2022 ◽  
Vol 8 ◽  
Author(s):  
Na Zhao ◽  
Yang Gao ◽  
Bo Xu ◽  
Weixian Yang ◽  
Lei Song ◽  
...  

Aims: To explore the effect of coronary calcification severity on the measurements and diagnostic performance of computed tomography-derived fractional flow reserve (FFR; CT-FFR).

Methods: This study included 305 patients (348 target vessels) with evaluable coronary calcification (CAC) scores from the CT-FFR CHINA clinical trial. All enrolled patients received coronary CT angiography (CCTA), CT-FFR, and invasive FFR examinations within 7 days. On both per-patient and per-vessel levels, the measured values, accuracy, and diagnostic performance of CT-FFR in identifying hemodynamically significant lesions were analyzed across the CAC score groups (CAC = 0, >0 to <100, ≥100 to <400, and ≥400), with invasive FFR as the reference standard.

Results: In total, the sensitivity, specificity, positive predictive value, negative predictive value, accuracy, and area under the receiver operating characteristic curve (AUC) of CT-FFR were 85.8%, 88.7%, 86.9%, 87.8%, 87.1%, and 0.90 on a per-patient level, and 88.3%, 89.3%, 89.5%, 88.2%, 88.9%, and 0.88 on a per-vessel level, respectively. The absolute difference between CT-FFR and FFR values tended to increase with CAC score (CAC = 0: 0.09 ± 0.10; CAC >0 to <100: 0.06 ± 0.06; CAC ≥100 to <400: 0.09 ± 0.10; CAC ≥400: 0.11 ± 0.13; p = 0.246). However, no statistically significant difference was found in the patient-based or vessel-based diagnostic performance of CT-FFR among the CAC score groups.

Conclusion: This prospective multicenter trial supports CT-FFR as a viable tool for assessing calcified coronary lesions. Although large deviations of CT-FFR tend to correlate with severe calcification, coronary calcification has no significant influence on CT-FFR diagnostic performance at the widely recognized cut-off value of 0.8.
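The per-patient figures in the Results are the standard 2×2-table diagnostic metrics. A minimal sketch of how they are derived follows; the counts used in the test are hypothetical, since the abstract reports only the derived percentages, not the raw confusion table:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from a 2x2 confusion table.

    tp/fp/tn/fn = true/false positives and negatives against the
    reference standard (here, invasive FFR).
    """
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

Note that the AUC is the only reported figure that cannot be recovered from a single 2×2 table; it requires the continuous CT-FFR values across all thresholds.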


Author(s):  
Paola Bermolen ◽  
Valeria Goicoechea ◽  
Matthieu Jonckheere ◽  
Ernesto Mordecki

2022 ◽  
Vol 2022 (1) ◽  
pp. 013206
Author(s):  
Cécile Monthus

Abstract: The large deviations at level 2.5 are applied to Markov processes with absorbing states in order to obtain the explicit extinction rate of metastable quasi-stationary states in terms of their empirical time-averaged density and of their time-averaged empirical flows over a large time-window T. The standard spectral problem for the slowest relaxation mode can be recovered from the full optimization of the extinction rate over all these empirical observables, and the equivalence can be understood via the Doob generator of the process conditioned to survive up to time T. The large deviation properties of any time-additive observable of the Markov trajectory before extinction can be derived from the level 2.5 via the decomposition of the time-additive observable in terms of the empirical density and the empirical flows. This general formalism is described for continuous-time Markov chains, with applications to population birth–death models in a stable or in a switching environment, and for diffusion processes in dimension d.
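The standard spectral problem mentioned above can be sketched for a finite birth–death chain absorbed at 0: the extinction rate of the quasi-stationary state is the eigenvalue of the sub-generator on the transient states closest to zero. The truncation at a cap N and the rate functions here are illustrative assumptions:

```python
import numpy as np

def extinction_rate(birth, death, N):
    """Slowest relaxation rate of a birth-death chain absorbed at 0.

    birth(n), death(n) give the rates out of state n (1 <= n <= N);
    births are suppressed at the truncation cap N. Returns the
    extinction rate and the quasi-stationary distribution, obtained
    from the spectral problem for the sub-generator on states 1..N.
    """
    Q = np.zeros((N, N))
    for i, n in enumerate(range(1, N + 1)):
        b, d = birth(n), death(n)
        if n < N:
            Q[i, i + 1] = b
        else:
            b = 0.0  # no births beyond the finite truncation
        if n > 1:
            Q[i, i - 1] = d  # death from n=1 exits to the absorbing state
        Q[i, i] = -(b + d)
    w, v = np.linalg.eig(Q.T)          # left spectral problem: nu Q = -theta nu
    k = np.argmax(w.real)              # eigenvalue closest to zero
    qsd = np.abs(v[:, k].real)
    return -w[k].real, qsd / qsd.sum()
```

For a pure-death chain with death(n) = n the sub-generator is triangular with eigenvalues -1, ..., -N, so the extinction rate is exactly 1, which gives a quick sanity check.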


Machines ◽  
2022 ◽  
Vol 10 (1) ◽  
pp. 30
Author(s):  
Liang Gong ◽  
Shengzhe Fan

The number of grains within a panicle is an important index for rice breeding. Counting manually is laborious and time-consuming and hardly meets the requirement of rapid breeding, so an image-based method for automatic counting is needed. However, general image processing methods cannot effectively extract the features of grains within a panicle, resulting in a large deviation. The convolutional neural network (CNN) is a powerful tool for analyzing complex images and has been applied to many image-related problems in recent years. In order to count the number of grains in images both efficiently and accurately, this paper applied a CNN-based method to detect grains. The grains can then be easily counted by locating the connected domains. The final error is within 5%, which confirms the feasibility of the CNN-based method for counting grains within a panicle.
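The counting-by-connected-domains step can be sketched with a plain flood fill on a binary detection mask; the mask here is a stand-in for the CNN output, which the abstract does not specify:

```python
import numpy as np

def count_connected(mask):
    """Count 4-connected foreground regions in a binary mask.

    A minimal flood-fill stand-in for the connected-domain counting
    applied to the CNN detection mask: each region = one grain.
    """
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1                 # new region found
                stack = [(i, j)]
                seen[i, j] = True
                while stack:               # flood-fill the whole region
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count
```

In practice a library routine such as scipy.ndimage.label does the same labeling; the explicit version above just makes the connectivity logic visible.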

