statistical sense
Recently Published Documents


TOTAL DOCUMENTS: 123 (24 in the last five years)

H-INDEX: 15 (2 in the last five years)

2021 ◽  
Author(s):  
Jonas Spaeth ◽  
Thomas Birner

Abstract. The Arctic Oscillation (AO) describes a seesaw pattern of variations in atmospheric mass over the polar cap. It is by now well established that the AO pattern is in part determined by the state of the stratosphere. In particular, sudden stratospheric warmings (SSWs) are known to nudge the tropospheric circulation toward a more negative phase of the AO, which is associated with a more equatorward-shifted jet and an enhanced likelihood of blocking and cold air outbreaks in mid-latitudes. SSWs are also thought to contribute to the occurrence of extreme AO events. However, statistically robust results about such extremes are difficult to obtain from observations or meteorological (re-)analyses due to the limited sample size of SSW events in the observational record (roughly 6 SSWs per decade). Here we exploit a large set of extended-range ensemble forecasts within the subseasonal-to-seasonal (S2S) framework to obtain an improved characterization of the modulation of AO extremes due to stratosphere-troposphere coupling. Specifically, we greatly boost the sample size of stratospheric events by using potential SSWs (p-SSWs), i.e., SSWs that are predicted to occur in individual forecast ensemble members regardless of whether they actually occurred in the real atmosphere. For example, for the ECMWF S2S ensemble this gives us a total of 6101 p-SSW events for the period 1997–2021. A standard lag-composite analysis around these p-SSWs validates our approach, i.e., the associated composite evolution of stratosphere-troposphere coupling matches the known evolution based on reanalysis data around real SSW events.
Our statistical analyses further reveal that following p-SSWs, relative to climatology: 1) persistently negative AO states (> 1 week duration) are 16 % more likely; 2) the likelihood of extremely negative AO states (< −3σ) is enhanced by at least 35 %, while that of extremely positive AO states (> +3σ) is reduced to almost zero; 3) for an extremely negative AO state occurring within 4 weeks of a p-SSW, the p-SSW is causal for the extreme (in a statistical sense) to a degree of up to 27 %. A corresponding analysis relative to strong stratospheric vortex events yields similar insights into the stratospheric modulation of positive AO extremes.
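The "causal in a statistical sense" figure is an excess-fraction-style measure: of the extremes preceded by a p-SSW, what fraction exceeds the climatological base rate. A minimal sketch of such a computation, with hypothetical probabilities chosen only to illustrate the formula (the abstract does not report the underlying rates):

```python
def attributable_fraction(p_extreme_given_ssw, p_extreme_climatology):
    """Excess fraction: share of 'exposed' extremes attributable to the
    exposure, AF = (P(E|SSW) - P(E)) / P(E|SSW)."""
    return (p_extreme_given_ssw - p_extreme_climatology) / p_extreme_given_ssw

# Hypothetical weekly rates of an extremely negative AO state,
# with and without a preceding p-SSW (illustrative numbers only):
af = attributable_fraction(0.0137, 0.0100)
print(round(af, 2))  # fraction of exposed extremes attributable to the p-SSW
```

With these illustrative inputs the excess fraction comes out near the 27 % quoted above, but the paper's own estimator may differ in detail.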


2021 ◽  
Vol 17 (11) ◽  
pp. e1009477
Author(s):  
Eva Loth ◽  
Jumana Ahmad ◽  
Chris Chatham ◽  
Beatriz López ◽  
Ben Carter ◽  
...  

Over the past decade, biomarker discovery has become a key goal in psychiatry to aid in the more reliable diagnosis and prognosis of heterogeneous psychiatric conditions and the development of tailored therapies. Nevertheless, the prevailing statistical approach is still the mean group comparison between “cases” and “controls,” which tends to ignore within-group variability. In this educational article, we used empirical data simulations to investigate how effect size, sample size, and the shape of distributions impact the interpretation of mean group differences for biomarker discovery. We then applied these statistical criteria to evaluate biomarker discovery in one area of psychiatric research—autism research. Across the most influential areas of autism research, effect size estimates ranged from small (d = 0.21, anatomical structure) to medium (d = 0.36, electrophysiology; d = 0.5, eye-tracking) to large (d = 1.1, theory of mind). We show that in normal distributions, this translates to approximately 45% to 63% of cases performing within 1 standard deviation (SD) of the typical range, i.e., they do not have a deficit/atypicality in a statistical sense. For a measure to have diagnostic utility as defined by 80% sensitivity and 80% specificity, a Cohen’s d of 1.66 is required, and even then 40% of cases fall within 1 SD. However, in both normal and nonnormal distributions, 1 (skewness) or 2 (platykurtic, bimodal) biologically plausible subgroups may exist despite small or even nonsignificant mean group differences. This conclusion drastically contrasts with the way mean group differences are frequently reported. Over 95% of studies omitted the “on average” when summarising their findings in their abstracts (“autistic people have deficits in X”), which can be misleading as it implies that the group-level difference applies to all individuals in that group.
We outline practical approaches and steps for researchers to explore mean group comparisons for the discovery of stratification biomarkers.
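The relationship between Cohen's d, group overlap, and diagnostic utility described above can be reproduced for normal distributions. A sketch assuming equal-SD groups and a single cutoff placed midway between the two group means; the paper's exact conventions may differ slightly (it reports d = 1.66 where this construction gives ≈1.68):

```python
from scipy.stats import norm

def frac_within_1sd(d):
    """Fraction of 'cases' (mean shifted by Cohen's d, SD 1) that fall
    within +/-1 SD of the comparison-group mean."""
    return norm.cdf(1 - d) - norm.cdf(-1 - d)

# Medium and large effects reported in the abstract:
print(round(frac_within_1sd(0.5), 2))   # ~0.62, close to the 63 % quoted
print(round(frac_within_1sd(1.1), 2))   # ~0.44, close to the 45 % quoted

# Cohen's d needed for 80 % sensitivity and 80 % specificity with a
# single cutoff midway between the group means:
d_required = 2 * norm.ppf(0.80)
print(round(d_required, 2))             # ~1.68
```

The midpoint-cutoff assumption makes sensitivity and specificity equal, which is why a single d determines both.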


2021 ◽  
Vol 922 (1) ◽  
pp. 74
Author(s):  
Jaroslav Haas ◽  
Ladislav Šubr

Abstract Stellar motions in the innermost parts of galactic nuclei, where the gravity of a supermassive black hole dominates, follow Keplerian ellipses to the first order of approximation. These orbits may be subject to periodic (Kozai–Lidov) oscillations of their orbital elements if some nonspherically distributed matter (e.g., a secondary massive black hole, coherent stellar subsystem, or large-scale gaseous structure) perturbs the gravity of the central supermassive black hole. These oscillations are, however, affected by the overall potential of the host nuclear star cluster. In this paper, we show that its influence strongly depends on the properties of the particular system, as well as the considered timescale. We demonstrate that for systems with astrophysically relevant parameters, the Kozai–Lidov oscillations of eccentricity can be enhanced by the extended potential of the cluster in terms of reaching significantly higher maximal values. In a more general statistical sense, the oscillations of eccentricity are typically damped. The efficiency of the damping, however, may be small to negligible for the suitable parameters of the system. This applies, in particular, in the case when the perturbing body is on an eccentric orbit.
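For context, the classical (quadrupole-order, test-particle) Kozai–Lidov mechanism without the cluster potential gives a closed-form maximum eccentricity for an initially near-circular orbit; the extended potential studied in the paper modifies this picture. A sketch of the standard point-mass result (not the paper's extended-potential model):

```python
import math

def kl_e_max(i0_deg):
    """Maximum eccentricity reached in classical (quadrupole-order,
    test-particle) Kozai-Lidov oscillations, starting from a nearly
    circular orbit inclined by i0 degrees to the perturber's plane:
    e_max = sqrt(1 - (5/3) * cos(i0)**2), nonzero only for
    inclinations between ~39.2 and ~140.8 degrees."""
    c = math.cos(math.radians(i0_deg))
    if c * c > 3.0 / 5.0:      # below the critical inclination: no oscillation
        return 0.0
    return math.sqrt(1.0 - (5.0 / 3.0) * c * c)

print(round(kl_e_max(90.0), 2))   # polar orbit: eccentricity driven to ~1
print(round(kl_e_max(60.0), 3))
```

The damping or enhancement reported in the abstract can be read as deviations of the actual maximum eccentricity from this idealized value.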


2021 ◽  
Author(s):  
Heikki Vanhamäki ◽  
Anita Aikio ◽  
Kirsti Kauristie ◽  
Sebastian Käki ◽  
David Knudsen

Height-integrated ionospheric Pedersen and Hall conductances play a major role in ionospheric electrodynamics and magnetosphere–ionosphere coupling. The Pedersen conductance in particular is a crucial parameter for estimating ionospheric energy dissipation via Joule heating. Unfortunately, the conductances are rather difficult to measure directly over extended regions, so statistical models and various proxies are often used.

We discuss a method for estimating the Pedersen conductance from magnetic and electric field data provided by the Swarm satellites. We need to assume that the height-integrated Pedersen current is identical to the curl-free part of the height-integrated ionospheric horizontal current density, which is strictly valid only if the conductance gradients are parallel to the electric field. This may not hold in individual cases but could be a good approximation in a statistical sense. Further assuming that the cross-track magnetic disturbance measured by Swarm is mostly produced by field-aligned currents and not affected by the ionospheric electrojets, we can use the cross-track ion velocity and the magnetic perturbation to directly estimate the height-integrated Pedersen conductance.

We present initial results of a statistical study utilizing 5 years of data from the Swarm-A and Swarm-B spacecraft, and discuss possible applications of the results and limitations of the method.
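Under sheet-current assumptions like those above, a commonly used scaling relates the cross-track magnetic perturbation to the conductance via ΔB = μ₀ Σ_P E, with the electric field obtained from the ion drift as E = vB. A rough sketch with illustrative auroral-zone numbers (the values, and the exact estimator the authors use, are assumptions here, not taken from the abstract):

```python
MU0 = 4e-7 * 3.141592653589793   # vacuum permeability, H/m

def pedersen_conductance(delta_b_t, v_ion_ms, b0_t):
    """Height-integrated Pedersen conductance (siemens) from the
    cross-track magnetic perturbation delta_b_t (tesla), cross-track
    ion drift v_ion_ms (m/s), and ambient field b0_t (tesla),
    assuming sheet-like field-aligned currents so that
    delta_B = mu0 * Sigma_P * E, with E = v * B from E = -v x B."""
    e_field = v_ion_ms * b0_t            # electric field, V/m
    return delta_b_t / (MU0 * e_field)

# Illustrative inputs: 200 nT perturbation, 500 m/s drift, 50000 nT field
sigma_p = pedersen_conductance(200e-9, 500.0, 5e-5)
print(round(sigma_p, 1))   # a few siemens, a plausible auroral value
```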


Author(s):  
José A. Garzón-Guerrero ◽  
Silvia Valenzuela ◽  
Carmen Batanero
Keyword(s):  

2021 ◽  
pp. 002242782098421
Author(s):  
Aaron Chalfin ◽  
Jacob Kaplan ◽  
Maria Cuellar

Objectives: In his 2014 Sutherland address to the American Society of Criminology, David Weisburd demonstrated that the share of crime that is accounted for by the most crime-ridden street segments is notably high and strikingly similar across cities, an empirical regularity referred to as the “law of crime concentration.” In the large literature that has since proliferated, there remains considerable debate as to how crime concentration should be measured empirically. We suggest a measure of crime concentration that is simple, accurate, and easily interpreted. Methods: Using data from three of the largest cities in the United States, we compare observed crime concentration to a counterfactual distribution of crimes generated by randomizing crimes to street segments. We show that this method avoids a key pitfall that causes a popular method of measuring crime concentration to considerably overstate the degree of crime concentration in a city. Results: While crime is significantly concentrated in a statistical sense, and while some crimes are substantively concentrated among hot spots, the precise relationship is considerably weaker than has been documented in the empirical literature. Conclusions: The method we propose is simple and easily interpretable and complements recent advances which use the Gini coefficient to measure crime concentration.
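The randomization benchmark described in the Methods can be sketched as follows. The segment and crime counts are illustrative, not the paper's data; the point is that with many segments and comparatively few crimes, even uniformly random placement makes the "top 5 %" of segments hold far more than 5 % of crime:

```python
import random
random.seed(0)

def share_in_top(crime_counts, top_frac=0.05):
    """Share of all crimes on the top `top_frac` of street segments."""
    counts = sorted(crime_counts, reverse=True)
    k = max(1, int(len(counts) * top_frac))
    return sum(counts[:k]) / sum(counts)

def randomized_baseline(n_crimes, n_segments, top_frac=0.05, reps=100):
    """Counterfactual concentration when crimes are assigned to
    segments uniformly at random, i.e. from sampling variation alone."""
    shares = []
    for _ in range(reps):
        counts = [0] * n_segments
        for _ in range(n_crimes):
            counts[random.randrange(n_segments)] += 1
        shares.append(share_in_top(counts, top_frac))
    return sum(shares) / reps

# 2000 crimes scattered over 10000 segments: the top 5 % of segments
# capture roughly a third of crime purely by chance.
print(randomized_baseline(n_crimes=2000, n_segments=10000))
```

Comparing the observed top-segment share against this baseline, rather than against zero, is what prevents the overstatement the abstract describes.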


Author(s):  
Magnus Andersson ◽  
Matts Karlsson

Abstract Turbulent-like hemodynamics with prominent cycle-to-cycle flow variations have received increased attention as a potential stimulus for cardiovascular diseases. These turbulent conditions are typically evaluated in a statistical sense from single scalars extracted from ensemble-averaged tensors (such as the Reynolds stress tensor), limiting the amount of information that can be used for physical interpretations and quality assessments of numerical models. In this study, barycentric anisotropy invariant mapping was used to demonstrate an efficient and comprehensive approach to characterize turbulence-related tensor fields in patient-specific cardiovascular flows, obtained from scale-resolving large eddy simulations. These techniques were also used to analyze some common modeling compromises as well as MRI turbulence measurements through an idealized constriction. The proposed method found explicit sites of elevated turbulence anisotropy, including a broad but time-varying spectrum of characteristics over the flow deceleration phase, which differed under both the steady-inflow and the Reynolds-averaged Navier–Stokes modeling assumptions. Qualitatively, the MRI results showed the overall expected post-stenotic turbulence characteristics, however, also with apparent regions of unrealizable or conceivably physically unrealistic conditions, including the highest turbulence intensity ranges. These findings suggest that more detailed studies of MRI-measured turbulence fields are needed, which hopefully can be assisted by more comprehensive evaluation tools such as the ones described herein.
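The barycentric anisotropy mapping mentioned above follows a standard construction: normalize the Reynolds stress tensor into an anisotropy tensor, take its eigenvalues, and map them to three weights that sum to one. A minimal sketch assuming the common definitions of the barycentric coordinates (C_1c, C_2c, C_3c), not the paper's specific post-processing pipeline:

```python
import numpy as np

def barycentric_coordinates(reynolds_stress):
    """Barycentric anisotropy coordinates of a 3x3 Reynolds stress
    tensor R: a_ij = R_ij / (2k) - delta_ij / 3 with k = R_ii / 2,
    eigenvalues l1 >= l2 >= l3, then
    C_1c = l1 - l2 (one-component limit),
    C_2c = 2*(l2 - l3) (two-component limit),
    C_3c = 3*l3 + 1 (isotropic limit); the three sum to 1."""
    r = np.asarray(reynolds_stress, dtype=float)
    k = 0.5 * np.trace(r)                      # turbulent kinetic energy
    a = r / (2.0 * k) - np.eye(3) / 3.0        # anisotropy tensor
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(a))[::-1]
    return l1 - l2, 2.0 * (l2 - l3), 3.0 * l3 + 1.0

# Isotropic turbulence lands on the three-component corner (0, 0, 1);
# a one-component stress state lands on (1, 0, 0).
print(barycentric_coordinates(np.diag([2.0, 2.0, 2.0])))
print(barycentric_coordinates(np.diag([2.0, 0.0, 0.0])))
```

Plotting (C_1c, C_2c, C_3c) as weights on a triangle gives the barycentric map on which anisotropy states, and unrealizable measurement artifacts outside the triangle, can be read off.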


2020 ◽  
Vol 20 (10) ◽  
pp. 2665-2680
Author(s):  
Ina Teutsch ◽  
Ralf Weisse ◽  
Jens Moeller ◽  
Oliver Krueger

Abstract. A new wave data set from the southern North Sea, covering the period 2011–2016 and composed of wave buoy and radar measurements sampling the sea surface height at frequencies between 1.28 and 4 Hz, was quality controlled and scanned for the presence of rogue waves. Here, rogue waves refer to waves whose height exceeds twice the significant wave height. Rogue wave frequencies were analyzed and compared to Rayleigh and Forristall distributions, and spatial, seasonal, and long-term variability was assessed. Rogue wave frequency appeared to be relatively constant over the course of the year and uncorrelated among the different measurement sites. While data from buoys basically correspond with expectations from the Forristall distribution, radar measurements showed some deviations in the upper tail, pointing towards higher rogue wave frequencies. The amount of data available in the upper tail is, however, still too limited to allow a robust assessment. Some indications were found that the distribution of waves in samples with and without rogue waves differed in a statistical sense. However, the differences were small and deemed not to be relevant, as attempts to use them as a criterion for rogue wave detection were not successful in Monte Carlo experiments based on the available data.
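Under the Rayleigh reference distribution mentioned above, the expected rarity of rogue waves (H > 2 Hs) follows in closed form; a small sketch:

```python
import math

def rayleigh_exceedance(alpha):
    """Probability that an individual wave height exceeds alpha times
    the significant wave height under the Rayleigh wave-height
    distribution: P(H > alpha * Hs) = exp(-2 * alpha**2)."""
    return math.exp(-2.0 * alpha ** 2)

p_rogue = rayleigh_exceedance(2.0)   # the H > 2*Hs rogue criterion
print(p_rogue)                       # exp(-8), roughly 3.4e-4
print(round(1.0 / p_rogue))          # i.e. about one wave in ~3000
```

Deviations of the observed upper-tail frequencies from this baseline (and from the empirically adjusted Forristall distribution) are what the comparison in the abstract quantifies.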


This article focuses on the definitions of several widely used statistical terms such as degrees of freedom, location, range, dispersion, and grouped and ungrouped data. The terms have been redefined, along with examples, so that they stand alone to express their meaning. In this article, a new term, ‘the smallest unit’ in a statistical sense, has been defined and illustrated in some instances. It is also indicated how statisticians or practitioners of statistics use it, knowingly or unknowingly. We have discussed the application of the smallest unit in the classification of data. Moreover, the concept of the smallest unit has been synced with the definition of the sample range so that the range can cover the entire space of values; the proposed sample range can therefore better approximate the population range. We have shown that researchers can end up with misleading results if they treat a dataset as ungrouped data when it is truly grouped data. This has been discussed in the computation of different percentiles. Moreover, the crux of the definitions of degrees of freedom and dispersion has been pointed out, which helps dispel the confusion behind these terms. We have shown how the concept of linearly independent pieces of information is related to the definition of degrees of freedom. We have also emphasized not mixing the definition of standard deviation and/or variance with the whole concept of dispersion, because the former is merely a single measure among many measures of the latter.
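The abstract does not give the paper's formulas, but two of the ideas (a sample range widened by the smallest measurement unit, and grouped-data percentiles obtained by interpolation rather than from raw values) can be sketched under common textbook conventions, with hypothetical data:

```python
def range_with_unit(data, smallest_unit):
    """Sample range widened by the smallest measurement unit, so the
    range covers the whole interval each recorded value represents
    (e.g. an age recorded as 21 whole years stands for [20.5, 21.5))."""
    return max(data) - min(data) + smallest_unit

def grouped_median(class_bounds, freqs):
    """Median from grouped (binned) data by linear interpolation
    within the median class (standard textbook formula)."""
    n = sum(freqs)
    cum = 0
    for (low, high), f in zip(class_bounds, freqs):
        if cum + f >= n / 2:
            return low + (n / 2 - cum) / f * (high - low)
        cum += f

ages = [21, 25, 25, 30, 34]              # recorded to whole years
print(range_with_unit(ages, 1))          # 14 rather than the naive 13

# The same data binned into classes gives a different median estimate
# (28.75) than the raw-data median (25), illustrating why treating
# grouped data as ungrouped can mislead:
print(grouped_median([(20, 25), (25, 30), (30, 35)], [1, 2, 2]))
```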

