homogeneity assumption
Recently Published Documents


TOTAL DOCUMENTS: 61 (five years: 19)
H-INDEX: 11 (five years: 1)

Author(s): Simon Labrunie, Ibtissem Zaafrani

We consider a linearized Euler--Maxwell model for the propagation and absorption of electromagnetic waves in a magnetized plasma. We present the derivation of the model and show its well-posedness, its strong and polynomial stability under suitable and fairly general assumptions, its exponential stability under the same conditions as the Maxwell system, and finally its convergence to the time-harmonic regime. No homogeneity assumption is made, and the topological and geometrical assumptions on the domain are minimal. These results appear strongly linked to the spectral properties of various matrices describing the anisotropy and other plasma properties.
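For orientation, such a model typically couples Maxwell's equations to a linearized cold-plasma current equation. The following is a sketch in our own conventions, not the authors' exact system:

```latex
% Sketch of a linearized Euler--Maxwell model for a cold magnetized plasma
% (our conventions; the paper's exact system and boundary conditions differ).
\begin{aligned}
\partial_t \mathbf{E} &= c^{2}\,\nabla\times\mathbf{B} \;-\; \varepsilon_0^{-1}\,\mathbf{J},\\
\partial_t \mathbf{B} &= -\,\nabla\times\mathbf{E},\\
\partial_t \mathbf{J} &= \varepsilon_0\,\omega_p^{2}(x)\,\mathbf{E}
  \;-\; \omega_c(x)\,\mathbf{J}\times\mathbf{b}(x) \;-\; \nu(x)\,\mathbf{J}.
\end{aligned}
% \omega_p: plasma frequency; \omega_c: cyclotron frequency; \mathbf{b}: unit
% vector along the background magnetic field; \nu: collision (absorption) rate.
% Letting all coefficients vary in x is what dropping the homogeneity assumption means.
```

The rotation term J × b and the damping term ν J are encoded by matrices of exactly the kind whose spectral properties the abstract invokes.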


2021 · Vol 11 (2)
Author(s): Eirik Strømland

Abstract. This paper argues that some of the discussion around meta-scientific issues can be viewed as an argument over different “meta-hypotheses” – assumptions made about how different hypotheses in a scientific literature relate to each other. I argue that such meta-hypotheses are currently left unstated outside methodological papers, and that, as a consequence, it is hard to determine what can be learned from a direct replication study. I argue in favor of a procedure dubbed the “limited homogeneity assumption” – assuming very little heterogeneity of effect sizes when a literature is initiated, but switching to an assumption of heterogeneity once an initial finding has been successfully replicated in a direct replication study. Until that has happened, we do not allow the literature to proceed to a mature stage. This procedure elevates the scientific status of direct replication studies: a well-designed direct replication study becomes a means of falsifying an overall claim in an early phase of a literature and thus sets up a hurdle against the canonization of false facts in the behavioral sciences.
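A toy simulation (ours; all numbers hypothetical) illustrates why the meta-hypothesis matters. Under the homogeneity assumption (tau = 0), a well-powered direct replication that fails is strong evidence against the original claim; under heterogeneity (tau > 0), the replication's true effect may simply have drifted, so the same null result is far less diagnostic:

```python
# tau is the SD of between-study effect-size heterogeneity; tau = 0 is the
# homogeneity case. All parameters below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_per_arm, mu, n_sims = 100, 0.3, 100_000
se = np.sqrt(2 / n_per_arm)          # SE of a standardized mean difference

for tau in (0.0, 0.2, 0.4):
    true_effects = mu + rng.normal(0.0, tau, n_sims)   # replication's true effect
    estimates = true_effects + rng.normal(0.0, se, n_sims)
    power = np.mean(estimates / se > 1.96)             # significant, right sign
    drifted = np.mean(true_effects < 0.1)              # effect largely absent anyway
    print(f"tau={tau:.1f}: replication power={power:.2f}, "
          f"P(true effect < 0.1)={drifted:.2f}")
```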


2021 · Vol 13 (6) · pp. 1098
Author(s): Egor Prikaziuk, Peiqi Yang, Christiaan van der Tol

In this study, we demonstrate that the Google Earth Engine (GEE) dataset of Sentinel-3 Ocean and Land Color Instrument (OLCI) level-1 deviates from the original Copernicus Open Access Data Hub Service (DHUS) data by 10–20 W m−2 sr−1 μm−1 per pixel per band. We compared GEE and DHUS single-pixel time series for the period from April 2016 to September 2020 and identified two sources of this discrepancy: the ground pixel position and reprojection. The ground pixel position of the OLCI product can be determined in two ways: from geo-coordinates (DHUS) or from tie-point coordinates (GEE). We recommend using geo-coordinates for pixel extraction from the original data. When the Sentinel Application Platform (SNAP) Pixel Extraction Tool is used, an additional distance check has to be conducted to exclude pixels that lie farther than 212 m from the point of interest. Even geo-coordinates-based pixel extraction requires the homogeneity of the target area within a 700 m diameter (49 ha) footprint (double the pixel resolution). The GEE OLCI dataset can be safely used if the homogeneity assumption holds at 2700 m diameter (9-by-9 OLCI pixels) or if an uncertainty in the radiance of 10% is not critical for the application. Further analysis showed that the scaling factors reported in the GEE dataset description must not be used. Finally, observation geometry and meteorological data are not present in the GEE OLCI dataset, but they are crucial for most applications. Therefore, we propose to calculate angles and extraterrestrial solar fluxes and to use an alternative data source, the Copernicus Atmosphere Monitoring Service (CAMS) dataset, for meteorological data.
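A minimal version of the recommended distance check could look as follows (our code; the function names and the haversine approximation are ours, while the 212 m threshold comes from the abstract):

```python
# Drop OLCI pixels whose centres lie farther than 212 m from the point of
# interest, as recommended after SNAP Pixel Extraction.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

MAX_DIST_M = 212.0  # threshold from the abstract

def keep_pixel(poi, pixel_centre):
    """True if the extracted pixel centre is close enough to the target point."""
    return haversine_m(poi[0], poi[1], pixel_centre[0], pixel_centre[1]) <= MAX_DIST_M
```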


2021 · Vol 8
Author(s): Rachel Honnert, Valéry Masson, Christine Lac, Tim Nagel

A new mixing length adapted to the constraints of the hectometric-scale gray zone of turbulence for neutral and convective boundary layers is proposed. It combines a mixing length for mesoscale simulations, where the turbulence is fully subgrid, with a mixing length for Large-Eddy Simulations (LES), where the coarsest turbulent eddies are explicitly resolved. The mixing length is built for isotropic turbulence schemes, as well as for schemes using the horizontal homogeneity assumption. This mixing length is tested on three boundary layer cases: a free-convection case, a neutral case, and a cold-air-outbreak case. The latter combines turbulence of thermal and dynamical origins as well as the presence of clouds. With this new mixing length, the turbulence scheme produces the right proportion between subgrid and resolved turbulent exchanges in Large-Eddy Simulations, in the gray zone, and at the mesoscale. This opens the way to using a single mixing length whatever the grid mesh of the atmospheric model, the evolution stage, or the depth of the boundary layer.
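Schematically (our notation, not the authors' exact formulation), such a gray-zone mixing length can be written as a grid-spacing-dependent blend:

```latex
% L_meso applies when turbulence is fully subgrid, L_LES when the largest
% eddies are resolved; the weight depends on the grid spacing \Delta x
% relative to the boundary-layer height h.
L_{\mathrm{gz}} = \alpha\!\left(\frac{\Delta x}{h}\right) L_{\mathrm{meso}}
  + \left[\,1 - \alpha\!\left(\frac{\Delta x}{h}\right)\right] L_{\mathrm{LES}},
\qquad
\alpha \to 1 \ \text{for}\ \Delta x \gg h \ \text{(mesoscale)}, \qquad
\alpha \to 0 \ \text{for}\ \Delta x \ll h \ \text{(LES)}.
```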


2020 · Vol 2020 · pp. 1-11
Author(s): Juan Li, Hui Zhang, Yanru Zhang, Xuan Zhang

Stopping behavior during yellow intervals is one of the critical driver behaviors correlated with intersection safety. As the main index of stopping behavior, stopping time is typically described by an Accelerated Failure Time (AFT) model. In this study, a comparison of survival curves of stopping time confirms the existence of group-specific effects among drivers. However, the AFT model is built on the homogeneity assumption. To overcome this drawback, shared frailty survival models, which account for the group heterogeneity of drivers, are developed for stopping-time analysis. The results show that the log-logistic-based frailty model with age as the grouping variable has the best goodness of fit and prediction accuracy. Analysis of the models' parameters indicates that phone status, maximum deceleration, vehicle speed, and the distance to the stop line at the onset of the yellow signal have significant impacts on stopping time. Additionally, heterogeneity analysis illustrates that young, middle-aged, and female drivers are more likely to brake harshly and stop past the stop line, which may block the intersection. Furthermore, drivers who are more familiar with traffic environments are more likely to make reasonable stopping decisions when approaching intersections. The results can be utilized by traffic authorities to implement road safety strategies, which will help reduce traffic incidents caused by improper stopping behavior at intersections.
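For readers unfamiliar with the model class, a generic shared-frailty specification of this type looks as follows (our notation; the paper's covariates and parameterization may differ):

```latex
% Driver i in group j (e.g., an age group) has hazard
h_{ij}(t \mid u_j) = u_j \, h_0(t)\, \exp\!\left(\beta^{\top} x_{ij}\right),
\qquad u_j \sim \mathrm{Gamma}\!\left(1/\theta,\ \theta\right),
% with a log-logistic baseline survival function
S_0(t) = \left[\,1 + (\lambda t)^{\kappa}\,\right]^{-1}.
% The shared frailty u_j (mean 1, variance \theta) absorbs unobserved
% group-level heterogeneity; \theta \to 0 recovers the homogeneity
% assumption of the plain AFT model.
```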


2020 · Vol 10 (1)
Author(s): Viet Dung Nguyen, Ayse Duha Metin, Lorenzo Alfieri, Sergiy Vorogushyn, Bruno Merz

Abstract. Recently, flood risk assessments have been extended to national and continental scales. Most of these assessments assume homogeneous scenarios, i.e. the regional risk estimate is obtained by summing up the local estimates, where each local damage value has the same probability of exceedance. This homogeneity assumption ignores the spatial variability in the flood generation processes. Here, we develop a multi-site extreme value statistical model for 379 catchments across Europe, generate synthetic flood time series which account for the spatial correlation between flood peaks in all catchments, and compute the corresponding economic damages. We find that the homogeneity assumption overestimates the 200-year flood damage, a benchmark indicator for the insurance industry, by 139%, 188% and 246% for the United Kingdom (UK), Germany and Europe, respectively. Our study demonstrates the importance of considering the spatial dependence patterns, particularly of extremes, in large-scale risk assessments.
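The mechanism can be illustrated with a toy Monte Carlo sketch (ours, with invented correlation, GEV, and damage parameters): correlated flood peaks sampled via a Gaussian copula versus the fully homogeneous (comonotone) scenario, whose 200-year regional damage comes out higher:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sites, n_years, rho = 5, 100_000, 0.6
cov = np.full((n_sites, n_sites), rho) + (1 - rho) * np.eye(n_sites)

# Spatially dependent peaks: Gaussian copula with GEV margins.
z = rng.multivariate_normal(np.zeros(n_sites), cov, size=n_years)
u = stats.norm.cdf(z)
peaks = stats.genextreme.ppf(u, c=-0.1, loc=100, scale=30)
damage = np.maximum(peaks - 150, 0).sum(axis=1)        # toy damage model
print("dependent 200-yr damage  :", np.quantile(damage, 1 - 1 / 200))

# Homogeneity assumption: all sites exceed their local quantile together.
u_hom = stats.norm.cdf(rng.standard_normal(n_years))[:, None] * np.ones(n_sites)
peaks_hom = stats.genextreme.ppf(u_hom, c=-0.1, loc=100, scale=30)
damage_hom = np.maximum(peaks_hom - 150, 0).sum(axis=1)
print("homogeneous 200-yr damage:", np.quantile(damage_hom, 1 - 1 / 200))
```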


2020
Author(s): Louis-Marie Harpedanne de Belleville

Summary. Contagion happens through heterogeneous interpersonal relations (homophily), which induce contamination clusters. Group testing is increasingly recognized as necessary to fight the asymptomatic transmission of COVID-19, yet it is plagued by false negatives. Homophily can be taken into account to design test pools that encompass potential contamination clusters. I show that this makes it possible to overcome the usual information-theoretic limits of group testing, which rest on an implicit homogeneity assumption. Even more interestingly, a multiple-step testing strategy that combines this approach with advanced complementary exams for all individuals in pools identified as positive can identify asymptomatic carriers who would be missed even by costly exhaustive individual tests. Recent advances in group testing have brought large gains in efficiency, but within the bounds of the above-cited information-theoretic limits, and without tackling the false-negatives issue that is crucial for COVID-19. Homophily has been considered in the contagion literature already, but not as a way to improve group testing.
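A toy simulation (ours, not the paper's model) of the core idea: when contacts cluster, aligning pools with clusters concentrates positives into few pools, whereas random pooling, the implicit homogeneity design, spreads them across many:

```python
import numpy as np

rng = np.random.default_rng(2)
n_clusters, cluster_size, p_cluster_infected = 200, 8, 0.05

# Homophily: within an infected cluster, most members are infected.
cluster_status = rng.random(n_clusters) < p_cluster_infected
infected = (rng.random((n_clusters, cluster_size)) < 0.7) & cluster_status[:, None]

pools_clustered = infected.any(axis=1)                    # one pool per cluster
shuffled = rng.permutation(infected.ravel()).reshape(n_clusters, cluster_size)
pools_random = shuffled.any(axis=1)                       # random pools, same size

print("positive pools, cluster-aligned:", pools_clustered.sum())
print("positive pools, random pooling :", pools_random.sum())
```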


Diagnostics · 2020 · Vol 10 (5) · pp. 334
Author(s): Oke Gerke

The Bland–Altman Limits of Agreement is a popular and widespread means of analyzing the agreement of two methods, instruments, or raters in quantitative outcomes. An agreement analysis can be reported as a stand-alone research article, but it is more often conducted as a minor quality-assurance project in a subgroup of patients, as part of a larger diagnostic accuracy study, clinical trial, or epidemiological survey. Consequently, such an analysis is often limited to brief descriptions in the main report. Therefore, in several medical fields, it has been recommended to report specific items related to the Bland–Altman analysis. The present study aimed to identify the most comprehensive and appropriate list of items for such an analysis. Seven proposals were identified from a MEDLINE/PubMed search on March 3, 2020, three of which were derived by reviewing anesthesia journals. Broad consensus was seen for the a priori establishment of acceptability benchmarks, estimation of the repeatability of measurements, description of the data structure, visual assessment of the normality and homogeneity assumptions, and plotting and numerically reporting both the bias and the Bland–Altman Limits of Agreement, including the respective 95% confidence intervals. Abu-Arafeh et al. provided the most comprehensive and prudent list, identifying 13 key items for reporting (Br. J. Anaesth. 2016, 117, 569–575). An exemplification with interrater data from a local study illustrated the straightforwardness of transparently reporting a Bland–Altman analysis. The 13 key items should be applied by researchers, journal editors, and reviewers in the future to increase the quality of reporting Bland–Altman agreement analyses.
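As a companion to the reporting items, a minimal computation of the bias, the limits of agreement, and their 95% confidence intervals might look like this (our sketch; the CI formulas follow the usual normal-approximation approach, and all variable names and data are ours):

```python
import numpy as np
from scipy import stats

def bland_altman(a, b, alpha=0.05):
    """Bias, 95% limits of agreement, and normal-approximation CIs."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n, bias, sd = d.size, d.mean(), d.std(ddof=1)
    z = stats.norm.ppf(0.975)                    # 1.96 for 95% LoA
    loa = (bias - z * sd, bias + z * sd)
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    se_bias = sd / np.sqrt(n)
    se_loa = sd * np.sqrt(1 / n + z**2 / (2 * (n - 1)))  # approx. SE of each LoA
    return {"bias": bias,
            "bias_ci": (bias - t * se_bias, bias + t * se_bias),
            "loa": loa,
            "loa_ci_halfwidth": t * se_loa}

a = [10.2, 9.8, 11.1, 10.5, 9.9, 10.7]   # method A (invented data)
b = [10.0, 10.1, 10.8, 10.9, 9.5, 10.4]  # method B (invented data)
print(bland_altman(a, b))
```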


2020 · Vol 13 (3) · pp. 1609-1631
Author(s): Philipp Gasch, Andreas Wieser, Julie K. Lundquist, Norbert Kalthoff

Abstract. Wind profiling by Doppler lidar is common practice and highly useful in a wide range of applications. Airborne Doppler lidar can provide additional insights relative to ground-based systems by allowing for spatially distributed and targeted measurements. Providing a link between theory and measurement, a first large-eddy simulation (LES)-based airborne Doppler lidar simulator (ADLS) has been developed. Simulated measurements are conducted based on LES wind fields, considering the coordinate and geometric transformations applicable to real-world measurements. The ADLS provides added value because the input truth used to create the measurements is known exactly, which is nearly impossible in real-world situations. Thus, valuable insight can be gained into measurement system characteristics as well as retrieval strategies. As an example application, airborne Doppler lidar wind profiling is investigated using the ADLS. Commonly used airborne velocity azimuth display (AVAD) techniques assume flow homogeneity throughout the retrieval volume, a condition which is violated in turbulent boundary layer flow. Assuming an ideal measurement system, the ADLS makes it possible to isolate and evaluate the wind profiling error that arises from the violation of the flow homogeneity assumption. Overall, the ADLS demonstrates that wind profiling is possible in turbulent wind field conditions with reasonable errors (a root mean squared error of 0.36 m s−1 for wind speed with a commonly used system setup and retrieval strategy under the conditions investigated). Nevertheless, flow inhomogeneity, e.g., due to boundary layer turbulence, can make an important contribution to the wind profiling error and is non-negligible. The results suggest that airborne Doppler lidar wind profiling at low wind speeds (<5 m s−1) can be biased if conducted in regions of inhomogeneous flow.
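For orientation, a bare-bones AVAD-style retrieval under the uniform-wind (flow homogeneity) assumption can be sketched as follows (our simplification of the general approach; the scan geometry, noise level, and wind values are invented):

```python
import numpy as np

def avad_fit(vr, az_rad, elev_rad):
    """Least-squares (u, v, w) from radial velocities vr observed at
    azimuths az_rad and elevation angles elev_rad (radians)."""
    A = np.column_stack([
        np.sin(az_rad) * np.cos(elev_rad),   # projection onto u (east)
        np.cos(az_rad) * np.cos(elev_rad),   # projection onto v (north)
        np.sin(elev_rad),                    # projection onto w (vertical)
    ])
    sol, *_ = np.linalg.lstsq(A, vr, rcond=None)
    return sol

az = np.linspace(0, 2 * np.pi, 36, endpoint=False)
elev = np.full_like(az, np.radians(-70.0))   # downward-looking airborne beams
u, v, w = 5.0, -2.0, 0.1                     # "true" uniform wind
vr = (np.sin(az) * np.cos(elev) * u + np.cos(az) * np.cos(elev) * v
      + np.sin(elev) * w) + np.random.default_rng(3).normal(0, 0.2, az.size)
print(avad_fit(vr, az, elev))                # approx. [5.0, -2.0, 0.1]
```

Turbulent fluctuations across the scan circle violate the uniform-wind premise of this least-squares model, which is exactly the error source the ADLS isolates.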

