A Revised Framework to Evaluate the Consistency Assumption Globally in a Network of Interventions

2021 ◽  
pp. 0272989X2110680
Author(s):  
Loukia M. Spineli

Background: The unrelated mean effects (UME) model has been proposed for evaluating the consistency assumption globally in a network of interventions. However, the UME model does not accommodate multiarm trials properly and omits comparisons between nonbaseline interventions in multiarm trials that are not investigated in 2-arm trials.

Methods: We proposed a refinement of the UME model that tackles the limitations mentioned above. We also accompanied the scatterplots of the posterior mean deviance contributions of the trial arms under the network meta-analysis (NMA) and UME models with Bland-Altman plots to detect outlying trials contributing to poor model fit. We applied the refined and original UME models to 2 networks with multiarm trials.

Results: The original UME model omitted more than 20% of the observed comparisons in both networks. Thorough inspection of the individual data points' deviance contributions using complementary plots, in conjunction with the measures of model fit and the estimated between-trial variance, indicated that the refined and original UME models revealed possible inconsistency in both examples.

Conclusions: The refined UME model allows proper accommodation of multiarm trials and visualization of all observed evidence in complex networks of interventions. Furthermore, considering several complementary plots to investigate deviance helps draw informed conclusions on the possibility of global inconsistency in the network.

Highlights: We have refined the unrelated mean effects (UME) model to incorporate multiarm trials properly and to estimate all observed comparisons in complex networks of interventions. Forest plots with posterior summaries of all observed comparisons under the network meta-analysis and refined UME models can uncover the consequences of potential inconsistency in the network. Using complementary plots to investigate the individual data points' deviance contributions, in conjunction with model fit measures and estimated heterogeneity, aids in detecting possible inconsistency.
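The Bland-Altman comparison of deviance contributions described above can be sketched in a few lines; the per-arm deviance values below are illustrative placeholders, not results from the paper:

```python
import numpy as np

# Hypothetical posterior mean deviance contributions per trial arm
# under the NMA and UME models (illustrative values only).
dev_nma = np.array([0.9, 1.1, 2.5, 0.8, 1.0, 3.9, 1.2, 0.7])
dev_ume = np.array([1.0, 1.0, 1.1, 0.9, 1.1, 1.0, 1.3, 0.8])

mean_dev = (dev_nma + dev_ume) / 2    # x-axis of a Bland-Altman plot
diff_dev = dev_nma - dev_ume          # y-axis: disagreement between models
bias = diff_dev.mean()                # systematic difference
loa = 1.96 * diff_dev.std(ddof=1)     # half-width of 95% limits of agreement

# Arms falling outside the limits flag trials contributing to poor fit.
outliers = np.where(np.abs(diff_dev - bias) > loa)[0]
```

Plotting `diff_dev` against `mean_dev` with horizontal lines at `bias` and `bias ± loa` reproduces the standard Bland-Altman layout.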

1995 ◽  
Vol 23 (4) ◽  
pp. 315-326
Author(s):  
Ronald D. Flack

Uncertainties in least squares curve fits to data with uncertainties are examined. First, experimental data with nominal curve shapes, representing property profiles between boundaries, are simulated by adding known uncertainties to individual points. Next, curve fits to the simulated data are generated and compared to the nominal curves. By using a large number of different data sets, statistical differences between the two curves are quantified and, thus, the uncertainty of the curve fit is derived. Studies for linear, quadratic, and higher-order nominal curves with curve fits up to fourth order are presented herein. Typically, the uncertainties of the curve fits are 50% or less of those of the individual data points. These uncertainties increase with increasing order of the least squares fit and decrease with increasing number of data points on the curve.
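The Monte Carlo procedure described above (perturb the data, refit, compare) can be sketched for a linear nominal curve; the point uncertainty, number of points, and trial count are illustrative choices, not the paper's:

```python
import numpy as np

# Monte Carlo sketch: add known noise to a nominal linear profile,
# refit by least squares, and measure the scatter of the fitted curve.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 21)      # 21 points between the boundaries
nominal = 2.0 + 3.0 * x            # nominal linear property profile
sigma = 0.1                        # known uncertainty of each data point

n_trials = 2000
fits_at_mid = []
for _ in range(n_trials):
    noisy = nominal + rng.normal(0.0, sigma, size=x.size)
    coeffs = np.polyfit(x, noisy, deg=1)         # first-order least squares
    fits_at_mid.append(np.polyval(coeffs, 0.5))  # fitted value at mid-span

fit_uncertainty = float(np.std(fits_at_mid))
# For a linear fit, the curve-fit uncertainty at mid-span is roughly
# sigma / sqrt(n) -- well under the 50% bound quoted above.
```

Repeating this for higher fit orders and fewer points shows the two trends reported in the abstract: the fit uncertainty grows with fit order and shrinks as points are added.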


2019 ◽  
Author(s):  
Christine Nothelfer ◽  
Steven Franconeri

The power of data visualization is not to convey absolute values of individual data points, but to allow the exploration of relations (increases or decreases in a data value) among them. One approach to highlighting these relations is to explicitly encode the numeric differences (deltas) between data values. Because this approach removes the context of the individual data values, it is important to measure how much of a performance improvement it actually offers, especially across differences in encodings and tasks, to ensure that it is worth adding to a visualization design. Across 3 different tasks, we measured the increase in visual processing efficiency for judging the relations between pairs of data values, from when only the values were shown, to when the deltas between the values were explicitly encoded, across position and length visual feature encodings (and slope encodings in Experiments 1 & 2). In Experiment 1, the participant’s task was to locate a pair of data values with a given relation (e.g., Find the ‘small bar to the left of a tall bar’ pair) among pairs of the opposite relation, and we measured processing efficiency from the increase in response times as the number of pairs increased. In Experiment 2, the task was to judge which of two relation types was more prevalent in a briefly presented display of 10 data pairs (e.g., Are there more ‘small bar to the left of a tall bar’ pairs or more ‘tall bar to the left of a small bar’ pairs?). In the final experiment, the task was to estimate the average delta within a briefly presented display of 6 data pairs (e.g., What is the average bar height difference across all ‘small bar to the left of a tall bar’ pairs?). Across all three experiments, visual processing of relations between data value pairs was significantly better when directly encoded as deltas rather than implicitly between individual data points, and varied substantially depending on the task (improvement ranged from 25% to 95%). 
Given the ubiquity of bar charts and dot plots, this inefficiency of relation perception across individual data values confirms the need for alternative designs that provide not only absolute values but also direct encodings of the critical relationships between them.
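The response-time-slope measure of processing efficiency used in Experiment 1 can be sketched as follows; the response times below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical mean response times (ms) in a relation-search task at
# several display set sizes (number of data-value pairs shown).
set_sizes = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
rt_values_only = np.array([620.0, 810.0, 1005.0, 1190.0, 1395.0])
rt_deltas = np.array([600.0, 640.0, 685.0, 720.0, 765.0])

# Search slope (ms per additional pair): a shallower slope means more
# efficient visual processing of the relations.
slope_values = np.polyfit(set_sizes, rt_values_only, 1)[0]
slope_deltas = np.polyfit(set_sizes, rt_deltas, 1)[0]
improvement = 1.0 - slope_deltas / slope_values
```

The fraction `improvement` is one way to express the kind of efficiency gain the abstract reports for explicitly encoded deltas.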


2019 ◽  
Author(s):  
Shinichi Nakagawa ◽  
Malgorzata Lagisz ◽  
Rose E O'Dea ◽  
Joanna Rutkowska ◽  
Yefeng Yang ◽  
...  

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution, finding that only 11% use the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall. The PI, therefore, provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of the underlying effect sizes also allows the user to see any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchard, which includes several functions for visualizing meta-analytic data with forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes regardless of the field of study.
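The 95% prediction interval that distinguishes an orchard plot from a forest-like plot follows from the meta-analytic mean, its standard error, and the between-study variance. A minimal sketch, with illustrative numbers and a normal quantile standing in for the t quantile typically used when there are few studies:

```python
import math

# mu: meta-analytic mean effect; se: its standard error;
# tau2: estimated between-study variance (heterogeneity).
# All values are illustrative placeholders.
mu, se, tau2 = 0.30, 0.05, 0.04
z = 1.96  # normal quantile; with k studies, t with k-2 df is typical

ci_half = z * se                          # confidence interval half-width
pi_half = z * math.sqrt(tau2 + se ** 2)   # prediction interval half-width
ci = (mu - ci_half, mu + ci_half)
pi = (mu - pi_half, mu + pi_half)
# The PI is wider whenever tau2 > 0: it shows where a future study's
# effect size may fall, which is what the orchard plot visualizes.
```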


2017 ◽  
Vol 107 (09) ◽  
pp. 610-616
Author(s):  
S. Eisenhauer ◽  
F. Zimmermann ◽  
M. Reichart ◽  
P. Accordi ◽  
A. Prof. Sauer

Previous studies on the energy flexibility available in German industry report the existing flexibility potential with a large spread. This article systematically analyses the relevant studies with respect to their assumptions and methods and evaluates the flexibility potentials they report. Building on the existing approaches, a bottom-up method is presented for collecting the data in the production system and aggregating it up to the industry level.


2021 ◽  
Vol 53 (8S) ◽  
pp. 282-282
Author(s):  
Gabriel Perri Esteves ◽  
Paul Swinton ◽  
Craig Sale ◽  
Ruth James ◽  
Guilherme Giannini Artioli ◽  
...  

Nematology ◽  
2021 ◽  
pp. 1-11
Author(s):  
Ann-Kristin Koehler ◽  
Christopher A. Bell ◽  
Matthew A. Back ◽  
Peter E. Urwin ◽  
Howard J. Atkinson

Summary: Globodera pallida is the most damaging pest of potato in the UK. This work underpins enhancement of a well-established, web-based scenario analysis tool for its management by recommending additions and modifications to its required inputs and a change in the basis of yield loss estimates. The required annual decline rate of the dormant egg population is determined at the individual field sample level, to help define the required rotation length, by comparing the viable egg content of recovered cysts to that of newly formed cysts for the same projected area. The mean annual decline was 20.4 ± 1.4% but ranged from 4.0 to 39.7% per annum at the field level. Further changes were based on meta-analysis of previous field trials. Spring rainfall in the region where a field is located and cultivar tolerance influence yield loss. Tolerance has proved difficult to define for many UK potato cultivars in field trials, but the uncertainty can be avoided without detriment by replacing it with determinacy integers, which are already determined to support optimisation of nitrogen application rates. Multiple linear regression estimates that the loss caused by pre-plant populations of up to 20 viable eggs (g soil)⁻¹ varies from ca 0.2 to 2.0% per viable egg (g soil)⁻¹, depending on cultivar determinacy and spring rainfall. Reliability of the outcomes from scenario analysis requires validation in field trials at population densities over which planting is advisable.
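The annual decline rate reported above implies a minimum rotation length before replanting. A minimal sketch, assuming a hypothetical starting egg density and planting threshold (neither is from the paper):

```python
import math

p0 = 20.0        # pre-plant viable eggs (g soil)^-1 (illustrative)
threshold = 2.0  # hypothetical density below which planting is advisable
decline = 0.204  # mean annual decline rate reported above

# Dormant egg population after t years: p0 * (1 - decline)**t.
# Solving p0 * (1 - decline)**t <= threshold for the smallest integer t:
years = math.ceil(math.log(threshold / p0) / math.log(1.0 - decline))
```

With the field-level range of 4.0 to 39.7% per annum substituted for `decline`, the same formula shows how strongly the advisable rotation length varies between fields.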


2016 ◽  
Vol 7 (1) ◽  
pp. 45-62 ◽  
Author(s):  
Kathleen Campbell Garwood ◽  
Alicia Graziosi Strandberg

Is it possible to compare rankings from different sources when the individual rankings of the top x elements differ? To investigate this question, 2015 sustainability rankings from 4 sources that ranked the most sustainable corporations globally are considered (Corporate Knights, Fortune's World's Most Admired Companies, Newsweek's Green Rankings, and Harris). These rankings are first analyzed using common rank comparison methods (Spearman's ρ, Kendall's τ). They are then analyzed to see whether the sources are ranking the data at random or whether there is a specific pattern of agreement (Kendall's W and a method by Alvo, Cabilio & Feigin (1982)). The insights from these methods, as well as their possible limitations, are considered. A truly sustainable corporation would transcend all definitions and be good for both the environment and the people relying on the company. This paper attempts to identify data points that tend to cluster close together in one or more groups, thereby justifying the feasibility of identifying sets of companies that are truly the “most” sustainable.
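Kendall's τ, one of the rank comparison methods named above, can be computed directly by counting concordant and discordant pairs; the two top-6 rankings below are invented for illustration:

```python
from itertools import combinations

# Two hypothetical rankings of the same six companies by different sources.
rank_a = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6}
rank_b = {"A": 2, "B": 1, "C": 3, "D": 6, "E": 4, "F": 5}

concordant = discordant = 0
for x, y in combinations(rank_a, 2):
    # A pair is concordant if both sources order it the same way.
    s = (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y])
    if s > 0:
        concordant += 1
    elif s < 0:
        discordant += 1

tau = (concordant - discordant) / (concordant + discordant)
```

A τ near 1 indicates strong agreement, near −1 reversal, and near 0 no systematic pattern, which is the distinction the paper draws between random ranking and a specific pattern of agreement.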

