Networked Experiments

Author(s):  
Sinan Aral

This chapter considers the design and analysis of networked experiments. As a result of digitization, the scale, scope, and complexity of networked experiments have expanded significantly in recent years, creating a need for more robust design and analysis strategies. This chapter first reviews innovations in networked experimental design, assessing the implications of the experimental setting, sampling, randomization procedures, and treatment assignment. Then the analysis of networked experiments is discussed, with particular emphasis on modeling treatment response assumptions, inference, and estimation, and recent approaches to interference and uncertainty in dependent data. The chapter concludes by discussing important challenges facing the future of networked experimentation, focusing on adaptive treatment assignment, novel randomization techniques, linking online treatments to offline responses, and experimental validation of observational methods. I hope this framework can help guide future work toward a cumulative research tradition in networked experimentation.

EEG - fMRI ◽  
2009 ◽  
pp. 221-257
Author(s):  
Christian-G. Bénar ◽  
Andrew P. Bagshaw ◽  
Louis Lemieux

Robotics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 23
Author(s):  
Adam Williams ◽  
Bijo Sebastian ◽  
Pinhas Ben-Tzvi

In this paper, the design and control of a robotic device intended to stabilize the head and neck of a trauma patient during transport are presented. When transporting a patient who has suffered a traumatic head injury, the first action paramedics typically perform is to restrain and stabilize the patient's head and cervical spine. The proposed device would drastically reduce the time required to perform this action while also freeing a first responder to perform other, possibly lifesaving, actions. Applications to robotic casualty extraction are also explored. The design and construction are described, followed by control simulations demonstrating the improved behavior of the chosen controller paradigm, linear active disturbance rejection control (LADRC). Finally, experimental validation is presented, followed by future work and directions for the research.
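The abstract names LADRC but gives no model or gain details, so the following is only a hedged sketch of the general technique: a linear extended state observer (ESO) estimates the "total disturbance" acting on a generic second-order plant, and the control law cancels it. The plant dynamics, bandwidths `wo`/`wc`, and setpoint below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ladrc_step(z, y, u, b0, wo, kp, kd, r, dt):
    """One update of a linear ESO plus the LADRC control law.
    z = [y_hat, ydot_hat, f_hat]; f_hat estimates the 'total disturbance'."""
    e = y - z[0]
    l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3       # observer gains from bandwidth wo
    z_next = np.array([
        z[0] + dt * (z[1] + l1 * e),
        z[1] + dt * (z[2] + b0 * u + l2 * e),
        z[2] + dt * (l3 * e),
    ])
    # Cancel the estimated disturbance, then apply PD control on the residual
    u_next = (kp * (r - z_next[0]) - kd * z_next[1] - z_next[2]) / b0
    return z_next, u_next

def simulate(T=5.0, dt=1e-3, r=1.0, b0=1.0, wo=40.0, wc=8.0):
    """Track setpoint r despite an unmodeled sinusoidal disturbance."""
    kp, kd = wc**2, 2 * wc                      # controller gains from bandwidth wc
    y = yd = u = t = 0.0
    z = np.zeros(3)
    for _ in range(int(T / dt)):
        d = 0.5 * np.sin(2 * np.pi * t)         # disturbance unknown to controller
        ydd = -2.0 * yd - y + d + b0 * u        # true (hidden) plant dynamics
        y += dt * yd
        yd += dt * ydd
        z, u = ladrc_step(z, y, u, b0, wo, kp, kd, r, dt)
        t += dt
    return y

print(f"final output = {simulate():.3f}")  # settles near the setpoint r = 1.0
```

The appeal of the paradigm in this application is that the ESO lumps unmodeled dynamics and external forces (here, the sine term) into one estimated state, so only the input gain `b0` and two bandwidths need tuning.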


1995 ◽  
Vol 155 ◽  
pp. 221-231 ◽  
Author(s):  
Kem H. Cook ◽  
C. Alcock ◽  
R.A. Allsman ◽  
T.S. Axelrod ◽  
K.C. Freeman ◽  
...  

Abstract: The MACHO Collaboration's search for baryonic dark matter via its gravitational microlensing signature has generated a massive database of time-ordered photometry of millions of stars in the LMC and the bulge of the Milky Way. The search's experimental design and capabilities are reviewed and the dark matter results are briefly noted. A preliminary analysis of the ~39,000 variable stars discovered in the LMC database is presented and examples of periodic variables are shown. A class of aperiodically variable Be stars is described, which constitutes the closest known background to microlensing. Plans for future work on variable stars using the MACHO data are described.


Parasitology ◽  
2012 ◽  
Vol 139 (5) ◽  
pp. 589-604 ◽  
Author(s):  
JOHNATHAN J. DALZELL ◽  
NEIL D. WARNOCK ◽  
PAUL MCVEIGH ◽  
NIKKI J. MARKS ◽  
ANGELA MOUSLEY ◽  
...  

SUMMARY: Almost a decade has passed since the first report of RNA interference (RNAi) in a parasitic helminth. Whilst much progress has been made, with RNAi informing gene function studies in disparate nematode and flatworm parasites, substantial and seemingly prohibitive difficulties have been encountered in some species, hindering progress. An appraisal of current practices, trends and ideals of RNAi experimental design in parasitic helminths is both timely and necessary for a number of reasons: firstly, the increasing availability of parasitic helminth genome/transcriptome resources means there is a growing need for gene function tools such as RNAi; secondly, fundamental differences and unique challenges exist for parasite species which do not apply to model organisms; thirdly, the inherent variation in experimental design and reported difficulties with reproducibility undermine confidence. Ideally, RNAi studies of gene function should adopt a standardised experimental design to aid reproducibility, interpretation and comparative analyses. Although the huge variation in parasite biology and experimental endpoints makes standardisation of RNAi experimental design difficult or impractical, we must strive to validate RNAi experimentation in helminth parasites. To aid this process we identify multiple approaches to RNAi experimental validation and highlight those which we deem critical for gene function studies in helminth parasites.


Author(s):  
John T. Cameron ◽  
Sean Brennan

This work presents results of an initial investigation into models and control strategies suitable for preventing vehicle rollover due to untripped driving maneuvers. Outside of industry, studies of vehicle rollover that include both experimental validation and practical controller design are limited. A researcher interested in initiating study of rollover dynamics and control is left with the challenging task of identifying suitable vehicle models from the literature, comparing these models with experimental results, and determining suitable parameters for the models. This work addresses these issues via experimental testing of published models. Parameter estimation data based on model fits is presented, with commentary on the validity of the different methods. Experimental results are then presented and compared to the outputs predicted by the various models in both the time and frequency domains in order to provide a foundation for future work.


Econometrica ◽  
2021 ◽  
Vol 89 (1) ◽  
pp. 113-132 ◽  
Author(s):  
Maximilian Kasy ◽  
Anja Sautmann

Standard experimental designs are geared toward point estimation and hypothesis testing, while bandit algorithms are geared toward in‐sample outcomes. Here, we instead consider treatment assignment in an experiment with several waves for choosing the best among a set of possible policies (treatments) at the end of the experiment. We propose a computationally tractable assignment algorithm that we call “exploration sampling,” where assignment probabilities in each wave are an increasing concave function of the posterior probabilities that each treatment is optimal. We prove an asymptotic optimality result for this algorithm and demonstrate improvements in welfare in calibrated simulations over both non‐adaptive designs and bandit algorithms. An application to selecting between six different recruitment strategies for an agricultural extension service in India demonstrates practical feasibility.
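The assignment rule can be sketched concretely. One published form of exploration sampling sets each wave's assignment shares proportional to q_i(1 − q_i), where q_i is the posterior probability that treatment i is optimal (this transformation, the binary-outcome Beta posterior model, and the counts below are this sketch's assumptions; consult the paper for the exact rule).

```python
import numpy as np

rng = np.random.default_rng(0)

def exploration_sampling_probs(successes, failures, n_draws=20000):
    """Wave-level assignment shares, sketched for binary outcomes with
    independent Beta(1+s, 1+f) posteriors (an illustrative model choice)."""
    a = 1 + np.asarray(successes, dtype=float)
    b = 1 + np.asarray(failures, dtype=float)
    # Monte Carlo draws from each arm's posterior success rate
    draws = rng.beta(a[:, None], b[:, None], size=(len(a), n_draws))
    # q[i] = posterior probability that arm i has the highest success rate
    q = np.bincount(np.argmax(draws, axis=0), minlength=len(a)) / n_draws
    # Exploration sampling (as sketched here): shares proportional to q*(1-q),
    # which concentrates sampling on arms that are still plausibly optimal
    e = q * (1 - q)
    return e / e.sum()

shares = exploration_sampling_probs([30, 25, 10], [20, 25, 40])
print(np.round(shares, 3))  # most assignment goes to the two contending arms
```

Note how the q(1 − q) shape starves clearly dominated arms (q near 0) and avoids over-assigning an arm that is already almost surely best (q near 1), which is what distinguishes the rule from Thompson sampling's assignment with probability q.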


2019 ◽  
Vol 68 (5) ◽  
pp. 730-743 ◽  
Author(s):  
Kris V Parag ◽  
Oliver G Pybus

Abstract: The coalescent process describes how changes in the size or structure of a population influence the genealogical patterns of sequences sampled from that population. The estimation of (effective) population size changes from genealogies that are reconstructed from these sampled sequences is an important problem in many biological fields. Often, population size is characterized by a piecewise-constant function, with each piece serving as a population size parameter to be estimated. Estimation quality depends both on the statistical coalescent inference method employed and on the experimental protocol, which controls variables such as the sampling of sequences through time and space, or the transformation of model parameters. While there is an extensive literature on coalescent inference methodology, there is comparatively little work on experimental design. The research that does exist is largely simulation-based, precluding the development of provable or general design theorems. We examine three key design problems: temporal sampling of sequences under the skyline demographic coalescent model, spatio-temporal sampling under the structured coalescent model, and time discretization for sequentially Markovian coalescent models. In all cases, we prove that a design which 1) works in the logarithm of the parameters to be inferred (e.g., population size) and 2) distributes informative coalescent events uniformly among these log-parameters is uniquely robust. "Robust" means that the total and maximum uncertainty of our parameter estimates are minimized, and made insensitive to their unknown (true) values. This robust design theorem provides rigorous justification for several existing coalescent experimental design decisions and leads to usable guidelines for future empirical or simulation-based investigations. Given its persistence among models, this theorem may form the basis of an experimental design paradigm for coalescent inference.
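The role of the log transformation can be seen in a short Fisher-information calculation (a sketch consistent with the abstract's claims; the notation below is this sketch's assumption, not taken from the paper). In a skyline segment with constant size $N_j$ containing $m_j$ coalescent events, the $i$-th waiting time $t_i$ is exponential with rate $c_i/N_j$ (where $c_i = \binom{k_i}{2}$ for $k_i$ extant lineages), so

```latex
\ell_j(N_j) = \sum_{i=1}^{m_j}\Big[\log\frac{c_i}{N_j} - \frac{c_i t_i}{N_j}\Big],
\qquad
\mathcal{I}(N_j) = \frac{m_j}{N_j^{2}},
\qquad
\mathcal{I}(\log N_j) = N_j^{2}\,\mathcal{I}(N_j) = m_j .
```

The information about $\log N_j$ is just the event count $m_j$, independent of the unknown $N_j$; allocating events uniformly across segments therefore equalises the uncertainty of every log-parameter and makes it insensitive to the true sizes, which is the sense in which the log-uniform design is robust.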


2014 ◽  
pp. 249-279
Author(s):  
José Robles ◽  
Sumaira Qureshi ◽  
Stuart Stephen ◽  
Susan Wilson ◽  
Conrad Burden ◽  
...  
