Introducing COSMOS: a web-platform for multimodal game-based psychological assessment geared towards open science practice

Author(s):  
Andy Aeberhard ◽  
Leo Gschwind ◽  
Joe Kossowsky ◽  
Gediminas Luksys ◽  
Dominique de Quervain ◽  
...  

We have established the COgnitive Science Metrics Online Survey (COSMOS) platform, which contains a digital psychometrics toolset in the guise of applied games measuring a wide range of cognitive functions. Here we outline this online research endeavor designed for automated psychometric data collection and scalable assessment: once set up, the low costs and effort associated with individual psychometric testing allow substantially larger study cohorts and thus contribute to enhancing the reliability of study outcomes. We leverage gamification of the data acquisition method to make the tests suitable for online administration. By putting a strong focus on entertainment and individually tailored feedback, we aim to maximize subjects’ incentives for repeated and continued participation. The objective of measuring repeatedly is to obtain more revealing multi-trial average scores, and measures from various operationalizations of the same psychological construct, instead of relying on single-shot measurements. COSMOS is set up to acquire an automatically and continuously growing dataset that can be used to answer a wide variety of research questions. Following the principles of the open science movement, this data set will also be made accessible to other publicly funded researchers, provided that all precautions for individual data protection are fulfilled. We have developed a secure hosting platform and a series of digital gamified testing instruments that can measure theory of mind, attention, working memory, episodic long- and short-term memory, spatial memory, reaction times, eye-hand coordination, impulsivity, humor appreciation, altruism, fairness, strategic thinking, decision making and risk-taking behavior. Furthermore, some of the game-based testing instruments also offer the possibility of using classical questionnaire items. A subset of these gamified tests is already implemented in the COSMOS platform, publicly accessible, and currently undergoing evaluation and calibration as normative data are being collected. In summary, our approach can be used to accomplish a detailed and reliable psychometric characterization of thousands of individuals to supply various studies with large-scale neuro-cognitive phenotypes. Our game-based online testing strategy can also guide recruitment for studies, as it allows very efficient screening and sample composition. Finally, this setup also allows us to evaluate potential cognitive training effects and whether improvements are merely task-specific or whether generalization effects occur within or even across cognitive domains.
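The reliability gain from multi-trial averaging can be illustrated with the Spearman–Brown prophecy formula; the sketch below is a standard psychometric illustration, not part of the COSMOS codebase, and the single-trial reliability value is made up.

```python
# Illustration only: Spearman-Brown prophecy formula, a standard psychometric
# result showing why averaging over k parallel trials increases reliability.
# Not COSMOS code; the single-trial reliability r1 is a hypothetical value.

def spearman_brown(r1: float, k: int) -> float:
    """Predicted reliability of the average of k parallel trials."""
    return k * r1 / (1 + (k - 1) * r1)

if __name__ == "__main__":
    r1 = 0.55  # hypothetical single-shot test-retest reliability
    for k in (1, 3, 5, 10):
        print(f"k={k:2d} trials -> predicted reliability {spearman_brown(r1, k):.2f}")
```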

Author(s):  
Eun-Young Mun ◽  
Anne E. Ray

Integrative data analysis (IDA) is a promising new approach in psychological research and has been well received in the field of alcohol research. This chapter provides a larger unifying research synthesis framework for IDA. Major advantages of IDA of individual participant-level data include better and more flexible ways to examine subgroups, model complex relationships, deal with methodological and clinical heterogeneity, and examine infrequently occurring behaviors. However, between-study heterogeneity in measures, designs, and samples, as well as systematic study-level missing data, are significant barriers to IDA and, more broadly, to large-scale research synthesis. Drawing on their experience with the Project INTEGRATE data set, which combined individual participant-level data from 24 independent college brief alcohol intervention studies, the authors also recognize that IDA investigations require a wide range of expertise and considerable resources, and that some minimum standards for reporting IDA studies may be needed to improve the transparency and quality of evidence.
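As a hedged illustration of what pooling individual participant-level data involves (file names and column names below are hypothetical, not from Project INTEGRATE):

```python
# Minimal sketch of assembling an integrative-data-analysis (IDA) pool:
# stack individual participant-level data from several studies, keep a
# study identifier, and inspect study-level missingness. File names and
# column names are hypothetical placeholders.
import pandas as pd

studies = {"study_01": "study_01.csv", "study_02": "study_02.csv"}
frames = []
for study_id, path in studies.items():
    df = pd.read_csv(path)
    df["study"] = study_id  # preserve between-study structure
    frames.append(df)

pooled = pd.concat(frames, ignore_index=True, sort=False)
# Measures absent in a given study become NaN here: exactly the systematic,
# study-level missing data the chapter flags as a barrier to IDA.
print(pooled.isna().groupby(pooled["study"]).mean())
```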


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Sungmin O. ◽  
Rene Orth

While soil moisture information is essential for a wide range of hydrologic and climate applications, spatially continuous soil moisture data are only available from satellite observations or model simulations. Here we present a global, long-term dataset of soil moisture derived through machine learning trained with in-situ measurements, SoMo.ml. We train a Long Short-Term Memory (LSTM) model to extrapolate daily soil moisture dynamics in space and in time, based on in-situ data collected from more than 1,000 stations across the globe. SoMo.ml provides multi-layer soil moisture data (0–10 cm, 10–30 cm, and 30–50 cm) at 0.25° spatial and daily temporal resolution over the period 2000–2019. The performance of the resulting dataset is evaluated through cross-validation and inter-comparison with existing soil moisture datasets. SoMo.ml performs especially well in terms of temporal dynamics, making it particularly useful for applications requiring time-varying soil moisture, such as anomaly detection and memory analyses. SoMo.ml complements the existing suite of modelled and satellite-based datasets given its distinct derivation, supporting large-scale hydrological, meteorological, and ecological analyses.
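A minimal access sketch for a gridded product of this kind, assuming netCDF distribution; the file name and variable name are assumptions, not taken from the SoMo.ml documentation:

```python
# Hedged sketch: reading a gridded daily soil-moisture product with xarray.
# File and variable names are hypothetical; consult the dataset
# documentation for the actual layout.
import xarray as xr

ds = xr.open_dataset("SoMo.ml_v1_layer1_2000-2019.nc")  # hypothetical file
sm = ds["layer1"]  # hypothetical variable: 0-10 cm soil moisture

# 0.25-degree grid, daily steps: pick the nearest grid cell to a point and
# compute a day-of-year anomaly, the kind of time-varying signal the
# abstract highlights for anomaly detection and memory analyses.
point = sm.sel(lat=48.125, lon=11.625, method="nearest")
anomaly = point.groupby("time.dayofyear") - point.groupby("time.dayofyear").mean()
print(anomaly.to_series().head())
```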


2003 ◽  
Vol 9 (4) ◽  
pp. 300-307 ◽  
Author(s):  
Gyles Glover

Since the start of the National Health Service, data have been collected on admissions to psychiatric in-patient units, first as the Mental Health Enquiry, then as part of Hospital Episode Statistics. Some details have changed but many have stayed remarkably consistent. Published literature on the wide range of research and policy work undertaken using this data source is reviewed. Early work was central to the government's deinstitutionalisation policy in the early 1960s. Subsequent studies cover a wide range of epidemiological and health services research issues. A new statistical base, the Mental Health Minimum Data Set, covering individuals receiving all types of health care is currently being set up. This will supplement (but not replace) admission statistics.


Author(s):  
Sebastian Brehm ◽  
Felix Kern ◽  
Jonas Raub ◽  
Reinhard Niehuis

The Institute of Jet Propulsion at the University of the German Federal Armed Forces Munich has developed and patented a novel concept of air injection systems for active aerodynamic stabilization of turbo compressors. This so-called Ejector Injection System (EIS) utilizes the ejector effect to enhance the efficiency and impact of the aerodynamic stabilization of the Larzac 04 two-spool turbofan engine’s low-pressure compressor (LPC). The recently manufactured EIS design has been subject to CFD and experimental pre-investigations in which the expected ejector-effect performance has been proven and the CFD set-up has been validated. Subsequently, optimization of the EIS ejector geometry comes into focus in order to enhance its performance. In this context, CFD parameter studies on the influence of a total of 16 geometric and several aerodynamic parameters on the ejector effect are required. However, the existing and validated CFD set-up of the EIS comprises not only the mainly axisymmetric ejector geometry but also the highly complex 3D supply components upstream of it. This hinders large-scale CFD parameter studies due to the numerical effort required for full 3D CFD simulations. Therefore, an approach that exploits the overall axisymmetry of the ejector geometry is presented in this paper, reducing the numerical effort required for CFD simulations of the EIS by more than 90%. This approach is verified by means of both experimental results and CFD predictions of the full 3D set-up. The comprehensive verification data set contains wall pressure distributions and the mass flow rates involved at various Aerodynamic Operating Points (AOP). Furthermore, limitations of the approach are revealed concerning its suitability, e.g., for judging the response of the compressor attached to future EIS designs in terms of aerodynamic stability or cyclic loading.


2001 ◽  
Vol 432 ◽  
pp. 219-283 ◽  
Author(s):  
G. BRIASSULIS ◽  
J. H. AGUI ◽  
Y. ANDREOPOULOS

A decaying, compressible, nearly homogeneous and nearly isotropic grid-generated turbulent flow has been set up in a large-scale shock tube research facility. Experiments have been performed using instrumentation with spatial resolution of the order of 7 to 26 Kolmogorov viscous length scales. A variety of turbulence-generating grids provided a wide range of turbulence scales, with bulk flow Mach numbers ranging from 0.3 to 0.6 and turbulent Reynolds numbers up to 700. The decay of Mach number fluctuations was found to follow a power law similar to that describing the decay of incompressible isotropic turbulence. It was also found that the decay coefficient and the decay exponent decrease with increasing Mach number, while the virtual origin increases with increasing Mach number. A possible mechanism responsible for these effects appears to be the inherently low growth rate of compressible shear layers emanating from the cylindrical rods of the grid. Measurements of the time-dependent, three-dimensional vorticity vectors were attempted for the first time with a 12-wire miniature probe. This also allowed estimates of dilatation, compressible dissipation and dilatational stretching to be obtained. It was found that the fluctuations of these quantities increase with increasing mean Mach number of the flow. The time-dependent signals of enstrophy, the vortex stretching/tilting vector and the dilatational stretching vector were found to exhibit rather strong intermittent behaviour, characterized by high-amplitude bursts with values up to 8 times their r.m.s. within periods of less violent and longer-lived events. Several of these bursts are evident in all the signals, suggesting the existence of a dynamical flow phenomenon as a common cause.
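For reference, the power-law decay invoked here typically takes the standard grid-turbulence form below; the abstract itself does not print the equation, so this is the conventional form with its usual symbols:

```latex
% Standard grid-turbulence decay law (conventional form, not reproduced
% from the paper): fluctuation intensity decays as a power of distance
% downstream of a virtual origin.
\[
  \frac{\langle u'^2 \rangle}{U^2} \;=\; A \left( \frac{x - x_0}{M} \right)^{-n}
\]
% A   : decay coefficient
% n   : decay exponent
% x_0 : virtual origin
% M   : grid mesh size
% The abstract reports that A and n decrease, and x_0 increases,
% with increasing Mach number.
```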


1997 ◽  
Vol 40 (2) ◽  
Author(s):  
M. Popeskov

There has recently been much discussion of large-scale interactions of fault zones and the influence of large-scale processes in the preparation and triggering of earthquakes. As a consequence, an official recommendation was issued to set up observational networks at the regional scale. In this context, the existing network of standard geomagnetic observatories might play a more important role in future tectonomagnetic studies. Data from standard geomagnetic observatories are basically not appropriate for the detection of small-magnitude and, in most cases, spatially very localized geomagnetic field changes. However, their advantage is continuity over a long time period, which enables the study of regional tectonomagnetic features and long-term precursory changes. As the first step of a more extensive study aimed at examining the features of observatory data for this purpose, a three-year data set from five European observatories has been analyzed. Some common statistical procedures have been applied, along with a simple difference technique and multivariate linear regression, to define local geomagnetic field changes. The distribution of M ≥ 4.5 earthquakes in Europe in the corresponding period was also taken into account. No pronounced field variation related in time to the M 5.7 Timisoara (Romania) earthquake of July 12, 1991, was found at Grocka observatory, located about 80 km from the earthquake epicenter. However, an offset in the level of the declination differences involving Grocka observatory, not seen in differences between other observatories, could be associated with a possible regional effect of the M 4.8 earthquake which occurred in September 1991 about 70 km SE of Grocka.
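A minimal sketch of the simple difference technique mentioned above, on synthetic series (observatory records and the offset are invented for illustration): subtracting a remote reference observatory's record suppresses external field variations common to both sites, leaving candidate local changes.

```python
# Hedged sketch of the inter-observatory difference technique: external
# geomagnetic variations are largely common to distant observatories, so
# differencing two records suppresses them and exposes local changes.
# Series are synthetic; real work would use definitive observatory data.
import numpy as np

rng = np.random.default_rng(0)
days = 3 * 365
external = 10.0 * np.sin(2 * np.pi * np.arange(days) / 365.0) + rng.normal(0, 2.0, days)

target = external + rng.normal(0, 0.5, days)     # record at the studied observatory
reference = external + rng.normal(0, 0.5, days)  # remote reference observatory
target[600:] += 1.5                              # synthetic local level offset

diff = target - reference                        # common external part cancels
print("std of raw record :", round(target.std(), 2))
print("std of difference :", round(diff.std(), 2))
print("level shift after day 600:", round(diff[600:].mean() - diff[:600].mean(), 2))
```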


2020 ◽  
Author(s):  
Erhan Genç ◽  
Caroline Schlüter ◽  
Christoph Fraenz ◽  
Larissa Arning ◽  
Huu Phuc Nguyen ◽  
...  

Intelligence is a highly polygenic trait, and GWAS have identified thousands of DNA variants contributing with small effects. Polygenic scores (PGS) can aggregate those effects for trait prediction in independent samples. As large-scale light-phenotyping GWAS operationalized intelligence as performance in rather superficial tests, the question arises as to which intelligence facets are actually captured. We used deep phenotyping to investigate the molecular determinants of individual differences in cognitive ability. We therefore studied the associations of polygenic scores for educational attainment (EA-PGS) and intelligence (IQ-PGS) with a wide range of intelligence facets in a sample of 320 healthy adults. EA-PGS and IQ-PGS had the highest incremental R²s for general (3.25%; 1.78%), verbal (2.55%; 2.39%) and numerical intelligence (2.79%; 1.54%) and the weakest for non-verbal intelligence (0.50%; 0.19%) and short-term memory (0.34%; 0.22%). These results indicate that PGS derived from light-phenotyping GWAS do not reflect the different facets of intelligence equally well and thus should not be interpreted as genetic indicators of intelligence per se. The findings refine our understanding of how PGS are related to other traits or life outcomes.
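Incremental R² here denotes the variance explained by the PGS over and above a covariates-only baseline model; a generic sketch on synthetic data (variable names and effect sizes are hypothetical, not the study's actual covariate set):

```python
# Hedged sketch of incremental R^2: fit a baseline model with covariates
# only, then add the polygenic score (PGS) and take the difference in R^2.
# Data are synthetic; columns and coefficients are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 320
covariates = rng.normal(size=(n, 2))  # e.g. age, sex (hypothetical)
pgs = rng.normal(size=(n, 1))         # polygenic score
iq = covariates @ np.array([0.3, -0.2]) + 0.18 * pgs[:, 0] + rng.normal(size=n)

base = LinearRegression().fit(covariates, iq)
full = LinearRegression().fit(np.hstack([covariates, pgs]), iq)

r2_base = r2_score(iq, base.predict(covariates))
r2_full = r2_score(iq, full.predict(np.hstack([covariates, pgs])))
print(f"incremental R^2 of PGS: {r2_full - r2_base:.4f}")
```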


2017 ◽  
Vol 44 (2) ◽  
pp. 203-229 ◽  
Author(s):  
Javier D Fernández ◽  
Miguel A Martínez-Prieto ◽  
Pablo de la Fuente Redondo ◽  
Claudio Gutiérrez

The publication of semantic web data, commonly represented in Resource Description Framework (RDF), has experienced outstanding growth over the last few years. Data from all fields of knowledge are shared publicly and interconnected in active initiatives such as Linked Open Data. However, despite the increasing availability of applications managing large-scale RDF information such as RDF stores and reasoning tools, little attention has been given to the structural features emerging in real-world RDF data. Our work addresses this issue by proposing specific metrics to characterise RDF data. We specifically focus on revealing the redundancy of each data set, as well as common structural patterns. We evaluate the proposed metrics on several data sets, which cover a wide range of designs and models. Our findings provide a basis for more efficient RDF data structures, indexes and compressors.
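As an illustration of the kind of structural metric involved, the sketch below computes simple predicate-usage and subject out-degree statistics with rdflib; these are generic measures, not the specific redundancy metrics proposed in the paper, and the input file is hypothetical.

```python
# Hedged sketch: simple structural statistics over an RDF graph with rdflib.
# Generic measures only (predicate usage, subject out-degree), not the
# paper's metric suite. The input file is a hypothetical placeholder.
from collections import Counter
from rdflib import Graph

g = Graph()
g.parse("dataset.ttl", format="turtle")  # hypothetical input file

predicate_usage = Counter(p for _, p, _ in g)   # how often each predicate occurs
subject_degree = Counter(s for s, _, _ in g)    # triples per subject (out-degree)

print("triples:", len(g))
print("distinct predicates:", len(predicate_usage))
print("max subject out-degree:", max(subject_degree.values()))
```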


2017 ◽  
Author(s):  
Philipp N. Spahn ◽  
Tyler Bath ◽  
Ryan J. Weiss ◽  
Jihoon Kim ◽  
Jeffrey D. Esko ◽  
...  

Background: Large-scale genetic screens using CRISPR/Cas9 technology have emerged as a major tool for functional genomics. With their increased popularity, experimental biologists frequently acquire large sequencing datasets for which they often do not have an easy analysis option. While a few bioinformatic tools have been developed for this purpose, their utility is still hindered by either limited functionality or the requirement of bioinformatics expertise. Results: To make sequencing data analysis of CRISPR/Cas9 screens more accessible to a wide range of scientists, we developed the Platform-independent Analysis of Pooled Screens using Python (PinAPL-Py), which is operated as an intuitive web service. PinAPL-Py implements state-of-the-art tools and statistical models, assembled in a comprehensive workflow covering sequence quality control, automated sgRNA sequence extraction, alignment, sgRNA enrichment/depletion analysis and gene ranking. The workflow is set up to use a variety of popular sgRNA libraries as well as custom libraries that can be easily uploaded. Various analysis options are offered, suitable for a large variety of CRISPR/Cas9 screening experiments. Analysis output includes ranked lists of sgRNAs and genes, and publication-ready plots. Conclusions: PinAPL-Py helps to advance genome-wide screening efforts by combining comprehensive functionality with user-friendly implementation. PinAPL-Py is freely accessible at http://pinapl-py.ucsd.edu with instructions, documentation and test datasets. The source code is available at https://github.com/LewisLabUCSD/PinAPL-Py.
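At its simplest, the enrichment/depletion statistic in such workflows is a normalized read-count fold change per sgRNA; a generic sketch (a simplification for illustration, not PinAPL-Py's actual implementation):

```python
# Hedged sketch of sgRNA enrichment/depletion scoring: normalize read counts
# to counts-per-million, then compute a log2 fold change between treatment
# and control. A generic simplification, not PinAPL-Py's internals.
import numpy as np
import pandas as pd

counts = pd.DataFrame(
    {"control": [1200, 30, 450], "treatment": [200, 600, 430]},
    index=["sgRNA_A", "sgRNA_B", "sgRNA_C"],  # hypothetical guides
)

cpm = counts / counts.sum() * 1e6                             # library-size normalization
lfc = np.log2((cpm["treatment"] + 1) / (cpm["control"] + 1))  # pseudocount avoids log(0)
print(lfc.sort_values(ascending=False))                       # enriched guides rank first
```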


2021 ◽  
Author(s):  
Kristina Wiebels ◽  
David Moreau

Containers have become increasingly popular in computing and software engineering, and are gaining traction in scientific research. They allow packaging up all code and dependencies to ensure that analyses run reliably across a range of operating systems and software versions. Despite being a crucial component for reproducible science, containerization has yet to become mainstream in psychology. In this tutorial, we describe the logic behind containers, what they are, and the practical problems they can solve. We walk the reader through the implementation of containerization within a research workflow, with examples using Docker and R. Specifically, we describe how to use existing containers, build personalized containers, and share containers alongside publications. We provide a worked example that includes all steps required to set up a container for a research project and can easily be adapted and extended. We conclude with a discussion of the possibilities afforded by the large-scale adoption of containerization, especially in the context of cumulative, open science, toward a more efficient and inclusive research ecosystem.
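As a minimal illustration of driving a containerized analysis from a script, the sketch below uses the Docker SDK for Python; the image tag, volume path, and script name are placeholders, and the tutorial itself works with Docker and R directly rather than through Python.

```python
# Hedged sketch: launching a containerized R analysis via the Docker SDK for
# Python. Image tag, volume path, and script name are hypothetical; the
# tutorial itself drives Docker and R directly.
import docker

client = docker.from_env()
output = client.containers.run(
    image="rocker/r-ver:4.2.2",                  # pinned R version for reproducibility
    command=["Rscript", "/project/analysis.R"],  # hypothetical analysis script
    volumes={"/home/user/project": {"bind": "/project", "mode": "ro"}},
    remove=True,                                 # clean up the container afterwards
)
print(output.decode())                           # container stdout (analysis results)
```

Pinning the image tag is what makes the run reproducible across machines and software versions, which is the core argument of the tutorial.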

