Lessons learned from a pilot intuitive eating intervention for college women delivered through group and guided self-help: qualitative and process data

2021 ◽  
pp. 1-26
Author(s):  
C. Blair Burnette ◽  
Alexandria E. Davies ◽  
Suzanne E. Mazzeo

2020 ◽  
Vol 3 (8) ◽  
pp. e2015633 ◽  
Author(s):  
Ellen E. Fitzsimmons-Craft ◽  
C. Barr Taylor ◽  
Andrea K. Graham ◽  
Shiri Sadeh-Sharvit ◽  
Katherine N. Balantekin ◽  
...  

2006 ◽  
Vol 53 (4) ◽  
pp. 486-497 ◽  
Author(s):  
Laura C. Avalos ◽  
Tracy L. Tylka

2021 ◽  
Author(s):  
Peter Kraus ◽  
Elisabeth Wolf ◽  
Charlotte Prinz ◽  
Giulia Bellini ◽  
Annette Trunschke ◽  
...  

Automation of experiments is a key component on the path towards digitalisation in catalysis and related sciences. Here we present the lessons learned and caveats avoided during the automation of our contactless conductivity measurement set-up, capable of operando measurement of catalytic samples. We briefly discuss the motivation behind the work, the technical groundwork required, and the philosophy guiding our design. The main body of this work details the implementation of the automation, the data structures used, and the modular data processing pipeline. The open-source toolset developed as part of this work allows us to carry out unattended, reproducible experiments and to post-process data according to current best practice. We illustrate this process by implementing two routine sample protocols, one of which was included in the Handbook of Catalysis, and provide several case studies showing the benefits of such automation, including increased throughput and higher data quality. The datasets included in this work contain catalytic and operando conductivity data; they are self-consistent, annotated with metadata, and available in a public repository in machine-readable form. We hope the datasets, as well as the tools and workflows developed as part of this work, will be a useful guide on the path towards automation and digital catalysis.
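The toolset itself is published as open source; as a minimal, hypothetical sketch of the two ideas stressed above, modular processing steps and metadata-annotated, machine-readable output, the following Python snippet (not the authors' implementation; the file format, function names and the baseline step are assumptions) chains simple processing functions over a raw conductivity log and writes a self-describing JSON record.

```python
# Hypothetical sketch of a modular processing pipeline with metadata-annotated,
# machine-readable output; not the authors' actual toolset.
import json
from datetime import datetime, timezone


def load_raw(path):
    """Parse a raw two-column log: time [s], conductivity [S/cm] (assumed format)."""
    points = []
    with open(path) as fh:
        for line in fh:
            t, sigma = line.split()
            points.append({"time_s": float(t), "sigma_S_per_cm": float(sigma)})
    return points


def baseline_correct(points):
    """Subtract the first reading as a simple baseline (illustrative only)."""
    offset = points[0]["sigma_S_per_cm"]
    return [{**p, "sigma_S_per_cm": p["sigma_S_per_cm"] - offset} for p in points]


def export(points, sample_id, out_path):
    """Write a self-describing JSON record carrying provenance metadata."""
    record = {
        "metadata": {
            "sample_id": sample_id,
            "processed_at": datetime.now(timezone.utc).isoformat(),
            "pipeline": ["load_raw", "baseline_correct"],
            "units": {"time": "s", "conductivity": "S/cm"},
        },
        "data": points,
    }
    with open(out_path, "w") as fh:
        json.dump(record, fh, indent=2)


if __name__ == "__main__":
    export(baseline_correct(load_raw("run_001.dat")), "sample-A", "run_001.json")
```

Because each stage is a plain function, individual steps can be swapped or extended without touching the rest of the pipeline, which is the kind of modularity the abstract argues for.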


2020 ◽  
Vol 7 (3) ◽  
pp. 79-97
Author(s):  
Katerina Mangaroska ◽  
Kshitij Sharma ◽  
Dragan Gašević ◽  
Michalis Giannakos

Programming is a complex learning activity that involves coordination of cognitive processes and affective states. These aspects are often considered individually in computing education research, reflecting a limited understanding of how and when students learn best. This limits researchers' ability to contextualize evidence-driven outcomes when learning behaviour deviates from pedagogical intentions. Multimodal learning analytics (MMLA) captures data essential for measuring constructs (e.g., cognitive load, confusion) that the learning sciences posit as important for learning and that cannot be measured effectively with programming process data (IDE-log data) alone. Thus, we augmented IDE-log data with physiological data (e.g., gaze data) and participants' facial expressions, collected during a debugging learning activity. The findings emphasize the need for learning analytics that are consequential for learning, rather than merely easy and convenient to collect. In that regard, our paper aims to provoke productive reflection and conversation about the potential of MMLA to expand and advance the synergy of learning analytics and learning design among the community of educators, moving it from a post-evaluation, design-aware process to a permanent monitoring process of adaptation.
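As one concrete illustration of what augmenting IDE-log data with physiological and facial-expression streams can look like in practice, the following sketch is hypothetical (the column names, sample timestamps and 100 ms matching tolerance are assumptions, not the study's actual pipeline): it aligns gaze samples and expression labels to IDE events by nearest timestamp with pandas.

```python
# Hypothetical alignment of multimodal streams; column names and the
# 100 ms matching tolerance are assumptions, not the study's actual setup.
import pandas as pd

ide_log = pd.DataFrame(
    {"t": pd.to_datetime(["2020-01-01 10:00:00.10", "2020-01-01 10:00:02.40"]),
     "event": ["breakpoint_hit", "edit"]}
)
gaze = pd.DataFrame(
    {"t": pd.to_datetime(["2020-01-01 10:00:00.08", "2020-01-01 10:00:02.35"]),
     "pupil_mm": [3.1, 3.6]}  # pupil diameter as a cognitive-load proxy
)
faces = pd.DataFrame(
    {"t": pd.to_datetime(["2020-01-01 10:00:00.12", "2020-01-01 10:00:02.50"]),
     "expression": ["neutral", "confusion"]}
)

# merge_asof requires sorted timestamps; match each IDE event to the nearest
# physiological sample within 100 ms.
aligned = pd.merge_asof(
    ide_log.sort_values("t"), gaze.sort_values("t"),
    on="t", direction="nearest", tolerance=pd.Timedelta("100ms"))
aligned = pd.merge_asof(
    aligned, faces.sort_values("t"),
    on="t", direction="nearest", tolerance=pd.Timedelta("100ms"))

print(aligned)  # one row per IDE event, annotated with gaze and expression data
```

The timestamp-tolerance join is the crux: once each IDE event carries the nearest physiological and affective observations, constructs such as cognitive load or confusion can be analysed in the context of the programming behaviour that accompanied them.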


2021 ◽  
Author(s):  
Julia Wagemann ◽  
Umberto Modigliani ◽  
Stephan Siemen ◽  
Vasileios Baousis ◽  
Florian Pappenberger

The European Centre for Medium-Range Weather Forecasts (ECMWF) is moving gradually towards an open data licence, aiming to make real-time forecast data available under a full, free and open data licence by 2025. The introduction of open data policies generally leads to an increase in data requests and a broader user base, so a much larger and more diverse community of users will be interested in accessing, understanding and using ECMWF Open Data (real-time). While an open data licence is an important prerequisite, it does not automatically lead to an increased uptake of open data. To increase the uptake of (open) data, Wilkinson et al. (2016) defined the FAIR principles, which emphasize the need to make data better 'findable', 'accessible', 'interoperable' and 'reusable'.

In 2019, we conducted a web-based survey among users of big Earth data to better understand users' needs: the data they are interested in, the applications they need the data for, the way they access and process data, and the challenges they face. The results show that users are particularly interested in meteorological and climate forecast data, but face challenges related to growing data volumes, data heterogeneity and limited processing capacities. At the same time, survey respondents expressed interest in using cloud-based services in the near future, along with the need for easier data discovery and interoperability of data systems. Moreover, an ECMWF-supported activity that made a subset of ERA5 climate reanalysis data available to the user community of the Google Earth Engine platform revealed that interoperability of data systems is a growing bottleneck.

Conclusions from both activities are helping ECMWF to define the way forward to make ECMWF Open Data (real-time) better accessible via cloud-based services. In this presentation we share and discuss lessons learned in making open data more easily 'accessible' and 'interoperable' and the role cloud-based services play in doing so. We will also cover our future plans.
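As a small, hypothetical illustration of the cloud-based, interoperable access pattern discussed above (the Zarr store URL and variable name are placeholders, not an actual ECMWF endpoint), the sketch below opens an analysis-ready store lazily with xarray and subsets it before any data are transferred.

```python
# Hypothetical example of cloud-native access to an analysis-ready store;
# the URL and variable name are placeholders, not a real ECMWF endpoint.
import xarray as xr

# Open a (placeholder) Zarr store lazily: only metadata is read up front,
# and chunks are fetched on demand when values are actually needed.
ds = xr.open_zarr("https://example.org/era5-subset.zarr", consolidated=True)

# Subset in time and space before computing, so only the required chunks
# are transferred from the object store.
t2m = (
    ds["t2m"]
    .sel(time=slice("2020-01-01", "2020-01-31"),
         latitude=slice(60, 45), longitude=slice(0, 15))
    .mean(dim="time")
)
print(t2m.values)  # triggers the actual remote reads
```

The point of the pattern is that only the requested chunks leave the object store, which sidesteps the data-volume and processing-capacity bottlenecks reported by survey respondents.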

