The effects of assessment intensity on participant burden, compliance, within-person variance, and within-person relationships in ambulatory assessment

Author(s):  
Kilian Hasselhorn ◽  
Charlotte Ottenstein ◽  
Tanja Lischetzke

Considering the very large number of studies that have applied ambulatory assessment (AA) in the last decade across diverse fields of research, knowledge about the effects that these design choices have on participants’ perceived burden, data quantity (i.e., compliance with the AA protocol), and data quality (e.g., within-person relationships between time-varying variables) is surprisingly restricted. The aim of the current research was to experimentally manipulate aspects of an AA study’s assessment intensity—sampling frequency (Study 1) and questionnaire length (Study 2)—and to investigate their impact on perceived burden, compliance, within-person variability, and within-person relationships between time-varying variables. In Study 1, students (n = 313) received either 3 or 9 questionnaires per day for the first 7 days of the study. In Study 2, students (n = 282) received either a 33- or 82-item questionnaire three times a day for 14 days. Within-person variability and within-person relationships were investigated with respect to momentary pleasant-unpleasant mood and state extraversion. The results of Study 1 showed that a higher sampling frequency increased perceived burden but did not affect the other aspects we investigated. In Study 2, longer questionnaire length did not affect perceived burden or compliance but yielded a smaller degree of within-person variability in momentary mood (but not in state extraversion) and a smaller within-person relationship between state extraversion and mood. Differences between Studies 1 and 2 with respect to the type of manipulation of assessment intensity are discussed.
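The within-person relationships between time-varying variables referred to here are typically estimated with multilevel models in which momentary observations (level 1) are nested within participants (level 2). The sketch below is a minimal illustration of that idea in Python with statsmodels, using simulated data and hypothetical variable names; it is not the authors' analysis code.

```python
# Minimal sketch (not the authors' code): estimating a within-person
# relationship between state extraversion and momentary mood with a
# random-slope multilevel model. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_persons, n_obs = 50, 21               # e.g., 3 prompts/day for 7 days
rows = []
for pid in range(n_persons):
    slope = 0.4 + rng.normal(0, 0.2)            # person-specific within-person slope
    extraversion = rng.normal(0, 1, n_obs)      # momentary state extraversion
    mood = slope * extraversion + rng.normal(0, 1, n_obs)
    rows.append(pd.DataFrame({"person": pid, "extraversion": extraversion, "mood": mood}))
data = pd.concat(rows, ignore_index=True)

# Person-mean centering isolates the within-person part of the predictor.
data["extra_within"] = data["extraversion"] - data.groupby("person")["extraversion"].transform("mean")

model = smf.mixedlm("mood ~ extra_within", data,
                    groups=data["person"], re_formula="~extra_within")
result = model.fit()
print(result.summary())   # fixed effect of extra_within ~ average within-person relationship
```

Person-mean centering the predictor, as in the sketch, is one common way to ensure that the estimated slope reflects purely within-person covariation rather than between-person differences.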

Assessment ◽  
2020 ◽  
pp. 107319112095710
Author(s):  
Gudrun Eisele ◽  
Hugo Vachon ◽  
Ginette Lafit ◽  
Peter Kuppens ◽  
Marlies Houben ◽  
...  

Currently, little is known about the association between assessment intensity, burden, data quantity, and data quality in experience sampling method (ESM) studies. Researchers therefore have insufficient information to make informed decisions about the design of their ESM study. Our aim was to investigate the effects of different sampling frequencies and questionnaire lengths on burden, compliance, and careless responding. Students (n = 163) received either a 30- or 60-item questionnaire three, six, or nine times per day for 14 days. Preregistered multilevel regression analyses and analyses of variance were used to analyze the effect of design condition on momentary outcomes, changes in those outcomes over time, and retrospective outcomes. Our findings offer support for increased burden and compromised data quantity and quality with longer questionnaires, but not with increased sampling frequency. We therefore advise against the use of long ESM questionnaires, while high sampling frequencies do not seem to be associated with negative consequences.
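Careless responding in ESM data is often screened with simple response-pattern indices, for example the longest run of identical answers within one questionnaire ("longstring"). The abstract does not say which indices the authors used, so the sketch below is only a generic, hypothetical illustration of such an index.

```python
# Generic sketch of a "longstring" careless-responding index: the longest
# run of identical item responses within a single questionnaire.
# Hypothetical data; not the index or code used in the study.
from itertools import groupby
from typing import Sequence

def longstring(responses: Sequence[int]) -> int:
    """Length of the longest run of identical consecutive responses."""
    return max((sum(1 for _ in run) for _, run in groupby(responses)), default=0)

beep_1 = [3, 3, 3, 3, 2, 4, 3, 3]   # varied answering: longstring = 4
beep_2 = [5, 5, 5, 5, 5, 5, 5, 5]   # straight-lining: longstring = questionnaire length
print(longstring(beep_1))            # -> 4
print(longstring(beep_2))            # -> 8
```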


2020 ◽  
Author(s):  
Gudrun Eisele ◽  
Hugo Vachon ◽  
Ginette Lafit ◽  
Peter Kuppens ◽  
Marlies Houben ◽  
...  

Currently, little is known about the association between assessment intensity, burden, data quantity, and data quality in experience sampling method (ESM) studies. Researchers therefore have insufficient information to make informed decisions about the design of their ESM study. Our aim was to investigate the effects of different sampling frequencies and questionnaire lengths on burden, compliance, and careless responding. Students (n = 164) received either a 30- or 60-item questionnaire three, six, or nine times per day for 14 days. Preregistered multilevel regression analyses and ANOVAs were used to analyze the effect of design condition on momentary outcomes, changes in those outcomes over time, and retrospective outcomes. Our findings offer support for increased burden and compromised data quantity and quality with longer questionnaires, but not with increased sampling frequency. We therefore advise against the use of long ESM questionnaires, while high sampling frequencies do not seem to be associated with negative consequences.


1974 ◽  
Vol 35 (1) ◽  
pp. 275-278 ◽  
Author(s):  
Philip B. Ender ◽  
Arthur C. Bohart

A 20-item questionnaire was administered to 388 Ss. The questionnaire consisted of items involving success or failure. Half the items were about one's self (actor) and half about someone else (observer). For each situation there were four possible causes: (1) task difficulty, (2) luck, (3) ability, and (4) effort. The results showed that effort was rated significantly higher than the other causes, while luck was rated significantly lower. Also, there was a significant actor-observer difference with observers being more internal than actors.


2017 ◽  
Author(s):  
Amelia McNamara ◽  
Nicholas J Horton

Data wrangling is a critical foundation of data science, and wrangling of categorical data is an important component of this process. However, categorical data can introduce unique issues in data wrangling, particularly in real-world settings with collaborators and periodically-updated dynamic data. This paper discusses common problems arising from categorical variable transformations in R, demonstrates the use of factors, and suggests approaches to address data wrangling challenges. For each problem, we present at least two strategies for management, one in base R and the other from the ‘tidyverse.’ We consider several motivating examples, suggest defensive coding strategies, and outline principles for data wrangling to help ensure data quality and sound analysis.
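The paper itself works in R (factors, base R versus the tidyverse). As a language-neutral illustration of the defensive-coding idea it advocates, the Python/pandas sketch below validates the expected category levels before recoding, so that a new or misspelled level in periodically updated data fails loudly instead of silently corrupting the analysis. Column and level names are hypothetical.

```python
# Defensive handling of categorical data (analogous illustration in pandas;
# the paper discusses R factors). Hypothetical column and level names.
import pandas as pd

EXPECTED_LEVELS = ["never", "sometimes", "often"]

def recode_frequency(df: pd.DataFrame) -> pd.DataFrame:
    observed = set(df["frequency"].dropna().unique())
    unexpected = observed - set(EXPECTED_LEVELS)
    if unexpected:
        # Fail loudly: a new or misspelled level in updated data should not
        # silently become NaN or a stray category.
        raise ValueError(f"Unexpected frequency levels: {sorted(unexpected)}")
    df = df.copy()
    df["frequency"] = pd.Categorical(df["frequency"],
                                     categories=EXPECTED_LEVELS, ordered=True)
    df["frequency_num"] = df["frequency"].cat.codes + 1   # 1 = never, ..., 3 = often
    return df

clean = recode_frequency(pd.DataFrame({"frequency": ["never", "often", "sometimes"]}))
print(clean.dtypes)
```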


2016 ◽  
Vol 21 (5) ◽  
pp. 1158-1174 ◽  
Author(s):  
Stephen G. Hall ◽  
P. A. V. B. Swamy ◽  
George S. Tavlas

Coefficient drivers are observable variables that feed into time-varying coefficients (TVCs) and explain at least part of their movement. To implement the TVC approach, the drivers are split into two subsets, one of which is correlated with the bias-free coefficient that we want to estimate and the other with the misspecification in the model. This split, however, can appear to be arbitrary. We provide a way of splitting the drivers that takes account of any nonlinearity that may be present in the data, with the aim of removing the arbitrary element in driver selection. We also provide an example of the practical use of our method by applying it to modeling the effect of ratings on sovereign-bond spreads.
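As a schematic of this setup (notation simplified, not taken verbatim from the paper), a TVC regression with coefficient drivers can be written as

\begin{align}
y_t &= \gamma_{0t} + \gamma_{1t} x_t, \\
\gamma_{jt} &= \pi_{j0} + \sum_{k=1}^{K} \pi_{jk} z_{kt} + u_{jt}, \qquad j = 0, 1,
\end{align}

where the $z_{kt}$ are the observable coefficient drivers. The split discussed above amounts to partitioning $\{z_{1t},\dots,z_{Kt}\}$ into one subset associated with the bias-free component of $\gamma_{1t}$ and another that absorbs the misspecification (omitted variables, measurement error).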


2006 ◽  
Vol 24 (3) ◽  
pp. 961-972 ◽  
Author(s):  
S. K. Morley ◽  
M. Lockwood

Using a numerical implementation of the Cowley and Lockwood (1992) model of flow excitation in the magnetosphere-ionosphere (MI) system, we show that both an expanding (on a ~12-min timescale) and a quasi-instantaneous response in ionospheric convection to the onset of magnetopause reconnection can be accommodated by the Cowley-Lockwood conceptual framework. A key feature of this model is time dependence, as it necessarily considers the history of the coupled MI system. We show that a residual flow, driven by prior magnetopause reconnection, can produce a quasi-instantaneous global ionospheric convection response; perturbations from an equilibrium state may also be present from tail reconnection, which will superpose constructively to give a similar effect. On the other hand, when the MI system is relatively free of pre-existing flow, we can most clearly see the expanding nature of the response. As the open-closed field line boundary will frequently be in motion from such prior reconnection (both at the dayside magnetopause and in the cross-tail current sheet), it is expected that there will usually be some level of combined response to dayside reconnection.


1990 ◽  
Vol 12 (6) ◽  
pp. 263-266 ◽  
Author(s):  
J. M. López Fernández ◽  
M. D. Luque de Castro ◽  
M. Valcárcel

An asymmetrical FIA merging-zones manifold based on the dual injection of two sample microvolumes was developed for the simultaneous determination of salicylic acid and acetylsalicylic acid in pharmaceutical preparations at a sampling frequency of 30/h. The complex formed between the Fe(III) reagent continuously introduced in the system and salicylic acid was monitored photometrically at 520 nm. One of the sample plugs was prehydrolysed on injection into an NaOH stream and was circulated through a longer channel than the other plug. This yielded two FIA peaks corresponding to salicylic acid and the overall content, respectively. The proposed manifold was successfully used to control the dissolution test of a pharmaceutical preparation.


2008 ◽  
Vol 6 (1) ◽  
pp. 48 ◽  
Author(s):  
Andrew M Garratt ◽  
Stephen Brealey ◽  
Michael Robling ◽  
Chris Atwell ◽  
Ian Russell ◽  
...  

2014 ◽  
Author(s):  
Λάζαρος Γκατζίκης

The energy needs of all sectors of our modern societies are constantly increasing. Indicatively, annual worldwide demand for electricity has increased ten-fold within the last 50 years. Thus, energy efficiency has become a major target of the research community. The ongoing research efforts are focused on two main threads: i) optimizing the efficiency and reliability of the power grid and ii) improving the energy efficiency of individual devices and systems. In this thesis we explore the use of optimization and game theory techniques towards both goals.

Stable and economic operation of the power grid calls for electricity demand to be uniformly distributed across a day. Currently, the price of electricity is fixed throughout the day for most users. Given also the highly correlated daily schedules of users, this leads to an unbalanced distribution of demand. However, the recent development of low-cost smart meters enables bidirectional communication between the electricity operator and each user, and hence introduces the option of dynamic pricing and demand adaptation (a.k.a. Demand Response - DR). Dynamic pricing motivates home users to modify their electricity consumption profile so as to reduce their electricity bill. Eventually, by moving demand out of peak consumption periods, users produce a more balanced total demand pattern and a more stable grid.

A DR scheme has to balance the contradictory interests of the utility operator and the users. On the one hand, the operator wants to minimize electricity generation cost. On the other hand, each user aims to maximize a utility function that captures the trade-off between timely execution of demands and financial savings. In this thesis we focus on designing efficient DR schemes for the residential sector. Initially, we introduce a realistic model of a user's response to time-varying prices and identify the operating constraints of home appliances that make optimal demand scheduling NP-hard. We then devise an optimization-based dynamic pricing mechanism and demonstrate how it can be implemented as a day-ahead DR market. Our numerical results underline the potential of residential DR and verify that our scheme exploits DR benefits more efficiently than existing ones.

The large number of home users, though, and the fact that the utility operator generally lacks the know-how to design and apply dynamic pricing at such a large scale introduce the need for a new market entity. Aggregators act as intermediaries that coordinate home users to shift or even curtail their demands and then resell this service to the utility operator. In this direction, we introduce a three-level hierarchical model for the smart grid market and we devise the corresponding pricing mechanism for each level. The operator seeks to minimize the smart grid operational cost and offers rewards to aggregators toward this goal. Aggregators are profit-maximizing entities that compete to sell DR services to the operator. Finally, end-users are also self-interested and seek to optimize the trade-off between earnings and discomfort. Based on realistic demand traces we demonstrate the dominant role of the utility operator and how its strategy affects the actual DR benefits. Although the proposed scheme guarantees significant financial benefits for each market entity, interestingly, users that are extremely willing to modify their consumption pattern do not derive the maximum financial benefit.

In parallel to optimizing the power grid itself, per-device energy economy has become a goal of utmost importance. Contemporary mobile devices are battery powered and hence characterized by limited processing and energy resources. In addition, the latest mobile applications are particularly demanding and hence cannot be executed locally. Instead, a mobile device can outsource its computationally intensive tasks to the cloud over its wireless access interface, so as to maximize both its lifetime and performance. In this thesis, we explore task offloading and Virtual Machine (VM) migration mechanisms for the mobile cloud computing paradigm that minimize energy consumption and execution time. We identify that, in order to decide whether offloading is beneficial, a mobile device also has to consider the delay and energy cost of data transfer from and to the cloud. On the other hand, the challenge for the cloud is to optimally allocate the arising VMs to its servers so as to minimize its operating cost without sacrificing performance. Providing quality of service guarantees is particularly challenging in the dynamic cloud environment, due to the time-varying bandwidth of the access links, the ever-changing available processing capacity at each server, and the time-varying data volume of each VM. Thus, we propose a mobile cloud architecture that brings the cloud closer to the user, together with online VM migration policies spanning from fully uncoordinated ones, in which each user or server autonomously makes its migration decisions, up to cloud-wide ones.

Nevertheless, the transceiver is one of the most power-consuming components of a mobile wireless device. Since the medium access layer controls when a transmission takes place, it has a significant impact on overall energy consumption and consequently on the lifetime of a device. In this direction, we investigate the potential of sleep modes when several wireless devices compete for medium access. In order to characterize the resulting energy-throughput trade-off, we calculate the optimal throughput under energy constraints and we model contention for the wireless medium as a non-cooperative game. The strategy of each user consists of its access probability and its sleep mode schedule. We show that the resulting game has a unique Nash equilibrium point and that energy constraints reduce the negative impact of selfish behaviour, leading to a bounded price of anarchy. We also devise a modified medium access scheme, in which the state of the medium can be sampled at the beginning of each frame, and show that it leads to improved exploitation of the medium without any explicit cooperation.

Finally, we move to a scenario where concurrent transmissions over the same channel are not destructive but lead to reduced performance due to interference. In this context, we consider the problem of joint relay assignment and power control. We develop interference-aware sum-rate maximization algorithms that make use of a bipartite maximum weight matching formulation of the problem and geometric programming, and that are amenable to distributed implementation. We also identify the importance of interference for cell-edge users in cellular networks and demonstrate that our schemes bring together two main features of 4G systems, namely interference management and relaying.
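To make the demand-response idea concrete, the toy sketch below schedules a single deferrable appliance into the cheapest contiguous block of hours given a day-ahead price vector. It is a hypothetical illustration of price-responsive scheduling, not the thesis's actual NP-hard scheduling model or its pricing mechanism.

```python
# Toy illustration of price-responsive demand scheduling (hypothetical;
# not the thesis's model): place a deferrable appliance that must run for
# `duration` consecutive hours into the cheapest contiguous block of hours.
from typing import List, Tuple

def cheapest_start(prices: List[float], duration: int) -> Tuple[int, float]:
    """Return (start_hour, total_cost) minimizing the cost of a contiguous run."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(prices) - duration + 1):
        cost = sum(prices[start:start + duration])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# Hypothetical day-ahead prices (EUR/kWh) with an evening peak.
prices = [0.10, 0.09, 0.08, 0.08, 0.09, 0.11, 0.14, 0.18,
          0.20, 0.19, 0.17, 0.16, 0.15, 0.15, 0.16, 0.18,
          0.22, 0.26, 0.28, 0.25, 0.20, 0.15, 0.12, 0.10]
start, cost = cheapest_start(prices, duration=3)   # e.g., a dishwasher cycle
print(f"Run from hour {start} to {start + 2}, cost {cost:.2f}")
```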


Author(s):  
Jinsen Zhuang ◽  
Yan Zhou ◽  
Yonghui Xia

This paper concerns the impact of stochastic perturbations on the intra-layer synchronization of duplex networks. A duplex network contains two layers ([1,2]). Different from previous works, environmental noise is introduced into the dynamical system of the duplex network. We incorporate both the inter-layer delay and the intra-layer delay into the dynamical system, and both delays are time-varying. By contrast, [1] considered only intra-layer delays and assumed them to be constant, while [2] considered neither the inter-layer nor the intra-layer delay. When the system does not achieve intra-layer synchronization automatically, we introduce two controllers: one is a state-feedback controller, the other is an adaptive state-feedback controller. Interestingly, we find that intra-layer synchronization is achieved automatically if the inter-layer coupling strength $c_1$ is large enough when the time-varying inter-layer delays are absent. Finally, some interesting simulation results are obtained for the Chua-Chua chaotic system by applying our theoretical results, which show the feasibility and effectiveness of our control schemes.
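For context, a generic form of the controlled, noisy node dynamics studied in this line of work (notation illustrative only, not taken from the paper) is

\begin{equation}
\mathrm{d}x_i^{(k)}(t) = \Big[ f\big(x_i^{(k)}(t)\big)
  + c_2 \sum_{j=1}^{N} a_{ij}^{(k)} \Gamma x_j^{(k)}\big(t - \tau_1(t)\big)
  + c_1 \Gamma x_i^{(3-k)}\big(t - \tau_2(t)\big)
  + u_i^{(k)}(t) \Big]\,\mathrm{d}t
  + \sigma_i^{(k)}\big(t, x_i^{(k)}(t)\big)\,\mathrm{d}W_i(t), \qquad k \in \{1, 2\},
\end{equation}

where $\tau_1(t)$ and $\tau_2(t)$ are the time-varying intra- and inter-layer delays, $c_1$ is the inter-layer coupling strength discussed above, $u_i^{(k)}$ is the (possibly adaptive) state-feedback control input, and the $\mathrm{d}W_i$ term models the environmental noise.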

