Internet panels
Recently Published Documents

Total documents: 10 (five years: 3)
H-index: 5 (five years: 0)

Author(s): Marco Angrisani, Anya Samek, Arie Kapteyn

The number of data sources available for academic research on retirement economics and policy has increased rapidly in the past two decades. Data quality and comparability across studies have also improved considerably, with survey questionnaires progressively converging towards common ways of eliciting the same measurable concepts. Probability-based Internet panels have become a more accepted and recognized tool to obtain research data, allowing for fast, flexible, and cost-effective data collection compared to more traditional modes such as in-person and phone interviews. In an era of big data, academic research has also increasingly been able to access administrative records (e.g., Kostøl and Mogstad, 2014; Cesarini et al., 2016), private-sector financial records (e.g., Gelman et al., 2014), and administrative data married with surveys (Ameriks et al., 2020), to answer questions that could not be successfully tackled otherwise.


2019, Vol 29 (2), pp. 234-242
Author(s): Mandy Ryan, Emmanouil Mentzakis, Catriona Matheson, Christine Bond

2019, Vol 8 (1), pp. 62-88
Author(s): Jennifer Unangst, Ashley E Amaya, Herschel L Sanders, Jennifer Howard, Abigail Ferrell, ...

Abstract As survey methods evolve, researchers require a comprehensive understanding of the error sources in their data. Comparative studies, which assess differences between the estimates from emerging survey methods and those from traditional surveys, are a popular tool for evaluating total error; however, they do not provide insight on the contributing error sources themselves. The Total Survey Error (TSE) framework is a natural fit for evaluations that examine survey error components across multiple data sources. In this article, we present a case study that demonstrates how the TSE framework can support both qualitative and quantitative evaluations comparing probability and nonprobability surveys. Our case study focuses on five internet panels that are intended to represent the US population and are used to measure health statistics. For these panels, we analyze the total survey error in two ways: (1) using a qualitative assessment that describes how panel construction and management methods may introduce error and (2) using a quantitative assessment that estimates and partitions the total error for two probability-based panels into coverage error and nonresponse error. This work can serve as a “proof of concept” for how the TSE framework may be applied to understand and compare the error structure of probability and nonprobability surveys. For those working specifically with internet panels, our findings will further provide an example of how researchers may choose the panel option best suited to their study aims and help vendors prioritize areas of improvement.
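The quantitative partition described in this abstract lends itself to a simple difference-of-means illustration. The sketch below is not taken from the article: the population, coverage rates, and response rates are all invented, and it only shows how a total error can be split into a coverage component (sampling frame vs. population) and a nonresponse component (respondents vs. frame).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 100,000 people with a binary health indicator.
population = rng.binomial(1, 0.30, size=100_000).astype(float)

# Panel frame: people the probability-based panel could reach (~85% coverage),
# with coverage mildly related to the outcome so that coverage error exists.
covered = rng.random(population.size) < np.where(population == 1, 0.80, 0.86)
frame = population[covered]

# Respondents: frame members who actually answer (~50% response), again
# allowed to depend on the outcome so that nonresponse error exists.
responded = rng.random(frame.size) < np.where(frame == 1, 0.45, 0.52)
respondents = frame[responded]

benchmark = population.mean()        # the "truth" we compare against
frame_mean = frame.mean()
respondent_mean = respondents.mean()

total_error = respondent_mean - benchmark
coverage_error = frame_mean - benchmark           # frame vs. full population
nonresponse_error = respondent_mean - frame_mean  # respondents vs. frame

print(f"total error    {total_error:+.4f}")
print(f"  coverage     {coverage_error:+.4f}")
print(f"  nonresponse  {nonresponse_error:+.4f}")
# By construction the two components sum to the total error.
```

The accounting identity at the end mirrors the kind of coverage-plus-nonresponse partition the abstract describes for the two probability-based panels.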


2016, Vol 35 (4), pp. 498-520
Author(s): Annelies G. Blom, Jessica M. E. Herzing, Carina Cornesse, Joseph W. Sakshaug, Ulrich Krieger, ...

The past decade has seen a rise in the use of online panels for conducting survey research. However, the popularity of online panels, largely driven by relatively low implementation costs and high rates of Internet penetration, has been met with criticisms regarding their ability to accurately represent their intended target populations. This criticism largely stems from the fact that (1) non-Internet (or offline) households, despite their relatively small size, constitute a highly selective group unaccounted for in Internet panels, and (2) the preeminent use of nonprobability-based recruitment methods likely contributes a self-selection bias that further compromises the representativeness of online panels. In response to these criticisms, some online panel studies have taken steps to recruit probability-based samples of individuals and to provide them with the means to participate online. Using data from one such study, the German Internet Panel, this article investigates the impact of including offline households in the sample on the representativeness of the panel. Consistent with studies in other countries, we find that the exclusion of offline households produces significant coverage biases in online panel surveys, and the inclusion of these households in the sample improves the representativeness of the survey despite their lower propensity to respond.
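As a toy illustration of the coverage argument (not the German Internet Panel analysis; the age categories and shares below are invented), one can compare the composition of an online-only sample and of the combined online-plus-offline sample against a population benchmark using a crude average-absolute-bias measure:

```python
# Hypothetical illustration of coverage bias from dropping offline households.
# The category shares below are invented, not GIP estimates.
benchmark   = {"18-34": 0.26, "35-54": 0.34, "55-69": 0.24, "70+": 0.16}
online_only = {"18-34": 0.31, "35-54": 0.37, "55-69": 0.22, "70+": 0.10}
combined    = {"18-34": 0.27, "35-54": 0.35, "55-69": 0.23, "70+": 0.15}

def avg_abs_bias(sample: dict, bench: dict) -> float:
    """Mean absolute difference (in percentage points) across categories."""
    return 100 * sum(abs(sample[k] - bench[k]) for k in bench) / len(bench)

print(f"online only      {avg_abs_bias(online_only, benchmark):.1f} pp")
print(f"online + offline {avg_abs_bias(combined, benchmark):.1f} pp")
# Folding in the small but distinct offline group pulls the sample
# composition back toward the benchmark.
```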


2016, Vol 9 (1), pp. 162-167
Author(s): Melissa G. Keith, Peter D. Harms

Although we share Bergman and Jean's (2016) concerns about the representativeness of samples in the organizational sciences, we are mindful of the ever-changing nature of the job market. New jobs are created through technological innovation while others become obsolete and disappear or are functionally transformed. These shifts in employment patterns produce both opportunities and challenges for organizational researchers addressing the problem of representativeness in our working-population samples. On one hand, it is understood that whatever we do, we will always be playing catch-up with the market. On the other hand, it is possible that we can leverage new technologies to react to such changes more quickly. As an example, in their commentary, Bergman and Jean suggested making use of crowdsourcing websites or Internet panels to gain access to undersampled populations. Although we agree there is an opportunity to conduct much research of interest to organizational scholars in these settings, we would also point out that these types of samples come with their own sampling challenges. To illustrate these challenges, we examine sampling issues for Amazon's Mechanical Turk (MTurk), currently the most widely used portal for psychologists and organizational scholars collecting human-subjects data online. Specifically, we examine whether MTurk workers are “workers” as defined by Bergman and Jean, whether MTurk samples are WEIRD (Western, educated, industrialized, rich, and democratic; Henrich, Heine, & Norenzayan, 2010), and how researchers may creatively utilize the sample characteristics.


Methodology, 2015, Vol 11 (3), pp. 81-88
Author(s): Suzette M. Matthijsse, Edith D. de Leeuw, Joop J. Hox

Abstract. Most web surveys collect data through nonprobability or opt-in online panels, which are characterized by self-selection. A concern in online research is the emergence of professional respondents, who participate in surveys frequently and mainly for the incentives. This study investigates whether professional respondents can be distinguished in online panels and whether they provide lower-quality data than nonprofessionals. We analyzed a data set from the NOPVO (Netherlands Online Panel Comparison) study, which includes 19 panels that together capture 90% of the respondents in online market research in the Netherlands. Latent class analysis showed that four types of respondents can be distinguished, ranging from the professional respondent to the altruistic respondent. A profile of professional respondents is depicted. Professional respondents appear not to be a great threat to data quality.
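For readers unfamiliar with the method, latent class analysis of this kind groups respondents by their pattern of categorical indicators (for example, survey-taking habits). The sketch below is a minimal EM implementation for binary indicators with two classes rather than the four reported in the study; the indicator names and simulated data are hypothetical and serve only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical binary indicators per panelist, e.g.
# [many_panel_memberships, completes_weekly, cites_incentives, speeds_through].
# Data are simulated from two invented prototypes to give the EM something to fit.
proto = np.array([[0.9, 0.9, 0.8, 0.6],   # "professional"-like pattern
                  [0.2, 0.3, 0.2, 0.1]])  # "altruistic"-like pattern
labels = rng.integers(0, 2, size=500)
X = (rng.random((500, 4)) < proto[labels]).astype(float)

def lca_em(X, n_classes=2, n_iter=200, seed=0):
    """Plain EM for a latent class model with binary indicators."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(n_classes, 1 / n_classes)           # class shares
    theta = rng.uniform(0.25, 0.75, (n_classes, m))  # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior class probabilities for each respondent.
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class shares and item probabilities.
        nk = resp.sum(axis=0)
        pi = nk / n
        theta = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, resp

pi, theta, resp = lca_em(X, n_classes=2)
print("estimated class shares:", np.round(pi, 2))
print("estimated item probabilities per class:\n", np.round(theta, 2))
```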


2015, Vol 47 (3), pp. 685-690
Author(s): Ron D. Hays, Honghu Liu, Arie Kapteyn

2014, Vol 30 (2), pp. 291-310
Author(s): Matthias Schonlau, Beverly Weidmer, Arie Kapteyn

Abstract Respondent-driven sampling (RDS) is a network sampling technique typically employed for hard-to-reach populations when traditional sampling approaches are not feasible (e.g., homeless populations) or do not work well (e.g., people with HIV). In RDS, seed respondents recruit additional respondents from their network of friends. The recruiting process repeats iteratively, thereby forming long referral chains. RDS is typically implemented face to face in individual cities. In contrast, we conducted Internet-based RDS in the American Life Panel (ALP), a web survey panel, targeting the general US population. We found that when friends are selected at random, as RDS methodology requires, recruiting chains die out. When respondents instead self-select which friends to recruit, the self-selected friends tend to be older than randomly selected friends but otherwise share the same demographic characteristics. Using randomized experiments, we also found that respondents list more friends when the respondent’s number of friends is preloaded from an earlier question. The results suggest that, with careful selection of parameters, RDS can be used to recruit population-wide Internet panels, and we discuss a number of elements that are critical for success.
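The chain die-out result has a simple branching-process intuition: if each respondent yields, on average, fewer than one successful recruit, referral chains almost surely terminate. The toy simulation below (not the ALP design; the invitation count and acceptance probabilities are invented) illustrates that threshold behavior.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_chain(n_invites=3, p_join=0.25, max_waves=20):
    """Follow one seed's referral chain; return the number of waves it survives."""
    active = 1  # the seed
    for wave in range(max_waves):
        # Each active respondent sends n_invites invitations; each invitation
        # is accepted independently with probability p_join.
        active = rng.binomial(n_invites, p_join, size=active).sum()
        if active == 0:
            return wave + 1
    return max_waves

for p_join in (0.25, 0.40):
    waves = [simulate_chain(p_join=p_join) for _ in range(2_000)]
    mean_offspring = 3 * p_join
    print(f"mean recruits per respondent {mean_offspring:.2f}: "
          f"median chain length {int(np.median(waves))} waves, "
          f"{np.mean([w == 20 for w in waves]):.0%} still alive at wave 20")
```

With fewer than one recruit per respondent on average, chains collapse within a few waves; pushing the acceptance rate above that threshold lets a noticeable share of chains keep growing.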


2009, Vol 2 (6), pp. 1-6
Author(s): Gerty J.L.M. Lensvelt-Mulders, Peter J. Lugtig, Marianne Hubregtse
