Does the privacy paradox exist? Comment on Yu et al.’s (2020) meta-analysis

2021
Vol 5
Author(s):  
Tobias Dienlin ◽  
Ye Sun

In their meta-analysis on how privacy concerns and perceived privacy risk are related to online disclosure intention and behavior, Yu et al. (2020) conclude that “the ‘privacy paradox’ phenomenon [...] exists in our research model” (p. 8). In this comment, we contest this conclusion and present evidence and arguments against it. We find five areas of problems: (1) flawed logic of hypothesis testing; (2) erroneous and implausible results; (3) the questionable decision to use only the direct effect of privacy concerns on disclosure behavior as evidence in testing the privacy paradox; (4) overinterpretation of results from MASEM; (5) insufficient reporting and lack of transparency. To guide future research, we offer three recommendations: going beyond mere null hypothesis significance testing, probing alternative theoretical models, and implementing open science practices. While we value this meta-analytic effort, we caution its readers that, contrary to the authors’ claim, it does not offer evidence in support of the privacy paradox.

2020
Author(s):  
Tobias Dienlin ◽  
Ye Sun

In their meta-analysis on how privacy concerns and perceived privacy risks are related to online disclosure intention and behavior, Yu et al. (2020) conclude that “the ‘privacy paradox’ phenomenon [...] exists in our research model” (p. 8). In this comment, we contest this conclusion and present evidence and arguments against it. We find three areas of problems: (1) flawed logic of hypothesis testing; (2) erroneous and implausible results; (3) the questionable decision to use only the direct effect of privacy concerns on disclosure behavior as evidence in testing the privacy paradox. In light of these issues, and to help guide future research, we propose a research agenda for the privacy paradox. We encourage researchers to (1) go beyond null hypothesis significance testing (NHST), (2) engage in open science practices, (3) refine theoretical explications, (4) consider confounding, mediating, and boundary variables, and (5) improve the rigor of causal inference. Overall, while we value this meta-analytic effort by Yu et al., we caution its readers that, contrary to the authors’ claim, it does not offer evidence in support of the privacy paradox.
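Both versions of the comment urge researchers to go beyond NHST. One concrete way to do so (an illustration of the general idea, not necessarily the approach the authors have in mind) is equivalence testing: instead of asking whether a pooled concerns–behavior correlation differs from zero, one asks whether it is demonstrably smaller than the smallest effect size of interest. A minimal Python sketch, assuming a hypothetical pooled r, sample size, and equivalence bounds of |r| = .10:

```python
import numpy as np
from scipy import stats

def tost_correlation(r, n, bounds=(-0.10, 0.10)):
    """Two one-sided tests (TOST) for a correlation via Fisher's z.

    Declares 'practical equivalence' only if r is significantly above
    the lower bound AND significantly below the upper bound. The r, n,
    and bounds used below are illustrative placeholders.
    """
    se = 1.0 / np.sqrt(n - 3)                        # SE of Fisher's z
    z_r = np.arctanh(r)                              # Fisher z-transform
    z_low, z_high = np.arctanh(bounds[0]), np.arctanh(bounds[1])
    p_lower = 1 - stats.norm.cdf((z_r - z_low) / se)   # H0: rho <= lower
    p_upper = stats.norm.cdf((z_r - z_high) / se)      # H0: rho >= upper
    return max(p_lower, p_upper)                     # TOST p-value

# Hypothetical pooled concerns-behavior correlation
p = tost_correlation(r=-0.06, n=5000, bounds=(-0.10, 0.10))
print(f"TOST p = {p:.4f}  (p < .05 -> effect within |r| = .10)")
```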


2021
pp. 174569162098447
Author(s):  
Robert Körner ◽  
Lukas Röseler ◽  
Astrid Schütz

We offer a critical perspective on the meta-analysis by Elkjær et al. (2020) by pointing out three constraints: the first refers to open science practices, the second addresses the selection of studies, and the third offers a broader theoretical perspective. We argue that preregistration and adherence to the highest standards of conducting meta-analyses are important. Further, we identified several missing studies. Regarding the theoretical perspective, we suggest that it may be useful to tie body positions into the dominance-prestige framework and, on that basis, to distinguish two types of body positions. Such an approach has the potential to account for discrepancies in previous meta-analytic evidence regarding the effects of expansive versus contractive nonverbal displays. Future research may thus be able to provide not only methodological but also theoretical innovations to the field of body positions.


2017
Vol 13 (2)
pp. 106-132
Author(s):  
Satish Kumar ◽  
Sisira Colombage ◽  
Purnima Rao

Purpose: The purpose of this paper is to assess the status of research on capital structure determinants over the past 40 years. The paper highlights the major gaps in the literature on determinants of capital structure and raises specific questions for future research.

Design/methodology/approach: The prominence of research is assessed by examining the year of publication, region, level of economic development, firm size, data collection methods, data analysis techniques, and theoretical models of capital structure in the selected papers. The review is based on 167 papers published from 1972 to 2013 in various peer-reviewed journals. The relationships between determinants of capital structure and leverage are analyzed with the help of meta-analysis.

Findings: Major findings show an increase of interest in research on the determinants of capital structure of firms located in emerging markets. However, these regions remain under-examined, which leaves considerable scope for both empirical and survey-based research. The majority of studies examine large firms using secondary data and regression-based models, whereas studies on small firms remain very meager. As most papers operate only at the organizational level, the impact of leverage across industries is yet to be examined. The review highlights the major determinants of capital structure and their relationship with leverage. It also reveals the dominance of pecking order theory in explaining the capital structure of firms both theoretically and statistically.

Originality/value: The paper covers a considerable period of time (1972-2013). Among the few review papers on capital structure research, to the best of the authors’ knowledge, this is the first review to identify what is missing in the literature on the determinants of capital structure while offering recommendations for future studies. It also synthesizes the findings of empirical studies on determinants of capital structure statistically.
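The statistical synthesis mentioned above typically pools correlations between a given determinant and leverage across studies. A minimal sketch of the standard random-effects (DerSimonian-Laird) procedure on Fisher-z-transformed correlations; the study values (e.g., a hypothetical profitability-leverage relation, which pecking order theory predicts to be negative) are invented for illustration:

```python
import numpy as np

def pool_correlations_dl(rs, ns):
    """Random-effects (DerSimonian-Laird) pooling of correlations via
    Fisher's z. The study-level rs and ns passed in below are
    illustrative, not values taken from the review."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                  # Fisher z per study
    v = 1.0 / (ns - 3)                  # within-study variance
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)
    z_re = np.sum(w_star * z) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    lo, hi = np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se)
    return np.tanh(z_re), (lo, hi)

# Hypothetical profitability-leverage correlations from five studies
r, ci = pool_correlations_dl([-0.25, -0.18, -0.30, -0.10, -0.22],
                             [120, 340, 95, 410, 210])
print(f"pooled r = {r:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```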


2020
Vol 11
Author(s):  
Cristiane Souza ◽  
Margarida V. Garrido ◽  
Joana C. Carmo

Common objects comprise the living and non-living things people interact with in their daily lives. Images depicting common objects are extensively used in different fields of research and intervention, such as linguistics, psychology, and education. Nevertheless, their adequate use requires the consideration of several factors (e.g., item differences, cultural context, and confounding correlated variables) and careful validation procedures. The current study presents a systematic review of the available published norms for images of common objects. A systematic search following PRISMA guidelines indicated that despite their extensive use, the production of norms for such stimuli with adult populations is quite limited (N = 55), particularly for more ecological images such as photographs (N = 14). Among the several dimensions on which the items were assessed, the most commonly reported in our sample were familiarity, visual complexity, and name agreement, illustrating some consistency across the reported dimensions while also indicating the limited examination of other potentially relevant dimensions for image processing. The lack of normative studies simultaneously examining affective, perceptual, and semantic dimensions was also documented. The number of such normative studies has been increasing in recent years, with publication in relevant peer-reviewed journals, and their datasets and norms have been complying with current open science practices. Nevertheless, they are still scarcely cited and replicated in different linguistic and cultural contexts. The current study makes important theoretical contributions by characterizing images of common objects and their culturally based norms, while highlighting several features that are likely to be relevant for future stimulus selection and evaluation procedures. The systematic scrutiny of these normative studies is likely to stimulate the production of new, robust, and contextually relevant normative datasets and to provide tools for enhancing the quality of future research and intervention.
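Of the dimensions named above, name agreement has a long-standing conventional quantification: the percentage of participants producing the modal name, plus the information-theoretic H index of Snodgrass and Vanderwart (1980). A minimal sketch with hypothetical naming responses for a single picture:

```python
import math
from collections import Counter

def name_agreement(responses):
    """Name-agreement norms for one picture: percentage of the modal
    name and the H index (Snodgrass & Vanderwart, 1980). H = 0 means
    perfect agreement; larger H means more alternative names."""
    counts = Counter(responses)
    total = len(responses)
    modal_pct = 100.0 * max(counts.values()) / total
    # H = sum over names of p * log2(1/p)
    h = sum((c / total) * math.log2(total / c) for c in counts.values())
    return modal_pct, h

# Hypothetical responses from 20 participants naming one photograph
responses = ["couch"] * 14 + ["sofa"] * 5 + ["settee"]
pct, h = name_agreement(responses)
print(f"modal name agreement = {pct:.0f}%, H = {h:.2f} bits")
```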


2018
Author(s):  
Gerit Pfuhl ◽  
Jon Grahe

Recent years have seen a revolution in publishing and broad support for open access publishing. Acceptance of, and transition to, other open science principles such as open data, open materials, and preregistration has been slower. To accelerate the transition and make open science the new standard, the Collaborative Replications and Education Project (CREP; http://osf.io/wfc6u/) was launched in 2013, hosted on the Open Science Framework (osf.io), where each individual contributor's project collects partial data, much as a preprint makes work public before completion. CREP introduces open science at the start of academic research, facilitating student research training in open science and solidifying behavioral science results. The CREP team attempts to achieve this by inviting contributors to replicate one of several studies selected for scientific impact and for suitability for undergraduates to complete during one academic term. Contributors follow clear protocols, and students interact with a CREP team that reviews the materials and a video of the procedure to ensure quality data collection while students learn scientific practices and methods. By combining multiple replications from undergraduates across the globe, the findings can be pooled in a meta-analysis and so contribute to generalizable and replicable research findings. CREP is careful not to interpret any single result. CREP has recently joined forces with the Psychological Science Accelerator (PsySciAcc), a globally distributed network of psychological laboratories accelerating the accumulation of reliable and generalizable results in the behavioral sciences. The Department of Psychology at UiT is part of the network and has two ongoing CREP studies, maintaining open science practices early on. In this talk, we will present our experiences of conducting transparent, replicable research, and our experience with preprints from a supervisor and researcher perspective.
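The pooling step CREP relies on is a standard inverse-variance meta-analysis. A minimal sketch, assuming each classroom replication reports a Cohen's d with two group sizes; all values are placeholders, and a real analysis would also examine heterogeneity before pooling:

```python
import numpy as np

def pool_replications(ds, n1s, n2s):
    """Inverse-variance (fixed-effect) pooling of Cohen's d across
    multiple replications of the same study. Inputs are illustrative
    placeholders, not CREP data."""
    ds, n1s, n2s = (np.asarray(a, float) for a in (ds, n1s, n2s))
    # Large-sample variance of d for a two-group design
    v = (n1s + n2s) / (n1s * n2s) + ds ** 2 / (2 * (n1s + n2s))
    w = 1.0 / v
    d = np.sum(w * ds) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Five hypothetical classroom replications of the same effect
d, ci = pool_replications([0.42, 0.15, 0.31, -0.05, 0.28],
                          [25, 30, 22, 28, 26], [24, 31, 20, 27, 25])
print(f"pooled d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```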


Author(s):  
David McGiffin ◽  
Geoff Cumming ◽  
Paul Myles

Null hypothesis significance testing (NHST) and p-values are widespread in the cardiac surgical literature but are frequently misunderstood and misused. The purpose of this review is to discuss the major disadvantages of p-values and suggest alternatives. We describe diagnostic tests, the prosecutor’s fallacy in the courtroom, and NHST, which involve inter-related conditional probabilities, to help clarify the meaning of p-values, and we discuss the enormous sampling variability, or unreliability, of p-values. Finally, we use a cardiac surgical database and simulations to explore further issues involving p-values. In clinical studies, p-values provide a poor summary of the observed treatment effect, whereas the three-number summary provided by effect estimates and confidence intervals is more informative and minimises over-interpretation of a “significant” result. P-values are an unreliable measure of the strength of evidence; if used at all, they give, at best, only a very rough guide to decision making. Researchers should adopt Open Science practices to improve the trustworthiness of research and, where possible, use estimation (three-number summaries) or other better techniques.
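The contrast the review draws can be made concrete in code: a bare p-value versus the three-number summary (effect estimate plus lower and upper confidence limits). A sketch on simulated two-group data; the groups, outcome, and values are invented for illustration and are not from the cardiac surgical database:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated outcomes for two hypothetical patient groups (not real data)
a = rng.normal(loc=5.0, scale=2.0, size=80)   # e.g., ICU days, treatment
b = rng.normal(loc=5.8, scale=2.0, size=80)   # e.g., ICU days, control

n1, n2 = a.size, b.size
# Pooled standard error, matching the classical equal-variance t-test
sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
diff = a.mean() - b.mean()
t_crit = stats.t.ppf(0.975, n1 + n2 - 2)
p = stats.ttest_ind(a, b).pvalue

# The p-value alone says little about the size of the treatment effect...
print(f"p = {p:.3f}")
# ...whereas the three-number summary shows both magnitude and precision:
print(f"difference = {diff:.2f} days, "
      f"95% CI [{diff - t_crit * se:.2f}, {diff + t_crit * se:.2f}]")
```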


2020
pp. 027112141989972
Author(s):  
Collin Shepley ◽  
Jennifer Grisham-Brown ◽  
Justin D. Lane

Multitiered systems of support provide a framework for matching the needs of a struggling student with an appropriate intervention. Experimental evaluations of tiered support systems in grade schools have been conducted for decades but have been examined less frequently in early childhood contexts. A recent meta-analysis of multitiered systems of support in preschool settings exclusively synthesized outcomes from group-design studies. The current review extends that work by synthesizing single-case research examining interventions implemented within tiered support system frameworks in preschool settings. Our data indicate that single-case evaluations of tiered support systems neither frequently meet contemporary standards of rigor nor consistently identify functional relations. Recommendations and considerations for future research are discussed. Copies of completed coding tables, syntax, and supplemental tables referenced throughout the manuscript may be obtained via the Open Science Framework at https://osf.io/ghptw/.
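For readers unfamiliar with how single-case effects are quantified, one widely used index is the Non-overlap of All Pairs (NAP), which compares every baseline observation with every intervention observation. The sketch below is purely illustrative and is not necessarily the metric applied in this review; the session data are hypothetical:

```python
from itertools import product

def nap(baseline, intervention):
    """Non-overlap of All Pairs (NAP): the proportion of all
    baseline/intervention pairs in which the intervention point
    exceeds the baseline point, with ties counting half. One of
    several single-case effect indices."""
    pairs = list(product(baseline, intervention))
    score = sum(1.0 if t > b else 0.5 if t == b else 0.0
                for b, t in pairs)
    return score / len(pairs)

# Hypothetical counts of correct responses across sessions
baseline = [2, 3, 2, 4, 3]
intervention = [5, 6, 4, 7, 6, 8]
print(f"NAP = {nap(baseline, intervention):.2f}")  # 1.0 = no overlap
```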


2020
Author(s):  
Diana Eugenie Kornbrot

Open Science advocates recommend depositing stimuli, data, and code sufficient to support all assertions in a scientific manuscript. Most ‘respectable’ journals and funding bodies have endorsed Open Science, i.e., they ‘talk the talk’. Nevertheless, most published manuscripts do not ‘walk the walk’ by following the Open Science guidelines. Professional statistical bodies, e.g., the American Statistical Association and the Royal Statistical Society, provide guidance on reporting inferential statistics that proscribes null-hypothesis significance tests. This guidance is also widely ignored. The purpose of this manuscript is to increase the proportion of manuscripts following open science practices by providing guides to transparent reporting that are easily usable by authors and reviewers. The manuscript comprises the guides themselves, already public, and a rationale for why the recommendations were chosen, together with suggestions to promote open science practices. The guides are unique in covering, in a single document, the three main phases of conducting replicable science: planning and execution; manuscript generation and publication; and deposit of supplementary materials. A main aim of the manuscript is to subject the guidance and its justifications to peer review.


Author(s):  
Sophia C. Weissgerber ◽  
Matthias Brunmair ◽  
Ralf Rummer

In the 2018 meta-analysis in Educational Psychology Review entitled “Null effects of perceptual disfluency on learning outcomes in a text-based educational context” by Xie, Zhou, and Liu, we identify some errors and inconsistencies in both the methodological approach and the reported results regarding coding and effect sizes. While from a technical point of view the meta-analysis aligns with current meta-analytical guidelines (e.g., PRISMA) and conforms to general meta-analytical requirements (e.g., considering publication bias), it exemplifies certain insufficient practices in the creation and review of meta-analyses. We criticize the lack of transparency and the neglect of open science practices in the generation and reporting of results, which complicate evaluation of meta-analytical reproducibility, especially given the flexibility in subjective choices regarding the analytical approach and the flexibility in creating the database. Here we present a framework applicable to pre- and post-publication review for improving the Methods Reproducibility of meta-analyses. Based on considerations of the transparency and openness promotion (TOP) guidelines (Nosek et al., Science 348:1422–1425, 2015), the Reproducibility Enhancement Principles (REP; Stodden et al., Science 354:1240–1241, 2016), and recommendations by Lakens et al. (BMC Psychology 4: Article 24, 2016), we outline Computational Reproducibility (Level 1), Computational Verification (Level 2), Analysis Reproducibility (Level 3), and Outcome Reproducibility (Level 4). Applying reproducibility checks to TRANSFER performance as the chosen outcome variable, we found Xie and colleagues’ results to be (rather) robust. Yet, regarding RECALL performance and the moderator analysis, the identified problems raise doubts about the credibility of the reported results.
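Level 1, Computational Reproducibility, can be operationalized as a simple check: re-run the pooled estimate from the deposited coding table and compare it with the value reported in the manuscript. A minimal sketch; the table values, column names, and tolerance are hypothetical stand-ins, not data from Xie et al.:

```python
import numpy as np
import pandas as pd

# Stand-in for a deposited coding table; values and columns are invented
coded = pd.DataFrame({
    "g":     [0.05, -0.10, 0.02, 0.08, -0.04],    # per-study Hedges' g
    "var_g": [0.020, 0.015, 0.030, 0.025, 0.018]  # sampling variances
})

def reproduce_pooled_effect(df, reported_g, tol=0.01):
    """Level-1 (computational reproducibility) check: recompute the
    fixed-effect pooled Hedges' g from the coding table and compare
    it against the value reported in the manuscript."""
    w = 1.0 / df["var_g"]
    g = float(np.sum(w * df["g"]) / np.sum(w))
    ok = abs(g - reported_g) <= tol
    print(f"recomputed g = {g:.3f} vs reported g = {reported_g:.3f}: "
          f"{'match' if ok else 'MISMATCH'}")
    return ok

reproduce_pooled_effect(coded, reported_g=-0.01)  # a near-null effect
```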


Author(s):  
Megan Potterbusch ◽  
Gaetano R Lotrecchiano

Aim/Purpose: This paper explores the implications of machine-mediated communication for human interaction in cross-disciplinary teams. The authors explore the relationships between Open Science Theory, its contributions to team science, and the opportunities and challenges associated with adopting open science principles.

Background: Open Science Theory touches many aspects of human interaction throughout the scholarly life cycle and can be seen in action through various technologies, each of which typically addresses only one such aspect. By serving multiple aspects of Open Science Theory at once, the Open Science Framework (OSF) serves as an exemplar technology. As such, it illustrates how Open Science Theory can inform and expand cognitive and behavioral dynamics in teams at multiple levels in a single tool.

Methodology: This concept paper provides a theoretical rationale and recommendations for exploring the connections between an open science paradigm and the dynamics of team communication. Theory and evidence have been culled to initiate a synthesis of the nascent literature, current practice, and theory.

Contribution: This paper aims to illuminate the shared goals of open science and the study of teams by focusing on science team activities (data management, methods, algorithms, and outputs) as focal objects for further combined study.

Findings: Team dynamics and characteristics shape the success of human/machine-assisted interactions through mediators such as workflow culture, attitudes about ownership of knowledge, readiness to share openly, shifts from group-driven to user-driven functionality and from group-organizing to self-organizing structures, and the development of trust as teams navigate between traditional and open science dissemination.

Recommendations for Practitioners: Participation in open science practices through machine-assisted technologies in team projects and scholarship should be encouraged.

Recommendation for Researchers: The information provided highlights areas in need of further study in team science, as well as new primary sources of material for the study of teams using machine-assisted methods in their work.

Impact on Society: As researchers take on more complex social problems, new technology and open science practices can complement the work of diverse stakeholders while also providing opportunities to broaden impact and intensify scholarly contributions.

Future Research: Investigation into the cognitive and behavioral research conducted with teams that employ machine-assisted technologies in their workflows would help researchers better understand the relationships between intelligent machines and science teams’ impacts on their communities, as well as the paradigmatic shifts inherent in utilizing these technologies.

