‘Unfocused groups’: lessons learnt amid remote focus groups in the Philippines

2021 ◽  
Vol 9 (Suppl 1) ◽  
pp. e001098
Author(s):  
Mila F Aligato ◽  
Vivienne Endoma ◽  
Jonas Wachinger ◽  
Jhoys Landicho-Guevarra ◽  
Thea Andrea Bravo ◽  
...  

The ongoing COVID-19 pandemic has required tremendous shifts in data collection techniques. While an emerging body of research has described experiences conducting remote interviews, less attention has been paid to focus group discussions (FGDs). Herein, we present experiences conducting remote FGDs (n=9) with healthcare workers and caretakers of small children in the Philippines. We used ‘Facebook Messenger Room’ (FBMR), the preferred platform of participants. Despite some success, we generally encountered considerable challenges in terms of recruiting, retaining and moderating remote FGDs, particularly among caretakers of small children. Finding a quiet, private place proved unfeasible for many participants, who were juggling family demands in tight, locked down quarters. Connectivity issues and technological missteps compromised the flow of FGDs and minimised the ability to share and compare opinions. For the research team, remote FGDs resulted in a dramatic role shift for notetakers—from being passive observers to active tech supporters, chatbox referees and co-moderators (when audio disruptions occurred). Finally, we note that remote FGDs via FBMR are associated with ethical complexities, particularly as participants often chose to use their personal Facebook accounts, which can compromise anonymity. We developed and continuously refined strategies to mitigate challenges, but ultimately decided to forgo FGDs. We urge fellow researchers with more successful experiences to guide the field in terms of capturing high-quality data that respond to research questions, while also contending with privacy concerns, both in online spaces, as well as physical privacy despite lockdowns in tight quarters.

2020 ◽  
Vol 4 (4) ◽  
pp. 354-359
Author(s):  
Ari Ercole ◽  
Vibeke Brinck ◽  
Pradeep George ◽  
Ramona Hicks ◽  
Jilske Huijben ◽  
...  

Background: High-quality data are critical to the entire scientific enterprise, yet the complexity and effort involved in data curation are vastly under-appreciated. This is especially true for large observational, clinical studies because of the amount of multimodal data that is captured and the opportunity for addressing numerous research questions through analysis, either alone or in combination with other data sets. However, a lack of detail concerning data curation methods can result in unresolved questions about the robustness of the data, its utility for addressing specific research questions or hypotheses, and how to interpret the results. We aimed to develop a framework for the design, documentation and reporting of data curation methods in order to advance the scientific rigour, reproducibility and analysis of the data. Methods: Forty-six experts participated in a modified Delphi process to reach consensus on indicators of data curation that could be used in the design and reporting of studies. Results: We identified 46 indicators that are applicable to the design, training/testing, run-time and post-collection phases of studies. Conclusion: The Data Acquisition, Quality and Curation for Observational Research Designs (DAQCORD) Guidelines are the first comprehensive set of data quality indicators for large observational studies. They were developed around the needs of neuroscience projects, but we believe they are relevant and generalisable, in whole or in part, to other fields of health research, as well as to smaller observational studies and preclinical research. The DAQCORD Guidelines provide a framework for achieving high-quality data, a cornerstone of health research.


10.2196/18366 ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. e18366
Author(s):  
Maryam Zolnoori ◽  
Mark D Williams ◽  
William B Leasure ◽  
Kurt B Angstman ◽  
Che Ngufor

Background Patient-centered registries are essential in population-based clinical care for patient identification and monitoring of outcomes. Although registry data may be used in real time for patient care, the same data may further be used for secondary analysis to assess disease burden, evaluate disease management and health care services, and support research. The design of a registry has major implications for the ability to effectively use these clinical data in research. Objective This study aims to develop a systematic framework to address the data and methodological issues involved in analyzing data in clinically designed patient-centered registries. Methods The systematic framework was composed of 3 major components: visualizing the multifaceted and heterogeneous patient-centered registries using a data flow diagram, assessing and managing data quality issues, and identifying patient cohorts for addressing specific research questions. Results Using a clinical registry designed as part of a collaborative care program for adults with depression at Mayo Clinic, we were able to demonstrate the impact of the proposed framework on data integrity. By following the data cleaning and refining procedures of the framework, we were able to generate high-quality data that were available for research questions about the coordination and management of depression in a primary care setting. We describe the steps involved in converting clinically collected data into a viable research data set, using registry cohorts of depressed adults to assess the impact on high-cost service use. Conclusions The systematic framework discussed in this study sheds light on the existing inconsistency and data quality issues in patient-centered registries. This study provides a step-by-step procedure for addressing these challenges and for generating high-quality data for both quality improvement and research, which may enhance care and outcomes for patients.
International Registered Report Identifier (IRRID) DERR1-10.2196/18366
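The framework's second and third components (cleaning registry data, then selecting a research cohort) can be illustrated with a minimal sketch. The record layout, field names (`patient_id`, `phq9`) and the PHQ-9 cutoff below are hypothetical illustrations, not the actual Mayo Clinic registry schema:

```python
# Hypothetical registry records; field names and values are illustrative,
# not the actual Mayo Clinic depression-registry schema.
records = [
    {"patient_id": 1, "phq9": 14, "enrolled": "2018-03-01"},
    {"patient_id": 1, "phq9": 14, "enrolled": "2018-03-01"},  # duplicate entry
    {"patient_id": 2, "phq9": None, "enrolled": "2018-05-12"},  # missing score
    {"patient_id": 3, "phq9": 7, "enrolled": "2018-06-20"},
    {"patient_id": 4, "phq9": 19, "enrolled": "2019-01-15"},
]

def clean(records):
    """Data quality step: drop exact duplicates and rows with missing
    values in analysis-critical fields."""
    seen, out = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen or r["phq9"] is None:
            continue
        seen.add(key)
        out.append(r)
    return out

def cohort(records, min_phq9=10):
    """Cohort step: select records meeting a research criterion,
    e.g. moderate-to-severe depression (PHQ-9 >= 10)."""
    return [r for r in records if r["phq9"] >= min_phq9]

cleaned = clean(records)
study_cohort = cohort(cleaned)
print(len(cleaned), len(study_cohort))  # → 3 2
```

In a real registry these steps would run against a database rather than in-memory records, but the separation of cleaning rules from cohort definitions mirrors the framework's distinction between data quality management and research-question-specific cohort identification.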


2020 ◽  
Author(s):  
James McDonagh ◽  
William Swope ◽  
Richard L. Anderson ◽  
Michael Johnston ◽  
David J. Bray

Digitization offers significant opportunities for the formulated product industry to transform the way it works and to develop new methods of business. R&D is one area of operation where taking advantage of these technologies is challenging, owing to its high level of domain specialisation and creativity, but the benefits could be significant. Recent developments in base-level technologies such as artificial intelligence (AI)/machine learning (ML), robotics and high-performance computing (HPC), to name a few, present disruptive and transformative capabilities which could offer new insights, discovery methods and enhanced chemical control when combined in a digital ecosystem of connectivity, distributive services and decentralisation. At the fundamental level, research in these technologies has shown that new physical and chemical insights can be gained, which in turn can augment experimental R&D approaches through physics-based chemical simulation, data-driven models and hybrid approaches. In all of these cases, high-quality data are required to build and validate models, in addition to the skills and expertise needed to exploit such methods. In this article we give an overview of some of the digital technology demonstrators we have developed for formulated product R&D, and we discuss the challenges in building and deploying these demonstrators.


Author(s):  
Mary Kay Gugerty ◽  
Dean Karlan

Without high-quality data, even the best-designed monitoring and evaluation systems will collapse. Chapter 7 introduces some of the basics of collecting high-quality data and discusses how to address challenges that frequently arise. High-quality data must be clearly defined and have an indicator that validly and reliably measures the intended concept. The chapter then explains how to avoid common biases and measurement errors such as anchoring, social desirability bias, the experimenter demand effect, unclear wording, long recall periods, and translation context. It then guides organizations on how to find indicators, test data collection instruments, manage surveys, and train staff appropriately for data collection and entry.
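The chapter's advice on clearly defined indicators and short recall periods lends itself to automated checks at data entry. A minimal sketch, assuming invented survey fields and plausibility ranges (none of which come from the chapter itself):

```python
# Hypothetical survey fields with plausibility ranges; the field names
# and bounds are illustrative examples, not the book's own indicators.
RULES = {
    "age": (0, 110),             # plausible-range check
    "meals_yesterday": (0, 10),  # short recall period limits recall error
}

def validate(response):
    """Flag missing or out-of-range values so enumerators can
    re-ask the question before the survey is submitted."""
    errors = []
    for field, (lo, hi) in RULES.items():
        value = response.get(field)
        if value is None or not (lo <= value <= hi):
            errors.append(field)
    return errors

print(validate({"age": 34, "meals_yesterday": 3}))   # → []
print(validate({"age": 250, "meals_yesterday": 3}))  # → ['age']
```

Checks like these catch entry errors, but not biases such as social desirability or experimenter demand, which must be addressed through question design and training, as the chapter discusses.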


2021 ◽  
Vol 13 (7) ◽  
pp. 1387
Author(s):  
Chao Li ◽  
Jinhai Zhang

The high-frequency channel of the lunar penetrating radar (LPR) onboard the Yutu-2 rover successfully collected high-quality data on the far side of the Moon, which provide a chance to detect shallow subsurface structures and the thickness of the lunar regolith. However, traditional methods cannot obtain a reliable dielectric permittivity model, especially in the presence of heavy mixing between diffractions and reflections, which is essential for understanding and interpreting the composition of lunar subsurface materials. In this paper, we introduce an effective method to construct a reliable velocity model by separating diffractions from reflections and performing focusing analysis on the separated diffractions. We first used the plane-wave destruction method to extract weak-energy diffractions interfered with by strong reflections, separating the LPR data into two parts: diffractions and reflections. Then, we constructed a macro-velocity model of the lunar subsurface by focusing analysis on the separated diffractions. Both the synthetic ground penetrating radar (GPR) and LPR data show that the migration results of separated reflections have much clearer subsurface structures compared with the migration results of unseparated data. Our results produce accurate velocity estimations, which are vital for high-precision migration; additionally, the accurate velocity estimation directly provides solid constraints on the dielectric permittivity at different depths.
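The focusing principle behind diffraction-based velocity analysis can be shown with a toy zero-offset example: a point diffractor produces a traveltime hyperbola, and the trial velocity that best flattens it after moveout correction estimates the medium velocity. The geometry and velocity below are made-up numbers, not Yutu-2 values, and this sketch omits the plane-wave destruction separation step entirely:

```python
import math

# Toy focusing analysis on a single synthetic diffraction hyperbola.
# v_true, depth and geometry are invented, not Yutu-2 parameters.
v_true, depth, x0 = 0.12, 2.0, 0.0   # m/ns, m, m
xs = [x0 + 0.2 * i for i in range(-10, 11)]
# Zero-offset two-way traveltime over a point diffractor at (x0, depth).
t_obs = [2 * math.sqrt(depth**2 + (x - x0)**2) / v_true for x in xs]

def misfit(v):
    """Residual spread of traveltimes after moveout correction at trial
    velocity v: zero when the hyperbola is perfectly flattened."""
    t_corr = [math.sqrt(max(t**2 - (2 * (x - x0) / v)**2, 0.0))
              for x, t in zip(xs, t_obs)]
    mean = sum(t_corr) / len(t_corr)
    return sum((t - mean) ** 2 for t in t_corr)

# Velocity scan: the trial velocity minimising the misfit is the estimate.
trials = [0.08 + 0.005 * i for i in range(17)]  # 0.08 .. 0.16 m/ns
v_best = min(trials, key=misfit)
print(round(v_best, 3))  # → 0.12
```

The estimated velocity then constrains the relative dielectric permittivity via ε ≈ (c/v)², which is how a focusing-derived velocity model translates into compositional information at depth.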


Societies ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 65
Author(s):  
Clem Brooks ◽  
Elijah Harter

In an era of rising inequality, the U.S. public’s relatively modest support for redistributive policies has been a puzzle for scholars. Deepening the paradox is recent evidence that presenting information about inequality increases subjects’ support for redistributive policies by only a small amount. What explains inequality information’s limited effects? We extend partisan motivated reasoning scholarship to investigate whether political party identification confounds individuals’ processing of inequality information. Our study considers a much larger number of redistribution preference measures (12) than past scholarship. We offer a second novelty by bringing the dimension of historical time into hypothesis testing. Analyzing high-quality data from four American National Election Studies surveys, we find new evidence that partisanship confounds the interrelationship of inequality information and redistribution preferences. Further, our analyses find the effects of partisanship on redistribution preferences grew in magnitude from 2004 through 2016. We discuss implications for scholarship on information, motivated reasoning, and attitudes towards redistribution.


2021 ◽  
pp. 1-30
Author(s):  
Lisa Grace S. Bersales ◽  
Josefina V. Almeda ◽  
Sabrina O. Romasoc ◽  
Marie Nadeen R. Martinez ◽  
Dannela Jann B. Galias

With the advancement of technology, digitalization, and the internet of things, large amounts of complex data are being produced daily. This vast quantity of varied data produced at high speed is referred to as Big Data. The utilization of Big Data is being implemented with success in the private sector, yet the public sector seems to be falling behind despite the many potentials Big Data has already presented. In this regard, this paper explores ways in which the government can harness Big Data for official statistics. It begins by gathering and presenting Big Data-related initiatives and projects across the globe, covering the various types and sources of Big Data implemented. Further, this paper discusses the opportunities, challenges, and risks associated with using Big Data, particularly in official statistics. This paper also aims to assess the current utilization of Big Data in the country through focus group discussions and key informant interviews. Based on desk review, discussions, and interviews, the paper concludes with a proposed framework that provides ways in which Big Data may be utilized by the government to augment official statistics.


2019 ◽  
Vol 14 (3) ◽  
pp. 338-366
Author(s):  
Kashif Imran ◽  
Evelyn S. Devadason ◽  
Cheong Kee Cheok

This article analyzes the overall and type-specific developmental impacts of remittances for migrant-sending households (HHs) in districts of Punjab, Pakistan. For this purpose, an HH-based human development index is constructed from the dimensions of education, health and housing, with a view to enriching insights into the interactions between remittances and HH development. Using high-quality data from an HH micro-survey for Punjab, the study finds that most migrant-sending HHs are better off than HHs without this stream of income. More importantly, migrant HHs have significantly higher development in terms of housing in most districts of Punjab relative to non-migrant HHs. Thus, the government would need policy interventions focusing on housing to address inequalities in human development at the district-HH level, and subsequently balance its current focus on the provision of education and health.
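A composite HH-level index of the kind described can be sketched as a normalised average over the three dimensions. The bounds, equal weighting and sample households below are illustrative assumptions; the article's exact construction may differ:

```python
# Toy HH development index over education, health and housing.
# Normalisation bounds and equal weights are assumptions for
# illustration, not the article's actual specification.
BOUNDS = {
    "education": (0, 14),   # e.g. years of schooling
    "health": (0, 100),     # e.g. a health access score
    "housing": (0, 10),     # e.g. a housing quality score
}

def hh_index(hh):
    """Normalise each dimension to [0, 1] and take the simple mean."""
    scores = [(hh[dim] - lo) / (hi - lo) for dim, (lo, hi) in BOUNDS.items()]
    return sum(scores) / len(scores)

# Two hypothetical households for comparison.
migrant = {"education": 10, "health": 80, "housing": 8}
non_migrant = {"education": 7, "health": 70, "housing": 4}
print(hh_index(migrant) > hh_index(non_migrant))  # → True
```

Disaggregating such an index by dimension is what allows the housing-specific gap between migrant and non-migrant HHs to be identified, rather than only an overall development difference.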


2018 ◽  
Vol 27 (1) ◽  
pp. 87-101
Author(s):  
Juhn Chris Espia ◽  
Alma Maria Salvador

Purpose The recent shift in the Philippine Government’s emphasis from response to a more proactive approach came with the recognition that different stakeholders play important roles in the governance of disaster risk. The purpose of this paper is to look beyond the question of whether all stakeholders are involved in disaster risk management (DRM) planning and to examine the extent to which the narratives of risk of actors at the margins shape how risk is framed in municipal DRM planning in Antique, Philippines. Design/methodology/approach This paper is based on a field study carried out in San Jose de Buenavista, Antique Province, Philippines. Data were gathered through key informant interviews and focus group discussions as well as a review of archival records and documents. Findings The narratives of civil society organisations (CSOs) and communities, which revolve around livelihoods and community life, are conspicuously absent from the plans, whereas those of government actors occupy a central position in the risk discourse. The study highlights the power-saturated process of defining and addressing disaster risk, where knowledge is intimately linked to power: some voices shape plans and policies, whereas others are excluded because their knowledge is socially constructed as less reliable and therefore irrelevant. Originality/value There is a dearth of studies that examine disaster risk as a social construction in the context of planning in the Philippines and in other disaster-prone countries.

