Emerging Themes in Epidemiology
Latest Publications


TOTAL DOCUMENTS: 205 (five years: 28)

H-INDEX: 30 (five years: 2)

Published By Springer (BioMed Central Ltd.)

ISSN: 1742-7622

2022, Vol 19 (1)
Author(s): Caitlin Shannon, Chris Hurt, Seyi Soremekun, Karen Edmond, Sam Newton, ...

Abstract
Background: Globally adopted health and development milestones have not only encouraged improvements in the health and wellbeing of women and infants worldwide, but also a better understanding of the epidemiology of key outcomes and the development of effective interventions in these vulnerable groups. Monitoring of maternal and child health outcomes for milestone tracking requires the collection of good-quality data over the long term, which can be particularly challenging in poorly resourced settings. Despite the wealth of general advice on conducting field trials, there is a lack of specific guidance on designing and implementing studies on mothers and infants. Additional considerations are required when establishing surveillance systems to capture real-time information at scale on pregnancies, pregnancy outcomes, and maternal and infant health outcomes.
Main body: Based on two decades of collaborative research experience between the Kintampo Health Research Centre in Ghana and the London School of Hygiene and Tropical Medicine, we propose a checklist of key items to consider when designing and implementing systems for pregnancy surveillance and the identification and classification of maternal and infant outcomes in research studies. These are summarised under four key headings: understanding your population; planning data collection cycles; enhancing routine surveillance with additional data collection methods; and designing data collection and management systems that are adaptable in real time.
Conclusion: High-quality population-based research studies in low-resource communities are essential to ensure continued improvement in health metrics and a reduction in inequalities in maternal and infant outcomes. We hope that the lessons learnt described in this paper will help researchers when planning and implementing their studies.


2021, Vol 18 (1)
Author(s): Sam Newton, Guus Ten Asbroek, Zelee Hill, Charlotte Tawiah Agyemang, Seyi Soremekun, ...

Abstract
Background: Successful implementation of community-based research depends heavily on participation and engagement from the local community; without these, community members will be reluctant to take part, and important knowledge and potential health benefits will be missed. Maximising community participation and engagement is therefore key to the effective conduct of community-based research. In this paper, we present lessons learnt over two decades of conducting research in 7 rural districts in the Brong Ahafo region of Ghana, with an estimated population of around 600,000. The trials, which were mainly in the area of Maternal, Neonatal and Child Health, were conducted by the Kintampo Health Research Centre (KHRC) in collaboration with the London School of Hygiene and Tropical Medicine (LSHTM).
Methods: Four core strategies were used: formative research methods; the formation of an Information, Education and Communication (IEC) team to serve as the main link between the research team and the community; recruitment of field workers from the communities in which they lived; and close collaboration with national and regional stakeholders.
Results: These measures allowed trust to be built between community members and the research team and ensured that misconceptions arising in the communities were promptly addressed through the IEC team. Placing field workers in the communities from which they came, together with their knowledge of the local language, strengthened this trust. The close working relationship between the district health authorities and KHRC also supported acceptance of the research in the communities, as the district health authorities were respected and trusted.
Conclusion: The successes achieved during the past two decades of collaboration between LSHTM and KHRC in conducting community-based field trials were based on involving the community in research projects. Community participation and engagement helped not only to identify the pertinent issues, but also enabled the communities and the research team to contribute towards efforts to address challenges.


2021, Vol 18 (1)
Author(s): Kanako Fuyama, Yasuhiro Hagiwara, Yutaka Matsuyama

Abstract
Background: Risk ratio is a popular effect measure in epidemiological research. Although previous research has suggested that logistic regression may provide biased odds ratio estimates when the number of events is small and there are multiple confounders, the performance of risk ratio estimation has yet to be examined in the presence of multiple confounders.
Methods: We conducted a simulation study to evaluate the statistical performance of three regression approaches for estimating risk ratios: (1) risk ratio interpretation of logistic regression coefficients, (2) modified Poisson regression, and (3) regression standardization using logistic regression. We simulated 270 scenarios with systematically varied sample size, the number of binary confounders, exposure proportion, risk ratio, and outcome proportion. Performance evaluation was based on convergence proportion, bias, standard error estimation, and confidence interval coverage.
Results: With a sample size of 2500 and an outcome proportion of 1%, both logistic regression and modified Poisson regression at times failed to converge, and the three approaches were comparably biased. As the outcome proportion or sample size increased, modified Poisson regression and regression standardization yielded unbiased risk ratio estimates with appropriate confidence intervals irrespective of the number of confounders. The risk ratio interpretation of logistic regression coefficients, by contrast, became substantially biased as the outcome proportion increased.
Conclusions: Regression approaches for estimating risk ratios should be cautiously used when the number of events is small. With an adequate number of events, risk ratios are validly estimated by modified Poisson regression and regression standardization, irrespective of the number of confounders.
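For readers unfamiliar with approaches (2) and (3), a minimal sketch of how they are typically implemented in R follows. This is an illustration rather than the authors' simulation code; the data frame dat and the variable names y (binary outcome), x (binary exposure), c1 and c2 (confounders) are assumptions.

library(sandwich)  # heteroskedasticity-consistent (robust) variance estimators
library(lmtest)    # coeftest() for Wald tests with a user-supplied vcov

# (2) Modified Poisson regression: log-linear model for a binary outcome with
#     robust (sandwich) standard errors; exp(coef) for x is the adjusted risk ratio.
fit_pois <- glm(y ~ x + c1 + c2, data = dat, family = poisson(link = "log"))
coeftest(fit_pois, vcov = vcovHC(fit_pois, type = "HC0"))

# (3) Regression standardization: fit a logistic model, predict each subject's risk
#     under exposure and under no exposure, then take the ratio of the average risks.
fit_logit <- glm(y ~ x + c1 + c2, data = dat, family = binomial)
p1 <- mean(predict(fit_logit, newdata = transform(dat, x = 1), type = "response"))
p0 <- mean(predict(fit_logit, newdata = transform(dat, x = 0), type = "response"))
p1 / p0  # standardized risk ratio; a bootstrap over the whole procedure gives a CI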


2021, Vol 18 (1)
Author(s): Sonja Hartnack, Malgorzata Roos

Abstract
Background: One of the emerging themes in epidemiology is the use of interval estimates. Currently, three interval estimates are at a researcher's disposal, for confidence (CI), prediction (PI), and tolerance (TI), and all are accessible within the open-source framework R. These three types of statistical intervals serve different purposes. Confidence intervals are designed to describe a parameter with some uncertainty due to sampling error. Prediction intervals aim to predict future observation(s), incorporating uncertainty present in both the actual and the future samples. Tolerance intervals are constructed to capture a specified proportion of a population with a defined confidence. It is well known that interval estimates support a greater knowledge gain than point estimates. Thus, a good understanding and use of CI, PI, and TI underlie good statistical practice. While CIs are taught in introductory statistics classes, PIs and TIs are less familiar.
Results: In this paper, we provide a concise tutorial on two-sided CI, PI and TI for binary variables. This hands-on tutorial is based on our teaching materials. It contains an overview of the meaning and applicability of the intervals from both a classical and a Bayesian perspective. Based on a worked example from veterinary medicine, we provide guidance and code that can be directly applied in R.
Conclusions: This tutorial can be used by others for teaching, either in a class or for self-instruction by students and senior researchers.
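As a self-contained sketch of the distinction between the three interval types for a binary variable (this is not the tutorial's own worked example; the counts, the flat prior, and the future sample size are assumptions):

x <- 29; n <- 142   # hypothetical: 29 positives in a sample of 142 animals
m <- 50             # size of a hypothetical future sample

# Confidence interval for the prevalence (classical, exact Clopper-Pearson)
binom.test(x, n)$conf.int

# Bayesian credible interval from the Beta posterior under a flat Beta(1, 1) prior
qbeta(c(0.025, 0.975), 1 + x, 1 + n - x)

# Prediction interval for the number of positives in a future sample of size m,
# obtained by simulating from the posterior predictive (beta-binomial) distribution
set.seed(1)
p_draw   <- rbeta(1e5, 1 + x, 1 + n - x)
y_future <- rbinom(1e5, size = m, prob = p_draw)
quantile(y_future, probs = c(0.025, 0.975))

# A two-sided tolerance interval (bounds intended to cover a stated proportion of
# the population with a stated confidence) needs additional machinery; for binomial
# data, dedicated functions are available, for example in the tolerance package.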


2021, Vol 18 (1)
Author(s): Christina Mergenthaler, Rajpal Singh Yadav, Sohrab Safi, Ente Rood, Sandra Alba

Abstract
Background: Through a nationally representative household survey in Afghanistan, we conducted an operational study in two relatively secure provinces comparing the effectiveness of computer-assisted personal interviewing (CAPI) with paper-and-pencil interviewing (PAPI).
Methods: In Panjshir and Parwan provinces, household survey data were collected using paper questionnaires in 15 clusters and OpenDataKit (ODK) software on electronic tablets in 15 other clusters. Added value was evaluated from three perspectives: efficiency of implementation, data quality, and acceptability. Efficiency was measured through financial expenditures and time-stamped data. Data quality was measured by examining completeness. Acceptability was studied through focus group discussions with survey staff.
Results: Survey costs were 68% higher in CAPI clusters than in PAPI clusters, due primarily to the one-time upfront investment in survey programming. Enumerators spent significantly less time administering surveys in CAPI cluster households (248 min survey time) compared to PAPI (289 min), an average saving of 41 min per household (95% CI 25–55). CAPI also saved 87 days of data management time relative to PAPI. Among 49 tracer variables (variables for which responses were required from all respondents), small differences were observed between PAPI and CAPI. In the cleaned datasets, 2.2% of tracer data points were missing in CAPI surveys (1216/56,073 data points), compared to 3.2% in PAPI surveys (1953/60,675 data points). In the pre-cleaned datasets, 3.9% of tracer data points were missing in CAPI surveys (2151/55,092 data points) compared to 3.2% in PAPI surveys (1924/60,113 data points). Enumerators from Panjshir and Parwan preferred CAPI over PAPI because of time savings, user-friendliness, improved data security, and being less conspicuous when traveling; however, approximately half of the enumerators trained from all 34 provinces reported feeling unsafe due to Taliban presence. Community and household respondent skepticism could be resolved by enumerator reassurance. Enumerators shared that, in the future, they would prefer to collect data using CAPI when possible.
Conclusions: CAPI offers clear gains in efficiency over PAPI in data collection and data management time, although costs are relatively comparable even when the one-time programming investment is excluded. However, serious field-staff concerns about Taliban threats and general insecurity mean that CAPI should only be conducted in relatively secure areas.


2021, Vol 18 (1)
Author(s): Jose Antonio Navarro Alonso, Louis J. Bont, Elena Bozzola, Egbert Herting, Federico Lega, ...

Abstract: Respiratory syncytial virus (RSV), the most common viral cause of bronchiolitis, is a significant cause of serious illness among young children aged 0–5 years and is of particular concern in the first year of life. Globally, RSV is a common cause of childhood acute lower respiratory illness (ALRI) and a major cause of hospital admissions in young children and infants, and it represents a substantial burden for health-care systems. This burden is felt all the more strongly because there are currently no effective preventative options available for all infants. However, a renaissance in RSV prevention strategies is unfolding, with several new prophylactic options, such as monoclonal antibodies and maternal vaccination, soon to be available. A key concern is that health decision-makers and systems may not be ready to take full advantage of forthcoming technological innovations. A multi-stakeholder approach is necessary to bridge data gaps and fully utilise the upcoming options. Knowledge must be made available at multiple levels to ensure that parents and doctors are aware of preventative options, but also to ensure that stakeholders and policymakers are given the information needed to best advise on implementation strategies.


2021, Vol 18 (1)
Author(s): Andrew Evarist Mganga, Jenny Renju, Jim Todd, Michael Johnson Mahande, Seema Vyas

Abstract
Background: Women's empowerment is a multidimensional construct that varies by context. These variations make it challenging to settle on a concrete definition that can be measured quantitatively. A standard composite measure of empowerment at the individual and country level would help assess how countries are progressing towards gender equality (SDG 5), enable standardization across and within settings, and guide the formulation of policies and interventions. The aim of this study was to develop a women's empowerment index for Tanzania and to assess its evolution across three demographic and health surveys conducted between 2004 and 2016.
Results: Women's empowerment in Tanzania was categorized into six distinct domains: attitudes towards violence, decision making, social independence, age at critical life events, access to healthcare, and property ownership. The internal reliability of this six-domain model was acceptable, with a Cronbach's α of 0.658. The fit statistics, namely the root mean square error of approximation (0.05), the comparative fit index (0.93), and the standardized root mean square residual (0.04), indicated good internal validity. The structure of women's empowerment remained relatively constant across the three Tanzanian demographic and health surveys.
Conclusions: The use of factor analysis in this research has shown that women's empowerment in Tanzania is a six-domain construct that has remained relatively constant over the survey period. This could be a stepping stone towards reducing ambiguity in conceptualizing and operationalizing empowerment and expanding its applications in empirical research to study different women-related outcomes in Tanzania.
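As a hedged sketch of how a multi-domain structure and the fit statistics quoted above are commonly obtained in R (this is not the study's code; the lavaan and psych packages, the data frame dhs, and all indicator names are assumptions):

library(lavaan)  # confirmatory factor analysis and fit indices
library(psych)   # Cronbach's alpha

# Hypothetical six-domain measurement model; indicator names are placeholders.
model <- '
  violence    =~ viol1 + viol2 + viol3
  decisions   =~ dec1 + dec2 + dec3
  social      =~ soc1 + soc2 + soc3
  life_events =~ age_marriage + age_first_birth
  healthcare  =~ hc1 + hc2
  ownership   =~ own1 + own2
'
fit <- cfa(model, data = dhs)                 # dhs = survey data frame (assumed)
fitMeasures(fit, c("rmsea", "cfi", "srmr"))   # the three fit statistics reported above

# Internal reliability of an item set (Cronbach's alpha)
alpha(dhs[, c("viol1", "viol2", "viol3", "dec1", "dec2", "dec3")])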


2021, Vol 18 (1)
Author(s): Ebrahim Rahimi, Seyed Saeed Hashemi Nazari

Abstract: This paper introduces the Blinder-Oaxaca decomposition method for explaining inequality in a health outcome between any two groups. To understand each component of the inequality, a multiple regression model can be used to decompose the inequality into contributing factors. The method indicates to what extent the difference in mean predicted outcome between the two groups is due to differences in the levels of observable characteristics (the acceptable, or fair, part of the inequality). If the two groups had identical characteristics, the remaining inequality would be attributable to differential effects of those characteristics, possibly reflecting discrimination, and to unobserved factors not included in the model. Using decomposition methods can thus identify the contribution of each particular factor to the current inequality, providing more detailed information for policy-makers, especially concerning modifiable factors. The method is described in detail and presented schematically. Some criticisms of the model are then reviewed, and several statistical commands for performing the method are presented. Finally, its application to health inequality is illustrated with a worked example.
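In its common two-fold form, the decomposition splits the mean gap Ȳ_A − Ȳ_B into an explained part (X̄_A − X̄_B)·β_B and an unexplained part X̄_A·(β_A − β_B). A minimal sketch of this in R is shown below; it is not the paper's worked example, and the data frame d, outcome y, covariates x1 and x2, and group indicator g are assumptions.

fit_A <- lm(y ~ x1 + x2, data = subset(d, g == "A"))   # regression in group A
fit_B <- lm(y ~ x1 + x2, data = subset(d, g == "B"))   # regression in group B

XA <- colMeans(model.matrix(fit_A))   # mean characteristics (incl. intercept), group A
XB <- colMeans(model.matrix(fit_B))   # mean characteristics (incl. intercept), group B
bA <- coef(fit_A)
bB <- coef(fit_B)

gap         <- sum(XA * bA) - sum(XB * bB)  # total difference in mean outcome
explained   <- sum((XA - XB) * bB)          # due to differences in characteristics ("fair" part)
unexplained <- sum(XA * (bA - bB))          # due to differences in coefficients (incl. possible discrimination)
c(gap = gap, explained = explained, unexplained = unexplained)

Ready-made implementations, including three-fold variants and alternative reference coefficients, are available in R, for example in the oaxaca package.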


2021, Vol 18 (1)
Author(s): Lawrence M. Paul

Abstract
Background: The use of meta-analysis to aggregate the results of multiple studies has increased dramatically over the last 40 years. For homogeneous meta-analyses, the Mantel–Haenszel technique has typically been used; in such meta-analyses, the effect size across the contributing studies differs only by statistical error. If homogeneity cannot be assumed or established, the most popular technique developed to date is the inverse-variance DerSimonian and Laird (DL) technique (DerSimonian and Laird, Control Clin Trials 7(3):177–88, 1986). However, both of these techniques are based on large-sample, asymptotic assumptions; at best, they are approximations, especially when the number of cases observed in any cell of the corresponding contingency tables is small.
Results: This research develops an exact, non-parametric test for evaluating statistical significance, and a related method for estimating effect size, in the meta-analysis of k 2 × 2 tables for any level of heterogeneity, as an alternative to the asymptotic techniques. Monte Carlo simulations show that, even for large values of heterogeneity, the Enhanced Bernoulli Technique (EBT) is far better than the DL technique at maintaining the pre-specified level of Type I error. A fully tested implementation in the R statistical language is freely available from the author. In addition, a second, related exact test for estimating the effect size was developed and is also freely available.
Conclusions: This research has developed two exact tests for the meta-analysis of dichotomous, categorical data. The EBT was strongly superior to the DL technique in maintaining a pre-specified level of Type I error, even at extremely high levels of heterogeneity, whereas the DL technique showed many large violations of this level. Given the various biases towards finding statistical significance prevalent in epidemiology today, a strong focus on maintaining a pre-specified level of Type I error seems critical. In addition, a related exact method for estimating the effect size was developed.
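For context, the conventional DerSimonian and Laird analysis against which the EBT is benchmarked can be written out in a few lines of R. The sketch below uses hypothetical counts from k = 3 studies (a, b = events and non-events in the treatment arm; c_, d = events and non-events in the control arm) and is not the author's EBT implementation.

a  <- c(3, 1, 8);  b <- c(47, 39, 92)    # treatment arm: events, non-events
c_ <- c(6, 4, 12); d <- c(44, 36, 88)    # control arm:   events, non-events

yi <- log((a * d) / (b * c_))   # per-study log odds ratios
vi <- 1/a + 1/b + 1/c_ + 1/d    # their (large-sample) variances

wi   <- 1 / vi
Q    <- sum(wi * (yi - sum(wi * yi) / sum(wi))^2)                        # Cochran's Q
tau2 <- max(0, (Q - (length(yi) - 1)) / (sum(wi) - sum(wi^2) / sum(wi))) # DL between-study variance

wstar <- 1 / (vi + tau2)
mu    <- sum(wstar * yi) / sum(wstar)   # pooled log odds ratio
se    <- sqrt(1 / sum(wstar))
exp(mu + c(lower = -1.96, est = 0, upper = 1.96) * se)  # pooled OR with 95% CI

# The same analysis is available from the metafor package, e.g.
# metafor::rma(ai = a, bi = b, ci = c_, di = d, measure = "OR", method = "DL")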

