Drug prescription pattern in European neonatal units

2018 ◽  
Author(s):  
Inge Mesek ◽  
Georgi Nellis ◽  
Jana Lass ◽  
Tuuli Metsvaht ◽  
Heili Varendi ◽  
...  

ABSTRACT

Background: Hospitalized neonates receive more drugs than any other age group, but consumption rates vary between studies depending on patient characteristics and local practices. There are no large-scale international studies on drug use in neonatal units. We aimed to describe drug use in European neonatal units and characterize its associations with geographic region and gestational age (GA).

Methods: A one-day point prevalence study (PPS) was performed as part of the European Study of Neonatal Exposure to Excipients (ESNEE) from January to June 2012. All neonatal prescriptions and demographic data were registered in a web-based database. The impact of GA and region on prescription rate was analyzed with logistic regression.

Results: In total, 89 neonatal units from 21 European countries participated. Altogether 2173 prescriptions given to 726 neonates were registered. The 10 drugs with the highest prescription rates were multivitamins, vitamin D, caffeine, gentamicin, amino acids for parenteral nutrition, phytomenadione, ampicillin, benzylpenicillin, fat emulsion for parenteral nutrition, and probiotics. The six most commonly prescribed ATC groups (alimentary tract and metabolism, blood and blood-forming organs, systemic anti-infectives, nervous, respiratory and cardiovascular system) covered 98% of prescriptions. GA significantly affected the use of all commonly used drug groups. Geographic region influenced the use of alimentary tract and metabolism, blood and blood-forming organs, systemic anti-infective, nervous, and respiratory system drugs.

Conclusions: While GA-dependent differences in neonatal drug use were expected, regional variations (except for systemic anti-infectives) indicate a need for cooperation in developing harmonized evidence-based guidelines and suggest priorities for collaborative work.
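The abstract's core analysis, modeling prescription probability as a function of GA, can be sketched with a minimal logistic regression. All data below are invented for illustration (the study's actual prescription records are not reproduced here); the fitting routine is a plain gradient-descent sketch, not the study's statistical software.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.05, epochs=2000):
    """Fit y ~ sigmoid(b0 + b1*x) by plain gradient descent on the log-loss."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(b0 + b1 * x) - y   # residual of predicted probability
            g0 += err / n
            g1 += err * x / n
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Hypothetical data: gestational age (weeks) vs. whether caffeine was
# prescribed -- caffeine for apnoea of prematurity is far more common at low GA.
ga_weeks = [24, 25, 26, 27, 28, 30, 32, 34, 36, 38, 40, 41]
caffeine = [1,  1,  1,  1,  1,  1,  0,  1,  0,  0,  0,  0]
xs = [g - 32 for g in ga_weeks]          # centre GA to keep the fit stable

b0, b1 = fit_logistic(xs, caffeine)
prob_at = lambda ga: sigmoid(b0 + b1 * (ga - 32))
```

The negative slope `b1` reflects the expected pattern: the predicted probability of a caffeine prescription is high at low GA and low near term.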

2021 ◽  
Vol 11 ◽  
Author(s):  
Yu Zhang ◽  
Jie Li ◽  
Zhi-Ke Li ◽  
Xiyue Yang ◽  
Jie Bai ◽  
...  

Lung cancer is the most common malignancy worldwide. With the continuous spread of coronavirus disease 2019 (COVID-19) globally, it is of great significance to explore the impact of this disease on the clinical characteristics of lung cancer. Thus, we aimed to investigate whether the COVID-19 pandemic had any influence on the clinical characteristics and diagnosis of patients with lung cancer. We collected clinical and demographic data of patients who were newly diagnosed with lung cancer at our hospital between February 2019 and July 2020. Overall, 387 patients with lung cancer were divided into two groups for analysis: an epidemic group (February to July 2020) and a pre-epidemic group (February to July 2019). The source of diagnosis and clinical characteristics of the two groups were analysed. The t-test and Mann-Whitney U test were used for continuous variables, and the Chi-squared or Fisher's exact test for categorical variables. We found that during the epidemic period, 110 cases of lung cancer were incidentally diagnosed during COVID-19 screening, accounting for 47.6% of all newly diagnosed lung cancer cases at our hospital. The proportions of patients diagnosed based on symptoms and on physical examination in the epidemic group were 34.2% and 18.2%, respectively, while those in the pre-epidemic group were 41.7% and 58.3%, respectively. There was a significant difference in the source of diagnosis between the two groups. In a subgroup analysis of the epidemic group, the average tumour volume of patients diagnosed during COVID-19 screening was significantly smaller than that of patients diagnosed based on symptoms and physical examination. In conclusion, the continuation of the COVID-19 pandemic may affect the screening and clinical characteristics of lung cancer; this warrants large-scale and longer-term observation.
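The Chi-squared comparison of diagnosis source between the two periods can be illustrated with a small, self-contained Pearson statistic. The cell counts below are invented (the row proportions loosely echo the abstract's percentages, but the totals are not from the study), and this sketch computes only the statistic, not the p-value.

```python
def chi2_stat(table):
    """Pearson chi-squared statistic for an r x c contingency table:
    sum over cells of (observed - expected)^2 / expected, where the
    expected counts assume independence of rows and columns."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts of diagnosis source per period:
# columns = symptoms, physical exam, COVID-19 screening.
table = [
    [65, 91, 0],    # pre-epidemic group (no screening-detected cases)
    [79, 42, 110],  # epidemic group
]
stat = chi2_stat(table)
```

A statistic this large on a 2x3 table (2 degrees of freedom) would correspond to a very small p-value, matching the abstract's finding of a significant difference in diagnosis source.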


2012 ◽  
Vol 7 (4) ◽  
pp. 530-546 ◽  
Author(s):  
Merike Darmody

This article explores secondary school transitions from a comparative perspective. It focuses on a stage at which a major institutional transition takes place in two different educational systems. Over the years a number of international studies have explored different learning environments and their impact on student educational outcomes. Much of this research explores the impact of school choice and the transition from one level of schooling to another. In general, these studies treat school transitions as a time when students are particularly vulnerable due to structural and environmental differences between levels of schooling. In other words, the new learning environments generally have a different ‘institutional habitus’. While a seamless and unproblematic transition from one level of schooling to another is seen to ensure students' success at the subsequent level and beyond, negative experiences and difficulties around adjustment have been shown to result in students becoming disengaged and at risk of early school leaving, with detrimental consequences for the individual concerned. Furthermore, different pathways within educational systems have been found to reproduce unequal life chances. To discuss and re-theorise school transitions, the article draws on a large-scale comparative study of secondary school transitions in Ireland and Estonia, and utilises the conceptual tool of ‘institutional habitus’ to gain a better understanding of the processes involved. While the article discusses similarities and differences between children's transition experiences in the two countries, it also calls for a careful approach to ‘direct borrowing’ of practices from other countries.


Author(s):  
Karen Susan Tingay ◽  
Matthew Roberts ◽  
Charles B.A. Musselwhite

The effect of the wider social environment on physical and emotional health has long been an area of study. Accounting for the individual's immediate environment, such as living with a smoker or caring for a chronically ill child, could reduce confounding effects in health-related research. Surveys, including the UK Census, are beginning to collect data on household composition. However, these surveys are expensive, time consuming, and, as such, are only completed by a subsection of the population. Large-scale, linked databanks, such as the SAIL Databank at Swansea University, which hold routinely collected secondary-use clinical and administrative datasets, are broader in scope, both in terms of the nature of the data held and the population covered. The SAIL Databank includes demographic data and a geographic indicator that make it possible to identify groups of people who share accommodation, and in some cases the familial relationships among them. This paper describes a method for creating households, including considerations for how that information can be securely shared for research purposes. This approach has broad implications in Wales and beyond, opening up possibilities for more detailed population-level research that includes consideration of residential social interactions.


2020 ◽  
Vol 36 (4) ◽  
Author(s):  
Nguyen Thi Ngoc Quynh ◽  
Nguyen Thi Quynh Yen ◽  
Tran Thi Thu Hien ◽  
Nguyen Thi Phuong Thao ◽  
Bui Thien Sao ◽  
...  

Playing a vital role in assuring the reliability of language performance assessment, rater training has been a topic of interest in research on large-scale testing. Similarly, in the context of VSTEP, the effectiveness of the rater training program has been of great concern. Thus, this research was conducted to investigate the impact of the VSTEP speaking rating scale training session in the rater training program provided by the University of Languages and International Studies, Vietnam National University, Hanoi. Data were collected from 37 rater trainees of the program. Their ratings before and after the training session on the VSTEP.3-5 speaking rating scales were then compared. Specifically, the dimensions of score reliability, criterion difficulty, rater severity, rater fit, rater bias, and score band separation were analyzed. Positive results were observed: the post-training ratings were more reliable, consistent, and distinguishable. Improvements were most noticeable for score band separation and smaller in the other aspects. Meaningful implications for both future rater training practice and rater training research methodology can be drawn from the study.


2020 ◽  
Vol 319 (1) ◽  
pp. G1-G10
Author(s):  
Cambrian Y. Liu ◽  
D. Brent Polk

The development of modern methods to induce optical transparency (“clearing”) in biological tissues has enabled the three-dimensional (3D) reconstruction of intact organs at cellular resolution. New capabilities in visualization of rare cellular events, long-range interactions, and irregular structures will facilitate novel studies in the alimentary tract and gastrointestinal systems. The tubular geometry of the alimentary tract facilitates large-scale cellular reconstruction of cleared tissue without specialized microscopy setups. However, with the rapid pace of development of clearing agents and current relative paucity of research groups in the gastrointestinal field using these techniques, it can be daunting to incorporate tissue clearing into experimental workflows. Here, we give some advice and describe our own experience bringing tissue clearing and whole mount reconstruction into our laboratory’s investigations. We present a brief overview of the chemical concepts that underpin tissue clearing, what sorts of questions whole mount imaging can answer, how to choose a clearing agent, an example of how to clear and image alimentary tissue, and what to do after obtaining the image. This short review will encourage other gastrointestinal researchers to consider how utilizing tissue clearing and creating 3D “maps” of tissue might deepen the impact of their studies.


2013 ◽  
Vol 31 (15_suppl) ◽  
pp. 8552-8552
Author(s):  
Kevin A. Hay ◽  
Benny Lee ◽  
Ozge Goktepe ◽  
Joseph M. Connors ◽  
Laurie Helen Sehn ◽  
...  

8552 Background: DLBCL is potentially curable with combination chemotherapy such as CHOP-R. Although it is generally regarded as appropriate to start chemotherapy promptly after diagnosis, the impact of the time from diagnosis to treatment initiation on treatment outcome is unknown. Methods: Patients diagnosed with DLBCL and treated with at least one cycle of CHOP-R with curative intent during 2003 – 2008 in British Columbia were identified in the Lymphoid Cancer Database. Additional demographic data were obtained from the BC Cancer Registry. The BC Cancer Agency provincial pharmacy database was used to obtain dates of chemotherapy administration. The impact of the time interval from the date of pathologic diagnosis to treatment on overall survival (OS) and progression-free survival (PFS) was evaluated. Results: A total of 793 patients were identified: 199 (25%) received CHOP-R <2 weeks after diagnosis, 244 (31%) at 2-4 weeks, 293 (37%) at 5-8 weeks, and 57 (7%) at >8 weeks. High international prognostic index, primary mediastinal DLBCL, and hospitalization at the time of CHOP-R start were associated with earlier initiation of chemotherapy (p<0.001 for all factors). Distance from home to chemotherapy (p=0.237), rural vs. urban location (p=0.952), geographic region (p=0.458), and median household income (p=0.127) were not associated with treatment start. Five-year PFS and OS, respectively, were 54% (SD 4%) and 61% (SD 4%) for treatment <2 weeks, 63% (SD 3%) and 66% (SD 3%) for 2-4 weeks, 70% (SD 3%) and 74% (SD 3%) for 5-8 weeks, and 60% (SD 7%) and 61% (SD 8%) for >8 weeks; p=0.006 (PFS) and p=0.024 (OS). A multivariate analysis demonstrated no significant difference between the groups. Conclusions: In a publicly funded healthcare system, earlier initiation of chemotherapy was strongly associated with poor prognostic factors, as well as inferior PFS and OS. The timing of chemotherapy initiation appears to be related to clinical factors rather than system or socioeconomic barriers. Notwithstanding the lack of detrimental outcomes in those commencing CHOP-R after 8 weeks, clinicians should endeavor to initiate curative chemotherapy as soon as possible after a diagnosis of DLBCL is established.


2019 ◽  
Vol 15 (6) ◽  
pp. 507-520
Author(s):  
Anna Schinas, MSc ◽  
Shein Nanji, BSc ◽  
Kira Vorobej, MSc ◽  
Catherine Mills, MSc ◽  
Dawn Govier, BSc ◽  
...  

Objective: To identify key characteristics and habits of recreational opioid users.

Design: The data were compiled from volunteers who participated in clinical studies at a contract research organization in Toronto, Ontario, Canada.

Interventions: Data were collected from 5,018 male and female recreational opioid users via telephone and face-to-face screening interviews. Five recreational opioid users participated in a live interview broadcast on the internet.

Main outcome measures: Demographic data, recreational drug use history, routes of recreational drug administration, alcohol use, and smoking status. A subset of the demographic information and recreational drug use history was summarized separately using data collected between 2013 and 2016 from 114 recreational opioid users who were not dependent on opioids. Interview excerpts were included from five recreational opioid users who described their real-world experiences with drug abuse, including the impact of abuse-deterrent opioid formulations on their drug abuse behavior.

Results: The preferred route of administration of opioids was oral (52 percent), followed by intranasal (36 percent), intravenous (10 percent), and buccal (chewing on a patch; 2 percent). Other substances used included nicotine, alcohol, and non-opioid psychoactive drugs (primarily cannabis). Oxycodone was the most frequently reported opioid of abuse.

Conclusions: Recreational opioid users have distinct drug-related behaviors and preferences. Monitoring current trends and examining these behaviors is an important component to understanding the potential safety risks associated with recreational opioid use.


2019 ◽  
Vol 77 (1) ◽  
pp. 1-11
Author(s):  
M Quinzán ◽  
J Castro ◽  
E Massutí ◽  
L Rueda ◽  
M Hidalgo

Abstract Overexploitation and climate change are increasingly causing unanticipated changes in marine ecosystems, such as higher variability in fish recruitment or shifts in species dominance and distribution, that alter the productivity of fish stocks. This study analyses how external and internal drivers influence the population dynamics of hake (Merluccius merluccius), white anglerfish (Lophius piscatorius), four-spot megrim (Lepidorhombus boscii), and horse mackerel (Trachurus trachurus) in Iberian Peninsula waters of the Northeast Atlantic across different spatiotemporal scales. Available spawning stock biomass and recruitment were used as biological data, whereas fishing mortality, demographic data, and climatic and oceanographic data were used as drivers. The results indicate that the population dynamics of these species are mainly driven by oceanographic variability at the regional scale, along with fishing pressure and demographic factors, while the impact of large-scale climate indices was minimal. The identified variables represent regional oceanographic processes that are candidates for integration into the stock assessment models and management procedures of these important fishery resources.


2021 ◽  
Author(s):  
Emilie L. Schwarz ◽  
Lara Schwarz ◽  
Anaïs Teyton ◽  
Katie Crist ◽  
Tarik Benmarhnia

Abstract Policies to restrict population mobility are a commonly used strategy to limit the transmission of contagious diseases. Among the measures implemented during the COVID-19 pandemic were dynamic stay-at-home orders informed by real-time, regional-level data. California was the only state in the U.S. to implement this novel approach; however, the effectiveness of California’s four-tier system on population mobility has not been quantified. Utilizing data from mobile devices and county-level demographic data, we evaluated the impact of policy changes on population mobility and explored whether demographic characteristics explained variability in responsiveness to policy changes. For each Californian county, we calculated the proportion of people staying home and the average number of daily trips taken per 100 persons across different trip distances, and compared these to pre-COVID-19 levels. We found that overall mobility decreased when counties moved to a more restrictive tier and increased when they moved to a less restrictive tier, as the policy intended. When counties were placed in a more restrictive tier, the greatest decrease in mobility was observed for shorter- and medium-range trips, while there was an unexpected increase in longer trips. The mobility response varied by geographic region, as well as by county-level median income, gross domestic product, the prevalence of farms, and recent election results. This analysis provides evidence of the effectiveness of the tier-based system in decreasing overall population mobility to ultimately reduce COVID-19 transmission. The results demonstrate that economic and political indicators drive important variability in such patterns across counties.
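The county-level mobility metrics described above reduce to simple ratios. A minimal sketch follows; all device counts, trip totals, and populations are invented for illustration and are not the study's data.

```python
def pct_staying_home(devices_home, devices_total):
    """Share of sampled devices registering no trips that day, as a percentage."""
    return 100.0 * devices_home / devices_total

def trips_per_100(trip_count, population):
    """Average number of daily trips taken per 100 persons."""
    return 100.0 * trip_count / population

def pct_change_vs_baseline(current, baseline):
    """Relative change against the pre-COVID-19 baseline, in percent."""
    return 100.0 * (current - baseline) / baseline

# Hypothetical county-day: 30,000 of 100,000 sampled devices stayed home,
# and 250,000 trips were taken in a county of 500,000 residents.
home_pct = pct_staying_home(30_000, 100_000)   # 30.0% staying home
trip_rate = trips_per_100(250_000, 500_000)    # 50.0 trips per 100 persons

# Comparing the trip rate to a hypothetical pre-pandemic baseline of 62.5
# trips per 100 persons gives the mobility change the study tracks per tier.
mobility_change = pct_change_vs_baseline(trip_rate, 62.5)  # -20.0%
```

Under the tier system, one would expect `mobility_change` to move toward more negative values when a county enters a more restrictive tier, and back toward zero when restrictions ease.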

