Conducting Comparable Research in Representative Worlds

Author(s):
Chiara Santomauro
Penelope Sanderson

Field-based simulation research can be delayed or prevented by restricted resources and other practical challenges. Although laboratory work is a feasible alternative, it is often criticized for a lack of generalizability. We faced this issue when investigating the impact of workplace interruptions on nurses’ work performance in the Intensive Care Unit (ICU). The potential relationship between interruptions and errors has been widely investigated in healthcare settings; however, much of the evidence is associative. Some evidence outside the healthcare domain points to a causal connection between interruptions and errors, but the studies are mostly laboratory based and the interruptions are artificial rather than representative of the situation. Our eventual aim is to carry out a high-fidelity randomized controlled trial to test the hypothesis that interruptions cause errors in healthcare, which could have major implications for interventions and policy. However, there are considerable challenges and constraints to overcome when designing such an experiment; for example, (a) a limited potential participant pool within the ICU, (b) constant changes to technology and procedures in the ICU, and (c) restricted access to hospital simulation rooms. There are various ways to address these issues, but most options compromise the generalizability of the simulation to authentic situations.

By adopting principles of Brunswik’s representative design, we designed an initial laboratory study to be used as a formal pilot study in a different, but parallel, context to healthcare. Representative design refers to the “arrangement of conditions of an experiment so that they represent the behavioural setting to which the results are intended to apply” (Araujo, Davids, & Passos, 2007, p. 72). What is crucial in representative design is not representing the ‘natural world’ with high fidelity, but ensuring that the properties of the conditions to which the researcher wishes to generalize are adequately captured in the laboratory task (Hammond & Stewart, 2001). Accordingly, we created a laboratory-based simulation on the basis of the properties we aim to capture in a healthcare-based simulation, so that we can generalize findings from the former to the latter.

The laboratory component needed to involve a task that embodied the high-level properties of medication preparation and administration, performed by a specialized population who carry out the task regularly. The task of cocktail making fits these requirements because it has high-level properties similar to medication preparation and administration: it is concerned with controlled liquids, it requires perceptual-motor skills, it involves multi-step tasks carried out in a busy environment with high demands on working memory, and it is performed by experts (bartenders). These similarities mean that cocktail making can be used as a laboratory analog of certain aspects of medication preparation and administration. First, we mapped the physical environment across the two domains to ensure the cocktail component possessed the same spatial properties as the medication component. Second, we designed the high-level structure of the cocktail scenarios to approximate the high-level structure of the medication component scenarios. Third, we designed the interrupting tasks and added them to the scenarios.
Experiment 1 of the cocktail component was a condition with zero interruptions, run to provide a baseline error rate from which to calculate the required sample size for Experiment 2. Experiment 2 was a between-subjects study in which participants were randomly assigned to receive either 3 or 12 interruptions. All participants had at least one month of cocktail-making experience in a licensed venue. Cocktail errors were the analog of clinical errors in healthcare (Westbrook et al., 2010). In Experiment 1, an average of 44% of the cocktails made per scenario contained at least one error (SD = 18%). To calculate the required sample size for Experiment 2, an effect size was calculated by integrating our baseline data with observational data from Westbrook et al. (2010), and a power analysis was performed. Data collection for Experiment 2 is now complete. The findings from Experiment 2 will be used to calculate the required sample size for the medication component, and lessons learned from the cocktail component will help finalize the design of the medication component. The cocktail component of our study is not totally analogous to the medication component, but we have shown that principles of representative design can be used to design a simulation from which we can argue that findings will generalize. A major advantage of our approach is that we have been able to design and test many formal aspects of the medication component of our study before stepping into the hospital simulator. Findings from the medication component will help to shape interventions and policies in the healthcare domain that reduce error rates and increase patient safety. The cocktail component will also contribute to the interruptions literature because the interruptions are representative of those that would actually occur in the workplace, something that is rare in laboratory-based interruptions research.
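
As an illustration of the sample-size step described above (not the authors' calculation), the sketch below runs a two-proportion power analysis in Python with statsmodels. The 44% figure is the Experiment 1 baseline; the second error rate stands in for the effect size derived from Westbrook et al. (2010) and is purely an assumption.

```python
# Hedged sketch of a sample-size calculation from a baseline error rate.
# p_alternative is an illustrative assumption, not a figure from the study.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_baseline = 0.44        # Experiment 1: proportion of cocktails with at least one error
p_alternative = 0.60     # assumed error rate under a heavier interruption load

effect_size = proportion_effectsize(p_alternative, p_baseline)   # Cohen's h
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Cohen's h = {effect_size:.2f}; required n per group ≈ {n_per_group:.0f}")
```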

1998
Vol 64 (5)
pp. 1589-1593
Author(s):
Michael J. Weickert
Izydor Apostol

ABSTRACT Coexpression of di-α-globin and β-globin in Escherichia coli in the presence of exogenous heme yielded high levels of soluble, functional recombinant human hemoglobin (rHb1.1). High-level expression of rHb1.1 provides a good model for measuring mistranslation in heterologous proteins. rHb1.1 does not contain isoleucine; therefore, any isoleucine present could be attributed to mistranslation, most likely mistranslation of one or more of the 200 codons that differ from an isoleucine codon by 1 bp. Sensitive amino acid analysis of highly purified rHb1.1 typically revealed ≤0.2 mol of isoleucine per mol of hemoglobin. This corresponds to a translation error rate of ≤0.001, which is not different from typical translation error rates found for E. coli proteins. Two different expression systems that resulted in accumulation of globin proteins to levels equivalent to ∼20% of the level of E. coli soluble proteins also resulted in equivalent translational fidelity.
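
The error-rate figure follows from a simple ratio of the two numbers quoted in the abstract; the back-of-envelope check below reproduces it (an illustrative calculation, not the authors' code).

```python
# Back-of-envelope check of the translation error rate quoted above.
ile_per_molecule = 0.2      # upper bound: mol isoleucine per mol rHb1.1 (amino acid analysis)
near_cognate_codons = 200   # codons in rHb1.1 that differ from an Ile codon by 1 bp
error_rate = ile_per_molecule / near_cognate_codons
print(error_rate)           # 0.001 -> at most ~1 misreading per 1000 near-cognate codons
```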


2016
Vol 2016
pp. 1-8
Author(s):
Elahe Allahyari
Peyman Jafari
Zahra Bagheri

Objective. The present study uses simulated data to determine the optimal number of response categories needed to achieve adequate power in the ordinal logistic regression (OLR) model for differential item functioning (DIF) analysis in psychometric research. Methods. A hypothetical ten-item quality of life scale with three, four, and five response categories was simulated. The power and type I error rates of the OLR model for detecting uniform DIF were investigated under different combinations of ability distribution (θ), sample size, sample size ratio, and the magnitude of uniform DIF across reference and focal groups. Results. When θ was distributed identically in the reference and focal groups, increasing the number of response categories from 3 to 5 resulted in an increase of approximately 8% in the power of the OLR model for detecting uniform DIF. The power of OLR was less than 0.36 when the ability distribution in the reference and focal groups was highly skewed to the left and right, respectively. Conclusions. The clearest conclusion from this research is that the minimum number of response categories for DIF analysis using OLR is five. However, the impact of the number of response categories in detecting DIF was lower than might be expected.
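
For readers unfamiliar with the OLR approach to uniform DIF, the sketch below shows the usual likelihood-ratio test (matching variable plus a group term) on simulated data. The data, variable names, and effect size are assumptions for illustration and do not reproduce the study's simulation design.

```python
# Illustrative uniform-DIF test with ordinal logistic regression (not the authors' code).
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)                    # 0 = reference group, 1 = focal group
theta = rng.normal(0, 1, n)                      # latent ability
# One 5-category item; the focal group gets a small uniform shift (uniform DIF).
latent = theta + 0.4 * group + rng.logistic(0, 1, n)
item = np.digitize(latent, bins=[-1.5, -0.5, 0.5, 1.5])   # responses coded 0..4
total = theta                                    # stand-in for the matching criterion

reduced = OrderedModel(item, pd.DataFrame({"total": total}),
                       distr="logit").fit(method="bfgs", disp=False)
full = OrderedModel(item, pd.DataFrame({"total": total, "group": group}),
                    distr="logit").fit(method="bfgs", disp=False)

lr = 2 * (full.llf - reduced.llf)                # likelihood-ratio test for uniform DIF
print("LR =", round(lr, 2), "p =", round(float(stats.chi2.sf(lr, df=1)), 4))
```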


Author(s):
Len LeBlanc
Walter Kresic
Sean Keane
John Munro

This paper describes the integrity management framework utilized within the Enbridge Liquids Pipelines Integrity Management Program. The role of the framework is to provide the high-level structure used by the company to prepare and demonstrate integrity safety decisions relative to mainline pipelines and, where applicable, facility piping segments. The scope is directed to corrosion, cracking, and deformation threats and all variants within those broad categories. The basis for the framework centers on the use of a safety case to provide evidence that the risks affecting the system have been effectively mitigated. A ‘safety case’, for the purposes of this methodology, is defined as a structured argument demonstrating that the evidence is sufficient to show that the system is safe.[1] The decision model brings together data integration and determination of maintenance timing; execution of prevention, monitoring, and mitigation; confirmation that the execution has met reliability targets; application of additional steps if targets are not met; and the collation of the results into an engineering assessment of the program’s effectiveness (the safety case). Once the program is complete, continuous improvement is built into the next program through the incorporation of research and development solutions, lessons learned, and improvements to processes. On the basis of a wide range of experience, investigations, and research, it was concluded that combinations of monitoring and mitigation methods are required in an integrity program to effectively manage integrity threats. A safety case approach ultimately provides the structure for measuring the effectiveness of integrity monitoring and mitigation efforts, and the methodology for assessing whether a pipeline is sufficiently safe, with targets for continuous improvement. Hence, the safety case exists to provide transparent, quantitative integrity program performance results that are continually improved upon through ongoing revalidations and improvements to the methods utilized. This enables risk reduction, better stakeholder awareness, focused innovation, and opportunities for industry information sharing, along with other benefits.


Sensors
2019
Vol 19 (18)
pp. 4037
Author(s):
Shania Stewart
Ha H. Nguyen
Robert Barton
Jerome Henry

This paper presents two methods to optimize LoRa (Low-Power Long-Range) devices so that implementing multiplier-less pulse shaping filters is more economical. Basic chirp waveforms can be generated more efficiently using the method of chirp segmentation so that only a quarter of the samples needs to be stored in the ROM. Quantization can also be applied to the basic chirp samples in order to reduce the number of unique input values to the filter, which in turn reduces the size of the lookup table for multiplier-less filter implementation. Various tests were performed on a simulated LoRa system in order to evaluate the impact of the quantization error on the system performance. By examining the occupied bandwidth, fast Fourier transform used for symbol demodulation, and bit-error rates, it is shown that even performing a high level of quantization does not cause significant performance degradation. Therefore, the memory requirements of LoRa devices can be significantly reduced by using the methods of chirp segmentation and quantization so as to improve the feasibility of implementing multiplier-less filters in LoRa devices.
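
As a rough illustration of the quantization idea described above (not the authors' implementation), the Python sketch below generates a conventional base up-chirp and uniformly quantizes its I/Q samples, shrinking the set of unique filter inputs at the cost of a small waveform error. The spreading factor, chirp expression, and bit width are assumptions.

```python
# Illustrative sketch: quantizing base-chirp samples to reduce the number of unique
# inputs to a multiplier-less pulse shaping filter. SF, the chirp expression, and the
# 3-bit quantizer are assumptions for demonstration, not values from the paper.
import numpy as np

SF = 7
N = 2 ** SF                                    # samples per LoRa symbol at chip rate
n = np.arange(N)
base_chirp = np.exp(1j * np.pi * n**2 / N)     # a common linear up-chirp form

def quantize(x, bits):
    """Uniformly quantize values in [-1, 1] to 2**bits levels."""
    step = 2.0 / (2 ** bits)
    return np.clip(np.round(x / step) * step, -1.0, 1.0)

q = quantize(base_chirp.real, bits=3) + 1j * quantize(base_chirp.imag, bits=3)
unique_inputs = np.unique(np.round(q.real, 6)).size   # fewer unique values -> smaller lookup table
max_error = np.max(np.abs(q - base_chirp))            # worst-case sample error from quantization
print(f"unique I values: {unique_inputs}, max sample error: {max_error:.3f}")
```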


2006
Vol 33 (4)
pp. 859-877
Author(s):
Caroline F. Rowland
Sarah L. Fletcher

Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rate estimates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as to overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as one from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system, and that analyses of small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.
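
A toy simulation makes the small-sample point concrete. The sketch below is not the authors' analysis; the "true" error rate and sample sizes are assumptions chosen only to show how estimates for an infrequent construction scatter when few tokens are sampled.

```python
# Toy Monte Carlo: how well does a sample of k tokens of a rare construction
# estimate its true error rate? The 10% rate and sample sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
true_error_rate = 0.10
for k in (10, 50, 500):                        # tokens of the construction in the sample
    est = rng.binomial(k, true_error_rate, size=10_000) / k
    print(f"k={k:4d}  mean={est.mean():.3f}  sd={est.std():.3f}  "
          f"P(no errors observed)={np.mean(est == 0):.3f}")
# With small k the estimates scatter widely and often contain no errors at all,
# so small samples can substantially over- or underestimate the true rate.
```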


2016
Vol 101 (9)
pp. e2.13-e2
Author(s):
Anastasia Tsyben
Nigel Gooding
Wilf Kelsall

Aim: Prescribing audits have shown that the Women's and Children's Directorate reported a higher number of prescription errors on the paediatric and neonatal wards compared to other areas in the Trust. Over the last three years a multidisciplinary prescribing team (PT), which included senior clinicians, pharmacists and trainees, introduced a number of initiatives to improve the quality of prescribing. Strategies included structured departmental inductions, setting up designated prescribing areas and reviewing errors with the prescriber. Year on year there were fewer prescribing errors.1 With the introduction of a new electronic prescribing system in October 2014, prescribing error rates were expected to decrease further, eradicating omissions around allergy recording, ward location and drug names. The aim of this abstract is to highlight the impact of the new system and describe lessons learned. Method: In the summer of 2014, all inpatient drug charts across the department were reviewed on three non-consecutive days over a period of three weeks. Prescribing errors were identified by the ward pharmacist. Errors were grouped according to type and further analyzed by the PT. Errors deemed to have no clinical significance were excluded. Error rates were compared to previous audits performed with identical methodology. Following the introduction of the electronic prescribing system, the ward pharmacists continued to review prescription charts on a daily basis and generate regular error reports to notify staff of new challenges. Results: There were 174 (14%) errors out of 1225 prescriptions on 181 drug charts. The most common mistakes involved drug names, strength of preparation, allergies and ward documentation, prescribers' signature omissions, and antibiotic review and end dates. The introduction of the electronic system has eliminated drug name, strength of preparation, allergy recording and ward errors. However, serious challenges have been identified: entering an incorrect weight resulted in all drug dosages being inaccurate; the timing of drug levels for vancomycin and gentamicin and the administration of subsequent doses have been problematic. Communication difficulties between staff groups have led to dosage omission, duplicate administration and confusion around start and stop dates. The ability to prescribe away from the bedside, and indeed the ward, has compounded some of these problems. Conclusion: The implementation of the new electronic system has reduced prescribing errors but has also introduced new challenges, some with significant patient safety implications. The lessons learned and good practice introduced following previous audits of traditional paper-based prescribing are equally important with electronic prescribing. Communication between staff groups is crucial. It is likely that the full benefits of the system will be realized a year after its introduction. Ongoing audit is required to assess the impact and safety of electronic prescribing and the lessons learned.


Author(s):
V. Kovpak
N. Trotsenko

The article analyzes the peculiarities of the native advertising format in the media space and its pragmatic potential (in particular, using the example of native content posted on the social network Facebook by the brand of the journalism department of ZNU), and highlights the types and trends of native advertising. The following research methods were used to achieve the purpose of the study: descriptive (description of the content, including various examples), comparative (options for presenting content) and typological (types and trends of native advertising, in particular cross-media as an opportunity to present content in different formats (video, audio, photos, text, infographics, etc.)), together with content analysis using Internet services (the Popsters service). The basis for the analysis was the page of the journalism department of Zaporizhzhya National University on the social network Facebook; in 2019 the department's brand celebrates its 15th anniversary. The brand vector is its value component and professional training with a balanced distribution of theoretical and practical blocks (seven practices), student-centredness (democratic interaction and a high level of teacher-student dialogue) and integration into the Ukrainian and world educational process (participation in grant programs).

Advertising on social networks is also a kind of native content: it does not appear in special blocks, but is organically embedded in a page and unobtrusively promotes the product, mentioning it only in passing. The Popsters service evaluates an account (or the linked accounts of one person) on 35 parameters, of which the main three areas are: reach or influence (how many users rate or comment on a post); true reach (the number of people actually reached); and network score (an assessment of the audience's response, i.e. how far the information spreads across the network, or how many users share information from the page).

Key words: nativeness, native advertising, branded content, special project, communication strategy.


2020
Vol 2020 (10)
pp. 19-33
Author(s):
Nadiia NOVYTSKA
Inna KHLIEBNIKOVA

The market of tobacco products in Ukraine is one of the most dynamic and competitive. It develops under the influence of certain factors that cause structural changes; therefore, the aim of the article is to conduct a comprehensive analysis of the transformation processes in the market of tobacco products and their alternatives in Ukraine and to identify the factors that cause them. The high level of tax burden and the proliferation of alternative products with a potentially lower risk to human health, including heated tobacco products and e-cigarettes, are key factors in the market's transformation. Their presence leads to an increase in illicit turnover of tobacco products, which accounts for 6.37% of the market, and to the gradual replacement of cigarettes with alternative products, which account for 12.95%. The presence on the market of products that are not taxed or are taxed at lower rates is one of the reasons for the reduction of excise duty revenues: in 2019, planned revenue targets were missed by 23.5%. Other reasons for the shortfall in excise duty revenues include the declining dynamics of the tobacco products market; a reduction in the number of smokers; the reorientation of «cheap whites» cigarette flows from Ukraine to neighboring countries; and tax avoidance. Prospects for further research are identified, namely the need to develop measures for state regulation and optimization of excise duty taxation of tobacco products and their alternatives, taking into account the risks to public health and the increasing demand for illegal products.


2020
Vol 38 (3)
Author(s):
Shoaib Ali
Imran Yousaf
Muhammad Naveed

This paper aims to examine the impact of external credit ratings on the financial decisions of firms in Pakistan. The study uses annual data on 70 non-financial firms for the period 2012-2018 and employs ordinary least squares (OLS) to estimate the impact of credit rating on capital structure. The results show that rated firms have a higher level of leverage. Profitability and tangibility are also found to be significantly negative determinants of capital structure, whereas firm size has a significant positive relationship with the capital structure of the firm. In addition, there exists a non-linear relationship between credit rating and capital structure: rated firms have higher leverage than non-rated firms, but high- and low-rated firms have a lower level of leverage, while mid-rated firms have a higher leverage ratio. The findings of the study have practical implications for managers: firms can gain easier access to the financial market simply by holding a credit rating, whether high or low. Policymakers should press rating agencies to keep improving, since their ratings serve as the measure by which both investors and management judge the creditworthiness of a firm.
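
A minimal sketch of the kind of leverage regression described above is given below. The variable names, the simulated data, and the exact specification are assumptions for illustration and may differ from the paper's model.

```python
# Hedged sketch of an OLS capital-structure regression with a credit-rating dummy.
# The data below are simulated placeholders, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 490  # e.g. 70 firms observed over 2012-2018
df = pd.DataFrame({
    "rated": rng.integers(0, 2, n),            # 1 if the firm holds an external credit rating
    "profitability": rng.normal(0.10, 0.05, n),
    "tangibility": rng.uniform(0.2, 0.8, n),
    "size": rng.normal(15, 2, n),              # e.g. log of total assets
})
# Simulated leverage consistent with the signs reported in the abstract.
df["leverage"] = (0.05 * df["rated"] - 0.3 * df["profitability"]
                  - 0.1 * df["tangibility"] + 0.02 * df["size"]
                  + rng.normal(0, 0.05, n))

X = sm.add_constant(df[["rated", "profitability", "tangibility", "size"]])
model = sm.OLS(df["leverage"], X).fit()
print(model.summary())
```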

