The new normal? Redaction bias in biomedical science

2021 ◽  
Vol 8 (12) ◽  
Author(s):  
David Robert Grimes ◽  
James Heathers

A concerning amount of biomedical research is not reproducible. Unreliable results impede empirical progress in medical science, ultimately putting patients at risk. Many proximal causes of this irreproducibility have been identified, a major one being inappropriate statistical methods and analytical choices by investigators. Within this, we formally quantify the impact of inappropriate redaction beyond a threshold value in biomedical science. This is effectively truncation of a dataset by removing extreme data points, and we elucidate its potential to accidentally or deliberately engineer a spurious result in significance testing. We demonstrate that the removal of a surprisingly small number of data points can dramatically alter a result. It is unknown how often redaction bias occurs in the broader literature, but given the risk of distortion involved, we suggest that it must be studiously avoided and mitigated with approaches that counteract any potential malign effects on the research quality of medical science.
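The truncation effect the authors quantify can be illustrated with a toy significance test: dropping just two extreme points flips a comfortably non-significant comparison into a highly significant one. This is a sketch with synthetic numbers, not the paper's data or code.

```python
import numpy as np
from scipy.stats import ttest_ind

# Synthetic example: a treatment group whose two extreme low values
# inflate the variance and mask an otherwise consistent difference.
control = np.array([5.0, 5.2, 4.8, 5.1, 4.9, 5.0, 5.3, 4.7, 5.1, 4.9])
treatment = np.array([5.6, 5.8, 5.5, 5.7, 5.9, 5.6, 5.8, 5.7, 1.0, 0.8])

# Full data: Welch's t-test is non-significant.
p_full = ttest_ind(control, treatment, equal_var=False).pvalue

# "Redacting" the two most extreme points flips the conclusion.
redacted = np.sort(treatment)[2:]   # drop the two smallest values
p_redacted = ttest_ind(control, redacted, equal_var=False).pvalue

print(f"p with all data:   {p_full:.3f}")      # well above 0.05
print(f"p after redaction: {p_redacted:.2g}")  # far below 0.05
```

Whether such a removal is labelled "outlier exclusion" or "redaction", the statistical consequence is the same, which is why the authors argue the practice must be pre-specified and reported.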

1997 ◽  
Vol 11 (5) ◽  
pp. 304-313
Author(s):  
W. O. George ◽  
A. N. Hill

In this paper, the origins and characteristics of the 102 current UK universities are briefly traced and the outcomes of recent assessments of research quality are summarized for all universities and for the 69 subject units within which assessment was made. The quality of research in a subject unit, group of subject units or complete institution is measured by a weighted average score based on a peer rating of submitted subject units from each university and the numerical values obtained are described within the limitations of the methodology developed. The authors consider the scores in terms of the characteristics of each university and the broad subject areas, science, engineering, social sciences and humanities. They then discuss the industrial link with research in terms of recent government policy inputs, university research outcomes and the impact of market forces on universities from diminishing patterns of some income streams.


2021 ◽  
Vol 45 (1) ◽  
pp. 170-194
Author(s):  
Richard O. Welsh

The contemporary social, economic, and cultural conditions within and outside the academy prompt important questions about the role of research in education policy and practice. Scholars have framed research-practice partnerships (RPPs) as a strategy to promote evidence-based decision-making in education. In this chapter, I interrogate the notion that RPPs offer an insightful framework to consider how the quality of research can be measured through its use. The findings suggest that using RPPs to assess the quality of education research enhances the relevance to policy and practice as well as attention to the quality of reporting, and pivots from the preeminence of methodological quality. RPPs increase local education leaders’ access to research and bolster the use of research. RPPs may also strengthen the alignment between education research and the public good. Notwithstanding, employing RPPs as a vehicle to assess research quality has its challenges. Valuing the work of RPPs in academia is a work in progress. Building and sustaining an RPP is challenging, and there is still much to learn about the ways in which RPPs work and overcome obstacles. Assessing the impact of RPPs is also difficult. Future considerations are discussed.


2019 ◽  
Vol 32 (1) ◽  
pp. 2-25 ◽  
Author(s):  
James Guthrie ◽  
Lee D. Parker ◽  
John Dumay ◽  
Markus J. Milne

Purpose: The purpose of this paper is to reflect upon the focus and changing nature of measuring academic accounting research quality. The paper addresses contemporary changes in academic publishing, metrics for determining research quality and the possible impacts on accounting scholars. These are considered in relation to the core values of interdisciplinary accounting research, that is, the pursuit of novel, rigorous, significant and authentic research motivated by a passion for scholarship, curiosity and solving wicked problems. The impact of changing journal rankings and research citation metrics on the traditional and highly valued role of the accounting academic is further considered. In this setting, the paper also provides a summary of the journal's activities for 2018 and beyond.

Design/methodology/approach: Drawing on contemporary data sets, the paper illustrates the increasingly diverse and confusing array of "evidence" brought to bear on the question of the relative quality of accounting research. Commercial products used to rate and rank journals, and to judge the academic impact of individual scholars and their papers, not only offer insight and visibility but also have the potential to misinform scholars and their assessors.

Findings: In the move from simple journal ranking lists to big data and citations, and increasingly to concerns with impact and engagement, the authors identify several challenges facing academics and administrators alike. The individual academic and his or her contribution to scholarship are increasingly marginalised in the name of discipline, faculty and institutional performance. A growing university performance management culture within, for example, the UK and Australasia has reached a stage in the past decade where publication and citation metrics are driving allocations of travel grants, research grants, promotions and appointments. With an expanded range of available metrics and products to judge their worth, or have it judged for them, scholars need to be increasingly informed of the nuanced or not-so-nuanced uses to which these measurement systems will be put. Narrow, restricted and opaque peer-based sources such as journal ranking lists are now being challenged by more transparent citation-based sources.

Practical implications: The issues addressed in this commentary offer a critical understanding of contemporary metrics and measurement in determining the quality of interdisciplinary accounting research. Scholars are urged to reflect upon the challenges they face in a rapidly moving context. Individuals are increasingly under pressure to seek out preferred publication outlets and to develop and curate a personal citation profile. Yet such extrinsic outcomes may come at the cost of the core values that motivate the interdisciplinary scholar and research.

Originality/value: This paper provides a forward-looking focus on the critical role of academics in interdisciplinary accounting research.


Author(s):  
Tushar ◽  
Shibendu Shekhar Roy ◽  
Dilip Kumar Pratihar

Clustering is a powerful tool of data mining. A clustering method analyzes the pattern of a data set and groups the data into several clusters based on the similarity among the data points. Clusters may be either crisp or fuzzy in nature. The present chapter deals with clustering of some data sets using the Fuzzy C-Means (FCM) algorithm and the Entropy-based Fuzzy Clustering (EFC) algorithm. In the FCM algorithm, the nature and quality of the clusters depend on the pre-defined number of clusters, the level of cluster fuzziness and a threshold value used to identify the outliers (if any). The quality of clusters obtained by the EFC algorithm, on the other hand, depends on a constant used to establish the relationship between the distance and the similarity of two data points, a threshold value of similarity and another threshold value used to determine the number of outliers. Ideally, the clusters should be distinct and, at the same time, compact in nature. Moreover, the number of outliers should be as small as possible. The above problem may therefore be posed as an optimization problem, which is solved using a Genetic Algorithm (GA). The best set of multi-dimensional clusters is then mapped into 2-D for visualization using a Self-Organizing Map (SOM).
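The FCM step can be sketched in a few lines of numpy. This is a minimal illustration of the standard FCM update, not the chapter's GA-tuned FCM/EFC pipeline; `m` is the cluster-fuzziness level the abstract refers to, and the two-blob data set is invented for the demo.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal Fuzzy C-Means: X is (n, d) data, c the number of clusters."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)       # memberships of each point sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # fuzzy-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        inv = np.fmax(d, 1e-12) ** (-2.0 / (m - 1.0))     # standard FCM update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Two well-separated blobs: FCM recovers one centre per blob.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(10, 0.5, (20, 2))])
centers, U = fuzzy_c_means(X, c=2)
```

Raising `m` toward larger values makes the memberships progressively fuzzier, which is exactly the knob the chapter's GA searches over alongside the outlier thresholds.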


2016 ◽  
Vol 8 (1) ◽  
Author(s):  
Lucia Parisi ◽  
Teresa Di Filippo ◽  
Michele Roccella

Nowadays, quality of life is receiving increasing attention in all scientific areas. Rett syndrome (RTT) is a rare neurodevelopmental disorder affecting mainly females. The congenital disease affects the central nervous system and is one of the most common causes of severe intellectual disability. The aim of our study is to evaluate the effect of RTT on the quality of life of people who are affected. Both parents of 18 subjects, all female, diagnosed with RTT took part in the research. Quality of life was assessed using the Italian version of the Impact of Childhood Illness Scale. This scale consists of 30 questions that investigate the effect of illness on children, parents and families. For each question, the parent was asked to rate two variables: frequency and importance. Another questionnaire was administered to obtain the medical history and the diagnostic and therapeutic data of the persons with RTT. Our data show that RTT has a considerable impact on both the child's development and the entire family. Parents' answers demonstrated that their child's illness had consequences both for the child and for how the family coped with it. For this reason, attention should be directed at psychological and social aspects, as well as at the attitudes, manners, reactions and effects such disturbances can have on the entire family.


2019 ◽  
Vol 45 (2) ◽  
pp. 190-221 ◽  
Author(s):  
Domenico Piatti ◽  
Peter Cincinelli

Purpose: The purpose of this paper is to investigate whether the quality of the credit process is sensitive to reaching a particular threshold level of non-performing loans (NPLs) and, more importantly, whether higher NPLs ratios could make the monitoring activity ineffective.

Design/methodology/approach: The empirical design is composed of two steps. In the first step, the authors introduce a monitoring performance indicator (MPI) of the credit process by combining the non-parametric technique Data Envelopment Analysis with some financial ratios adopted as input and output variables. In the second step, the authors apply a threshold panel regression model to a sample of 298 Italian banks over the period 2006–2014 and investigate whether the quality of the credit process is sensitive to reaching a particular threshold level of NPLs.

Findings: This paper finds that, first, when the NPLs ratio remains below the endogenously estimated threshold value, an increase in the quality of monitoring has a positive impact on the NPLs ratio. Second, if the NPLs ratio exceeds the estimated threshold, the relationship between the NPLs ratio and the quality of monitoring becomes positive and statistically significant.

Research limitations/implications: Due to the lack of data, the investigation of NPLs in the Italian industry across loan types, combined with the monitoring effort by bank management, was not possible. The authors plan to investigate this topic in future studies.

Practical implications: The identification of the threshold has a double operational valence. The first regards the Supervisory Authority: the threshold approach could be used as an early warning for introducing active control strategies based on additional information requests or on-site inspections. The second concerns individual banks: the monitoring of credit control quality, if objective and comparable, could facilitate the emergence of best practices among banks.

Social implications: A high NPLs ratio requires greater loan provisions, which reduces the capital resources available for lending and dents bank profitability. Moreover, structural weaknesses on banks' balance sheets still persist, particularly in relation to inadequate internal governance structures. This means that bank management must be able to recognise early warning signals in advance by providing prudent measurement together with an in-depth valuation of the loan portfolio.

Originality/value: The originality of the paper is twofold: the authors introduce a new proxy of credit monitoring, called the MPI, and they provide empirical proof of Diamond's (1991) economic intuition: for riskier borrowers, monitoring is an inappropriate instrument owing to the bad reputational quality of the borrowers.
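The core idea of a threshold regression, searching for the break point at which a slope changes, can be sketched on synthetic data. This is a toy single-equation illustration, not the authors' Hansen-type threshold panel estimator, and all variable names are hypothetical.

```python
import numpy as np

# Synthetic data: the slope of x flips once the threshold variable q crosses
# tau, mimicking monitoring quality turning ineffective above an NPLs-ratio
# threshold.
rng = np.random.default_rng(0)
n = 400
q = rng.uniform(0, 10, n)            # threshold variable (e.g. NPLs ratio)
x = rng.normal(0, 1, n)              # regressor (e.g. monitoring quality)
true_tau = 6.0
y = np.where(q <= true_tau, -0.5, 0.8) * x + rng.normal(0, 0.3, n)

def ssr_at(tau):
    """Residual sum of squares of OLS with separate slopes below/above tau."""
    Z = np.column_stack([np.ones(n), x * (q <= tau), x * (q > tau)])
    beta = np.linalg.lstsq(Z, y, rcond=None)[0]
    r = y - Z @ beta
    return r @ r

# The threshold estimate is the grid point minimising the SSR.
grid = np.linspace(1, 9, 161)
tau_hat = grid[np.argmin([ssr_at(t) for t in grid])]
print(f"estimated threshold: {tau_hat:.2f}")
```

In the panel setting the same SSR-minimising search runs over bank-year observations with fixed effects, but the intuition is identical: the estimated threshold is the split point that best explains the regime change in the slope.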


2017 ◽  
Vol 83 (6) ◽  
pp. 633-639 ◽  
Author(s):  
Nathan M. Hinkle ◽  
Vandana Botta ◽  
John P. Sharpe ◽  
Paxton Dickson ◽  
Jeremiah Deneve ◽  
...  

Improved oncological outcomes after cytoreductive surgery (CRS) with hyperthermic intraperitoneal chemotherapy (HIPEC) in highly selected patients have been well documented. The extensive nature of the procedure adversely affects quality of life (QoL). The aim of this study is to longitudinally evaluate QoL following CRS/HIPEC. This is a retrospective review of a prospectively maintained database of patients with peritoneal malignancies undergoing CRS/HIPEC. Clinicopathological data, oncologic outcomes, and QoL were analyzed preoperatively and post-operatively at 2 weeks, and 1, 3, 6, and 12 months. The Functional Assessment of Cancer Therapy-Colorectal instrument was used to determine changes in QoL after CRS/HIPEC and the impact of early recurrence (<12 months) on QoL. Thirty-six patients underwent CRS/HIPEC over 36 months. The median peritoneal cancer index score was 18 and the completeness of cytoreduction-0/1 rate was 97.2 per cent. Postoperative major morbidity was 16.7 per cent with one perioperative death. Disease-free survival was 12.6 months in patients with high-grade tumors versus 31.0 months in those with low-grade tumors (P = 0.03). QoL decreased postoperatively and improved to baseline in six months. Patients with early recurrence had a decrease in global QoL compared with preoperative QoL at 6 (P < 0.03) and 12 months (P < 0.05). This correlation was not found in patients who had not recurred. Patients who undergo CRS/HIPEC have a decrease in QoL that plateaus in 3 to 6 months. Early recurrence adversely impacts QoL at 6 and 12 months. This study emphasizes the importance of patient selection for CRS/HIPEC. The expected QoL trajectory in patients at risk for early recurrence must be carefully weighed against the potential oncological benefit of CRS/HIPEC.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Linh Truong-Hong ◽  
Roderik Lindenbergh ◽  
Thu Anh Nguyen

Purpose: Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimates depend strongly on the quality of each step of the workflow, which has not been fully addressed. This study aims to give insight into the errors of these steps, and the results are intended as guidelines for practitioners to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. The main contributions of the paper are: investigating how point cloud registration error affects the resulting deformation estimate, identifying an appropriate segmentation method for extracting the data points of a deformed surface, investigating a methodology to determine an undeformed or reference surface for estimating deformation, and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Design/methodology/approach: In practice, the quality of the point clouds and of the surface extraction strongly affects deformation estimates based on laser scanning, which can lead to an incorrect decision on the state of the structure when uncertainty is present. To gain more comprehensive insight into these impacts, this study addresses four issues: data errors due to registration of multiple scanning stations (Issue 1), methods used to extract point clouds of structure surfaces (Issue 2), selection of the reference surface Sref used to measure deformation (Issue 3), and the presence of outliers and/or mixed pixels (Issue 4). The investigation is demonstrated by estimating the deformation of a bridge abutment, a building and an oil storage tank.

Findings: The study shows that both random sample consensus (RANSAC) and region-growing methods [cell-based and voxel-based region growing (CRG/VRG)] can extract the data points of surfaces, but RANSAC is only applicable to primary primitive surfaces (e.g. a plane in this study) subject to small deformations (case studies 2 and 3) and cannot eliminate mixed pixels. CRG and VRG, on the other hand, are suitable methods for deformed, free-form surfaces. In addition, in practice a reference surface of a structure is mostly not available. Using a plane fitted to the point cloud of the current surface would yield unrealistic and inaccurate deformations, because outlier data points and data points of damaged areas affect the accuracy of the fitted plane. This study therefore recommends using a reference surface determined from the design concept/specification. A smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Research limitations/implications: Owing to logistical difficulties, an independent measurement could not be established to assess the accuracy of the deformations derived from the TLS point clouds in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with accuracy on the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.

Practical implications: This study gives insight into the errors of the workflow steps, and the results serve as guidelines for practitioners to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds.

Social implications: The results of this study provide guidelines for a practical community, and a low-cost method can be applied for deformation analysis of structures.

Originality/value: Although many studies have used laser scanning to measure structural deformation over the last two decades, the methods applied mainly measured change between two states (or epochs) of the structure surface and focused on quantifying deformation from TLS point clouds. Those studies proved that a laser scanner can be an alternative instrument for acquiring spatial information for deformation monitoring. However, challenges remain in establishing an appropriate procedure for collecting high-quality point clouds and in developing methods to interpret them so as to obtain reliable and accurate deformations when uncertainty, including data quality and reference information, is present. Therefore, this study demonstrates the impact on deformation estimation of data quality, in terms of point cloud registration error, of the methods selected for extracting point clouds of surfaces, of the identification of reference information, and of the presence of outliers, noisy data and/or mixed pixels.
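The RANSAC extraction step the findings compare can be sketched compactly. This is a toy plane fit on synthetic points, not the study's implementation; the distance threshold, iteration count and data are hypothetical.

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.02, n_iter=200, seed=0):
    """Fit a plane n.p + d = 0; return ((unit normal, d), inlier mask)."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        # Hypothesise a plane from three random points.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        nvec = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(nvec)
        if norm < 1e-12:                          # degenerate (collinear) sample
            continue
        nvec /= norm
        dist = np.abs(points @ nvec - nvec @ p1)  # point-plane distances
        inliers = dist < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (nvec, -nvec @ p1)
    return best_model, best_inliers

# Noisy points on the plane z = 0 plus gross outliers (mimicking mixed pixels).
rng = np.random.default_rng(2)
plane_pts = rng.uniform(-1, 1, (300, 3))
plane_pts[:, 2] = rng.normal(0, 0.005, 300)       # mm-scale noise on z
cloud = np.vstack([plane_pts, rng.uniform(-1, 1, (50, 3))])
(normal, d), inliers = ransac_plane(cloud)
```

The sketch also shows the limitation the paper identifies: outliers that happen to lie within `dist_thresh` of the plane are accepted as inliers, which is why RANSAC alone cannot eliminate mixed pixels near the surface.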


2016 ◽  
Vol 31 (4) ◽  
pp. 451-455 ◽  
Author(s):  
Valeria Scotti ◽  
Annalisa De Silvestri ◽  
Luigia Scudeller ◽  
Paola Abele ◽  
Funda Topuz ◽  
...  

Introduction: Novel bibliometric indexes (commonly known as altmetrics) are gaining interest within the scientific community and might represent an important alternative measure of research quality and output. Aims: We evaluate how these new metrics correlate with established bibliometric indexes such as the impact factor (IF), currently used as a measure of scientific production as well as a criterion for scientific research funding, and how they might be helpful in assessing the impact of research. Methods: We calculated altmetrics scores for all the articles published at our institution during a single year and examined the correlation between altmetrics scores and IFs as a measure of research quality and impact in all departments. Results: For all articles from the various departments published in a single year, the altmetrics score and the sum of all IFs showed a strong and significant correlation (Spearman's rho 0.88). The correlation was also significant when the major components of altmetrics, including Facebook, Twitter and Mendeley, were analyzed separately. The implementation of altmetrics was found to be easy and effective at both the researcher and librarian levels. Conclusions: The novel bibliometric index altmetrics is consistent and reliable and can complement, or be considered a valid alternative to, standard bibliometric indexes for benchmarking the output and quality of research for academic and funding purposes.
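The rank correlation reported in the results can be reproduced on made-up numbers with `scipy.stats.spearmanr`, which returns both the coefficient and its p-value. The figures below are hypothetical department-level values, not the article's data.

```python
from scipy.stats import spearmanr

# Hypothetical figures for eight departments: annual sum of impact factors
# vs. total altmetrics score.
if_sums   = [12.4, 30.1, 8.7, 55.2, 22.9, 41.0, 15.3, 60.8]
altmetric = [35,   75,   20,  160,  85,   120,  40,   190]

rho, p = spearmanr(if_sums, altmetric)
print(f"Spearman's rho = {rho:.3f} (p = {p:.2g})")
```

Spearman's rho is computed on ranks, so it captures the monotone association the study is after without assuming the two scales are linearly related.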


2017 ◽  
Vol 2017 (10) ◽  
pp. 12-19 ◽  
Author(s):  
Ewelina Kwiatkowska

Research on the quality of railroad crossings with under sleeper pads. The quality of the railway track is influenced by a number of factors, such as durability, reliability, swelling and cost. The article presents an assessment of the impact of under sleeper pads (USP) on the quality of the rail track. The author presents laboratory research, computer simulations and a dynamic study of turnouts with USP, and assesses the impact of the USP on the quality of turnouts. The results show an increase in the quality of turnouts with USP.

