reliability of results
Recently Published Documents


TOTAL DOCUMENTS: 118 (FIVE YEARS: 42)

H-INDEX: 15 (FIVE YEARS: 2)

2021 ◽  
Vol 12 (4) ◽  
pp. 292-300
Author(s):  
S. M. Dmitriev ◽  
A. E. Khrobostov ◽  
D. N. Solncev ◽  
A. A. Barinov ◽  
A. A. Chesnokov ◽  
...  

The correlation method for measuring the coolant flow rate is used in the operation of nuclear power plants and is widespread in research practice, including studies of turbulent flow hydrodynamics. However, the question of its applicability and capabilities in studies using the matrix conductometry method remains open. Earlier, an algorithm for determining the correlation flow rate using a conductometric measuring system was presented, the error of the results obtained was estimated, and the influence of noise and data-collection time on the reliability of results was investigated. Those works were carried out using two independent mesh sensors, and the issue of resolving local velocity components was not covered. The purpose of this work was to test the correlation method for measuring velocity with temporal and spatial sampling using two-layer mesh conductometric sensors. As a result, velocity cartograms were obtained over the cross-section of the experimental model under quasi-stationary mixing, and the average flow rate agreed well with the values obtained from the standard flow meters of the test stand. Measurements were also carried out in a non-stationary experimental setting, yielding realizations of the flow rate and of the velocity components at the measuring points. Analysis of the obtained values allows conclusions to be drawn about the optimal data-collection time for correlation measurements, as well as about the reliability of results.
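For context, the essence of the correlation method can be shown in a few lines: the transit time of the coolant between two sensor planes is taken as the lag of the cross-correlation peak, and velocity follows as plane spacing divided by delay. This is a minimal sketch under assumed signal names and a hypothetical plane spacing, not the authors' measuring system.

```python
import numpy as np

def correlation_velocity(upstream, downstream, dt, spacing):
    """Estimate flow velocity from the transit time between two sensor planes.

    upstream, downstream: 1-D conductivity signals from the two sensor layers
    dt: sampling interval, s
    spacing: axial distance between the sensor planes, m
    """
    a = upstream - upstream.mean()
    b = downstream - downstream.mean()
    corr = np.correlate(b, a, mode="full")       # cross-correlation over all lags
    lag = int(np.argmax(corr)) - (len(a) - 1)    # lag (in samples) of the peak
    if lag <= 0:
        raise ValueError("no positive transit time found")
    return spacing / (lag * dt)                  # velocity = distance / delay

# Synthetic check: a signal delayed by 20 samples at dt = 1 ms over 0.05 m
# should give 0.05 / 0.020 = 2.5 m/s.
rng = np.random.default_rng(1)
sig = rng.standard_normal(4096)
print(correlation_velocity(sig[20:], sig[:-20], dt=1e-3, spacing=0.05))
```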


2021 ◽  
Author(s):  
Andrew A. Chen ◽  
Dhivya Srinivasan ◽  
Raymond Pomponio ◽  
Yong Fan ◽  
Ilya M. Nasrallah ◽  
...  

Abstract Community detection on graphs constructed from functional magnetic resonance imaging (fMRI) data has led to important insights into brain functional organization. Large studies of brain community structure often include images acquired on multiple scanners across different studies. Differences in scanner can introduce variability into downstream results; these differences are often referred to as scanner effects. Such effects have previously been shown to significantly impact common network metrics. In this study, we identify scanner effects in data-driven community detection results and related network metrics. We assess a commonly employed harmonization method and propose new methodology for harmonizing functional connectivity that leverages existing knowledge about network structure as well as patterns of covariance in the data. Finally, we demonstrate that our new methods reduce scanner effects in community structure and network metrics. Our results highlight scanner effects in studies of brain functional organization and provide additional tools to address these unwanted effects. These findings and methods can be incorporated into future functional connectivity studies, potentially preventing spurious findings and improving the reliability of results.
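For orientation, the commonly employed baseline that the authors assess is ComBat-style location/scale harmonization. A simplified sketch of that idea applied to connectivity edges follows; it omits ComBat's empirical-Bayes pooling and covariate preservation, is not the paper's new network-aware method, and all array names are hypothetical.

```python
import numpy as np

def harmonize_edges(edges, scanner_ids):
    """Location/scale harmonization of connectivity edges across scanners.

    edges: (n_subjects, n_edges) array of Fisher z-transformed correlations
    scanner_ids: length-n_subjects array of scanner labels

    Each scanner's edges are shifted and scaled toward the pooled mean and
    standard deviation. Assumes several subjects per scanner.
    """
    out = edges.astype(float)
    grand_mean = edges.mean(axis=0)
    grand_sd = edges.std(axis=0, ddof=1)
    for s in np.unique(scanner_ids):
        mask = scanner_ids == s
        m = edges[mask].mean(axis=0)
        sd = edges[mask].std(axis=0, ddof=1)
        sd[sd == 0] = 1.0                        # guard against constant edges
        out[mask] = (edges[mask] - m) / sd * grand_sd + grand_mean
    return out
```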


Materials ◽  
2021 ◽  
Vol 14 (19) ◽  
pp. 5870
Author(s):  
Serge-Bertrand Adiko ◽  
Alexey A. Gureev ◽  
Olga N. Voytenko ◽  
Alexey V. Korotkov

This study aimed to evaluate the possibility of using Fourier Transform Infrared (FTIR) spectroscopy to track binders produced by three different plants: plants A, B, and C. The work included the quality assessment of 80 bituminous materials graded as BND 70/100 and 100/130 according to GOST 33133 (Russian interstate standard) and chemical analyses using FTIR spectroscopy. FTIR analyses were conducted before and after short-term ageing in a Rolling Thin Film Oven Test (RTFOT), doubling the number of binder samples for a final total of 160 infrared (IR) spectra. All infrared spectra were normalised to ensure the reliability of results, and the standard deviation and coefficient of variation were reported. The principal purpose of the present work was to track the origin and the ageing extent of the bituminous binders under study.
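As a sketch of the normalisation step, here is one common choice (min-max scaling; the abstract does not state which normalisation was applied) together with the coefficient of variation used to judge the spread between spectra. Function names are illustrative.

```python
import numpy as np

def normalize_spectrum(absorbance):
    """Min-max normalise one IR spectrum to [0, 1].

    One common way to place FTIR spectra on a common scale; the paper may
    have used a different scheme (e.g. area or reference-band normalisation).
    """
    a = np.asarray(absorbance, dtype=float)
    return (a - a.min()) / (a.max() - a.min())

def coefficient_of_variation(values):
    """CV (%) = 100 * sd / mean, for comparing spread between replicate spectra."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()
```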


2021 ◽  
pp. 053901842110258
Author(s):  
Anne Marcovich ◽  
Terry Shinn

This article points out some issues raised by the encounter between astrophysics (AP) and a newly emergent mathematical tool and discipline, namely artificial intelligence (AI). We suggest that this encounter has interesting consequences for science evaluation. Our discussion favors an intra-science perspective, on both the institutional and the cognitive side. The encounter between machine learning (ML) and astrophysics points to three consequences. (1) As a transverse tool, the same ML algorithm can be used for a diversity of very different disciplines and questions; this ambitious analytic architecture frequently identifies similarities among apparently differentiated fields. (2) The perimeter of the disciplines involved in a research project can lead to many novel forms of collaboration between scientists and to new ways of evaluating their work. (3) The impossibility for the human mind to understand the processes involved in ML work raises the question of the reliability of results.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Jeong-Hwa Yoon ◽  
Sofia Dias ◽  
Seokyung Hahn

Abstract. Background: In a star-shaped network, pairwise comparisons link treatments with a reference treatment (often placebo or standard care), but not with each other. Thus, comparisons between non-reference treatments rely on indirect evidence, and are based on the unidentifiable consistency assumption, limiting the reliability of the results. We suggest a method of performing a sensitivity analysis through data imputation to assess the robustness of results with an unknown degree of inconsistency. Methods: The method involves imputation of data for randomized controlled trials comparing non-reference treatments, to produce a complete network. The imputed data simulate a situation that would allow mixed treatment comparison, with a statistically acceptable extent of inconsistency. By comparing the agreement between the results obtained from the original star-shaped network meta-analysis and the results after incorporating the imputed data, the robustness of the results of the original star-shaped network meta-analysis can be quantified and assessed. To illustrate this method, we applied it to two real datasets and some simulated datasets. Results: Applying the method to the star-shaped network formed by discarding all comparisons between non-reference treatments from a real complete network, 33% of the results from the analysis incorporating imputed data under acceptable inconsistency indicated that the treatment ranking would be different from the ranking obtained from the star-shaped network. Through a simulation study, we demonstrated the sensitivity of the results after data imputation for a star-shaped network with different levels of within- and between-study variability. An extended usability of the method was also demonstrated by another example where some head-to-head comparisons were incorporated. Conclusions: Our method will serve as a practical technique to assess the reliability of results from a star-shaped network meta-analysis under the unverifiable consistency assumption.
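The consistency relation underlying the imputation can be shown numerically: in a star network with reference A, the only estimate of a B-versus-C effect is indirect, and imputed head-to-head trials must agree with it up to a chosen, statistically acceptable inconsistency. A minimal sketch with hypothetical effect sizes follows; the paper's actual procedure re-runs the full network meta-analysis with the imputed trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Direct estimates against the reference A (hypothetical numbers).
d_AB, se_AB = 0.30, 0.10   # treatment B vs A
d_AC, se_AC = 0.55, 0.12   # treatment C vs A

# Consistency implies the indirect B-vs-C effect and its standard error.
d_BC = d_AC - d_AB
se_BC = np.sqrt(se_AB**2 + se_AC**2)

# Impute hypothetical B-vs-C trial effects consistent with the network,
# perturbed to represent a chosen, statistically acceptable inconsistency.
inconsistency_sd = 0.05
imputed = rng.normal(d_BC, np.sqrt(se_BC**2 + inconsistency_sd**2), size=1000)
print(f"indirect d_BC = {d_BC:.2f} (SE {se_BC:.2f}); "
      f"imputed effects span {imputed.min():.2f}..{imputed.max():.2f}")
```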


2021 ◽  
Vol 17 (2) ◽  
pp. 249-255
Author(s):  
S. Yu. Martsevich ◽  
N. P. Kutishenko ◽  
Yu. V. Lukina ◽  
M. M. Lukyanov ◽  
O. M. Drapkina

The article discusses the main methods of generating evidence in modern medicine, with special attention to randomized controlled trials and observational studies. The advantages of randomized controlled trials over observational studies are considered, and the informative value of the two designs in assessing the effect of therapeutic interventions is compared. Attention is drawn to situations in which conducting randomized controlled trials is not possible and observational studies become the main source of information. It is emphasized that, in order to verify the results of randomized controlled trials in real clinical practice, it is necessary to conduct observational studies. The basic principles of conducting observational studies are also considered.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Denise Meinberger ◽  
Manuel Koch ◽  
Annika Roth ◽  
Gabriele Hermes ◽  
Jannik Stemler ◽  
...  

Abstract Immunoassays are a standard diagnostic tool that assesses immunity in severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) infection. However, immunoassays do not provide information about contaminating antigens or cross-reactions and might exhibit inaccurately high sensitivity and low specificity. We aimed to gain insight into the serological immune response of SARS-CoV-2 patients by immunoblot analysis. We analyzed serum immunoglobulins IgM, -A, and -G directed against SARS-CoV-2 proteins by immunoblot analysis from 12 infected patients. We determined IgG isotype antibodies by commercially available ELISA and assessed the clinical parameters of inflammation status and kidney and liver injury. Unexpectedly, we found no correlation between the presence of antibodies and the future course of the disease. However, attention should be paid to the parameters CRP, IL-6, and LDH. We found evidence of antibody cross-reactivity, which questions the reliability of results for serum samples that tested negative for anti-SARS-CoV-2 antibodies when assessed by immunoassays. Nevertheless, for the detection of IgG anti-SARS-CoV-2 antibodies, our data suggest that the use of the spike glycoprotein in immunoassays should be sufficient to identify positive patients. Using a combination of the spike glycoprotein and the open reading frame 8 protein could prove to be the best way of detecting anti-SARS-CoV-2 IgM antibodies.


Author(s):  
Daniel Kvak ◽  
Karolína Kvaková

One of the critical tools for early detection and subsequent evaluation of the incidence of lung diseases is chest radiography. At a time when the speed and reliability of results are important, especially for COVID-19-positive patients, the development of applications that facilitate the work of untrained staff involved in the evaluation is also crucial. Our model takes the form of a simple and intuitive application into which X-rays are simply uploaded, tens or hundreds at once. Within a few seconds, the physician receives the patient's diagnosis, including the percentage confidence of the estimate. While the original idea was a mere binary classifier that could tell whether or not a patient was suffering from pneumonia, in this paper we present a model that distinguishes between a bacterial disease, a viral infection, and a finding caused by COVID-19. The aim of this research is to demonstrate whether pneumonia can be detected, or even spatially localized, using uniform, supervised classification.
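A minimal sketch of the kind of supervised multi-class classifier described, assuming grayscale 224x224 radiographs and four classes (normal, bacterial pneumonia, viral pneumonia, COVID-19); the architecture and hyperparameters are illustrative, not the authors' model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4            # normal, bacterial, viral, COVID-19 (assumed labels)
IMG_SIZE = (224, 224)

def build_model():
    """A small CNN for multi-class chest X-ray classification."""
    return models.Sequential([
        layers.Input(shape=IMG_SIZE + (1,)),     # grayscale radiographs
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),  # class probabilities
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The softmax output is what supplies the "percentage confidence" reported alongside each diagnosis.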


Author(s):  
Riduwan Napianto ◽  
Yuri Rahmanto ◽  
Rohmat Indra Borman ◽  
Ova Lestari ◽  
Nurhasan Nugroho

The word uncertainty in an expert system relates to working with wrong data or wrong information, handling identical situations, the reliability of results, etc. Sources of uncertainty can include unreliable information, usually caused by unclear domain concepts or inaccurate data. One method for handling uncertainty is Dempster-Shafer theory. The Dempster-Shafer approach calculates the probability of evidence based on belief functions. In general, a Dempster-Shafer result is written as an interval [Belief, Plausibility]. Belief (Bel) is a measure of the strength of evidence in support of a set of propositions. In this study, an expert system was developed to diagnose oral cancer based on the symptoms reported by the user. The results showed that Dempster-Shafer theory was able to handle the uncertainty in the construction of the inference engine: testing showed an accuracy of 86.6%.
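Dempster's rule of combination, on which such an inference engine rests, can be sketched compactly. The symptom mass assignments below are hypothetical, not the study's actual knowledge base; the printed interval is the [Belief, Plausibility] interval mentioned above.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to mass, each summing to 1.
    Returns the combined mass function; raises on total conflict.
    """
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y            # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict between sources")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

def belief(m, hypothesis):
    """Bel(H): total mass of all subsets of H."""
    return sum(v for h, v in m.items() if h <= hypothesis)

def plausibility(m, hypothesis):
    """Pl(H): total mass of all sets intersecting H."""
    return sum(v for h, v in m.items() if h & hypothesis)

# Hypothetical oral-cancer symptoms as two evidence sources.
m_ulcer = {frozenset({"cancer"}): 0.6, frozenset({"cancer", "benign"}): 0.4}
m_lump  = {frozenset({"cancer"}): 0.5, frozenset({"cancer", "benign"}): 0.5}
m = combine(m_ulcer, m_lump)
H = frozenset({"cancer"})
print(f"[Bel, Pl] for cancer: [{belief(m, H):.2f}, {plausibility(m, H):.2f}]")
```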


2021 ◽  
Vol 5 (2) ◽  
pp. 006-011
Author(s):  
Miora Koloina Ranaivosoa ◽  
Valdo Rahajanirina ◽  
Zafindrasoa Domoina Rakotovao Ravahatra ◽  
Jaquinot Randriamora ◽  
Olivat Rakoto Alsone, Andry Rasamindrakotroka

Management of pre-analytical non-conformities within a laboratory is a critical step in ensuring the reliability of results. The objectives of this study were to evaluate non-conformities in the pre-analytical phase at the Paraclinical Training and Biochemistry Research Unit of the Joseph Ravoahangy Andrianavalona University Hospital Center and to describe in detail the current state and conduct of this stage. This is a retrospective descriptive study covering a period of 5 months, from November 01, 2018 to March 31, 2019, within the Paraclinical Training and Biochemistry Research Unit of the Joseph Ravoahangy Andrianavalona University Hospital Center. All patient files recorded during this study period were examined; only inpatient records were included. A pre-analytical non-conformity rate of 5.71% was recorded. The most frequent non-conformities (recorded 248 times, i.e., 56.88% of all non-conformities) were related to the swab or its container, followed by non-conformities related to the prescription sheet (recorded 96 times, i.e., 22.02%). Pre-analytical non-conformities were identified most frequently in the surgical intensive care department (25.24%), followed by the medical service (17.92%). Most of the non-conformities observed were due to preventable human error. The laboratory must therefore know how to control non-conformities in order to prevent them and ensure the quality of the analyses.

