Next Generation of the Java Image Science Toolkit (JIST): Visualization and Validation

2012 ◽  
Author(s):  
Bo Li ◽  
Frederick Bryan ◽  
Bennett Landman

Modern medical imaging analyses often involve the concatenation of multiple steps, and neuroimaging analysis is no exception. The Java Image Science Toolkit (JIST) has provided a framework for both end users and engineers to synthesize processing modules into tailored, automatic multi-step processing pipelines (“layouts”) and to rapidly prototype new modules. Since its release, JIST has facilitated substantial neuroimaging research and fulfilled much of its intended goal. However, key weaknesses must be addressed for JIST to more fully realize its potential and become accessible to an even broader community. Herein, we identify three core challenges facing traditional JIST (JIST-I) and present the solutions in the next-generation JIST (JIST-II). First, in response to community demand, we have introduced seamless data visualization; users can now click ‘show this data’ through the program interfaces, avoiding the need to locate files on disk. Second, JIST is by design an open-source community effort: any developer may add modules to the distribution and extend existing functionality for release. However, the large number of developers and different use cases introduced instability into the overall JIST-I framework, causing users to freeze on different, incompatible versions of JIST-I, and the JIST community began to fracture. JIST-II addresses this compilation instability by performing nightly continuous integration checks to ensure that community-implemented changes do not negatively impact overall JIST-II functionality. Third, JIST-II allows developers and users to verify that functionality is preserved by running nightly functionality checks within the same continuous integration framework. With JIST-II, users can submit layout test cases and quality control criteria through a new GUI.
These test cases capture all runtime parameters and help to ensure that the module produces results within tolerance, despite changes in the underlying architecture. These three “next generation” improvements increase the fidelity of the JIST framework and enhance utility by allowing researchers to more seamlessly and robustly build, manage, and understand medical image analysis processing pipelines.
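The nightly functionality check described above can be sketched in miniature: a stored test case captures a module's runtime parameters and a baseline result, and the check verifies that the current output stays within a quality-control tolerance. This is an illustrative Python sketch of the concept only; the function names, parameters, and tolerance value are hypothetical and are not JIST's actual API.

```python
# Hypothetical sketch of a nightly regression check of the kind JIST-II's
# continuous integration performs: compare a module's current output
# against a stored baseline, within a relative tolerance.

def within_tolerance(baseline, current, rel_tol=0.01):
    """Return True if every value matches its baseline within rel_tol."""
    return all(
        abs(c - b) <= rel_tol * max(abs(b), 1e-12)
        for b, c in zip(baseline, current)
    )

# A submitted layout test case: runtime parameters plus expected output.
test_case = {
    "params": {"smoothing_fwhm": 2.0, "iterations": 5},
    "baseline_output": [0.912, 0.874, 0.991],
}

def run_module(params):
    # Stand-in for executing the real processing module with the
    # captured parameters.
    return [0.913, 0.873, 0.990]

result = run_module(test_case["params"])
passed = within_tolerance(test_case["baseline_output"], result)
```

Running such checks nightly means an architectural change that silently shifts a module's output beyond tolerance is caught before users depend on it.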

2004 ◽  
Vol 4 (2) ◽  
pp. 23-30
Author(s):  
K. Connell ◽  
M. Pope ◽  
K. Miller ◽  
J. Scheller ◽  
J. Pulz

Designing and conducting standardized microbiological method interlaboratory validation studies is challenging because most methods are manual, rather than instrument-based, and results from the methods are typically subjective. Determinations of method recovery, in particular, are problematic due to difficulties in assessing the true spike amount. The standardization and validation process used for the seven most recent USEPA 1600-series pathogen monitoring methods has begun to address these challenges. A staged development process was used to ensure that methods were adequately tested and standardized before resources were dedicated to interlaboratory validation. The interlaboratory validation studies for USEPA Method 1622 for Cryptosporidium, USEPA Method 1601 for coliphage, and USEPA Method 1605 for Aeromonas assessed method performance using different approaches, due to the differences in the nature of the target analytes and the data quality needs of each study. However, the use of enumerated spikes in all of the studies allowed method recovery and precision to be assessed, and also provided the data needed to establish quantitative quality control criteria for the methods.
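The advantage of enumerated spikes can be made concrete: because the true spike amount is known, percent recovery and precision (relative standard deviation) follow directly from the laboratory counts. The sketch below uses made-up counts, not data from these studies.

```python
# Illustrative sketch: with an enumerated spike, the true spike amount is
# known, so percent recovery and precision (%RSD) can be computed
# directly. The counts below are hypothetical.
from statistics import mean, stdev

def percent_recovery(observed, spiked):
    return 100.0 * observed / spiked

spiked_count = 100                 # enumerated organisms spiked per sample
lab_counts = [62, 58, 71, 55, 66]  # hypothetical counts from five labs

recoveries = [percent_recovery(c, spiked_count) for c in lab_counts]
mean_recovery = mean(recoveries)
rsd = 100.0 * stdev(recoveries) / mean_recovery  # precision as %RSD
```

Quantitative QC criteria for a method can then be set as acceptance ranges on exactly these two statistics.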


Author(s):  
Damien Jacot ◽  
Trestan Pillonel ◽  
Gilbert Greub ◽  
Claire Bertelli

Although many laboratories worldwide have developed their sequencing capacities in response to the need for SARS-CoV-2 genome-based surveillance of variants, only a few have reported quality criteria to ensure sequence quality before lineage assignment and submission to public databases. Hence, we aimed here to provide simple quality control criteria for SARS-CoV-2 sequencing to prevent erroneous interpretation of low-quality or contaminated data. We retrospectively investigated 647 SARS-CoV-2 genomes obtained over ten tiled-amplicon sequencing runs. We extracted 26 potentially relevant metrics covering the entire workflow from sample selection to bioinformatics analysis. Based on data distribution, critical values were established for eleven selected metrics to prompt further quality investigations for problematic samples, in particular those with a low viral RNA quantity. Low-frequency variants (<70% of supporting reads) can result from PCR amplification errors, sample cross-contamination, or the presence of distinct SARS-CoV-2 genomes in the sequenced sample. The number and the prevalence of low-frequency variants can be used as a robust quality criterion to identify possible sequencing errors or contamination. Overall, we propose eleven metrics with fixed cutoff values as a simple tool to evaluate the quality of SARS-CoV-2 genomes, among which are cycle thresholds, mean depth, the proportion of the genome covered at least 10x, and the number of low-frequency variants combined with mutation prevalence data.
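A fixed-cutoff QC scheme of this kind reduces to a lookup table of thresholds applied per sample. The sketch below shows the pattern with four of the metric families the abstract names; the exact threshold values are illustrative assumptions, not the paper's published cutoffs.

```python
# Hedged sketch of applying fixed QC cutoffs per sample. Metric names
# follow the abstract; the numeric thresholds are illustrative only.

QC_CUTOFFS = {
    "ct_value": lambda v: v <= 30,            # cycle threshold not too high
    "mean_depth": lambda v: v >= 200,         # sufficient mean coverage
    "pct_covered_10x": lambda v: v >= 95.0,   # genome fraction covered >=10x
    "n_low_freq_variants": lambda v: v <= 5,  # few variants with <70% support
}

def qc_flags(sample_metrics):
    """Return the metrics that fail their cutoff for one sample."""
    return [name for name, ok in QC_CUTOFFS.items()
            if not ok(sample_metrics[name])]

sample = {"ct_value": 33, "mean_depth": 450,
          "pct_covered_10x": 97.2, "n_low_freq_variants": 12}
flags = qc_flags(sample)  # non-empty flags prompt further investigation
```

A sample with any flagged metric would be held back from lineage assignment and database submission until investigated.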


1996 ◽  
Vol 42 (11) ◽  
pp. 1832-1837 ◽  
Author(s):  
B K De ◽  
B A Karr ◽  
S Ghosn ◽  
B E Copeland

Abstract During an experimental period of 12 months in 1992-1993, while we were comparing the effectiveness of monthly vs quarterly use of the National Institute for Standards and Technology Standard Reference Material (NIST SRM) 909a as an accuracy material for the projected 30-year Fernald Medical Monitoring Program, we encountered three random defective vials with a glucose recovery of less than 30% of the NIST-assigned value. Analysis with five different multichannel instruments confirmed the original finding. Concomitant glucose recovery from adjacent vials was 97%-104%, as determined by using the same instruments, reagents, calibrators, and quality-control criteria on the same days. Recoveries of uric acid and cholesterol were also low (53-75% and 75-80%, respectively) in the three defective vials. Other analytes were unaffected. Studies to identify the cause of the defective vials were carried out with microbiological, electron microscopic, and biochemical techniques. When used for accuracy studies, each vial of NIST SRM 909a should have a concomitant check for glucose recovery to detect whether the vial is defective.
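The recommended per-vial check amounts to comparing measured glucose against the NIST-assigned value before the vial is used. The sketch below is illustrative: the assigned value and the 90% acceptance floor are assumptions, not figures from the study.

```python
# Illustrative per-vial acceptance check: verify glucose recovery against
# the NIST-assigned value before using an SRM 909a vial for accuracy
# studies. The assigned value and 90% floor below are assumptions.

NIST_ASSIGNED_GLUCOSE = 6.41  # hypothetical assigned value, mmol/L

def vial_acceptable(measured_glucose, assigned=NIST_ASSIGNED_GLUCOSE,
                    min_recovery_pct=90.0):
    recovery = 100.0 * measured_glucose / assigned
    return recovery >= min_recovery_pct

normal = vial_acceptable(6.30)     # ~98% recovery: accept
defective = vial_acceptable(1.70)  # ~27% recovery: reject
```

A defective vial such as those reported (glucose recovery below 30% of the assigned value) fails this check immediately, before it can bias an accuracy study.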


2012 ◽  
Vol 19 (4) ◽  
pp. 273-277 ◽  
Author(s):  
Youn Ho Shin ◽  
Sun Jung Jang ◽  
Jung Won Yoon ◽  
Hye Mi Jee ◽  
Sun Hee Choi ◽  
...  

BACKGROUND: Bronchodilator responses (BDR) are routinely used in the diagnosis and management of asthma; however, their acceptability and repeatability have not been evaluated using quality control criteria for preschool children.

OBJECTIVES: To compare conventional spirometry with an impulse oscillometry system (IOS) in healthy and asthmatic preschool children.

METHODS: Data from 30 asthmatic children and 29 controls (two to six years of age) who underwent IOS and spirometry before and after salbutamol administration were analyzed.

RESULTS: Stable asthmatic subjects differed significantly from controls in their spirometry-assessed BDR (forced expiratory volume in 1 s [FEV1], forced vital capacity and forced expiratory flow at 25% to 75% of forced vital capacity) as well as their IOS-assessed BDR (respiratory resistance at 5 Hz [Rrs5], respiratory reactance at 5 Hz and area under the reactance curve). The areas under the ROC curves for ΔFEV1% initial and ΔRrs5% initial were 0.82 (95% CI 0.71 to 0.93) and 0.75 (95% CI 0.62 to 0.87), respectively. The sensitivity and specificity for ΔFEV1≥9% were 0.53 and 0.93, respectively. Importantly, sensitivity increased to 0.63 when either ΔFEV1≥9% or ΔRrs5≥29% was considered as an additional criterion for the diagnosis of asthma.

CONCLUSION: The accuracy of asthma diagnosis in preschool children may be increased by combining spirometry with IOS when measuring BDR.
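The combined criterion in the RESULTS section — a positive BDR when either ΔFEV1≥9% or ΔRrs5≥29% — is simple to express directly. The sketch below uses synthetic (ΔFEV1%, ΔRrs5%) pairs, not the study's data, to show how sensitivity over the asthmatic group would be computed.

```python
# Illustrative sketch (synthetic data): a positive bronchodilator
# response is declared when either criterion is met, and sensitivity is
# the fraction of asthmatic subjects testing positive.

def positive_bdr(delta_fev1_pct, delta_rrs5_pct):
    return delta_fev1_pct >= 9.0 or delta_rrs5_pct >= 29.0

# Hypothetical (dFEV1%, dRrs5%) pairs for asthmatic children.
asthmatics = [(12.0, 35.0), (7.5, 31.0), (10.2, 20.0), (5.0, 15.0)]

positives = sum(positive_bdr(f, r) for f, r in asthmatics)
sensitivity = positives / len(asthmatics)
```

The second synthetic subject illustrates the study's point: a child missed by spirometry alone (ΔFEV1 = 7.5% < 9%) is still detected via IOS (ΔRrs5 = 31% ≥ 29%), which is how combining the two tests raises sensitivity.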


Kybernetes ◽  
2017 ◽  
Vol 46 (5) ◽  
pp. 876-892 ◽  
Author(s):  
Parisa Fouladi ◽  
Nima Jafari Navimipour

Purpose: This paper aims to propose a new method for evaluating the quality and prioritization of human resources (HRs) based on trust, reputation, agility, expertise and cost criteria in the expert cloud. To evaluate some quality control (QC) factors, a model based on SERVQUAL is used.

Design/methodology/approach: The aim of this paper is to offer a fast and simple method for customers to select HRs. To achieve this goal, a ranking diagram of the different HRs based on the different QC criteria is provided, with which the customer can rapidly decide on the selection of the required HRs. Using the proposed method, scores for the various criteria are evaluated; these criteria feed the ranking of each HR, which is obtained from evaluations conducted by previous customers and their colleagues. First, customers are asked to select the criteria they need, and then, by constructing a hierarchical structure, the ranking diagram of the different HRs is obtained. This quality-based ranking system satisfies customer needs according to the properties of the HRs. In addition, an analytical hierarchy process (AHP)-based ranking mechanism is proposed to solve the problem of assigning weights to features, considering the interdependence between them, in order to rank the HRs in the expert cloud.

Findings: The obtained results showed the applicability of the radar graph in a case study, and the numerical results showed that the hierarchical structure increases the quality and speed of HR ranking compared with previous works.

Originality/value: The suggested ranking method allows optimal selection according to the specific needs of any given customer in the expert cloud.
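The AHP-based weighting step described above can be sketched as follows. This is a minimal, generic AHP approximation (column normalization plus row averaging), not the paper's exact mechanism; the pairwise comparison values and candidate scores are hypothetical.

```python
# Minimal sketch of AHP-style criterion weighting and HR ranking over the
# five criteria named in the abstract (trust, reputation, agility,
# expertise, cost). All numeric values here are hypothetical.

# Pairwise comparison matrix: entry [i][j] says how strongly criterion i
# outweighs criterion j (reciprocal below the diagonal).
comparisons = [
    [1.0,  2.0, 3.0, 1.0,  4.0],
    [0.5,  1.0, 2.0, 0.5,  3.0],
    [1/3,  0.5, 1.0, 1/3,  2.0],
    [1.0,  2.0, 3.0, 1.0,  4.0],
    [0.25, 1/3, 0.5, 0.25, 1.0],
]

# Approximate AHP weights: normalize each column, then average each row.
n = len(comparisons)
col_sums = [sum(row[j] for row in comparisons) for j in range(n)]
weights = [sum(comparisons[i][j] / col_sums[j] for j in range(n)) / n
           for i in range(n)]

# Rank candidate HRs by their weighted criterion scores (0-10 scale,
# taken here to stand in for prior customer evaluations).
candidates = {"HR-A": [8, 7, 6, 9, 5], "HR-B": [6, 9, 8, 7, 6]}
ranking = sorted(candidates,
                 key=lambda h: sum(w * s
                                   for w, s in zip(weights, candidates[h])),
                 reverse=True)
```

The derived weights sum to 1 by construction, so the weighted score of each HR is directly comparable, which is what makes a single ranking diagram over heterogeneous criteria possible.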

