Quantitative lung morphology: semi-automated measurement of mean linear intercept

2019 ◽  
Vol 19 (1) ◽  
Author(s):  
George Crowley ◽  
Sophia Kwon ◽  
Erin J. Caraher ◽  
Syed Hissam Haider ◽  
Rachel Lam ◽  
...  

Abstract

Background: Quantifying morphologic changes is critical to our understanding of the pathophysiology of the lung. Mean linear intercept (MLI) measurements are important in the assessment of clinically relevant pathology, such as emphysema. However, qualitative measures are prone to error and bias, while quantitative methods such as MLI are time-consuming when performed manually. Furthermore, a fully automated, reliable method of assessment is nontrivial and resource-intensive.

Methods: We propose a semi-automated method to quantify MLI that does not require specialized computer knowledge and uses a free, open-source image processor (Fiji). We tested the method on a computer-generated, idealized dataset, derived an MLI usage guide, and successfully applied the method to a murine model of particulate matter (PM) exposure. Fields of randomly placed, uniform-radius circles were analyzed. The optimal number of chords to assess for a given MLI was determined via receiver operating characteristic (ROC)-area under the curve (AUC) analysis. The intraclass correlation coefficient (ICC) was used to measure reliability.

Results: We demonstrate high accuracy (ROC AUC > 0.8 for actual MLI > 63.83 pixels) and excellent reliability (ICC = 0.9998, p < 0.0001). We provide a guide to optimizing the number of chords to sample based on MLI. Processing time was 0.03 s/image. We showed elevated MLI in PM-exposed mice compared with PBS-exposed controls. We have also provided the macros that were used and have made an ImageJ plugin available free for academic research use at https://med.nyu.edu/nolanlab.

Conclusions: Our semi-automated method is reliable, as fast as fully automated methods, and uses free, open-source software. Additionally, we quantified the optimal number of chords that should be measured per lung field.
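As a rough illustration of the measurement itself (this is a minimal sketch, not the authors' Fiji macro or ImageJ plugin), the code below estimates MLI on a binary airspace mask by sampling horizontal test lines and averaging the lengths of the airspace chords they cross; the synthetic field of uniform-radius circles loosely mirrors the idealized dataset described above, and all names and parameter values are hypothetical.

```python
# Minimal, hypothetical sketch of the mean linear intercept (MLI) idea:
# sample horizontal test lines across a binary lung field and average the
# lengths of the airspace chords they cross. Not the authors' implementation.
import numpy as np

def mean_linear_intercept(airspace: np.ndarray, n_lines: int = 20) -> float:
    """airspace: 2D boolean array, True where the pixel is airspace (not tissue)."""
    rows = np.linspace(0, airspace.shape[0] - 1, n_lines).astype(int)
    chord_lengths = []
    for r in rows:
        line = airspace[r].astype(np.uint8)
        # Find runs of consecutive airspace pixels (chords between tissue walls).
        padded = np.concatenate(([0], line, [0]))
        edges = np.diff(padded)
        starts = np.flatnonzero(edges == 1)
        ends = np.flatnonzero(edges == -1)
        chord_lengths.extend(ends - starts)
    return float(np.mean(chord_lengths)) if chord_lengths else float("nan")

# Example: a synthetic field of airspace "circles" on a tissue background.
rng = np.random.default_rng(0)
field = np.zeros((512, 512), dtype=bool)
yy, xx = np.mgrid[0:512, 0:512]
for cy, cx in rng.integers(0, 512, size=(30, 2)):
    field |= (yy - cy) ** 2 + (xx - cx) ** 2 < 40 ** 2
print(f"MLI ~ {mean_linear_intercept(field):.1f} pixels")
```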

2019 ◽  
Vol 26 (1) ◽  
pp. e100004 ◽  
Author(s):  
Athanasios Kotoulas ◽  
Ioannis Stratis ◽  
Theodoros Goumenidis ◽  
George Lambrou ◽  
Dimitrios - Dionysios Koutsouris

Objective: An intranet portal that combines cost-free, open-source software technology with easy set-up features can be beneficial for daily hospital processes. We describe the short-term adoption rates of a cost-free content management system (CMS) in the intranet of a tertiary Greek hospital.

Design: Dashboard statistics from our CMS platform were used to assess the implementation of the system.

Results: Over a 10-month period of running the software, the results indicate that employees overcame the 'resistance to change' stage. The average growth rate of end users exploiting the portal services was calculated as 2.73 every 3.3 months.

Conclusion: We found our intranet web-based portal to be acceptable and helpful so far. Exploiting an open-source CMS within the hospital intranet can influence both healthcare management and the way employees work.


2013 ◽  
Vol 118 (1) ◽  
pp. 84-93 ◽  
Author(s):  
Luis Jiménez-Roldán ◽  
Jose F. Alén ◽  
Pedro A. Gómez ◽  
Ramiro D. Lobato ◽  
Ana Ramos ◽  
...  

Object: There were two main purposes to this study: first, to assess the feasibility and reliability of 2 quantitative methods of measuring bleeding volume in patients who suffered spontaneous subarachnoid hemorrhage (SAH), and second, to compare these methods with other qualitative and semiquantitative scales in terms of reliability and accuracy in predicting delayed cerebral ischemia (DCI) and outcome.

Methods: A prospective series of 150 patients consecutively admitted to the Hospital 12 de Octubre over a 4-year period were included in the study. All of these patients had a diagnosis of SAH, and diagnostic CT was performed within the first 24 hours after symptom onset. All CT scans were evaluated by 2 independent observers in a blinded fashion, using 2 different quantitative methods to estimate the aneurysmal bleeding volume: region of interest (ROI) volume and the Cavalieri method. The images were also graded using the Fisher scale, modified Fisher scale, Claassen scale, and the semiquantitative Hijdra scale. Weighted κ coefficients were calculated to assess the interobserver reliability of the qualitative scales and the Hijdra scores. The intermethod and interrater reliability of the volumetric measurements was assessed with intraclass correlation coefficients (ICCs) and the methodology proposed by Bland and Altman. Finally, weighted κ coefficients were calculated for the different quartiles of the volumetric measurements to ease comparison with the qualitative scales. Patients surviving more than 48 hours were included in the analysis of factors predisposing to DCI and analyzed using the chi-square or Mann-Whitney U-test. Logistic regression analysis was used to predict DCI and outcome in the different quartiles of bleeding volume and obtain adjusted ORs. The diagnostic accuracy of each scale was obtained by calculating the area under the receiver operating characteristic curve (AUC).

Results: Qualitative scores showed moderate interobserver reproducibility (weighted κ indexes always < 0.65), whereas the semiquantitative and quantitative scores had very strong interobserver reproducibility. Reliability was very high for all quantitative measures, as expressed by the ICCs for intermethod and interobserver agreement. Poor outcome and DCI occurred in 49% and 31% of patients, respectively. Larger bleeding volumes were related to poorer outcome and a higher risk of developing DCI, and the proportion of patients suffering DCI or a poor outcome increased with each quartile; this relationship was maintained after adjusting for the main clinical factors related to outcome. Quantitative analysis of total bleeding volume achieved the highest AUC and had greater discriminative ability than the qualitative scales for predicting the development of DCI and outcome.

Conclusions: The use of quantitative measures may reduce interobserver variability in comparison with categorical scales. These measures are feasible using dedicated software and show better prognostic capability in relation to outcome and DCI than conventional categorical scales.
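For readers unfamiliar with the Cavalieri estimator mentioned above, the sketch below shows only the basic arithmetic (volume ≈ slice thickness × sum of estimated slice areas, with each area obtained by point counting); the grid spacing, slice thickness, and point counts are invented for illustration and are not values or software from this study.

```python
# Hedged sketch of the Cavalieri volume estimate:
# V ~ slice_thickness * sum(points_hit * area_per_point) across consecutive CT slices.
def cavalieri_volume(points_per_slice, grid_spacing_mm, slice_thickness_mm):
    area_per_point = grid_spacing_mm ** 2                       # mm^2 represented by each grid point
    areas = [p * area_per_point for p in points_per_slice]      # estimated hemorrhage area per slice
    return slice_thickness_mm * sum(areas)                      # estimated volume in mm^3

# Hypothetical example: grid points hitting blood on five consecutive CT slices.
print(cavalieri_volume([12, 30, 41, 27, 9], grid_spacing_mm=2.0, slice_thickness_mm=5.0))
```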


Procedia CIRP ◽  
2014 ◽  
Vol 25 ◽  
pp. 253-260 ◽  
Author(s):  
Oladipupo Olaitan ◽  
John Geraghty ◽  
Paul Young ◽  
Georgios Dagkakis ◽  
Cathal Heavey ◽  
...  

2021 ◽  
Vol 13 (3) ◽  
pp. 402 ◽  
Author(s):  
Pablo Rodríguez-Gonzálvez ◽  
Manuel Rodríguez-Martín

Thermography as a methodology for quantitative data acquisition is not usually addressed in university degree programs. This manuscript proposes a novel approach for the acquisition of advanced competences in engineering courses associated with the use of thermographic images via free/open-source software solutions. The strategy is based on research into statistical and three-dimensional visualization techniques applied to thermographic imagery, aimed at improving the interpretation and comprehension of the different sources of error affecting the measurements and, thereby, the conclusions and analyses arising from them. The novelty lies in the detection of non-normalities in thermographic images, which is illustrated in the experimental section. Additionally, a specific workflow for generating learning material for this purpose is presented for asynchronous and e-learning programs. These virtual materials can be easily deployed in an institutional learning management system, allowing students to work with the models by means of free/open-source solutions. The approach provides new tools to improve the application of professional techniques and strengthens students' critical sense of how to interpret the uncertainties in thermography from a single thermographic image, so that they are better prepared to face future challenges with more critical thinking.
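As one hedged illustration of what screening a single thermographic image for non-normality could look like (a sketch only, not the authors' teaching material or workflow), the code below treats the per-pixel temperatures as a sample and applies a D'Agostino-Pearson normality test alongside skewness and kurtosis summaries; the image, anomaly, and significance level are synthetic assumptions.

```python
# Sketch: screen a thermographic image for deviation from normality.
import numpy as np
from scipy import stats

def check_thermal_normality(temperatures: np.ndarray, alpha: float = 0.05) -> dict:
    """temperatures: 2D array of per-pixel temperatures exported from the camera."""
    sample = temperatures.ravel()
    statistic, p_value = stats.normaltest(sample)   # D'Agostino-Pearson K^2 test
    return {
        "mean": float(sample.mean()),
        "std": float(sample.std(ddof=1)),
        "skewness": float(stats.skew(sample)),
        "kurtosis": float(stats.kurtosis(sample)),
        "normal": bool(p_value >= alpha),           # False -> investigate error sources
        "p_value": float(p_value),
    }

# Synthetic example: a warm spot superimposed on an otherwise Gaussian background.
rng = np.random.default_rng(1)
img = rng.normal(35.0, 0.4, size=(240, 320))
img[100:140, 150:200] += 3.0                        # localized anomaly skews the histogram
print(check_thermal_normality(img))
```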


Author(s):  
Athanasios-Ilias Rousinopoulos ◽  
Gregorio Robles ◽  
Jesús M. González-Barahona

Software development is a human-effort-intensive activity. Thus, the way developers approach their tasks is of utmost importance. In an environment such as the one usual in FOSS (free/open source software) projects, in which professionals (paid developers) share the development effort with volunteers, the morale of the community of developers and users is fundamental. In this article, we present a preliminary analysis using sentiment analysis techniques carried out on a FOSS project. To do so, we mined a project's mailing list and applied the proposed techniques to the most relevant participants. Although the application is limited at present, we expect that this experience can be beneficial in the future for detecting situations that may affect the developers or the project, such as low productivity, project abandonment, or project forking, among others.
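As a purely illustrative sketch of this kind of pipeline (the sentiment tool, message data, and participant names below are assumptions, not the authors' setup), the following scores mailing-list messages per participant with NLTK's VADER analyzer and averages the compound sentiment for each contributor.

```python
# Sketch: per-participant sentiment over mined mailing-list messages.
from collections import defaultdict
from nltk.sentiment import SentimentIntensityAnalyzer   # needs: nltk.download("vader_lexicon")

# Hypothetical mined messages: (author, body) pairs extracted from a mailing-list archive.
messages = [
    ("dev-a", "Great work on the release, the new build system is much cleaner."),
    ("dev-a", "I'm frustrated that this patch has been ignored for weeks."),
    ("dev-b", "Thanks, merged. Looking forward to the next milestone."),
]

analyzer = SentimentIntensityAnalyzer()
scores = defaultdict(list)
for author, body in messages:
    scores[author].append(analyzer.polarity_scores(body)["compound"])

# Average compound score per participant: values near -1 are negative, near +1 positive.
for author, vals in scores.items():
    print(author, round(sum(vals) / len(vals), 3))
```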

