Development of a tool-kit for the detection of healthy and injured cardiac tissue based on MR imaging

2017 ◽  
Vol 3 (2) ◽  
pp. 195-198
Author(s):  
Philip Westphal ◽  
Sebastian Hilbert ◽  
Michael Unger ◽  
Claire Chalopin

Abstract: Planning of interventions to treat cardiac arrhythmia requires a 3D patient-specific model of the heart. Currently available commercial or free software dedicated to this task has important limitations for routine use: automatic algorithms are not robust enough, while manual methods are time-consuming. The project therefore aims to develop an optimal software tool. The heart model is generated from preoperative MR data sets acquired with contrast agent and allows visualisation of damaged cardiac tissue. A requirement in the development of the software tool was the use of semi-automatic functions to improve robustness. Once the patient image dataset has been loaded, the user selects a region of interest. Thresholding functions allow selection of the high-intensity areas that correspond to anatomical structures filled with contrast agent, namely the cardiac cavities and blood vessels. Thereafter, the target structure, for example the left ventricle, is coarsely selected by interactively outlining its gross shape. An active contour function automatically adjusts the initial contour to the image content. The result can still be improved manually using fast interaction tools. Finally, possible scar tissue located in the cavity muscle is automatically detected and visualized on the 3D heart model. The model is exported in a format compatible with the interventional devices at the hospital. The evaluation of the software tool included two steps. Firstly, a comparison with two free software tools was performed on two image data sets of variable quality. Secondly, six scientists and physicians tested our tool and filled out a questionnaire. The performance of our software tool was visually judged more satisfactory than that of the free software, especially on the data set of lower quality. The professionals evaluated our functionalities positively regarding time taken, ease of use and quality of results. Future improvements would consist of performing the planning based on different MR modalities.
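A minimal sketch of the semi-automatic steps described above (region of interest, intensity thresholding, seed-based selection of the target cavity), using a synthetic image with NumPy/SciPy. The thresholding rule, seed point and morphological clean-up are illustrative choices, not the tool's actual implementation, and the active-contour refinement is only indicated.

```python
import numpy as np
from scipy import ndimage

# Synthetic short-axis MR slice: a bright, contrast-filled cavity on a darker background.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 10.0, size=(256, 256))
yy, xx = np.mgrid[0:256, 0:256]
img[(yy - 128) ** 2 + (xx - 128) ** 2 < 40 ** 2] += 150.0  # contrast agent raises cavity intensity

# Step 1: user-defined region of interest (here a fixed bounding box).
roi = img[64:192, 64:192]

# Step 2: threshold to keep the high-intensity voxels (contrast-filled structures).
threshold = 0.5 * (roi.min() + roi.max())
mask = roi > threshold

# Step 3: keep the connected component containing a seed point, standing in
# for the interactive coarse outline of the target cavity.
labels, _ = ndimage.label(mask)
seed = (64, 64)                       # centre of the ROI in this synthetic example
target = labels == labels[seed]

# Step 4: light morphological clean-up; the tool's active-contour step would
# then refine this initial boundary against the image content.
target = ndimage.binary_closing(target, structure=np.ones((5, 5)))
print("segmented voxels:", int(target.sum()))
```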

2012 ◽  
Vol 163 (4) ◽  
pp. 119-129
Author(s):  
Fabian Kostadinov ◽  
Renato Lemm ◽  
Oliver Thees

A software tool for the estimation of wood harvesting productivity using the kNN method. For operational planning and management of wood harvests, it is important to have access to reliable information on time consumption and costs. To estimate these efficiently and reliably, appropriate methods and calculation tools are needed. The present article investigates whether the method of the k nearest neighbours (kNN) is appropriate for this purpose. The kNN algorithm is first explained and then applied to two wood harvesting data sets, “combined cable crane and processor” and “skidder”, to determine the estimation accuracy of the method. It is shown that the kNN method's estimation accuracy lies within the same order of magnitude as that of a multiple linear regression. Advantages of the kNN method are that it is easy to understand and to visualize, and that its estimation models do not become outdated, since new data sets can constantly be taken into account. The article also presents the kNN Workbook, a software tool developed by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) with which any data set can be analysed in practice using the kNN method.
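A minimal sketch of the kNN estimation idea on invented harvesting records; the features, units and value of k are assumptions for illustration, not those of the kNN Workbook.

```python
import numpy as np

def knn_estimate(query, X, y, k=3):
    """Estimate time consumption for `query` as the mean observed value of its
    k nearest neighbours in the (standardized) harvesting data set."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    dist = np.linalg.norm((X - mu) / sigma - (np.asarray(query) - mu) / sigma, axis=1)
    nearest = np.argsort(dist)[:k]
    return y[nearest].mean()

# Hypothetical records: [mean tree volume (m3), extraction distance (m), slope (%)]
X = np.array([[0.4, 120, 25], [0.6, 200, 30], [0.9, 150, 20],
              [0.3, 300, 45], [1.1, 100, 15], [0.7, 250, 35]], dtype=float)
y = np.array([3.2, 2.8, 2.1, 4.5, 1.8, 3.0])   # productive machine hours per 100 m3 (invented)

print(knn_estimate([0.5, 180, 28], X, y, k=3))
```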


Author(s):  
Naoto Yamaguchi ◽  
◽  
Mao Wu ◽  
Michinori Nakata ◽  
Hiroshi Sakai ◽  
...  

This article reports an application of Rough Nondeterministic Information Analysis (RNIA) to two data sets. One is the Mushroom data set in the UCI machine learning repository, and the other is a student questionnaire data set. Even though these data sets include many missing values, we obtained some interesting rules by using our getRNIA software tool. This software is powered by the NIS-Apriori algorithm, and we apply rule generation and question-answering functionalities to data sets with nondeterministic values.
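A simplified illustration, not the NIS-Apriori algorithm itself: with nondeterministic (missing) values, a rule can be given a pessimistic "certain" support over rows that definitely satisfy it and an optimistic "possible" support over rows that could satisfy it under some completion of the missing entries. The toy table and attribute names are invented.

```python
MISSING = None

table = [  # hypothetical rows in the spirit of the Mushroom data (attribute -> value or MISSING)
    {"cap": "convex", "odor": "none",  "edible": "yes"},
    {"cap": "flat",   "odor": MISSING, "edible": "no"},
    {"cap": "convex", "odor": "none",  "edible": MISSING},
    {"cap": "convex", "odor": "foul",  "edible": "no"},
]

def supports(row, condition):
    """Return (certainly_matches, possibly_matches) for one set of descriptors."""
    certain = all(row[a] == v for a, v in condition.items())
    possible = all(row[a] == v or row[a] is MISSING for a, v in condition.items())
    return certain, possible

def rule_support(table, antecedent, consequent):
    cond = {**antecedent, **consequent}
    certain = sum(supports(r, cond)[0] for r in table)
    possible = sum(supports(r, cond)[1] for r in table)
    return certain / len(table), possible / len(table)

print(rule_support(table, {"cap": "convex", "odor": "none"}, {"edible": "yes"}))
# -> (0.25, 0.5): one row certainly supports the rule, a second possibly does.
```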


2008 ◽  
Vol 130 (5) ◽  
Author(s):  
Vickie B. Shim ◽  
Rocco P. Pitto ◽  
Robert M. Streicher ◽  
Peter J. Hunter ◽  
Iain A. Anderson

To produce a patient-specific finite element (FE) model of a bone such as the pelvis, a complete computed tomography (CT) or magnetic resonance imaging (MRI) geometric data set is desirable. However, most patient data are limited to a specific region of interest such as the acetabulum. We have overcome this problem with a hybrid method that is capable of generating accurate FE models from sparse patient data sets. In this paper, we have validated our technique with mechanical experiments. Three embalmed cadaveric pelves were strain-gauged and used in mechanical experiments. FE models were generated from the CT scans of the pelves. Material properties for cancellous bone were obtained from the CT scans and assigned to the FE mesh using a spatially varying field embedded inside the mesh, while properties for the other materials used in the model were obtained from the literature. Although our FE meshes have large elements, the spatially varying field allowed them to have location-dependent inhomogeneous material properties. For each pelvis, five different FE meshes with a varying number of patient CT slices (8–12) were generated to determine how many patient CT slices are needed for good accuracy. All five mesh types showed good agreement between the model and experimental strains. Meshes generated with incomplete data sets showed very similar stress distributions to those obtained from the FE mesh generated with complete data sets. Our modeling approach provides an important step in advancing the application of FE models from the research environment to the clinical setting.
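A sketch of the general idea of assigning location-dependent cancellous bone stiffness from CT intensities; the HU-to-density calibration and density-to-modulus power law below are placeholder values for illustration, not those used in the paper.

```python
import numpy as np

def hu_to_youngs_modulus(hu):
    """Map CT Hounsfield units to an elastic modulus via an assumed density calibration."""
    rho = 0.0008 * hu + 0.1                       # apparent density [g/cm^3], assumed linear calibration
    return 2000.0 * np.maximum(rho, 0.01) ** 2    # E [MPa], assumed power-law relation

# Hypothetical CT samples at the material points of one coarse element: the spatially
# varying field gives each point its own modulus, so a single large element can carry
# inhomogeneous material properties.
hu_at_material_points = np.array([150.0, 320.0, 80.0, 510.0])
print(hu_to_youngs_modulus(hu_at_material_points))
```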


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 167
Author(s):  
Ivan Kholod ◽  
Evgeny Yanaki ◽  
Dmitry Fomichev ◽  
Evgeniy Shalugin ◽  
Evgenia Novikova ◽  
...  

The rapid development of Internet of Things (IoT) systems has led to the problem of managing and analyzing the large volumes of data that they generate. Traditional approaches that collect data from IoT devices into one centralized repository for further analysis are not always applicable due to the large amount of collected data, the use of communication channels with limited bandwidth, security and privacy requirements, etc. Federated learning (FL) is an emerging approach that allows one to analyze data directly on the data sources and to federate the results of each analysis to yield a result comparable to that of traditional centralized data processing. FL is being actively developed, and there are currently several open-source frameworks that implement it. This article presents a comparative review and analysis of the existing open-source FL frameworks, including their applicability in IoT systems. The authors evaluated the following features of the frameworks: ease of use and deployment, development, analysis capabilities, accuracy, and performance. Three different data sets were used in the experiments: two signal data sets of different volumes and one image data set. To model low-power IoT devices, computing nodes with small resources were defined in the testbed. The research results revealed FL frameworks that could already be applied in IoT systems, though with certain restrictions on their use.
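A generic federated-averaging sketch on a toy linear-regression task, illustrating the FL idea of local updates followed by weighted aggregation; it is not taken from any of the reviewed frameworks, and the client sizes and learning settings are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

def make_client(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(n) for n in (50, 80, 30)]    # three IoT nodes with local data

def local_update(w, X, y, lr=0.05, epochs=5):
    for _ in range(epochs):                          # a few local gradient steps
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w = np.zeros(2)
for _ in range(20):                                  # communication rounds
    updates = [local_update(w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    w = np.average(updates, axis=0, weights=sizes)   # federate: weighted average of local models

print(w)   # approaches [2, -1] without pooling the raw data centrally
```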


2013 ◽  
Vol 31 (4_suppl) ◽  
pp. 66-66
Author(s):  
Yanghee Woo ◽  
Woo Jin Hyung ◽  
Ki Jun Song ◽  
Yanfeng Hu ◽  
Naoki Okumura ◽  
...  

Background: Patient-specific prognosis for gastric cancer is difficult to determine. The internationally accepted AJCC TNM staging system currently provides the best framework for predicting a patient's prognosis. However, a major weakness of the TNM system is that significant survival differences exist even within its subgroups. The objective of this study was to create a simple tool to accurately predict patient survival from gastric cancer after gastrectomy. Methods: Between December 1986 and March 2007, 10,621 patients were surgically treated for gastric cancer at a single institution and observed until death. A nomogram was constructed using Cox proportional hazard regression for multivariate analysis and the Kaplan-Meier method for estimation of 5-year overall survival. Overall survival was the endpoint. The predicted probability of the nomogram for actual overall survival was compared to the 7th edition AJCC TNM staging system. The nomogram was then validated using external data sets from four institutions in Korea, Japan, and China, comprising 1573 (A), 297 (B), 78 (C) and 767 (D) patients. Results: Variables selected for the prediction model included age, gender, depth of invasion, number of metastatic lymph nodes (LN), total number of LN retrieved, and the presence of distant metastasis. The newly developed nomogram predicted a gastric cancer patient's overall 5-year survival more accurately than the 7th edition AJCC TNM system (p=0.0024), with areas under the curve of 0.8023 (our nomogram) and 0.7869 (AJCC TNM staging system). The concordance indexes of the different validation sets were 0.824 (A), 0.835 (B), 0.916 (C), and 0.767 (D). Conclusions: Our simple nomogram requires minimal patient and tumor information. It accurately predicts the 5-year overall survival of a patient with gastric cancer after surgical resection. Already internationally validated with data sets of various sample sizes and from different countries, our new nomogram provides a useful tool for prognostication after gastrectomy with wide applicability in different patient populations and institutions.
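A small sketch of Harrell's concordance index, the validation metric reported above; the survival times, censoring indicators and risk scores are invented for illustration.

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C: fraction of comparable patient pairs whose predicted risks are
    ordered consistently with their observed survival times (ties ignored here)."""
    concordant, comparable = 0, 0
    for i in range(len(time)):
        for j in range(len(time)):
            if event[i] == 1 and time[i] < time[j]:   # pair is comparable
                comparable += 1
                if risk[i] > risk[j]:                 # shorter survival, higher predicted risk
                    concordant += 1
    return concordant / comparable

time  = np.array([12, 60, 34, 8, 72])                 # months of follow-up (invented)
event = np.array([1, 0, 1, 1, 0])                     # 1 = death observed, 0 = censored
risk  = np.array([2.1, 0.3, 0.2, 2.8, 1.0])           # nomogram risk scores (invented)
print(round(concordance_index(time, event, risk), 3)) # ~0.778 for this toy example
```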


2016 ◽  
Vol 9 (1) ◽  
pp. 383-392 ◽  
Author(s):  
K. A. Endsley ◽  
M. G. Billmire

Abstract. Due to the proliferation of geophysical models, particularly climate models, the increasing resolution of their spatiotemporal estimates of Earth system processes, and the desire to easily share results with collaborators, there is a genuine need for tools to manage, aggregate, visualize, and share data sets. We present a new, web-based software tool – the Carbon Data Explorer – that provides these capabilities for gridded geophysical data sets. While originally developed for visualizing carbon flux, this tool can accommodate any time-varying, spatially explicit scientific data set, particularly NASA Earth system science level III products. In addition, the tool's open-source licensing and web presence facilitate distributed scientific visualization, comparison with other data sets and uncertainty estimates, and data publishing and distribution.
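A toy illustration of the kind of aggregation such a tool performs on a gridded, time-varying field (a time-mean map and an area-weighted global series); the grid and values are synthetic and unrelated to the Carbon Data Explorer's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
flux = rng.normal(0.0, 1.0, size=(12, 90, 180))        # (time, lat, lon) carbon flux, synthetic

time_mean_map = flux.mean(axis=0)                      # one map: average over the 12 months
lat = np.linspace(-89, 89, 90)
weights = np.cos(np.deg2rad(lat))[:, None]             # area weighting by latitude
global_series = (flux * weights).sum(axis=(1, 2)) / (weights.sum() * 180)

print(time_mean_map.shape, global_series.shape)        # (90, 180) (12,)
```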


Author(s):  
Emad Alharbi ◽  
Paul Bond ◽  
Radu Calinescu ◽  
Kevin Cowtan

Proteins are macromolecules that perform essential biological functions which depend on their three-dimensional structure. Determining this structure involves complex laboratory and computational work. For the computational work, multiple software pipelines have been developed to build models of the protein structure from crystallographic data. Each of these pipelines performs differently depending on the characteristics of the electron-density map received as input. Identifying the best pipeline to use for a protein structure is difficult, as the pipeline performance differs significantly from one protein structure to another. As such, researchers often select pipelines that do not produce the best possible protein models from the available data. Here, a software tool is introduced which predicts key quality measures of the protein structures that a range of pipelines would generate if supplied with a given crystallographic data set. These measures are crystallographic quality-of-fit indicators based on included and withheld observations, and structure completeness. Extensive experiments carried out using over 2500 data sets show that the tool yields accurate predictions for both experimental phasing data sets (at resolutions between 1.2 and 4.0 Å) and molecular-replacement data sets (at resolutions between 1.0 and 3.5 Å). The tool can therefore provide a recommendation to the user concerning the pipelines that should be run in order to proceed most efficiently to a depositable model.
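An illustrative sketch of the prediction task only: learning a map from data-set characteristics to a quality-of-fit measure such as R-free. The features, synthetic targets and choice of regressor are assumptions, not the tool's actual model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
resolution = rng.uniform(1.0, 4.0, n)            # resolution limit in angstroms (hypothetical feature)
completeness = rng.uniform(0.7, 1.0, n)          # data completeness (hypothetical feature)
solvent_fraction = rng.uniform(0.3, 0.7, n)      # solvent fraction (hypothetical feature)
X = np.column_stack([resolution, completeness, solvent_fraction])

# Synthetic target: an R-free-like measure that worsens with resolution limit
# and improves with completeness, plus noise.
y = 0.15 + 0.05 * resolution - 0.1 * completeness + 0.02 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```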


2017 ◽  
Vol 37 (suppl_1) ◽  
Author(s):  
Xiaoya Guo ◽  
David Monoly ◽  
Chun Yang ◽  
Habib Samady ◽  
Jie Zheng ◽  
...  

Accurate cap thickness and stress/strain quantifications are of fundamental importance for vulnerable plaque research. An innovative modeling approach combining intravascular ultrasound (IVUS) and optical coherence tomography (OCT) is introduced for more accurate patient-specific coronary morphology and stress/strain calculations. In vivo IVUS and OCT coronary plaque data were acquired from two patients with informed consent obtained. IVUS and OCT images were segmented, co-registered, and merged to form the IVUS+OCT data set, with OCT providing accurate cap thickness. Biplane angiography provided 3D vessel curvature. Due to the limited IVUS resolution (150 μm), the original virtual histology (VH) IVUS data often showed the lipid core exposed to the lumen, since cap thickness is set to zero when it is <150 μm. The VH-IVUS data were processed with the minimum cap thickness set to 50 and 180 μm to generate the IVUS50 and IVUS180 data sets for modeling use. 3D fluid-structure interaction models based on the IVUS+OCT, IVUS50 and IVUS180 data sets were constructed to investigate the impact of the OCT cap thickness improvement on stress/strain calculations. Figure 1 briefly summarizes the results from 27 slices with caps covering lipid cores from the 2 patients. Mean cap thickness (unit: mm) for Patient 1 was 0.353 (OCT), 0.201 (IVUS50), and 0.329 (IVUS180); for Patient 2 it was 0.320 (OCT), 0.224 (IVUS50), and 0.285 (IVUS180). IVUS50 underestimated cap thickness (27 slices) by 34.5% compared to the OCT cap values. IVUS50 overestimated mean cap stress (27 slices) by 45.8% compared to the OCT cap stress (96.4 vs. 66.1 kPa). The IVUS50 maximum cap stress was 59.2% higher than that from the IVUS+OCT model (564.2 vs. 354.5 kPa). Differences between the IVUS and IVUS+OCT models for mean cap strain and flow shear stress were modest (cap strain: <12%; FSS: <2%). Conclusion: IVUS+OCT data and models can provide more accurate cap thickness and stress/strain calculations, which will serve as a basis for plaque research.
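The quoted stress differences can be reproduced directly from the mean and maximum cap stresses reported above; a quick arithmetic check:

```python
# Relative differences recomputed from the cap stresses reported in the abstract.
mean_stress_ivus50, mean_stress_oct = 96.4, 66.1      # mean cap stress, kPa (27 slices)
max_stress_ivus50, max_stress_oct = 564.2, 354.5      # maximum cap stress, kPa

print(round((mean_stress_ivus50 - mean_stress_oct) / mean_stress_oct * 100, 1))  # 45.8
print(round((max_stress_ivus50 - max_stress_oct) / max_stress_oct * 100, 1))     # 59.2
```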


2007 ◽  
Vol 46 (01) ◽  
pp. 38-42 ◽  
Author(s):  
V. Schulz ◽  
I. Nickel ◽  
A. Nömayr ◽  
A. H. Vija ◽  
C. Hocke ◽  
...  

Summary: The aim of this study was to determine the clinical relevance of compensating SPECT data for patient-specific attenuation by using CT data simultaneously acquired with SPECT/CT when analyzing the skeletal uptake of polyphosphonates (DPD). Furthermore, the influence of misregistration between SPECT and CT data on uptake ratios was investigated. Methods: Thirty-six data sets from bone SPECTs performed on a hybrid SPECT/CT system were retrospectively analyzed. Using regions of interest (ROIs), raw counts were determined in the fifth lumbar vertebral body, its facet joints, both anterior iliac spines, and in the whole transversal slice. ROI measurements were performed in uncorrected (NAC) and attenuation-corrected (AC) images. Furthermore, the ROI measurements were also performed in AC scans in which the SPECT and CT images had been misaligned by 1 cm in one dimension beforehand (ACX, ACY, ACZ). Results: After AC, DPD uptake ratios differed significantly from the NAC values in all regions studied, with differences ranging from 32% for the left facet joint to 39% for the vertebral body. AC using misaligned pairs of patient data sets led to a significant change of whole-slice uptake ratios, with differences ranging from 3.5% to 25%. For ACX, the average left-to-right ratio was 8% lower for the facet joints and 31% lower for the superior iliac spines than the values determined for the matched images (p < 0.05). Conclusions: AC significantly affects DPD uptake ratios. Furthermore, misalignment between SPECT and CT may introduce significant errors in quantification, potentially also affecting left-to-right ratios. Therefore, in the clinical evaluation of attenuation-corrected scans, special attention should be given to possible misalignments between SPECT and CT.
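A minimal sketch of the ROI-based ratios used in the study (uptake ratios normalized to whole-slice counts, and left-to-right ratios of paired ROIs); the count values are invented, not taken from the study.

```python
# Invented raw ROI counts for one attenuation-corrected slice.
counts = {
    "L5_vertebral_body": 5200.0,
    "facet_left": 1800.0,
    "facet_right": 1950.0,
    "iliac_spine_left": 900.0,
    "iliac_spine_right": 870.0,
    "whole_slice": 48000.0,
}

# Uptake ratios: each ROI normalized to the whole transversal slice.
uptake_ratios = {name: c / counts["whole_slice"]
                 for name, c in counts.items() if name != "whole_slice"}

# Left-to-right ratios of the paired ROIs, the quantities compared between
# matched and deliberately misaligned SPECT/CT data.
lr_facets = counts["facet_left"] / counts["facet_right"]
lr_spines = counts["iliac_spine_left"] / counts["iliac_spine_right"]

print(uptake_ratios)
print(round(lr_facets, 3), round(lr_spines, 3))
```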


2018 ◽  
Vol 154 (2) ◽  
pp. 149-155
Author(s):  
Michael Archer

1. Yearly records of worker Vespula germanica (Fabricius) taken in suction traps at Silwood Park (28 years) and at Rothamsted Research (39 years) are examined. 2. Using the autocorrelation function (ACF), a significant negative 1-year lag followed by a lesser, non-significant positive 2-year lag was found in all, or parts of, each data set, indicating an underlying population dynamic of a 2-year cycle with a damped waveform. 3. The minimum number of years before the 2-year cycle with a damped waveform became apparent varied between 17 and 26, or the cycle was not found at all in some data sets. 4. Ecological factors delaying or preventing the occurrence of the 2-year cycle are considered.
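A short sketch of the sample autocorrelation at lags 1 and 2, the quantity used in point 2 above, computed on a synthetic alternating yearly count series standing in for the suction-trap data.

```python
import numpy as np

# Synthetic yearly worker counts with alternating high/low years (not the real trap data).
counts = np.array([30, 110, 25, 140, 40, 120, 35, 150, 45, 100,
                   50, 130, 60, 90, 55, 125, 70, 95, 65, 115], dtype=float)

def acf(x, lag):
    """Sample autocorrelation at the given lag."""
    x = x - x.mean()
    return np.sum(x[:-lag] * x[lag:]) / np.sum(x * x)

print("lag 1:", round(acf(counts, 1), 2))   # strongly negative -> alternating high and low years
print("lag 2:", round(acf(counts, 2), 2))   # positive -> consistent with an underlying 2-year cycle
```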

