A Framework for the Objective Assessment of Registration Accuracy

2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Francesca Pizzorni Ferrarese ◽  
Flavio Simonetti ◽  
Roberto Israel Foroni ◽  
Gloria Menegaz

Validation and accuracy assessment are the main bottlenecks preventing the adoption of image processing algorithms in clinical practice. In the classical approach, a posteriori analysis is performed through objective metrics. In this work, a different approach based on Petri nets is proposed. The basic idea consists in predicting the accuracy of a given pipeline based on the identification and characterization of the sources of inaccuracy. The concept is demonstrated on a case study: intrasubject rigid and affine registration of magnetic resonance images. Both synthetic and real data are considered. While synthetic data allow benchmarking the performance with respect to the ground truth, real data make it possible to assess the robustness of the methodology in real contexts and to determine the suitability of synthetic data for the training phase. Results revealed a higher correlation and a lower dispersion among the metrics for simulated data, while the opposite trend was observed for pathologic ones. Results show that the proposed model not only provides good prediction performance but also leads to the optimization of the end-to-end chain in terms of accuracy and robustness, laying the ground for its generalization to different and more complex scenarios.
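As a minimal illustration of how synthetic data permit benchmarking a registration result against a known ground truth, the following Python sketch applies a hypothetical estimated transform and the true synthetic transform to a set of landmarks and reports the mean residual error; the transforms, landmarks, and error metric are illustrative and not the pipeline described above.

```python
# Illustrative sketch (not the authors' pipeline): benchmarking a rigid/affine
# registration against a known synthetic ground-truth transform by measuring
# the residual landmark error. The 4x4 transforms and landmarks are hypothetical.
import numpy as np

def landmark_error(estimated_affine, true_affine, landmarks):
    """Mean Euclidean distance between landmarks mapped by the estimated
    and the ground-truth affine transforms (homogeneous 4x4 matrices)."""
    pts = np.c_[landmarks, np.ones(len(landmarks))]          # N x 4 homogeneous
    est = (pts @ estimated_affine.T)[:, :3]
    ref = (pts @ true_affine.T)[:, :3]
    return np.linalg.norm(est - ref, axis=1).mean()

# Synthetic ground truth: a small rotation plus translation
theta = np.deg2rad(3.0)
true_T = np.eye(4)
true_T[:3, :3] = [[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0, 0, 1]]
true_T[:3, 3] = [2.0, -1.5, 0.5]

estimated_T = true_T.copy()
estimated_T[:3, 3] += [0.1, -0.05, 0.02]                     # simulated registration error

landmarks = np.random.default_rng(0).uniform(-50, 50, size=(100, 3))
print(f"mean landmark error: {landmark_error(estimated_T, true_T, landmarks):.3f} mm")
```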

Author(s):  
Nipon Theera-Umpon ◽  
Udomsak Boonprasert

This paper demonstrates an application of the support vector machine (SVM) to oceanic disaster search and rescue operations. Support vector regression (SVR) for system identification of a nonlinear black-box model is utilized in this research. The SVR-based ocean model helps the search and rescue unit by predicting the target's position at any given time instant; a predicted location closer to the actual one shortens the searching time and minimizes the loss. One of the most popular ocean models, namely the Princeton ocean model, is applied to provide the ground truth of the target leeway. From the experiments, the results on the simulated data show that the proposed SVR-based ocean model provides good predictions compared to the Princeton ocean model. Moreover, the experimental results on the real data collected by the Royal Thai Navy also show that the proposed model can be used as an auxiliary tool in search and rescue operations.
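A minimal sketch of the regression step, assuming a toy drift track and a sliding-window feature construction (both hypothetical, not the paper's leeway model): scikit-learn's SVR is fitted to map recent positions to the next position, which is the kind of prediction the SVR-based ocean model provides to the search and rescue unit.

```python
# Hedged sketch: support vector regression mapping recent drift observations to the
# target's next position. The feature construction and data are illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(42)

# Toy drift track: position (lat, lon) driven by a slowly varying current plus noise
t = np.arange(500, dtype=float)
track = np.c_[0.01 * t + 0.5 * np.sin(t / 50), 0.008 * t + 0.3 * np.cos(t / 40)]
track += rng.normal(scale=0.02, size=track.shape)

# Features: the last 3 observed positions; target: the next position
window = 3
X = np.hstack([track[i:len(track) - window + i] for i in range(window)])
y = track[window:]

model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:-50], y[:-50])
pred = model.predict(X[-50:])
print("mean prediction error:", np.linalg.norm(pred - y[-50:], axis=1).mean())
```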


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Abstract
Background: Three-way data have gained popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with state-of-the-art alternatives is paramount. These comparisons are usually performed using real data, without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of providing the ground truth (the triclustering solution) as output.
Results: G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters.
Conclusions: Triclustering evaluation using G-Tric makes it possible to combine both intrinsic and extrinsic metrics to compare solutions, producing more reliable analyses. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the process of evaluating the quality of new triclustering approaches.
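A minimal sketch of the planting idea under simplified assumptions (numeric data, one constant tricluster, uniform missing rate); the G-Tric generator itself offers far richer pattern, structure, and quality controls:

```python
# Minimal sketch of a tricluster-planting generator: an observations x features x
# contexts tensor with background noise, one planted constant tricluster, and a
# configurable fraction of missing values. Illustration of the concept only,
# not the G-Tric implementation.
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_feat, n_ctx = 100, 50, 10

# Background distribution
data = rng.normal(loc=0.0, scale=1.0, size=(n_obs, n_feat, n_ctx))

# Plant a constant tricluster on chosen index subsets (the ground-truth solution)
obs_idx = rng.choice(n_obs, size=15, replace=False)
feat_idx = rng.choice(n_feat, size=8, replace=False)
ctx_idx = rng.choice(n_ctx, size=4, replace=False)
data[np.ix_(obs_idx, feat_idx, ctx_idx)] = 3.0 + rng.normal(scale=0.1, size=(15, 8, 4))

# Inject missing values to control data quality
missing_mask = rng.random(data.shape) < 0.02
data[missing_mask] = np.nan

ground_truth = {"observations": obs_idx, "features": feat_idx, "contexts": ctx_idx}
print(data.shape, {k: len(v) for k, v in ground_truth.items()})
```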


2020 ◽  
Author(s):  
Yoonjee Kang ◽  
Denis Thieffry ◽  
Laura Cantini

Abstract
Networks are powerful tools to represent and investigate biological systems. The development of algorithms inferring regulatory interactions from functional genomics data has been an active area of research. With the advent of single-cell RNA-seq data (scRNA-seq), numerous methods specifically designed to take advantage of single-cell datasets have been proposed. However, published benchmarks on single-cell network inference are mostly based on simulated data. Once applied to real data, these benchmarks take into account only a small set of genes and only compare the inferred networks with an imposed ground truth.
Here, we benchmark four single-cell network inference methods based on their reproducibility, i.e., their ability to infer similar networks when applied to two independent datasets for the same biological condition. We tested each of these methods on real data from three biological conditions: human retina, T-cells in colorectal cancer, and human hematopoiesis.
GENIE3 turns out to be the most reproducible algorithm, independently of the single-cell sequencing platform, the cell type annotation system, the number of cells constituting the dataset, or the thresholding applied to the links of the inferred networks. To ensure the reproducibility of this benchmark study and to ease its extension, we implemented all the analyses in scNET, a Jupyter notebook available at https://github.com/ComputationalSystemsBiology/scNET.
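A small sketch of the reproducibility criterion, assuming the networks are given as (regulator, target, importance) tables as most GRN tools produce; the inference step is mocked with random scores rather than an actual GENIE3 run:

```python
# Hedged sketch of the reproducibility criterion: infer a network on two independent
# datasets for the same condition, keep the top-k edges of each, and quantify the
# agreement by the Jaccard index. The inference itself is mocked here.
import numpy as np
import pandas as pd

def top_k_edges(edge_scores: pd.DataFrame, k: int) -> set:
    """edge_scores: columns ['tf', 'target', 'importance']; returns the set of the
    k highest-scoring (tf, target) pairs."""
    top = edge_scores.nlargest(k, "importance")
    return set(zip(top["tf"], top["target"]))

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Mock inferred networks from two independent datasets
rng = np.random.default_rng(0)
genes = [f"g{i}" for i in range(200)]
def mock_network():
    return pd.DataFrame({"tf": rng.choice(genes, 5000),
                         "target": rng.choice(genes, 5000),
                         "importance": rng.random(5000)})

net1, net2 = mock_network(), mock_network()
print("reproducibility (Jaccard of top-1000 edges):",
      round(jaccard(top_k_edges(net1, 1000), top_k_edges(net2, 1000)), 3))
```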


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3784 ◽  
Author(s):  
Jameel Malik ◽  
Ahmed Elhayek ◽  
Didier Stricker

Hand shape and pose recovery is essential for many computer vision applications such as animation of a personalized hand mesh in a virtual environment. Although there are many hand pose estimation methods, only a few deep-learning-based algorithms target 3D hand shape and pose from a single RGB or depth image. Jointly estimating hand shape and pose is very challenging because none of the existing real benchmarks provides ground truth hand shape. For this reason, we propose a novel weakly-supervised approach for 3D hand shape and pose recovery (named WHSP-Net) from a single depth image by learning shapes from unlabeled real data and labeled synthetic data. To this end, we propose a novel framework which consists of three novel components. The first is a convolutional neural network (CNN) based deep network which produces 3D joint positions from learned 3D bone vectors using a new layer. The second is a novel shape decoder that recovers a dense 3D hand mesh from sparse joints. The third is a novel depth synthesizer which reconstructs the 2D depth image from the 3D hand mesh. The whole pipeline is fine-tuned in an end-to-end manner. We demonstrate that our approach recovers reasonable hand shapes from real-world datasets as well as from a live depth-camera stream in real time. Our algorithm outperforms state-of-the-art methods that output more than the joint positions and shows competitive performance on the 3D pose estimation task.
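One plausible reading of the bone-vector layer, shown only as a hedged illustration (the 21-joint skeleton and accumulation rule are assumptions, not the paper's exact formulation): joint positions are obtained by accumulating predicted bone vectors along the kinematic chain from the wrist.

```python
# Hedged illustration (not the paper's exact layer) of mapping predicted 3D bone
# vectors to 3D joint positions: each joint is its parent's position plus the
# corresponding bone vector, accumulated along the kinematic chain.
import numpy as np

# Hypothetical 21-joint hand skeleton: parent index per joint (-1 = wrist/root)
PARENTS = np.array([-1,
                    0, 1, 2, 3,      # thumb
                    0, 5, 6, 7,      # index
                    0, 9, 10, 11,    # middle
                    0, 13, 14, 15,   # ring
                    0, 17, 18, 19])  # pinky

def joints_from_bones(root: np.ndarray, bones: np.ndarray) -> np.ndarray:
    """root: (3,) wrist position; bones: (20, 3) bone vectors, one per non-root joint."""
    joints = np.zeros((len(PARENTS), 3))
    joints[0] = root
    for j in range(1, len(PARENTS)):
        joints[j] = joints[PARENTS[j]] + bones[j - 1]
    return joints

rng = np.random.default_rng(1)
print(joints_from_bones(np.zeros(3), rng.normal(scale=0.02, size=(20, 3))).shape)
```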


2020 ◽  
Author(s):  
Nicola Zoppetti ◽  
Simone Ceccherini ◽  
Flavio Barbara ◽  
Samuele Del Bianco ◽  
Marco Gai ◽  
...  

Remote sounding of atmospheric composition makes use of satellite measurements with very heterogeneous characteristics. In particular, the determination of vertical profiles of gases in the atmosphere can be performed using measurements acquired in different spectral bands and with different observation geometries. The most rigorous way to combine heterogeneous measurements of the same quantity into a single Level 2 (L2) product is simultaneous retrieval. The main drawback of simultaneous retrieval is its complexity, due to the necessity to embed the forward models of different instruments into the same retrieval application. To overcome this shortcoming, we developed a data fusion method, referred to as Complete Data Fusion (CDF), to provide an efficient and adaptable alternative to simultaneous retrieval. In general, the CDF input is any number of profiles retrieved with the optimal estimation technique, characterized by their a priori information, covariance matrix (CM), and averaging kernel (AK) matrix. The output of the CDF is a single product, also characterized by an a priori, a CM, and an AK matrix, which collects all the available information content. To account for the geo-temporal differences and the different vertical grids of the profiles to be fused, a coincidence error and an interpolation error have to be included in the error budget.

In the first part of the work, the CDF method is applied to ozone profiles simulated in the thermal infrared and ultraviolet bands, according to the specifications of the Sentinel 4 (geostationary) and Sentinel 5 (low Earth orbit) missions of the Copernicus program. The simulated data have been produced in the context of the Advanced Ultraviolet Radiation and Ozone Retrieval for Applications (AURORA) project, funded by the European Commission in the framework of the Horizon 2020 program. The use of synthetic data and the assumption of negligible systematic error in the simulated measurements allow studying the behavior of the CDF in ideal conditions. The use of synthetic data also allows evaluating the performance of the algorithm in terms of differences between the products of interest and the reference truth, represented by the atmospheric scenario used in the procedure to simulate the L2 products. This analysis aims at demonstrating the potential benefits of the CDF for the synergy of products measured by different platforms in a realistic near-future scenario, when the Sentinel 4 and 5/5p ozone profiles will be available.

In the second part of this work, the CDF is applied to a set of real measurements of ozone acquired by GOME-2 onboard the MetOp-B platform. The quality of the CDF products, obtained for the first time from operational products, is compared with that of the original GOME-2 products. This aims to demonstrate the concrete applicability of the CDF to real data and its possible use to generate Level-3 (or higher) gridded products.

The results discussed in this presentation offer a first consolidated picture of the actual and potential value of an innovative technique for post-retrieval processing and generation of Level-3 (or higher) products from the atmospheric Sentinel data.
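A schematic numpy sketch of fusing averaging-kernel-characterized retrievals in the spirit of the CDF, under strong simplifications (common vertical grid, shared a priori, no coincidence or interpolation errors); the exact CDF equations are those published in the AURORA/CDF literature and may differ in detail:

```python
# Schematic, illustrative-only sketch of fusing two optimal-estimation retrievals on a
# common vertical grid, each characterized by profile x, a priori xa_i, covariance S,
# and averaging kernel A. Coincidence and interpolation errors are ignored here.
import numpy as np

def fuse(retrievals, xa, Sa):
    """retrievals: list of (x, xa_i, S, A); xa, Sa: a priori of the fused product."""
    n = len(xa)
    lhs = np.linalg.inv(Sa)
    rhs = np.linalg.inv(Sa) @ xa
    for x, xa_i, S, A in retrievals:
        S_inv = np.linalg.inv(S)
        alpha = x - (np.eye(n) - A) @ xa_i      # remove the a priori contribution
        lhs += A.T @ S_inv @ A
        rhs += A.T @ S_inv @ alpha
    S_fused = np.linalg.inv(lhs)
    return S_fused @ rhs, S_fused

# Toy example on a 5-level grid
rng = np.random.default_rng(0)
n = 5
xa = np.full(n, 3.0)
Sa = np.eye(n)

def toy_retrieval():
    A = 0.8 * np.eye(n)                          # idealized averaging kernel
    S = 0.05 * np.eye(n)
    truth = 3.0 + np.sin(np.linspace(0, np.pi, n))
    x = xa + A @ (truth - xa) + rng.normal(scale=0.05, size=n)
    return x, xa, S, A

x_f, S_f = fuse([toy_retrieval(), toy_retrieval()], xa, Sa)
print(np.round(x_f, 2))
```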


2020 ◽  
Vol 12 (5) ◽  
pp. 771 ◽  
Author(s):  
Miguel Angel Ortíz-Barrios ◽  
Ian Cleland ◽  
Chris Nugent ◽  
Pablo Pancardo ◽  
Eric Järpe ◽  
...  

Automatic detection and recognition of Activities of Daily Living (ADL) are crucial for providing effective care to frail older adults living alone. A step forward in addressing this challenge is the deployment of smart home sensors capturing the intrinsic nature of ADLs performed by these people. As the real-life scenario is characterized by a comprehensive range of ADLs and smart home layouts, deviations are expected in the number of sensor events per activity (SEPA), a variable often used for training activity recognition models. Such models, however, rely on the availability of a suitable and representative data collection, which is habitually expensive and resource-intensive. Simulation tools are an alternative for tackling these barriers; nonetheless, an ongoing challenge is their ability to generate synthetic data representing the real SEPA. Hence, this paper proposes the use of Poisson regression modelling for transforming simulated data into a better approximation of real SEPA. First, synthetic and real data were compared to verify the equivalence hypothesis. Then, several Poisson regression models were formulated for estimating real SEPA using simulated data. The outcomes revealed that real SEPA can be better approximated (R²pred = 92.72%) if synthetic data are post-processed through Poisson regression incorporating dummy variables.
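A hedged sketch of the post-processing step, with illustrative column names and simulated data: a Poisson GLM (statsmodels) regresses the real SEPA on the simulated SEPA plus dummy-coded activity variables.

```python
# Hedged sketch: fit a Poisson regression mapping the simulated number of sensor
# events per activity (SEPA) to the real SEPA, with dummy variables for the activity
# type. Column names and data are illustrative, not the paper's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
activities = rng.choice(["cooking", "sleeping", "bathing"], size=300)
sim_sepa = rng.poisson(lam=20, size=300)
# Real SEPA depends on the simulated counts plus an activity-specific offset
offset = pd.Series(activities).map({"cooking": 0.3, "sleeping": -0.2, "bathing": 0.1})
real_sepa = rng.poisson(np.exp(1.0 + 0.05 * sim_sepa + offset))

df = pd.DataFrame({"real_sepa": real_sepa, "sim_sepa": sim_sepa, "activity": activities})

# Poisson GLM with dummy-coded activity (C(activity))
model = smf.glm("real_sepa ~ sim_sepa + C(activity)", data=df,
                family=sm.families.Poisson()).fit()
print(model.summary().tables[1])
```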


Life ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 716
Author(s):  
Yunhe Liu ◽  
Aoshen Wu ◽  
Xueqing Peng ◽  
Xiaona Liu ◽  
Gang Liu ◽  
...  

Despite the many scRNA-seq analytic algorithms developed, their performance in cell clustering cannot be quantified due to the unknown “true” clusters. Referencing the transcriptomic heterogeneity of cell clusters, a “true” mRNA number matrix of individual cells was defined as ground truth. Based on this matrix and the actual data generation procedure, a simulation program (SSCRNA) for raw data was developed. Subsequently, the consistency between simulated data and real data was evaluated. Furthermore, the impact of sequencing depth and of the analysis algorithms on cluster accuracy was quantified. As a result, the simulation result was highly consistent with that of the actual data. Among the normalization methods, the Gaussian normalization method was the more recommended. As for the clustering algorithms, the K-means clustering method was more stable than K-means plus Louvain clustering. In conclusion, the scRNA simulation algorithm developed here reproduces the actual data generation process, reveals the impact of parameters on classification, compares the normalization/clustering algorithms, and provides novel insight into scRNA analyses.
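A minimal sketch of the evaluation logic, not the SSCRNA program itself: a known “true” count matrix with planted clusters is thinned to mimic sequencing depth, and clustering accuracy against the ground truth is scored with the adjusted Rand index.

```python
# Illustrative sketch: start from a known "true" mRNA count matrix with two cell
# clusters, mimic limited sequencing depth by binomial downsampling, then quantify
# how well K-means recovers the planted clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(5)
n_cells, n_genes = 400, 1000

# Ground-truth mRNA counts: two clusters with different expression programs
labels = rng.integers(0, 2, size=n_cells)
base = rng.gamma(shape=2.0, scale=5.0, size=(2, n_genes))
true_counts = rng.poisson(base[labels])

def downsample(counts, capture_rate):
    """Binomial thinning as a crude model of sequencing depth / capture efficiency."""
    return rng.binomial(counts, capture_rate)

for rate in (0.5, 0.1, 0.02):
    observed = downsample(true_counts, rate)
    norm = np.log1p(observed / observed.sum(axis=1, keepdims=True) * 1e4)
    pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(norm)
    print(f"capture rate {rate:.2f}: ARI = {adjusted_rand_score(labels, pred):.3f}")
```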


Author(s):  
Zhanpeng Wang ◽  
Jiaping Wang ◽  
Michael Kourakos ◽  
Nhung Hoang ◽  
Hyong Hark Lee ◽  
...  

Abstract
Population genetics relies heavily on simulated data for validation, inference, and intuition. In particular, since real data is always limited, simulated data is crucial for training machine learning methods. Simulation software can accurately model evolutionary processes, but requires many hand-selected input parameters. As a result, simulated data often fails to mirror the properties of real genetic data, which limits the scope of methods that rely on it. In this work, we develop a novel approach to estimating parameters in population genetic models that automatically adapts to data from any population. Our method is based on a generative adversarial network that gradually learns to generate realistic synthetic data. We demonstrate that our method is able to recover input parameters in a simulated isolation-with-migration model. We then apply our method to human data from the 1000 Genomes Project, and show that we can accurately recapitulate the features of real data.
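A toy sketch of the adversarial idea under heavy simplification (a one-parameter Gaussian “simulator” and a grid search instead of the paper's generative adversarial architecture): the parameter is chosen so that a classifier can no longer separate real from simulated samples.

```python
# Toy illustration (not the paper's architecture): a simulator with one unknown
# parameter theta generates data, a classifier tries to tell real from simulated
# samples, and theta is tuned so that the classifier cannot discriminate.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)

def simulate(theta, n=500):
    """Stand-in simulator: summary statistics whose distribution depends on theta."""
    return rng.normal(loc=theta, scale=1.0, size=(n, 5))

real_data = simulate(theta=2.3)                      # hidden "true" parameter

def discriminator_accuracy(theta):
    sim = simulate(theta)
    X = np.vstack([real_data, sim])
    y = np.r_[np.ones(len(real_data)), np.zeros(len(sim))]
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=3).mean()

# Crude parameter search: the best theta is the one the classifier cannot exploit
candidates = np.linspace(0, 5, 26)
scores = [discriminator_accuracy(t) for t in candidates]
best = candidates[int(np.argmin(scores))]
print(f"estimated theta ~ {best:.2f} (discriminator accuracy {min(scores):.2f})")
```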


2018 ◽  
Author(s):  
Yichen Li ◽  
Rebecca Saxe ◽  
Stefano Anzellotti

Abstract
Noise is a major challenge for the analysis of fMRI data in general and for connectivity analyses in particular. As researchers develop increasingly sophisticated tools to model statistical dependence between the fMRI signal in different brain regions, there is a risk that these models may increasingly capture artifactual relationships between regions that are the result of noise. Thus, choosing optimal denoising methods is a crucial step to maximize the accuracy and reproducibility of connectivity models. Most comparisons between denoising methods require knowledge of the ground truth: of what the ‘real signal’ is. For this reason, they are usually based on simulated fMRI data. However, simulated data may not match the statistical properties of real data, limiting the generalizability of the conclusions. In this article, we propose an approach to evaluate denoising methods using real (non-simulated) fMRI data. First, we introduce an intersubject version of multivariate pattern dependence (iMVPD) that computes the statistical dependence between a brain region in one participant and another brain region in a different participant. iMVPD has the following advantages: 1) it is multivariate, 2) it trains and tests models on independent folds of the real fMRI data, and 3) it generates predictions that are both between subjects and between regions. Since whole-brain sources of noise are more strongly correlated within subject than between subjects, we can use the difference between standard MVPD and iMVPD as a ‘discrepancy metric’ to evaluate denoising techniques (where more effective techniques should yield smaller differences). As predicted, the difference is greatest in the absence of denoising methods. Furthermore, a combination of global signal removal and CompCorr optimizes denoising (among the set of denoising options tested).
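A hedged sketch of the discrepancy metric on synthetic placeholder data: region-to-region predictivity is estimated with cross-validated ridge regression within one “subject” (MVPD-like) and across “subjects” (iMVPD-like), and their difference is reported; the dimensions and noise model are assumptions for illustration only.

```python
# Hedged sketch: within-subject predictivity is inflated by shared noise, while
# between-subject predictivity is not; the gap serves as a discrepancy metric.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(21)
n_time, n_vox = 600, 30

shared = rng.normal(size=(n_time, 5))                       # stimulus-driven signal
noise1 = rng.normal(size=(n_time, 1))                       # subject-1 global noise
noise2 = rng.normal(size=(n_time, 1))                       # subject-2 global noise

def region(shared, noise, seed):
    w = np.random.default_rng(seed).normal(size=(5, n_vox))
    return shared @ w + 0.8 * noise + 0.5 * rng.normal(size=(n_time, n_vox))

A_s1, B_s1 = region(shared, noise1, 1), region(shared, noise1, 2)
B_s2 = region(shared, noise2, 3)

def predictivity(X, Y):
    """Mean cross-validated R^2 of predicting each voxel of Y from X."""
    return np.mean([cross_val_score(Ridge(alpha=1.0), X, Y[:, v], cv=5).mean()
                    for v in range(Y.shape[1])])

mvpd = predictivity(A_s1, B_s1)       # within subject: inflated by shared noise
imvpd = predictivity(A_s1, B_s2)      # between subjects: noise does not transfer
print(f"MVPD-like R^2 = {mvpd:.3f}, iMVPD-like R^2 = {imvpd:.3f}, "
      f"discrepancy = {mvpd - imvpd:.3f}")
```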


2021 ◽  
Vol 8 ◽  
Author(s):  
Daniel Schneider ◽  
Lukas Anschuetz ◽  
Fabian Mueller ◽  
Jan Hermann ◽  
Gabriela O'Toole Bom Braga ◽  
...  

Hypothesis: The use of freehand stereotactic image-guidance with a target registration error (TRE) of μTRE + 3σTRE < 0.5 mm for navigating surgical instruments during neurotologic surgery is safe and useful.

Background: Neurotologic microsurgery requires work at the limits of human visual and tactile capabilities. Anatomy localization comes at the expense of invasiveness caused by exposing structures and using them as orientation landmarks. In the absence of more-precise and less-invasive anatomy localization alternatives, surgery poses considerable risks of iatrogenic injury and sub-optimal treatment. There exists an unmet clinical need for an accurate, precise, and minimally-invasive means for anatomy localization and instrument navigation during neurotologic surgery. Freehand stereotactic image-guidance constitutes a solution to this. While the technology is routinely used in medical fields such as neurosurgery and rhinology, to date, it is not used for neurotologic surgery due to insufficient accuracy of clinically available systems.

Materials and Methods: A freehand stereotactic image-guidance system tailored to the needs of neurotologic surgery (most importantly, sub-half-millimeter accuracy) was developed. Its TRE was assessed preclinically using a task-specific phantom. A pilot clinical trial targeting N = 20 study participants was conducted (ClinicalTrials.gov ID: NCT03852329) to validate the accuracy and usefulness of the developed system. Clinically, objective assessment of the TRE is impossible because establishing a sufficiently accurate ground truth is impossible. A method was therefore used to validate accuracy and usefulness based on an intersubjectivity assessment of surgeon ratings of corresponding image-pairs from the microscope/endoscope and the image-guidance system.

Results: During the preclinical accuracy assessment, the TRE was measured as 0.120 ± 0.05 mm (max: 0.27 mm, μTRE + 3σTRE = 0.27 mm, N = 310). Due to the COVID-19 pandemic, the study was terminated early after N = 3 participants. During an endoscopic cholesteatoma removal, a microscopic facial nerve schwannoma removal, and a microscopic revision cochlear implantation, N = 75 accuracy and usefulness ratings were collected from five surgeons, each grading 15 image-pairs. On a scale from 1 (worst rating) to 5 (best rating), the median (interquartile range) accuracy and usefulness ratings were 5 (4–5) and 4 (4–5), respectively.

Conclusion: Navigating surgery in the tympanomastoid compartment, and potentially in the lateral skull base, with sufficiently accurate freehand stereotactic image-guidance (μTRE + 3σTRE < 0.5 mm) is feasible, safe, and useful.

Clinical Trial Registration: www.ClinicalTrials.gov, identifier NCT03852329.
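For concreteness, a small numerical check of the quoted accuracy criterion, using illustrative (not the study's) TRE samples:

```python
# Given a set of measured target registration errors, compute muTRE + 3*sigmaTRE and
# compare it against the 0.5 mm threshold. The sample values are illustrative only.
import numpy as np

tre_mm = np.random.default_rng(2).normal(loc=0.12, scale=0.05, size=310).clip(min=0)

mu, sigma = tre_mm.mean(), tre_mm.std(ddof=1)
criterion = mu + 3 * sigma
print(f"muTRE = {mu:.3f} mm, sigmaTRE = {sigma:.3f} mm, "
      f"muTRE + 3*sigmaTRE = {criterion:.3f} mm "
      f"({'meets' if criterion < 0.5 else 'fails'} the < 0.5 mm requirement)")
```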

