Developing a visual method to characterize displays

2019 ◽  
Vol 2019 (1) ◽  
pp. 75-79
Author(s):  
Yu Hu ◽  
Ming Ronnier Luo

The goal is to develop a display characterization model that incorporates personal vision characteristics. A two-step model for visually characterizing displays was developed: it uses a half-toning technique to obtain the gamma factor for each colour channel, and the unique-hue concept to derive the 3×3 matrix coefficients. Individual variation is represented by the optimized RGB primaries for each observer. The typical difference between the individual model and the measured ground truth is 2.2 CIEDE2000 units.
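The two-step forward model the abstract describes, per-channel gamma followed by a 3×3 matrix, can be sketched as below; the gamma values and matrix entries are illustrative sRGB-like placeholders, not the paper's fitted coefficients:

```python
def characterize(rgb, gammas, matrix):
    """Two-step display model: per-channel gamma, then a 3x3 matrix to XYZ."""
    # Step 1: linearize each channel with its own gamma factor
    linear = [c ** g for c, g in zip(rgb, gammas)]
    # Step 2: the 3x3 matrix maps linearized RGB to tristimulus values
    return [sum(m * c for m, c in zip(row, linear)) for row in matrix]

# Illustrative sRGB-like values (placeholders, not the paper's fitted data)
gammas = (2.2, 2.2, 2.2)
matrix = [(0.4124, 0.3576, 0.1805),
          (0.2126, 0.7152, 0.0722),
          (0.0193, 0.1192, 0.9505)]
xyz = characterize((1.0, 1.0, 1.0), gammas, matrix)  # white point
```

In the paper's method, the per-observer fitting would adjust the gammas and matrix so that predicted and measured colours agree to within the reported 2.2 CIEDE2000 units.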

Author(s):  
Volker A. Coenen ◽  
Bastian E. Sajonz ◽  
Peter C. Reinacher ◽  
Christoph P. Kaller ◽  
Horst Urbach ◽  
...  

Abstract
Background: An increasing number of neurosurgeons use the display of the dentato-rubro-thalamic tract (DRT), based on diffusion-weighted imaging (dMRI), as a basis for their routine planning of stimulation or lesioning approaches in stereotactic tremor surgery. An evaluation of the anatomical validity of the display of the DRT with respect to modern stereotactic planning systems and across different tracking environments has not been performed.
Methods: Distinct dMRI and anatomical magnetic resonance imaging (MRI) data of high and low quality from 9 subjects were used. Six subjects had repeated MRI scans and therefore entered the analysis twice. Standardized DICOM structure templates for volume-of-interest definition were applied in native space for all investigations. For tracking, BrainLab Elements (BrainLab, Munich, Germany) two-tensor deterministic tracking (FT2), MRtrix iFOD2 (https://www.mrtrix.org), and a global tracking (GT) approach were used to compare the display of the uncrossed (DRTu) and crossed (DRTx) fiber structure after transformation into MNI space. The resulting streamlines were investigated for congruence, reproducibility, anatomical validity, and penetration of anatomical waypoint structures.
Results: In general, the DRTu can be depicted with good quality (as judged by waypoints). FT2 (surgical) and GT (neuroscientific) show high congruence. While GT shows partly reproducible results for DRTx, the crossed pathway cannot be reliably reconstructed with the other algorithms (iFOD2 and FT2).
Conclusion: Since a direct anatomical comparison is difficult in individual subjects, we chose a comparison with two research tracking environments as the best possible “ground truth.” FT2 is useful especially because it allows manual editing to cut erroneous fibers at the single-subject level. An uncertainty of 2 mm as the mean displacement of DRTu is to be expected and should be respected when using this approach for surgical planning. Tractographic renditions of the DRTx at the single-subject level still appear to be elusive.


2020 ◽  
Vol 6 (3) ◽  
pp. 284-287
Author(s):  
Jannis Hagenah ◽  
Mohamad Mehdi ◽  
Floris Ernst

Abstract
Aortic root aneurysm is treated by replacing the dilated root with a grafted prosthesis that mimics the native root morphology of the individual patient. The challenge in predicting the optimal prosthesis size arises from the highly patient-specific geometry as well as the absence of information on the original healthy root. Therefore, the estimation is only possible based on the available pathological data. In this paper, we show that representation learning with Conditional Variational Autoencoders is capable of turning the distorted geometry of the aortic root into smoother shapes while preserving the information on the individual anatomy. We evaluated this method using ultrasound images of the porcine aortic root alongside their labels. The results show a highly realistic resemblance in shape and size to the ground truth images. Furthermore, the similarity index improved noticeably compared to the pathological images. This makes the technique promising for planning individual aortic root replacement.
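The abstract's "similarity index" is not named; one common choice for comparing binary shape masks against ground truth is the Dice coefficient, sketched here purely for illustration:

```python
def dice(mask_a, mask_b):
    """Dice similarity index between two binary masks (flat lists of 0/1)."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # 2*|A ∩ B| / (|A| + |B|); two empty masks count as identical
    return 2.0 * intersection / total if total else 1.0

# Toy masks: a reconstructed root shape vs. its ground truth label
reconstructed = [1, 1, 1, 0, 0, 0]
ground_truth  = [1, 1, 0, 0, 0, 0]
score = dice(reconstructed, ground_truth)  # 2*2/(3+2) = 0.8
```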


2021 ◽  
Vol 2 (1) ◽  
pp. 1-25
Author(s):  
Srinivasan Iyengar ◽  
Stephen Lee ◽  
David Irwin ◽  
Prashant Shenoy ◽  
Benjamin Weil

Buildings consume over 40% of the total energy in modern societies, and improving their energy efficiency can significantly reduce our energy footprint. In this article, we present WattScale, a data-driven approach to identify the least energy-efficient buildings from a large population of buildings in a city or a region. Unlike previous methods such as least squares that use point estimates, WattScale uses Bayesian inference to capture the stochasticity in daily energy usage by estimating the distribution of parameters that affect a building, and then compares them with those of similar homes in a given population. WattScale also incorporates a fault detection algorithm to identify the underlying causes of energy inefficiency. We validate our approach using ground truth data from different geographical locations, which showcases its applicability in various settings. WattScale has two execution modes, (i) individual and (ii) region-based, which we highlight using two case studies. For the individual execution mode, we present results from a city containing >10,000 buildings and show that more than half of the buildings are inefficient in one way or another, indicating significant potential for energy-improvement measures. Additionally, we provide probable causes of inefficiency and find that 41%, 23.73%, and 0.51% of homes have poor building envelopes, heating system faults, and cooling system faults, respectively. For the region-based execution mode, we show that WattScale can be extended to millions of homes in the U.S. due to the recent availability of representative energy datasets.
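The Bayesian flavor of WattScale can be illustrated with a toy conjugate-normal update that fuses a population prior with a building's observed daily usage. This is only a sketch, not the paper's actual model, which estimates full distributions over several building parameters:

```python
def posterior_mean_var(prior_mu, prior_var, obs, noise_var):
    """Conjugate normal update: fuse a population prior over mean daily
    usage with a building's observed readings; returns (mean, variance)."""
    n = len(obs)
    sample_mean = sum(obs) / n
    # Precision (inverse variance) adds; the posterior mean is a
    # precision-weighted blend of prior and data
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mu = post_var * (prior_mu / prior_var + n * sample_mean / noise_var)
    return post_mu, post_var

# Population prior: similar homes use ~30 kWh/day; this home reads higher
mu, var = posterior_mean_var(30.0, 25.0, [38.0, 41.0, 39.0, 42.0], 16.0)
```

The posterior distribution, rather than a single least-squares point estimate, is what allows a home to be flagged as inefficient with quantified uncertainty.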


2018 ◽  
Vol 15 (6) ◽  
pp. 172988141881470
Author(s):  
Nezih Ergin Özkucur ◽  
H Levent Akın

Self-localization in autonomous robots is one of the fundamental issues in the development of intelligent robots, and processing raw sensory information into useful features is an integral part of this problem. In a typical scenario, there are several choices for the feature extraction algorithm, and each has its weaknesses and strengths depending on the characteristics of the environment. In this work, we introduce a localization algorithm that captures the quality of a feature type based on the local environment and makes a soft selection of feature types across different regions. A batch expectation–maximization algorithm is developed for both discrete and Monte Carlo localization models, exploiting the probabilistic pose estimations of the robot without requiring ground truth poses, and treating the different observation types as black-box algorithms. We tested our method in simulations, on data collected from an indoor environment with a custom robot platform, and on a public data set. The results are compared with those of the individual feature types as well as a naive fusion strategy.
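The "soft selection of feature types" can be sketched as an E-step that weights each feature type by how well it explains local observations; this is illustrative only, not the authors' full batch expectation-maximization algorithm:

```python
def soft_select(likelihoods, priors):
    """E-step style soft selection: posterior weight of each feature type,
    given the likelihood its observations assign to the current region."""
    joint = [l * p for l, p in zip(likelihoods, priors)]
    z = sum(joint)  # normalizer so weights form a distribution
    return [j / z for j in joint]

# Feature type 0 explains the local observations much better than type 1,
# so it dominates the fused pose estimate in this region
weights = soft_select([0.9, 0.1], [0.5, 0.5])
```

In an M-step, these per-region weights would then be re-estimated from the accumulated pose posteriors, which is what removes the need for ground truth poses.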


Author(s):  
Hao Zhang ◽  
Liangxiao Jiang ◽  
Wenqiang Xu

Crowdsourcing services provide a fast, efficient, and cost-effective means of obtaining the large labeled datasets needed for supervised learning. Ground truth inference, also called label integration, designs proper aggregation strategies to infer the unknown true label of each instance from the multiple noisy label set provided by ordinary crowd workers. However, to the best of our knowledge, nearly all existing label integration methods focus solely on the multiple noisy label set of the individual instance itself while totally ignoring the intercorrelation among the multiple noisy label sets of different instances. To solve this problem, a multiple noisy label distribution propagation (MNLDP) method is proposed in this study. MNLDP first transforms the multiple noisy label set of each instance into its multiple noisy label distribution and then propagates this distribution to the instance's nearest neighbors. Consequently, each instance absorbs a fraction of the multiple noisy label distributions from its nearest neighbors while simultaneously maintaining a fraction of its own original multiple noisy label distribution. Promising experimental results on simulated and real-world datasets validate the effectiveness of our proposed method.
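The propagation step described above can be sketched as a convex combination of an instance's own label distribution and the average of its neighbors'. The mixing weight alpha is an assumed parameter here; the paper's exact update rule may differ:

```python
def propagate(own_dist, neighbor_dists, alpha=0.5):
    """One MNLDP-style step: keep a fraction (alpha) of the instance's own
    noisy label distribution and absorb the rest from its neighbors."""
    k = len(own_dist)
    avg = [sum(d[i] for d in neighbor_dists) / len(neighbor_dists)
           for i in range(k)]
    return [alpha * o + (1 - alpha) * a for o, a in zip(own_dist, avg)]

# Workers voted 2:1 for class 0 on this instance, but both of its nearest
# neighbors lean strongly toward class 1
new = propagate([2 / 3, 1 / 3], [[0.25, 0.75], [0.25, 0.75]])
```

After propagation the neighbors' evidence tips the inferred label toward class 1, which is exactly the intercorrelation effect that per-instance methods cannot capture.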


2018 ◽  
Vol 31 (3) ◽  
pp. 81-86
Author(s):  
Elizabeth Hartney

The current healthcare system is often a highly stressful environment for patients, their families, and the employees of the system. Health leaders also experience stress, which can have profound repercussions if not well managed. This article describes the impact of stress on the brain and nervous system functioning of health leaders and then, drawing on evidence from the literature, presents a three-step model for managing stress at the individual, team/organizational, and system levels.


2019 ◽  
Vol 4 (1) ◽  
Author(s):  
Milos Kudelka ◽  
Eliska Ochodkova ◽  
Sarka Zehnalova ◽  
Jakub Plesnik

Abstract The existence of groups of nodes with common characteristics and the relationships between these groups are important factors influencing the structures of social, technological, biological, and other networks. Uncovering such groups and the relationships between them is therefore necessary for understanding these structures. Groups can either be found by detection algorithms based solely on structural analysis or identified on the basis of more in-depth knowledge of the processes taking place in networks. In the first case, these are mainly algorithms detecting non-overlapping communities or communities with small overlaps. The latter case is about identifying ground-truth communities, also on the basis of characteristics other than network structure alone. Recent research into ground-truth communities shows that in real-world networks there are nested communities, or communities with large and dense overlaps, which we are not yet able to detect satisfactorily on the basis of structural network properties alone.
In our approach, we present a new perspective on the problem of group detection using only the structural properties of networks. Its main contribution is pointing out the existence of large and dense overlaps of detected groups. We use the non-symmetric structural similarity between pairs of nodes, which we refer to as dependency, to detect groups that we call zones. Unlike other approaches, thanks to this non-symmetry we are able to accurately describe the prominent nodes in the zones that are responsible for large zone overlaps, and the reasons why the overlaps occur. The individual zones that are detected provide new information associated in particular with the non-symmetric relationships within the group and the roles that individual nodes play in the zone. From the perspective of global network structure, the non-symmetric node-to-node relationships let us explore new properties of real-world networks that describe the differences between various types of networks.
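The non-symmetric dependency can be illustrated with a simple neighborhood-overlap measure; the paper's exact definition may differ, but the sketch shows why dependency(a, b) need not equal dependency(b, a):

```python
def dependency(a, b, adj):
    """Non-symmetric structural similarity: the fraction of a's neighbors
    that are shared with b. In general dependency(a,b) != dependency(b,a)."""
    na, nb = adj[a], adj[b]
    return len(na & nb) / len(na) if na else 0.0

# adj maps each node to its neighbor set; u has one neighbor v lacks
adj = {"u": {"x", "y", "z"}, "v": {"x", "y"}}
d_uv = dependency("u", "v", adj)  # 2/3: u only partly depends on v
d_vu = dependency("v", "u", adj)  # 1.0: all of v's neighbors are shared
```

A node like v, fully covered by a hub's neighborhood, is the kind of asymmetry that lets prominent nodes sit in several overlapping zones at once.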


2018 ◽  
Author(s):  
Madeny Belkhiri ◽  
Duda Kvitsiani

Abstract
Understanding how populations of neurons represent and compute internal or external variables requires precise and objective metrics for tracing the individual spikes that belong to a given neuron. Despite recent progress in the development of accurate and fast spike sorting tools, the scarcity of ground truth data makes it difficult to settle on the best-performing spike sorting algorithm. Moreover, the use of different electrode configurations and signal acquisition settings (e.g. anesthetized, head-fixed, or freely behaving animal recordings; tetrodes vs. silicon probes, etc.) makes it even harder to develop a universal spike sorting tool that will perform well without human intervention. Some of the prevalent problems in spike sorting are units separating due to drift, clustering bursting cells, and dealing with nonstationarity in background noise. The last is particularly problematic in freely behaving animals, where noise from the electrophysiological activity of hundreds or thousands of neurons is intermixed with noise arising from movement artifacts. We address these problems by developing a new spike sorting tool based on a template matching algorithm. The spike waveform templates are used to perform normalized cross correlation (NCC) with the acquired signal for spike detection. The normalization addresses problems with drift, bursting, and nonstationarity of noise, and provides normative scoring to compare different units in terms of cluster quality. Our spike sorting algorithm, D.sort, runs on the graphics processing unit (GPU) to accelerate computations. D.sort is a freely available software package (https://github.com/1804MB/Kvistiani-lab_Dsort).
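The core detection step, normalized cross correlation of a spike template against a signal window, can be sketched as follows; the waveform values are made up for illustration (D.sort itself runs this on the GPU over whole recordings):

```python
def ncc(template, window):
    """Normalized cross correlation between a spike template and a signal
    window of equal length; returns a score in [-1, 1]."""
    n = len(template)
    mt = sum(template) / n
    mw = sum(window) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    dw = sum((w - mw) ** 2 for w in window) ** 0.5
    return num / (dt * dw) if dt and dw else 0.0

template = [0.0, 1.0, 3.0, 1.0, 0.0]  # toy spike waveform
# An amplitude-scaled, offset copy still scores 1.0: the normalization is
# what makes detection robust to drift and nonstationary noise
score = ncc(template, [2.0, 4.0, 8.0, 4.0, 2.0])
```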


2021 ◽  
Vol 9 ◽  
Author(s):  
Stefan Pielsticker ◽  
Benjamin Gövert ◽  
Kentaro Umeki ◽  
Reinhold Kneer

Biomass is a complex material mainly composed of three lignocellulosic components: cellulose, hemicellulose, and lignin. The different molecular structures of the individual components result in different decomposition mechanisms during the pyrolysis process. To understand the underlying reactions in more detail, the individual components can be extracted from the biomass and then investigated separately. In this work, the pyrolysis kinetics of extracted and purified cellulose, hemicellulose, and lignin are examined experimentally in a small-scale fluidized bed reactor (FBR) under N2 pyrolysis conditions. The FBR provides high particle heating rates (approx. 10⁴ K/s) at medium temperatures (573–973 K) with unlimited reaction time and thus complements the typically used thermogravimetric analyzers (TGA, low heating rate) and drop tube reactors (high temperature and heating rate). Based on the time-dependent gas concentrations of 22 species, the release rates of these species as well as the overall rate of volatile release are calculated. A single first-order (SFOR) reaction model and a 2-step model, both combined with Arrhenius kinetics, are calibrated for all three components individually. Considering the FBR and additional TGA experiments, different reaction regimes with different activation energies could be identified. Using dimensionless pyrolysis numbers, limits due to reaction kinetics and heat transfer could be determined. The evaluation of the overall model performance revealed model predictions within the ±2σ standard deviation band for cellulose and hemicellulose. For lignin, only the 2-step model gave satisfying results. Modifications to the SFOR model (restricting the yield to the primary pyrolysis peak, or assuming distributed reactivity) were found to be promising approaches for describing flash pyrolysis behavior and will be investigated further in the future.
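The single first-order (SFOR) model with Arrhenius kinetics can be sketched as follows; the kinetic parameters below are illustrative placeholders, not the calibrated values from the paper:

```python
import math

def sfor_mass(t, T, A, E, m0=1.0, R=8.314):
    """SFOR model: remaining volatile mass fraction after time t (s) at
    temperature T (K), with Arrhenius rate k = A * exp(-E / (R*T))."""
    k = A * math.exp(-E / (R * T))   # first-order rate constant, 1/s
    return m0 * math.exp(-k * t)     # dm/dt = -k*m  =>  m = m0*exp(-k*t)

# Illustrative kinetic parameters (A in 1/s, E in J/mol)
m = sfor_mass(t=1.0, T=873.0, A=1.0e6, E=1.0e5)
```

A 2-step variant would chain two such rate expressions with their own (A, E) pairs, which is what allowed the lignin data to be fitted where the single step failed.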


2012 ◽  
Vol 35 (2) ◽  
pp. 97-100 ◽  
Author(s):  
Christopher Malon ◽  
Elena Brachtel ◽  
Eric Cosatto ◽  
Hans Peter Graf ◽  
Atsushi Kurata ◽  
...  

Despite the prognostic importance of the mitotic count as one of the components of the Bloom–Richardson grade [3], several studies [2, 9, 10] have found that pathologists' agreement on the mitotic grade is fairly modest. Collecting a set of more than 4,200 candidate mitotic figures, we evaluate pathologists' agreement on individual figures and train a computerized system for mitosis detection, comparing its performance to the classifications of three pathologists. The system's and the pathologists' classifications are based on evaluation of digital micrographs of hematoxylin and eosin stained breast tissue. On figures where the majority of pathologists agree on a classification, we compare the performance of the trained system to that of the individual pathologists. We find that the level of agreement of the pathologists ranges from slight to moderate, with strong biases, and that the system performs competitively in rating the ground truth set. This study is a step towards automatic mitosis counting to accelerate a pathologist's work and improve reproducibility.
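The abstract does not name its agreement statistic; a common chance-corrected choice for a pair of raters (where "slight" and "moderate" are conventional kappa bands) is Cohen's kappa, sketched here with made-up ratings:

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two raters, corrected for the
    agreement expected by chance from their marginal label frequencies."""
    n = len(labels_a)
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n  # observed
    cats = set(labels_a) | set(labels_b)
    pe = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
             for c in cats)                                   # by chance
    return (po - pe) / (1 - pe) if pe != 1 else 1.0

# Two raters on 10 toy candidate figures (1 = mitotic, 0 = not mitotic)
a = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
b = [1, 1, 0, 0, 0, 0, 0, 1, 1, 0]
kappa = cohens_kappa(a, b)  # 0.8 raw agreement shrinks once chance is removed
```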

