Ground truth: recently published documents

Total documents: 4527 (five years: 2689)
H-index: 55 (five years: 15)

2022 · Vol 16 (4) · pp. 1-18
Author(s): Min-Ling Zhang, Jing-Han Wu, Wei-Xuan Bao

As an emerging weakly supervised learning framework, partial label learning considers inaccurate supervision where each training example is associated with multiple candidate labels, among which only one is valid. In this article, a first attempt at employing dimensionality reduction to help improve the generalization performance of partial label learning systems is investigated. Specifically, the popular linear discriminant analysis (LDA) techniques are endowed with the ability to deal with partial label training examples. To tackle the challenge of unknown ground-truth labeling information, a novel learning approach named Delin is proposed which alternates between LDA dimensionality reduction and candidate label disambiguation based on estimated labeling confidences over candidate labels. On the one hand, the (kernelized) projection matrix of LDA is optimized by utilizing disambiguation-guided labeling confidences. On the other hand, the labeling confidences are disambiguated by resorting to kNN aggregation in the LDA-induced feature space. Extensive experiments over a broad range of partial label datasets clearly validate the effectiveness of Delin in improving the generalization performance of well-established partial label learning algorithms.
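
A minimal, hedged sketch of the alternating scheme described above (not the authors' Delin implementation: the confidence-weighted scatter matrices and the kNN voting rule are simplified assumptions made here for illustration):

```python
# Sketch: alternate between a confidence-weighted LDA-style projection and
# kNN-based disambiguation of candidate labels (simplified surrogate of Delin).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def delin_like(X, candidates, n_components=2, k=10, n_iters=5):
    """X: (n, d) features; candidates: (n, q) binary candidate-label mask."""
    n, d = X.shape
    q = candidates.shape[1]
    # Start with uniform labeling confidences over each example's candidate set.
    conf = candidates / candidates.sum(axis=1, keepdims=True)
    for _ in range(n_iters):
        # Confidence-weighted class means and within-class scatter (soft LDA surrogate).
        mu = conf.T @ X / conf.sum(axis=0)[:, None]          # (q, d)
        Sw = np.zeros((d, d))
        for c in range(q):
            diff = X - mu[c]
            Sw += (conf[:, c:c + 1] * diff).T @ diff
        Sb = np.cov(mu.T) * q                                # rough between-class scatter
        # Projection directions: leading eigenvectors of Sw^{-1} Sb.
        eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
        order = np.argsort(-np.real(eigvals))[:n_components]
        W = np.real(eigvecs[:, order])
        Z = X @ W
        # Disambiguate: aggregate neighbours' confidences, restricted to candidates.
        nbrs = NearestNeighbors(n_neighbors=k).fit(Z)
        _, idx = nbrs.kneighbors(Z)
        votes = conf[idx].sum(axis=1) * candidates
        conf = votes / np.clip(votes.sum(axis=1, keepdims=True), 1e-12, None)
    return W, conf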


2022 · Vol 29 (2) · pp. 1-33
Author(s): Nigel Bosch, Sidney K. D'Mello

The ability to identify whether a user is “zoning out” (mind wandering) from video has many HCI applications (e.g., distance learning, high-stakes vigilance tasks). However, it remains unknown how well humans can perform this task, how they compare to automatic computerized approaches, and how a fusion of the two might improve accuracy. We analyzed videos of users’ faces and upper bodies recorded 10 s prior to self-reported mind wandering (i.e., ground truth) while they engaged in a computerized reading task. We found that a state-of-the-art machine learning model had comparable accuracy to the aggregated judgments of nine untrained human observers (area under the receiver operating characteristic curve [AUC] = .598 versus .589). A fusion of the two (AUC = .644) outperformed each alone, presumably because each focused on complementary cues. Furthermore, adding more humans beyond 3–4 observers yielded diminishing returns. We discuss the implications of human–computer fusion as a means to improve accuracy in complex tasks.
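
A minimal sketch (entirely synthetic data, not the study's pipeline) of the kind of late fusion and AUC comparison described above, averaging a model's scores with the fraction of human observers reporting mind wandering:

```python
# Sketch: compare model, aggregated human, and fused scores with ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)                                      # ground-truth labels
model_scores = np.clip(y * 0.1 + rng.normal(0.5, 0.25, 500), 0, 1)    # synthetic model output
human_votes = rng.binomial(9, 0.45 + 0.1 * y) / 9.0                   # fraction of 9 observers

fused = (model_scores + human_votes) / 2.0                            # unweighted late fusion
for name, s in [("model", model_scores), ("humans", human_votes), ("fusion", fused)]:
    print(f"AUC {name}: {roc_auc_score(y, s):.3f}")
```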


2022 · Vol 41 (1) · pp. 1-17
Author(s): Xin Chen, Anqi Pang, Wei Yang, Peihao Wang, Lan Xu, ...

In this article, we present TightCap, a data-driven scheme to accurately capture both the human shape and dressed garments from only a single three-dimensional (3D) human scan, which enables numerous applications such as virtual try-on, biometrics, and body evaluation. To handle the severe variations of human poses and garments, we propose to model the clothing tightness field, i.e., the displacements from the garments to the underlying human shape, implicitly in the global UV texturing domain. To this end, we utilize an enhanced statistical human template and an effective multi-stage alignment scheme to map the 3D scan into a hybrid 2D geometry image. Based on this 2D representation, we propose a novel framework to predict the clothing tightness field via a novel tightness formulation, as well as an effective optimization scheme to further reconstruct multi-layer human shape and garments under various clothing categories and human postures. We further propose a new clothing tightness dataset of human scans with a large variety of clothing styles, poses, and corresponding ground-truth human shapes to stimulate further research. Extensive experiments demonstrate the effectiveness of TightCap in achieving high-quality reconstruction of human shape and dressed garments, as well as its further applications to clothing segmentation, retargeting, and animation.
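
A schematic illustration (not the TightCap code; the UV resolution and synthetic surfaces are assumptions) of what a per-texel clothing tightness field looks like: displacement vectors from garment surface points to the underlying body surface, stored on a shared UV grid as described above:

```python
# Sketch: a tightness field as garment-to-body displacements on a UV grid.
import numpy as np

H, W = 256, 256                                   # assumed UV resolution
garment_xyz = np.random.rand(H, W, 3)             # garment surface sampled in UV space
body_xyz = garment_xyz - np.abs(np.random.normal(0.01, 0.005, (H, W, 3)))  # body lies beneath

tightness = garment_xyz - body_xyz                # per-texel displacement (the tightness field)
magnitude = np.linalg.norm(tightness, axis=-1)    # scalar tightness map, e.g. for visualization
print("mean garment-to-body offset:", magnitude.mean())
```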


2022 · Vol 4
Author(s): Naoya Tanabe, Shizuo Kaji, Hiroshi Shima, Yusuke Shiraishi, Tomoki Maetani, ...

Chest computed tomography (CT) is used to screen for lung cancer and to evaluate pulmonary and extra-pulmonary abnormalities such as emphysema and coronary artery calcification, particularly in smokers. In real-world practice, lung abnormalities are visually assessed using high-contrast thin-slice images, which are generated from raw scan data using sharp reconstruction kernels at the cost of increased image noise. In contrast, accurate CT quantification requires low-contrast thin-slice images with low noise, which are generated using soft reconstruction kernels. However, only sharp-kernel thin-slice images are archived in many medical facilities due to limited data storage space. This study aimed to establish deep neural network (DNN) models to convert sharp-kernel images into soft-kernel-like images, with the final goal of reusing historical chest CT images for robust quantitative measurements, particularly in completed longitudinal studies. DNN models were trained using pairs of sharp-kernel (input) and soft-kernel (ground-truth) images from 30 patients with chronic obstructive pulmonary disease (COPD). The accuracy of kernel conversion with the established DNN models was then evaluated using CT scans from an independent group of 30 smokers with and without COPD. Differences in CT values between the images converted from sharp-kernel images by the DNN models and the ground-truth soft-kernel images were comparable to the inter-scan variability derived from repeated phantom scans (6 times), showing that the conversion error was at the same level as the measurement error of the CT device. Moreover, the Dice coefficients quantifying the similarity of low-attenuation voxels between given images and the ground-truth soft-kernel images were significantly higher for the DNN-converted images than for the Gaussian-filtered, median-filtered, and sharp-kernel images (p < 0.001). There was good agreement in quantitative measurements of emphysema, intramuscular adipose tissue, and coronary artery calcification between the converted and the ground-truth soft-kernel images. These findings demonstrate the validity of the new DNN model for kernel conversion and the clinical applicability of soft-kernel-like images converted from sharp-kernel images archived in previous clinical studies. The presented method for evaluating the validity of the established DNN model using repeated phantom scans could be applied to various deep learning-based image conversions for robust quantitative evaluation.
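
A minimal sketch (synthetic arrays, not the paper's code; the -950 HU threshold is a common emphysema convention assumed here) of the Dice comparison used above between low-attenuation masks on a converted image and on the soft-kernel ground truth:

```python
# Sketch: Dice coefficient between low-attenuation voxel masks of two CT volumes.
import numpy as np

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum() + 1e-12)

def low_attenuation_mask(ct_hu, threshold=-950):   # assumed emphysema threshold (HU)
    return ct_hu < threshold

soft_kernel = np.random.normal(-850, 60, (64, 128, 128))               # synthetic ground-truth CT
converted = soft_kernel + np.random.normal(0, 5, soft_kernel.shape)    # stand-in for DNN output

print("Dice:", dice(low_attenuation_mask(converted), low_attenuation_mask(soft_kernel)))
```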


2022
Author(s): Stephanie Hu, Steven Horng, Seth J. Berkowitz, Ruizhi Liao, Rahul G. Krishnan, ...

Accurately assessing the severity of pulmonary edema is critical for making treatment decisions in congestive heart failure patients. However, the current scale for quantifying pulmonary edema based on chest radiographs does not have well-characterized severity levels, and there is substantial inter-radiologist disagreement. In this study, we investigate whether comparisons documented in radiology reports can provide accurate characterizations of pulmonary edema progression. We propose a rule-based natural language processing approach that assesses the change in a patient's pulmonary edema status (e.g., better, worse, no change) by performing pairwise comparisons of consecutive radiology reports, using regular expressions and heuristics derived from clinical knowledge. Evaluated against ground-truth labels from expert radiologists, our labeler extracts comparisons describing the progression of pulmonary edema with 0.875 precision and 0.891 recall. We also demonstrate the potential utility of comparison labels in providing additional fine-grained information over noisier labels produced by models that directly estimate severity level.
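
A hedged sketch of the general idea (not the authors' rules; the patterns and label names below are illustrative assumptions): regular expressions applied to a report's comparison statements to label the change in pulmonary edema:

```python
# Sketch: rule-based labeling of pulmonary edema progression from report text.
import re

PATTERNS = {
    "improved":  r"\b(improv\w+|decreas\w+|resolv\w+|less)\b.{0,40}\b(edema|congestion)\b",
    "worsened":  r"\b(worsen\w+|increas\w+|progress\w+|more)\b.{0,40}\b(edema|congestion)\b",
    "no_change": r"\b(unchanged|stable|no (significant )?change)\b.{0,40}\b(edema|congestion)\b",
}

def label_comparison(report_text):
    text = report_text.lower()
    for label, pattern in PATTERNS.items():
        if re.search(pattern, text):
            return label
    return "no_comparison"

print(label_comparison("Interval worsening of pulmonary edema compared to prior."))  # worsened
```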


2022 · Vol 9 (2) · pp. 76-82
Author(s): James Downey, Zachary Ellis, Ethan Nguyen, Charlotte Spencer, Paul Evangelista

Each year, the National Training Center (NTC) located at Fort Irwin, California, hosts multiple Brigade-level rotational units to conduct training exercises. NTC’s Instrumentation Systems (NTC-IS) digitally capture and store characteristics of movement and maneuver, use of fires, and other tactical operations in a vast database. The Army’s Engineer Research and Development Center (ERDC) recently partnered with Training and Doctrine Command (TRADOC) to make some of the data available for introductory analysis within a relational database. While this data has the potential to expose capability gaps, uncover the truth behind doctrinal assumptions, and create a sophisticated feedback platform for Army leaders at all levels, it is largely unexplored and underutilized. The purpose of this project is to demonstrate the value of this data by developing a prototype information system that supports post-rotation analytics, playback capabilities, and repeatable workflows that measure and expose ground-truth operational and logistical behavior and performance during a rotation. The Army modeling and analysis community will use these products to systematically curate and archive the database and enable future analysis of the NTC-IS data.


2022 · Vol 14 (2) · pp. 393
Author(s): Mike Teucher, Detlef Thürkow, Philipp Alb, Christopher Conrad

Digital solutions in agricultural management promote food security and support the sustainable use of resources. As a result, remote sensing (RS) can be seen as an innovation for the fast generation of reliable information for agricultural management. Near real-time processed RS data can be used as a tool for decision making on multiple scales, from the subplot to the global level. This high potential is not yet fully exploited, owing to often limited access to ground-truth information, which is crucial for the development of transferable applications and for acceptance. In this study we present a digital workflow for the acquisition, processing and dissemination of agroecological information based on proprietary and open-source software tools with state-of-the-art web-mapping technologies. Data is processed in near real-time and can thus be used as ground-truth information to enhance the quality and performance of RS-based products. Data is disseminated through easy-to-understand visualizations and download functionalities tailored to specific application levels to serve specific user needs. The workflow can thus increase expert knowledge and be used for decision support at the same time. The fully digital workflow underscores the great potential to facilitate quality enhancement of future RS products in the context of precision agriculture by safeguarding data quality. The generated FAIR (findable, accessible, interoperable, reusable) datasets can be used to strengthen the relationship between scientists, initiatives and stakeholders.
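
A minimal sketch (hypothetical field names, not the study's schema) of how a single ground-truth field observation could be packaged as GeoJSON for the dissemination step above, making it directly usable in web-mapping clients and for RS product validation:

```python
# Sketch: one ground-truth observation serialized as a GeoJSON Feature.
import json
from datetime import datetime, timezone

observation = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [11.97, 51.48]},  # lon, lat (example values)
    "properties": {
        "crop": "winter wheat",                        # hypothetical attribute
        "phenology_bbch": 31,                          # hypothetical attribute
        "observed_at": datetime.now(timezone.utc).isoformat(),
        "license": "CC-BY-4.0",                        # explicit license supports FAIR reuse
    },
}
print(json.dumps(observation, indent=2))
```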


2022 · Vol 20 (1)
Author(s): Holger A. Lindner, Shigehiko Schamoni, Thomas Kirschning, Corinna Worm, Bianka Hahn, ...

Background: Sepsis is the leading cause of death in the intensive care unit (ICU). Expediting its diagnosis, which is largely determined by clinical assessment, improves survival. Predictive and explanatory modelling of sepsis in the critically ill commonly bases both the outcome definition and the predictions on clinical criteria for consensus definitions of sepsis, leading to circularity. As a remedy, we collected ground truth labels for sepsis.

Methods: In the Ground Truth for Sepsis Questionnaire (GTSQ), senior attending physicians in the ICU documented daily their opinion on each patient’s condition regarding sepsis as a five-category working diagnosis and nine related items. Working diagnosis groups were described and compared, and their SOFA scores were analyzed with a generalized linear mixed model. Agreement and discriminatory performance measures for clinical criteria of sepsis, with GTSQ labels as the reference class, were derived.

Results: We analyzed 7291 questionnaires and 761 complete encounters from the first survey year. Editing rates for all items were > 90%, and responses were consistent with the current understanding of critical illness pathophysiology, including sepsis pathogenesis. Interrater agreement for the presence and absence of sepsis was almost perfect but only slight for suspected infection. ICU mortality was 19.5% in encounters with SIRS as the “worst” working diagnosis, compared to 5.9% with sepsis and 5.9% with severe sepsis, without differences in admission and maximum SOFA. Compared to sepsis, the proportion of GTSQs with SIRS plus acute organ dysfunction was equal and the proportion with macrocirculatory abnormalities was higher (p < 0.0001). SIRS proportionally ranked above sepsis in the daily assessment of illness severity (p < 0.0001). Separate analyses of neurosurgical referrals revealed similar differences. The discriminatory performance of Sepsis-1/2 and Sepsis-3 against the GTSQ labels was similar, with sensitivities around 70% and specificities of 92%. With essentially no difference in prevalence between SIRS and SOFA ≥ 2, sensitivities and specificities for detecting sepsis onset were close to 55% and 83%, respectively.

Conclusions: GTSQ labels are a valid measure of sepsis in the ICU. They reveal suspicion of infection to be an unclear clinical concept and refute an illness severity hierarchy in the SIRS-sepsis-severe sepsis spectrum. Ground truth challenges the accuracy of Sepsis-1/2 and Sepsis-3 in detecting sepsis onset. It is an indispensable intermediate step toward advancing diagnosis and therapy in the ICU and, potentially, other health care settings.
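
A schematic example (synthetic data, not the study's patient records) of how the sensitivity and specificity of a clinical criterion can be derived against GTSQ labels as the reference class, as done above for Sepsis-1/2 and Sepsis-3:

```python
# Sketch: sensitivity/specificity of a clinical criterion against reference labels.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
gtsq_sepsis = rng.integers(0, 2, size=761)                                  # reference labels
criterion = np.where(rng.random(761) < 0.7, gtsq_sepsis, 1 - gtsq_sepsis)   # imperfect criterion

tn, fp, fn, tp = confusion_matrix(gtsq_sepsis, criterion).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```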


Sensors · 2022 · Vol 22 (2) · pp. 659
Author(s): Camille Marie Montalcini, Bernhard Voelkl, Yamenah Gómez, Michael Gantner, Michael J. Toscano

Tracking technologies offer a way to monitor the movement of many individuals over long time periods with minimal disturbance and could become a helpful tool for a variety of uses in animal agriculture, including health monitoring or the selection of breeding traits that benefit welfare within intensive cage-free poultry farming. Herein, we present an active, low-frequency tracking system that distinguishes between five predefined zones within a commercial aviary. We aimed to evaluate both the processed and unprocessed datasets against a “ground truth” based on video observations. The two data processing methods aimed to filter false registrations, one with a simple deterministic approach and one with a tree-based classifier. We found that the unprocessed data determined birds’ presence/absence in each zone with an accuracy of 99% but overestimated the number of transitions taken by birds per zone, explaining only 23% of the actual variation. The two processed datasets, however, were found to be suitable for monitoring the number of transitions per individual, accounting for 91% and 99% of the actual variation, respectively. To further evaluate the tracking system, we estimated the error rate of registrations (by applying the classifier) in relation to three factors, which suggested a higher number of false registrations towards specific areas and during periods of reduced humidity and reduced temperature. We conclude that the presented tracking system is well suited for commercial aviaries to measure individuals’ transitions and presence/absence in predefined zones. Nonetheless, under these settings, data processing remains a necessary step in obtaining reliable data. For future work, we recommend the use of automatic calibration to improve the system’s performance and to capture finer movements.
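
A minimal sketch (not the study's pipeline; the dwell-time threshold and log columns are assumptions) of the deterministic idea above: drop zone registrations with implausibly short dwell times, then count the remaining zone-to-zone transitions per bird:

```python
# Sketch: filter short registrations and count zone transitions per individual.
import pandas as pd

log = pd.DataFrame({                                   # synthetic registration log
    "bird": ["A", "A", "A", "A", "B", "B", "B"],
    "zone": ["tier1", "tier2", "tier1", "litter", "tier1", "nest", "tier1"],
    "duration_s": [120, 2, 300, 600, 90, 1, 450],      # time spent before the next registration
})

MIN_DWELL_S = 5                                        # assumed plausibility threshold
filtered = log[log["duration_s"] >= MIN_DWELL_S]

transitions = (
    filtered.groupby("bird")["zone"]
    .apply(lambda z: (z != z.shift()).sum() - 1)       # count changes of zone per bird
    .rename("n_transitions")
)
print(transitions)
```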


2022 · Vol 14 (2) · pp. 388
Author(s): Zhihao Wei, Kebin Jia, Xiaowei Jia, Pengyu Liu, Ying Ma, ...

Monitoring the extent of plateau forests has drawn much attention from governments, given that plateau forests play a key role in the global carbon cycle. Despite recent advances in remote-sensing applications of satellite imagery over large regions, accurate mapping of plateau forests remains challenging due to limited ground-truth information and high uncertainty in their spatial distribution. In this paper, we aim to generate a better segmentation map for plateau forests using high-resolution satellite imagery with limited ground-truth data. We present the first 2 m spatial resolution large-scale plateau forest dataset of the Sanjiangyuan National Nature Reserve, including 38,708 plateau forest imagery samples and 1187 handmade, accurate plateau forest ground-truth masks. We then propose a few-shot learning method for mapping plateau forests. The proposed method proceeds in two stages: unsupervised feature extraction that leverages domain knowledge, and model fine-tuning using limited ground-truth data. The proposed few-shot learning method reached an F1-score of 84.23% and outperformed state-of-the-art object segmentation methods. The results demonstrate that the proposed few-shot learning model can help large-scale plateau forest monitoring. The dataset proposed in this paper will soon be made publicly available online.
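
A minimal sketch of the two-stage idea described above, with assumed input shapes and a stand-in encoder (not the paper's model): stage 1 keeps a pretrained or unsupervised feature extractor fixed, and stage 2 fine-tunes a lightweight segmentation head on the few ground-truth masks:

```python
# Sketch: frozen feature extractor + few-shot fine-tuning of a segmentation head.
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # stand-in for unsupervised/domain-knowledge features
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
for p in encoder.parameters():
    p.requires_grad = False                   # stage 1: features are kept fixed

head = nn.Conv2d(32, 1, 1)                    # stage 2: lightweight head trained on few masks
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.rand(8, 4, 64, 64)             # a handful of labeled tiles (4 bands assumed)
masks = (torch.rand(8, 1, 64, 64) > 0.5).float()

for _ in range(20):                           # few-shot fine-tuning loop
    logits = head(encoder(images))
    loss = loss_fn(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("final loss:", loss.item())
```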

