Loss Functions: Recently Published Documents

Total documents: 1005 (last five years: 394)
H-index: 43 (last five years: 8)
2022, Vol. 13 (1), pp. 1-20
Author(s): Wen-Cheng Chen, Wan-Lun Tsai, Huan-Hua Chang, Min-Chun Hu, Wei-Ta Chu

Tactic learning in virtual reality (VR) has been proven to be effective for basketball training. Endowed with the ability to generate virtual defenders in real time according to the movement of virtual offenders controlled by the user, a VR basketball training system can bring more immersive and realistic experiences to the trainee. In this article, an autoregressive generative model for instantly producing basketball defensive trajectories is introduced. We further focus on the issue of preserving the diversity of the generated trajectories. A differentiable sampling mechanism is adopted to learn the continuous Gaussian distribution of player positions. Moreover, several heuristic loss functions based on the domain knowledge of basketball are designed to make the generated trajectories resemble real situations in basketball games. We compare the proposed method with state-of-the-art works using both objective and subjective evaluations. The objective evaluation compares the average position, velocity, and acceleration of the generated defensive trajectories with the real ones to assess the fidelity of the results. In addition, higher-level aspects such as the empty space left to the offender and the defensive pressure of the generated trajectory are also considered in the objective evaluation. For the subjective evaluation, visual comparison questionnaires covering the proposed and competing methods are conducted. The experimental results show that the proposed method achieves better performance than previous basketball defensive trajectory generation works across different evaluation metrics.
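The differentiable sampling step described in this abstract is commonly implemented with the Gaussian reparameterization trick, and the heuristic losses can be expressed as simple penalties on the generated positions. The sketch below is illustrative only; the function names, the matched-offender pairing, and the distance threshold are assumptions, not details taken from the paper.

```python
import torch

def sample_positions(mu, log_var):
    """Differentiable draw from a learned Gaussian over defender
    positions (reparameterization trick): pos = mu + sigma * eps."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def defensive_pressure_loss(defender_pos, offender_pos, target_dist=1.5):
    """Illustrative heuristic penalty: defenders should stay within
    `target_dist` (court units) of their matched offenders."""
    dist = torch.norm(defender_pos - offender_pos, dim=-1)
    return torch.relu(dist - target_dist).mean()

# Toy usage: mu and log_var would come from the autoregressive generator.
mu = torch.zeros(5, 2, requires_grad=True)       # 5 defenders, (x, y)
log_var = torch.zeros(5, 2, requires_grad=True)
offenders = torch.rand(5, 2) * 10.0              # toy offender positions
loss = defensive_pressure_loss(sample_positions(mu, log_var), offenders)
loss.backward()
```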


2023, Vol. 55 (1), pp. 1-35
Author(s): Giannis Bekoulis, Christina Papagiannopoulou, Nikos Deligiannis

We study the fact-checking problem, which aims to identify the veracity of a given claim. Specifically, we focus on the task of Fact Extraction and VERification (FEVER) and its accompanying dataset. The task consists of the subtasks of retrieving the relevant documents (and sentences) from Wikipedia and validating whether the information in the documents supports or refutes a given claim. This task is essential and can be the building block of applications such as fake news detection and medical claim verification. In this article, we aim at a better understanding of the challenges of the task by presenting the literature in a structured and comprehensive way. We describe the proposed methods by analyzing the technical perspectives of the different approaches and discussing the performance results on the FEVER dataset, which is the most well-studied and formally structured dataset for the fact extraction and verification task. We also conduct the largest experimental study to date on identifying beneficial loss functions for the sentence retrieval component. Our analysis indicates that sampling negative sentences is important for improving performance and reducing computational complexity. Finally, we describe open issues and future challenges, and we motivate future research on the task.
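Losses trained with sampled negative sentences for the retrieval component are typically pairwise ranking objectives. The sketch below is a generic hinge-style example under assumed inputs (relevance scores from some claim-sentence encoder); it is not the specific loss identified as best in the study.

```python
import torch
import torch.nn.functional as F

def ranking_loss_with_negatives(pos_scores, neg_scores, margin=1.0):
    """Pairwise hinge loss over sampled negatives: every relevant
    (evidence) sentence should outscore every sampled negative
    sentence by at least `margin`."""
    diff = margin - pos_scores.unsqueeze(1) + neg_scores.unsqueeze(0)  # (P, N)
    return F.relu(diff).mean()

# Toy usage: scores would come from a claim-sentence relevance model.
pos = torch.tensor([2.3, 1.8])           # evidence-sentence scores
neg = torch.tensor([0.5, 1.9, -0.2])     # sampled negative-sentence scores
print(ranking_loss_with_negatives(pos, neg))
```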


2022
Author(s): Eyke Hüllermeier, Marcel Wever, Eneldo Loza Mencia, Johannes Fürnkranz, Michael Rapp

The idea of exploiting label dependencies for better prediction is at the core of methods for multi-label classification (MLC), and performance improvements are normally explained in this way. Surprisingly, however, there is no established methodology for analyzing the dependence-awareness of MLC algorithms. With that goal in mind, we introduce a class of loss functions that are able to capture the important aspect of label dependence. To this end, we leverage the mathematical framework of non-additive measures and integrals. Roughly speaking, a non-additive measure allows for modeling the importance of correct predictions of label subsets (instead of single labels), and thereby their impact on the overall evaluation, in a flexible way. The well-known Hamming and subset 0/1 losses are rather extreme special cases of this function class, which give full importance to single labels or the entire label set, respectively. We present concrete instantiations of this class, which appear to be especially appealing from a modeling perspective. The assessment of multi-label classifiers in terms of these losses is illustrated in an empirical study, clearly showing their aptness at capturing label dependencies. Finally, while not being the main goal of this study, we also show some preliminary results on the minimization of this parametrized family of losses.
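One simple way to see how a parametrized family can interpolate between the Hamming and subset 0/1 extremes is to score the subset of correctly predicted labels with a non-additive "measure" such as (|A|/m)^gamma. This instantiation is chosen only for the sketch below and is not necessarily one of the paper's concrete proposals.

```python
import numpy as np

def nonadditive_loss(y_true, y_pred, gamma=1.0):
    """Illustrative loss from a parametrized family that interpolates
    between Hamming loss (gamma = 1) and subset 0/1 loss (gamma -> inf).
    The 'measure' of the correctly predicted label subset A is
    (|A| / m) ** gamma, which is non-additive for gamma != 1."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    m = y_true.shape[-1]
    correct = (y_true == y_pred).sum(axis=-1)        # |A| per instance
    return float(np.mean(1.0 - (correct / m) ** gamma))

y_true = [[1, 0, 1, 1], [0, 1, 0, 0]]
y_pred = [[1, 0, 0, 1], [0, 1, 0, 0]]
print(nonadditive_loss(y_true, y_pred, gamma=1.0))    # equals Hamming loss
print(nonadditive_loss(y_true, y_pred, gamma=50.0))   # approaches subset 0/1 loss
```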


2022, Vol. 19 (1)
Author(s): Mohd. Arshad, Qazi J. Azhad

A general family of distributions, namely the Kumaraswamy generalized (Kw-G) family, is considered for estimating the unknown parameters and the reliability function based on record data from a Kw-G distribution. The maximum likelihood estimators (MLEs) of the unknown parameters and the reliability function are derived, along with their confidence intervals. A Bayesian study is carried out under symmetric and asymmetric loss functions to obtain the Bayes estimators of the unknown parameters and the reliability function. Future record values are predicted using both Bayesian and non-Bayesian approaches, and the methods are illustrated with numerical examples and a Monte Carlo simulation.
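Given posterior draws for a parameter, the symmetric (squared-error) Bayes estimator and a common asymmetric (LINEX) Bayes estimator reduce to simple posterior summaries. The sketch below uses simulated draws and an assumed LINEX shape parameter; it only illustrates the estimator forms, not the paper's derivations for the Kw-G model.

```python
import numpy as np

def bayes_estimates(posterior_samples, a=0.5):
    """Bayes estimators from posterior draws of a parameter:
    - squared-error (symmetric) loss -> posterior mean
    - LINEX (asymmetric) loss        -> -(1/a) * log E[exp(-a * theta)]
    The shape parameter `a` (assumed here) sets the asymmetry."""
    theta = np.asarray(posterior_samples, dtype=float)
    sel = theta.mean()                                  # squared-error estimate
    linex = -np.log(np.mean(np.exp(-a * theta))) / a    # LINEX estimate
    return sel, linex

# Toy usage with simulated posterior draws (e.g. from an MCMC sampler).
draws = np.random.default_rng(0).gamma(shape=2.0, scale=1.5, size=5000)
print(bayes_estimates(draws, a=0.5))
```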


Sensors, 2022, Vol. 22 (2), pp. 494
Author(s): Erin McGowan, Vidita Gawade, Weihong (Grace) Guo

Physics-informed machine learning is emerging through a broad range of methodologies and applications. This paper investigates physics-based custom loss functions as an implementable solution for additive manufacturing (AM). Specifically, laser metal deposition (LMD) is an AM process in which a laser beam melts deposited powder, and the melted particles fuse to produce metal components. Porosity, or the small cavities that form in the printed structure, is generally considered one of the most destructive defects in metal AM. Traditionally, porosity is measured with computed tomography scans. While this is useful for understanding the nature of pore formation and its characteristics, purely physics-driven models lack real-time prediction ability. Meanwhile, a purely deep learning approach to porosity prediction leaves valuable physics knowledge behind. In this paper, a hybrid model that uses both empirical and simulated LMD data is created to show how various physics-informed loss functions impact the accuracy, precision, and recall of a baseline deep learning model for porosity prediction. In particular, some versions of the physics-informed model improve the precision of the baseline deep-learning-only model (albeit at the expense of overall accuracy).
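A physics-informed custom loss typically augments a standard classification loss with a penalty derived from process physics. The sketch below is hypothetical: the physics term, the melt-pool temperature feature, the threshold `t_melt`, and the weight `lam` are all assumptions for illustration and do not reproduce the loss functions studied in the paper.

```python
import torch
import torch.nn.functional as F

def physics_informed_loss(pred_porosity, label, melt_pool_temp,
                          t_melt=1811.0, lam=0.1):
    """Hypothetical physics-informed loss: binary cross-entropy on the
    porosity label plus a penalty that discourages predicting porosity
    when the (simulated) melt-pool temperature is well above the
    melting point. Threshold and weight are illustrative only."""
    bce = F.binary_cross_entropy(pred_porosity, label)
    # Physics term: high temperature -> full fusion expected -> low porosity.
    physics = (pred_porosity * torch.relu(melt_pool_temp - t_melt) / t_melt).mean()
    return bce + lam * physics

# Toy usage with random model outputs and simulated temperatures (K).
pred = torch.sigmoid(torch.randn(8))
y = torch.randint(0, 2, (8,)).float()
temp = torch.full((8,), 1900.0)
print(physics_informed_loss(pred, y, temp))
```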


Mathematics, 2021, Vol. 10 (1), pp. 112
Author(s): Muhammad S. Battikh, Artem A. Lenskiy

Reconstruction-based approaches to anomaly detection tend to fall short when applied to complex datasets with target classes that possess high inter-class variance. Similar to the idea of self-taught learning used in transfer learning, many domains are rich with similar unlabeled datasets that could be leveraged as a proxy for out-of-distribution samples. In this paper, we introduce the latent-insensitive autoencoder (LIS-AE), in which unlabeled data from a similar domain are utilized as negative examples to shape the latent layer (bottleneck) of a regular autoencoder such that it is only capable of reconstructing one task. We provide theoretical justification for the proposed training process and loss functions, along with an extensive ablation study highlighting important aspects of our model. We test our model in multiple anomaly detection settings, presenting quantitative and qualitative analyses that showcase the significant performance improvement of our model on anomaly detection tasks.
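In spirit, such a training objective combines a standard reconstruction term on target-task data with a term that uses the unlabeled "negative" data to constrain the bottleneck. The sketch below is a generic hinge-style formulation under assumed inputs; the actual LIS-AE losses are defined in the paper and may differ.

```python
import torch
import torch.nn.functional as F

def lis_ae_style_loss(x_pos, x_pos_rec, x_neg, x_neg_rec, margin=1.0):
    """Illustrative two-term objective: reconstruct target-task data well
    while pushing the reconstruction error on negative-domain data above
    a margin, so the bottleneck only serves the target task."""
    pos_term = F.mse_loss(x_pos_rec, x_pos)
    neg_err = F.mse_loss(x_neg_rec, x_neg, reduction="none").mean(dim=1)
    neg_term = F.relu(margin - neg_err).mean()   # hinge on per-sample error
    return pos_term + neg_term

# Toy usage with flattened (batch, features) tensors standing in for
# autoencoder inputs and reconstructions.
x_pos, x_neg = torch.rand(4, 784), torch.rand(4, 784)
print(lis_ae_style_loss(x_pos, x_pos * 0.9, x_neg, x_neg * 0.9))
```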


PLoS ONE, 2021, Vol. 16 (12), pp. e0261307
Author(s): Sivaramakrishnan Rajaraman, Ghada Zamzmi, Sameer K. Antani

Medical images commonly exhibit multiple abnormalities. Predicting them requires multi-class classifiers whose training and desired reliable performance can be affected by a combination of factors, such as dataset size, data source, distribution, and the loss function used to train deep neural networks. Currently, the cross-entropy loss remains the de facto loss function for training deep learning classifiers. This loss function, however, asserts equal learning from all classes, leading to a bias toward the majority class. Although the choice of loss function impacts model performance, to the best of our knowledge, no existing literature performs a comprehensive analysis and selection of an appropriate loss function for the classification task under study. In this work, we benchmark various state-of-the-art loss functions, critically analyze model performance, and propose improved loss functions for a multi-class classification task. We select a pediatric chest X-ray (CXR) dataset that includes images with no abnormality (normal) and images exhibiting manifestations consistent with bacterial and viral pneumonia. We construct prediction-level and model-level ensembles to improve classification performance. Our results show that, compared to the individual models and the state-of-the-art literature, the weighted averaging of the predictions for the top-3 and top-5 model-level ensembles delivered significantly superior classification performance (p < 0.05) in terms of the MCC metric (0.9068; 95% confidence interval: 0.8839-0.9297). Finally, we performed localization studies to interpret model behavior and confirm that the individual models and ensembles learned task-specific features and highlighted disease-specific regions of interest. The code is available at https://github.com/sivaramakrishnan-rajaraman/multiloss_ensemble_models.
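Prediction-level ensembling by weighted averaging combines per-model class probabilities with normalized weights before taking the argmax. The sketch below is a minimal version of that idea; the weights and toy data are placeholders, not values from the study.

```python
import numpy as np

def weighted_ensemble(prob_list, weights):
    """Weighted averaging of per-model class-probability predictions.
    prob_list: list of (N, C) arrays, one per model."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                 # normalize weights
    stacked = np.stack(prob_list, axis=0)             # (M, N, C)
    avg = np.tensordot(weights, stacked, axes=1)      # (N, C) weighted mean
    return avg.argmax(axis=1), avg

# Toy usage: three models, four samples, three classes
# (e.g. normal / bacterial pneumonia / viral pneumonia).
rng = np.random.default_rng(1)
probs = [rng.dirichlet(np.ones(3), size=4) for _ in range(3)]
labels, avg = weighted_ensemble(probs, weights=[0.5, 0.3, 0.2])
print(labels)
```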


2021
Author(s): Mahsa Mozaffari, Panos P. Markopoulos

In this work, we propose a new formulation for low-rank tensor approximation, with tunable outlier-robustness, and present a unified algorithmic solution framework. This formulation relies on a generalized robust loss function (Barron loss), which encompasses several well-known loss functions with variable outlier resistance. The robustness of the proposed framework is corroborated by the presented numerical studies on synthetic and real data.
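The Barron loss referenced here is the general and adaptive robust loss of Barron (2019), whose shape parameter alpha tunes outlier resistance. The sketch below implements the standard closed form for alpha outside the special cases {0, 2}; how it is applied to tensor-approximation residuals is specific to the paper and is not shown.

```python
import numpy as np

def barron_loss(x, alpha=1.0, c=1.0):
    """General robust loss for residuals x: alpha = 2 recovers a scaled L2
    loss, alpha = 1 a smoothed L1 (pseudo-Huber), and alpha -> -inf a
    bounded Welsch/Geman-McClure-like loss. Assumes alpha not in {0, 2}."""
    x = np.asarray(x, dtype=float)
    b = abs(alpha - 2.0)
    return (b / alpha) * (((x / c) ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0)

residuals = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(barron_loss(residuals, alpha=1.0))    # moderately robust
print(barron_loss(residuals, alpha=-2.0))   # strongly outlier-resistant
```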


