Generalized Expectation
Recently Published Documents


TOTAL DOCUMENTS: 40 (last five years: 6)

H-INDEX: 8 (last five years: 0)

2021, Vol. 9, pp. 675–690
Author(s): Ran Zmigrod, Tim Vieira, Ryan Cotterell

Abstract: We give a general framework for inference in spanning tree models. We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models. Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms. These algorithms are easy to implement with or without automatic differentiation software. We motivate the development of our framework with several cautionary tales of previous research, which has developed numerous inefficient algorithms for computing expectations and their gradients. We demonstrate how our framework efficiently computes several quantities with known algorithms, including the expected attachment score, entropy, and generalized expectation criteria. As a bonus, we give algorithms for quantities that are missing in the literature, including the KL divergence. In all cases, our approach matches the efficiency of existing algorithms and, in several cases, reduces the runtime complexity by a factor of the sentence length. We validate the implementation of our framework through runtime experiments. We find our algorithms are up to 15 and 9 times faster than previous algorithms for computing the Shannon entropy and the gradient of the generalized expectation objective, respectively.
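The gradient-expectation connection the abstract describes can be made concrete with the Matrix-Tree theorem: for an edge-factored distribution p(t) ∝ Π_{e∈t} w_e with partition function Z, the edge marginals are μ_e = w_e ∂log Z/∂w_e, and any first-order expectation that decomposes additively over edges is a weighted sum of marginals. Below is a minimal numpy sketch for undirected spanning trees; this is a simplifying assumption of the sketch, since the paper treats rooted, non-projective directed trees, so it illustrates the idea rather than the paper's exact algorithms. The analytic gradient of log det is written out where automatic differentiation would produce the same result.

```python
import numpy as np

def spanning_tree_edge_marginals(W):
    """Edge marginals of p(t) ∝ Π_{e∈t} w_e for an undirected graph with
    symmetric weight matrix W (zero diagonal), via the Matrix-Tree theorem:
    Z = det(L̂), where L̂ is the Laplacian with row/col 0 deleted, and
    mu[i, j] = w_ij * d(log Z)/d(w_ij)."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W        # graph Laplacian
    B = np.zeros((n, n))
    B[1:, 1:] = np.linalg.inv(L[1:, 1:])  # analytic gradient of log det(L̂)
    # d(log Z)/d(w_ij) = B_ii + B_jj - 2 B_ij  (the effective resistance)
    d = np.diag(B)
    R = d[:, None] + d[None, :] - 2.0 * B
    return W * R

def expected_value(W, F):
    """First-order expectation E[f(t)] for f(t) = Σ_{e∈t} F[e]."""
    mu = spanning_tree_edge_marginals(W)
    return 0.5 * (mu * F).sum()           # each undirected edge appears twice

# Example: expected total edge weight of a random spanning tree of K4.
rng = np.random.default_rng(0)
W = rng.uniform(0.5, 2.0, size=(4, 4))
W = np.triu(W, 1)
W = W + W.T                               # symmetric, zero diagonal
print(expected_value(W, W))
# Sanity check: marginals of a 4-node graph sum to 3, the edges in any tree.
assert np.isclose(spanning_tree_edge_marginals(W).sum() / 2, 3.0)
```

Replacing the explicit inverse with `jax.grad` (or any autodiff) applied to the log-determinant recovers the same marginals, which is the "easy to implement with or without automatic differentiation software" point the abstract makes.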


2019, Vol. 204, pp. 03005
Author(s): A.S. Parvan

The general formalism for nonextensive statistics based on the Landsberg-Vedral parametric entropy was derived in the framework of the microcanonical, canonical, and grand canonical ensembles. The formulas for the first law of thermodynamics and for the thermodynamic quantities in terms of ensemble averages were obtained in a general form. It was found that, under the transformation q → 2 − q, the probabilities of microstates of the nonextensive statistics based on the Landsberg-Vedral entropy with the standard expectation values formally resemble the corresponding probabilities of the Tsallis statistics with the generalized expectation values.
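For context, the entropies and expectation values named here have standard definitions; a short reference sketch (k is the Boltzmann constant, and the escort average ⟨A⟩_q is what the abstract calls the generalized expectation value):

```latex
% Tsallis entropy and the Landsberg-Vedral (normalized Tsallis) entropy:
\[
  S_q^{\mathrm{T}} = k\,\frac{1 - \sum_i p_i^{q}}{q - 1},
  \qquad
  S_q^{\mathrm{LV}} = \frac{S_q^{\mathrm{T}}}{\sum_i p_i^{q}}
                    = \frac{k}{q - 1}\left(\frac{1}{\sum_i p_i^{q}} - 1\right).
\]
% Standard vs. generalized (escort) expectation value of an observable A:
\[
  \langle A \rangle = \sum_i p_i\,A_i,
  \qquad
  \langle A \rangle_q = \frac{\sum_i p_i^{q}\,A_i}{\sum_i p_i^{q}}.
\]
```

The paper's result, restated in these terms, is that maximizing S^LV under the standard constraint ⟨A⟩ and then substituting q → 2 − q yields microstate probabilities that formally resemble those from maximizing S^T under the escort constraint ⟨A⟩_q.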


2018, Vol. 119(4), pp. 1367–1393
Author(s): Scott T. Albert, Reza Shadmehr

Experience of a prediction error recruits multiple motor learning processes: some learn strongly from error but retain weakly, while others learn weakly from error but retain strongly. These processes are not directly observable; they must be inferred from their collective influence on behavior. Is there a robust way to uncover the hidden processes? A standard approach is to posit a state space model in which the hidden states change with each experience of error, and then to fit the model to the measured data by minimizing the squared error between measurement and model prediction. We found that this least-squares algorithm (LMSE) often yielded unrealistic predictions about the hidden states, possibly because it neglects the stochastic nature of error-based learning. Behavioral data during adaptation were better explained by a system in which both error-based learning and movement production are stochastic processes. To uncover the hidden states of learning, we developed a generalized expectation maximization (EM) algorithm. In simulation, although LMSE tracked the measured data marginally better than EM, EM was far more accurate in unmasking the time courses and properties of the hidden states of learning. In a power analysis designed to measure the effect of an intervention on sensorimotor learning, EM significantly reduced the number of subjects required for effective hypothesis testing. In summary, we developed a new approach for the analysis of data in sensorimotor experiments, one that improves the ability to uncover the multiple processes that contribute to learning from error.

NEW & NOTEWORTHY Motor learning is supported by multiple adaptive processes, each with distinct error sensitivity and forgetting rates. We developed a generalized expectation maximization algorithm that uncovers these hidden processes in the context of modern sensorimotor learning experiments that include error-clamp trials and set breaks. The resulting toolbox may improve the ability to identify the properties of these hidden processes and reduce the number of subjects needed to test the effectiveness of interventions on sensorimotor learning.
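The model class at issue is the standard two-state state-space model of adaptation, made stochastic in both learning and movement production. A minimal simulation sketch follows; the parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-state parameters: the fast process learns strongly from
# error but retains weakly; the slow process learns weakly but retains strongly.
a_f, b_f = 0.60, 0.30   # fast process: retention factor, error sensitivity
a_s, b_s = 0.99, 0.05   # slow process: retention factor, error sensitivity
sigma_x = 0.02          # state noise (stochastic error-based learning)
sigma_y = 0.05          # motor noise (stochastic movement production)

n_trials = 200
r = np.ones(n_trials)   # perturbation (e.g., a rotation) held constant at 1

x_f = x_s = 0.0
y = np.zeros(n_trials)
for n in range(n_trials):
    y[n] = x_f + x_s + sigma_y * rng.standard_normal()  # motor output
    e = r[n] - y[n]                                     # prediction error
    x_f = a_f * x_f + b_f * e + sigma_x * rng.standard_normal()
    x_s = a_s * x_s + b_s * e + sigma_x * rng.standard_normal()

print("early output:", y[:3], "late output:", y[-3:])
```

Fitting (a_f, b_f, a_s, b_s) by minimizing the squared difference between the observed and predicted y is the LMSE baseline the abstract critiques; the paper's generalized EM instead treats x_f and x_s as latent states with exactly the noise terms above.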

