Synthesis of Broad-Specificity Activity-Based Probes for Exo-β-Mannosidases

Author(s):  
Nicholas McGregor ◽  
Chi-Lin Kuo ◽  
Thomas Beenakker ◽  
Chun-Sing Wong ◽  
Wendy A Offen ◽  
...  

Exo--mannosidases are a broad class of stereochemically retaining hydrolases that are essential for the breakdown of complex carbohydrate substrates found in all kingdoms of life. Yet the detection of exo--mannosidases...

Author(s):  
L. J. Sykes ◽  
J. J. Hren

In electron microscope studies of crystalline solids there is a broad class of very small objects which are imaged primarily by strain contrast. Typical examples include dislocation loops, precipitates, stacking fault tetrahedra and voids. Such objects are difficult to identify and measure because their images are sensitive to a host of variables and closely resemble one another. A number of attempts have been made to publish contrast rules to help the microscopist sort out certain subclasses of such defects. For example, Ashby and Brown (1963) described semi-quantitative rules for interpreting contrast from small precipitates. Eyre et al. (1979) published a catalog of images for BCC dislocation loops. Katerbau (1976) described an analytical expression to help understand contrast from small defects. There are other publications as well.


1983 ◽  
Vol 48 (10) ◽  
pp. 2751-2766
Author(s):  
Ondřej Wein ◽  
N. D. Kovalevskaya

Using a new approximate method, the transient course of the local and mean diffusion fluxes following a step change in wall concentration has been obtained for a broad class of steady-flow problems.


Author(s):  
Joshua Shepherd

This chapter argues for a normative distinction between disabilities that are inherently negative with respect to well-being and disabilities that are inherently neutral. After clarifying terms, the author discusses recent arguments according to which possession of a disability is inherently neutral with respect to well-being. He notes that although these arguments are compelling, they are intended to cover only certain disabilities and that, in fact, there exists a broad class to which they do not apply. He then discusses two problem cases, locked-in syndrome and the minimally conscious state, and explains why possession of these disabilities makes one worse off overall. He argues that disabilities that significantly impair control over one's situation tend to be inherently negative with respect to well-being; other disabilities do not. The upshot is that we must draw an important normative distinction between disabilities that undermine this kind of control and disabilities that do not.


Author(s):  
Marcello Pericoli ◽  
Marco Taboga

Abstract We propose a general method for the Bayesian estimation of a very broad class of non-linear no-arbitrage term-structure models. The main innovation we introduce is a computationally efficient method, based on deep learning techniques, for approximating no-arbitrage model-implied bond yields to any desired degree of accuracy. Once the pricing function is approximated, the posterior distribution of model parameters and unobservable state variables can be estimated by standard Markov Chain Monte Carlo methods. As an illustrative example, we apply the proposed techniques to the estimation of a shadow-rate model with a time-varying lower bound and unspanned macroeconomic factors.
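The two-stage idea in this abstract — replace an expensive model-implied pricing function with a cheap approximation, then run standard MCMC against the approximation — can be sketched in a few lines. The sketch below is illustrative and not the authors' code: it uses a least-squares polynomial as a stand-in surrogate (where the paper uses a deep network), a synthetic one-parameter "pricing" function, and a plain Metropolis-Hastings sampler; all names (`true_yield`, `surrogate`, the prior range) are hypothetical.

```python
# Sketch: surrogate-accelerated Bayesian estimation.
# Stage 1: fit a cheap surrogate to an expensive pricing function.
# Stage 2: run Metropolis-Hastings calling only the surrogate.
import numpy as np

rng = np.random.default_rng(0)

def true_yield(theta):
    # Stand-in for an expensive no-arbitrage model-implied yield.
    return np.exp(-theta) + 0.5 * theta**2

# Stage 1: fit the surrogate on a parameter grid (polynomial here,
# standing in for the deep network of the paper).
grid = np.linspace(0.0, 2.0, 200)
coef = np.polyfit(grid, true_yield(grid), deg=6)
surrogate = lambda theta: np.polyval(coef, theta)

# Synthetic observed yields generated at theta = 0.8.
obs = true_yield(0.8) + 0.01 * rng.standard_normal(50)

def log_post(theta):
    # Flat prior on [0, 2]; Gaussian likelihood with known noise 0.01.
    if not 0.0 <= theta <= 2.0:
        return -np.inf
    resid = obs - surrogate(theta)
    return -0.5 * np.sum(resid**2) / 0.01**2

# Stage 2: random-walk Metropolis-Hastings.
theta, lp = 1.0, log_post(1.0)
samples = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior_mean = np.mean(samples[1000:])
```

The point of the design is that once the surrogate is fitted, each MCMC step is essentially free, so the sampler can take as many iterations as the posterior requires without re-solving the pricing model.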


2021 ◽  
Vol 339 ◽  
pp. 129872
Author(s):  
Aori Qileng ◽  
Hongshuai Zhu ◽  
Siqian Liu ◽  
Liang He ◽  
Weiwei Qin ◽  
...  

Entropy ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 126
Author(s):  
Sharu Theresa Jose ◽  
Osvaldo Simeone

Meta-learning, or “learning to learn”, refers to techniques that infer an inductive bias from data corresponding to multiple related tasks with the goal of improving the sample efficiency for new, previously unobserved, tasks. A key performance measure for meta-learning is the meta-generalization gap, that is, the difference between the average loss measured on the meta-training data and on a new, randomly selected task. This paper presents novel information-theoretic upper bounds on the meta-generalization gap. Two broad classes of meta-learning algorithms are considered that use either separate within-task training and test sets, like model-agnostic meta-learning (MAML), or joint within-task training and test sets, like Reptile. Extending the existing work for conventional learning, an upper bound on the meta-generalization gap is derived for the former class that depends on the mutual information (MI) between the output of the meta-learning algorithm and its input meta-training data. For the latter, the derived bound includes an additional MI between the output of the per-task learning procedure and the corresponding data set to capture within-task uncertainty. Tighter bounds are then developed for the two classes via novel individual task MI (ITMI) bounds. Applications of the derived bounds are finally discussed, including a broad class of noisy iterative algorithms for meta-learning.
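The quantity the bounds control can be made concrete with a toy experiment. The sketch below is an assumed setup, not from the paper: a Reptile-style meta-learner on one-dimensional linear-regression tasks, where the meta-generalization gap is estimated empirically as the adapted loss on a freshly drawn task minus the average adapted loss on the meta-training tasks. The task distribution, learning rates, and step counts are all hypothetical choices.

```python
# Sketch: empirical meta-generalization gap for a Reptile-style learner.
import numpy as np

rng = np.random.default_rng(1)

def sample_task():
    # Task environment: linear regression with a random slope.
    slope = rng.normal(2.0, 0.5)
    x = rng.uniform(-1, 1, 20)
    y = slope * x + 0.1 * rng.standard_normal(20)
    return x, y

def adapt(w0, x, y, lr=0.1, steps=25):
    # Within-task adaptation: gradient descent on MSE from init w0.
    w = w0
    for _ in range(steps):
        w -= lr * 2 * np.mean((w * x - y) * x)
    return w

def loss(w, x, y):
    return np.mean((w * x - y) ** 2)

# Reptile outer loop: nudge the initialization toward adapted weights
# (joint within-task data, no train/test split inside a task).
w0, meta_lr = 0.0, 0.3
train_tasks = [sample_task() for _ in range(50)]
for _ in range(100):
    x, y = train_tasks[rng.integers(50)]
    w0 += meta_lr * (adapt(w0, x, y) - w0)

# Meta-generalization gap: new-task loss minus meta-training loss.
train_loss = np.mean([loss(adapt(w0, x, y), x, y) for x, y in train_tasks])
x_new, y_new = sample_task()
test_loss = loss(adapt(w0, x_new, y_new), x_new, y_new)
gap = test_loss - train_loss
```

In the paper's terms, the MI-based bounds upper-bound the expectation of this gap over the draw of meta-training data and the new task; the toy code only produces a single-sample estimate of it.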

