large batch
Recently Published Documents


TOTAL DOCUMENTS: 56 (FIVE YEARS: 19)

H-INDEX: 7 (FIVE YEARS: 2)

Pharmaceutics ◽ 2021 ◽ Vol 13 (12) ◽ pp. 2083
Author(s): Francesc Navarro-Pujol ◽ Sanja Bulut ◽ Charlotte Hessman ◽ Kostas Karabelas ◽ Carles Nieto ◽ ...

The European Medicines Agency (EMA) has issued a draft guideline on the quality and equivalence of topical products. Demonstrating equivalence for complex semisolid formulations involves several criteria: the same quantitative composition, the same microstructure, and the same release and permeation profiles. In this paper, several batches of a low-strength topical product, used as a reference/comparator product, were evaluated according to the recommendations of the EMA draft guideline. The batches were 0.025% capsaicin emulsions from the same manufacturer, evaluated in terms of droplet size, X-ray diffraction patterns, rheology, release, and permeation profile. The generated data revealed large batch-to-batch variability: had the EMA guideline been applied, these batches would not be considered equivalent, even though they were produced by the same manufacturer. This result illustrates the difficulty of demonstrating equivalence under the current draft guideline. It also highlights that the equivalence guidelines should account for the variability of the comparator product; in our opinion, they should allow equivalence to be claimed by comparing the limits of variability of the data generated for the comparator product with those of the data generated for the intended equivalent product.
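
The closing proposal might be sketched as follows (a hypothetical illustration: the attribute values and the simple min/max acceptance rule are our assumptions, not data or criteria from the study):

```python
# Hypothetical sketch of the variability-limits comparison proposed above.
# Batch values are illustrative placeholders, not measurements from the study.
import numpy as np

comparator_batches = np.array([12.1, 15.8, 9.7, 14.2, 11.5])  # e.g. release rate per batch
test_batches = np.array([10.9, 13.4, 12.8])                   # intended equivalent product

lo, hi = comparator_batches.min(), comparator_batches.max()
equivalent = bool(np.all((test_batches >= lo) & (test_batches <= hi)))
print(f"comparator variability limits: [{lo}, {hi}] -> equivalent: {equivalent}")
```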


Author(s): Zhi Zhao ◽ Weifeng Liu ◽ Sen Wang ◽ Shibo Gao

Author(s): Andi Han ◽ Junbin Gao

We propose a stochastic recursive momentum method for Riemannian non-convex optimization that achieves nearly-optimal complexity for finding an ε-approximate solution. The algorithm requires only a single-sample gradient evaluation per iteration and does not require restarting with a large-batch gradient, a technique commonly used to obtain faster rates. Extensive experimental results demonstrate the superiority of the proposed algorithm. Extensions to nonsmooth and constrained optimization settings are also discussed.
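
As a rough illustration of the recursive-momentum idea on a manifold (not the authors' exact algorithm), the sketch below runs a STORM-style update on the unit sphere, using tangent-space projection as both the Riemannian gradient map and the vector transport; the least-squares loss, step sizes, and manifold choice are all assumptions for the example:

```python
# A minimal sketch of a single-sample recursive-momentum step on the unit
# sphere. The update d_t = g_t + (1-beta)*(transport(d_{t-1}) - transport(g_{t-1}))
# reuses the same sample at x_t and x_{t-1}, so no large-batch restart is needed.
import numpy as np

def proj_tangent(x, v):
    """Project v onto the tangent space of the unit sphere at x."""
    return v - np.dot(x, v) * x

def retract(x, v):
    """Retraction: step in the tangent direction, then renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def egrad(x, a, b):
    """Euclidean gradient of the per-sample loss 0.5*(a.x - b)^2."""
    return (a @ x - b) * a

rng = np.random.default_rng(0)
dim, T = 10, 200
x = rng.normal(size=dim); x /= np.linalg.norm(x)
A = rng.normal(size=(1000, dim)); y = A @ rng.normal(size=dim)

eta, beta = 0.05, 0.1
x_prev = x.copy()
for t in range(T):
    i = rng.integers(len(A))                          # one sample per iteration
    g = proj_tangent(x, egrad(x, A[i], y[i]))         # Riemannian gradient at x_t
    if t == 0:
        d = g                                         # initialize momentum with first gradient
    else:
        g_prev = proj_tangent(x_prev, egrad(x_prev, A[i], y[i]))
        # transport previous momentum and gradient into the tangent space at x_t
        d = g + (1 - beta) * (proj_tangent(x, d) - proj_tangent(x, g_prev))
    x_prev = x.copy()
    x = retract(x, -eta * d)
```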


Author(s): Xuewang Zhang ◽ Jinzhao Lin ◽ Yin Zhou

Cross-modal hashing has attracted considerable attention because it enables rapid cross-modal retrieval by mapping data of different modalities into a common Hamming space. With the development of deep learning, more and more deep cross-modal hashing methods have been proposed. However, most of these methods train the model with small batches, whereas large-batch training yields better gradient estimates and improves training efficiency. In this paper, we propose DHLBT, a method that uses large-batch training and introduces orthogonal regularization to improve the generalization ability of the model. Moreover, we account for the discreteness of hash codes by adding the distance between hash codes and features to the objective function. Extensive experiments on three benchmarks show that our method outperforms several existing hashing methods.
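
To make the two auxiliary loss terms concrete, here is a minimal PyTorch sketch of orthogonal regularization on a projection weight and a quantization term measuring the distance between continuous features and their binary hash codes; the shapes, the 1e-3 weight, and the tanh activation are our assumptions, not the DHLBT specification:

```python
# Sketch of two loss terms mentioned in the abstract, not the paper's full model.
import torch

def orthogonal_reg(W):
    """||W W^T - I||_F^2, encouraging near-orthogonal rows of W."""
    I = torch.eye(W.shape[0], device=W.device)
    return ((W @ W.T - I) ** 2).sum()

def quantization_loss(F):
    """Distance between continuous features F and their binary codes sign(F)."""
    B = torch.sign(F.detach())          # discrete hash codes, gradient stopped
    return ((F - B) ** 2).mean()

features = torch.randn(1024, 64, requires_grad=True)  # a large batch of hash features
W = torch.randn(64, 256, requires_grad=True)          # hypothetical hash projection weight
loss = quantization_loss(torch.tanh(features)) + 1e-3 * orthogonal_reg(W)
loss.backward()
```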


2021 ◽ Vol 15 (3)
Author(s): William D Heavlin

We consider the active learning problem for a supervised learning model: after training a black-box model on a given dataset, we determine which (large batch of) unlabeled candidates to label in order to improve the model further. We concentrate on the large-batch case because it is best aligned with most machine learning applications and because it is more theoretically rich. Our approach blends three ideas: (1) we quantify model uncertainty with jackknife-like 50 percent sub-samples ("half-samples"); (2) to select which n of C candidates to label, we consider a rank-(M−1) estimate of the associated C × C prediction covariance matrix, which has good properties; (3) our algorithm works only indirectly with this covariance matrix, using a linear-in-C object. We illustrate by fitting a deep neural network to about 20 percent of the CIFAR-10 image dataset. The statistical efficiency we achieve is 3× that of random selection.
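
The half-sample covariance idea can be sketched in a few lines of NumPy: centering the M × C matrix of half-sample predictions gives a rank-(M−1) factor A, so the C × C covariance A^T A/(M−1) never has to be materialized. The greedy variance-then-deflation selection rule below is our illustrative stand-in, not the paper's exact algorithm:

```python
# Illustrative half-sample uncertainty and batch selection, linear in C.
import numpy as np

rng = np.random.default_rng(1)
M, C, n = 8, 500, 20          # half-samples, candidates, batch size to label

# Stand-in for the predictions of M half-sample models on C candidates.
P = rng.normal(size=(M, C))
A = P - P.mean(axis=0)        # centered: rank <= M-1; covariance = A.T @ A / (M-1)

selected = []
for _ in range(n):
    scores = (A ** 2).sum(axis=0)      # per-candidate predictive variance
    scores[selected] = -np.inf         # never reselect a chosen candidate
    j = int(np.argmax(scores))
    selected.append(j)
    # Deflate the direction covered by the chosen candidate (decorrelation),
    # so the next pick favors candidates with complementary uncertainty.
    a = A[:, j]
    norm = np.dot(a, a)
    if norm > 0:
        A = A - np.outer(a, a @ A) / norm
print(sorted(selected))
```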


Author(s): Fawze Alnadari ◽ Yemin Xue ◽ Aisha Almakas ◽ Amani Mohedein ◽ Abdel Samie ◽ ...

Author(s): Michael Todinov

The paper discusses applications of the domain-independent method of algebraic inequalities for reducing uncertainty and risk. Algebraic inequalities have been used to reveal the intrinsic reliability of competing systems and to rank the systems by reliability in the absence of knowledge of the reliabilities of their components. An algebraic inequality has also been used to establish the principle of well-ordered parallel-series systems, which in turn has been applied to maximize the reliability of common parallel-series systems. The paper introduces the linking of an abstract inequality to a real process through a meaningful interpretation of the variables entering the inequality and of its left- and right-hand sides. The meaningful interpretation of a simple algebraic inequality leads to a counterintuitive result: if two varieties of items are present in a large batch, the probability of randomly selecting two items of different varieties is smaller than the probability of randomly selecting two items of the same variety.
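
This last claim follows from a one-line inequality. Writing p and q = 1 − p for the fractions of the two varieties (with draws from a large batch treated as approximately independent), a short derivation of our own:

```latex
% Probability of drawing two different varieties vs. two of the same variety,
% with p + q = 1 and near-independent draws from a large batch:
\[
  P(\text{different}) = 2pq, \qquad P(\text{same}) = p^2 + q^2,
\]
% Their difference is a perfect square, hence nonnegative:
\[
  P(\text{same}) - P(\text{different}) = p^2 + q^2 - 2pq = (p - q)^2 \ge 0,
\]
% with equality only in the balanced case p = q = 1/2.
```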

