Enhanced attraction between particles in a bidisperse mixture with random pair-wise interactions

2020 ◽  
Vol 93 (9) ◽  
pp. 895-908
Author(s):  
Madhu Priya ◽  
Prabhat K. Jaiswal
2013 ◽  
Vol 70 (9) ◽  
pp. 1306-1316 ◽  
Author(s):  
Timothy J. Miller

Selectivity and catch comparison studies are important for surveys that use two or more gears to collect relative abundance information. Prevailing model-based analytical methods for studies using a paired-gear design assume a binomial model for the data from each pair of gear sets. Important generalizations include nonparametric smooth size effects and normal random pair and size effects, but current methods for fitting models that account for random smooth size effects are restrictive, and observations within pairs may exhibit extra-binomial variation. I propose a hierarchical model that accounts for random smooth size effects among pairs and for extra-binomial variation within pairs through a conditional beta-binomial distribution. I compared the relative performance of models with different conditional distributions and random-effects assumptions, fit to data on 16 species from an experiment in the US Northwest Atlantic Ocean comparing a new and a retiring vessel. For more than half of the species, conditional beta-binomial models performed better than binomial models, and accounting for random variation among pairs in the relative efficiency was important for all species.
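The extra-binomial (overdispersed) variation that motivates the beta-binomial choice can be illustrated numerically. This is a hedged sketch, not the paper's fitting code: the catch total `n`, relative efficiency `p`, and shape parameters `a`, `b` are hypothetical values chosen so both distributions share the same mean.

```python
# Hedged sketch: a beta-binomial with the same mean as a binomial has
# strictly larger variance, which is what "extra-binomial variation
# within pairs" refers to. All parameter values here are illustrative.
from scipy.stats import binom, betabinom

n = 50      # hypothetical total catch in one pair of gear sets
p = 0.4     # hypothetical relative efficiency of the new vessel

# Binomial: variance is fixed at n*p*(1-p) once the mean is set.
var_binom = binom(n, p).var()

# Beta-binomial with the same mean: pick a, b with a/(a+b) = p.
# A smaller a+b gives more overdispersion.
a, b = 2.0, 3.0
var_bb = betabinom(n, a, b).var()

print(var_binom, var_bb)  # the beta-binomial variance is larger
```

Hierarchical fitting (random smooth size effects among pairs) would sit on top of this conditional distribution; only the overdispersion property is shown here.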


MATEMATIKA ◽  
2019 ◽  
Vol 35 (3) ◽  
Author(s):  
Nurfarah Zulkifli ◽  
Nor Muhainiah Mohd Ali

Let G be a finite group. A random pair of elements x and y in G is said to be co-prime when the greatest common divisor of the orders of x and y is equal to one. Meanwhile, the co-prime graph of a group is defined as the graph whose vertices are the elements of G, with two distinct vertices adjacent if and only if the greatest common divisor of their orders is equal to one. In this paper, the co-prime probability and the co-prime graph, including its type and properties, are determined.
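The definitions above can be made concrete by brute force on a small group. This is a hedged sketch for the cyclic group Z_n only; the ordered-pair convention is my assumption (the paper may count unordered distinct pairs), and the helper names are mine.

```python
# Hedged sketch: co-prime probability and co-prime graph for Z_n.
# "Co-prime" here means gcd(|x|, |y|) = 1, where |x| is the element order.
from math import gcd
from itertools import product

def element_order(k, n):
    """Order of k in the additive group Z_n: n / gcd(k, n)."""
    return n // gcd(k, n)

def coprime_probability(n):
    """P(gcd(|x|, |y|) = 1) over all ordered pairs (x, y) in Z_n."""
    orders = [element_order(k, n) for k in range(n)]
    hits = sum(1 for ox, oy in product(orders, orders) if gcd(ox, oy) == 1)
    return hits / n**2

def coprime_graph_edges(n):
    """Edges of the co-prime graph: distinct x, y with co-prime orders."""
    orders = [element_order(k, n) for k in range(n)]
    return [(x, y) for x in range(n) for y in range(x + 1, n)
            if gcd(orders[x], orders[y]) == 1]

print(coprime_probability(5))    # for prime p, only pairs involving the
                                 # identity qualify: (2p - 1) / p^2
print(coprime_graph_edges(5))    # a star centred on the identity
```

For Z_p with p prime, every non-identity element has order p, so the co-prime graph is a star on the identity, as the output shows.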


1972 ◽  
Vol 37 (3) ◽  
pp. 501-506 ◽  
Author(s):  
John Stillwell

Ever since Spector's brilliant application of measure theory to recursion theory in 1958 [6] it has been realized that measure theory promotes sweeping simplifications in the theory of degrees. Results previously thought to be pathological were shown by Spector, and later Sacks [4], [5], to hold for almost all degrees (“almost all” in the sense of Lebesgue measure), often with much simpler proofs. Good examples of this phenomenon are Spector's demonstration that almost all pairs of sets are of incomparable degree (as an immediate consequence of Fubini's theorem) and Sacks' exquisitely simple deduction from this result that almost every degree is the join of two incomparable degrees (for if a random sequence is decomposed into its even and odd parts, the result is a random pair).

The present paper attempts to vindicate the feeling that almost all degrees behave in a simple manner by showing that if the quantifier in the theory of degrees with ′ (jump), ∪ (join) and ∩ (meet) is taken to be (almost ∀a) instead of (∀a) then the theory is decidable. We are able to use ∩ because it will be shown that if t1, t2 are any terms built from degree variables a1, …, am with ′ and ∪ then t1 ∩ t2 exists for almost all a1, …, am. Thus the “almost all” theory presents a sharp contrast to the standard theory, where ∩ is not always defined (Kleene-Post [1]) and which is known to be undecidable (Lachlan [2]).
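The even/odd decomposition behind Sacks' deduction is purely combinatorial and can be sketched directly: splitting a sequence into its even- and odd-indexed parts yields a pair, and interleaving the pair recovers (computes) the original sequence, which is why the original degree is the join of the pair's degrees.

```python
# Minimal illustration of the even/odd decomposition of a sequence.
# Splitting and interleaving are inverse operations, so each of the
# original sequence and the pair computes the other.
def split_even_odd(s):
    """Return the even-indexed and odd-indexed subsequences of s."""
    return s[0::2], s[1::2]

def interleave(even, odd):
    """Recover the original sequence from its even/odd parts."""
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    out.extend(even[len(odd):] or odd[len(even):])  # leftover element, if any
    return out

bits = [1, 0, 0, 1, 1, 1, 0, 1]
ev, od = split_even_odd(bits)
print(interleave(ev, od) == bits)  # True: the decomposition is lossless
```

The measure-theoretic content (that the two halves of a random sequence form a random, hence incomparable, pair) is of course not captured by this finite sketch.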


Author(s):  
Hongyu Guo ◽  
Yongyi Mao ◽  
Richong Zhang

MixUp (Zhang et al. 2017) is a recently proposed data-augmentation scheme, which linearly interpolates a random pair of training examples and, correspondingly, the one-hot representations of their labels. Training deep neural networks with such additional data has been shown to significantly improve the predictive accuracy of the current art. The power of MixUp, however, is primarily established empirically, and its workings and effectiveness have not been explained in any depth. In this paper, we develop an understanding of MixUp as a form of “out-of-manifold regularization”, which imposes certain “local linearity” constraints on the model’s input space beyond the data manifold. This analysis enables us to identify a limitation of MixUp, which we call “manifold intrusion”. In a nutshell, manifold intrusion in MixUp is a form of under-fitting resulting from conflicts between the synthetic labels of the mixed-up examples and the labels of the original training data. Such a phenomenon usually happens when the parameters controlling the generation of mixing policies are not sufficiently fine-tuned on the training data. To address this issue, we propose a novel adaptive version of MixUp, where the mixing policies are automatically learned from the data using an additional network and objective function designed to avoid manifold intrusion. The proposed regularizer, AdaMixUp, is empirically evaluated on several benchmark datasets. Extensive experiments demonstrate that AdaMixUp improves upon MixUp when applied to the current art of deep classification models.
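The interpolation step of vanilla MixUp is simple enough to sketch. This shows the baseline scheme of Zhang et al. (2017), not the adaptive AdaMixUp variant; the `alpha` value and toy data are illustrative.

```python
# Hedged sketch of vanilla MixUp: draw lambda ~ Beta(alpha, alpha) and
# take the same convex combination of two examples and of their one-hot
# labels. The mixed label is "soft": its entries still sum to 1.
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Return a convex combination of two examples and their one-hot labels."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, x2 = rng.normal(size=4), rng.normal(size=4)
y1 = np.array([1.0, 0.0, 0.0])   # one-hot label, class 0
y2 = np.array([0.0, 1.0, 0.0])   # one-hot label, class 1
xm, ym = mixup(x1, y1, x2, y2)
print(ym)  # soft label between class 0 and class 1
```

"Manifold intrusion" arises exactly when such a mixed point `xm` happens to land on (or near) the data manifold of a third class while carrying the synthetic soft label `ym`.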


2016 ◽  
Vol 78 (3-2) ◽  
Author(s):  
Muhanizah Abdul Hamid ◽  
Nor Muhainiah Mohd Ali ◽  
Nor Haniza Sarmin ◽  
Ahmad Erfanian ◽  
Fadila Normahia Abd Manaf

The commutativity degree of a finite group is the probability that a random pair of elements in the group commute. The n-th power commutativity degree of a group generalizes this notion: it is the probability that the n-th powers of a random pair of elements in the group commute. In this paper, the n-th power commutativity degree is computed for some dihedral groups in the case n equal to 2, called the squared commutativity degree.
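Both degrees can be checked by brute force for small dihedral groups. This is a hedged sketch: the presentation of D_n as pairs (i, f) for r^i s^f with s r = r^(-1) s is standard, but reading the definition as P(x² y² = y² x²) over ordered pairs is my interpretation of the abstract.

```python
# Hedged sketch: brute-force commutativity degrees for the dihedral
# group D_n of order 2n, elements (i, f) representing r^i s^f.
from itertools import product

def dihedral(n):
    return [(i, f) for i in range(n) for f in (0, 1)]

def mul(a, b, n):
    """Multiply r^i s^f by r^j s^g using s r^j = r^(-j) s."""
    (i, f), (j, g) = a, b
    return ((i + (-1) ** f * j) % n, (f + g) % 2)

def commutativity_degree(n):
    """P(xy = yx) over all ordered pairs of D_n; known to be (n+3)/(4n) for odd n."""
    G = dihedral(n)
    hits = sum(1 for x, y in product(G, G) if mul(x, y, n) == mul(y, x, n))
    return hits / len(G) ** 2

def squared_commutativity_degree(n):
    """P(x^2 y^2 = y^2 x^2) over all ordered pairs of D_n (my reading of the definition)."""
    G = dihedral(n)
    sq = [mul(x, x, n) for x in G]
    hits = sum(1 for a, b in product(sq, sq) if mul(a, b, n) == mul(b, a, n))
    return hits / len(G) ** 2

print(commutativity_degree(3))          # D_3 ≅ S_3: 1/2
print(squared_commutativity_degree(3))  # every square lies in the abelian
                                        # rotation subgroup, so 1.0
```

Under this reading the squared degree is 1 for every D_n, since reflections square to the identity and rotations square to rotations; the paper's own definition may differ, which is why the interpretation is flagged above.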


2018 ◽  
Vol 21 (06n07) ◽  
pp. 1850021 ◽  
Author(s):  
GUILLAUME DEFFUANT ◽  
ILARIA BERTAZZI ◽  
SYLVIE HUET

We consider a simple model of agents modifying their opinions of themselves and of others during random pair interactions. Two unexpected patterns emerge: (1) without gossip, starting from zero, agents’ opinions tend to grow and stabilize on average at a positive value; (2) when gossip is introduced, this pattern is inverted: the opinions tend to decrease and stabilize on average at a negative value. We show that these patterns can be explained by the relative influence of a positive bias on self-opinions and a negative bias on opinions about others. Without gossip, the positive bias on self-opinions dominates, leading to a positive average opinion. Gossip amplifies the negative bias about others, which can dominate the positive bias on self-opinions, leading to a negative average opinion.
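A toy simulation can convey the flavour of such pairwise opinion updates. This is loudly hedged: it is NOT the authors' model — every update rule, parameter value, and variable name below is my assumption, chosen only to illustrate how a positive self-bias in random pair interactions can pull average self-opinions upward.

```python
# Toy sketch (not the paper's model): agents hold an opinion matrix
# opinion[i][j] = agent i's opinion about agent j. In each random pair
# interaction, an agent nudges its self-opinion toward the partner's
# view of it plus a positive bias, and nudges its view of the partner
# toward the partner's self-opinion plus a negative bias.
import random

random.seed(1)
N, STEPS = 20, 5000
SELF_BIAS, OTHER_BIAS, RATE = 0.05, -0.03, 0.1

opinion = [[0.0] * N for _ in range(N)]   # all opinions start at zero

for _ in range(STEPS):
    a, b = random.sample(range(N), 2)
    # a updates its self-opinion from b's (positively biased) feedback
    opinion[a][a] += RATE * (opinion[b][a] + SELF_BIAS - opinion[a][a])
    # a updates its opinion of b with a negative bias
    opinion[a][b] += RATE * (opinion[b][b] + OTHER_BIAS - opinion[a][b])

mean_self = sum(opinion[i][i] for i in range(N)) / N
print(mean_self)  # drifts positive when the self-bias dominates
```

In this caricature the positive self-bias outweighs the negative other-bias, reproducing pattern (1); a gossip mechanism strengthening the negative bias (pattern (2)) is not modelled here.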

