ORBIT CHANGES UNDER THE SMALL CONSTANT DECELERATION

2016 ◽  
Vol 22 (6) ◽  
pp. 20-25 ◽  
Author(s):  
A.I. Maslova ◽  
A.V. Pirozhenko ◽  
2010 ◽  
Vol 156-157 ◽  
pp. 1702-1707
Author(s):  
Xiang Wen Cheng ◽  
Jinchao Liu ◽  
Qi Zhi Ding ◽  
Li Ming Song ◽  
Zhan Lin Wang

This work addresses how to predict the relationship between feed particle size and product size, how to relate granularity to the working parameters of the grinding process, and how to determine the optimum operating parameters. The squeeze-crush model proposed by L. Bass is adopted, together with the idea of dividing the roll surface to represent the uneven extrusion force on the material. Experimental data from field tests are analyzed, and the selection and breakage functions are fitted with MATLAB software to obtain their models; the comminution model is determined by the roller division, and its parameters are obtained from the experimental data. Model analysis shows the relationship between particle breakage and energy absorption: at the same power, smaller particles are broken less. Breakage diminishes as the particle size ratio decreases and tends to a small constant at small size ratios; the breakage function decreases rapidly for size ratios between 0.2 and 0.7. This indicates that energy consumption rises rapidly when crushing particles with a size ratio below 0.2, and that selection diminishes with decreasing particle size. A pressure of 8-9 MPa appears to be the most appropriate operating value.
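As an illustration of the curve-fitting step described above, here is a minimal Python sketch (the study used MATLAB) of fitting a breakage function to size-ratio data by nonlinear least squares. The Austin-type functional form, the parameter names, and the data points are assumptions for the example, not the authors' fitted model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical Austin-type breakage function:
# B(x) = phi * x**gamma + (1 - phi) * x**beta, where x is the particle size ratio.
def breakage(size_ratio, phi, gamma, beta):
    return phi * size_ratio**gamma + (1.0 - phi) * size_ratio**beta

# Illustrative data only: cumulative fraction broken below each size ratio.
ratios = np.array([0.1, 0.2, 0.3, 0.5, 0.7, 0.9])
broken = np.array([0.08, 0.15, 0.24, 0.42, 0.63, 0.88])

# Fit the three parameters by nonlinear least squares.
params, _ = curve_fit(breakage, ratios, broken, p0=[0.5, 0.8, 3.0],
                      bounds=([0.0, 0.0, 1.0], [1.0, 1.0, 10.0]))
print("fitted phi, gamma, beta:", params)
```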


2021 ◽  
Vol 13 (3) ◽  
pp. 1-21
Author(s):  
Suryajith Chillara

In this article, we are interested in understanding the complexity of computing multilinear polynomials using depth-four circuits in which the polynomial computed at every node has individual degree at most r ≥ 1 with respect to each of its variables (referred to as multi-r-ic circuits). The goal of this study is to make progress towards proving superpolynomial lower bounds for general depth-four circuits computing multilinear polynomials, by proving better bounds as the value of r increases. Recently, Kayal, Saha and Tavenas (Theory of Computing, 2018) showed that any depth-four arithmetic circuit of bounded individual degree r computing an explicit multilinear polynomial on n^{O(1)} variables and degree d must have size at least (n/r^{1.1})^{Ω(√d/r)}. This bound, however, deteriorates as the value of r increases. It is a natural question to ask if we can prove a bound that does not deteriorate as the value of r increases, or a bound that holds for a larger regime of r. In this article, we prove a lower bound that does not deteriorate with increasing values of r, albeit for a specific instance of d = d(n) but for a wider range of r. Formally, for all large enough integers n and a small constant η, we show that there exists an explicit polynomial on n^{O(1)} variables and degree Θ(log² n) such that any depth-four circuit of bounded individual degree r ≤ n^η must have size at least exp(Ω(log² n)). This improvement is obtained by suitably adapting the complexity measure of Kayal et al. (Theory of Computing, 2018). This adaptation of the measure is inspired by the complexity measure used by Kayal et al. (SIAM J. Computing, 2017).
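For readability, the two size bounds quoted above can be restated compactly in LaTeX (here size(C) denotes the size of a depth-four circuit C of individual degree at most r):

```latex
% Kayal-Saha-Tavenas (Theory of Computing, 2018), for an explicit degree-d multilinear f:
\mathrm{size}(C) \;\ge\; \left(\frac{n}{r^{1.1}}\right)^{\Omega\!\left(\sqrt{d}/r\right)}
% This article, for d = \Theta(\log^2 n) and all r \le n^{\eta}:
\mathrm{size}(C) \;\ge\; \exp\!\left(\Omega(\log^2 n)\right)
```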


2002 ◽  
Vol 87 (2) ◽  
pp. 1057-1067 ◽  
Author(s):  
Akira Haji ◽  
Mari Okazaki ◽  
Hiromi Yamazaki ◽  
Ryuji Takeda

To assess the functional significance of late inspiratory (late-I) neurons in inspiratory off-switching (IOS), membrane potential and discharge properties were examined in vagotomized, decerebrate cats. During spontaneous IOS, late-I neurons displayed large membrane depolarization and associated discharge of action potentials that started in late inspiration, peaked at the end of inspiration, and ended during postinspiration. Depolarization was decreased by iontophoresis of dizocilpine and eliminated by tetrodotoxin. Stimulation of the vagus nerve or the nucleus parabrachialis medialis (NPBM) also evoked depolarization of late-I neurons and IOS. Waves of spontaneous chloride-dependent inhibitory postsynaptic potentials (IPSPs) preceded membrane depolarization during early inspiration and followed during postinspiration and stage 2 expiration of the respiratory cycle. Iontophoresed bicuculline depressed the IPSPs. Intravenous dizocilpine caused a greatly prolonged inspiratory discharge of the phrenic nerve (apneusis) and suppressed late-inspiratory depolarization as well as early-inspiratory IPSPs, resulting in a small constant depolarization throughout the apneusis. NPBM or vagal stimulation after dizocilpine produced small, stimulus-locked excitatory postsynaptic potentials (EPSPs) in late-I neurons. Neurobiotin-labeled late-I neurons revealed immunoreactivity for glutamic acid decarboxylase as well as N-methyl-d-aspartate (NMDA) receptors. These results suggest that late-I neurons are GABAergic inhibitory neurons, while the effects of bicuculline and dizocilpine indicate that they receive periodic waves of GABAergic IPSPs and glutamatergic EPSPs. The data lead to the conclusion that late-I neurons play an important inhibitory role in IOS. NMDA receptors are assumed to augment and/or synchronize late-inspiratory depolarization and discharge of late-I neurons, leading to GABA release and consequently off-switching of bulbar inspiratory neurons and phrenic motoneurons.


Author(s):  
Matthew Coudron ◽  
Jalex Stark ◽  
Thomas Vidick

Abstract The generation of certifiable randomness is the most fundamental information-theoretic task that meaningfully separates quantum devices from their classical counterparts. We propose a protocol for exponential certified randomness expansion using a single quantum device. The protocol calls for the device to implement a simple quantum circuit of constant depth on a 2D lattice of qubits. The output of the circuit can be verified classically in linear time, and is guaranteed to contain a polynomial number of certified random bits, assuming that the device used to generate the output operated using a (classical or quantum) circuit of sub-logarithmic depth. This assumption contrasts with the locality assumption used for randomness certification based on Bell inequality violation and with more recent proposals for randomness certification based on computational assumptions. Furthermore, to demonstrate randomness generation it is sufficient for a device to sample from the ideal output distribution within constant statistical distance. Our procedure is inspired by recent work of Bravyi et al. (Science 362(6412):308–311, 2018), who introduced a relational problem that can be solved by a constant-depth quantum circuit but provably cannot be solved by any classical circuit of sub-logarithmic depth. We develop the discovery of Bravyi et al. into a framework for robust randomness expansion. Our results lead to a new proposal for demonstrating quantum advantage that has some advantages compared to existing proposals. First, our proposal does not rest on any complexity-theoretic conjectures, but relies on the physical assumption that the adversarial device being tested implements a circuit of sub-logarithmic depth. Second, success on our task can easily be verified in classical linear time. Finally, our task is more noise-tolerant than most other existing proposals, which can only tolerate multiplicative error or require additional conjectures from complexity theory; in contrast, we are able to allow a small constant additive error in total variation distance between the sampled and ideal distributions.
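As a small illustration of the noise-tolerance criterion above, the following Python sketch estimates the total variation (statistical) distance between a device's empirical output distribution and the ideal one. It assumes both are over a common finite outcome set; it is only the distance computation, not the protocol's verifier.

```python
from collections import Counter

def total_variation_distance(samples, ideal):
    """Estimate the TV distance between the empirical distribution of
    `samples` and an ideal distribution given as {outcome: probability}."""
    counts = Counter(samples)
    n = len(samples)
    outcomes = set(counts) | set(ideal)
    return 0.5 * sum(abs(counts.get(o, 0) / n - ideal.get(o, 0.0))
                     for o in outcomes)

# Example: a slightly biased device sampling from an ideal uniform bit.
ideal = {"0": 0.5, "1": 0.5}
samples = ["0"] * 52 + ["1"] * 48
print(total_variation_distance(samples, ideal))  # 0.02
```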


2016 ◽  
Vol 12 (2) ◽  
pp. 75-82 ◽  
Author(s):  
C. Berkman ◽  
M.C. Pereira ◽  
K.B. Nardi ◽  
G.T. Pereira ◽  
O.A.B. Soares ◽  
...  

Little information is available comparing the i-STAT and YSI 2300 Stat Plus devices for determining the lactate concentration [Lac] in dogs undergoing intense exercise. We assessed the reproducibility of the YSI 2300 for quantifying [Lac] in canine blood ([Lac]b) and plasma ([Lac]p) samples. In addition, the i-STAT handheld device was used to quantify [Lac] in dogs subjected to exercise, and the results were compared with those of the YSI 2300. Venous blood samples of Beagle and American Pit Bull Terrier dogs were obtained during intense exercise training on a treadmill. [Lac]p and [Lac]b were quantified using the YSI 2300 instrument to determine the reproducibility of the results. A total of 52 specimens were compared for both plasma and whole blood; 96 samples were used for comparing the devices (YSI 2300 vs i-STAT). Ordinary least products regression, the correlation coefficient, and Bland-Altman plots were used to assess the agreement of the i-STAT device. The relationship between duplicate measurements of both [Lac]b and [Lac]p by the YSI 2300 was strong (r=0.99). A correlation between the data obtained using the i-STAT and YSI 2300 instruments was observed for both [Lac]p (r=0.97) and [Lac]b (r=0.88). The i-STAT exhibited a small constant bias (-0.25 mmol/l) compared to the YSI 2300 ([Lac]b). There were proportional biases of 0.89 mmol/l for [Lac]p and 1.22 mmol/l for [Lac]b when comparing the YSI 2300 vs the i-STAT. We confirmed the reproducibility of the YSI 2300 for canine lactate blood/plasma samples. The results obtained by the i-STAT and YSI 2300 analysers were highly correlated, but a small constant bias was observed between them. The i-STAT device can be used in clinical evaluations, and it is also adequate for designing and monitoring fitness programmes.
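For readers unfamiliar with the agreement analysis used here, this minimal Python sketch shows how the constant bias and 95% limits of agreement in a Bland-Altman comparison are computed; the paired readings below are made up for illustration and are not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Constant bias (mean difference) and 95% limits of agreement
    between two measurement methods, per the usual Bland-Altman approach."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired lactate readings in mmol/l.
istat = [1.1, 2.4, 3.9, 5.2, 7.8]
ysi   = [1.3, 2.7, 4.1, 5.5, 8.0]
bias, loa = bland_altman(istat, ysi)
print(f"bias = {bias:.2f} mmol/l, 95% LoA = ({loa[0]:.2f}, {loa[1]:.2f})")
```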


2021 ◽  
Vol 182 (3) ◽  
pp. 219-242
Author(s):  
Mostafa Haghir Chehreghani ◽  
Albert Bifet ◽  
Talel Abdessalem

Graphs (networks) are an important tool for modelling data in different domains. Real-world graphs are usually directed, where the edges have a direction and are not symmetric. Betweenness centrality is an important index widely used to analyze networks. In this paper, given a directed network G and a vertex r ∈ V(G), we first propose an exact algorithm to compute the betweenness score of r. Our algorithm pre-computes a set ℛ𝒱(r), which is used to prune a huge amount of computations that do not contribute to the betweenness score of r. The time complexity of our algorithm depends on |ℛ𝒱(r)|: it is Θ(|ℛ𝒱(r)| · |E(G)|) for unweighted graphs and Θ(|ℛ𝒱(r)| · |E(G)| + |ℛ𝒱(r)| · |V(G)| log |V(G)|) for weighted graphs with positive weights. |ℛ𝒱(r)| is bounded from above by |V(G)| − 1 and, in most cases, it is a small constant. Then, for the cases where ℛ𝒱(r) is large, we present a simple randomized algorithm that samples from ℛ𝒱(r) and performs computations for only the sampled elements. We show that this algorithm provides an (ɛ, δ)-approximation of the betweenness score of r. Finally, we perform extensive experiments over several real-world datasets from different domains, for several randomly chosen vertices as well as for the vertices with the highest betweenness scores. Our experiments reveal that for estimating the betweenness score of a single vertex, our algorithm significantly outperforms the most efficient existing randomized algorithms in terms of both running time and accuracy. Our experiments also reveal that our algorithm improves on the existing algorithms when one is interested in computing the betweenness values of the vertices in a set of very small cardinality.
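To make the quantity being computed concrete, here is a minimal Python sketch of the betweenness score of a single vertex r in an unweighted directed graph, using the textbook Brandes dependency accumulation. It does not implement the paper's ℛ𝒱(r) pruning or the sampling variant; it only fixes the baseline computation that those techniques accelerate.

```python
from collections import deque, defaultdict

def betweenness_of(G, r):
    """Betweenness score of vertex r in an unweighted directed graph
    G = {v: [out-neighbours]}, via Brandes-style dependency accumulation."""
    score = 0.0
    for s in G:
        if s == r:
            continue
        # BFS from s: shortest-path counts (sigma) and predecessor lists.
        sigma = defaultdict(int)
        sigma[s] = 1
        dist = {s: 0}
        preds = defaultdict(list)
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in G.get(v, []):
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Back-propagate dependencies; keep only the contribution to r.
        delta = defaultdict(float)
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
        score += delta[r]
    return score

# Directed path a -> r -> b: r lies on the only shortest a..b path.
print(betweenness_of({"a": ["r"], "r": ["b"], "b": []}, "r"))  # 1.0
```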


2014 ◽  
Vol 739 ◽  
pp. 263-268 ◽  
Author(s):  
Gregory Gabadadze

2018 ◽  
Vol 28 (4) ◽  
pp. 600-617
Author(s):  
P. V. POBLETE ◽  
A. VIOLA

Thirty years ago, the Robin Hood collision resolution strategy was introduced for open addressing hash tables, and a recurrence equation was found for the distribution of its search cost. Although this recurrence could not be solved analytically, it allowed for numerical computations that, remarkably, suggested that the variance of the search cost approached a value of 1.883 when the table was full. Furthermore, by using a non-standard mean-centred search algorithm, this would imply that searches could be performed in expected constant time even in a full table.

In spite of the time elapsed since these observations were made, no progress has been made in proving them. In this paper we introduce a technique to work around the intractability of the recurrence equation by solving instead an associated differential equation. While this does not provide an exact solution, it is sufficiently powerful to prove a bound of π²/3 for the variance, and thus obtain a proof that the variance of Robin Hood is bounded by a small constant for load factors arbitrarily close to 1. As a corollary, this proves that the mean-centred search algorithm runs in expected constant time.

We also use this technique to study the performance of Robin Hood hash tables under a long sequence of insertions and deletions, where deletions are implemented by marking elements as deleted. We prove that, in this case, the variance is bounded by 1/(1−α), where α is the load factor.

To model the behaviour of these hash tables, we use a unified approach that we apply also to study the First-Come-First-Served and Last-Come-First-Served collision resolution disciplines, both with and without deletions.
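For concreteness, here is a minimal Python sketch of the Robin Hood insertion discipline itself: on a collision, the element that is farther from its home slot keeps the position ("take from the rich"). It is an illustrative toy, without the resizing or the deletion-marking analyzed in the paper.

```python
def robin_hood_insert(table, key, hashf=hash):
    """Insert `key` into an open-addressing table (a list, None = empty slot)
    using the Robin Hood collision resolution strategy."""
    m = len(table)
    pos = hashf(key) % m
    probe = 0  # displacement of the element currently being carried
    while True:
        resident = table[pos]
        if resident is None:
            table[pos] = key
            return
        # How far is the resident from its own home slot?
        resident_probe = (pos - hashf(resident)) % m
        if resident_probe < probe:
            # Resident is "richer" (closer to home): evict it, carry it onward.
            table[pos], key, probe = key, resident, resident_probe
        pos = (pos + 1) % m
        probe += 1

table = [None] * 8
for k in (3, 11, 19):      # all three keys hash to slot 3 (hash(int) == int)
    robin_hood_insert(table, k)
print(table)               # [None, None, None, 3, 11, 19, None, None]
```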


2017 ◽  
Vol 24 (s1) ◽  
pp. 213-223 ◽  
Author(s):  
Pawel Śliwiński

Abstract In this paper, volumetric losses in a hydraulic motor supplied with water and with mineral oil (two liquids having significantly different viscosity and lubricating properties) are described and compared. The experimental tests were conducted using an innovative hydraulic satellite motor designed to work with different liquids, including water. The sources of leakage in this motor are also characterized and described. On this basis, a mathematical model of volumetric losses and a model of effective rotational speed have been developed and presented. The volumetric losses calculated from the model are compared with the experimental results; the difference is not more than 20%. Furthermore, it has been demonstrated that the model describes the volumetric losses well for the motor supplied with either water or oil. Experimental studies have shown that the volumetric losses in the motor supplied with water can be as much as three times greater than those in the motor supplied with oil. It has also been shown that, for a small constant stream of water, the speed of the motor is reduced by as much as half compared with the speed of the motor supplied with the same stream of oil.
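As a point of reference for the quantities discussed above, the following Python sketch applies the generic flow balance for a hydraulic motor, supplied flow = displacement × speed + volumetric losses, to show how larger leakage (as with low-viscosity water) lowers the effective speed. The numbers are illustrative assumptions, not the paper's fitted loss model, which depends on pressure and the liquid's viscosity.

```python
def effective_speed(flow_lpm, displacement_cc_per_rev, leakage_lpm):
    """Effective rotational speed (rev/min) from the textbook balance
    Q_supplied = q * n + Q_loss, solved for n."""
    net_flow_cc_per_min = (flow_lpm - leakage_lpm) * 1000.0
    return net_flow_cc_per_min / displacement_cc_per_rev

# Same supplied stream; hypothetical leakage values for oil vs water.
print(effective_speed(20.0, 400.0, 1.5))   # oil:   46.25 rpm
print(effective_speed(20.0, 400.0, 10.0))  # water: 25.0 rpm (roughly half)
```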


1995 ◽  
Vol 74 (6) ◽  
pp. 2415-2426 ◽  
Author(s):  
K. Moradmand ◽  
M. D. Goldfinger

1. The purpose of this work was to determine whether computed temporally coded axonal information generated by Poisson-process stimulation was modified during long-distance propagation, as originally suggested by S. A. George. Propagated impulses were computed with the use of the Hodgkin-Huxley equations and cable theory to simulate excitation and current spread in 100-microns-diam unmyelinated axons whose total length was 8.1 cm (25 λ) or 101.4 cm (312.5 λ). Differential equations were solved numerically, with the use of trapezoidal integration over small, constant electrotonic and temporal steps (0.125 λ and 1.0 microsecond, respectively). 2. Using dual-pulse stimulation, we confirmed that for interstimulus intervals between 5 and 11 ms, the conduction velocity of the second of a short-interval pair of impulses was slower than that of the first impulse. Further, with sufficiently long propagation distance, the second impulse's conduction velocity increased steadily and eventually approached that of the first impulse. This effect caused a spatially varying interspike interval: as propagation proceeded, the interspike interval increased and eventually approached stabilization. 3. With Poisson stimulation, the peak amplitude of propagating action potentials varied with interspike interval durations between 5 and 11 ms. Such amplitude attenuation was caused by the incomplete relaxation of parameters n (macroscopic K-conductance activation) and h (macroscopic Na-conductance inactivation) during the interspike period. 4. The stochastic properties of the impulse train became less Poisson-like with propagation distance. In cases of propagation over 99.4 cm, the impulse trains developed marked periodicities in the interevent interval distribution and expectation density function because of the axially modulated transformation of interspike intervals. 5. Despite these changes in impulse train parameters, the arithmetic value of the mean interspike interval did not change as a function of propagation distance. This work showed that, in theory, whereas the pattern of Poisson-like impulse codes was modified during long-distance propagation, their mean rate was conserved.
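As a small companion to points 4 and 5 above, this Python sketch generates a Poisson-like stimulus train with a hard 5-ms refractory floor (mirroring the minimum interspike interval in the simulations) and reports the interspike-interval statistics one would track along the axon. It illustrates only the stimulus/statistics side, not the Hodgkin-Huxley cable computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_train(rate_hz, duration_s, refractory_s=0.005):
    """Event times from a Poisson process with a hard refractory period:
    exponential intervals, each floored by `refractory_s`."""
    t, times = 0.0, []
    while True:
        t += refractory_s + rng.exponential(1.0 / rate_hz)
        if t > duration_s:
            return np.array(times)
        times.append(t)

spikes = poisson_train(rate_hz=50.0, duration_s=10.0)
isi = np.diff(spikes)
# A pure Poisson train has CV = 1; departures from 1 flag the kind of
# pattern change (without a change in mean rate) described above.
print(f"{isi.size} intervals, mean ISI = {isi.mean()*1e3:.1f} ms, "
      f"CV = {isi.std()/isi.mean():.2f}")
```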

