uniform distributions
Recently Published Documents


TOTAL DOCUMENTS: 308 (FIVE YEARS 80)

H-INDEX: 23 (FIVE YEARS 4)

Computation ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 142
Author(s):  
Tair Askar ◽  
Bekdaulet Shukirgaliyev ◽  
Martin Lukac ◽  
Ernazar Abdikamalov

Monte Carlo methods rely on sequences of random numbers to obtain solutions to many problems in science and engineering. In this work, we evaluate the performance of different pseudo-random number generators (PRNGs) of the cuRAND library on a number of modern Nvidia GPU cards. As a numerical test, we generate pseudo-random number (PRN) sequences and obtain non-uniform distributions using the acceptance-rejection method. We consider GPU, CPU, and hybrid CPU/GPU implementations. For the GPU, we additionally consider two different implementations using the host and device application programming interfaces (APIs). We study how the performance depends on implementation parameters, including the number of threads per block and the number of blocks per streaming multiprocessor. To achieve the fastest performance, one has to minimize the time consumed by PRNG seed setup and state update. The seed setup time increases with the number of threads, while the PRNG state update time decreases. Hence, the fastest performance is achieved by optimally balancing these opposing effects.
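A minimal sketch of the acceptance-rejection idea used as the numerical test above, written in plain Python/NumPy rather than with the cuRAND host or device APIs; the target density, interval, and sample counts below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def acceptance_rejection(target_pdf, n_samples, x_min, x_max, pdf_max, rng=None):
    """Draw samples from target_pdf on [x_min, x_max] by acceptance-rejection.

    Uniform proposals are accepted with probability target_pdf(x) / pdf_max,
    so the accepted points follow the (non-uniform) target distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    samples = []
    while len(samples) < n_samples:
        # Generate candidate points and acceptance thresholds from uniform PRNs.
        x = rng.uniform(x_min, x_max, size=n_samples)
        u = rng.uniform(0.0, pdf_max, size=n_samples)
        samples.extend(x[u < target_pdf(x)])
    return np.array(samples[:n_samples])

# Example: sample from a truncated Gaussian-like shape on [-3, 3].
pdf = lambda x: np.exp(-0.5 * x**2)
draws = acceptance_rejection(pdf, n_samples=100_000, x_min=-3.0, x_max=3.0, pdf_max=1.0)
print(draws.mean(), draws.std())
```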


2021 ◽  
Vol 70 (1) ◽  
Author(s):  
Hans-Rolf Gregorius ◽  
Elizabeth M. Gillet

Abstract: While evenness is understood to be maximal if all types (species, genotypes, alleles, etc.) are represented equally (via abundance, biomass, area, etc.), its opposite, maximal unevenness, either remains conceptually in the dark or is conceived as the type distribution that minimizes the applied evenness index. The latter approach, however, frequently leads to conceptual inconsistency due to the fact that the minimizing distribution is not specifiable or is monomorphic. The state of monomorphism, however, is indeterminate in terms of its evenness/unevenness characteristics. Indeed, the semantic indeterminacy also shows up in the observation that monomorphism represents a state of pronounced discontinuity for the established evenness indices. This serious conceptual inconsistency is latent in the widely held idea that evenness is an independent component of diversity. As a consequence, the established evenness indices largely appear as indicators of relative polymorphism rather than as indicators of evenness. In order to arrive at consistent measures of evenness/unevenness, it seems indispensable to determine which states are of maximal unevenness and then to assess the position of a given type distribution between states of maximal evenness and maximal unevenness. Since, semantically, unevenness implies inequality among type representations, its maximum is reached if all type representations are equally different. For a given number of types, this situation is realized if the type representations, when ranked in descending order, show equal differences between adjacent types. We term such distributions “stepladders”, as opposed to “plateaus” for uniform distributions. Two approaches to new evenness measures are proposed that reflect different perspectives on the positioning of type distributions between the closest stepladders and the closest plateaus. Their two extremes indicate states of complete evenness and complete unevenness, and the midpoint is postulated to represent the turning point between prevailing evenness and prevailing unevenness. The measures are graphically illustrated by evenness surfaces plotted above frequency simplices for three types, and by transects through evenness surfaces for more types. The approach can be generalized to include variable differences between types (as required in analyses of functional evenness) by simply replacing types with pairs of different types. Pairs, as the new types, can be represented by their abundances, for example, and these can be modified in various ways by the differences between the two types that form the pair. Pair representations thus consist of both the difference between the paired types and their frequency. Omission of pair frequencies leads to conceptual ambiguity. Given this specification of pair representations, their evenness/unevenness can be evaluated using the same indices developed for simple types. Pair evenness then turns out to quantify dispersion evenness.
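A hedged illustration of the two reference shapes named in the abstract: for k types with a fixed total, the "plateau" is the uniform distribution, and one "stepladder" has equal differences between adjacent ranked abundances. The simple normalized distance below is only a stand-in for the authors' actual measures, which are defined relative to the closest stepladders and closest plateaus.

```python
import numpy as np

def plateau(k):
    """Maximally even reference: all k relative abundances equal."""
    return np.full(k, 1.0 / k)

def stepladder(k):
    """One stepladder reference: ranked abundances with equal adjacent differences,
    here the one descending linearly to the smallest positive value."""
    ranks = np.arange(k, 0, -1)            # k, k-1, ..., 1
    return ranks / ranks.sum()             # adjacent differences are all equal

def toy_evenness(p):
    """Illustrative (not the paper's) score: 1 at the plateau, 0 at this stepladder,
    based on Euclidean distances to the two reference shapes."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]
    p = p / p.sum()
    k = len(p)
    d_plateau = np.linalg.norm(p - plateau(k))
    d_step = np.linalg.norm(p - stepladder(k))
    return d_step / (d_step + d_plateau)

print(toy_evenness([0.25, 0.25, 0.25, 0.25]))   # 1.0 (plateau)
print(toy_evenness([0.4, 0.3, 0.2, 0.1]))       # 0.0 (stepladder)
```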


Energies ◽  
2021 ◽  
Vol 14 (23) ◽  
pp. 8067
Author(s):  
Emre Kantar

One of the most important causes of insulation system failure is the breakdown of the interface between two solid dielectrics; understanding the mechanisms governing this breakdown phenomenon is therefore critical. To that end, the primary objective of this work is to investigate and review the practical limitations of the electrical breakdown strength of solid–solid interfaces present in insulating components. Published experimental and theoretical studies on the effects of solid–solid interfaces are reviewed and discussed, considering macro-, micro-, and nano-scale characteristics. The reviewed literature suggests that solid–solid interfaces in accessories have non-uniform electric field distributions, in contrast to cables, where the distribution is mostly radial and symmetrical. Most studies agree that the elastic modulus (elasticity), radial/tangential pressure, surface smoothness/roughness, and dielectric strength of the ambient environment are the main parameters determining the tangential AC breakdown strength of solid–solid interfaces.


Author(s):  
Andrea Berdondini

ABSTRACT: Any result can be generated randomly, and any random result is useless. Traditional methods define uncertainty as a measure of the dispersion around the true value and are based on the hypothesis that any divergence from uniformity is the result of a deterministic event. The problem with this approach is that even non-uniform distributions can be generated randomly, and the probability of this event rises as the number of hypotheses tested increases. Consequently, there is a risk of considering a random, and therefore non-repeatable, hypothesis as deterministic. Indeed, this way of acting is believed to be a cause of the high number of non-reproducible results. Therefore, we believe that the probability of obtaining an equal or better result randomly is the true uncertainty of the statistical data, because it represents the probability that the data are useful; the validity of any other analysis therefore depends on this parameter.
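A small numerical illustration (with hypothetical numbers, not taken from the article) of why the chance of a random hypothesis looking deterministic grows with the number of hypotheses tested: if a single random hypothesis matches or beats an observed result with probability p, then among k independent random hypotheses the probability that at least one does is 1 - (1 - p)^k.

```python
# Probability that at least one of k random hypotheses gives an equal or better result,
# when each one does so independently with probability p.
def prob_at_least_one(p: float, k: int) -> float:
    return 1.0 - (1.0 - p) ** k

p = 0.01  # hypothetical per-hypothesis probability of a random "success"
for k in (1, 10, 100, 1000):
    print(k, round(prob_at_least_one(p, k), 3))
# 1 -> 0.01, 10 -> 0.096, 100 -> 0.634, 1000 -> ~1.0
```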


2021 ◽  
Vol 21 (11) ◽  
pp. 5659-5665
Author(s):  
P. Sakthivel ◽  
R. Jothi Ramalingam ◽  
D. Pradeepa ◽  
S. Rathika ◽  
Chandra Sekhar Dash ◽  
...  

In the present study, a combustion technique is adopted to study the impact of Mg2+ ion doping on ZnAl2O4 nanoparticles (NPs). L-arginine is used as the fuel component. The Mg2+ ions play a pivotal role in tuning various characteristics of the ZnAl2O4 NPs. The synthesized nanoparticles were characterized by various techniques, namely Fourier transform infrared spectroscopy (FT-IR), X-ray diffraction (XRD), energy dispersive X-ray analysis (EDX), high resolution scanning electron microscopy (HR-SEM), diffuse reflectance spectroscopy (DRS), thermogravimetric/differential thermal analysis (TG-DTA) and vibrating sample magnetometry (VSM). Formation of the single-phase cubic spinel structure of ZnAl2O4 (gahnite) was confirmed by XRD. The average crystallite size, estimated with the Debye-Scherrer equation, ranged from 11.85 nm to 19.02 nm. HR-SEM images showed spherical morphology with uniform distributions. From the band gap studies, the band gap values were found to lie within the 4.66 eV–5.41 eV range. Magnetic measurements confirmed that the ZnAl2O4 and Mg:ZnAl2O4 NPs exhibit superparamagnetic behavior. The obtained results make ZnAl2O4 and Mg:ZnAl2O4 NPs suitable for various optical, catalytic, energy and data storage applications.
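For reference, a hedged sketch of the Scherrer estimate mentioned above; the wavelength, shape factor, and peak width used below are illustrative values, not the ones measured in the study.

```python
import math

def scherrer_crystallite_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Scherrer equation: D = K * lambda / (beta * cos(theta)),
    with beta the peak FWHM in radians and theta the Bragg angle."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative numbers only: Cu K-alpha radiation, a 0.45-degree-wide peak at 2theta = 36.8 degrees.
print(f"D = {scherrer_crystallite_size(0.15406, 0.45, 36.8):.1f} nm")
```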


2021 ◽  
pp. 1-16
Author(s):  
Admir Barolli ◽  
Kevin Bylykbashi ◽  
Ermioni Qafzezi ◽  
Shinji Sakamoto ◽  
Leonard Barolli ◽  
...  

Wireless Mesh Networks (WMNs) are gaining a lot of attention from researchers due to their advantages, such as easy maintenance, low upfront cost and high robustness. Connectivity and stability directly affect the performance of WMNs. However, WMNs have some problems, such as the node placement problem and the hidden terminal problem. In our previous work, we implemented a simulation system, called WMN-PSODGA, that combines Particle Swarm Optimization (PSO) and a Distributed Genetic Algorithm (DGA) to solve the node placement problem in WMNs. In this paper, we compare chi-square and uniform distributions of mesh clients for different router replacement methods. The router replacement methods considered are the Constriction Method (CM), Random Inertia Weight Method (RIWM), Linearly Decreasing Inertia Weight Method (LDIWM), Linearly Decreasing Vmax Method (LDVM) and Rational Decrement of Vmax Method (RDVM). The simulation results show that for the chi-square distribution the mesh routers cover all mesh clients for all router replacement methods. In terms of load balancing, the method that achieves the best performance is RDVM. When using the uniform distribution, the mesh routers do not cover all mesh clients, but this distribution shows good load balancing for four router replacement methods, with RIWM showing the best performance. The only method that shows poor performance for this distribution is LDIWM. However, since not all mesh clients are covered when using the uniform distribution, the best scenario is the chi-square distribution of mesh clients with RDVM as the router replacement method.
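A minimal sketch of the two client placements being compared, using NumPy; the grid size, client count, and chi-square scaling are arbitrary assumptions for illustration, and WMN-PSODGA itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, grid = 48, 32.0   # illustrative instance size, not the paper's

# Uniform distribution: clients spread evenly over the whole grid.
uniform_clients = rng.uniform(0.0, grid, size=(n_clients, 2))

# Chi-square distribution: clients concentrated toward one corner of the grid,
# scaled and clipped so all positions stay inside the considered area.
chi2_clients = np.clip(rng.chisquare(df=4, size=(n_clients, 2)) * grid / 16.0, 0.0, grid)

print(uniform_clients.mean(axis=0), chi2_clients.mean(axis=0))
```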


2021 ◽  
Vol 71 (5) ◽  
pp. 1309-1318
Author(s):  
Abbas Eftekharian ◽  
Morad Alizadeh

Abstract: The problem of finding optimal tests in the family of uniform distributions is investigated. The general forms of the uniformly most powerful and generalized likelihood ratio tests are derived. Moreover, the problem of finding the uniformly most powerful unbiased test for testing a two-sided hypothesis in the presence of a nuisance parameter is investigated, and it is shown that such a test is equivalent to the generalized likelihood ratio test for the same problem. A simulation study is performed to evaluate the power functions of the tests.
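For context, a hedged note on the classical single-sample case, a standard textbook result rather than the general form derived in the article: for $X_1,\dots,X_n$ i.i.d. $U(0,\theta)$ and the two-sided problem $H_0\colon \theta=\theta_0$ versus $H_1\colon \theta\neq\theta_0$, a uniformly most powerful level-$\alpha$ test exists and rejects exactly when

$$X_{(n)} > \theta_0 \quad\text{or}\quad X_{(n)} \le \theta_0\,\alpha^{1/n}, \qquad X_{(n)} = \max_{1\le i\le n} X_i,$$

since the first event never occurs under $H_0$ and the second has probability $(\alpha^{1/n})^n = \alpha$ under $H_0$.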


Molecules ◽  
2021 ◽  
Vol 26 (18) ◽  
pp. 5503
Author(s):  
Elena Bartolomé ◽  
Ana Arauzo ◽  
Sergio Herce ◽  
Anna Palau ◽  
Narcis Mestres ◽  
...  

The synthesis of a terbium-based 2D metal–organic framework (MOF), of formula [Tb(MeCOO)(PhCOO)2] (1), a crystalline material formed by neutral nanosheets held together by van der Waals interactions, is presented. The material can be easily exfoliated by sonication and deposited onto different substrates. Uniform distributions of Tb-2D MOF flakes onto silicon were obtained by spin-coating. We report the luminescent and magnetic properties of the deposited flakes compared with those of the bulk. Complex 1 is luminescent in the visible and has a sizeable quantum yield of QY = 61% upon excitation at 280 nm. Photoluminescence measurements performed using a micro-Raman setup allowed us to characterize the luminescence spectra of individual flakes on silicon. Magnetization measurements of flakes on silicon with the applied magnetic field in-plane and out-of-plane display anisotropy. AC susceptibility measurements show that 1 in bulk exhibits field-induced slow relaxation of the magnetization through two relaxation paths; the slower one, with a relaxation time of τlf ≈ 0.5 s, is assigned to a direct-process mechanism. The reported exfoliation of lanthanide 2D-MOFs onto substrates is an attractive approach for the development of multifunctional materials and devices for different applications.


Author(s):  
U. Dobramysl ◽  
D. Holcman

We develop a computational approach to locate the source of a steady-state gradient of diffusing particles from the fluxes through narrow windows distributed either on the boundary of a three-dimensional half-space or on a sphere. This approach is based on solving the mixed-boundary stationary diffusion equation with the Neumann–Green's function. The method of matched asymptotic expansions enables the computation of the probability fluxes. To explore the range of validity of this expansion, we develop a fast analytical-Brownian numerical scheme. This scheme accelerates the simulation time by avoiding the explicit computation of Brownian trajectories in the infinite domain. The results obtained from our derived analytical formulae and the fast numerical simulation scheme agree over a large range of parameters. Using the analytical representation of the particle fluxes, we show how to reconstruct the location of the point source. Furthermore, we investigate the uncertainty in the source reconstruction due to additive fluctuations present in the fluxes. We also study the influence of various window configurations (clustered versus uniform distributions) on recovering the source position. Finally, we discuss possible applications to cell navigation in biology.
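A hedged toy version of the reconstruction step, assuming only the leading-order behaviour that the flux through a small window decays roughly like the inverse distance to the source; the paper uses the full matched-asymptotics expansion with the Neumann–Green's function, and the window positions, noise level, and normalization below are made-up assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical narrow windows on the plane z = 0 and a true source above it.
windows = rng.uniform(-5.0, 5.0, size=(20, 2))
windows = np.column_stack([windows, np.zeros(len(windows))])
true_source = np.array([1.0, -2.0, 3.0])

def model_fluxes(source, windows):
    """Leading-order model: flux proportional to inverse source-window distance."""
    return 1.0 / np.linalg.norm(windows - source, axis=1)

# Simulated measurements with additive fluctuations.
measured = model_fluxes(true_source, windows) * (1.0 + 0.02 * rng.standard_normal(len(windows)))

def misfit(source):
    f = model_fluxes(source, windows)
    # Compare normalized flux patterns so the unknown source strength drops out.
    return np.sum((f / f.sum() - measured / measured.sum()) ** 2)

fit = minimize(misfit, x0=np.array([0.0, 0.0, 1.0]), method="Nelder-Mead")
print("recovered source:", np.round(fit.x, 2))
```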

