A deterministic algorithm for constructing multiple rank-1 lattices of near-optimal size

2021 ◽  
Vol 47 (6) ◽  
Author(s):  
Craig Gross ◽  
Mark A. Iwen ◽  
Lutz Kämmerer ◽  
Toni Volkmer


2013 ◽
pp. 109-135
Author(s):  
Y. Goland

The article refutes the popular belief that the New Economic Policy (NEP) of the 1920s had to be abolished for the purpose of industrialization. It is shown that industrialization started successfully under NEP, although for a number of reasons the efficiency of the investments was low. The abolishment of NEP was caused not by the need to accelerate industrialization but by a misguided agricultural policy that stopped the development of farms. The article analyzes the discussion about possible rates of domestic capital formation. In the course of this discussion, a sensible approach to finding the optimal size of investments depending on their efficiency was offered. This approach is still relevant today.


GIS Business ◽  
2019 ◽  
Vol 14 (6) ◽  
pp. 577-585
Author(s):  
T. Vivekanandan ◽  
S. Sachithanantham

In inventory control, suitable models for various real-life systems are constructed with the objective of determining the optimal inventory level. A new type of inventory model using the so-called change-of-distribution property is analyzed in this paper. There are two machines, M1 and M2, in series, and the output of M1 is the input of M2; hence a reserve inventory between M1 and M2 is to be maintained. The method of obtaining the optimal size of the reserve inventory, assuming a cost of excess inventory, a cost of shortage, and a constant rate of consumption by M2, has already been attempted. In this paper, it is assumed that the repair time of M1 is a random variable whose distribution undergoes a change after the truncation point X0, which is itself taken to be a random variable. The optimal size of the reserve inventory is obtained under these assumptions. Numerical illustrations are also provided.


2020 ◽  
Vol 12 ◽  
Author(s):  
Alexandra Atyaksheva ◽  
Yermek Sarsikeyev ◽  
Anastasia Atyaksheva ◽  
Olga Galtseva ◽  
Alexander Rogachev

Aims: The main goals of this research are the exploration of energy-efficient building materials in which natural materials are replaced with industrial waste, and the development of the theory and practice of obtaining light and ultra-light gravel materials based on mineral binders and waste dump ash-and-slag mixtures of hydraulic removal. Background: This article presents experimental data on the conditions of formation of gravel materials containing hollow aluminosilicate microspheres, with the possibility of obtaining the optimum structure and properties depending on humidity when using various binders. The possibility of optimizing the physical-mechanical properties of such composite materials is also considered. Objective: The composite material contains hollow aluminosilicate microspheres. Method: The study is based on the separation of the power and heat-engineering functions of the material. The method uses a structural optimality factor that takes into account the primary and secondary stress fields of the structural gravel material. This indicates the possibility of obtaining gravel material with the most uniform distribution of nano- and microparticles and the formation of stable matrices that minimize stress concentrations. Experiments show that the thickness of the cement shell, which performs the power functions, is directly related to the size of the raw granules. At the same time, regardless of the type of binder, the cement crust forms faster on granules of larger diameter as the moisture content increases. Results: The conditions for the formation of gravel composite materials containing hollow aluminosilicate microspheres were studied. The optimal structure and properties of the gravel composite material were obtained, and the dependence of the strength function on humidity and the type of binder was investigated.
The optimal size and shape of the binary form of the gravel material containing a hollow aluminosilicate microsphere, with a minimum thickness of the cement shell and a maximum strength function, were obtained. Conclusion: The resulting structure makes it possible to separate the power and heat-engineering functions in the material and to minimize the content of excited-environment centers.


Author(s):  
Kai Han ◽  
Shuang Cui ◽  
Tianshuai Zhu ◽  
Enpei Zhang ◽  
Benwei Wu ◽  
...  

Data summarization, i.e., selecting representative subsets of manageable size out of massive data, is often modeled as a submodular optimization problem. Although there exist extensive algorithms for submodular optimization, many of them incur large computational overheads and hence are not suitable for mining big data. In this work, we consider the fundamental problem of (non-monotone) submodular function maximization with a knapsack constraint, and propose simple yet effective and efficient algorithms for it. Specifically, we propose a deterministic algorithm with approximation ratio 6 and a randomized algorithm with approximation ratio 4, and show that both of them can be accelerated to achieve nearly linear running time at the cost of weakening the approximation ratio by an additive factor of ε. We then consider a more restrictive setting without full access to the whole dataset, and propose streaming algorithms with approximation ratios of 8+ε and 6+ε that make one pass and two passes over the data stream, respectively. As a by-product, we also propose a two-pass streaming algorithm with an approximation ratio of 2+ε when the considered submodular function is monotone. To the best of our knowledge, our algorithms achieve the best performance bounds compared to the state-of-the-art approximation algorithms with efficient implementation for the same problem. Finally, we evaluate our algorithms in two concrete submodular data summarization applications for revenue maximization in social networks and image summarization, and the empirical results show that our algorithms outperform the existing ones in terms of both effectiveness and efficiency.
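To make the problem setting concrete, here is a minimal sketch of the classical density-greedy baseline for submodular maximization under a knapsack (budget) constraint. This is not the accelerated algorithms proposed in the paper; the toy coverage objective, item costs, and function names are all illustrative assumptions.

```python
# Density greedy for submodular maximization with a knapsack constraint.
# Classical baseline only, NOT the paper's algorithms; the coverage
# objective and all data below are hypothetical.

def coverage(selected, sets):
    """Monotone submodular objective: number of distinct elements covered."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def density_greedy(sets, costs, budget):
    """Repeatedly add the affordable item with the best gain-per-cost ratio."""
    chosen, spent = [], 0.0
    remaining = set(range(len(sets)))
    while remaining:
        base = coverage(chosen, sets)
        best, best_ratio = None, 0.0
        for i in remaining:
            if spent + costs[i] > budget:
                continue  # item no longer fits in the budget
            gain = coverage(chosen + [i], sets) - base
            ratio = gain / costs[i]
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            break  # nothing affordable adds value
        chosen.append(best)
        spent += costs[best]
        remaining.discard(best)
    return chosen

# Toy instance: four candidate sets, the large one is too expensive.
sets = [{1, 2, 3}, {3, 4}, {5}, {1, 2, 3, 4, 5, 6}]
costs = [1.0, 1.0, 1.0, 4.0]
picked = density_greedy(sets, costs, budget=3.0)
```

For non-monotone objectives this plain greedy has no constant-factor guarantee, which is precisely the gap the paper's deterministic (ratio 6) and randomized (ratio 4) algorithms address.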


2020 ◽  
Vol 7 (Supplement_1) ◽  
pp. S300-S300
Author(s):  
Jeffrey Rewley

Abstract Background: In the early stages of a novel pandemic, testing is simultaneously in high need but low supply, making efficient use of tests of paramount importance. One approach to improving the efficiency of tests is to mix samples from multiple individuals, only testing individuals when the pooled sample returns a positive. Methods: I build on current models, which assume patients' sero-status is independent, by allowing for correlation between consecutive tests (e.g., if a family were all infected and were all tested together). In this model, I simulate 10,000 patients being tested in sequence, with population sero-prevalence ranging from 1% to 25%, batch sizes from 3 to 10, and an increased probability of consecutive infections ranging from 0% to 50%. Results: I find that as the likelihood of consecutive infected patients increases, the efficiency of specimen pooling increases. As well, the optimal batch size increases in the presence of clustered sequences of infected patients. [Figure: heat map showing how the number of tests needed falls as population prevalence and correlation between cases change; red indicates no reduction in the number of tests, blue a near-100% reduction, with intermediate colors indicating intermediate fractions.] Conclusion: This analysis indicates that further improvements in specimen-pooling efficiency can be gained by taking advantage of the pattern of patient testing. Disclosures: Jeffrey Rewley, PhD, MS, American Board of Internal Medicine (Employee)
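The independence baseline that this study extends can be sketched in a few lines: under simple (Dorfman) pooling, one test is run per batch, and every member is retested only if the pool is positive. The function names below are illustrative, and the calculation assumes independent infections, i.e., the correlation structure the abstract adds is deliberately absent.

```python
# Expected tests per person under Dorfman pooled testing with independent
# infections -- the baseline the study extends with correlated cases.
# Function and parameter names are illustrative.

def expected_tests_per_person(prevalence, batch_size):
    """One pooled test per batch; if positive, retest every member."""
    p_pool_negative = (1.0 - prevalence) ** batch_size
    expected_tests = 1.0 + batch_size * (1.0 - p_pool_negative)
    return expected_tests / batch_size

def optimal_batch_size(prevalence, max_batch=10):
    """Batch size (>= 2) minimizing expected tests per person."""
    return min(range(2, max_batch + 1),
               key=lambda b: expected_tests_per_person(prevalence, b))
```

At 1% prevalence, pooling needs roughly a fifth of a test per person, while at 25% prevalence the saving nearly vanishes, which is why the optimal batch size shrinks as prevalence rises.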


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
N. C. Angastiniotis ◽  
S. Christopoulos ◽  
K. C. Petallidou ◽  
A. M. Efstathiou ◽  
A. Othonos ◽  
...  

Abstract: A bulk-scale process is implemented for the production of nanostructured film composites comprising unary or multi-component metal oxide nanoparticles dispersed in a suitable polymer matrix. The as-received nanoparticles, namely Al₂O₃, SiO₂ and TiO₂ and their binary combinations, are treated by specific chemical and mechanical processes in order to be suspended at the optimal size and composition. Subsequently, a polymer extrusion technique is employed to fabricate each film, while the molten polymer is mixed with the treated metal oxide nanoparticles. Transmission and reflection measurements are performed to map the optical properties of the fabricated nanostructured films in the UV, VIS and IR. The results substantiate the capability of the overall methodology to regulate the optical properties of the films depending on the type of nanoparticle formation, which can be adjusted both in size and composition.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Luca Gamberi ◽  
Yanik-Pascal Förster ◽  
Evan Tzanis ◽  
Alessia Annibale ◽  
Pierpaolo Vivo

Abstract: An important question in representative democracies is how to determine the optimal parliament size of a given country. According to an old conjecture, known as the cubic root law, there is a fairly universal power-law relation, with an exponent equal to 1/3, between the size of an elected parliament and the country’s population. Empirical data in modern European countries support such universality but are consistent with a larger exponent. In this work, we analyse this intriguing regularity using tools from complex networks theory. We model the population of a democratic country as a random network, drawn from a growth model, where each node is assigned a constituency membership sampled from an available set of size D. We calculate analytically the modularity of the population and find that its functional relation with the number of constituencies is strongly non-monotonic, exhibiting a maximum that depends on the population size. The criterion of maximal modularity allows us to predict that the number of representatives should scale as a power-law in the size of the population, a finding that is qualitatively confirmed by the empirical analysis of real-world data.
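The cubic root law itself is easy to evaluate numerically. The sketch below computes the predicted chamber size under an exponent of exactly 1/3; the population figures are rough, illustrative approximations and not the paper's dataset, which supports a somewhat larger exponent.

```python
# Quick numerical illustration of the cubic root law: parliament size
# scaling as population ** (1/3). Population figures are rough,
# illustrative approximations, not the paper's empirical data.

populations = {
    "Iceland": 370_000,
    "Denmark": 5_900_000,
    "Germany": 83_000_000,
}

def cube_root_prediction(population):
    """Predicted chamber size under an exponent of exactly 1/3."""
    return round(population ** (1 / 3))

for country, pop in populations.items():
    print(country, cube_root_prediction(pop))
```

Because the empirical exponent found for modern European countries exceeds 1/3, predictions like these systematically undershoot for large populations, which is the regularity the modularity-based model seeks to explain.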

