MINIMIZING THE COMPLETE INFLUENCE TIME IN A SOCIAL NETWORK WITH STOCHASTIC COSTS FOR INFLUENCING NODES

Author(s):
Yaodong Ni
Qiaoni Shi

In this paper, we study the problem of targeting a set of individuals to trigger a cascade of influence in a social network so that the influence diffuses to all individuals in the minimum time, given that the cost of initially influencing each individual is random and that the available budget is limited. We adopt the incremental chance model to characterize the diffusion of influence and propose three stochastic programming models corresponding to three different decision criteria. A modified greedy algorithm is presented to solve the proposed models, which can flexibly trade off between solution performance and computational complexity. Experiments performed on random graphs show that the presented algorithm is effective.
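
As a rough illustration of budgeted greedy seed selection for minimizing complete influence time (a minimal sketch under simplified assumptions: a generic probabilistic diffusion instead of the incremental chance model, and mean costs in place of the stochastic costs; not the authors' modified greedy algorithm):

```python
# Hypothetical sketch: greedily pick seeds by expected-time reduction per unit
# mean cost, subject to a budget. Diffusion model, costs, and graph are invented.
import random

def simulate_complete_time(adj, seeds, p=0.3, horizon=50, rng=None):
    """Return the round at which every node is influenced (horizon if never)."""
    rng = rng or random.Random(0)
    active = set(seeds)
    for t in range(1, horizon + 1):
        if len(active) == len(adj):
            return t - 1
        newly = set()
        for u in active:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    newly.add(v)
        if not newly:
            return horizon          # diffusion died out before full coverage
        active |= newly
    return horizon

def expected_time(adj, seeds, runs=200):
    """Monte Carlo estimate of the expected complete influence time."""
    rng = random.Random(1)
    return sum(simulate_complete_time(adj, seeds, rng=rng) for _ in range(runs)) / runs

def greedy_seeds(adj, mean_cost, budget, runs=200):
    """Add the affordable seed with the best expected-time reduction per unit cost."""
    seeds, spent = set(), 0.0
    while True:
        base = expected_time(adj, seeds, runs)
        best, best_gain = None, 0.0
        for v in adj:
            if v in seeds or spent + mean_cost[v] > budget:
                continue
            gain = (base - expected_time(adj, seeds | {v}, runs)) / mean_cost[v]
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:
            return seeds
        seeds.add(best)
        spent += mean_cost[best]

# usage on an 8-node ring with assumed mean influencing costs and a budget of 3
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
cost = {i: 1.0 + 0.1 * i for i in range(8)}
print(greedy_seeds(adj, cost, budget=3.0))
```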

2020
Vol 4 (02)
pp. 34-45
Author(s):
Naufal Dzikri Afifi
Ika Arum Puspita
Mohammad Deni Akbar

The Shift to The Front II Komplek Sukamukti Banjaran project is one of the projects implemented by a company engaged in telecommunications. Every project, including Shift to The Front II Komplek Sukamukti Banjaran, has a time limit specified in the contract. Project scheduling plays an important role in predicting both the cost and the duration of a project, and every project should be completed on or before the time specified in the contract. Delay in a project can be anticipated by accelerating the completion duration using the crashing method with the application of linear programming; linear programming supports the crashing calculation because without it the iteration would have to be repeated manually. The objective function in this scheduling is to minimize cost. This study aims to find a trade-off between cost and the minimum time expected to complete the project. The acceleration of the project duration was carried out by adding 4 hours, 3 hours, 2 hours, or 1 hour of overtime work. The normal duration of this project is 35 days with a service fee of Rp. 52,335,690. From the results of the crashing analysis, the chosen alternative is to add 1 hour of overtime, giving a duration of 34 days and a total service cost of Rp. 52,375,492. This acceleration affects the entire project because Shift to The Front II covers 33 different locations; if all of these locations can be accelerated, the completion duration of the entire project will be shortened effectively.
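
To illustrate how linear programming removes the repeated manual iteration in crashing, here is a minimal sketch with an invented three-activity network, durations, crash limits, and crash costs (not the Shift to The Front II data), minimizing total crash cost subject to a contractual deadline on every path:

```python
# Illustrative crashing LP on made-up data: choose how many days to crash each
# activity so that every path meets the deadline at minimum total crash cost.
from scipy.optimize import linprog

# activities: A and B can start immediately, both precede C (assumed network)
normal_dur = {"A": 5, "B": 6, "C": 4}      # days (assumed)
crash_cost = {"A": 40, "B": 30, "C": 60}   # cost per day crashed (assumed units)
max_crash  = {"A": 2,  "B": 2,  "C": 1}
deadline   = 8                             # contractual deadline in days (assumed)
acts = ["A", "B", "C"]

# decision variables: y_i = days crashed on activity i; objective = crash cost
c = [crash_cost[a] for a in acts]

# path constraints: (dur_A - y_A) + (dur_C - y_C) <= deadline, and likewise B-C
paths = [["A", "C"], ["B", "C"]]
A_ub, b_ub = [], []
for path in paths:
    A_ub.append([-1.0 if a in path else 0.0 for a in acts])
    b_ub.append(deadline - sum(normal_dur[a] for a in path))

bounds = [(0, max_crash[a]) for a in acts]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(dict(zip(acts, res.x)), "total crash cost:", res.fun)
```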


1996
Vol 05 (01)
pp. 27-72
Author(s):
R. Clark
C. Grossner
T. Radhakrishnan

Resolving disparate viewpoints via planning is an important aspect of the distributed problem solving performed by the agents in Cooperative Intelligent Information Systems. We propose a distributed planning protocol called Consensus. Consensus specifies a methodology by which agents exchange information indicating their preferred actions, integrate these different sets of actions, resolve any conflicts that exist, and choose a joint set of actions. Within the framework of the Consensus protocol, two different heuristics are proposed for resolving conflicts that avoid the computational complexity of an exhaustive search. An implementation of the Consensus protocol is analyzed experimentally to assess its performance in resolving conflicts. The experimental results indicate the trade-off between the cost of planning and the quality of the plan produced with respect to the heuristics used for resolving conflicts.
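
A toy sketch of such a conflict-resolution step (hypothetical data structures and a simple greedy preference rule invented for illustration, not the heuristics actually specified by the Consensus protocol):

```python
# Greedy stand-in for conflict resolution: keep the action with the highest
# combined preference across agents unless it conflicts with one already kept,
# instead of exhaustively searching all conflict-free subsets.
def resolve_conflicts(preferences, conflicts):
    """preferences: {agent: {action: score}}; conflicts: set of frozensets of actions."""
    combined = {}
    for prefs in preferences.values():
        for action, score in prefs.items():
            combined[action] = combined.get(action, 0.0) + score

    joint = set()
    for action in sorted(combined, key=combined.get, reverse=True):
        if all(frozenset({action, kept}) not in conflicts for kept in joint):
            joint.add(action)        # accept unless it clashes with a kept action
    return joint

# usage: two agents whose proposed plans contain one mutually exclusive pair
prefs = {"agent1": {"open_valve": 3, "log_state": 1},
         "agent2": {"close_valve": 2, "log_state": 2}}
conflicts = {frozenset({"open_valve", "close_valve"})}
print(resolve_conflicts(prefs, conflicts))   # keeps open_valve and log_state
```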


2020
Vol 12 (7)
pp. 2767
Author(s):
Víctor Yepes
José V. Martí
José García

The optimization of cost and CO2 emissions in earth-retaining walls is of relevance, since these structures are often used in civil engineering. The optimization of cost is essential for the competitiveness of the construction company, and the optimization of emissions is relevant to the environmental impact of construction. To address the optimization, black hole metaheuristics were used, along with a discretization mechanism based on min–max normalization. The stability of the algorithm was evaluated with respect to the solutions obtained, and the steel and concrete quantities obtained in both optimizations were analyzed. Additionally, the geometric variables of the structure were compared. Finally, the results were compared with those of another algorithm that solved the same problem. The results show that there is a trade-off between the use of steel and concrete: the solutions that minimize CO2 emissions favor concrete more than those that minimize cost. On the other hand, when comparing the geometric variables, most remain similar in both optimizations except for the distance between buttresses. When compared with the other algorithm, the black hole algorithm shows good optimization performance.
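
The discretization idea can be sketched as follows; the design variables and catalogs are invented for illustration, and the black hole search itself is omitted:

```python
# Minimal sketch of a min-max-normalization discretization step used to adapt a
# continuous metaheuristic (such as the black hole algorithm) to a discrete
# design space. Variable catalogs below are assumptions, not the wall data.
import numpy as np

def minmax_discretize(position, catalogs):
    """Map a continuous position vector onto per-variable discrete catalogs."""
    position = np.asarray(position, dtype=float)
    lo, hi = position.min(), position.max()
    # min-max normalize the continuous position into [0, 1]
    norm = (position - lo) / (hi - lo) if hi > lo else np.zeros_like(position)
    chosen = []
    for x, catalog in zip(norm, catalogs):
        idx = int(round(x * (len(catalog) - 1)))   # nearest catalog index
        chosen.append(catalog[idx])
    return chosen

# e.g. three design variables: footing width (m), wall thickness (m), steel grade
catalogs = [
    [2.0, 2.5, 3.0, 3.5, 4.0],
    [0.30, 0.35, 0.40, 0.45],
    ["B400", "B500"],
]
print(minmax_discretize([0.1, 0.9, 0.4], catalogs))
```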


Author(s):
Vincent E. Castillo
John E. Bell
Diane A. Mollenkopf
Theodore P. Stank

2021
Vol 11 (15)
pp. 7007
Author(s):
Janusz P. Paplinski
Aleksandr Cariow

This article presents an efficient algorithm for computing a 10-point DFT. The proposed algorithm reduces the number of multiplications at the cost of a slight increase in the number of additions in comparison with known algorithms. Using a 10-point DFT for harmonic power system analysis can improve accuracy and reduce errors caused by spectral leakage. This paper compares the computational complexity of an L×10M-point DFT with that of a 2M-point DFT.
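
The reduced-multiplication algorithm itself is not reproduced here, but the naive 10-point DFT that such an algorithm would be verified against can be written directly and checked against numpy's FFT:

```python
# Reference 10-point DFT from the definition (O(N^2) multiplications), used only
# as a correctness baseline; specialized 10-point algorithms reduce this count.
import numpy as np

def dft_naive(x):
    """Direct evaluation of X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N x N twiddle-factor matrix
    return W @ x

x = np.random.default_rng(0).standard_normal(10)
print(np.allclose(dft_naive(x), np.fft.fft(x)))    # True
```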


Mathematics
2021
Vol 9 (9)
pp. 957
Author(s):
Branislav Popović
Lenka Cepova
Robert Cep
Marko Janev
Lidija Krstanović

In this work, we deliver a novel measure of similarity between Gaussian mixture models (GMMs) based on neighborhood preserving embedding (NPE) of the parameter space, which projects the components of the GMMs, assumed to lie close to a lower-dimensional manifold. By doing so, we obtain a transformation from the original high-dimensional parameter space into a much lower-dimensional parameter space. Therefore, computing the distance between two GMMs is reduced to calculating the distance between sets of lower-dimensional Euclidean vectors, taking into account the corresponding weights. A much better trade-off between recognition accuracy and computational complexity is achieved in comparison to measures utilizing distances between Gaussian components evaluated in the original parameter space. The proposed measure is much more efficient in machine learning tasks that operate on large data sets, as in such tasks the required number of Gaussian components is always large. Artificial as well as real-world experiments are conducted, showing a much better trade-off between recognition accuracy and computational complexity for the proposed measure in comparison to all baseline measures of similarity between GMMs tested in this paper.
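
A rough sketch of the overall idea follows, with PCA standing in for NPE and a simple weighted component-pair distance standing in for the paper's measure; the toy GMMs are invented:

```python
# Compare two GMMs in a lower-dimensional parameter space: stack each component's
# parameters into a vector, learn a linear projection from all components, then
# compute a weighted distance between the projected component sets.
import numpy as np

def component_vectors(means, covs):
    """Each component's parameters (mean + flattened covariance) as one vector."""
    return np.hstack([means, covs.reshape(len(means), -1)])

def fit_projection(param_vectors, dim):
    """PCA-style linear projection learned from all components of all GMMs."""
    centered = param_vectors - param_vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:dim]                      # rows span the low-dimensional subspace

def gmm_distance(emb_a, w_a, emb_b, w_b):
    """Weighted average distance between embedded components of two GMMs."""
    d = np.linalg.norm(emb_a[:, None, :] - emb_b[None, :, :], axis=-1)
    return float(w_a @ d @ w_b)

# two toy 2-component GMMs in 2-D
rng = np.random.default_rng(0)
means_a, means_b = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
covs_a = covs_b = np.stack([np.eye(2)] * 2)
w_a = w_b = np.array([0.5, 0.5])

all_params = np.vstack([component_vectors(means_a, covs_a),
                        component_vectors(means_b, covs_b)])
P = fit_projection(all_params, dim=2)
emb_a = component_vectors(means_a, covs_a) @ P.T
emb_b = component_vectors(means_b, covs_b) @ P.T
print(gmm_distance(emb_a, w_a, emb_b, w_b))
```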


2021
Vol 11 (1)
Author(s):
Jeonghyuk Park
Yul Ri Chung
Seo Taek Kong
Yeong Won Kim
Hyunho Park
...  

There have been substantial efforts in using deep learning (DL) to diagnose cancer from digital images of pathology slides. Existing algorithms typically operate by training deep neural networks that are either specialized in specific cohorts or trained on an aggregate of all cohorts when only a few images are available for the target cohort. A trade-off between decreasing the number of models and their cancer detection performance was evident in our experiments with The Cancer Genome Atlas dataset, with the former (cohort-specific) approach achieving higher performance at the cost of having to acquire large datasets from the cohort of interest. Constructing annotated datasets for individual cohorts is extremely time-consuming, and the acquisition cost of such datasets grows linearly with the number of cohorts. Another issue associated with developing cohort-specific models is the difficulty of maintenance: all cohort-specific models may need to be adjusted when a new DL algorithm is to be used, where training even a single model may require a non-negligible amount of computation, or when more data is added to some cohorts. To resolve the sub-optimal behavior of a universal cancer detection model trained on an aggregate of cohorts, we investigated how cohorts can be grouped to augment a dataset without increasing the number of models linearly with the number of cohorts. This study introduces several metrics that measure the morphological similarities between cohort pairs and demonstrates how these metrics can be used to control the trade-off between performance and the number of models.
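
As a purely hypothetical illustration of grouping cohorts by a similarity score over patch-level features (the feature extractor, distance, and threshold are assumptions, not the paper's metrics):

```python
# Score morphological similarity between cohort pairs from patch-level feature
# vectors and merge cohorts whose distance falls below a threshold (union-find).
import numpy as np
from itertools import combinations

def cohort_distance(feats_a, feats_b):
    """Distance between mean feature vectors of two cohorts (a crude proxy)."""
    return float(np.linalg.norm(feats_a.mean(axis=0) - feats_b.mean(axis=0)))

def group_cohorts(features_by_cohort, threshold):
    """Merge cohorts whose pairwise distance falls below the threshold."""
    parent = {c: c for c in features_by_cohort}
    def find(c):
        while parent[c] != c:
            c = parent[c]
        return c
    for a, b in combinations(features_by_cohort, 2):
        if cohort_distance(features_by_cohort[a], features_by_cohort[b]) < threshold:
            parent[find(a)] = find(b)
    groups = {}
    for c in features_by_cohort:
        groups.setdefault(find(c), []).append(c)
    return list(groups.values())

# toy example: three cohorts of 128-D patch features drawn from invented distributions
rng = np.random.default_rng(0)
feats = {"LUAD": rng.normal(0.0, 1, (100, 128)),
         "LUSC": rng.normal(0.1, 1, (100, 128)),
         "BRCA": rng.normal(2.0, 1, (100, 128))}
print(group_cohorts(feats, threshold=5.0))   # LUAD and LUSC grouped, BRCA separate
```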


2020
Vol 15 (1)
pp. 4-17
Author(s):
Jean-François Biasse
Xavier Bonnetain
Benjamin Pring
André Schrottenloher
William Youmans

We propose a heuristic algorithm to solve the underlying hard problem of the CSIDH cryptosystem (and other isogeny-based cryptosystems using elliptic curves with endomorphism ring isomorphic to an imaginary quadratic order 𝒪). Let Δ = Disc(𝒪) (in CSIDH, Δ = −4p for p the security parameter). Let 0 < α < 1/2; our algorithm requires:
- a classical circuit of size $2^{\tilde{O}\left(\log(|\Delta|)^{1-\alpha}\right)}$,
- a quantum circuit of size $2^{\tilde{O}\left(\log(|\Delta|)^{\alpha}\right)}$,
- polynomial classical and quantum memory.
Essentially, we propose to reduce the size of the quantum circuit below the state-of-the-art complexity $2^{\tilde{O}\left(\log(|\Delta|)^{1/2}\right)}$ at the cost of increasing the required classical circuit size. The required classical circuit remains subexponential, which is a superpolynomial improvement over the exponential classical state-of-the-art solutions to these problems. Our method requires polynomial memory, both classical and quantum.
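
Read as a function of α (a restatement of the bounds above, not an additional result), the two costs are $C_{\text{classical}}(\alpha) = 2^{\tilde{O}\left(\log(|\Delta|)^{1-\alpha}\right)}$ and $C_{\text{quantum}}(\alpha) = 2^{\tilde{O}\left(\log(|\Delta|)^{\alpha}\right)}$: decreasing α below 1/2 shrinks the quantum circuit while growing the classical one, and α = 1/2 recovers the previous quantum complexity $2^{\tilde{O}\left(\log(|\Delta|)^{1/2}\right)}$.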


2012
Vol 239-240
pp. 1522-1527
Author(s):
Wen Bo Wu
Yu Fu Jia
Hong Xing Sun

The bottleneck assignment (BA) and generalized assignment (GA) problems and their exact solutions are explored in this paper. Firstly, a determinant elimination (DE) method is proposed, based on a discussion of the time and space complexity of the enumeration method for both the BA and GA problems. An optimization algorithm for the pre-assignment problem is then discussed, and adjustment and transformation of the cost matrix are adopted to reduce the computational complexity of the DE method. Finally, a synthesis method for both the BA and GA problems is presented. Numerical experiments are carried out, and the results indicate that the proposed method is feasible and highly efficient.
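
As a generic point of comparison (a textbook thresholding approach on an invented cost matrix, not the determinant elimination method of the paper), the bottleneck assignment problem can be solved by finding the smallest cost threshold that still admits a perfect matching:

```python
# Bottleneck assignment: minimize the largest single assignment cost by scanning
# candidate thresholds and testing for a perfect matching with augmenting paths.
import numpy as np

def has_perfect_matching(allowed):
    """Kuhn-style augmenting-path test on a boolean allowed[i][j] matrix."""
    n = len(allowed)
    match = [-1] * n                      # match[j] = row assigned to column j
    def try_row(i, seen):
        for j in range(n):
            if allowed[i][j] and j not in seen:
                seen.add(j)
                if match[j] == -1 or try_row(match[j], seen):
                    match[j] = i
                    return True
        return False
    return all(try_row(i, set()) for i in range(n))

def bottleneck_assignment(cost):
    """Smallest threshold t such that a perfect matching exists using costs <= t."""
    cost = np.asarray(cost)
    for t in np.unique(cost):             # candidate thresholds in increasing order
        if has_perfect_matching(cost <= t):
            return float(t)
    raise ValueError("no feasible assignment")

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(bottleneck_assignment(cost))        # 2.0: assign rows 0,1,2 to columns 1,0,2
```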

