A Model for Dynamic Routing of Multiuser Communication Network

Author(s):  
Rajeshri A. Puranik ◽  
Sapana P. Dubey

In this paper, a dynamic model for competitive routing in a multiuser communication network is presented. Dynamics is introduced by considering the status of the communication network over a period of time and the time dependence of link capacity and availability, and, accordingly, of the cost function. We use game-theoretic concepts to analyze this model. We assume that each user of the communication network can control the amount of flow so as to optimize his gain (or minimize his cost).
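As an illustrative sketch of the setting (not the paper's specific model): two users share two parallel links, and each repeatedly re-splits its fixed demand to minimize its own delay cost, a best-response iteration toward a Nash equilibrium. The M/M/1-style delay function, the link capacities and the demands below are assumed values for the example.

```python
def delay(total_flow, capacity):
    """Delay on a link carrying total_flow; infinite when saturated."""
    return total_flow / (capacity - total_flow) if total_flow < capacity else float("inf")

def user_cost(own, other, caps):
    """A user's cost: its own flow on each link times that link's delay."""
    return sum(o * delay(o + t, c) for o, t, c in zip(own, other, caps))

def best_response(demand, other, caps, grid=200):
    """Best split of `demand` over the two links against the other's flows."""
    candidates = [[demand * i / grid, demand * (grid - i) / grid]
                  for i in range(grid + 1)]
    return min(candidates, key=lambda own: user_cost(own, other, caps))

caps = [10.0, 6.0]                         # assumed link capacities
flows = {1: [2.0, 2.0], 2: [2.0, 2.0]}     # each user routes a demand of 4.0
for _ in range(20):                        # alternate best responses
    flows[1] = best_response(4.0, flows[2], caps)
    flows[2] = best_response(4.0, flows[1], caps)
```

At the (near-)equilibrium, both users send more flow over the higher-capacity link, and neither can lower its own cost by unilaterally re-routing.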

2021 ◽  
Vol 1 ◽  
pp. 131-140
Author(s):  
Federica Cappelletti ◽  
Marta Rossi ◽  
Michele Germani ◽  
Mohammad Shadman Hanif

Abstract
De-manufacturing and re-manufacturing are fundamental technical solutions for efficiently recovering value from post-use products. Disassembly is one of the most complex activities in de-manufacturing because i) the more manual it is, the higher its cost; ii) disassembly times are variable owing to the uncertain condition of products reaching their end of life (EoL); and iii) it is necessary to know which components to disassemble to balance the cost of disassembly. The paper proposes a methodology with two fields of application: it can be applied at the design stage to identify opportunities for product design improvement, and it also represents a baseline for organizations approaching de-manufacturing for the first time. The methodology consists of four main steps. First, target components are identified according to their environmental impact; second, their disassembly sequence is qualitatively evaluated; third, it is quantitatively determined via disassembly times, also predicting the status of each component at its EoL. The aim of the methodology is reached in the fourth step, when alternative, eco-friendlier EoL strategies are proposed, verified, and chosen.


2021 ◽  
Vol 193 (7) ◽  
Author(s):  
Heini Hyvärinen ◽  
Annaliina Skyttä ◽  
Susanna Jernberg ◽  
Kristian Meissner ◽  
Harri Kuosa ◽  
...  

Abstract
Global deterioration of marine ecosystems, together with increasing pressure to use them, has created a demand for new, more efficient and cost-efficient monitoring tools that enable assessing changes in the status of marine ecosystems. However, demonstrating the cost-efficiency of a monitoring method is not straightforward, as there are no generally applicable guidelines. Our study provides a systematic literature mapping of methods and criteria that have been proposed or used since the year 2000 to evaluate the cost-efficiency of marine monitoring methods. We aimed to investigate these methods but discovered that examples of actual cost-efficiency assessments in the literature were rare, contradicting the prevalent use of the term "cost-efficiency." We identified five different ways to compare the cost-efficiency of a marine monitoring method: (1) the cost–benefit ratio, (2) comparative studies based on an experiment, (3) comparative studies based on a literature review, (4) comparisons with other methods based on literature, and (5) subjective comparisons with other methods based on experience or intuition. Because of the observed high frequency of insufficient cost–benefit assessments, we strongly advise that more attention be paid to the coverage of both cost and efficiency parameters when evaluating the actual cost-efficiency of novel methods. Our results emphasize the need to improve the reliability and comparability of cost-efficiency assessments. We provide guidelines for future initiatives to develop a cost-efficiency assessment framework and suggestions for more unified cost-efficiency criteria.


2021 ◽  
Vol 11 (2) ◽  
pp. 850
Author(s):  
Dokkyun Yi ◽  
Sangmin Ji ◽  
Jieun Park

Artificial intelligence (AI) is achieved by optimizing a cost function constructed from learning data. Changing the parameters of the cost function is the AI learning process (AI learning, for convenience). If AI learning is performed well, the value of the cost function reaches its global minimum. For learning to be complete, the parameter should stop changing once the cost function reaches the global minimum. One useful optimization method is the momentum method; however, the momentum method has difficulty stopping the parameter update when the cost function reaches the global minimum (the non-stop problem). The proposed method is based on the momentum method. To solve the non-stop problem, we incorporate the value of the cost function into the update rule. As learning proceeds, this mechanism reduces the change in the parameter in proportion to the value of the cost function. We verify the method through a proof of convergence and numerical experiments against existing methods to ensure that learning works well.
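A minimal sketch of the idea (not the authors' exact update rule): scale the momentum step by a bounded function of the current cost value, so the parameter stops moving as the cost approaches its global minimum, here assumed to be zero. The quadratic toy cost, learning rate, and momentum coefficient are assumptions for the example.

```python
def cost(w):                  # toy cost with global minimum value 0 at w = 3
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

def cost_scaled_momentum(w0, lr=0.1, beta=0.9, steps=5000):
    w, v = w0, 0.0
    for _ in range(steps):
        # bounded damping factor: ~1 far from the minimum, ~0 near it,
        # so the momentum update itself dies out at the global minimum
        scale = cost(w) / (1.0 + cost(w))
        v = beta * v + lr * scale * grad(w)
        w -= v
    return w

w_final = cost_scaled_momentum(0.0)
```

Plain momentum keeps a nonzero velocity even when the gradient vanishes; the cost-dependent scale suppresses new velocity contributions near the minimum, which is the non-stop problem the abstract describes.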


BMJ Open ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. e042553
Author(s):  
Youngji Jo ◽  
Amnesty Elizabeth LeFevre ◽  
Hasmot Ali ◽  
Sucheta Mehra ◽  
Kelsey Alland ◽  
...  

Objective
We estimated the cost-effectiveness of a digital health intervention package (mCARE) for community health workers, covering pregnancy surveillance and care-seeking reminders, compared with the existing paper-based status quo, from 2018 to 2027 in Bangladesh.
Interventions
The mCARE programme involved digitally enhanced pregnancy surveillance, individually targeted text messages and in-person home visits to pregnant women with care-seeking reminders for antenatal care, child delivery and postnatal care.
Study design
We developed a model to project population and service coverage increases with annual geographical expansion (from 1 million to 10 million population over 10 years) of the mCARE programme and the status quo.
Major outcomes
For this modelling study, we used the Lives Saved Tool to estimate the number of deaths and disability-adjusted life years (DALYs) that would be averted by 2027 if the coverage of health interventions were increased under the mCARE programme and the status quo, respectively. Economic costs were captured from a societal perspective using an ingredients approach and expressed in 2018 US dollars. Probabilistic sensitivity analysis was undertaken to account for parameter uncertainties.
Results
We estimated that the mCARE programme would avert 3076 deaths by 2027 at an incremental cost of $43 million relative to the status quo, which translates to $462 per DALY averted. The societal costs were estimated at $115 million for the mCARE programme (48% programme costs, 35% user costs and 17% provider costs). With continued implementation and geographical scaling-up, the mCARE programme improved its cost-effectiveness from $1152 to $462 per DALY averted from 5 to 10 years.
Conclusion
Mobile phone-based pregnancy surveillance systems with individually scheduled text messages and home-visit reminder strategies can be highly cost-effective in Bangladesh. The cost-effectiveness may improve further as the programme promotes facility-based child delivery and achieves greater cost efficiency with scale and sustainability.
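As a back-of-the-envelope check of the reported figures, assuming the standard relationship ICER = incremental cost / DALYs averted (the DALY total itself is inferred here, not stated in the abstract):

```python
# Figures reported in the abstract (2018 USD)
incremental_cost = 43_000_000          # mCARE vs status quo, 2018-2027
icer = 462                             # USD per DALY averted

# Implied DALYs averted (assumed relationship, rounded inputs)
dalys_averted = incremental_cost / icer   # roughly 93,000 DALYs

# Reported breakdown of the $115M societal cost should sum to 100%
programme_share, user_share, provider_share = 0.48, 0.35, 0.17
total_share = programme_share + user_share + provider_share
```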


2020 ◽  
Vol 18 (02) ◽  
pp. 2050006 ◽  
Author(s):  
Alexsandro Oliveira Alexandrino ◽  
Carla Negri Lintzmayer ◽  
Zanoni Dias

One of the main problems in Computational Biology is to find the evolutionary distance among species. In most approaches, such distance only involves rearrangements, which are mutations that alter large pieces of the species' genome. When we represent genomes as permutations, the problem of transforming one genome into another is equivalent to the problem of Sorting Permutations by Rearrangement Operations. The traditional approach is to consider that any rearrangement has the same probability of happening, and so the goal is to find a minimum sequence of operations that sorts the permutation. However, studies have shown that some rearrangements are more likely to happen than others, so a weighted approach is more realistic. In a weighted approach, the goal is to find a sequence that sorts the permutation such that the cost of the sequence is minimized. This work introduces a new type of cost function, which is related to the amount of fragmentation caused by a rearrangement. We present results on the lower and upper bounds for the fragmentation-weighted problems and on the relation between the unweighted and the fragmentation-weighted approaches. Our main results are 2-approximation algorithms for five versions of this problem involving reversals and transpositions. We also give bounds for the diameters concerning these problems and provide an improved approximation factor for simple permutations considering transpositions.
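To make the objects concrete (an illustrative sketch, not the paper's algorithms): a reversal applied to a permutation, and a breakpoint count, a standard proxy for how fragmented a permutation is relative to the sorted identity.

```python
def reversal(perm, i, j):
    """Return a copy of perm with the segment perm[i..j] reversed
    (0-indexed, inclusive) -- one rearrangement operation."""
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

def breakpoints(perm):
    """Count adjacent positions whose values are not consecutive integers,
    with sentinels 0 and n+1 padding the ends; 0 breakpoints = sorted."""
    ext = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if abs(b - a) != 1)
```

For example, a single reversal sorts [3, 2, 1, 4] into the identity and removes both of its breakpoints; cost functions like the paper's weight an operation by how much fragmentation it causes rather than counting each operation equally.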


2005 ◽  
Vol 133 (6) ◽  
pp. 1710-1726 ◽  
Author(s):  
Milija Zupanski

Abstract A new ensemble-based data assimilation method, named the maximum likelihood ensemble filter (MLEF), is presented. The analysis solution maximizes the likelihood of the posterior probability distribution, obtained by minimization of a cost function that depends on a general nonlinear observation operator. The MLEF belongs to the class of deterministic ensemble filters, since no perturbed observations are employed. As in variational and ensemble data assimilation methods, the cost function is derived using a Gaussian probability density function framework. Like other ensemble data assimilation algorithms, the MLEF produces an estimate of the analysis uncertainty (e.g., analysis error covariance). In addition to the common use of ensembles in calculation of the forecast error covariance, the ensembles in MLEF are exploited to efficiently calculate the Hessian preconditioning and the gradient of the cost function. Because of the superior Hessian preconditioning, two to three iterative minimization steps are sufficient. The MLEF method is well suited for use with highly nonlinear observation operators, at a small additional computational cost for the minimization. The consistent treatment of nonlinear observation operators through optimization is an advantage of the MLEF over other ensemble data assimilation algorithms. The cost of MLEF is comparable to the cost of existing ensemble Kalman filter algorithms. The method is directly applicable to most complex forecast models and observation operators. In this paper, the MLEF method is applied to data assimilation with the one-dimensional Korteweg–de Vries–Burgers equation. The tested observation operator is quadratic, in order to make the assimilation problem more challenging. The results illustrate the stability of the MLEF performance, as well as the benefit of the cost function minimization. The improvement is noted in terms of the rms error, as well as the analysis error covariance.
The statistics of innovation vectors (observation minus forecast) also indicate a stable performance of the MLEF algorithm. Additional experiments suggest the amplified benefit of targeted observations in ensemble data assimilation.
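A minimal sketch of the Gaussian-framework cost function that such a filter minimizes, here for a scalar state with a quadratic observation operator (the abstract notes the tested operator is quadratic). The background state, observation, and error variances are assumed example values, and a crude grid search stands in for the ensemble-based minimization.

```python
def H(x):
    return x * x          # nonlinear (quadratic) observation operator

def cost(x, xb=1.0, y=4.1, B=0.5, R=0.2):
    """Gaussian-framework cost: background term plus observation term."""
    background = 0.5 * (x - xb) ** 2 / B
    observation = 0.5 * (y - H(x)) ** 2 / R
    return background + observation

# crude grid search for the minimizing analysis state (stand-in for the
# preconditioned iterative minimization the MLEF actually performs)
xa = min((i / 1000 for i in range(-5000, 5001)), key=cost)
```

The analysis state xa balances closeness to the background xb against fitting the observation y through the nonlinear operator H; with these example values it settles near x = 2, where H(x) nearly matches y.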


2000 ◽  
Vol 25 (2) ◽  
pp. 209-227 ◽  
Author(s):  
Keith R. McLaren ◽  
Peter D. Rossitter ◽  
Alan A. Powell

2021 ◽  
pp. 107754632110324
Author(s):  
Berk Altıner ◽  
Bilal Erol ◽  
Akın Delibaşı

Adaptive optics systems are powerful tools implemented to mitigate the effects of wavefront aberrations. In this article, the optimal actuator placement problem is addressed to improve the disturbance attenuation capability of adaptive optics systems, since actuator placement is directly related to system performance. For this purpose, the linear-quadratic cost function is chosen, so that optimized actuator layouts can be specialized according to the type of wavefront aberration. The problem is then cast as a convex optimization problem, and the cost function is formulated for the disturbance attenuation case. The success of the presented method is demonstrated by simulation results.
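The linear-quadratic cost mentioned above typically takes the standard infinite-horizon form below; the abstract does not spell out the weighting, so Q and R here are generic weights:

```latex
J = \int_0^{\infty} \left( x(t)^{\top} Q \, x(t) + u(t)^{\top} R \, u(t) \right) \, dt
```

where x(t) collects the wavefront-error states, u(t) the actuator commands, and Q (positive semidefinite) and R (positive definite) trade off residual aberration against actuation effort. The actuator layout enters through the input matrix that maps u(t) into the state dynamics, which is what the placement optimization varies.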


2014 ◽  
Vol 665 ◽  
pp. 643-646
Author(s):  
Ying Liu ◽  
Yan Ye ◽  
Chun Guang Li

A metalearning algorithm learns the base learning algorithm, with the aim of improving the performance of the learning system. The incremental delta-bar-delta (IDBD) algorithm is one such metalearning algorithm. On the other hand, sparse algorithms are gaining popularity due to their good performance and wide applicability. In this paper, we propose a sparse IDBD algorithm that takes the sparsity of the system into account. An ℓ1-norm penalty is added to the cost function of the standard IDBD, which is equivalent to adding a zero attractor to the iterations and can thus speed up convergence if the system of interest is indeed sparse. Simulations demonstrate that the proposed algorithm is superior to competing algorithms in sparse system identification.
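To illustrate the zero-attractor mechanism (a simplified sketch using plain LMS rather than the proposed sparse IDBD, which additionally adapts per-weight step sizes): an ℓ1 penalty on the cost contributes a sign(w) term that shrinks inactive weights toward zero. The sparse system, step size, and attractor strength below are assumed example values.

```python
import random

def sign(x):
    return (x > 0) - (x < 0)

random.seed(0)
true_w = [1.0, 0.0, 0.0, -0.5]     # sparse system to identify
w = [0.0] * 4                      # adaptive filter weights
mu, rho = 0.05, 1e-4               # step size and zero-attractor strength

for _ in range(5000):
    x = [random.gauss(0, 1) for _ in range(4)]          # input sample
    d = sum(a * b for a, b in zip(true_w, x))           # desired output
    e = d - sum(a * b for a, b in zip(w, x))            # prediction error
    # LMS gradient step plus the zero attractor from the l1 penalty
    w = [wi + mu * e * xi - rho * sign(wi) for wi, xi in zip(w, x)]
```

The sign(wi) term nudges every weight toward zero by a constant rho per step; active weights easily overcome it, while weights corresponding to zero taps of the true system are held near zero, which is the convergence speed-up the abstract refers to.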

