A Study on Contestable Regions in Europe through the Use of a New Rail Cost Function: An Application to the Hinterland of the New Container Terminal of Leghorn Port

2019, Vol 2019, pp. 1-35
Author(s): Marino Lupi, Antonio Pratelli, Mattia Canessa, Andrea Lorenzini, Alessandro Farina

In this paper, the potential hinterland of the new container terminal of the port of Leghorn (Livorno in Italian) is studied. The study analyses the competitiveness of major European ports with respect to some of the most contestable regions in Europe. Travel times and monetary costs of the railway paths connecting ports to their hinterland have been determined. The rail network of a large part of Europe was modelled as a graph. A cost function is associated with each link, which represents a portion of a rail line. The travel time on a link is determined from the average speed, which is in turn obtained from the maximum speed via formulae fitted by linear regression. The few cost functions available in the current literature for computing the cost of a rail link are not detailed enough; therefore, a new cost function has been developed. All cost components were determined in detail: the staff cost; the amortisation, maintenance, and insurance costs of locomotives and wagons; the cost of using the rail track; and the traction cost. The traction cost was calculated in detail from all resistances to motion. Moreover, for each rail link, the number of locomotives needed to operate the train and the maximum towable weight were determined. Because the monetary value of time in freight transport exhibits high variability, three different optimisations of the paths between each origin–destination pair were carried out: by travel time, by monetary cost, and by generalised cost. The competitiveness of the ports with respect to the examined European contestable regions was then analysed.
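
A minimal sketch of how such a link-level generalised cost might be assembled (all coefficient values and component names below are illustrative placeholders, not the paper's calibrated figures):

```python
from dataclasses import dataclass

@dataclass
class RailLink:
    length_km: float      # length of the rail-line portion modelled by the link
    max_speed_kmh: float  # maximum permitted speed on the link

def average_speed_kmh(max_speed_kmh: float) -> float:
    # The paper obtains average speed from maximum speed via linear
    # regression; these coefficients are hypothetical stand-ins.
    return 0.6 * max_speed_kmh + 5.0

def link_generalised_cost(link: RailLink, value_of_time_eur_h: float) -> float:
    """Generalised cost = monetary cost + monetised travel time (EUR)."""
    travel_time_h = link.length_km / average_speed_kmh(link.max_speed_kmh)
    # Illustrative cost components; the paper itemises staff cost,
    # amortisation/maintenance/insurance of locomotives and wagons,
    # track-access charges and traction cost in far greater detail.
    staff = 45.0 * travel_time_h
    rolling_stock = 1.2 * link.length_km
    track_access = 2.5 * link.length_km
    traction = 1.8 * link.length_km  # in the paper, from resistances to motion
    monetary = staff + rolling_stock + track_access + traction
    return monetary + value_of_time_eur_h * travel_time_h

print(link_generalised_cost(RailLink(length_km=120, max_speed_kmh=100), 30.0))
```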

2021, Vol 11 (2), pp. 850
Author(s): Dokkyun Yi, Sangmin Ji, Jieun Park

Artificial intelligence (AI) is achieved by optimizing a cost function constructed from learning data. Changing the parameters of the cost function is the AI learning process (AI learning, for convenience). If AI learning is performed well, the value of the cost function reaches the global minimum. For learning to be complete, the parameters should no longer change once the cost function attains its global minimum. One useful optimization method is the momentum method; however, the momentum method has difficulty stopping the parameter update when the cost function reaches the global minimum (the non-stop problem). The proposed method is based on the momentum method. To solve the non-stop problem, we incorporate the value of the cost function into the update rule. As learning proceeds, this mechanism reduces the size of the parameter update according to the value of the cost function. We verified the method through a proof of convergence and through numerical experiments against existing methods, confirming that learning works well.
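
A minimal sketch of the idea, assuming a plain momentum update whose step is modulated by the current cost value; the saturating factor c/(1+c) is our own choice here for numerical stability, not necessarily the authors' exact rule:

```python
import numpy as np

TARGET = np.array([1.0, 2.0])

def cost(w):   # toy cost with global minimum 0 at w = (1, 2)
    return float(np.sum((w - TARGET) ** 2))

def grad(w):
    return 2.0 * (w - TARGET)

def cost_modulated_momentum(w, lr=0.1, beta=0.9, steps=2000):
    v = np.zeros_like(w)
    for _ in range(steps):
        c = cost(w)
        # The factor c / (1 + c) vanishes as the cost approaches its
        # (zero) global minimum, so the parameter update dies out there;
        # plain momentum would keep moving (the "non-stop" problem).
        v = beta * v + lr * (c / (1.0 + c)) * grad(w)
        w = w - v
    return w

print(cost_modulated_momentum(np.array([5.0, -3.0])))  # approx. [1. 2.]
```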


2021, Vol 6 (1), pp. e004318
Author(s): Aduragbemi Banke-Thomas, Kerry L M Wong, Francis Ifeanyi Ayomoh, Rokibat Olabisi Giwa-Ayedun, Lenka Benova

Background: Travel time to comprehensive emergency obstetric care (CEmOC) facilities in low-resource settings is commonly estimated using modelling approaches. Our objective was to derive and compare estimates of travel time to reach CEmOC in an African megacity using models and web-based platforms against actual replication of travel.

Methods: We extracted data from patient files of all 732 pregnant women who presented in emergency in the four publicly owned tertiary CEmOC facilities in Lagos, Nigeria, between August 2018 and August 2019. For a systematically selected subsample of 385, we estimated travel time from their homes to the facility using the cost-friction surface approach, Open Source Routing Machine (OSRM) and Google Maps, and compared them to travel time by two independent drivers replicating women’s journeys. We estimated the percentage of women who reached the facilities within 60 and 120 min.

Results: The median travel time for 385 women from the cost-friction surface approach, OSRM and Google Maps was 5, 11 and 40 min, respectively. The median actual drive time was 50–52 min. The mean errors were >45 min for the cost-friction surface approach and OSRM, and 14 min for Google Maps. The smallest differences between replicated and estimated travel times were seen for night-time journeys at weekends; the largest errors were found for night-time journeys on weekdays and journeys above 120 min. Modelled estimates indicated that all participants were within 60 min of the destination CEmOC facility, yet journey replication showed that only 57% were, and 92% were within 120 min.

Conclusions: Existing modelling methods underestimate actual travel time in low-resource megacities. Significant gaps in geographical access to life-saving health services like CEmOC must be urgently addressed, including in urban areas. Leveraging tools that generate ‘closer-to-reality’ estimates will be vital for service planning if universal health coverage targets are to be realised by 2030.
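
As an illustration of the comparison the study performs, a sketch with made-up numbers (the real analysis covers 385 journeys and three estimation methods):

```python
import numpy as np

# Hypothetical travel times in minutes for the same six journeys.
actual = np.array([48, 55, 62, 35, 120, 90])    # replicated drives
modelled = np.array([5, 12, 40, 8, 33, 25])     # e.g. a cost-friction estimate

print("median modelled:", np.median(modelled), "min")
print("median actual:  ", np.median(actual), "min")
print("mean error:", (modelled - actual).mean(), "min")  # negative => underestimate

for threshold in (60, 120):
    pct = 100 * np.mean(actual <= threshold)
    print(f"% of journeys actually within {threshold} min: {pct:.0f}%")
```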


2021, Vol 11 (15), pp. 6922
Author(s): Jeongmin Kim, Ellen J. Hong, Youngjee Yang, Kwang Ryel Ryu

In this paper, we claim that the operation schedule of automated stacking cranes (ASCs) in the storage yard of automated container terminals can be built effectively and efficiently by using a crane dispatching policy, and we propose a noisy optimization algorithm, named N-RTS, that can derive such a policy efficiently. To select a job for an ASC, our dispatching policy uses a multi-criteria scoring function that calculates the score of each candidate job as a weighted sum of its evaluations on those criteria. As the calculated score depends on the respective weights of the criteria, a different weight vector gives rise to a different best candidate, so a weight vector can be deemed a policy. A good weight vector, or policy, can be found by a simulation-based search in which a candidate policy is evaluated through a computationally expensive simulation of applying it to some operation scenarios. The simulation may be simplified to save time, but at the cost of evaluation accuracy. N-RTS copes with this dilemma by maintaining a good balance between exploration and exploitation. Experimental results show that the policy derived by N-RTS outperforms other ASC scheduling methods. We also conducted additional experiments on some benchmark functions to validate the performance of N-RTS.
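
A minimal sketch of such a weighted-sum dispatching policy; the criteria names, weights and function names here are our own illustrative assumptions, not the paper's:

```python
import numpy as np

def job_score(features: np.ndarray, weights: np.ndarray) -> float:
    # Multi-criteria score: weighted sum of per-criterion evaluations.
    return float(np.dot(weights, features))

def dispatch(candidate_jobs: list, weights: np.ndarray) -> int:
    # The weight vector *is* the policy: a different vector ranks
    # the same candidates differently.
    scores = [job_score(f, weights) for f in candidate_jobs]
    return int(np.argmax(scores))

# Hypothetical criteria: (urgency, -travel distance, -interference risk)
jobs = [np.array([0.9, -0.3, -0.1]), np.array([0.4, -0.1, -0.6])]
policy = np.array([0.5, 0.3, 0.2])   # one candidate policy found by search
print("selected job:", dispatch(jobs, policy))
```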


Healthcare, 2021, Vol 9 (7), pp. 888
Author(s): Leopoldo Sdino, Andrea Brambilla, Marta Dell’Ovo, Benedetta Sdino, Stefano Capolongo

The need for 24/7 operation and the increasing demand for high-quality healthcare services make healthcare facilities a complex topic, compounded by a changing, challenging environment and a huge impact on the community. Owing to this complexity, it is difficult to estimate construction costs properly in the preliminary phase, where easy-to-use parameters are often necessary. This paper therefore provides an overview of the issue with reference to the Italian context and proposes an estimation framework for analyzing the construction costs of hospital facilities. First, contributions from literature reviews and 14 case studies were analyzed to identify specific cost components. Then, a questionnaire was administered to construction companies and experts in the field to obtain data from practical, real cases. Together, these contributions yield an overview of the construction cost components. Starting from the data collected and analyzed, a preliminary estimation tool is proposed to identify the minimum and maximum variation in cost when programming the construction of a hospital, from the feasibility phase or the early design stage onwards. The framework involves different factors, such as the number of beds, complexity, typology, localization, degree of technology, and the type of maintenance and management techniques. This study explores the several elements that compose the cost of a hospital facility and highlights future developments, including maintenance and management costs over hospital facilities’ lifecycle.
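
A rough sketch of how a parametric min/max estimate of this kind could look; every multiplier below is a placeholder, not one of the paper's calibrated values:

```python
# Illustrative parametric estimate of hospital construction cost.
def construction_cost_range(n_beds: int,
                            cost_per_bed_min: float = 250_000.0,
                            cost_per_bed_max: float = 450_000.0,
                            complexity: float = 1.0,     # e.g. 0.9 simple .. 1.3 complex
                            localization: float = 1.0,   # site/region factor
                            technology: float = 1.0):    # technology-degree factor
    # Min and max per-bed costs scaled by multiplicative adjustment factors.
    factor = complexity * localization * technology
    low = n_beds * cost_per_bed_min * factor
    high = n_beds * cost_per_bed_max * factor
    return low, high

print(construction_cost_range(400, complexity=1.1, technology=1.05))
```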


2020, Vol 18 (02), pp. 2050006
Author(s): Alexsandro Oliveira Alexandrino, Carla Negri Lintzmayer, Zanoni Dias

One of the main problems in Computational Biology is to find the evolutionary distance between species. In most approaches, this distance involves only rearrangements, which are mutations that alter large pieces of a species’ genome. When genomes are represented as permutations, the problem of transforming one genome into another is equivalent to the problem of Sorting Permutations by Rearrangement Operations. The traditional approach is to assume that every rearrangement is equally likely, so the goal is to find a minimum-length sequence of operations that sorts the permutation. However, studies have shown that some rearrangements are more likely to happen than others, so a weighted approach is more realistic. In a weighted approach, the goal is to find a sorting sequence of minimum total cost. This work introduces a new type of cost function, related to the amount of fragmentation caused by a rearrangement. We present results on lower and upper bounds for the fragmentation-weighted problems and on the relation between the unweighted and fragmentation-weighted approaches. Our main results are 2-approximation algorithms for five versions of this problem involving reversals and transpositions. We also give bounds for the diameters concerning these problems and provide an improved approximation factor for simple permutations considering transpositions.
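
To make the setting concrete, a small sketch of one rearrangement operation (a reversal) on a permutation, with a breakpoint count as a simple proxy for fragmentation; the paper's fragmentation-weighted cost is defined formally and differs in detail:

```python
def reversal(perm: list, i: int, j: int) -> list:
    # Reverse the segment perm[i..j] (0-indexed, inclusive).
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

def breakpoints(perm: list) -> int:
    # Count adjacencies broken relative to the identity permutation;
    # a rough proxy for how fragmented the permutation is.
    extended = [0] + perm + [len(perm) + 1]
    return sum(1 for a, b in zip(extended, extended[1:]) if b - a != 1)

p = [3, 2, 1, 4]
q = reversal(p, 0, 2)                    # -> [1, 2, 3, 4]
print(q, breakpoints(p), breakpoints(q)) # [1, 2, 3, 4] 4 0
```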


2005, Vol 133 (6), pp. 1710-1726
Author(s): Milija Zupanski

Abstract A new ensemble-based data assimilation method, named the maximum likelihood ensemble filter (MLEF), is presented. The analysis solution maximizes the likelihood of the posterior probability distribution, obtained by minimization of a cost function that depends on a general nonlinear observation operator. The MLEF belongs to the class of deterministic ensemble filters, since no perturbed observations are employed. As in variational and ensemble data assimilation methods, the cost function is derived using a Gaussian probability density function framework. Like other ensemble data assimilation algorithms, the MLEF produces an estimate of the analysis uncertainty (e.g., analysis error covariance). In addition to the common use of ensembles in the calculation of the forecast error covariance, the ensembles in the MLEF are exploited to efficiently calculate the Hessian preconditioning and the gradient of the cost function. Because of the superior Hessian preconditioning, 2–3 iterative minimization steps are sufficient. The MLEF method is well suited for use with highly nonlinear observation operators, at a small additional computational cost for the minimization. The consistent treatment of nonlinear observation operators through optimization is an advantage of the MLEF over other ensemble data assimilation algorithms. The cost of the MLEF is comparable to that of existing ensemble Kalman filter algorithms. The method is directly applicable to most complex forecast models and observation operators. In this paper, the MLEF method is applied to data assimilation with the one-dimensional Korteweg–de Vries–Burgers equation. The tested observation operator is quadratic, in order to make the assimilation problem more challenging. The results illustrate the stability of the MLEF performance, as well as the benefit of the cost function minimization. The improvement is noted in terms of the rms error, as well as the analysis error covariance. The statistics of innovation vectors (observation minus forecast) also indicate a stable performance of the MLEF algorithm. Additional experiments suggest the amplified benefit of targeted observations in ensemble data assimilation.
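
In the Gaussian framework the abstract refers to, the minimized cost function takes the standard variational form (written here in conventional notation; the MLEF-specific ensemble parametrisation of the state is omitted):

$$
J(\mathbf{x}) = \frac{1}{2}\,(\mathbf{x}-\mathbf{x}^{b})^{\mathsf{T}}\mathbf{P}_{f}^{-1}(\mathbf{x}-\mathbf{x}^{b}) + \frac{1}{2}\,\left[\mathbf{y}-H(\mathbf{x})\right]^{\mathsf{T}}\mathbf{R}^{-1}\left[\mathbf{y}-H(\mathbf{x})\right],
$$

where $\mathbf{x}^{b}$ is the forecast (background) state, $\mathbf{P}_{f}$ the ensemble-represented forecast error covariance, $\mathbf{y}$ the observations, $H$ the (possibly nonlinear) observation operator, and $\mathbf{R}$ the observation error covariance.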


2000, Vol 25 (2), pp. 209-227
Author(s): Keith R. McLaren, Peter D. Rossitter, Alan A. Powell

2021, pp. 107754632110324
Author(s): Berk Altıner, Bilal Erol, Akın Delibaşı

Adaptive optics systems are powerful tools implemented to mitigate the effects of wavefront aberrations. In this article, the optimal actuator placement problem is addressed to improve the disturbance attenuation capability of adaptive optics systems, since actuator placement is directly related to system performance. For this purpose, a linear-quadratic cost function is chosen, so that optimized actuator layouts can be specialized according to the type of wavefront aberration. The placement task is then cast as a convex optimization problem, and the cost function is formulated for the disturbance attenuation case. The effectiveness of the presented method is demonstrated by simulation results.
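
The linear-quadratic cost has the standard continuous-time form (notation ours; the article's exact formulation may differ):

$$
J = \int_{0}^{\infty} \left( x(t)^{\mathsf{T}} Q\, x(t) + u(t)^{\mathsf{T}} R\, u(t) \right) \mathrm{d}t,
$$

where $x$ collects the wavefront-error states, $u$ the actuator commands, and the weights $Q \succeq 0$ and $R \succ 0$ trade residual aberration against control effort; the placement search then favours the actuator layout that minimizes the achievable disturbance-attenuation cost.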


2014, Vol 665, pp. 643-646
Author(s): Ying Liu, Yan Ye, Chun Guang Li

A metalearning algorithm learns the base learning algorithm, with the aim of improving the performance of the learning system. The incremental delta-bar-delta (IDBD) algorithm is such a metalearning algorithm. Meanwhile, sparse algorithms are gaining popularity due to their good performance and wide applicability. In this paper, we propose a sparse IDBD algorithm that takes the sparsity of the underlying system into account. An $\ell_1$-norm penalty is included in the cost function of the standard IDBD, which is equivalent to adding a zero attractor to the iterations and can therefore speed up convergence when the system of interest is indeed sparse. Simulations demonstrate that the proposed algorithm is superior to competing algorithms in sparse system identification.
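
Assuming the penalty lost in extraction is indeed the $\ell_1$ norm (consistent with the zero-attractor description), the modified cost would read:

$$
J(n) = \tfrac{1}{2}\,e^{2}(n) + \gamma \lVert \mathbf{w}(n) \rVert_{1},
$$

where $e(n)$ is the instantaneous estimation error, $\mathbf{w}(n)$ the adaptive weight vector, and $\gamma > 0$ the regularization strength; the subgradient of the penalty contributes a term proportional to $-\gamma\,\mathrm{sgn}(\mathbf{w}(n))$ to the update, the "zero attractor" that pulls inactive coefficients toward zero.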

