A Visual and Statistical Study of a Real World Traffic Optimization Problem

Author(s):  
J.J. Sánchez ◽  
M.J. Galán ◽  
E. Rubio


Author(s):  
Ivan Zelinka ◽  
Martin Kruliš ◽  
Marek Běhálek ◽  
Tung Minh Luu ◽  
Jaroslav Pokorný

Optimization algorithms are a powerful tool for solving many engineering problems drawn from different fields of real life. They are usually used where solving a given problem analytically is unsuitable or unrealistic. If implemented suitably, there is no need for frequent user intervention in the equipment in which they are used. The majority of real-life application problems can be defined as optimization problems, for example, finding the optimal trajectory of a robot, optimizing data flows in various processes such as city traffic, or modelling and optimizing the seasonal variation of supply, traffic and facility occupancy in tourism, among others. This chapter is structured as follows: it first introduces bio-inspired algorithms, then the parallelization of algorithms and parallel hardware, and finally presents open research on a real-world example, the optimization of traffic in Ho Chi Minh City. The conclusion discusses possible mutual combinations of the introduced methods.
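The class of bio-inspired optimizers the chapter introduces can be illustrated with a minimal sketch, here a basic differential evolution loop (DE/rand/1/bin) minimizing a toy objective. The function names and parameter values are illustrative assumptions, not taken from the chapter.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimize f over box-constrained bounds with basic DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals other than i for the mutation.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip back into the box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fitness[i]:  # greedy one-to-one replacement
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=fitness.__getitem__)
    return pop[best], fitness[best]

# Toy use: minimize the 2-D sphere function on [-5, 5]^2.
x_best, f_best = differential_evolution(lambda x: sum(v * v for v in x),
                                        [(-5, 5)] * 2)
```

In a traffic setting the decision vector would instead encode, e.g., signal timings, with f evaluating a traffic simulation.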


2019 ◽  
Vol 11 (19) ◽  
pp. 2188
Author(s):  
Li ◽  
Zhu ◽  
Guo ◽  
Chen

Spectral unmixing of hyperspectral images is an important issue in the field of remote sensing. Jointly exploring the spectral and spatial information embedded in the data helps to enhance the consistency between mixing/unmixing models and real scenarios. This paper proposes a graph-regularized nonlinear unmixing method based on the recent multilinear mixing model (MLM). The MLM takes account of all orders of interactions between endmembers, and indicates the pixel-wise nonlinearity with a single probability parameter. By incorporating Laplacian graph regularizers, the proposed method exploits the underlying manifold structure of the pixels' spectra in order to improve the estimates of both the abundances and the nonlinear probability parameters. Besides the spectrum-based regularization, the sparsity of the abundances is also incorporated into the proposed model. The resulting optimization problem is addressed using the alternating direction method of multipliers (ADMM), yielding the so-called graph-regularized MLM (G-MLM) algorithm. To apply the proposed method to large real-world hyperspectral images, we propose to use a superpixel construction approach before unmixing, and then apply G-MLM to each superpixel. The proposed methods achieve unmixing performance superior to state-of-the-art strategies in terms of both abundances and probability parameters, on both synthetic and real datasets.
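The Laplacian graph regularizer at the heart of such methods can be sketched independently of the unmixing itself. The construction below, a Gaussian-weighted k-nearest-neighbour graph over pixel spectra, is one common choice; the exact graph construction used by the paper may differ.

```python
import numpy as np

def knn_laplacian(spectra, k=2, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W over pixel spectra,
    with a Gaussian-weighted k-nearest-neighbour adjacency."""
    n = spectra.shape[0]
    # Pairwise squared Euclidean distances between pixel spectra.
    d2 = ((spectra[:, None, :] - spectra[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]           # skip self (distance 0)
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                           # symmetrize
    return np.diag(W.sum(1)) - W

# A smoothness penalty of the form tr(A L A^T) then couples the
# abundance vectors (columns of A) of spectrally similar pixels.
X = np.random.default_rng(0).random((6, 4))          # 6 pixels, 4 bands
L = knn_laplacian(X)
```

By construction L is symmetric positive semi-definite with zero row sums, which is what makes the quadratic penalty a valid smoothness term.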


2020 ◽  
Vol 10 (18) ◽  
pp. 6157
Author(s):  
Jose Manuel Gimenez-Guzman ◽  
Alejandra Martínez-Moraian ◽  
Rene D. Reyes-Bardales ◽  
David Orden ◽  
Ivan Marsa-Maestre

This paper models an air traffic optimization problem where, on the one hand, flight operators seek to minimize fuel consumption by flying at optimal cruise levels and, on the other hand, air traffic managers aim to keep intersecting airways at flight levels as distant as possible. We study this problem as a factorized optimization, which is addressed through a spectrum graph coloring model, evaluating the effect that safety constraints have on fuel consumption and comparing different heuristic approaches for the allocation.
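A toy version of the level-allocation side of this problem can be sketched as a greedy graph coloring in which vertices are airways, edges are intersections, and colors are flight levels. This is only an illustrative heuristic under assumed data, not the spectrum graph coloring model of the paper.

```python
def assign_levels(prefs, conflicts, levels):
    """Greedy level assignment: process airways by conflict degree;
    each airway gets the level that first maximizes separation from
    already-assigned conflicting airways (safety), then minimizes the
    distance to its preferred cruise level (fuel)."""
    assignment = {}
    for a in sorted(prefs, key=lambda a: -len(conflicts.get(a, ()))):
        def score(l):
            sep = min((abs(l - assignment[b])
                       for b in conflicts.get(a, ()) if b in assignment),
                      default=len(levels))
            return (sep, -abs(l - prefs[a]))    # safety first, then fuel
        assignment[a] = max(levels, key=score)
    return assignment

# Toy instance: three airways, two intersections, five flight levels.
prefs = {"A": 3, "B": 3, "C": 1}                 # preferred cruise levels
conflicts = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
levels = [1, 2, 3, 4, 5]
plan = assign_levels(prefs, conflicts, levels)
```

Tightening the required separation raises the fuel cost of the allocation, which is exactly the safety-versus-consumption trade-off the paper evaluates.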


Author(s):  
Dietmar Maringer ◽  
Ben Craig ◽  
Sandra Paterlini

The structure of networks plays a central role in the behavior of financial systems and their response to policy. Real-world networks, however, are rarely directly observable: banks' assets and liabilities are typically known, but not who is lending how much to whom. This paper adds to the existing literature in two ways. First, it shows how to simulate realistic networks based on balance-sheet information. To do so, we introduce a model where links incur fixed costs, independent of contract size, but the cost per link decreases the more connected a bank is (scale economies). Second, to approach the optimization problem, we develop a new algorithm inspired by the transportation planning literature and by research on stochastic search heuristics. Computational experiments find that the resulting networks are not only consistent with the balance sheets, but also resemble real-world financial networks in their density (sparse but not minimally dense) and in their core-periphery and disassortative structure.
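The flavor of a transportation-inspired heuristic can be sketched with a greedy fill that matches banks' aggregate interbank assets (lending) to liabilities (borrowing), largest remaining amounts first. This is a hypothetical illustration, not the authors' algorithm, and it ignores the fixed-cost and scale-economy terms of their model.

```python
def greedy_network(assets, liabilities):
    """Transportation-style greedy fill: repeatedly match the lender with
    the largest remaining interbank assets to the borrower with the
    largest remaining liabilities, yielding a sparse network that is
    consistent with the given balance-sheet totals."""
    loans = {}                     # (lender, borrower) -> amount
    a = dict(assets)               # remaining lending capacity
    l = dict(liabilities)          # remaining borrowing need
    while any(v > 1e-9 for v in a.values()):
        lender = max(a, key=a.get)
        borrower = max((b for b in l if b != lender), key=l.get)
        amt = min(a[lender], l[borrower])
        if amt <= 1e-9:
            break                  # no feasible match left
        loans[(lender, borrower)] = loans.get((lender, borrower), 0) + amt
        a[lender] -= amt
        l[borrower] -= amt
    return loans

# Toy balance sheets: A and B lend, C and D borrow (totals match: 8 = 8).
assets = {"A": 5.0, "B": 3.0}
liabilities = {"C": 6.0, "D": 2.0}
loans = greedy_network(assets, liabilities)
```

Note that the greedy fill produces only three of the four possible links here, illustrating why such constructions tend toward sparse rather than complete networks.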


2016 ◽  
Vol 56 ◽  
pp. 119-152 ◽  
Author(s):  
Javad Azimi ◽  
Xiaoli Fern ◽  
Alan Fern

Motivated by a real-world problem, we study a novel budgeted optimization problem where the goal is to optimize an unknown function f(.) given a budget by requesting a sequence of samples from the function. In our setting, however, evaluating the function at precisely specified points is not practically possible due to prohibitive costs. Instead, we can only request constrained experiments. A constrained experiment, denoted by Q, specifies a subset of the input space for the experimenter to sample the function from. The outcome of Q includes a sampled experiment x, and its function output f(x). Importantly, as the constraints of Q become looser, the cost of fulfilling the request decreases, but the uncertainty about the location x increases. Our goal is to manage this trade-off by selecting a set of constrained experiments that best optimize f(.) within the budget. We study this problem in two different settings, the non-sequential (or batch) setting where a set of constrained experiments is selected at once, and the sequential setting where experiments are selected one at a time. We evaluate our proposed methods for both settings using synthetic and real functions. The experimental results demonstrate the efficacy of the proposed methods.
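The cost/uncertainty trade-off of constrained experiments can be sketched with a toy one-dimensional sequential strategy that tightens the requested region around the best point seen so far. The cost model and shrinkage schedule below are illustrative assumptions, not the paper's method.

```python
import random

def run_constrained_experiment(f, lo, hi, rng, base_cost=1.0):
    """A constrained experiment Q = [lo, hi]: the sampled location x is
    uncontrolled within the region, and tighter requests cost more."""
    x = rng.uniform(lo, hi)
    cost = base_cost / max(hi - lo, 1e-9)     # illustrative cost model
    return x, f(x), cost

def sequential_search(f, budget, lo=0.0, hi=1.0, seed=0):
    """Spend the budget on experiments whose constraint region shrinks
    around the best point observed so far (minimization)."""
    rng = random.Random(seed)
    best_x, best_y, spent = None, float("inf"), 0.0
    width = hi - lo
    while True:
        center = best_x if best_x is not None else (lo + hi) / 2
        a = max(lo, center - width / 2)
        b = min(hi, a + width)
        x, y, c = run_constrained_experiment(f, a, b, rng)
        if spent + c > budget:                 # budget exhausted
            return best_x, best_y
        spent += c
        if y < best_y:
            best_x, best_y = x, y
        width = max(width * 0.8, 0.05)         # tighten requests over time

best_x, best_y = sequential_search(lambda x: (x - 0.3) ** 2, budget=200.0)
```

Early, cheap, loose requests explore; later, expensive, tight requests exploit, which is the trade-off the paper's methods manage more carefully.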


2021 ◽  
Vol 11 (19) ◽  
pp. 9153
Author(s):  
Vinicius Renan de Carvalho ◽  
Ender Özcan ◽  
Jaime Simão Sichman

As exact algorithms are infeasible for solving real optimization problems due to their computational complexity, meta-heuristics are usually used instead. However, choosing a meta-heuristic for a particular optimization problem is a non-trivial task, and often requires a time-consuming trial-and-error process. Hyper-heuristics, which are heuristics that choose heuristics, have been proposed as a means to both simplify and improve algorithm selection or configuration for optimization problems. This paper presents a novel cross-domain evaluation for multi-objective optimization: we investigate how four state-of-the-art online hyper-heuristics with different characteristics perform in finding solutions for eighteen real-world multi-objective optimization problems. These hyper-heuristics were designed in previous studies and tackle the algorithm selection problem from different perspectives: election-based, based on reinforcement learning, and based on a mathematical function. All studied hyper-heuristics control a set of five Multi-Objective Evolutionary Algorithms (MOEAs) as Low-Level (meta-)Heuristics (LLHs) while finding solutions for the optimization problem. To our knowledge, this work is the first to deal conjointly with the following issues: (i) selection of meta-heuristics instead of simple operators, (ii) focus on multi-objective optimization problems, and (iii) experiments on real-world problems and not just function benchmarks. In our experiments, we computed, for each algorithm execution, the Hypervolume and IGD+ indicators and compared the results using the Kruskal–Wallis statistical test. Furthermore, we ranked all the tested algorithms under three different Friedman rankings to summarize the cross-domain analysis. Our results show that hyper-heuristics have better cross-domain performance than single meta-heuristics, which makes them excellent candidates for solving new multi-objective optimization problems.
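The general shape of an online selection hyper-heuristic can be sketched with a score-proportional (roulette) choice among low-level heuristics, rewarding the ones that improve the incumbent solution. This is a generic single-objective toy, not any of the four hyper-heuristics studied in the paper.

```python
import random

def online_hyper_heuristic(llhs, evaluate, steps=50, seed=0):
    """Roulette-wheel selection among low-level heuristics (LLHs):
    each LLH's score tracks how often it improved the incumbent."""
    rng = random.Random(seed)
    scores = {name: 1.0 for name in llhs}          # optimistic start
    solution, value = 0.0, evaluate(0.0)
    for _ in range(steps):
        # Spin the roulette wheel over current scores.
        pick = rng.uniform(0, sum(scores.values()))
        for name, s in scores.items():
            pick -= s
            if pick <= 0:
                break
        candidate = llhs[name](solution, rng)
        cand_value = evaluate(candidate)
        if cand_value < value:                     # reward improvement
            solution, value = candidate, cand_value
            scores[name] += 1.0
        else:                                      # decay on failure
            scores[name] = max(scores[name] * 0.9, 0.1)
    return solution, value, scores

# Toy LLHs perturbing a scalar; objective: minimize (x - 2)^2.
llhs = {
    "small_step": lambda x, rng: x + rng.uniform(-0.1, 0.1),
    "big_step":   lambda x, rng: x + rng.uniform(-1.0, 1.0),
}
x, fx, scores = online_hyper_heuristic(llhs, lambda x: (x - 2.0) ** 2)
```

In the paper's setting the LLHs are whole MOEAs and the reward signal comes from multi-objective indicators rather than a scalar objective.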


Author(s):  
Lei Feng ◽  
Bo An

Partial label learning deals with the problem where each training instance is assigned a set of candidate labels, only one of which is correct. This paper provides the first attempt to leverage the idea of self-training for dealing with partially labeled examples. Specifically, we propose a unified formulation with proper constraints to train the desired model and perform pseudo-labeling jointly. For pseudo-labeling, unlike traditional self-training that manually identifies the ground-truth label with sufficiently high confidence, we introduce a maximum infinity norm regularization on the model outputs to achieve this desideratum automatically, which results in a convex-concave optimization problem. We show that optimizing this convex-concave problem is equivalent to solving a set of quadratic programming (QP) problems. By proposing an upper-bound surrogate objective function, we turn to solving only one QP problem, improving the optimization efficiency. Extensive experiments on synthesized and real-world datasets demonstrate that the proposed approach significantly outperforms state-of-the-art partial label learning approaches.
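The pseudo-labeling step can be sketched in its simplest discrete form: restrict the model outputs to each instance's candidate set and pick the highest-scoring candidate, the label whose output an infinity-norm regularizer would push toward 1. The data below are a toy illustration, not the paper's QP-based procedure.

```python
import numpy as np

def pseudo_label(outputs, candidates):
    """For each instance, choose the candidate label with the largest
    model output, ignoring all non-candidate labels."""
    labels = []
    for out, cand in zip(outputs, candidates):
        cand = list(cand)
        labels.append(cand[int(np.argmax(out[cand]))])
    return labels

# Toy example: 3 instances, 4 classes; each candidate set hides one true label.
outputs = np.array([[0.1, 0.7, 0.1, 0.1],
                    [0.4, 0.2, 0.3, 0.1],
                    [0.2, 0.2, 0.5, 0.1]])
candidates = [[1, 2], [0, 3], [2, 3]]
labels = pseudo_label(outputs, candidates)
```

In the paper this hard argmax is replaced by a relaxed, regularized assignment that is optimized jointly with the model.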

