Online Learning for Linear Programming with Time-Dependent Cost

2011 ◽  
Vol 230-232 ◽  
pp. 793-797
Author(s):  
Wei Li ◽  
Chong Yang Deng

Machine learning and linear programming with time-dependent cost are two popular intelligent optimization tools for handling uncertainty in real-world problems, so combining the two technologies is quite attractive. This paper proposes an effective framework for dealing with uncertainty in practice, based on introducing a learning parameter into linear programming models.

2021 ◽  
Author(s):  
Andreas Christ Sølvsten Jørgensen ◽  
Atiyo Ghosh ◽  
Marc Sturrock ◽  
Vahid Shahrezaei

Abstract
The modelling of many real-world problems relies on computationally heavy simulations. Since statistical inference rests on repeated simulations to sample the parameter space, the high computational expense of these simulations can become a stumbling block. In this paper, we compare two ways to mitigate this issue based on machine learning methods. One approach is to construct lightweight surrogate models to substitute for the simulations used in inference. Alternatively, one might circumvent the need for Bayesian sampling schemes altogether and directly estimate the posterior distribution. We focus on stochastic simulations that track autonomous agents and present two case studies of real-world applications: tumour growth and the spread of infectious diseases. We demonstrate that good accuracy in inference can be achieved with a relatively small number of simulations, making our machine learning approaches orders of magnitude faster than classical simulation-based methods that rely on sampling the parameter space. However, we find that while some methods generally produce more robust results than others, no algorithm offers a one-size-fits-all solution when attempting to infer model parameters from observations. Instead, one must choose the inference technique with the specific real-world application in mind. The stochastic nature of the considered real-world phenomena poses an additional challenge that can become insurmountable for some approaches. Overall, we find machine learning approaches that create direct inference machines to be promising for real-world applications. We present our findings as general guidelines for modelling practitioners.

Author summary
Computer simulations play a vital role in modern science, as they are commonly used to compare theory with observations. One can thus infer the properties of an observed system by comparing the data to the predicted behaviour in different scenarios. Each of these scenarios corresponds to a simulation with slightly different settings. However, since real-world problems are highly complex, the simulations often require extensive computational resources, making direct comparisons with data challenging, if not impossible. It is, therefore, necessary to resort to inference methods that mitigate this issue, but it is not clear-cut which path to choose for any specific research problem. In this paper, we provide general guidelines for how to make this choice. We do so by studying examples from oncology and epidemiology and by taking advantage of developments in machine learning. More specifically, we focus on simulations that track the behaviour of autonomous agents, such as single cells or individuals. We show that the best way forward is problem-dependent and highlight the methods that yield the most robust results across the different case studies. We demonstrate that these methods are highly promising and produce reliable results in a small fraction of the time required by classic approaches that rely on comparisons between data and individual simulations. Rather than relying on a single inference technique, we recommend employing several methods and selecting the most reliable based on predetermined criteria.
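The surrogate-model idea can be illustrated with a minimal, hypothetical sketch: a cheap regression model is fitted to a handful of expensive simulator runs and then stands in for the simulator during rejection-style inference. The one-parameter `expensive_simulator`, the polynomial surrogate, and all numbers below are made up for illustration; the paper's agent-based simulations and inference schemes are far richer.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulator(theta):
    # Stand-in for a costly agent-based simulation: returns a summary
    # statistic (here, a noisy quadratic of the parameter).
    return theta**2 + rng.normal(0.0, 0.1)

# 1. Run the simulator on a small design of parameter values.
train_theta = np.linspace(0.0, 2.0, 30)
train_stat = np.array([expensive_simulator(t) for t in train_theta])

# 2. Fit a lightweight surrogate (polynomial least squares) that
#    predicts the summary statistic from the parameter.
surrogate = np.poly1d(np.polyfit(train_theta, train_stat, deg=2))

# 3. Rejection-style inference: keep prior draws whose *surrogate*
#    prediction lies close to the observed statistic -- no further
#    simulator calls are needed.
observed = 1.0
prior_draws = rng.uniform(0.0, 2.0, 100_000)
accepted = prior_draws[np.abs(surrogate(prior_draws) - observed) < 0.05]
print(round(float(accepted.mean()), 2))  # posterior mean, close to 1.0
```

In practice the polynomial would be replaced by a Gaussian process or neural network, and the rejection step by a proper Bayesian sampling scheme; the point is only that step 3 never touches the expensive simulator again.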


Author(s):  
Petr Berka ◽  
Ivan Bruha

Genuine symbolic machine learning (ML) algorithms can process only symbolic, categorical data. However, real-world problems, e.g. in medicine or finance, involve both symbolic and numerical attributes. An important issue in ML is therefore how to discretize (categorize) numerical attributes, and quite a few discretization procedures exist in the ML field. This paper describes two newer algorithms for categorization (discretization) of numerical attributes. The first is implemented in KEX (Knowledge EXplorer) as its preprocessing procedure. Its idea is to discretize the numerical attributes in such a way that the resulting categorization corresponds to the KEX knowledge acquisition algorithm. Since the categorization for KEX is done "off-line" before the KEX machine learning algorithm is run, it can also be used as a preprocessing step for other machine learning algorithms. The other discretization procedure is implemented in CN4, a large extension of the well-known CN2 machine learning algorithm. The range of a numerical attribute is divided into intervals that may form a complex generated by the algorithm as part of the class description. Experimental results compare the performance of KEX and CN4 on some well-known ML databases. To make the comparison more illustrative, we also used the discretization procedure of the MLC++ library. Other ML algorithms, such as ID3 and C4.5, were run in our experiments as well. The results are then compared and discussed.
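As a generic illustration of the preprocessing idea (not the KEX or CN4 procedures, which choose cut points with respect to the class attribute), a numerical attribute can be discretized into equal-frequency intervals:

```python
import numpy as np

def equal_frequency_bins(values, n_bins):
    """Discretize a numerical attribute into n_bins categories,
    each holding roughly the same number of examples."""
    # Cut points are the empirical quantiles of the attribute.
    cuts = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    # np.digitize maps each value to the index of its interval.
    return np.digitize(values, cuts)

ages = np.array([22, 25, 31, 38, 41, 47, 52, 60, 63, 70])
print(equal_frequency_bins(ages, 3))  # → [0 0 0 1 1 1 2 2 2 2]
```

Class-aware methods such as those in the paper instead place cut points where they best separate the classes, which generally yields fewer, more informative categories.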


1977 ◽  
Vol 9 (2) ◽  
pp. 1-8
Author(s):  
C. Richard Shumway ◽  
Hovav Talpaz

Linear programming (LP) models have been developed for a wide range of normative purposes in agricultural production economics. Despite their widespread application, a pervading concern among users is reliability: how well does a particular model actually describe and/or predict real-world phenomena when it is so designed?

Much attention has been devoted in recent years to methods for making programming models produce results more in line with those actually observed. These efforts have included development of more detail in production activities and restrictions, incorporation of flexibility constraints into recursive programming systems, specification of more realistic behavioral properties, and development of guidelines for reducing aggregation error.


2020 ◽  
Vol 32 (1) ◽  
pp. 25-38 ◽  
Author(s):  
Tonči Carić ◽  
Juraj Fosin

This paper provides a framework for solving the Time Dependent Vehicle Routing Problem (TDVRP) using historical data. The data are used to predict travel times at certain times of the day and to derive congestion zones that optimization algorithms can exploit. A combination of well-known algorithms was adapted to the time-dependent setting and used to solve real-world problems. The adapted algorithm outperforms the best-known results on TDVRP benchmarks. The proposed framework was applied to a real-world problem, and the results show a reduction in time delays in serving customers compared to the time-independent case.
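The time-dependent setting can be sketched as follows: travel time on a leg depends on the departure time through a piecewise-constant speed profile, and the computation steps through time slots so the FIFO property holds (leaving later never means arriving earlier). The slot layout and speed factors below are hypothetical, not the paper's derived congestion zones.

```python
# Piecewise-constant speed profile: fraction of free-flow speed per
# 2-hour time-of-day slot (hypothetical values, not the paper's data;
# slots 3 and 7 model morning and evening peaks).
SLOT_HOURS = 2
SPEED_FACTOR = [1.0, 1.0, 1.0, 0.6, 0.9, 0.9, 0.9, 0.5, 0.8, 1.0, 1.0, 1.0]

def travel_time(distance_km, depart_h, free_flow_kmh=60.0):
    """Travel time in hours, stepping through congestion slots."""
    t = depart_h
    remaining = distance_km
    while remaining > 1e-9:
        slot = int(t // SLOT_HOURS) % len(SPEED_FACTOR)
        speed = free_flow_kmh * SPEED_FACTOR[slot]
        slot_end = (int(t // SLOT_HOURS) + 1) * SLOT_HOURS
        reachable = speed * (slot_end - t)  # km drivable before slot ends
        if reachable >= remaining:
            return t + remaining / speed - depart_h
        remaining -= reachable
        t = slot_end
    return t - depart_h

# The same 30 km leg is faster off-peak than in the 06:00-08:00 peak.
print(travel_time(30, 2.0))  # → 0.5 (free flow)
print(travel_time(30, 6.0))  # slower in the congested slot
```

An optimization algorithm adapted to this setting simply calls such a profile-aware travel-time function instead of a constant distance/speed lookup.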


2021 ◽  
Vol 9 ◽  
pp. 78-86
Author(s):  
Arnav Saini ◽  
Nipun Gauba ◽  
Hardik Chawla ◽  
Jabir Ali

Traffic collisions are a major source of deaths, injuries, and property damage every year. Road accidents are among the most difficult real-world problems to tackle because of their high unpredictability. The existence and persistence of the problem vary in degree from place to place, and its consequences can mean loss of human life and capital. Each place therefore needs to tackle the problem with a customized approach, depending on the causes responsible for its accidents. Even today, when mass operation of autonomous vehicles remains out of sight, predicting a road accident before it takes place is practically impossible. The only practical approach to decreasing the number of road accidents is to analyze the reasons that lead to them. The concepts of data analysis, data visualization, and machine learning help tackle real-world problems by exploring data and deriving valuable insights, which in turn inform measures to solve the targeted problem and drive business growth. In this research study, a dataset of road mishaps that occurred in the UK over the period 2005-2015 is analyzed using these concepts. The defined approach can help the concerned authorities and the respective government take every possible step and amendment, and hence mitigate the identified causes and scenarios that lead to road accidents.


1986 ◽  
Vol 18 (2) ◽  
pp. 155-164 ◽  
Author(s):  
Bruce A. McCarl ◽  
Jeffrey Apland

Abstract
Systematic approaches to validating linear programming models are discussed for prescriptive and predictive applications to economic problems. Specific reference is made to a general linear programming formulation; however, the approaches are applicable to mathematical programming applications in general. Detailed procedures are outlined for validating various aspects of model performance given complete or partial sets of observed, real-world values of variables. Alternative evaluation criteria are presented, along with procedures for correcting validation problems.
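One aspect of such validation, comparing a model's solution against observed real-world values of the variables, can be illustrated with simple error statistics (a generic sketch with made-up numbers, not the authors' specific procedures or criteria):

```python
import numpy as np

# Hypothetical acreage allocations: LP solution vs. observed farm data.
predicted = np.array([120.0, 80.0, 40.0, 10.0])
observed = np.array([110.0, 90.0, 35.0, 15.0])

# Two common validation statistics for predictive performance.
mae = np.mean(np.abs(predicted - observed))               # mean absolute error
mape = 100 * np.mean(np.abs(predicted - observed) / observed)  # mean abs. % error
print(float(mae), round(float(mape), 1))  # → 7.5 17.0
```

A validation exercise would compare such statistics against predetermined thresholds and, if they fail, revisit the model's activities, restrictions, or behavioral assumptions.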


2017 ◽  
Author(s):  
Sudarsun Santhiappan ◽  
Balaraman Ravindran

The data classification task assigns labels to data points using a model learned from a collection of pre-labeled data points. The Class Imbalance Learning (CIL) problem is concerned with the performance of classification algorithms in the presence of under-represented data and severe class distribution skews. Due to the inherently complex characteristics of imbalanced datasets, learning from such data requires new understandings, principles, algorithms, and tools to transform vast amounts of raw data efficiently into information and knowledge representations. It is important to study CIL because it is rare to find a real-world classification problem that follows a balanced class distribution. In this article, we present how machine learning has become an integral part of modern life and how some real-world problems are modeled as CIL problems. We also provide a detailed survey of the fundamentals of, and solutions to, class imbalance learning. We conclude the survey by presenting some of the challenges and opportunities in class imbalance learning.
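The simplest family of CIL solutions is resampling; a minimal random-oversampling sketch (a generic baseline for illustration, not any specific method from the article) duplicates minority-class examples until the classes are balanced:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_oversample(X, y):
    """Duplicate minority-class examples (with replacement) until every
    class matches the majority-class size."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [], []
    for c in classes:
        idx = np.flatnonzero(y == c)
        pick = rng.choice(idx, size=n_max, replace=True)
        Xs.append(X[pick])
        ys.append(y[pick])
    return np.concatenate(Xs), np.concatenate(ys)

# A 95:5 imbalance becomes 95:95 after oversampling.
X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 95 + [1] * 5)
Xb, yb = random_oversample(X, y)
print(np.bincount(yb))  # → [95 95]
```

More sophisticated CIL techniques replace the blind duplication with synthetic example generation, undersampling of the majority class, or cost-sensitive learning.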


2015 ◽  
Author(s):  
Joshua G Stern ◽  
Eric A Gaucher

Studying the evolutionary history of life’s molecules - DNA, RNA, and protein - reveals nature-based solutions to real-world problems. We discuss an approach to applied molecular evolution that is well-known within the field but may be unfamiliar to a wider audience. Using a case study at the intersection of molecular evolution and medicine, we introduce the fundamental concepts of orthology and paralogy. We also explain a practical entry point to molecular evolution named STORI: Selectable Taxon Ortholog Retrieval Iteratively. STORI is a machine learning algorithm designed to clear a bottleneck that researchers encounter when studying evolution.


1995 ◽  
Vol 10 (1) ◽  
pp. 77-81
Author(s):  
Claire Nédellec

“Integration of Machine Learning and Knowledge Acquisition” may seem a surprising title for an ECAI-94 workshop, since most machine learning (ML) systems are intended for knowledge acquisition (KA). So what seems problematic about integrating ML and KA? The answer lies in the difference between the approaches developed by what is referred to as ML and KA research. Apart from some major exceptions, such as learning apprentice tools (Mitchell et al., 1989) or libraries like the Machine Learning Toolbox (MLT Consortium, 1993), most ML algorithms have been described without any characterization in terms of real application needs, i.e. in terms of what they could be effectively useful for. Although ML methods have been applied to “real world” problems, few general and reusable conclusions have been drawn from these knowledge acquisition experiments. As ML techniques become more and more sophisticated and able to produce various forms of knowledge, the number of possible applications grows. ML methods then tend to be more precisely specified in terms of the domain knowledge initially required, the control knowledge to be set, and the nature of the system output (MLT Consortium, 1993; Kodratoff et al., 1994).

