Accelerate the optimization of large-scale manufacturing planning using game theory

Author(s):  
Hui-Ling Zhen ◽  
Zhenkun Wang ◽  
Xijun Li ◽  
Qingfu Zhang ◽  
Mingxuan Yuan ◽  
...  

This paper studies a real-world manufacturing problem, which is modeled as a bi-objective integer programming problem. The variables and constraints involved are usually numerous and vary dramatically with the manufacturing data. Directly solving such large-scale problems with heuristic algorithms or commercial solvers is very challenging. Considering that the decision space of such problems is usually sparse and has a block-like structure, we propose to use decomposition methods to accelerate the optimization process. However, existing decomposition methods require the problem to have a strict block structure, which our problem does not satisfy. To handle problems with such block-like structures, we propose a game-theory-based decomposition algorithm. The new method overcomes the large-scale issue and provides some convergence guarantees, as it narrows the search space and accelerates convergence. Extensive experimental results on real-world industrial manufacturing planning problems show that our method is more effective than the world's fastest commercial solver, Gurobi. The results also indicate that our method is less sensitive to the problem scale than Gurobi.
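A minimal Python sketch of the game-theoretic decomposition idea described above: each variable block is treated as a player that best-responds while the other blocks are held fixed, iterating until no block wants to change. The toy objective, the single coupling budget, and the brute-force block solver are illustrative assumptions, not the paper's formulation.

```python
# Illustrative best-response decomposition for a block-structured 0-1 program.
# Each block acts as a "player" optimizing its own variables with the others
# fixed. This is a hedged sketch of the general idea, not the paper's algorithm;
# the costs, values and capacity below are invented.
from itertools import product

costs = [[3, 5, 2], [4, 1, 6]]      # cost of selecting each item, per block
values = [[4, 7, 3], [5, 2, 8]]     # value of selecting each item, per block
capacity = 10                        # shared (coupling) budget on total cost

def best_response(block, other_choice):
    """Brute-force the best 0/1 assignment for one block, the other fixed."""
    used = sum(c * x for c, x in zip(costs[1 - block], other_choice))
    best, best_val = None, float("-inf")
    for choice in product((0, 1), repeat=len(costs[block])):
        cost = sum(c * x for c, x in zip(costs[block], choice))
        if used + cost > capacity:   # respect the coupling constraint
            continue
        val = sum(v * x for v, x in zip(values[block], choice))
        if val > best_val:
            best, best_val = choice, val
    return best

# Sequential best responses until a fixed point (a pure Nash equilibrium).
x = [(0, 0, 0), (0, 0, 0)]
for _ in range(20):
    prev = list(x)
    x[0] = best_response(0, x[1])    # block 0 responds to block 1
    x[1] = best_response(1, x[0])    # block 1 responds to the new block 0
    if x == prev:                    # no block wants to change: stop
        break
print("block assignments:", x)
```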

2017 ◽  
Vol 59 ◽  
pp. 463-494 ◽  
Author(s):  
Shaowei Cai ◽  
Jinkun Lin ◽  
Chuan Luo

The problem of finding a minimum vertex cover (MinVC) in a graph is a well-known NP-hard combinatorial optimization problem of great importance in theory and practice. Due to its NP-hardness, there has been much interest in developing heuristic algorithms for finding a small vertex cover in reasonable time. Previous heuristic algorithms for MinVC have focused on graphs of relatively small size and are not suitable for massive graphs, as they usually rely on high-complexity heuristics. This paper explores techniques for solving MinVC in very large-scale real-world graphs, including a construction algorithm, a local search algorithm, and a preprocessing algorithm. Both the construction and search algorithms are based on low-complexity heuristics, and we combine them to develop a heuristic MinVC algorithm called FastVC. Experimental results on a broad range of real-world massive graphs show that our algorithms are very fast and outperform previous heuristic algorithms for MinVC. We also develop a preprocessing algorithm to simplify graphs for MinVC algorithms. By applying the preprocessing algorithm to local search algorithms, we obtain two efficient MinVC solvers, NuMVC2+p and FastVC2+p, which show further improvement on massive graphs.
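A hedged Python sketch of the two low-complexity ingredients mentioned above, a greedy construction followed by a removal-based local search; the actual FastVC scoring and edge-selection rules are not reproduced here.

```python
# Hedged sketch: fast greedy construction of a vertex cover, then a simple
# local search that tries to shrink it. Not the exact FastVC heuristics.
import random

def greedy_cover(edges):
    """Cover edges by taking an endpoint of every still-uncovered edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)             # FastVC uses a degree-based rule; we take u
    return cover

def is_cover(cover, edges):
    return all(u in cover or v in cover for u, v in edges)

def local_search(edges, cover, iters=1000, seed=0):
    """Try removing one vertex at a time; keep the removal if still a cover."""
    rng = random.Random(seed)
    cover = set(cover)
    for _ in range(iters):
        v = rng.choice(list(cover))
        cover.remove(v)
        if not is_cover(cover, edges):
            cover.add(v)             # undo an infeasible removal
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
c = local_search(edges, greedy_cover(edges))
print("vertex cover:", sorted(c), "size:", len(c))
```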


1989 ◽  
Vol 111 (3) ◽  
pp. 389-397 ◽  
Author(s):  
G. A. Norris ◽  
R. E. Skelton

This paper selects sensors and actuators (location, type, and number) from an admissible set. We seek an approximate solution to this integer programming problem. Given the optimal use of the entire admissible set of sensors and actuators, it is possible to decompose the quadratic cost function into contributions from each stochastic input and each weighted output. In the past, these suboptimal cost decomposition methods of sensor and actuator selection have been used to locate perfect (infinite-bandwidth) sensors and actuators on large-scale systems. This paper first extends these ideas to the more practical case of imperfect actuators and sensors with dynamics of their own. Second, the old cost decomposition methods are discarded in favor of improved formulas for deleting sensors and actuators from the admissible set. These results show that there exists an optimal number of actuators (it is possible to use too few or too many). Preliminary attempts to solve this new research question are described. It is also shown that optimal actuator dynamics exist. NASA's SCOLE example demonstrates the concepts.
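A rough Python sketch, under assumed numbers, of the deletion idea: starting from the full admissible set, the device with the smallest estimated cost contribution is dropped until the desired number remains. The contribution values are hypothetical; in the paper they come from the quadratic cost decomposition.

```python
# Hedged sketch of a cost-decomposition-style deletion loop. The contribution
# values are made up; in the paper they derive from the quadratic cost.
def greedy_deletion(contributions, keep):
    """contributions: dict name -> estimated cost contribution (larger = more useful).
    keep: how many devices to retain."""
    selected = dict(contributions)
    while len(selected) > keep:
        worst = min(selected, key=selected.get)   # least useful device
        del selected[worst]
        # A full implementation would re-evaluate the contributions here,
        # since deleting one device changes the optimal controller.
    return sorted(selected)

actuators = {"a1": 4.2, "a2": 0.3, "a3": 2.9, "a4": 1.1}   # hypothetical values
print(greedy_deletion(actuators, keep=2))                   # -> ['a1', 'a3']
```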


Diabetes ◽  
2020 ◽  
Vol 69 (Supplement 1) ◽  
pp. 1588-P ◽  
Author(s):  
ROMIK GHOSH ◽  
ASHOK K. DAS ◽  
AMBRISH MITHAL ◽  
SHASHANK JOSHI ◽  
K.M. PRASANNA KUMAR ◽  
...  

Diabetes ◽  
2020 ◽  
Vol 69 (Supplement 1) ◽  
pp. 2258-PUB
Author(s):  
ROMIK GHOSH ◽  
ASHOK K. DAS ◽  
SHASHANK JOSHI ◽  
AMBRISH MITHAL ◽  
K.M. PRASANNA KUMAR ◽  
...  

2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are broadly helpful to the healthcare and biomedical sectors for predicting disease. For mild symptoms, it can be difficult to see a doctor at the hospital at any time, so big data can provide essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible health-care decisions. However, the conventional medical care model relies on structured input and requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning approach. Different datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized to bring the attributes into a common range. Then, weighted feature extraction is performed, in which each attribute value is multiplied by a weight function to magnify informative deviations. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are fed to hybrid deep learning models, a "Deep Belief Network (DBN) and Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. A comparative evaluation of the proposed prediction model against existing models confirms its effectiveness across various performance measures.
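A small Python sketch of phases (a) and (b), min-max normalization followed by per-attribute weighting; the weights shown are placeholders standing in for the JA-MVO-optimized weight function, and the DBN/RNN prediction phase is not reproduced.

```python
# Hedged sketch of the first two phases: min-max normalization, then
# multiplying each attribute by a weight. The weights are placeholders;
# in the paper they are tuned by the hybrid JA-MVO optimizer.
import numpy as np

def min_max_normalize(X):
    """Scale every attribute (column) into [0, 1]."""
    X = np.asarray(X, dtype=float)
    mins, maxs = X.min(axis=0), X.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)   # avoid division by zero
    return (X - mins) / span

def weighted_features(X_norm, weights):
    """Multiply each normalized attribute by its weight."""
    return X_norm * np.asarray(weights)

X = [[140, 80, 1.2],
     [120, 95, 0.8],
     [160, 70, 1.5]]
weights = [0.7, 1.3, 0.5]            # would come from JA-MVO in the paper
print(weighted_features(min_max_normalize(X), weights))
```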


2021 ◽  
Vol 51 (3) ◽  
pp. 9-16
Author(s):  
José Suárez-Varela ◽  
Miquel Ferriol-Galmés ◽  
Albert López ◽  
Paul Almasan ◽  
Guillermo Bernárdez ◽  
...  

During the last decade, Machine Learning (ML) has increasingly become a hot topic in the field of Computer Networks and is expected to be gradually adopted for a plethora of control, monitoring and management tasks in real-world deployments. This creates a need for new generations of students, researchers and practitioners with a solid background in ML applied to networks. In 2020, the International Telecommunication Union (ITU) organized the "ITU AI/ML in 5G challenge", an open global competition that introduced a broad audience to some of the main current challenges in ML for networks. This large-scale initiative gathered 23 different challenges proposed by network operators, equipment manufacturers and academia, and attracted a total of 1300+ participants from 60+ countries. This paper narrates our experience organizing one of the proposed challenges: the "Graph Neural Networking Challenge 2020". We describe the problem presented to participants, the tools and resources provided, some organizational aspects and participation statistics, an outline of the top-3 awarded solutions, and a summary of lessons learned along the way. As a result, this challenge leaves a curated set of educational resources openly available to anyone interested in the topic.


2021 ◽  
Vol 13 (3) ◽  
pp. 1274
Author(s):  
Loau Al-Bahrani ◽  
Mehdi Seyedmahmoudian ◽  
Ben Horan ◽  
Alex Stojcevski

Few non-traditional optimization techniques have been applied to the dynamic economic dispatch (DED) of large-scale thermal power units (TPUs), e.g., 1000 TPUs, considering the effects of valve-point loading with ramp-rate limitations. This is a complicated, multi-modal problem. In this investigation, a novel optimization technique, namely a multi-gradient particle swarm optimization (MG-PSO) algorithm with two stages for exploring and exploiting the search space, is employed as the optimization tool. The M particles (explorers) in the first stage are used to explore new neighborhoods, whereas the M particles (exploiters) in the second stage are used to exploit the best neighborhood. The negative gradient variation of the M particles in both stages balances the global and local search capabilities. The algorithm is validated on five medium-scale to very-large-scale power systems. The MG-PSO algorithm effectively reduces the difficulty of handling the large-scale DED problem, and simulation results confirm its suitability for such a complicated multi-objective problem in terms of fitness performance and consistency. The algorithm is also applied to estimate the generation required over 24 h to meet changes in load demand. This investigation provides useful technical references for economic dispatch operators to update their power system programs in order to achieve economic benefits.
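A hedged Python sketch of a two-stage explore-then-exploit particle swarm in the spirit of MG-PSO, applied to a toy sphere objective; the decreasing coefficients, parameter values and objective are assumptions, not the paper's DED model or settings.

```python
# Two-stage "explore then exploit" particle swarm on a toy objective.
# All parameters and the objective are illustrative assumptions.
import random

def sphere(x):
    return sum(v * v for v in x)

def pso_stage(f, swarm, vel, iters, w_start, w_end, rng):
    gbest = min(swarm, key=f)[:]
    pbest = [p[:] for p in swarm]
    for t in range(iters):
        w = w_start + (w_end - w_start) * t / max(iters - 1, 1)  # decreasing inertia
        for i, p in enumerate(swarm):
            for d in range(len(p)):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - p[d])
                             + 1.5 * r2 * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if f(p) < f(pbest[i]):
                pbest[i] = p[:]
            if f(p) < f(gbest):
                gbest = p[:]
    return gbest

rng = random.Random(1)
dim, m = 5, 20
swarm = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(m)]
vel = [[0.0] * dim for _ in range(m)]
best = pso_stage(sphere, swarm, vel, 50, 0.9, 0.6, rng)    # stage 1: explorers
swarm = [[b + rng.uniform(-1, 1) for b in best] for _ in range(m)]
vel = [[0.0] * dim for _ in range(m)]
best = pso_stage(sphere, swarm, vel, 50, 0.5, 0.2, rng)    # stage 2: exploiters
print("best objective:", sphere(best))
```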


1978 ◽  
Vol 8 (4) ◽  
pp. 459-477 ◽  
Author(s):  
Ian Budge ◽  
Valentine Herman

Traditional theories of government coalition formation concentrate on formal criteria inspired by – if not directly drawn from – game theory. One such criterion is that the coalition which forms must be winning; another is that it should have no surplus members without whom it would still be winning, i.e. it should be minimal; and a third is that the number of parties should be as few as possible. The closest that such theories come to considering the substantive issues affecting the formation of coalitions in the real world is their focus on reducing the ideological diversity of parties within the government. On many occasions, however, such ideological considerations receive negligible attention from politicians, who often ignore size factors altogether.
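A small illustrative Python sketch of the formal criteria mentioned above, using hypothetical seat counts: a coalition is winning if it holds a majority of seats, and minimal winning if dropping any member makes it lose.

```python
# Enumerate minimal winning coalitions for a hypothetical four-party parliament.
from itertools import combinations

seats = {"A": 40, "B": 35, "C": 15, "D": 10}       # hypothetical seat counts
majority = sum(seats.values()) // 2 + 1            # 51 of 100 seats

def winning(coalition):
    return sum(seats[p] for p in coalition) >= majority

def minimal_winning(coalition):
    # Winning, and every proper sub-coalition obtained by dropping one member loses.
    return winning(coalition) and all(
        not winning([p for p in coalition if p != q]) for q in coalition)

parties = list(seats)
for size in range(1, len(parties) + 1):
    for c in combinations(parties, size):
        if minimal_winning(c):
            print(c, sum(seats[p] for p in c))
# -> ('A', 'B') 75, ('A', 'C') 55, ('B', 'C', 'D') 60
```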


Omega ◽  
2021 ◽  
pp. 102442
Author(s):  
Lin Zhou ◽  
Lu Zhen ◽  
Roberto Baldacci ◽  
Marco Boschetti ◽  
Ying Dai ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Seyed Hossein Jafari ◽  
Amir Mahdi Abdolhosseini-Qomi ◽  
Masoud Asadpour ◽  
Maseud Rahgozar ◽  
Naser Yazdani

The entities of real-world networks are connected via different types of connections (i.e., layers). The task of link prediction in multiplex networks is to find missing connections based on both intra-layer and inter-layer correlations. Our observations confirm that in a wide range of real-world multiplex networks, from social to biological and technological, a positive correlation exists between connection probability in one layer and similarity in other layers. Accordingly, a similarity-based automatic general-purpose multiplex link prediction method, SimBins, is devised that quantifies the amount of connection uncertainty based on observed inter-layer correlations in a multiplex network. Moreover, SimBins enhances the prediction quality in the target layer by incorporating the effect of link overlap across layers. Applying SimBins to various datasets from diverse domains, our findings indicate that SimBins outperforms the compared methods (both baseline and state-of-the-art) in most instances when predicting links. Furthermore, SimBins imposes only minor computational overhead on the base similarity measures, making it a potentially fast method suitable for large-scale multiplex networks.
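A hedged Python sketch of the inter-layer correlation idea underlying SimBins: candidate links in a target layer are ranked by a common-neighbour similarity computed in an auxiliary layer. The toy network and the plain common-neighbour score are assumptions; this is not the SimBins method itself.

```python
# Rank candidate links in one layer by similarity measured in another layer.
from itertools import combinations

# Two layers over the same node set (hypothetical toy multiplex network).
target_layer = {(1, 2), (2, 3), (3, 4)}
aux_layer = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (1, 4)}

def neighbors(edges, n):
    return {v for u, v in edges if u == n} | {u for u, v in edges if v == n}

nodes = {n for e in target_layer | aux_layer for n in e}

scores = {}
for u, v in combinations(sorted(nodes), 2):
    if (u, v) in target_layer or (v, u) in target_layer:
        continue                         # already linked in the target layer
    # Common-neighbour similarity measured in the auxiliary layer.
    scores[(u, v)] = len(neighbors(aux_layer, u) & neighbors(aux_layer, v))

# Highest-scoring non-edges are predicted as missing links in the target layer.
for pair, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(pair, s)
```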

