DEA Cross-Efficiency Aggregation with Deviation Degree Based on Standardized Euclidean Distance

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yan Zou ◽  
Weijie Chen ◽  
Mingyu Tong ◽  
Shuo Tao

Data envelopment analysis (DEA) has been extended to cross-efficiency to provide better discrimination and ranking of decision-making units (DMUs). Current research on cross-efficiency mainly focuses on the non-uniqueness of the optimal solutions of the underlying linear programs and on information aggregation. As a common distance metric, the standardized Euclidean distance is introduced to define the discrimination power between two vectors and the deviation degree that measures the difference between an individual preference and the group ideal preference. Based on these definitions, an alternative method is presented for comparing multiple optimal solutions, and a universal weighted cross-efficiency model considering both dynamic adjustment of weights and preference formulation is constructed for evaluation and ranking. Two numerical examples illustrate the effectiveness of the comparison method for multiple optimal solutions and of the weight-determination method for DMUs, respectively. Finally, a practical application evaluating environmental treatment efficiency in the western area of China is given. Comparative analysis shows that the proposed model is more moderate, flexible, and general than several available models and methods, which extends the theoretical research on cross-efficiency evaluation.
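
As a rough illustration of the deviation degree described here, the sketch below computes a standardized Euclidean distance between an individual cross-efficiency profile and the group-average profile; the data, function name, and the use of the group mean as the "ideal" preference are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): standardized Euclidean
# distance between two cross-efficiency (or weight) vectors, with each
# component scaled by its sample variance across all DMUs.
import numpy as np

def standardized_euclidean(x, y, sample):
    """d(x, y) = sqrt(sum_i (x_i - y_i)^2 / s_i^2), where s_i^2 is the
    component-wise variance estimated from `sample` (rows = DMUs)."""
    var = np.var(sample, axis=0, ddof=1)
    var[var == 0] = 1.0          # guard against zero-variance components
    return np.sqrt(np.sum((np.asarray(x) - np.asarray(y)) ** 2 / var))

# Hypothetical cross-efficiency scores of 4 DMUs under 3 weight profiles
scores = np.array([[0.82, 0.75, 0.91],
                   [0.78, 0.80, 0.88],
                   [0.90, 0.72, 0.85],
                   [0.70, 0.69, 0.93]])
# Deviation degree of DMU 0's profile from the group-average profile
print(standardized_euclidean(scores[0], scores.mean(axis=0), scores))
```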

Author(s):  
Ruiyang Song ◽  
Kuang Xu

We propose and analyze a temporal concatenation heuristic for solving large-scale finite-horizon Markov decision processes (MDPs), which divides the MDP into smaller sub-problems along the time horizon and generates an overall solution by simply concatenating the optimal solutions from these sub-problems. As a “black box” architecture, temporal concatenation works with a wide range of existing MDP algorithms. Our main results characterize the regret of temporal concatenation compared to the optimal solution. We provide upper bounds for general MDP instances, as well as a family of MDP instances in which the upper bounds are shown to be tight. Together, our results demonstrate temporal concatenation's potential for substantial speed-up at the expense of some performance degradation.
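
A minimal sketch of the idea, assuming a tabular finite-horizon MDP and plain backward induction as the "black box" solver for each segment; the tensor shapes, the zero terminal values, and the equal segment split are illustrative assumptions rather than the authors' algorithm.

```python
# Temporal concatenation sketch: split the horizon into segments, solve each
# segment independently with a zero terminal value, concatenate the policies.
import numpy as np

def backward_induction(P, R, horizon, terminal_value):
    """P: (A, S, S) transition tensor, R: (S, A) rewards.
    Returns per-stage greedy policies (stage 0 first) and the stage-0 value."""
    V = terminal_value
    policies = []
    for _ in range(horizon):
        Q = R + np.einsum('ast,t->sa', P, V)   # (S, A) stage-action values
        policies.append(Q.argmax(axis=1))
        V = Q.max(axis=1)
    policies.reverse()
    return policies, V

def temporal_concatenation(P, R, T, num_segments):
    seg = T // num_segments
    all_policies = []
    for _ in range(num_segments):              # each sub-problem ignores the rest
        pi, _ = backward_induction(P, R, seg, np.zeros(R.shape[0]))
        all_policies.extend(pi)
    return all_policies                        # overall (possibly suboptimal) policy

# Toy usage: 3 states, 2 actions, horizon 12 split into 3 segments
S, A = 3, 2
P = np.random.rand(A, S, S); P /= P.sum(axis=2, keepdims=True)
R = np.random.rand(S, A)
policy = temporal_concatenation(P, R, T=12, num_segments=3)
```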


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Angyan Tu ◽  
Jun Ye ◽  
Bing Wang

In order to simplify the complex calculations and overcome the difficulty of solving neutrosophic number optimization models (NNOMs) in practical production processes, this paper presents two methods for solving NNOMs in indeterminate environments, using the MATLAB built-in function "fmincon()" and neutrosophic number operations (NNOs). Next, the two methods are applied to linear and nonlinear programming problems with neutrosophic number information to obtain the optimal solution of the maximum/minimum objective function under the constraint conditions of practical productions, illustrated by neutrosophic number optimization programming (NNOP) examples. Finally, under indeterminate environments, fit optimal solutions of the examples can also be obtained by specifying suitable indeterminate scales to fulfill actual requirements. The NNOP methods yield feasible and flexible optimal solutions and offer the advantage of simple calculations in practical applications.
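
As a hedged illustration of the general approach, the sketch below treats a neutrosophic number N = a + bI with indeterminacy I in a given interval, fixes I at a few specified indeterminate scales, and solves the resulting ordinary NLP; scipy.optimize.minimize is used here only as a stand-in for MATLAB's fmincon(), and the objective and constraints are hypothetical, not taken from the paper.

```python
# Minimal sketch (assumed formulation): for each chosen indeterminate scale I,
# the neutrosophic model collapses to an ordinary constrained NLP.
from scipy.optimize import minimize

def solve_nnop(I):
    # Hypothetical objective with neutrosophic coefficients:
    # minimize (2 + I)*x0^2 + (3 + 0.5*I)*x1^2  s.t.  x0 + x1 >= 4, x >= 0
    obj = lambda x: (2 + I) * x[0] ** 2 + (3 + 0.5 * I) * x[1] ** 2
    cons = [{'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 4}]
    bounds = [(0, None), (0, None)]
    return minimize(obj, x0=[1.0, 1.0], bounds=bounds, constraints=cons)

for I in (0.0, 0.5, 1.0):          # a few specified indeterminate scales
    res = solve_nnop(I)
    print(I, res.x, res.fun)
```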


Impact ◽  
2020 ◽  
Vol 2020 (8) ◽  
pp. 60-61
Author(s):  
Wei Weng

For a production system, 'scheduling' aims to determine which machine or worker processes which job at what time so as to produce the best result for user-set objectives, such as minimising the total cost. Finding the optimal solution to a large scheduling problem, however, is extremely time consuming because of the high complexity. To cut this time to almost an instant, Dr Wei Weng, from the Institute of Liberal Arts and Science, Kanazawa University in Japan, is leading research projects on developing online scheduling and control systems that provide near-optimal solutions in real time, even for large production systems. In her system, a large scheduling problem is solved as distributed small problems, and information about jobs and machines is collected online to provide results instantly. This will bring two big changes: 1. Large scheduling problems, for which it tends to take days to reach the optimal solution, will be solved instantly by reaching near-optimal solutions; 2. Rescheduling, which is still difficult for optimization algorithms to perform in real time, will be completed instantly when urgent jobs arrive or scheduled jobs need to be changed or cancelled during production. The projects have great potential to raise the efficiency of scheduling and production control in future smart industry, enabling lower costs, higher productivity and better customer service.


Author(s):  
Jungho Park ◽  
Hadi El-Amine ◽  
Nevin Mutlu

We study a large-scale resource allocation problem with a convex, separable, not necessarily differentiable objective function that includes uncertain parameters belonging to an interval uncertainty set, subject to a set of deterministic constraints. We devise an exact algorithm to solve the minimax regret formulation of this problem, which is NP-hard, and we show that the proposed Benders-type decomposition algorithm converges to an ε-optimal solution in finite time. We evaluate the performance of the proposed algorithm via an extensive computational study, and our results show that it provides efficient solutions to large-scale problems, especially when the objective function is differentiable. Although, as expected, computation times are longer for problems with nondifferentiable objective functions, we show that good-quality, near-optimal solutions can be achieved in shorter runtimes by using our exact approach. We also develop two heuristic approaches, which are partially based on our exact algorithm, and show that the merit of the proposed exact approach lies both in providing an ε-optimal solution and in providing good-quality near-optimal solutions by laying the foundation for efficient heuristic approaches.
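
For readers unfamiliar with the term, the standard minimax regret formulation under interval uncertainty has the following form; the notation here is assumed for illustration and is not copied from the paper.

```latex
% X: deterministic feasible set;  U = \prod_i [\underline{c}_i, \overline{c}_i]:
% interval uncertainty set;  f(x,c): convex, separable objective.
\[
  \min_{x \in X} \; \max_{c \in U} \Big( f(x, c) \;-\; \min_{y \in X} f(y, c) \Big)
\]
```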


Author(s):  
Bernard K.S. Cheung

Genetic algorithms have been applied to solving various types of large-scale, NP-hard optimization problems. Many researchers have investigated their global convergence properties using Schema Theory, Markov chains, etc. A more realistic approach, however, is to estimate the probability of success in finding the global optimal solution within a prescribed number of generations under some function landscapes. Further investigation reveals that the inherent weaknesses that affect their performance can be remedied, and their efficiency can be significantly enhanced, through the design of an adaptive scheme that integrates the crossover, mutation and selection operations. Advances in information technology and extensive corporate globalization create great challenges for the solution of modern supply chain models, which are becoming ever more complex and formidable in size. Meta-heuristic methods have to be employed to obtain near-optimal solutions. Recently, a genetic algorithm has been reported to solve these problems satisfactorily, and there are good reasons for this.
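
To make the idea of an adaptive scheme concrete, the following is a minimal sketch of a genetic algorithm whose mutation step adapts to population diversity; the fitness landscape, operators, and parameter values are hypothetical and are not the adaptive scheme proposed in the chapter.

```python
# Illustrative GA with selection, uniform crossover, and diversity-adaptive mutation.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                       # hypothetical landscape: maximize -sum(x^2)
    return -np.sum(x ** 2, axis=-1)

def run_ga(dim=10, pop_size=40, generations=200):
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    for _ in range(generations):
        fit = fitness(pop)
        # selection: binary tournament
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # crossover: uniform mixing between paired parents
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents, parents[::-1])
        # adaptive mutation: larger step size when the population has collapsed
        diversity = pop.std(axis=0).mean()
        sigma = 0.5 if diversity > 0.1 else 1.5
        children += rng.normal(0, sigma, children.shape) * (rng.random(children.shape) < 0.1)
        # elitism: keep the best individual seen in this generation
        children[0] = pop[fit.argmax()]
        pop = children
    return pop[fitness(pop).argmax()]

print(run_ga())
```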


2020 ◽  
Vol 2020 ◽  
pp. 1-6 ◽  
Author(s):  
Wenlong Xia ◽  
Yuanping Zhou ◽  
Qingdang Meng

In this paper, a downlink virtual-channel-optimization nonorthogonal multiple access (VNOMA) without channel state information at the transmitter (CSIT) is proposed. The novel idea is to construct multiple complex virtual channels by jointly adjusting the amplitudes and phases to maximize the minimum Euclidean distance (MED) among the superposed constellation points. The optimal solution is derived in the absence of CSIT. Considering practical communications with finite input constellations in which symbols are uniformly distributed, we resort to the sum constellation constrained capacity (CCC) to evaluate the performance. For MED criterion, the maximum likelihood (ML) decoder is expected at the receiver. To decrease the computational cost, we propose a reduced-complexity bitwise ML (RBML) decoder. Experimental results are presented to validate the superior of our proposed scheme.
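
The sketch below shows what the MED objective looks like for a hypothetical two-user case: one user's virtual channel is scaled by an amplitude and rotated by a phase, and the minimum pairwise distance of the superposed constellation is evaluated over a grid. The QPSK alphabets, the grid search, and all parameter ranges are assumptions for illustration, not the paper's derivation.

```python
# Minimum Euclidean distance (MED) of a two-user superposed QPSK constellation
# as a function of the second user's virtual-channel amplitude a and phase theta.
import numpy as np
from itertools import product

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def med(a, theta):
    points = np.array([s1 + a * np.exp(1j * theta) * s2
                       for s1, s2 in product(QPSK, QPSK)])
    iu = np.triu_indices(len(points), k=1)            # all distinct pairs
    return np.abs(points[iu[0]] - points[iu[1]]).min()

# Brute-force search for the MED-maximizing virtual-channel parameters
grid_a = np.linspace(0.2, 1.0, 41)
grid_t = np.linspace(0.0, np.pi / 2, 46)
best = max(((med(a, t), a, t) for a in grid_a for t in grid_t), key=lambda z: z[0])
print(best)
```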


Author(s):  
Houssem Felfel ◽  
Omar Ayadi ◽  
Faouzi Masmoudi

In this paper, a multi-objective, multi-product, multi-period production and transportation planning problem in the context of a multi-site supply chain is proposed. The developed model attempts simultaneously to maximize the profit and to maximize the product quality level. The objective of this paper is to provide the decision maker with a front of Pareto optimal solutions and to help in selecting the best Pareto solution. To do so, the epsilon-constraint method is adopted to generate the set of Pareto optimal solutions. Then, the technique for order preference by similarity to ideal solution (TOPSIS) is used to choose the best compromise solution. The multi-criteria optimization and compromise solution method (VIKOR), commonly used in multiple criteria analysis, is applied in order to evaluate the solutions selected using the TOPSIS method. This paper offers a numerical example to illustrate the solution approach and to compare the results obtained using the TOPSIS and VIKOR methods.
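
As a compact reference for the ranking step, the following is a generic TOPSIS sketch applied to a hypothetical Pareto front; the decision matrix, weights, and criteria senses are invented for illustration and are not the paper's data.

```python
# Generic TOPSIS ranking of Pareto solutions (higher closeness = better compromise).
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] is True if criterion j is
    to be maximized. Returns closeness coefficients to the ideal solution."""
    M = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
    V = M * weights                                    # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical Pareto front: columns = (profit, quality level), both maximized
front = np.array([[120.0, 0.82], [135.0, 0.78], [150.0, 0.71]])
print(topsis(front, weights=np.array([0.6, 0.4]), benefit=np.array([True, True])))
```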


2018 ◽  
Vol 35 (06) ◽  
pp. 1850039 ◽  
Author(s):  
Lei Chen ◽  
Fei-Mei Wu ◽  
Feng Feng ◽  
Fujun Lai ◽  
Ying-Ming Wang

Major drawbacks of the traditional data envelopment analysis (DEA) method include the excessive flexibility in selecting optimal weights, inadequate discrimination power among efficient decision-making units, and the consideration of only desirable outputs. By introducing the concept of global efficiency optimization, this study proposed a double-frontier DEA approach with undesirable outputs to generate a common set of weights for evaluating all decision-making units from both the optimistic and pessimistic perspectives. To ensure a unique optimal solution, compromise models for individual efficiency optimization were developed as a secondary goal. Finally, as an illustration, the models were applied to evaluate the energy efficiency of the Chinese regional economy. The results showed that the proposed approach could improve discrimination power and obtain a fair result when both desirable and undesirable outputs exist.
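
For orientation, the standard double-frontier DEA ratio models behind this approach are sketched below; the notation is assumed, undesirable outputs are omitted for brevity, and the paper's common-weights and compromise models build on top of this basic idea.

```latex
% Optimistic (efficient-frontier) score of DMU_o, with inputs x and outputs y:
\[
  \theta_o^{*} \;=\; \max_{u,v}\ \frac{\sum_{r} u_r y_{ro}}{\sum_{i} v_i x_{io}}
  \quad \text{s.t.}\quad \frac{\sum_{r} u_r y_{rj}}{\sum_{i} v_i x_{ij}} \le 1 \;\;\forall j,
  \qquad u_r, v_i \ge 0.
\]
% Pessimistic (inefficient-frontier) score: replace \max with \min and require
% \sum_{r} u_r y_{rj} \big/ \sum_{i} v_i x_{ij} \ge 1 for all j.
```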


Author(s):  
Sabyasachi Mondal ◽  
Radhakant Padhi

This paper presents an approach to compute the optimal time-to-go and final velocity magnitude in Generalized Explicit (GENEX) guidance. Time-to-go and final velocity magnitude are two critical input parameters in a GENEX guidance implementation. The optimal time-to-go selects the optimal solution that yields a lower cost than the other optimal solutions, and supplying a realistic final velocity lowers the cost further. These developments relax the existing limitations of GENEX, thereby making this optimal guidance law more effective and generic.


2020 ◽  
Vol 37 (4) ◽  
pp. 1524-1547
Author(s):  
Gholam Hosein Askarirobati ◽  
Akbar Hashemi Borzabadi ◽  
Aghileh Heydari

Detecting the Pareto optimal points on the Pareto frontier is one of the most important topics in multiobjective optimal control problems (MOCPs). This paper presents a scalarization technique to construct an approximate Pareto frontier of MOCPs using an improved normal boundary intersection (NBI) scalarization strategy. For this purpose, the MOCP is first discretized, and then, using a grid of weights, a sequence of single-objective optimal control problems is solved to achieve a uniform distribution of Pareto optimal solutions on the Pareto frontier. The aim is to achieve a more even distribution of Pareto optimal solutions on the Pareto frontier and to improve the efficiency of the algorithm. It is shown that, in contrast to the NBI method, where Pareto optimality of the solutions is not guaranteed, the obtained optimal solution of the scalarized problem is a Pareto optimal solution of the MOCP. Finally, the ability of the proposed method is evaluated and compared with other approaches on several practical MOCPs. The numerical results indicate that the proposed method is more efficient and provides a more uniform distribution of solutions on the Pareto frontier than other methods, such as the weighted sum, normalized normal constraint, and NBI methods.
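
For context, the classical NBI subproblem that the improved strategy builds on is stated below; the notation is assumed for illustration and the paper's scalarization modifies this formulation.

```latex
% \Phi: pay-off matrix, F^{*}: utopia point, \beta: convex weights on the convex
% hull of individual minima (CHIM), \hat{n}: quasi-normal direction to the CHIM.
\[
\begin{aligned}
\max_{x,\; t}\quad & t\\
\text{s.t.}\quad & F^{*} + \Phi\beta + t\,\hat{n} \;=\; F(x),\\
& x \in \Omega \quad (\text{discretized dynamics, path and control constraints}).
\end{aligned}
\]
```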

