solution quality
Recently Published Documents

TOTAL DOCUMENTS: 620 (FIVE YEARS: 270)
H-INDEX: 26 (FIVE YEARS: 7)

PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262499
Author(s):  
Negin Alisoltani ◽  
Mostafa Ameli ◽  
Mahdi Zargayouna ◽  
Ludovic Leclercq

Real-time ride-sharing has become popular in recent years. However, the underlying optimization problem for this service is highly complex. One of the most critical challenges when solving the problem is balancing solution quality against computation time, especially in large-scale problems where the number of received requests is huge. In this paper, we rely on an exact solving method to ensure solution quality, while using AI-based techniques to limit the number of requests fed to the solver. More precisely, we propose a clustering method based on a new shareability function that places the most shareable trips in separate clusters. Previous studies only consider spatio-temporal dependencies when clustering mobility service requests, which is not efficient at finding shareable trips. Here, we define the shareability function to consider all the different sharing states for each pair of trips. Each cluster is then handled by a proposed heuristic framework that solves the matching problem within the cluster. As the method favors sharing, we introduce constraints on the number of shared trips that allow the service to choose how many trips are shared. To validate our proposal, we apply the method to the network of the city of Lyon, France, with half a million requests in the morning peak from 6 to 10 AM. The results demonstrate that the algorithm can provide high-quality solutions in a short time for large-scale problems. The proposed clustering method can also be used for other mobility service problems, such as car-sharing and bike-sharing.
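The pairwise-shareability idea behind the clustering can be sketched as follows. The scoring function (best combined route over all pickup/drop-off orders versus the two solo trips), the detour limit, and the greedy grouping are our assumptions for illustration; the paper's actual shareability function and clustering are more elaborate.

```python
def shareability(t1, t2, detour_limit=1.2):
    """Hypothetical shareability score for two trips, each given as an
    (origin, destination) pair of points in the plane. It compares the best
    combined route over the feasible pickup/drop-off orders against the two
    solo trips; higher means more shareable, 0 means no worthwhile sharing."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    o1, d1 = t1
    o2, d2 = t2
    solo = dist(o1, d1) + dist(o2, d2)
    # The different sharing states for the pair: who is picked up / dropped first.
    orders = [
        (o1, o2, d1, d2), (o1, o2, d2, d1),
        (o2, o1, d2, d1), (o2, o1, d1, d2),
    ]
    shared = min(sum(dist(a, b) for a, b in zip(p, p[1:])) for p in orders)
    return max(0.0, solo - shared) if shared <= detour_limit * solo else 0.0

def cluster(trips, threshold=0.5):
    """Greedy clustering: each trip joins the first cluster containing a trip
    it is sufficiently shareable with, otherwise it starts a new cluster."""
    clusters = []
    for t in trips:
        for c in clusters:
            if any(shareability(t, u) >= threshold for u in c):
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters
```

Each resulting cluster can then be handed to an exact solver, keeping its request count small.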


Author(s):  
Shihui Li

The distribution optimization of WSN nodes is one of the key issues in WSN research and a research hotspot in the field of communication. Targeting this problem, a node distribution optimization scheme based on an improved invasive weed optimization algorithm (IIWO) is proposed. IIWO improves the update strategy for the initial positions of weeds by using a cubic-mapping chaotic operator, and uses a Gaussian mutation operator to increase the diversity of the population. The simulation results show that the proposed algorithm achieves higher solution quality and faster convergence than IWO and CPSO. In a distribution optimization example for WSN nodes, the optimal network coverage rate obtained by IIWO improves on IWO and CPSO by 1.82% and 0.93%, respectively. For the same network coverage rate, IIWO requires fewer nodes.
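The two IIWO ingredients named above can be sketched like this. The specific cubic map (x ← 4x³ − 3x on [−1, 1]) and the mutation spread are our assumptions, since the abstract does not spell them out.

```python
import random

def cubic_map_init(n, dim, lo, hi, seed=0.7):
    """Chaotic population initialization via the cubic map x <- 4x^3 - 3x,
    which stays in [-1, 1] (one plausible reading of the paper's
    'cubic mapping chaotic operator'). Values are rescaled to [lo, hi]."""
    x = seed
    pop = []
    for _ in range(n):
        ind = []
        for _ in range(dim):
            x = 4 * x ** 3 - 3 * x
            ind.append(lo + (x + 1) / 2 * (hi - lo))
        pop.append(ind)
    return pop

def gauss_mutate(ind, lo, hi, sigma=0.1, rng=None):
    """Gaussian mutation: perturb each gene and clamp it to the bounds."""
    rng = rng or random.Random(42)
    return [min(hi, max(lo, g + rng.gauss(0.0, sigma * (hi - lo)))) for g in ind]
```

Chaotic initialization spreads the initial population more evenly than plain uniform sampling, while Gaussian mutation keeps diversity up in later generations.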


Author(s):  
Yan Xiong ◽  
Jiatang Cheng

Background: The generator is a mechanical device that converts other forms of energy into electrical energy. It is widely used in industrial and agricultural production and in daily life. Methods: To improve the accuracy of generator fault diagnosis, a fault classification method based on the bare-bones cuckoo search (BBCS) algorithm combined with an artificial neural network is proposed. In BBCS, the bare-bones strategy is combined with a modified Lévy flight to alleviate premature convergence. Typical fault features are then extracted from the vibration and current signals of the generator, and a hybrid diagnosis model is established based on a back-propagation (BP) neural network optimized by the proposed BBCS algorithm. Results: Experimental results indicate that BBCS exhibits better convergence performance in terms of solution quality and convergence rate. Furthermore, the hybrid diagnosis method has higher classification accuracy and can effectively identify generator faults. Conclusion: The proposed method appears effective for generator fault diagnosis.
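The "bare-bones" ingredient can be illustrated with a minimal optimizer: each candidate is resampled from a Gaussian centred between itself and the current best, with spread |best − x|, so the search contracts automatically. This is a generic bare-bones update, not the paper's exact BBCS (which also uses a modified Lévy flight); the function names and parameters are ours.

```python
import random

def bare_bones_search(f, dim, lo, hi, pop_size=15, iters=200, seed=1):
    """Minimal bare-bones search sketch: resample each candidate from
    N((best + x) / 2, |best - x|), keep it if it improves, and track the
    global best. The small epsilon keeps the Gaussian spread positive."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(iters):
        for i, x in enumerate(pop):
            cand = [min(hi, max(lo, rng.gauss((b + g) / 2, abs(b - g) + 1e-9)))
                    for b, g in zip(best, x)]
            if f(cand) < f(x):
                pop[i] = cand
        best = min(pop, key=f)
    return best, f(best)

# Classic test function: global minimum 0 at the origin.
sphere = lambda x: sum(g * g for g in x)
```

In the full method, the best weights found this way initialize/optimize a BP neural network classifier.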


Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3177
Author(s):  
Laureano F. Escudero ◽  
Juan F. Monge

The hub location problem (HLP) consists of selecting nodes from a network to act as hubs for directing flow traffic, i.e., collecting flow from origin nodes, possibly transferring it to other hubs, and distributing it to destination nodes. Potential expansion of hub construction and increases in capacity modules along a time horizon are also considered, so uncertainty is inherent to the problem. Two time scales are dealt with: a long one (e.g., semesters or years) on which the strategic decisions are made, and a much shorter one for the operational decisions. Accordingly, two types of uncertain parameters are considered, strategic and operational. This work focuses on developing a stochastic mixed-integer linear optimization modeling framework and a matheuristic approach for solving the multistage multiscale allocation hub location network expansion planning problem under uncertainty. Given the intrinsic difficulty of the problem and the huge dimensions of the instances (due to the network size of realistic instances as well as the cardinality of the strategic and operational scenario trees), it is unrealistic to seek an optimal solution. A matheuristic algorithm called SFR3 is introduced, which stands for scenario variables fixing and iteratively randomizing the relaxation reduction of the constraints and variables' integrality. It obtains a (hopefully good) feasible solution in reasonable time, together with a lower bound on the optimal solution value to assess solution quality. The performance of the overall approach is computationally assessed using stochastically perturbed versions of the well-known CAB data.
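The matheuristic pattern of "relax, randomly fix, repair, and compare against the relaxation bound" can be illustrated on a deliberately tiny stand-in problem. This knapsack toy is entirely our assumption; the paper's SFR3 operates on a multistage stochastic hub-location MILP with a real solver. It only shows how a relaxation bound plus a fixed-and-repaired feasible solution yields a quality gap.

```python
import random

def fix_and_round(values, weights, cap, rounds=20, seed=0):
    """Toy fix-and-relax loop on a 0-1 knapsack: the LP relaxation gives an
    upper bound; randomized fixing of variables at their relaxed values,
    followed by a greedy repair, gives a feasible solution. The difference
    bound - incumbent then measures solution quality."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    # LP-relaxation bound: fill by value/weight ratio, last item fractionally.
    bound, room = 0.0, float(cap)
    frac = [0.0] * n
    for i in order:
        take = min(1.0, room / weights[i]) if room > 0 else 0.0
        frac[i] = take
        bound += take * values[i]
        room -= take * weights[i]
    rng = random.Random(seed)
    best_val = 0
    for _ in range(rounds):
        # Randomly fix each variable with probability equal to its LP value.
        x = [1 if rng.random() < frac[i] else 0 for i in range(n)]
        # Greedy repair: drop worst-ratio items until the knapsack fits.
        for i in reversed(order):
            if sum(w for w, xi in zip(weights, x) if xi) <= cap:
                break
            if x[i]:
                x[i] = 0
        best_val = max(best_val, sum(v for v, xi in zip(values, x) if xi))
    return best_val, bound
```

SFR3 applies the same logic at scale: scenario variables are fixed, integrality is iteratively and randomly relaxed/restored, and the relaxation value bounds the incumbent's optimality gap.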


2021 ◽  
Author(s):  
Aditya Kumar Singh ◽  
Pruthvi Raju Vegesna ◽  
Dhruva Prasad ◽  
Saideep Chandrashekar Kachodi ◽  
Sumit Lohiya ◽  
...  

Abstract The Aishwariya Oil Field, located in the Barmer Basin of Rajasthan, India, with a STOIIP of ∼300 MMBBLS, was initially developed with down-dip edge water injection. The main reservoir unit, the Fatehgarh Formation, has excellent reservoir characteristics, with porosities of 20-30% and permeabilities of 1 to 5 Darcys. The Fatehgarh Formation is subdivided into the Lower Fatehgarh (LF) and Upper Fatehgarh (UF) Formations, of which the LF sands are more homogeneous and have slightly better reservoir properties. The oil has an in-situ viscosity of 10-30 cP. Given the adverse waterflood mobility ratio, the importance of EOR was recognised very early. Initial screening studies identified chemical EOR (polymer and ASP) as the preferred EOR process. Extensive lab studies and simulation work were conducted to develop the polymer flood concept. A polymer flood development plan was prepared targeting the LF sands of the field, utilizing lessons learnt from the nearby Mangala Field polymer implementation project. The polymer flood in the Aishwariya Field was implemented in two stages. In the first stage, a polymer injectivity test was conducted in 3 wells to establish the potential for polymer injection in these wells. The injection was extended to 3 more wells and continued for ∼4 years. A significant drop in water cut was observed in nearby wells during this phase of polymer injection. In the next stage, polymer flooding was extended to the entire LF sands, with the drilling of 14 new infill wells and the conversion of 8 existing wells to polymer injectors. A ∼14 km pipeline was laid from the Mangala Central Polymer Facility to well pads in the field to supply the required 6-8 KBPD of ∼15000 ppm polymer mother solution. Extended pre-production was adopted prior to the start of polymer injection for all wells, as it significantly improved injection (reduced skin) and conformance.
The full-field polymer flood project was implemented, and injection was ramped up to the planned 40-50 KBPD of polymerized water within a month, owing to good injectivity and polymer solution quality. A detailed laboratory, well, and reservoir surveillance program has been implemented, and the desired wellhead viscosity of 25-30 cP has been achieved. The initial response shows a significant increase in oil production rate and a decrease in water cut. This paper presents the polymer laboratory studies, initial long-term injectivity test results, polymer flood development concept and planning, simulation studies, and field implementation in the LF Formation of the Aishwariya Field.


2021 ◽  
Vol 13 (23) ◽  
pp. 13433
Author(s):  
Mohammed Elhenawy ◽  
Hesham A. Rakha ◽  
Youssef Bichiou ◽  
Mahmoud Masoud ◽  
Sebastien Glaser ◽  
...  

City bikes and bike-sharing systems (BSSs) are one solution to the last-mile problem. BSSs promote equity by offering an affordable alternative means of transportation for low-income households. These systems feature a multitude of bike stations scattered around a city, so users can borrow a bike from one location and return it there or at a different location. However, this may create an unbalanced system, where some stations have excess bikes and others too few. In this paper, we propose a solution for balancing BSS stations to satisfy the expected demand. The paper is a direct extension of the deferred-acceptance-based heuristic previously proposed by the authors. We develop an algorithm that provides a delivery truck with a near-optimal rebalancing route, i.e., an approximation to the shortest Hamiltonian cycle, which is an NP-hard problem. The results show good solution quality and computational performance, making the algorithm a viable candidate for real-time use by BSS operators. Our approach is best suited to low-Q problems. The mean running times for the largest instance are 143.6, 130.32, and 51.85 s for Q = 30, 20, and 10, respectively, which makes the proposed algorithm suitable for real-time rebalancing.
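The Hamiltonian-cycle subproblem the truck faces can be illustrated with a standard nearest-neighbour construction. This is only a textbook heuristic for the tour, not the authors' deferred-acceptance-based algorithm; station coordinates and the depot are assumed inputs.

```python
def nearest_neighbour_route(depot, stations):
    """Nearest-neighbour tour construction for a rebalancing truck:
    starting at the depot, repeatedly visit the closest unvisited station,
    then return to the depot to close the cycle. Stations are (x, y) points.
    Returns the route and its total Euclidean length."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    route, todo, cur = [depot], list(stations), depot
    while todo:
        nxt = min(todo, key=lambda s: dist(cur, s))
        todo.remove(nxt)
        route.append(nxt)
        cur = nxt
    route.append(depot)  # close the Hamiltonian cycle
    return route, sum(dist(a, b) for a, b in zip(route, route[1:]))
```

In practice such a constructed tour is usually post-processed (e.g., with 2-opt moves) before being dispatched.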


Author(s):  
Kaike Zhang ◽  
Xueping Li ◽  
Mingzhou Jin

This study generalizes the r-interdiction median (RIM) problem with fortification to simultaneously consider two types of risks: probabilistic exogenous disruptions and endogenous disruptions caused by intentional attacks. We develop a bilevel programming model that includes a lower-level interdiction problem and a higher-level fortification problem to hedge against such risks. We then prove that the interdiction problem is supermodular and adopt the cuts associated with supermodularity to develop an efficient cutting-plane algorithm that achieves exact solutions. For the fortification problem, we adopt the logic-based Benders decomposition (LBBD) framework to take advantage of the two-level structure and of the property that a facility should not be fortified if it is not attacked at the lower level. Numerical experiments show that the cutting-plane algorithm is more efficient than benchmark methods in the literature, especially as the problem size grows. With regard to solution quality, LBBD outperforms the greedy algorithm in the literature with an improvement of up to 13.2% in total cost, and it is as good as or better than the tree-search implicit enumeration method. Summary of Contribution: This paper studies the r-interdiction median problem with fortification (RIMF) in a supply chain network that simultaneously considers two types of disruption risks: random disruptions that occur probabilistically and disruptions caused by intentional attacks. The problem is to determine the allocation of limited facility fortification resources across an existing network. It is modeled as a bilevel program combining a defender's problem and an attacker's problem, which generalizes the r-interdiction median problem with probabilistic fortification. This paper suits IJOC in two main aspects: (1) the lower-level attacker's interdiction problem is a challenging, highly nonlinear model.
In the literature, only a total enumeration method has been applied to a special case of this problem. By exploiting a structural property of the problem, namely the supermodularity of the transportation cost function, we developed an exact cutting-plane method that solves the problem to optimality. Extensive numerical studies were conducted; hence, this paper fits the intersection of operations research and computing. (2) We developed an efficient logic-based Benders decomposition algorithm for the higher-level defender's fortification problem. Overall, this study generalizes several important problems in the literature, such as RIM, RIMF, and RIMF with probabilistic fortification (RIMF-p).
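The key structural property, supermodularity of the transportation cost in the interdicted set, means the marginal cost of interdicting one more facility grows as the interdicted set grows (increasing differences). On a toy instance this can be checked by brute force; the distance data below are invented, and this verification is only an illustration, not the paper's proof or cutting-plane algorithm.

```python
from itertools import combinations

def transport_cost(interdicted, dists):
    """Total cost of serving every customer from its nearest surviving
    facility; dists[c][f] is the customer-to-facility distance (toy data).
    At least one facility must remain outside the interdicted set."""
    alive = [f for f in range(len(dists[0])) if f not in interdicted]
    return sum(min(row[f] for f in alive) for row in dists)

def is_supermodular(f, ground):
    """Brute-force check of increasing differences:
    f(A + e) - f(A) <= f(B + e) - f(B) for all A <= B and e not in B."""
    sets = [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]
    for A in sets:
        for B in sets:
            if not A <= B:
                continue
            for e in ground:
                if e in B:
                    continue
                if f(A | {e}) - f(A) > f(B | {e}) - f(B) + 1e-9:
                    return False
    return True
```

Cuts derived from exactly this increasing-differences inequality are what make the cutting-plane approach exact and efficient.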


2021 ◽  
Vol 11 (4) ◽  
pp. 204-231
Author(s):  
Ali Mahmoud ◽  
Xiaohui Yuan

A rockfill dam's quality and its economic aspects are inextricably interwoven. Approaching the optimal design of a rockfill dam paves the way to achieving the best quality at the lowest cost. Taking the Sardasht rockfill dam as a case study, two semi-empirical models are presented for seepage and for the safety factor. These two models, together with construction cost, serve as three objective functions for the shape optimization of the Sardasht rockfill dam. Optimization is handled by a robust multi-objective particle swarm optimization algorithm (RCR-MOPSO), whose building blocks are a new reproduction method inspired by the Rubik's cube (RCR) and NSGA-III. Three benchmark problems and two real-world problems were solved with RCR-MOPSO and compared against NSGA-III and MOPSO to verify its performance. The solution quality and performance of RCR-MOPSO are significantly better than those of the original MOPSO and close to those of NSGA-III; nevertheless, RCR-MOPSO recorded a 38% shorter runtime than NSGA-III. RCR-MOPSO produced a set of non-dominated solutions as final results for the Sardasht dam shape optimization; due to the defined constraints, all of them dominate the original design. Compared with the Sardasht dam's original design, the construction cost was reduced by 31.12% on average, while seepage and the safety factor improved by 15.84% and 27.78% on average, respectively.
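The "set of non-dominated solutions" every MOPSO variant maintains can be computed with a generic Pareto filter. This filter is our illustration of the archiving step common to such algorithms, not RCR-MOPSO's specific reproduction or archive-management scheme.

```python
def non_dominated(points):
    """Pareto filter for minimization: keep only the points that no other
    point dominates. A point a dominates b if a is no worse than b in every
    objective and strictly better in at least one."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For the dam, each point would be a (cost, seepage, 1/safety-factor) triple, so "all solutions dominate the original design" means the original design fails this filter against any of them.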


Author(s):  
Mauro Bonafini ◽  
Bernhard Schmitzer

Abstract: We study Benamou's domain decomposition algorithm for optimal transport in the entropy-regularized setting. The key observation is that the regularized variant converges to the globally optimal solution under very mild assumptions. We prove linear convergence of the algorithm with respect to the Kullback–Leibler divergence and illustrate the (potentially very slow) rates with numerical examples. On problems with sufficient geometric structure (such as Wasserstein distances between images) we expect much faster convergence. We then discuss important aspects of a computationally efficient implementation, such as adaptive sparsity, a coarse-to-fine scheme, and parallelization, paving the way to numerically solving large-scale optimal transport problems. We demonstrate efficient numerical performance for computing the Wasserstein-2 distance between 2D images and observe that, even without parallelization, domain decomposition compares favorably to a single efficient implementation of the Sinkhorn algorithm in terms of runtime, memory, and solution quality.
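The single-solver baseline referenced here, Sinkhorn scaling for entropy-regularized optimal transport, fits in a few lines. This is the standard textbook iteration (alternate matching of row and column marginals), shown on dense pure-Python lists; the paper's implementation is far more elaborate (sparse, coarse-to-fine, decomposed over domains).

```python
import math

def sinkhorn(mu, nu, cost, eps=0.1, iters=500):
    """Entropy-regularized optimal transport via Sinkhorn scaling.
    mu, nu are the source/target marginals, cost[i][j] the ground cost,
    eps the entropic regularization strength. Returns the transport plan
    pi[i][j] = u[i] * K[i][j] * v[j] with (approximately) the right marginals."""
    K = [[math.exp(-c / eps) for c in row] for row in cost]  # Gibbs kernel
    n, m = len(mu), len(nu)
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # Alternate scaling: match row marginals, then column marginals.
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
```

Domain decomposition runs this kind of solver on overlapping cells of the problem and alternates between them, which is what makes the large-scale setting tractable.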


Author(s):  
Reinaldo Da Silva Ribeiro ◽  
Rafael Lima de Carvalho ◽  
Tiago Da Silva Almeida

In this research, we investigate the application of the simulated annealing algorithm to the state assignment problem for finite state machines. State assignment is a classic NP-complete problem in digital systems design and directly impacts area and power costs as well as design time. The solutions found in the literature use population-based methods that consume additional computing resources. Simulated annealing was chosen because it does not maintain a population while seeking a solution. The objective of this research is therefore to evaluate the impact of the simulated annealing approach on solution quality. The proposed solution is evaluated on the LGSynth89 benchmark and compared with other state-of-the-art approaches. The experimental simulations show an average loss in solution quality of 14.29%, alongside an average gain in processing performance of 58.67%. The results indicate that small quality losses can be traded for a significant increase in processing performance.
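The single-solution search loop can be sketched on a toy version of state assignment. The cost model here (total Hamming distance between the codes of connected states, a common proxy for next-state logic cost), the swap move, and the cooling schedule are our assumptions; the paper's cost function and parameters over LGSynth89 may differ.

```python
import math
import random

def anneal_assignment(n_states, edges, iters=5000, seed=3):
    """Simulated annealing for a toy state-assignment cost: minimize the
    total Hamming distance between the binary codes of connected states.
    A move swaps the codes of two states; worse moves are accepted with
    probability exp(-delta / T) under a geometric cooling schedule."""
    rng = random.Random(seed)
    codes = list(range(n_states))  # distinct bit patterns, one per state

    def cost(c):
        return sum(bin(c[a] ^ c[b]).count("1") for a, b in edges)

    cur, cur_cost = codes[:], cost(codes)
    best, best_cost = cur[:], cur_cost
    t = 1.0
    for _ in range(iters):
        i, j = rng.randrange(n_states), rng.randrange(n_states)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = cost(cand) - cur_cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cost(cand)
        t = max(1e-3, 0.999 * t)
        if cur_cost < best_cost:
            best, best_cost = cur[:], cur_cost
    return best, best_cost
```

Because only one solution and one candidate are kept in memory at a time, the resource footprint stays far below that of population-based methods, which is precisely the trade-off the paper quantifies.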

