Plan Optimization by Plan Rewriting

Author(s): José Luis Ambite, Craig A. Knoblock, Steven Minton

Planning by Rewriting (PbR) is a paradigm for efficient high-quality planning that exploits declarative plan-rewriting rules and efficient local search techniques to transform an easy-to-generate, but possibly suboptimal, initial plan into a high-quality plan. In addition to addressing planning efficiency and plan quality, PbR offers a new anytime planning algorithm. The plan-rewriting rules can be specified by a domain expert or learned automatically. We describe a learning approach, based on comparing initial and optimal plans, that produces rules competitive with manually specified ones. PbR is fully implemented and has been applied to several existing domains. The experimental results show that the PbR approach provides significant savings in planning effort while generating high-quality plans.

2001, Vol. 15, pp. 207-261
Author(s): J. L. Ambite, C. A. Knoblock

Domain-independent planning is a hard combinatorial problem. Taking into account plan quality makes the task even more difficult. This article introduces Planning by Rewriting (PbR), a new paradigm for efficient high-quality domain-independent planning. PbR exploits declarative plan-rewriting rules and efficient local search techniques to transform an easy-to-generate, but possibly suboptimal, initial plan into a high-quality plan. In addition to addressing the issues of planning efficiency and plan quality, this framework offers a new anytime planning algorithm. We have implemented this planner and applied it to several existing domains. The experimental results show that the PbR approach provides significant savings in planning effort while generating high-quality plans.
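
For readers unfamiliar with the paradigm, the following is a minimal sketch, not the authors' implementation, of the anytime rewrite-and-select loop described in the abstracts above. In PbR the initial planner, the rewriting rules, and the plan cost measure are domain-specific, so here they are passed in as placeholder arguments.

    import random

    def pbr_anytime_search(initial_plan, rewrite_neighbors, plan_cost, max_steps=1000):
        """Local search over plans produced by declarative rewriting rules.

        initial_plan      -- an easy-to-generate, possibly suboptimal plan
        rewrite_neighbors -- maps a plan to the plans reachable by one rewrite
        plan_cost         -- plan-quality measure to minimize (e.g. schedule length)
        """
        current = best = initial_plan
        for _ in range(max_steps):
            neighbors = rewrite_neighbors(current)
            if not neighbors:
                break
            candidate = random.choice(neighbors)   # one of several possible sampling strategies
            if plan_cost(candidate) <= plan_cost(current):
                current = candidate                # accept non-worsening rewrites
            if plan_cost(current) < plan_cost(best):
                best = current                     # best plan so far is available at any time
        return best

Because the best plan found so far is always available, stopping the loop early still yields a valid plan, which is what makes the algorithm anytime.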


2020, Vol. 2020 (4), pp. 116-1-116-7
Author(s): Raphael Antonius Frick, Sascha Zmudzinski, Martin Steinebach

In recent years, the number of forged videos circulating on the Internet has increased immensely. Software and services to create such forgeries have become more and more accessible to the public, and the risk of malicious use of forged videos has risen accordingly. This work proposes an approach, based on the ghost effect known from image forensics, for detecting forgeries in videos that replace faces in video sequences or alter facial expressions. The experimental results show that the proposed approach is able to identify forgery in high-quality encoded video content.
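
As a rough illustration of the underlying idea, the sketch below applies the JPEG ghost technique from image forensics to a single frame: the frame is recompressed at a range of qualities, and a region that was previously compressed at some quality tends to show an unusually small difference (a "ghost") near that quality. This is a generic illustration using Pillow and NumPy, not the authors' detector, and the quality range is an arbitrary choice.

    import io
    import numpy as np
    from PIL import Image

    def jpeg_ghost_maps(frame, qualities=range(30, 95, 5)):
        """Per-quality squared-difference maps for one video frame (a PIL image)."""
        rgb = frame.convert("RGB")
        reference = np.asarray(rgb.convert("L"), dtype=np.float64)
        maps = {}
        for quality in qualities:
            buffer = io.BytesIO()
            rgb.save(buffer, format="JPEG", quality=quality)   # recompress the frame
            buffer.seek(0)
            recompressed = np.asarray(Image.open(buffer).convert("L"), dtype=np.float64)
            maps[quality] = (reference - recompressed) ** 2    # small values hint at a ghost
        return maps

A spliced face region whose difference dips at a quality different from the rest of the frame is a candidate forgery; averaging the maps over small blocks and over consecutive frames makes the signal easier to see.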


2020, Vol. 12 (4), pp. 676
Author(s): Yong Yang, Wei Tu, Shuying Huang, Hangyuan Lu

Pansharpening is the process of fusing a low-resolution multispectral (LRMS) image with a high-resolution panchromatic (PAN) image. In pansharpening, the LRMS image is often directly upsampled by a factor of 4, which may result in the loss of high-frequency details in the fused high-resolution multispectral (HRMS) image. To solve this problem, we put forward a novel progressive cascade deep residual network (PCDRN) with two residual subnetworks for pansharpening. The network resizes the MS image to the size of the PAN image in two stages, gradually fusing the LRMS image with the PAN image in a coarse-to-fine manner. To prevent overly smooth results and achieve high-quality fusion, a multitask loss function is defined to train the network. Furthermore, to eliminate checkerboard artifacts in the fusion results, we employ a resize-convolution approach instead of transposed convolution for upsampling LRMS images. Experimental results on the Pléiades and WorldView-3 datasets show that PCDRN exhibits superior performance compared to other popular pansharpening methods in terms of quantitative and visual assessments.
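
The resize-convolution idea mentioned above can be illustrated in a few lines of PyTorch: interpolate first, then apply an ordinary convolution, instead of using a transposed convolution whose uneven kernel overlap causes checkerboard artifacts. The layer sizes below are illustrative only and do not reproduce the PCDRN architecture.

    import torch
    import torch.nn as nn

    class ResizeConvUpsample(nn.Module):
        """Upsample by interpolation, then convolve (avoids checkerboard artifacts)."""
        def __init__(self, channels, scale=2):
            super().__init__()
            self.up = nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False)
            self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

        def forward(self, x):
            return self.conv(self.up(x))   # interpolate first, then convolve

    # Upsampling a 4-band LRMS tensor twice by 2x gives the overall 4x scale
    lrms = torch.randn(1, 4, 64, 64)
    hrms_like = ResizeConvUpsample(4)(ResizeConvUpsample(4)(lrms))   # -> 1 x 4 x 256 x 256

Applying the 2x block twice mirrors the two-stage, coarse-to-fine resizing described in the abstract, as opposed to a single direct 4x upsampling.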


2021
Author(s): Atiya Masood

The Job Shop Scheduling (JSS) problem is considered challenging due to practical requirements such as multiple objectives and the complexity of production flows. JSS has received great attention because of its broad applicability in real-world situations. One of the prominent solution approaches to JSS problems is to design effective dispatching rules. Dispatching rules are investigated broadly in both academic and industrial environments because they are easy to implement (by computers and shop-floor operators) at low computational cost. However, the manual development of dispatching rules is time-consuming and requires expert knowledge of the scheduling environment. The hyper-heuristic approach that uses genetic programming (GP) to solve JSS problems is known as GP-based hyper-heuristic (GP-HH). GP-HH is a very useful approach for discovering dispatching rules automatically.

Although it is technically simple to consider only single-objective optimization for JSS, it is now widely evidenced in the literature that JSS by nature presents several potentially conflicting objectives, including maximal flowtime, mean flowtime, and mean tardiness. A few studies in the literature attempt to solve many-objective JSS with more than three objectives, but existing studies have some major limitations. First, many-objective JSS problems have been solved by multi-objective evolutionary algorithms (MOEAs). However, recent studies have suggested that the performance of conventional MOEAs is prone to the scalability challenge and degrades dramatically on many-objective optimization problems (MaOPs); many-objective JSS using MOEAs inherits the same challenge. Thus, using MOEAs for many-objective JSS problems often fails to select quality dispatching rules. Second, although the reference-point method is one of the most prominent and efficient methods for diversity maintenance in many-objective problems, it uses a uniform distribution of reference points, which is only appropriate for a regular Pareto front. JSS problems, however, often have an irregular Pareto front, and uniformly distributed reference points do not match it well, producing many useless points during evolution. These useless points can significantly affect the performance of reference-point-based algorithms and do not help to enhance the solution diversity of the evolved Pareto front in many-objective JSS problems. Third, Pareto Local Search (PLS) is a prominent and effective local search method for multi-objective JSS optimization problems, but no existing studies use PLS within GP-HH.

To address these limitations, this thesis's overall goal is to develop GP-HH approaches to evolving effective rules that handle many conflicting objectives simultaneously in JSS problems.

To achieve the first goal, this thesis proposes the first many-objective GP-HH method for JSS problems to find Pareto fronts of nondominated dispatching rules. Decision-makers can use this GP-HH method to select appropriate rules based on their preferences over multiple conflicting objectives. This study combines GP with the fitness evaluation scheme of a many-objective reference-point-based approach. The experimental results show that the proposed algorithm significantly outperforms MOEAs such as NSGA-II and SPEA2.

To achieve the second goal, this thesis proposes two adaptive reference-point approaches (model-free and model-driven). In both approaches, the reference points are generated according to the distribution of the evolved dispatching rules. The model-free reference-point adaptation approach is inspired by Particle Swarm Optimization (PSO). The model-driven approach constructs a density model and estimates the density of solutions in each defined sub-location of the objective space. Furthermore, the model-driven approach smooths the model by applying a Gaussian process model and calculating the area under the mean function; this area helps determine the required number of reference points for each mean function. The experimental results demonstrate that both adaptive approaches are significantly better than several state-of-the-art MOEAs.

To achieve the third goal, the thesis proposes the first algorithm that combines GP as a global search with PLS as a local search in many-objective JSS. The proposed algorithm introduces an effective fitness-based selection strategy for selecting initial individuals for neighborhood exploration, defines a proper neighborhood structure for GP, and adds a new selection mechanism for choosing effective dispatching rules during the local search. The experimental results on the JSS benchmark problem show that the newly proposed algorithm significantly outperforms its baseline algorithm (GP-NSGA-III).
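
To make the role of a dispatching rule concrete, the toy sketch below shows how any evolved rule, here a hand-written priority function standing in for a GP tree, is applied on the shop floor: each waiting job receives a priority value and the job with the lowest value is dispatched next. The attribute names and the example rule are illustrative only and are not rules evolved in the thesis.

    def example_rule(job):
        # Stand-in for an evolved GP tree, e.g. processing time plus half the due-date slack
        return job["processing_time"] + 0.5 * (job["due_date"] - job["ready_time"])

    def dispatch_next(queue, rule):
        """Return the waiting job with the lowest priority value under the given rule."""
        return min(queue, key=rule)

    queue = [
        {"id": 1, "processing_time": 5.0, "due_date": 20.0, "ready_time": 0.0},
        {"id": 2, "processing_time": 3.0, "due_date": 12.0, "ready_time": 0.0},
    ]
    print(dispatch_next(queue, example_rule)["id"])   # -> 2

GP-HH searches over the space of such priority functions, and the many-objective machinery described above decides which evolved rules survive when several scheduling objectives conflict.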


2018, Vol. 129 (Suppl 1), pp. 118-124
Author(s): Alexis Dimitriadis, Ian Paddick

OBJECTIVE
Stereotactic radiosurgery (SRS) is characterized by high levels of conformity and steep dose gradients from the periphery of the target to surrounding tissue. Clinical studies have backed up the importance of these factors through evidence of symptomatic complications. Available data suggest that there are threshold doses above which the risk of symptomatic radionecrosis increases with the volume irradiated. Therefore, radiosurgical treatment plans should be optimized by minimizing dose to the surrounding tissue while maximizing dose to the target volume. Several metrics have been proposed to quantify radiosurgical plan quality, but all present certain weaknesses. To overcome limitations of the currently used metrics, a novel metric is proposed, the efficiency index (η50%), which is based on the principle of calculating integral doses: η50% = integral dose(TV) / integral dose(PIV50%).

METHODS
The value of η50% can be easily calculated by dividing the integral dose (mean dose × volume) to the target volume (TV) by the integral dose to the volume enclosed by 50% of the prescription isodose (PIV50%). Alternatively, differential dose-volume histograms (DVHs) of the TV and PIV50% can be used. The resulting η50% value is effectively the proportion of energy within the PIV50% that falls into the target. This value has theoretical limits of 0 and 1, with 1 being perfect. The index combines conformity, gradient, and mean dose to the target into a single value. The value of η50% was retrospectively calculated for 100 clinical SRS plans.

RESULTS
The value of η50% for the 100 clinical SRS plans ranged from 37.7% to 58.0%, with a mean value of 49.0%. This study also showed that the same principles used for the calculation of η50% can be adapted to produce an index suitable for multiple-target plans (Gη12Gy). Furthermore, the authors present another adaptation of the index that may play a role in plan optimization by calculating and minimizing the proportion of energy delivered to surrounding organs at risk (OARη50%).

CONCLUSIONS
The proposed efficiency index is a novel approach to quantifying plan quality by combining conformity, gradient, and mean dose into a single value. It quantifies the ratio of the dose "doing good" versus the dose "doing harm," and its adaptations can be used for multiple-target plan optimization and OAR sparing.
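
As a back-of-the-envelope illustration of the index (with made-up numbers, not data from the study), the snippet below computes η50% directly from the definition integral dose = mean dose × volume:

    def integral_dose(mean_dose_gy, volume_cc):
        """Integral dose in Gy * cm^3 (mean dose times volume)."""
        return mean_dose_gy * volume_cc

    # Hypothetical single-target plan; values chosen only to illustrate the arithmetic.
    tv_integral    = integral_dose(mean_dose_gy=20.0, volume_cc=2.0)   # target volume (TV)
    piv50_integral = integral_dose(mean_dose_gy=10.0, volume_cc=8.0)   # 50% prescription isodose volume (PIV50%)

    eta_50 = tv_integral / piv50_integral
    print(f"eta50% = {eta_50:.0%}")   # 50%: half of the energy within the PIV50% falls inside the target

A value of 50% sits inside the 37.7%-58.0% range reported for the 100 clinical plans; a tighter dose fall-off around the target would raise the index toward its theoretical limit of 1.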

