A constraint-posting framework for scheduling under complex constraints

Author(s):  
Cheng-Chung Cheng
S.F. Smith
Author(s):  
Fei Tao
Luning Bi
Ying Zuo
A. Y. C. Nee

Disassembly is an important step in recycling and maintenance, particularly for energy saving. However, disassembly sequence planning (DSP) is a challenging combinatorial optimization problem because many products impose complex constraints. This paper considers partial and parallel disassembly sequence planning, which exploits the degrees of freedom in modular product design while accounting for disassembly time, cost, and energy consumption. An automatically self-decomposed disassembly precedence matrix (DPM) is designed to generate partial/parallel disassembly sequences, reducing complexity and improving efficiency. A Tabu-search-based hyper-heuristic algorithm with an exponentially decreasing diversity-management strategy is proposed. Compared with the low-level heuristics, the proposed algorithm has stronger exploration ability and yields higher energy benefits (EBs). A comparison of three disassembly strategies shows that partial/parallel disassembly has a clear advantage in reducing disassembly time and in improving EBs and disassembly profit (DP).
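The abstract does not spell out the self-decomposition procedure, but the role a precedence matrix plays in producing parallel sequences can be sketched. The toy below (the matrix encoding and all names are assumptions, not the paper's method) repeatedly peels off the set of parts whose predecessors have all been removed; parts within one layer can be disassembled in parallel.

```python
# Minimal sketch (not the paper's algorithm): derive parallel disassembly
# layers from a boolean disassembly precedence matrix (DPM), where
# dpm[i][j] = 1 means part i must be removed before part j.
def parallel_layers(dpm):
    n = len(dpm)
    remaining = set(range(n))
    layers = []
    while remaining:
        # Parts whose every predecessor is already removed can be
        # disassembled in parallel in this layer.
        ready = [j for j in sorted(remaining)
                 if all(i not in remaining or dpm[i][j] == 0
                        for i in range(n))]
        if not ready:  # no removable part left: precedence cycle
            raise ValueError("DPM contains a precedence cycle")
        layers.append(ready)
        remaining -= set(ready)
    return layers

# Toy product: part 0 precedes parts 1 and 2; part 1 precedes part 3.
dpm = [
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(parallel_layers(dpm))  # [[0], [1, 2], [3]]
```

A partial disassembly would simply stop after the layers containing the target components; the paper's hyper-heuristic then searches over such feasible sequences for time, cost, and energy objectives.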


2017
Vol 60 (9)
pp. 2060-2076
Author(s):
Huiping Chu
Lin Ma
Kexin Wang
Zhijiang Shao
Zhengyu Song

2010
Vol 36 (3)
pp. 481-504
Author(s):
João V. Graça
Kuzman Ganchev
Ben Taskar

Word-level alignment of bilingual text is a critical resource for a growing variety of tasks. Probabilistic models for word alignment present a fundamental trade-off between richness of captured constraints and correlations versus efficiency and tractability of inference. In this article, we use the Posterior Regularization framework (Graça, Ganchev, and Taskar 2007) to incorporate complex constraints into probabilistic models during learning without changing the efficiency of the underlying model. We focus on the simple and tractable hidden Markov model, and present an efficient learning algorithm for incorporating approximate bijectivity and symmetry constraints. Models estimated with these constraints produce a significant boost in performance as measured by both precision and recall of manually annotated alignments for six language pairs. We also report experiments on two different tasks where word alignments are required: phrase-based machine translation and syntax transfer, and show promising improvements over standard methods.
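As a rough illustration of the Posterior Regularization idea (not the authors' HMM implementation): approximate bijectivity can be phrased as "the expected number of source words aligned to each target word is at most 1", and the constrained posterior is obtained by reweighting the model posterior with dual variables. The sketch below makes the simplifying assumption that each source position's posterior is independent, and all function and variable names are invented.

```python
import math

def project_bijective(post, iters=200, lr=0.5):
    """Toy PR projection. post[i][j] = model posterior that source word i
    aligns to target word j. Enforce sum_i q[i][j] <= 1 for every j by
    dual ascent on q(i,j) proportional to post(i,j) * exp(-lam[j])."""
    m, n = len(post), len(post[0])
    lam = [0.0] * n
    q = []
    for _ in range(iters):
        # Reweight and renormalize each source position's posterior.
        q = []
        for i in range(m):
            row = [post[i][j] * math.exp(-lam[j]) for j in range(n)]
            z = sum(row)
            q.append([v / z for v in row])
        # Dual subgradient step: raise lam[j] if target j is overloaded.
        for j in range(n):
            col = sum(q[i][j] for i in range(m))
            lam[j] = max(0.0, lam[j] + lr * (col - 1.0))
    return q

# Two source words both prefer target 0; projection spreads them out.
post = [[0.9, 0.1], [0.8, 0.2]]
q = project_bijective(post)
col0 = q[0][0] + q[1][0]  # expected load on target 0, roughly 1 after projection
```

In the article the projection is applied to the full forward-backward posteriors of the HMM during EM, so inference stays as efficient as in the unconstrained model; this toy only shows the reweighting mechanics.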


2018
Vol 49 (3)
pp. 610-623
Author(s):
Colin Wilson
Gillian Gallagher

The lexicon of a natural language does not contain all of the phonological structures that are grammatical. This presents a fundamental challenge to the learner, who must distinguish linguistically significant restrictions from accidental gaps (Fischer-Jørgensen 1952, Halle 1962, Chomsky and Halle 1965, Pierrehumbert 1994, Frisch and Zawaydeh 2001, Iverson and Salmons 2005, Gorman 2013, Hayes and White 2013). The severity of the challenge depends on the size of the lexicon (Pierrehumbert 2001), the number of sounds and their frequency distribution (Sigurd 1968, Tambovtsev and Martindale 2007), and the complexity of the generalizations that learners must entertain (Pierrehumbert 1994, Hayes and Wilson 2008, Kager and Pater 2012, Jardine and Heinz 2016). In this squib, we consider the problem that accidental gaps pose for learning phonotactic grammars stated on a single, surface level of representation. While the monostratal approach to phonology has considerable theoretical and computational appeal (Ellison 1993, Bird and Ellison 1994, Scobbie, Coleman, and Bird 1996, Burzio 2002), little previous research has investigated how purely surface-based phonotactic grammars can be learned from natural lexicons (but cf. Hayes and Wilson 2008, Hayes and White 2013). The empirical basis of our study is the sound pattern of South Bolivian Quechua, with particular focus on the allophonic distribution of high and mid vowels. We show that, in characterizing the vowel distribution, a surface-based analysis must resort to generalizations of greater complexity than are needed in traditional accounts that derive outputs from underlying forms. This exacerbates the learning problem, because complex constraints are more likely to be surface-true by chance (i.e., the structures they prohibit are more likely to be accidentally absent from the lexicon).
A comprehensive quantitative analysis of the Quechua lexicon and phonotactic system establishes that many accidental gaps of the relevant complexity level do indeed exist. We propose that, to overcome this problem, surface-based phonotactic models should have two related properties: they should use distinctive features to state constraints at multiple levels of granularity, and they should select constraints of appropriate granularity by statistical comparison of observed and expected frequency distributions. The central idea is that actual gaps typically belong to statistically robust feature-based classes, whereas accidental gaps are more likely to be featurally isolated and to contain independently rare sounds. A maximum-entropy learning model that incorporates these two properties is shown to be effective at distinguishing systematic and accidental gaps in a whole-language phonotactic analysis of Quechua, outperforming minimally different models that lack features or perform nonstatistical induction.
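A minimal sketch of the observed-versus-expected comparison (with plain segment classes standing in for the feature bundles of the paper, and a unigram independence baseline; the authors' maximum-entropy model is much richer):

```python
# Hedged sketch, not the paper's model: compare the observed count of
# bigrams from class1 followed by class2 against the count expected if
# segments combined independently. A systematic gap has observed ~ 0
# despite a sizable expected count; an accidental gap typically has a
# small expected count to begin with.
from collections import Counter
from itertools import product

def observed_expected(lexicon, class1, class2):
    bigrams = Counter()
    unigrams = Counter()
    total_bi = 0
    for word in lexicon:
        for a, b in zip(word, word[1:]):
            bigrams[(a, b)] += 1
            total_bi += 1
        unigrams.update(word)
    total_uni = sum(unigrams.values())
    observed = sum(bigrams[(a, b)] for a, b in product(class1, class2))
    # Expected under independence: N * P(class1) * P(class2)
    p1 = sum(unigrams[s] for s in class1) / total_uni
    p2 = sum(unigrams[s] for s in class2) / total_uni
    expected = total_bi * p1 * p2
    return observed, expected

# Toy lexicon in which "ti" never occurs even though t and i are attested:
obs, exp = observed_expected(["ta", "ka", "ki", "at", "ak"], {"t"}, {"i"})
```

Here the gap *ti has observed count 0 but positive expected count, so a statistical learner has some evidence it is systematic; stating the class at the right featural granularity (e.g. all coronal stops before high vowels) raises the expected count and strengthens that evidence, which is the paper's central point.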


2021
Author(s):
Zhipeng Huang
Haokai Sun
Huimin Wang
Ziran Zhu
Jun Yu
...
