Combining satisfiability techniques from AI and OR

2000 · Vol. 15(1) · pp. 31-45
Author(s): Heidi E. Dixon, Matthew L. Ginsberg

The recent effort to integrate techniques from the fields of artificial intelligence and operations research has been motivated in part by the fact that scientists in each group are often unacquainted with recent (and not so recent) progress in the other field. Our goal in this paper is to introduce the artificial intelligence community to pseudo-Boolean representation and cutting plane proofs, and to introduce the operations research community to restricted learning methods such as relevance-bounded learning. Complete methods for solving satisfiability problems are necessarily bounded from below by the length of the shortest proof of unsatisfiability; the fact that cutting plane proofs of unsatisfiability can be exponentially shorter than the shortest resolution proof can thus in theory lead to substantial improvements in the performance of complete satisfiability engines. Relevance-bounded learning is a method for bounding the size of a learned constraint set; it is currently the best artificial intelligence strategy for deciding which learned constraints to retain and which to discard. We believe that these two elements, or some analogous form of them, are necessary ingredients for improving the performance of satisfiability algorithms generally. We also present a new cutting plane proof of the pigeonhole principle that is of size n², and show how to implement some intelligent backtracking techniques using pseudo-Boolean representation.
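To make the pseudo-Boolean representation concrete, the sketch below builds the standard pseudo-Boolean encoding of the pigeonhole problem (n + 1 pigeons, n holes), the family for which the paper gives an O(n²) cutting plane proof. This is a minimal illustration, not the authors' implementation; the constraint representation and the helper name `pigeonhole_constraints` are assumptions made for this example.

```python
# Minimal sketch: pseudo-Boolean encoding of the pigeonhole problem
# (n + 1 pigeons, n holes). Each constraint is a pair
# (list of (coefficient, variable) terms, lower bound), read as
# sum(coeff * var) >= bound over 0/1 variables.

def pigeonhole_constraints(n):
    """Return pseudo-Boolean constraints asserting that n + 1 pigeons
    each sit in one of n holes, with at most one pigeon per hole."""
    constraints = []
    # Every pigeon i is placed somewhere: x_i0 + ... + x_i(n-1) >= 1
    for i in range(n + 1):
        terms = [(1, f"x_{i}_{j}") for j in range(n)]
        constraints.append((terms, 1))
    # Every hole j holds at most one pigeon, written in >= normal form:
    # -x_0j - ... - x_nj >= -1  (equivalently x_0j + ... + x_nj <= 1)
    for j in range(n):
        terms = [(-1, f"x_{i}_{j}") for i in range(n + 1)]
        constraints.append((terms, -1))
    return constraints

if __name__ == "__main__":
    # Summing the pigeon constraints forces at least n + 1 placements,
    # while summing the hole constraints allows at most n; a cutting
    # plane derivation exposes this contradiction in polynomially many
    # steps, whereas resolution proofs of this family are exponential.
    for terms, bound in pigeonhole_constraints(3):
        lhs = " + ".join(f"{c}*{v}" for c, v in terms)
        print(f"{lhs} >= {bound}")
```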

2001 · Vol. 16(3) · pp. 277-284
Author(s): Eduardo Alonso, Mark d'Inverno, Daniel Kudenko, Michael Luck, Jason Noble

In recent years, multi-agent systems (MASs) have received increasing attention in the artificial intelligence community. Research in multi-agent systems involves the investigation of autonomous, rational and flexible behaviour of entities such as software programs or robots, and of their interaction and coordination in such diverse areas as robotics (Kitano et al., 1997), information retrieval and management (Klusch, 1999), and simulation (Gilbert & Conte, 1995). When designing agent systems, it is impossible to foresee all the potential situations an agent may encounter, and hence to specify optimal agent behaviour in advance. Agents therefore have to learn from, and adapt to, their environment, especially in a multi-agent setting.


Author(s):  
Y. Selyanin

The US Government has initiated a large-scale effort to develop and deploy artificial intelligence (AI). Numerous departments and agencies, including the Pentagon, the intelligence community and civilian agencies, take part in this effort. Some of them are responsible for developing technologies, materials and standards; others are customers of AI. Federal AI efforts receive significant budget funding; indeed, the Department of Defense's spending on AI is comparable to all non-defense AI funding combined. Leading American IT companies support government departments and agencies in organizing the development and deployment of AI technologies. The USA's highest military and political leadership supports these efforts, and Congress provides significant requested funding. Nevertheless, leading specialists criticize the government's approach to creating and deploying AI: first, they consider the authorized appropriations insufficient; second, even this funding is used ineffectively. Congress therefore created the National Security Commission on Artificial Intelligence (NSCAI) in 2018 to identify problems in the AI area and develop solutions. This article looks at the stakeholders and participants in the federal AI efforts, the authorization of budget funding, the major existing problems, and the NSCAI's conclusions regarding the necessary AI funding in FYs 2021-2032.


2020 · Vol. 34(09) · pp. 13693-13696
Author(s): Emma Strubell, Ananya Ganesh, Andrew McCallum

The field of artificial intelligence has experienced a dramatic methodological shift towards large neural networks trained on plentiful data. This shift has been fueled by recent advances in hardware and techniques enabling remarkable levels of computation, resulting in impressive advances in AI across many applications. However, the massive computation required to obtain these exciting results is costly both financially, due to the price of specialized hardware and electricity or cloud compute time, and environmentally, as a result of the non-renewable energy used to power modern tensor processing hardware. In a paper published this year at ACL, we brought this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training and tuning neural network models for NLP (Strubell, Ganesh, and McCallum 2019). In this extended abstract, we briefly summarize our findings in NLP, incorporating updated estimates and broader information from recent related publications, and provide actionable recommendations to reduce costs and improve equity in the machine learning and artificial intelligence community.
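As a rough illustration of the kind of estimate involved, here is a minimal sketch in the spirit of the published methodology, not the authors' code. The PUE of 1.58 and the US-average emission factor of 0.954 lbs CO2e per kWh are the figures reported in Strubell, Ganesh, and McCallum (2019); the function name and the sample inputs are assumptions made for this example.

```python
# Minimal sketch of a training-footprint estimate: total energy is the
# average hardware power draw scaled by data-center overhead (PUE),
# and emissions follow from an average grid carbon intensity.

PUE = 1.58                 # avg. data-center power usage effectiveness
LBS_CO2E_PER_KWH = 0.954   # US-average emission factor from the paper

def training_footprint(avg_power_watts: float, hours: float):
    """Return (kWh consumed, lbs CO2e emitted) for one training run."""
    kwh = PUE * avg_power_watts * hours / 1000.0
    return kwh, kwh * LBS_CO2E_PER_KWH

if __name__ == "__main__":
    # Hypothetical example: 8 GPUs drawing ~250 W each for 72 hours.
    kwh, co2 = training_footprint(avg_power_watts=8 * 250, hours=72)
    print(f"~{kwh:.0f} kWh, ~{co2:.0f} lbs CO2e")
```

Note that such estimates are approximate: real runs include CPU and DRAM draw, hyperparameter search multiplies the per-run cost, and the emission factor varies widely by region and energy source.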

