Standards for the artificial intelligence community

Author(s):  
Erik Blasch ◽  
James Sung


2001 ◽
Vol 16 (3) ◽  
pp. 277-284 ◽  
Author(s):  
Eduardo Alonso ◽  
Mark d'Inverno ◽  
Daniel Kudenko ◽  
Michael Luck ◽  
Jason Noble

In recent years, multi-agent systems (MASs) have received increasing attention in the artificial intelligence community. Research in multi-agent systems involves the investigation of autonomous, rational and flexible behaviour of entities such as software programs or robots, and their interaction and coordination in such diverse areas as robotics (Kitano et al., 1997), information retrieval and management (Klusch, 1999), and simulation (Gilbert & Conte, 1995). When designing agent systems, it is impossible to foresee all the potential situations an agent may encounter and to specify optimal agent behaviour in advance. Agents therefore have to learn from, and adapt to, their environment, especially in a multi-agent setting.


Author(s):  
Y. Selyanin

The US Government has initiated a large-scale effort to develop and implement artificial intelligence (AI). Numerous departments and agencies, including the Pentagon, the intelligence community and civilian agencies, take part in these efforts. Some of them are responsible for developing technology, materials and standards; others are customers of AI. State AI efforts receive significant budget funding; indeed, Department of Defense spending on AI is comparable to all non-defense funding combined. America's world-leading IT companies support government departments and agencies in organizing the development and implementation of AI technologies. The USA's highest military and political leadership supports these efforts, and Congress provides significant requested funding. However, leading specialists criticize the state's approach to creating and implementing AI: first, they consider the authorized appropriations insufficient; second, even this funding is used ineffectively. Congress therefore created the National Security Commission on Artificial Intelligence (NSCAI) in 2018 to identify problems in the AI area and develop solutions. This article looks at the stakeholders and participants in the state AI efforts, the budget funding authorization, the major existing problems and the NSCAI's conclusions regarding the necessary AI funding in FYs 2021-2032.


2020 ◽  
Vol 34 (09) ◽  
pp. 13693-13696
Author(s):  
Emma Strubell ◽  
Ananya Ganesh ◽  
Andrew McCallum

The field of artificial intelligence has experienced a dramatic methodological shift towards large neural networks trained on plentiful data. This shift has been fueled by recent advances in hardware and techniques enabling remarkable levels of computation, resulting in impressive advances in AI across many applications. However, the massive computation required to obtain these exciting results is costly both financially, due to the price of specialized hardware and electricity or cloud compute time, and to the environment, as a result of non-renewable energy used to fuel modern tensor processing hardware. In a paper published this year at ACL, we brought this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training and tuning neural network models for NLP (Strubell, Ganesh, and McCallum 2019). In this extended abstract, we briefly summarize our findings in NLP, incorporating updated estimates and broader information from recent related publications, and provide actionable recommendations to reduce costs and improve equity in the machine learning and artificial intelligence community.
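The abstract above describes an accounting approach: power drawn by the training hardware, scaled by datacenter overhead and training time, converted to an emissions estimate via a grid carbon intensity. A minimal sketch of that style of calculation is below; the constants (PUE of 1.58, 0.954 lbs CO2e per kWh, and the hypothetical 8-GPU run) are illustrative assumptions for this sketch, not figures taken from the paper.

```python
# Sketch of energy/emissions accounting for a training run: measured average
# power draw, scaled by datacenter overhead (PUE) and wall-clock time, then
# converted to emissions with an assumed grid carbon intensity.

PUE = 1.58           # assumed power usage effectiveness (datacenter overhead)
CO2_PER_KWH = 0.954  # assumed grid intensity, lbs CO2e per kWh

def training_footprint(avg_power_watts: float, hours: float) -> dict:
    """Estimate energy (kWh) and emissions (lbs CO2e) for one training run."""
    energy_kwh = PUE * avg_power_watts * hours / 1000.0
    return {
        "energy_kwh": energy_kwh,
        "co2e_lbs": CO2_PER_KWH * energy_kwh,
    }

# Hypothetical run: 8 GPUs drawing ~250 W each for 120 hours.
est = training_footprint(avg_power_watts=8 * 250, hours=120)
```

Note that such an estimate covers a single run; as the abstract emphasizes, hyperparameter tuning multiplies the number of runs and hence the total cost.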


2021 ◽  
Vol 23 (1) ◽  
pp. 1-3
Author(s):  
Toon Calders ◽  
Eirini Ntoutsi ◽  
Mykola Pechenizkiy ◽  
Bodo Rosenhahn ◽  
Salvatore Ruggieri

Fairness in Artificial Intelligence rightfully receives a lot of attention these days. Many life-impacting decisions are being partially automated, including health-care resource planning decisions, insurance and credit risk predictions, recidivism predictions, etc. Much of the work appearing on this topic within the Data Mining, Machine Learning and Artificial Intelligence communities focuses on technological aspects. Nevertheless, fairness is much wider than this, as it lies at the intersection of philosophy, ethics, legislation, and practical perspectives. Therefore, to fill this gap and bring together scholars of these disciplines working on fairness, the first Workshop on Bias and Fairness in AI was held online on September 18, 2020 at the ECML-PKDD 2020 conference. This special section includes six articles examining bias and fairness from different angles.


AI Magazine ◽  
2012 ◽  
Vol 33 (1) ◽  
pp. 96-98 ◽  
Author(s):  
Deepak Khemani

India is a multilingual and multicultural country that came together less than a century ago. The populace spans wide extremes of wealth and education. The artificial intelligence community, which gained strength in the eighties, has had a major focus on research directed towards the societal goals of bridging the linguistic and educational divide and delivering the fruits of information technology to all people. In this article we look at a brief history, followed by two examples of research aimed at crossing language barriers.


2009 ◽  
Vol 3 (1) ◽  
pp. 1-15 ◽  
Author(s):  
Colin Hales

Two related and relatively obscure issues in science have eluded empirical tractability. Both can be directly traced to progress in artificial intelligence. The first is scientific proof of consciousness, or otherwise, in anything. The second is the role of consciousness in intelligent behaviour. This document approaches both issues by exploring the idea of using scientific behaviour self-referentially as a benchmark in an objective test for P-consciousness, the relevant critical aspect of consciousness. Scientific behaviour is unique in being both highly formalised and provably critically dependent on the P-consciousness of the primary senses. In the context of the primary senses, P-consciousness is literally a formal identity with scientific observation. As such it is intrinsically afforded a status of critical dependency demonstrably no different from any other critical dependency in science, making scientific behaviour ideally suited to a self-referential scientific circumstance. The ‘provability’ derives from the delivery by science of objectively verifiable ‘laws of nature’. By exploiting this critical dependency, an empirical framework is constructed as a refined and specialised version of existing proposals for a ‘test for consciousness’. The specific role of P-consciousness is clarified: it is a human intracranial central nervous system construct that symbolically grounds the scientist in the distal external world, resulting in our ability to recognise, characterise and adapt to distal natural-world novelty. It is hoped that, in opening a discussion of a novel approach, the artificial intelligence community may eventually find a viable contender for its long-overdue scientific basis.


AI Magazine ◽  
2012 ◽  
Vol 33 (2) ◽  
pp. 43
Author(s):  
Joshua Eckroth ◽  
Liang Dong ◽  
Reid G. Smith ◽  
Bruce G. Buchanan

NewsFinder automates the steps involved in finding, selecting, categorizing, and publishing news stories that meet relevance criteria for the Artificial Intelligence community. The software combines a broad search of online news sources with topic-specific trained models and heuristics. Since August 2010, the program has been used to operate the AI in the News service that is part of the AAAI AITopics website.


2000 ◽  
Vol 15 (1) ◽  
pp. 31-45 ◽  
Author(s):  
Heidi E. Dixon ◽  
Matthew L. Ginsberg

The recent effort to integrate techniques from the fields of artificial intelligence and operations research has been motivated in part by the fact that scientists in each group are often unacquainted with recent (and not so recent) progress in the other field. Our goal in this paper is to introduce the artificial intelligence community to pseudo-Boolean representation and cutting plane proofs, and to introduce the operations research community to restricted learning methods such as relevance-bounded learning. Complete methods for solving satisfiability problems are necessarily bounded from below by the length of the shortest proof of unsatisfiability; the fact that cutting plane proofs of unsatisfiability can be exponentially shorter than the shortest resolution proof can thus in theory lead to substantial improvements in the performance of complete satisfiability engines. Relevance-bounded learning is a method for bounding the size of a learned constraint set. It is currently the best artificial intelligence strategy for deciding which learned constraints to retain and which to discard. We believe these two elements, or some analogous form of them, are necessary ingredients for improving the performance of satisfiability algorithms generally. We also present a new cutting plane proof of the pigeonhole principle that is of size n², and show how to implement some intelligent backtracking techniques using pseudo-Boolean representation.
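To make the pigeonhole example above concrete: in pseudo-Boolean representation, constraints are linear inequalities over 0/1 variables, and the cutting-plane addition rule sums inequalities term by term. The sketch below (an illustration of the general technique, not code from the paper) encodes PHP(n+1, n) and shows that simply adding all the constraints cancels every variable, leaving the contradiction 0 ≥ 1; resolution proofs of the same fact are exponentially long.

```python
# Pseudo-Boolean encoding of the pigeonhole principle PHP(n+1, n):
# n+1 pigeons, n holes, variable x[i][j] = 1 iff pigeon i sits in hole j.
# A constraint is a pair ({(i, j): coeff, ...}, bound) meaning
# sum(coeff * x[i][j]) >= bound over 0/1 variables.

def pigeonhole_constraints(n):
    """Return (pigeon, hole) constraint lists for PHP(n+1, n)."""
    # Each pigeon occupies at least one hole: sum_j x[i][j] >= 1
    pigeons = [({(i, j): 1 for j in range(n)}, 1) for i in range(n + 1)]
    # Each hole holds at most one pigeon, written as: sum_i -x[i][j] >= -1
    holes = [({(i, j): -1 for i in range(n + 1)}, -1) for j in range(n)]
    return pigeons, holes

def add_constraints(constraints):
    """Cutting-plane addition rule: sum inequalities term by term."""
    total, bound = {}, 0
    for coeffs, b in constraints:
        for var, c in coeffs.items():
            total[var] = total.get(var, 0) + c
        bound += b
    return total, bound

pigeons, holes = pigeonhole_constraints(4)
coeffs, bound = add_constraints(pigeons + holes)
# Every coefficient cancels (+1 from a pigeon constraint, -1 from a hole
# constraint), leaving 0 >= (n+1) - n = 1: an immediate contradiction.
```

The number of terms summed grows as n², which matches the flavour of the size-n² cutting plane proof the abstract mentions, though the paper's actual proof should be consulted for its precise construction.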

