Logic-based reasoning about actions and plans in artificial intelligence

1993 ◽  
Vol 8 (2) ◽  
pp. 91-120 ◽  
Author(s):  
Huaming Lee ◽  
James Tannock ◽  
Jon Sims Williams

Reasoning about actions and plans is a vital aspect of the rational behaviour of intelligent agents, and hence represents a major research domain in artificial intelligence. Much work has been undertaken to develop logic-based formalisms and problem-solving procedures for plan representation and plan synthesis. This paper surveys various paradigms for reasoning about actions and plans in artificial intelligence. Attention is focused on the logic-based theoretical frameworks that have built a formal foundation for domain-independent approaches to the general principles of reasoning about actions and plans.

First Monday ◽  
2019 ◽  
Author(s):  
Katrin Etzrodt ◽  
Sven Engesser

Research on the social implications of technological developments is highly relevant. However, a broader comprehension of current innovations and their underlying theoretical frameworks is limited by their rapid evolution, as well as by a plethora of different terms and definitions. The terminology used to describe current innovations varies significantly among disciplines, such as the social sciences and computer sciences. This article contributes to systematic and cross-disciplinary research on current technological applications in everyday life by identifying the most relevant concepts (i.e., Ubiquitous Computing, Internet of Things, Smart Objects and Environments, Ambient Environments and Artificial Intelligence) and relating them to each other. Key questions, core aspects, similarities and differences are identified. Theoretically disentangling the terminology yields four distinct analytical dimensions (connectivity, invisibility, awareness, and agency) that facilitate the analysis of social implications. This article provides a basis for a deeper understanding, precise operationalisations, and an increased anticipation of impending developments.


AI Magazine ◽  
2015 ◽  
Vol 36 (4) ◽  
pp. 3-4 ◽  
Author(s):  
Stuart Russell ◽  
Tom Dietterich ◽  
Eric Horvitz ◽  
Bart Selman ◽  
Francesca Rossi ◽  
...  

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents — systems that perceive and act in some environment. In this context, "intelligence" is related to statistical and economic notions of rationality — colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems. As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. 
Such considerations motivated the AAAI 2008–09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document [see page X] gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself. In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.


2019 ◽  
Vol 3 (2) ◽  
pp. 34
Author(s):  
Hiroshi Yamakawa

In a human society with emergent technology, the destructive actions of some pose a danger to the survival of all of humankind, increasing the need to maintain peace by overcoming universal conflicts. However, human society has not yet achieved complete global peacekeeping. Fortunately, a new possibility for peacekeeping among human societies, using the appropriate interventions of an advanced system, will be available in the near future. To achieve this goal, an artificial intelligence (AI) system must operate continuously and stably (condition 1) and have an intervention method for maintaining peace among human societies based on a common value (condition 2). However, as a premise, it is necessary to have a minimum common value upon which all of human society can agree (condition 3). In this study, an AI system to achieve condition 1 was investigated. This system was designed as a group of distributed intelligent agents (IAs) to ensure robust and rapid operation. Even if common goals are shared among all IAs, each autonomous IA acts on its own local values to adapt quickly to the environment it faces. Thus, conflicts between IAs are inevitable, and this situation sometimes interferes with the achievement of commonly shared goals. Even so, they can maintain peace within their own societies if all the dispersed IAs believe that all other IAs aim for socially acceptable goals. However, communication channel problems, comprehension problems, and computational complexity problems are barriers to realization. These problems can be overcome by introducing an appropriate goal-management system in the case of computer-based IAs. Then, an IA society could achieve its goals peacefully, efficiently, and consistently. Therefore, condition 1 will be achievable. In contrast, humans are restricted by their biological nature and tend to interact with others similar to themselves, so the eradication of conflicts is more difficult.


2002 ◽  
Vol 1 (1) ◽  
pp. 125-143 ◽  
Author(s):  
Rolf Pfeifer

Artificial intelligence is by its very nature synthetic; its motto is "Understanding by building". In the early days of artificial intelligence the focus was on abstract thinking and problem solving. These phenomena could be naturally mapped onto algorithms, which is why AI was originally considered to be part of computer science and the tool was computer programming. Over time, it turned out that this view was too limited to understand natural forms of intelligence and that embodiment must be taken into account. As a consequence, the focus changed to systems that are able to autonomously interact with their environment, and the main tool became the robot. The "developmental robotics" approach incorporates the major implications of embodiment with regard to what has been, and can potentially be, learned about human cognition by employing robots as cognitive tools. The use of "robots as cognitive tools" is illustrated in a number of case studies by discussing the major implications of embodiment, which are of a dynamical and information-theoretic nature.


1989 ◽  
Vol 5 (1) ◽  
pp. 1-15 ◽  
Author(s):  
Alison King

Verbal interaction and problem-solving behavior of small cooperative peer groups were observed while these groups worked on a computer-assisted (non-programming) problem-solving task. The purpose of the study was to identify problem-solving behaviors which relate to success within this context. Thirty-six fourth-grade students were assigned to groups of three to form six groups of high and six of average academic ability. All groups used a non-programming version of Logo turtle graphics to reproduce a given line design on the computer screen. Results indicate that there was no relationship between success and ability, and that successful groups asked more task-related questions, spent more time on strategy, and reached higher levels of strategy elaboration than did unsuccessful groups. High-ability groups made a greater number of long task statements than did average groups. Findings are discussed within the theoretical frameworks of social cognition and modeling. Instructional implications, including those for the development of computer-assisted learning materials for peer-group problem solving, are also discussed.


2021 ◽  
Vol 21 (2) ◽  
pp. 97-117
Author(s):  
Dominique Garingan ◽  
Alison Jane Pickard

In response to evolving legal technologies, this article by Dominique Garingan and Alison Jane Pickard explores the concept of algorithmic literacy, a technological literacy which facilitates metacognitive practices surrounding the use of artificially intelligent systems and the principles that shape ethical and responsible user experiences. This article examines the extent to which existing information, digital, and computer literacy frameworks and professional competency standards ground algorithmic literacy. It proceeds to identify various elements of algorithmic literacy within existing literature, provide examples of algorithmic literacy initiatives in academic and non-academic settings, and explore the need for an algorithmic literacy framework to ground algorithmic literacy initiatives within the legal information profession.


2021 ◽  
pp. 026638212110619
Author(s):  
Sharon Richardson

During the past two decades, there have been a number of breakthroughs in the fields of data science and artificial intelligence, made possible by advanced machine learning algorithms trained through access to massive volumes of data. However, their adoption and use in real-world applications remains a challenge. This paper posits that a key limitation in making AI applicable has been a failure to modernise the theoretical frameworks needed to evaluate and adopt outcomes. Such a need was anticipated with the arrival of the digital computer in the 1950s but has remained unrealised. This paper reviews how the field of data science emerged and led to rapid breakthroughs in algorithms underpinning research into artificial intelligence. It then discusses the contextual framework now needed to advance the use of AI in real-world decisions that impact human lives and livelihoods.


2020 ◽  
Vol 12 (14) ◽  
pp. 5568 ◽  
Author(s):  
Thomas K.F. Chiu ◽  
Ching-sing Chai

The teaching of artificial intelligence (AI) topics in school curricula is an important global strategic initiative in educating the next generation. As AI technologies are new to K-12 schools, there is a lack of studies that inform school teachers about AI curriculum design. How to prepare and engage teachers, and which approaches are suitable for planning the curriculum for sustainable development, remain unclear. Therefore, this case study aimed to explore the views of teachers with and without AI teaching experience on key considerations for the preparation, implementation and continuous refinement of a formal AI curriculum for K-12 schools. It drew on self-determination theory (SDT) and four basic curriculum planning approaches—content, product, process and praxis—as theoretical frameworks to explain the research problems and findings. We conducted semi-structured interviews with 24 teachers—twelve with and twelve without experience in teaching AI—and used thematic analysis to analyze the interview data. Our findings revealed that genuine curriculum creation should encompass all four forms of curriculum design approach, coordinated by teachers' self-determination to be orchestrators of student learning experiences. This study also proposed a curriculum development cycle for teachers and curriculum officers.

