Some recent work in artificial intelligence

1966 ◽  
Vol 54 (12) ◽  
pp. 1687-1697 ◽  
Author(s):  
R.J. Solomonoff


2021 ◽  
Vol 1 (1) ◽  
pp. 76-87
Author(s):  
Alexander Buhmann ◽  
Christian Fieseler

Organizations increasingly delegate agency to artificial intelligence. However, such systems can yield unintended negative effects, as they may produce biases against users or reinforce social injustices. What marks them as a unique grand challenge, however, is not their potentially problematic outcomes but their fluid design: machine learning algorithms evolve continuously, and as a result their functioning frequently remains opaque to humans. In this article, we apply recent work on tackling grand challenges through robust action to assess the potential of, and obstacles to, managing the challenge of algorithmic opacity. We stress that although this approach is fruitful, it can be gainfully complemented by a discussion of the accountability and legitimacy of solutions. In our discussion, we extend the robust action approach by linking it to a set of principles for evaluating organizational approaches to tackling grand challenges with respect to their ability to foster accountable outcomes under the intricate conditions of algorithmic opacity.


1997 ◽  
Vol 352 (1358) ◽  
pp. 1257-1265 ◽  
Author(s):  
Aaron F. Bobick

This paper presents several approaches to the machine perception of motion and discusses the role and levels of knowledge in each. In particular, techniques of motion understanding are described as focusing on one of three levels: movement, activity, or action. Movements are the most atomic primitives, requiring no contextual or sequence knowledge to be recognized; movement is often addressed using either view-invariant or view-specific geometric techniques. Activity refers to sequences of movements or states, where the only real knowledge required is the statistics of the sequence; much of the recent work in gesture understanding falls within this category of motion perception. Finally, actions are larger-scale events, which typically include interaction with the environment and causal relationships; action understanding straddles the grey division between perception and cognition, between computer vision and artificial intelligence. These levels are illustrated with examples drawn mostly from the group's work in understanding motion in video imagery. It is argued that the utility of such a division is that it makes explicit the representational competencies and manipulations necessary for perception.
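
To make concrete the claim that an activity can be recognized from sequence statistics alone, the sketch below classifies a sequence of atomic movements by its likelihood under per-activity transition statistics. The movement vocabulary, activity labels, and probabilities are invented for illustration; systems of the kind the paper surveys typically used richer models, such as hidden Markov models over observed features.

```python
# A toy sketch of activity recognition from sequence statistics alone: each
# candidate activity is modeled by first-order transition probabilities over
# a small vocabulary of atomic movements, and a new movement sequence is
# labeled by the model under which it is most likely. All names and numbers
# here are invented for illustration.
import math

# Hypothetical transition tables P(next movement | current movement).
MODELS = {
    "pick_up": {
        "reach":   {"grasp": 0.9, "reach": 0.1},
        "grasp":   {"lift": 0.9, "grasp": 0.1},
        "lift":    {"lift": 0.5, "lower": 0.5},
        "lower":   {"release": 0.8, "lower": 0.2},
        "release": {"reach": 1.0},
    },
    "put_down": {
        "reach":   {"lower": 0.7, "reach": 0.3},
        "grasp":   {"lower": 1.0},
        "lift":    {"lower": 1.0},
        "lower":   {"release": 0.8, "lower": 0.2},
        "release": {"reach": 1.0},
    },
}

def log_likelihood(seq, table, floor=1e-6):
    """Sum log P(next | current) over consecutive movement pairs."""
    return sum(math.log(table.get(a, {}).get(b, floor))
               for a, b in zip(seq, seq[1:]))

def classify(seq):
    """Return the activity whose transition model best explains seq."""
    return max(MODELS, key=lambda name: log_likelihood(seq, MODELS[name]))

print(classify(["reach", "grasp", "lift", "lower", "release"]))  # pick_up
```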


Author(s):  
Lirong Xia

We summarize some of our recent work on using AI to improve group decision-making by taking a unified approach from statistics, economics, and computation. We then discuss a few ongoing and future directions.
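
As one concrete example of the kind of group decision-making mechanism studied at this intersection of statistics, economics, and computation, the sketch below implements the classic Borda positional voting rule over ranked ballots. The rule and the ballots are a generic illustration, not code or data from the summarized work.

```python
# Minimal sketch: aggregate ranked preferences with the Borda rule. With m
# alternatives, a voter's top choice earns m-1 points, the next m-2, and so
# on; the alternative with the highest total wins. Ballots are made up.
from collections import defaultdict

def borda_winner(rankings):
    """rankings: list of full rankings (best first); returns winner, scores."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, alternative in enumerate(ranking):
            scores[alternative] += m - 1 - position
    return max(scores, key=scores.get), dict(scores)

ballots = [
    ["a", "b", "c"],
    ["a", "c", "b"],
    ["b", "a", "c"],
    ["c", "b", "a"],
]
winner, scores = borda_winner(ballots)
print(winner, scores)  # a {'a': 5, 'b': 4, 'c': 3}
```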


Author(s):  
Alexey Ignatiev

Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges currently faced by the field of AI. Although the majority of approaches to XAI are heuristic in nature, recent work proposed the use of abductive reasoning for computing provably correct explanations of machine learning (ML) predictions. This rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and the verification of ML models. This paper overviews the advances of the rigorous logic-based approach to XAI and argues that it is indispensable if trustable XAI is of concern.
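
The core object in this line of work, the abductive explanation, can be illustrated with a toy sketch: a subset-minimal set of feature assignments that entails the prediction no matter how the remaining features are set. The toy model, instance, and brute-force entailment check below are illustrative assumptions; the actual approach encodes the ML model for a formal reasoner (e.g., a SAT, SMT, or MILP oracle) rather than enumerating completions.

```python
# Minimal sketch of an abductive explanation (AXp) for a toy model: a
# subset-minimal set of feature assignments that, on its own, forces the
# model's prediction for every completion of the remaining features.
from itertools import product

def model(x):
    # Toy classifier over three boolean features (invented for illustration).
    return int(x[0] and (x[1] or x[2]))

def entails(fixed, target, n=3):
    """Do ALL completions of the unfixed features yield the target class?"""
    free = [i for i in range(n) if i not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        x = dict(fixed)
        x.update(zip(free, bits))
        if model([x[i] for i in range(n)]) != target:
            return False
    return True

def abductive_explanation(instance):
    """Shrink the full instance to a subset-minimal entailing subset."""
    target = model(instance)
    fixed = {i: v for i, v in enumerate(instance)}
    for i in list(fixed):                 # try dropping each feature once
        trial = {j: v for j, v in fixed.items() if j != i}
        if entails(trial, target):
            fixed = trial                 # feature i was not needed
    return fixed, target

print(abductive_explanation([1, 1, 0]))   # ({0: 1, 1: 1}, 1)
```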


2020 ◽  
pp. 019145372093190
Author(s):  
Martin Beck Matuštík

Can we keep relying on sources of values dating back to the Axial Age, or do cognitive changes in the present age require a completely new foundation? An uncertainty arises with the crisis of values that can support the human in the age of artificial intelligence. Should we seek contemporary access points to the archaic origins of the species? Or must we also imagine new Anthropocenic-Axial values to reground the human event? In his most recent work, Habermas affirms the continuing importance of the contemporary access to the First Axial values, but before him Jaspers anticipates that a second cognitive revolution opens areas that may be receptive to new value foundations. Habermas’ justification of the postsecular turn may not be thinkable without Jaspers’ discovery of the postaxial imaginary.


2007 ◽  
Vol 22 (1) ◽  
pp. 87-109 ◽  
Author(s):  
CHRIS REED ◽  
DOUGLAS WALTON ◽  
FABRIZIO MACAGNO

In this paper, we present a survey of the development of the technique of argument diagramming, covering not only the fields in which it originated — informal logic, argumentation theory, evidence law and legal reasoning — but also more recent work in applying and developing it in computer science and artificial intelligence (AI). Beginning with a simple example of an everyday argument, we present an analysis of it visualized as an argument diagram constructed using a software tool. In the context of a brief history of the development of diagramming, it is then shown how argument diagrams have been used to analyse and work with argumentation in law, philosophy and AI.
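
In computational terms, an argument diagram of the kind surveyed here is a typed graph of statement nodes linked by support and attack relations. The sketch below, with a hypothetical everyday example in the spirit of the paper's opening illustration, is a minimal rendering of that structure; actual diagramming tools use richer argumentation schemes and serialization formats.

```python
# Minimal sketch of an argument diagram: statement nodes plus directed
# support/attack edges, with a helper that lists the premises supporting a
# conclusion. The example statements are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Statement:
    text: str

@dataclass
class Diagram:
    edges: list = field(default_factory=list)  # (premise, relation, conclusion)

    def support(self, premise, conclusion):
        self.edges.append((premise, "supports", conclusion))

    def attack(self, premise, conclusion):
        self.edges.append((premise, "attacks", conclusion))

    def premises_for(self, conclusion):
        return [p for p, rel, c in self.edges
                if rel == "supports" and c is conclusion]

c = Statement("We should take an umbrella.")
p1 = Statement("The forecast says rain.")
p2 = Statement("The sky is dark.")
r = Statement("Forecasts are often wrong.")

d = Diagram()
d.support(p1, c)
d.support(p2, c)
d.attack(r, p1)   # an undercutting consideration against the first premise

for p in d.premises_for(c):
    print(f"'{p.text}' -> '{c.text}'")
```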


Author(s):  
Joel Walmsley

The two main philosophical questions with which artificial intelligence (AI) has been traditionally concerned are (1) 'Could a machine think?' and (2) 'Are we (humans) thinking machines?' (see Walmsley 2012). Recent work in AI has continued to seek answers to these questions, either by building technological tools to perform activities and accomplish tasks that human minds can do, or by helping us to understand the processes and mechanisms involved in human cognition. In addition, recent work attempts to go beyond the human case: to raise questions about the cognitive capacities of actual and potential systems that are more powerful than the human mind, and to understand the consequences, and the risks, of the underlying technological developments.

Recent extensions of previous work include the development of 'Deep Learning' algorithms that enable artificial systems to learn from complex data with much less intervention and supervision by the human programmer. This is partly inspired by success in neuroscientific research on the structure and function of the human brain (and by developments in connectionist and neural network AI), and it has a wide range of technological applications and psychological implications. Such work goes hand-in-hand with work on brain emulation and the fine-grained replication of neurological structure in non-biological substrates.

In addition to building on earlier successes, recent work has begun to address related questions about the consequences and risks of developing AI technologies. First, there is a significant (and increasingly nuanced) debate about the possibility of, and the timeframe for, the development of 'superintelligent' AI (AI systems that exceed the cognitive capacities of humans): a hypothetical point in the future that has come to be known as 'the singularity'. Second, there is a growing appreciation of the risks of both current and hypothetical future AI.

As a consequence, recent work in AI continues to address, and shed light on, many familiar philosophical questions. In addition to the general questions of whether machines could think, or whether the human mind could be understood in mechanical terms, there are several specific questions that touch on other areas of philosophy, such as: would the development of AI show that physicalism, or functionalism, about the mind is correct? How should we understand the identity of a person over time? How should we understand the relationship between phenomenal consciousness and the brain? What is the relationship between values, motivation and behaviour?


2016 ◽  
Vol 8 (1) ◽  
pp. 1
Author(s):  
Douglas Walton

This paper is an introduction to recent work on practical (means-end, goal-directed) reasoning in artificial intelligence. Using an example of community deliberation concerning whether to change to a no-fault system of insurance, it explains how practical reasoning is used in public deliberation. It is shown how argument mapping and argumentation schemes are useful tools for modeling the structure of the argumentation in such cases. The distinction between instrumental practical reasoning and value-based practical reasoning is modeled using argumentation schemes.
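
The distinction between the two schemes can be glossed in a small data structure: instrumental practical reasoning concludes in an action from a goal and a means, while the value-based variant additionally cites the value the goal promotes. The field names, the critical questions, and the no-fault insurance gloss below are illustrative paraphrases, not the paper's formal models.

```python
# Minimal sketch of instrumental vs value-based practical reasoning as data:
# "goal G; doing A brings about G; therefore do A", optionally extended with
# the value that G promotes. Details are illustrative paraphrases.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PracticalArgument:
    goal: str                      # state of affairs the agent aims at
    action: str                    # means claimed to bring the goal about
    value: Optional[str] = None    # value promoted (value-based scheme only)

    def conclusion(self):
        base = f"We should {self.action}, since it brings about '{self.goal}'"
        if self.value:
            return base + f", which promotes the value '{self.value}'."
        return base + "."

# Critical questions that probe the scheme, a standard device in this
# literature (paraphrased, not quoted from the paper).
CRITICAL_QUESTIONS = [
    "Are there alternative actions that also achieve the goal?",
    "Is the action practically possible?",
    "Does the action have side effects that demote other values?",
]

arg = PracticalArgument(
    goal="reduce litigation costs for accident claims",
    action="switch to a no-fault insurance system",
    value="efficiency",
)
print(arg.conclusion())
for q in CRITICAL_QUESTIONS:
    print("CQ:", q)
```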

