Robots as cognitive tools

2002 ◽  
Vol 1 (1) ◽  
pp. 125-143 ◽  
Author(s):  
Rolf Pfeifer

Artificial intelligence is by its very nature synthetic; its motto is “Understanding by building”. In the early days of artificial intelligence, the focus was on abstract thinking and problem solving. These phenomena could be naturally mapped onto algorithms, which is why AI was originally considered part of computer science and its tool was computer programming. Over time, it turned out that this view was too limited to understand natural forms of intelligence and that embodiment must be taken into account. As a consequence, the focus shifted to systems able to autonomously interact with their environment, and the main tool became the robot. The “developmental robotics” approach incorporates the major implications of embodiment with regard to what has been, and can potentially be, learned about human cognition by employing robots as cognitive tools. The use of “robots as cognitive tools” is illustrated in a number of case studies that discuss the major implications of embodiment, which are of a dynamical and information-theoretic nature.

Philosophies ◽  
2019 ◽  
Vol 4 (2) ◽  
pp. 24
Author(s):  
Steven Umbrello ◽  
Stefan Lorenz Sorgner

Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, become a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically figure in discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in her analysis of the human cognition-consciousness connection is discussed in relation to how nonconscious cognition can be envisioned and exacerbated in embodied AI. Similarly, this paper offers a way of understanding the concept of suffering that differs from the conventional sense of attributing it to either a purely physical state or a conscious state, instead grounding at least one type of suffering in this form of cognition.


Author(s):  
Joel Weijia Lai ◽  
Candice Ke En Ang ◽  
U. Rajendra Acharya ◽  
Kang Hao Cheong

Artificial intelligence in healthcare employs machine learning algorithms to emulate human cognition in the analysis of complicated or large sets of data. Specifically, artificial intelligence taps into the ability of computer algorithms and software to draw deterministic, approximate conclusions within allowable thresholds. In comparison to traditional technologies in healthcare, artificial intelligence enhances the process of data analysis without the need for human input, producing nearly equally reliable, well-defined output. Schizophrenia is a chronic mental health condition that affects millions worldwide, with impairment in thinking and behaviour that may be significantly disabling to daily living. Multiple artificial intelligence and machine learning algorithms have been utilized to analyze the different components of schizophrenia, such as disease prediction and the assessment of current prevention methods. These efforts are carried out in the hope of assisting with diagnosis and providing viable options for affected individuals. In this paper, we review the progress of the use of artificial intelligence in schizophrenia.
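
To make the kind of pipeline such reviews survey concrete, here is a minimal sketch of a supervised classifier evaluated by cross-validation; the synthetic data, feature count, and model choice are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of a supervised classification pipeline of the kind surveyed
# in AI-for-schizophrenia reviews. The data here are synthetic stand-ins;
# real studies use features such as EEG, MRI-derived measures, or clinical scores.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 subjects x 16 features, binary diagnosis label.
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=200) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Cross-validated accuracy is one common measure of the "nearly equally
# reliable, well-defined output" the abstract refers to.
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```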


2020 ◽  
pp. 089443932098012
Author(s):  
Teresa M. Harrison ◽  
Luis Felipe Luna-Reyes

While there is growing consensus that the analytical and cognitive tools of artificial intelligence (AI) have the potential to transform government in positive ways, it is also clear that AI challenges traditional government decision-making processes and threatens the democratic values within which they are framed. These conditions argue for conservative approaches to AI that focus on cultivating and sustaining public trust. We use the extended Brunswik lens model as a framework to illustrate the distinctions between policy analysis and decision making as we have traditionally understood and practiced them, how they are evolving in the current AI context, and the challenges this evolution poses for the use of trustworthy AI. We offer a set of recommendations for practices, processes, and governance structures in government to provide for trust in AI and suggest lines of research that support them.
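
For readers unfamiliar with the framework, the classic lens model decomposes a judge's accuracy into environmental predictability, judgmental consistency, and the match between two linear cue models. The sketch below computes these indices on synthetic data; the cue weights and data are assumptions, and the paper's extended model adds structure not shown here:

```python
# Minimal sketch of the classic Brunswik lens model indices. The paper uses an
# *extended* lens model for the policy/AI setting; this shows only the standard
# decomposition, with synthetic cues, criterion, and judgments as assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 4

cues = rng.normal(size=(n, k))                 # observable cues
criterion = cues @ np.array([0.8, 0.5, 0.2, 0.0]) + rng.normal(scale=1.0, size=n)
judgment = cues @ np.array([0.6, 0.6, 0.0, 0.3]) + rng.normal(scale=1.0, size=n)

def linear_fit(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

yhat_e = linear_fit(cues, criterion)           # model of the environment
yhat_s = linear_fit(cues, judgment)            # model of the judge (or algorithm)

r = lambda a, b: np.corrcoef(a, b)[0, 1]
r_a = r(judgment, criterion)                   # achievement
R_e = r(yhat_e, criterion)                     # environmental predictability
R_s = r(yhat_s, judgment)                      # judgmental consistency
G = r(yhat_e, yhat_s)                          # knowledge / matching index

# Lens model equation: r_a ≈ G*R_e*R_s + C*sqrt(1-R_e^2)*sqrt(1-R_s^2)
print(f"achievement={r_a:.2f}, G={G:.2f}, R_e={R_e:.2f}, R_s={R_s:.2f}")
```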


Author(s):  
Kate Crowley ◽  
Jenny Stewart ◽  
Adrian Kay ◽  
Brian W. Head

Although institutions are central to the study of public policy, the focus upon them has shifted over time. This chapter is concerned with the role of institutions in problem solving and with the utility of an evolving institutional theory that has become significantly fragmented. It argues that the rise of new institutionalism in particular is symptomatic of the growing complexity of problems and of policy making. We review the complex landscape of institutional theory, reconsider institutions in the context of emergent networks and systems in the governance era, and reflect upon institutions and the notion of policy shaping in contemporary times. We find that network institutionalism, which draws upon policy network and community approaches, has particular utility for depicting and explaining complex policy.


2016 ◽  
Author(s):  
Falk Lieder ◽  
Tom Griffiths

Many contemporary accounts of human reasoning assume that the mind is equipped with multiple heuristics that could be deployed to perform a given task. This raises the question of how the mind determines when to use which heuristic. To answer this question, we developed a rational model of strategy selection based on the theory of rational metareasoning developed in the artificial intelligence literature. According to our model, people learn to efficiently choose the strategy with the best cost-benefit tradeoff by learning a predictive model of each strategy’s performance. We found that our model can provide a unifying explanation for classic findings from domains ranging from decision making to problem solving and arithmetic, by capturing the variability of people’s strategy choices, their dependence on task and context, and their development over time. Systematic model comparisons supported our theory, and four new experiments confirmed its distinctive predictions. Our findings suggest that people gradually learn to make increasingly rational use of fallible heuristics. This perspective reconciles the two poles of the debate about human rationality by integrating heuristics and biases with learning and rationality.
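
A minimal sketch of this idea, under simplifying assumptions (running-average estimates and a fixed time cost stand in for the paper's richer feature-based predictive model), might look as follows:

```python
# Minimal sketch of rational strategy selection: learn a predictive model of
# each strategy's payoff and execution time, then pick the strategy with the
# best expected reward minus time cost. The two strategies, their payoffs, and
# the cost parameter are illustrative assumptions.
import random

STRATEGIES = ["fast_heuristic", "careful_analysis"]
COST_PER_SECOND = 0.5  # assumed opportunity cost of thinking time

# Learned estimates: expected reward and expected time per strategy.
stats = {s: {"reward": 0.0, "time": 0.0, "n": 0} for s in STRATEGIES}

def expected_value(s):
    return stats[s]["reward"] - COST_PER_SECOND * stats[s]["time"]

def choose(epsilon=0.1):
    # Mostly exploit the best cost-benefit tradeoff; sometimes explore.
    if random.random() < epsilon or any(stats[s]["n"] == 0 for s in STRATEGIES):
        return random.choice(STRATEGIES)
    return max(STRATEGIES, key=expected_value)

def update(s, reward, seconds):
    st = stats[s]
    st["n"] += 1
    st["reward"] += (reward - st["reward"]) / st["n"]  # running mean
    st["time"] += (seconds - st["time"]) / st["n"]

# Toy environment: the careful strategy is more accurate but much slower.
def run(s):
    if s == "careful_analysis":
        return (1.0 if random.random() < 0.9 else 0.0), 2.0
    return (1.0 if random.random() < 0.7 else 0.0), 0.3

random.seed(0)
for _ in range(500):
    s = choose()
    reward, seconds = run(s)
    update(s, reward, seconds)

print({s: round(expected_value(s), 2) for s in STRATEGIES})
```

On this toy environment the fast heuristic wins the cost-benefit comparison despite being less accurate, which is the kind of adaptive, context-dependent strategy choice the model is meant to capture.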

