Thinking Machines and the Philosophy of Computer Science
Latest Publications


Total documents: 22 (five years: 0)

H-index: 4 (five years: 0)

Published by: IGI Global

ISBN: 9781616920142, 9781616920159

Author(s):  
Kevin Warwick

It is now possible to grow a biological brain within a robot body. As an outsider it is exciting to consider what the brain is thinking about when it is interacting with the world at large, and what issues cause it to ponder during its break times. As a result, it appears that it will not be too long before we actually find out what it would really be like to be a robot. Here we look at the technology involved and investigate the possibilities on offer. Fancy the idea of being a robot yourself? Then read on!


Author(s):  
Jordi Vallverdú

From recent debates about the role of scientific instruments and human vision, we can conclude that we do not see through our instruments, but we see with them. All our observations, perceptions and scientific data are biologically, socially, and cognitively mediated. So there is no ‘pure vision’, nor ‘pure objective data’. At a certain level, we can say that we have an extended epistemology, which embraces human and instrumental entities. We can make better science because we can deal better with scientific data. But at the same time, the point is not that we ‘see’ better, but that we can only see because we design those cognitive interfaces. Computational simulations are the middleware of our mindware, acting as mediators between our instruments, brains, the world and our minds. We are contemporary Thomases, who believe what we can see.


Author(s):  
Joseph Brenner

The conjunction of the disciplines of computing and philosophy implies that discussion of computational models and approaches should include explicit statements of their underlying worldview, given the fact that reality includes both computational and non-computational domains. As outlined at ECAP08, both domains of reality can be characterized by the different logics applicable to them. A new “Logic in Reality” (LIR) was proposed as best describing the dynamics of real, non-computable processes. The LIR process view of the real macroscopic world is compared here with recent computational and information-theoretic models. Proposals that the universe can be described as a mathematical structure equivalent to a computer or by simple cellular automata are deflated. A new interpretation of quantum superposition as supporting a concept of paraconsistent parallelism in quantum computing and an appropriate ontological commitment for computational modeling are discussed.
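The abstract invokes quantum superposition as the ground for a notion of paraconsistent parallelism. As background only, in standard quantum-computing notation (not the chapter's own formalism), a single qubit in superposition and an n-qubit register are written

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
\]
\[
|\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle, \qquad \sum_{x} |c_x|^2 = 1.
\]

A register of n qubits thus carries 2^n amplitudes at once, which is the sense in which mutually exclusive classical states coexist "in parallel" and which the abstract connects to a paraconsistent reading.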


Author(s):  
Luc Schneider

This contribution tries to assess how the Web is changing the ways in which scientific knowledge is produced, distributed and evaluated, in particular how it is transforming the conventional conception of scientific authorship. After having properly introduced the notions of copyright, public domain and (e-)commons, I will critically assess James Boyle's (2003, 2008) thesis that copyright and scientific (e-)commons are antagonistic, but I will mostly agree with the related claim by Stevan Harnad (2001a,b, 2008) that copyright has become an obstacle to the accessibility of scientific works. I will even go further and argue that Open Access schemes not only solve the problem of the availability of scientific literature, but may also help to tackle the uncontrolled multiplication of scientific publications, since these publishing schemes are based on free public licenses allowing for (acknowledged) re-use of texts. However, the scientific community does not seem to be prepared yet to move towards an Open Source model of authorship, probably due to concerns related to attributing credit and responsibility for the expressed hypotheses and results. Some strategies and tools that may encourage a change of academic mentality in favour of a conception of scientific authorship modelled on the Open Source paradigm are discussed.


Author(s):  
David J. Saab ◽  
Uwe V. Riss

In this chapter we will investigate the nature of abstraction in detail, its entwinement with logical thinking, and the general role it plays for the mind. We find that non-logical capabilities are not only important for input processing, but also for output processing. Human beings jointly use analytic and embodied capacities for thinking and acting, where analytic thinking mirrors reflection and logic, and where abstraction is the form in which embodied thinking is revealed to us. We will follow the philosophical analyses of Heidegger and Polanyi to elaborate the fundamental difference between abstraction and logic and how they come together in the mind. If computational approaches to mind are to be successful, they must be able to recognize meaningful and salient elements of a context and engage in abstraction. Computational minds must be able to imagine and volitionally blend abstractions as a way of recognizing gestalt contexts. And they must be able to discern the validity of these blendings in ways that, in humans, arise from a sensus communis.


Author(s):  
David Casacuberta ◽  
Saray Ayala ◽  
Jordi Vallverdú

After several decades of success in different areas and numerous effective applications, algorithmic Artificial Intelligence has revealed its limitations. If in our quest for artificial intelligence we want to understand natural forms of intelligence, we need to shift from platform-free algorithms to embodied and embedded agents. Under the embodied perspective, intelligence is not so much a matter of algorithms, but of the continuous interactions of an embodied agent with the real world. In this paper we adhere to a specific reading of the embodied view usually known as enactivism, to argue that (1) it is a more reasonable model of how the mind really works; (2) it has both theoretical and empirical benefits for Artificial Intelligence; and (3) it can be easily implemented in simple robotic sets like Lego Mindstorms (TM). In particular, we will explore the computational role that morphology can play in artificial systems. We will illustrate our ideas by presenting several Lego Mindstorms robots where morphology is critical for the robot’s behaviour.
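The abstract does not reproduce the robots' programs. Purely as an illustrative sketch of the claim that morphology can do computational work, the Braitenberg-style controller below keeps the control law fixed and changes only the sensor-to-motor wiring; the function name and sensor values are hypothetical, not taken from the chapter. The "body plan" alone flips the robot from light-avoiding to light-seeking.

# Braitenberg-style controller: the same control law, two "morphologies".
# Ipsilateral wiring (each motor driven by the sensor on its own side)
# makes the robot turn away from a light source; contralateral (crossed)
# wiring makes it turn towards the light.

def motor_speeds(left_light, right_light, crossed):
    """Map two light-sensor readings in [0, 1] to (left_motor, right_motor)."""
    if crossed:
        return right_light, left_light   # crossed wiring: steer towards the light
    return left_light, right_light       # straight wiring: steer away from the light

# Light is stronger on the robot's left side:
print(motor_speeds(0.9, 0.2, crossed=False))  # (0.9, 0.2): left wheel faster, veers right, away
print(motor_speeds(0.9, 0.2, crossed=True))   # (0.2, 0.9): right wheel faster, veers left, towards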


Author(s):  
Antoni Diller

Considerable progress is being made in AI and Robotics to produce an android with human-like abilities. The work currently being done in mainstream laboratories cannot, unfortunately, succeed in making a machine that can interact meaningfully with people. This is because that work does not take seriously the fact that an intelligent agent receives most of the information he or she needs to be a productive member of society by accepting other people’s assertions. AI and Robotics are not alone in marginalising the study of testimony; this happens in science generally and also in philosophy. After explaining the main reason for this and surveying some of what has been done in AI and philosophy on understanding testimony, by people working outside the mainstream, I present a theory of testimony and investigate its implementability.
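The abstract does not spell out the theory of testimony itself. Simply to illustrate what "implementability" might amount to, here is a minimal, hypothetical sketch of a defeasible acceptance policy: accept what one is told unless some specific defeater applies. The rule and the defeater checks are illustrative assumptions, not Diller's actual proposal.

# Hypothetical defeasible-acceptance sketch: accept testimony by default,
# reject it only when a defeater (a specific reason for doubt) fires.

def accept_assertion(assertion, defeaters):
    """Return True if no defeater rejects the assertion."""
    return not any(defeater(assertion) for defeater in defeaters)

# Illustrative defeaters (placeholders for richer checks on source
# reliability, consistency with prior beliefs, etc.):
contradicts_prior_beliefs = lambda a: "moon is made of cheese" in a
source_known_unreliable = lambda a: False

print(accept_assertion("water boils at 100 C at sea level",
                       [contradicts_prior_beliefs, source_known_unreliable]))  # True
print(accept_assertion("the moon is made of cheese",
                       [contradicts_prior_beliefs, source_known_unreliable]))  # False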


Author(s):  
Juan Manuel Durán

This work is meant to revisit Francesco Guala’s paper ‘Models, Simulations, and Experiments’. The main intention is to raise some reasonable doubts about the conception of ‘ontological account’ described in his work. Accordingly, I develop my arguments in three (plus one) steps: firstly, I show that his conception of ‘experiment’ is too narrow, suggesting a more accurate version instead. Secondly, I object to his notion of ‘simulation’ and, following Trenholme, I make a further distinction between ‘analogical’ and ‘digital’ simulations. This distinction will also be an enrichment of the concept of ‘experiment’. In addition, I suggest that his notion of ‘computer simulation’ is too narrow as well. All these arguments have the advantage of moving the ‘ontological account’ onto a new ontological map, but not getting rid of it. Hence, as a third step I discuss cellular automata as a potential solution to this new problem. Finally, I object to his conception of ‘hybrid simulations’ as another way of misrepresenting computational activity.
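Since the third step appeals to cellular automata, a minimal example may help fix the idea. The sketch below is a generic one-dimensional elementary automaton, with Rule 110 chosen only as a familiar instance, not as anything drawn from Durán's argument.

# One-dimensional elementary cellular automaton with wrap-around boundaries.
def step(cells, rule=110):
    """Apply one synchronous update; 'rule' is the usual Wolfram rule number."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right   # value 0..7
        new.append((rule >> neighbourhood) & 1)               # read that bit of the rule
    return new

row = [0] * 20 + [1]          # start from a single live cell
for _ in range(10):
    print("".join(".#"[c] for c in row))
    row = step(row)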


Author(s):  
Timothy Colburn ◽  
Gary Shute

Among empirical disciplines, computer science and the engineering fields share the distinction of creating their own subject matter, raising questions about the kinds of knowledge they engender. We argue that knowledge acquisition in computer science fits models as diverse as those proposed by Piaget and Lakatos. However, contrary to natural science, the knowledge acquired by computer science is not knowledge of objective truth, but of values.


Author(s):  
Matteo Casu ◽  
Luca Albergante

The notion of identity has been discussed extensively in the past. Leibniz was the first to present this notion in a logically coherent way, using a formulation generally recognized as “Leibniz's Law”. Although some authors criticized this formulation, Leibniz's Law is generally accepted as the definition of identity. This work interprets Leibniz's Law as a limit notion: perfectly reasonable in a God's-eye view of reality, but very difficult to use in the real world because of the limitations of finite agents. To illustrate our approach we use “description logics” to describe the properties of objects, and present an approach to relativize Leibniz's Law. This relativization is further developed in a semantic web context, where the utility of our approach is suggested.
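For readers unfamiliar with the principle at issue, Leibniz's Law and one schematic way of relativizing it can be written as follows; the relativized symbol is illustrative notation, not necessarily the authors' own.

\[
x = y \;\longleftrightarrow\; \forall P\,\big(P(x) \leftrightarrow P(y)\big)
\]
\[
x =_{\Delta} y \;\longleftrightarrow\; \forall P \in \Delta\,\big(P(x) \leftrightarrow P(y)\big)
\]

Here \(\Delta\) stands for the finite stock of descriptions a limited agent can actually survey, for instance the concepts expressible in a given description logic; the unrestricted law is recovered only in the limit where \(\Delta\) exhausts all properties, which is the God's-eye reading of identity.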

