Mind the robot! Variation in attributions of mind to a wide set of real and fictional robots

2020 ◽  
Author(s):  
Oliver Jacobs ◽  
Kamel Gazzaz ◽  
Alan Kingstone

The rapid rise of computing power over the last half-century has prompted the desire to understand and develop a paradigm of affective computing systems that can recognize, process, and simulate human features, including qualities like empathy and morality. Quantitatively comparing different computing systems in their ability to simulate human qualities has been a major technical challenge. A recent framework put forth by Gray, Gray, and Wegner (2007) shows promise as a new means of comparing a wide landscape of different digital agents, both real and fictional. Using this framework, we sought to investigate whether attributions of mind towards robots suggest that people perceive robots as capable of emulating different degrees of mind. We asked participants to rate the agency (the ability "to do") and experience (the ability "to feel") of 24 characters made up of humans, robots, inanimate objects, and animals. Although robots were collectively rated much lower than humans on agency and experience, there was significant variation among robots (both real and well-known fictional robots). This implies that building digital agents to imitate aspects of experience is a fruitful avenue for future development. In addition, age was a critical factor in people’s attributions of agency and experience, indicating that there may be a generational shift towards greater acceptance of robots’ ability to both do and feel.

2021 ◽  
Vol 2 (02) ◽  
pp. 52-58
Author(s):  
Sharmeen M.Saleem Abdullah Abdullah ◽  
Siddeeq Y. Ameen Ameen ◽  
Mohammed Mohammed sadeeq ◽  
Subhi Zeebaree

New research into human-computer interaction seeks to take the user's emotional state into account in order to provide a seamless human-computer interface. This would allow such systems to thrive and be adopted in widespread fields, including education and medicine. Human emotions can be detected through multiple channels, including facial expressions and images, physiological signals, and neuroimaging techniques. This paper presents a review of emotion recognition from multimodal signals using deep learning and compares their applications based on current studies. Multimodal affective computing systems are studied alongside unimodal solutions because they offer higher classification accuracy. Accuracy varies with the number of emotions observed, the features extracted, the classification system, and the consistency of the database. The review also covers current theories on the methodology of emotion detection and recent affective science, with the aim of encouraging further study of physiological signals, the current state of the science, and its open problems in emotional awareness.
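
As a rough illustration of why multimodal systems tend to outperform unimodal ones, the sketch below compares a classifier trained on each synthetic modality alone against the same classifier trained on the concatenated (feature-level fused) modalities. The feature dimensions, the logistic-regression classifier, and the three emotion classes are illustrative assumptions, not details taken from the studies reviewed here.

```python
# Minimal sketch: unimodal vs. feature-level (early) fusion on synthetic data.
# All shapes, noise levels, and the choice of classifier are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_classes = 300, 3
y = rng.integers(0, n_classes, size=n_samples)

# Two synthetic "modalities" (e.g. facial features and physiological signals),
# each carrying a noisy, partial view of the emotion label.
facial = y[:, None] + rng.normal(scale=2.0, size=(n_samples, 20))
physio = y[:, None] + rng.normal(scale=2.0, size=(n_samples, 8))

clf = LogisticRegression(max_iter=1000)
acc_facial = cross_val_score(clf, facial, y, cv=5).mean()
acc_physio = cross_val_score(clf, physio, y, cv=5).mean()
# Feature-level fusion: simply concatenate the modality features.
acc_fused = cross_val_score(clf, np.hstack([facial, physio]), y, cv=5).mean()

print(f"facial: {acc_facial:.2f}  physio: {acc_physio:.2f}  fused: {acc_fused:.2f}")
```

On data like this the fused model typically scores highest, mirroring the accuracy advantage the review attributes to multimodal systems over unimodal ones.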


Author(s):  
Peter R Slowinski

The core of artificial intelligence (AI) applications is software of one sort or another. But while available data and computing power are important for the recent quantum leap in AI, there would not be any AI without computer programs or software. Therefore, the rise in importance of AI forces us to take—once again—a closer look at software protection through intellectual property (IP) rights, but it also offers us a chance to rethink this protection, and while perhaps not undoing the mistakes of the past, at least to adapt the protection so as not to increase the dysfunctionality that we have come to see in this area of law in recent decades. To be able to establish the best possible way to protect—or not to protect—the software in AI applications, this chapter starts with a short technical description of what AI is, with readers referred to other chapters in this book for a deeper analysis. It continues by identifying those parts of AI applications that constitute software to which legal software protection regimes may be applicable, before outlining those protection regimes, namely copyright and patents. The core part of the chapter analyses potential issues regarding software protection with respect to AI using specific examples from the fields of evolutionary algorithms and of machine learning. Finally, the chapter draws some conclusions regarding the future development of IP regimes with respect to AI.


Author(s):  
Jia Ai

Computer technology is developing extremely rapidly in today's world. The faster it develops, the more security risks it is exposed to and the more computing power its systems and software require. Embedded computer systems offer many advantages, including not only good reliability but also strong practicality, and are therefore widely applied in business and industry. Open and commercial applications allow computers to be embedded in radar systems across many industries, so computer-embedded radar operating systems have bright development prospects.


Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 1029
Author(s):  
Anabi Hilary Kelechi ◽  
Mohammed H. Alsharif ◽  
Okpe Jonah Bameyi ◽  
Paul Joan Ezra ◽  
Iorshase Kator Joseph ◽  
...  

Power-consuming entities such as high-performance computing (HPC) sites and large data centers are growing with the advance of information technology. In business, HPC is used to shorten product delivery time, reduce production cost, and decrease the time it takes to develop a new product. The high level of computing power delivered by today’s supercomputers comes at the expense of large amounts of electric power. To minimize the energy used by HPC entities, it is necessary to reduce both the energy required by the computing systems themselves and the resources needed to operate them. System energy efficiency can be improved by sampling each component’s power consumption at regular intervals and storing the readings in a database. The information stored in the database, together with device workload information and other usage metrics, then serves as input data for energy-efficiency optimization. There has been strong momentum in artificial intelligence (AI) as a tool for optimization and process automation that leverages existing information. This paper discusses ideas for improving HPC energy efficiency using AI.
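
A minimal sketch of the power-telemetry database described above, assuming a hypothetical read_power_watts() helper and invented component names; a real HPC site would instead query vendor interfaces such as RAPL, NVML, or BMC sensors. The loop samples each component at a fixed interval and stores the readings so an energy-efficiency optimizer can later consume them.

```python
# Sketch of periodic power sampling into a database (assumptions noted above).
import sqlite3
import time

def read_power_watts(component: str) -> float:
    """Placeholder for a vendor-specific power sensor query (assumed helper)."""
    import random
    return random.uniform(50.0, 300.0)

def sample_loop(db_path: str = "hpc_power.db",
                components=("cpu_node_1", "gpu_node_1", "storage_array"),
                interval_s: float = 5.0,
                n_samples: int = 10) -> None:
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS power_samples (
                       ts REAL, component TEXT, watts REAL)""")
    for _ in range(n_samples):
        now = time.time()
        rows = [(now, c, read_power_watts(c)) for c in components]
        con.executemany("INSERT INTO power_samples VALUES (?, ?, ?)", rows)
        con.commit()
        time.sleep(interval_s)
    con.close()

if __name__ == "__main__":
    sample_loop(n_samples=3, interval_s=1.0)
```

Workload information and other usage metrics would be stored in additional tables alongside these power samples before being fed to the optimizer.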


2021 ◽  
Vol 11 (11) ◽  
pp. 1392
Author(s):  
Yue Hua ◽  
Xiaolong Zhong ◽  
Bingxue Zhang ◽  
Zhong Yin ◽  
Jianhua Zhang

Affective computing systems can decode cortical activities to facilitate emotional human–computer interaction. However, individual differences in the neurophysiological responses of different users of a brain–computer interface make it difficult to design a generic emotion recognizer that adapts to a novel individual, which is an obstacle to cross-subject emotion recognition (ER). To tackle this issue, in this study we propose a novel feature selection method, manifold feature fusion and dynamical feature selection (MF-DFS), under the transfer learning principle, to determine generalizable features that are stably sensitive to emotional variations. The MF-DFS framework combines local geometrical information feature selection, domain-adaptation-based manifold learning, and dynamical feature selection to enhance the accuracy of the ER system. On three public databases, DEAP, MAHNOB-HCI, and SEED, the performance of MF-DFS is validated under the leave-one-subject-out paradigm with two types of electroencephalography features. Defining three emotional classes for each affective dimension, the MF-DFS-based ER classifier achieves accuracies of 0.50 and 0.48 (DEAP) and 0.46 and 0.50 (MAHNOB-HCI) for the arousal and valence dimensions, respectively. For the SEED database, it achieves 0.40 for the valence dimension. These accuracies are significantly superior to several classical feature selection methods across multiple machine learning models.
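
For readers unfamiliar with the evaluation protocol, the sketch below illustrates the leave-one-subject-out paradigm on synthetic EEG-like features; a plain SVM pipeline stands in for MF-DFS, which is not reproduced here, and all data shapes and class counts are assumptions.

```python
# Leave-one-subject-out cross-validation sketch (stand-in model, synthetic data).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_subjects, trials_per_subject, n_features = 10, 40, 64
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 3, size=n_subjects * trials_per_subject)   # three emotion classes
groups = np.repeat(np.arange(n_subjects), trials_per_subject)  # subject IDs

# Each fold trains on all but one subject and tests on the held-out subject,
# so the reported accuracy reflects cross-subject generalization.
model = make_pipeline(StandardScaler(), SVC())
scores = cross_val_score(model, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean cross-subject accuracy: {scores.mean():.2f}")
```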


2012 ◽  
Vol 5 (1) ◽  
pp. 1-17
Author(s):  
Frank Ortmeier

In 1988, Mark Weiser was one of the first computer scientists to envision that computers would become invisible and that computing power and communication technology would become part of many objects of society’s daily life. Many modern systems would not be possible without pervasive technology. Today, most such systems might be invisible or wearable, but society is either still aware of them or they communicate with each other only to a limited extent. In the near future, many objects of day-to-day life will be equipped with some kind of computing and communication capability, and people will no longer be aware of it. The great benefit is that these objects will offer citizens support and guidance in everyday life. For example, most people do not know that convenient features like traffic-jam prediction and avoidance rely on the navigation system feeding data back to centralized server clusters, which analyze the data and thus predict possible traffic jams. However, dependability issues often impose rigid limits. Because the systems are so smoothly integrated into normal life, they are expected to be robust against intentional manipulation, to guarantee functional requirements, and/or to be traceable and understandable for the human user. In addition, the adaptive nature of many pervasive computing systems makes them very difficult to analyze and predict.


2009 ◽  
Vol 45 (4) ◽  
pp. 995-1010 ◽  
Author(s):  
Jiří Wiedermann ◽  
Lukáš Petrů

Inventions ◽  
2018 ◽  
Vol 3 (3) ◽  
pp. 48 ◽  
Author(s):  
Joseph Plazak ◽  
Marta Kersten-Oertel

Recent developments pertaining to ear-mounted wearable computer interfaces (i.e., “hearables”) offer a number of distinct affordances over other wearable devices in ambient and ubiquitous computing systems. This paper provides a survey of hearables and the possibilities that they offer as computer interfaces. Thereafter, these affordances are examined with respect to other wearable interfaces. Finally, several historical trends are noted within this domain, and multiple paths for future development are offered.


2019 ◽  
Vol 214 ◽  
pp. 03012 ◽  
Author(s):  
Federico Stagni ◽  
Andrei Tsaregorodtsev ◽  
Christophe Haen ◽  
Philippe Charpentier ◽  
Zoltan Mathe ◽  
...  

The DIRAC project is developing interware to build and operate distributed computing systems. It provides a development framework and a rich set of services for both Workload and Data Management tasks of large scientific communities. DIRAC has been adopted by a growing number of collaborations, including LHCb, Belle2, CLIC, and CTA. The LHCb experiment will be upgraded during the second long LHC shutdown (2019-2020). At the restart of data taking in Run 3, the instantaneous luminosity will increase by a factor of five, so the LHCb computing model also needs to be upgraded. Oversimplifying, this translates into the need for significantly more computing power and resources, and more storage, than LHCb uses right now. The DIRAC interware will remain the tool for handling all of LHCb's distributed computing resources. In this contribution, we highlight the ongoing and planned efforts to ensure that DIRAC will be able to provide optimal usage of its distributed computing resources. The contribution focuses on DIRAC's plans for increasing the scalability of the overall system, taking into consideration that the main requirement is keeping a running system working. This requirement translates into the need for studies and developments within the current DIRAC architecture. We believe that scalability is about traffic growth, dataset growth, and maintainability; in this contribution we address all of them, showing the technical solutions we are adopting.

