AI for interactive performance: Challenges and techniques

2021 ◽  
Vol 14 (2) ◽  
pp. 231-243
Author(s):  
Rossana Damiano ◽  
Vincenzo Lombardo ◽  
Giulia Monticone ◽  
Antonio Pizzo

AI techniques and systems are pervasive in the media and entertainment industry, with applications ranging from chatbots and characters to games and virtual environments. A common feature of these applications is the intent to introduce a narrative element into the user experience, often conveyed through some type of performance. In this paper, we analyse the contribution of AI techniques to the design and realization of a dramatic performance: an interactive system in which human performers and audiences participate through some type of enactment. Drawing on real applications developed for innovative performances, we propose an architectural model that forms the technical platform of such a system, and discuss how it can be deployed using Artificial Intelligence techniques, with reference to real, experimental applications created over the last two decades.

Author(s):  
Valeria Carofiglio ◽  
Fabio Abbattista

In order to develop a complex interactive system, user-centered evaluation (UCE) is an essential component. New interaction paradigms encourage exploring new variables to account for users' experience in terms of their needs and preferences. This is especially important for Adaptable Virtual Environments (AVE). In this context, to obtain a more engaging overall user experience, a good designer should perform proper formative and summative usability tests based on the user's emotional level, which becomes a UCE activity. Our methodology tries to overcome the weaknesses of traditional methods by employing a Brain-Computer Interface (BCI) to collect additional information on users' needs and preferences. A set of preliminary usability experiments has been conducted to (i) determine whether the output of a BCI is suitable to guide the designer in organizing the user-system dialog within an AVE and (ii) evaluate the user-system dialog in terms of the dynamic increase of emotionally driven customization of the interaction.


Information ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 275
Author(s):  
Peter Cihon ◽  
Jonas Schuett ◽  
Seth D. Baum

Corporations play a major role in artificial intelligence (AI) research, development, and deployment, with profound consequences for society. This paper surveys opportunities to improve how corporations govern their AI activities so as to better advance the public interest. The paper focuses on the roles of and opportunities for a wide range of actors inside the corporation—managers, workers, and investors—and outside the corporation—corporate partners and competitors, industry consortia, nonprofit organizations, the public, the media, and governments. Whereas prior work on multistakeholder AI governance has proposed dedicated institutions to bring together diverse actors and stakeholders, this paper explores the opportunities these actors have even in the absence of dedicated multistakeholder institutions. The paper illustrates these opportunities with many cases, including the participation of Google in the U.S. Department of Defense Project Maven; the publication of potentially harmful AI research by OpenAI, with input from the Partnership on AI; and the sale of facial recognition technology to law enforcement by corporations including Amazon, IBM, and Microsoft. These and other cases demonstrate the wide range of mechanisms to advance AI corporate governance in the public interest, especially when diverse actors work together.


2012 ◽  
Vol 21 (1) ◽  
pp. 96-116 ◽  
Author(s):  
Matthias Haringer ◽  
Steffi Beckhaus

In this paper we introduce novel methods of intensifying and varying the user experience in virtual environments (VEs). VEs technically have numerous means for crafting the user experience, yet little has been done to evaluate those means of expression (MoEs) for their emotional impact on people, or to use their capability to create different experiences and subtly guide the user. One reason is that this requires a system capable of easily and dynamically providing those MoEs in such a way that they can be composed, evaluated, and compared between applications and users. In the following, we first introduce our model of both the informational and emotional impact of VEs on users, introduce our dynamic, expressive VR system, and present our novel evaluation and rating method for MoEs. MoEs can be used to guide attention to specific objects or to build up an emotion or mood over time. We then present a study in which users experience 30 selected MoEs and rate their qualitative emotional impact using this rating method. We found that different MoEs can elicit many diverse emotions, which were surprisingly consistent among the test persons. With these results, our work enables new ways to make VEs more interesting and emotionally engaging, especially over longer periods of time, opening new possibilities, for example, to increase motivation for long, stressful, and tiresome training such as in neurorehabilitation.


2021 ◽  
Vol 73 (01) ◽  
pp. 12-13
Author(s):  
Manas Pathak ◽  
Tonya Cosby ◽  
Robert K. Perrons

Artificial intelligence (AI) has captivated the imagination of science-fiction movie audiences for many years and has been used in the upstream oil and gas industry for more than a decade (Mohaghegh 2005, 2011). But few industries evolve more quickly than those from Silicon Valley, and it accordingly follows that the technology has grown and changed considerably since this discussion began. The oil and gas industry, therefore, is at a point where it would be prudent to take stock of what has been achieved with AI in the sector, to provide a sober assessment of what has delivered value and what has not among the myriad implementations made so far, and to figure out how best to leverage this technology in the future in light of these learnings.

When one looks at the long arc of AI in the oil and gas industry, a few important truths emerge. First among these is the fact that not all AI is the same. There is a spectrum of technological sophistication. Hollywood and the media have always been fascinated by the idea of artificial superintelligence and general intelligence systems capable of mimicking the actions and behaviors of real people. Those kinds of systems would have the ability to learn, perceive, understand, and function in human-like ways (Joshi 2019). As alluring as these types of AI are, however, they bear little resemblance to what actually has been delivered to the upstream industry. Instead, we mostly have seen much less ambitious “narrow AI” applications that very capably handle a specific task, such as quickly digesting thousands of pages of historical reports (Kimbleton and Matson 2018), detecting potential failures in progressive cavity pumps (Jacobs 2018), predicting oil and gas exports (Windarto et al. 2017), offering improvements for reservoir models (Mohaghegh 2011), or estimating oil-recovery factors (Mahmoud et al. 2019).
But let’s face it: As impressive and commendable as these applications have been, they fall far short of the ambitious vision of highly autonomous systems capable of thinking about things outside the narrow range of tasks explicitly handed to them. What is more, many of these narrow AI applications have tended to be modified versions of fairly generic solutions that were originally designed for other industries and then usefully extended to the oil and gas industry with a modest amount of tailoring. In other words, relatively little AI has been developed with the oil and gas sector in mind from the outset.

The second important truth is that human judgment still matters. What some technology vendors have referred to as “augmented intelligence” (Kimbleton and Matson 2018), whereby AI supplements human judgment rather than supplants it, is not merely an alternative way of approaching AI; rather, it is coming into focus that this is probably the most sensible way forward for this technology.


Author(s):  
Aleshia T. Hayes ◽  
Carrie L. Straub ◽  
Lisa A. Dieker ◽  
Charlie E. Hughes ◽  
Michael C. Hynes

New and emerging technology in the field of virtual environments has permitted a certain malleability of learning milieus. These emerging environments allow learning and transfer through interactions that have been intentionally designed to be pleasurable experiences. TLE TeachLivE™ is just such an emerging environment, engaging teachers in practice on the pedagogical and content aspects of teaching in a simulator. The sense of presence, engagement, and ludus of TLE TeachLivE™ derives from a compelling Mixed Reality that combines off-the-shelf and emerging technologies. Among the features identified as relevant to the ludic nature of TeachLivE are flow, fidelity, unpredictability, suspension of disbelief, social presence, and game-like elements. This article explores TLE TeachLivE™ in terms of its ludology, the paideic user experience, the source of the ludus, and the outcomes of the ludic nature of the experience.


2018 ◽  
Vol 14 (4) ◽  
pp. 734-747 ◽  
Author(s):  
Constance de Saint Laurent

There has been much hype, over the past few years, about the recent progress of artificial intelligence (AI), especially through machine learning. If one is to believe many of the headlines that have proliferated in the media, as well as in an increasing number of scientific publications, it would seem that AI is now capable of creating and learning in ways that are starting to resemble what humans can do, and that we should start to hope – or fear – that the creation of fully cognisant machines might be something we will witness in our lifetime. However, many of these beliefs are based on deep misconceptions about what AI can do, and how. In this paper, I start with a brief introduction to the principles of AI, machine learning, and neural networks, primarily intended for psychologists and social scientists, who often have much to contribute to the debates surrounding AI but lack a clear understanding of what it can currently do and how it works. I then debunk four common myths associated with AI: 1) it can create, 2) it can learn, 3) it is neutral and objective, and 4) it can solve ethically and/or culturally sensitive problems. In a third and last section, I argue that these misconceptions represent four main dangers: 1) avoiding debate, 2) naturalising our biases, 3) deresponsibilising creators and users, and 4) missing out on some of the potential uses of machine learning. I conclude with the potential benefits of using machine learning in research, and thus the need to defend machine learning without romanticising what it can actually do.


i-com ◽  
2016 ◽  
Vol 15 (1) ◽  
Author(s):  
Holger Fischer ◽  
Michaela Kauer-Franz ◽  
Dominique Winter ◽  
Stefan Latt

The establishment of human-centered design within software development processes is still a challenge. Numerous methods exist that aim to increase the usability and user experience of an interactive system. Nevertheless, the selection of appropriate methods remains challenging, as multiple factors have a significant impact on the appropriateness of a method in its context of use. The present article investigates current strategies of method selection based on a conference workshop with practitioners. The results show that usability and user experience professionals concentrate on five to seven well-known methods and need more support to select and use further ones.


1998 ◽  
Vol 25 (1) ◽  
pp. 64-67 ◽  
Author(s):  
René Verry

Susan Lederman (SL) is an invited member of the International Council of Research Fellows for the Braille Research Center and a Fellow of the Canadian Psychological Association. She was also an Associate of the Canadian Institute for Advanced Research in the Robotics and Artificial Intelligence Programme for 8 years. A Professor in the Departments of Psychology and Computing & Information Science at Queen's University at Kingston (Ontario, Canada), she has written and coauthored numerous articles on tactile psychophysics, haptic perception and cognition, motor control, and haptic applications in robotics, teleoperation, and virtual environments. She is currently the co-organizer of the Annual Symposium on Haptic Interfaces for Teleoperation and Virtual Environment Systems. René Verry (RV) is a psychology professor at Millikin University (Decatur, IL), where she teaches a variety of courses in the experimental core, including Sensation and Perception. She chose the often-subordinated somatic senses as the focus of her interview and recruited Susan Lederman as our research specialist.


10.28945/4644 ◽  
2020 ◽  
Vol 4 ◽  
pp. 177-192
Author(s):  
Chrissann R. Ruehle

The Artificial Intelligence (AI) industry has experienced tremendous growth in recent years. Consequently, there has been considerable hype, interest, and even misinformation in the media regarding this emergent technology. Practitioners and academics alike are interested in learning how this market functions in order to make evidence-based decisions regarding its adoption. The purpose of this manuscript is to perform a systematic examination of current market dynamics and to identify future growth opportunities for the benefit of incumbents as well as firms seeking to enter the AI market. The primary research question is: how do market and governmental forces reportedly shape AI adoption? Drawing on predominantly practitioner-focused literature, along with several seminal academic sources, the article begins by examining and mapping stakeholders in the market. This approach allows for the identification and analysis of key stakeholders. Semiconductor and cloud computing firms play a substantive role in the AI adoption ecosystem, wielding substantial power as revealed in this analysis. Subsequently, the TOE framework, which comprises the technology, organization, and environment contexts, is applied in order to understand the role of these forces in shaping the AI market. This analysis demonstrates that large firms have a significant competitive advantage due to their extensive data collection and management capabilities, in addition to their ability to attract data scientists and high-performing analytics professionals. Large firms are actively acquiring small and medium-sized AI businesses in order to expand their offerings, particularly in dynamic emerging fields such as facial recognition technology and deep learning.

