Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers

2021, Vol 11 (18), pp. 8531
Author(s): Tim Murray-Browne, Panagiotis Tigas

Most Human–Computer Interfaces are built on the paradigm of manipulating abstract representations. This can be limiting when computers are used in artistic performance or as mediators of social connection, where we rely on qualities of embodied thinking: intuition, context, resonance, ambiguity and fluidity. We explore an alternative approach to designing interaction that we call the emergent interface: interaction leveraging unsupervised machine learning to replace designed abstractions with contextually derived emergent representations. The approach offers opportunities to create interfaces bespoke to a single individual, to continually evolve and adapt the interface in line with that individual’s needs and affordances, and to bridge more deeply with the complex and imprecise interaction that defines much of our non-digital communication. We explore this approach through artistic research rooted in music, dance and AI with the partially emergent system Sonified Body. The system maps the moving body into sound using an emergent representation of the body derived from a corpus of improvised movement from the first author. We explore this system in a residency with three dancers. We reflect on the broader implications and challenges of this alternative way of thinking about interaction, and how far it may help users avoid being limited by the assumptions of a system’s designer.
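
A minimal sketch of the kind of pipeline the abstract describes: an unsupervised model learns a low-dimensional, emergent representation of body pose from a movement corpus, and live pose frames projected into that space drive sound synthesis. PCA stands in for whatever unsupervised model Sonified Body actually uses, and the corpus shape, pose format and synthesis parameters are all assumptions for illustration.

```python
# Sketch: emergent representation of the body driving sound parameters.
# PCA is a stand-in for the system's unsupervised model; data and mappings are illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
corpus = rng.normal(size=(5000, 75))   # placeholder for a recorded improvised-movement corpus

model = PCA(n_components=4)            # emergent 4-D representation of the moving body
model.fit(corpus)

def pose_to_sound_params(pose_frame: np.ndarray) -> dict:
    """Project a live pose frame into the learned space and scale to synth ranges."""
    z = model.transform(pose_frame.reshape(1, -1))[0]
    z = np.tanh(z)                      # squash to [-1, 1] for stable mappings
    return {
        "pitch_hz": 220.0 * 2 ** z[0],                   # roughly one octave up or down
        "filter_cutoff_hz": 500.0 + 4500.0 * (z[1] + 1) / 2,
        "grain_density": (z[2] + 1) / 2,
        "reverb_mix": (z[3] + 1) / 2,
    }

print(pose_to_sound_params(corpus[0]))
```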

2021, Vol 18 (3), pp. 1-22
Author(s): Charlotte M. Reed, Hong Z. Tan, Yang Jiao, Zachary D. Perez, E. Courtenay Wilson

Stand-alone devices for tactile speech reception serve a need as communication aids for persons with profound sensory impairments as well as in applications such as human-computer interfaces and remote communication when the normal auditory and visual channels are compromised or overloaded. The current research is concerned with perceptual evaluations of a phoneme-based tactile speech communication device in which a unique tactile code was assigned to each of the 24 consonants and 15 vowels of English. The tactile phonemic display was conveyed through an array of 24 tactors that stimulated the dorsal and ventral surfaces of the forearm. Experiments examined the recognition of individual words as a function of the inter-phoneme interval (Study 1) and two-word phrases as a function of the inter-word interval (Study 2). Following an average training period of 4.3 hours on phoneme and word recognition tasks, mean scores for the recognition of individual words in Study 1 ranged from 87.7% correct to 74.3% correct as the inter-phoneme interval decreased from 300 to 0 ms. In Study 2, following an average of 2.5 hours of training on the two-word phrase task, both words in the phrase were identified with an accuracy of 75% correct using an inter-word interval of 1 s and an inter-phoneme interval of 150 ms. Effective transmission rates achieved on this task were estimated to be on the order of 30 to 35 words/min.
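
The reported transmission rate can be sanity-checked from the timing parameters. A short sketch follows; only the 150 ms inter-phoneme and 1 s inter-word intervals come from the abstract, while the per-phoneme code duration and phonemes-per-word count are illustrative assumptions.

```python
# Back-of-the-envelope check of the "effective transmission rate" figure for Study 2.
def words_per_minute(n_words: int,
                     phonemes_per_word: float,
                     phoneme_ms: float,
                     inter_phoneme_ms: float,
                     inter_word_ms: float) -> float:
    """Words/min for a phrase presented phoneme by phoneme with fixed gaps."""
    per_word_ms = (phonemes_per_word * phoneme_ms
                   + (phonemes_per_word - 1) * inter_phoneme_ms)
    total_ms = n_words * per_word_ms + (n_words - 1) * inter_word_ms
    return n_words / (total_ms / 60000.0)

# e.g. two-word phrases, ~3.5 phonemes per word, ~300 ms phoneme codes (assumed):
print(round(words_per_minute(2, 3.5, 300, 150, 1000)))   # ~31 words/min
```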


Author(s): Shannon K. T. Bailey, Daphne E. Whitmer, Bradford L. Schroeder, Valerie K. Sims

Human-computer interfaces are changing to meet the evolving needs of users and overcome limitations of previous generations of computer systems. The current state of computers consists largely of graphical user interfaces (GUI) that incorporate windows, icons, menus, and pointers (WIMPs) as visual representations of computer interactions controlled via user input on a mouse and keyboard. Although this model of interface has dominated human-computer interaction for decades, WIMPs require an extra step between the user’s intent and the computer action, both imposing limitations on the interaction and introducing cognitive demands (van Dam, 1997). Alternatively, natural user interfaces (NUI) employ input methods such as speech, touch, and gesture commands. With NUIs, users can interact directly with the computer without using an intermediary device (e.g., mouse, keyboard). Using the body as an input device may be more “natural” because it allows the user to apply existing knowledge of how to interact with the world (Roupé, Bosch-Sijtsema, & Johansson, 2014). To utilize the potential of natural interfaces, research must first determine which interactions can be considered natural. For the purpose of this paper, we focus on the naturalness of gesture-based interfaces. The purpose of this study was to determine how people perform natural gesture-based computer actions. To answer this question, we first narrowed down potential gestures that would be considered natural for an action. In a previous study, participants (n = 17) were asked how they would gesture to interact with a computer to complete a series of actions. After narrowing down the potential natural gestures by calculating the most frequently performed gestures for each action, we asked participants (n = 188) in the current study to rate the naturalness of those gestures. Participants each watched 26 videos of gestures (3-5 seconds each) and were asked how natural or arbitrary they interpreted each gesture for the series of computer commands (e.g., move object left, shrink object, select object, etc.). The gestures in these videos included the 17 gestures that were most often performed in the previous study in which participants were asked what gesture they would naturally use to complete the computer actions. Nine gestures were also included that were created arbitrarily to act as a comparison to the natural gestures. By analyzing the ratings on a continuum from “Completely Arbitrary” to “Completely Natural,” we found that the natural gestures people produced in the first study were also interpreted as the intended action by this separate sample of participants. All the gestures that were rated as either “Mostly Natural” or “Completely Natural” by participants corresponded to how the object manipulation would be performed physically. For example, the gesture video that depicts a fist closing was rated as “natural” by participants for the action of “selecting an object.” All of the gestures that were created arbitrarily were interpreted as “arbitrary” when they did not correspond to the physical action. Determining how people naturally gesture computer commands and how people interpret those gestures is useful because it can inform the development of NUIs and contributes to the literature on what makes gestures seem “natural.”
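
A small sketch of the two analysis steps described above: tallying the most frequently elicited gesture per command from the first study, then averaging the naturalness ratings each gesture received in the rating study. The example data and the numeric rating scale are assumptions for illustration; the paper reports ratings on an arbitrary-to-natural continuum.

```python
# Sketch: gesture elicitation frequency and mean naturalness ratings (illustrative data).
from collections import Counter
from statistics import mean

# (command, gesture) pairs elicited from participants in the first study
elicited = [("select object", "close fist"), ("select object", "tap"),
            ("move object left", "swipe left"), ("select object", "close fist")]

most_frequent = {}
for command in {c for c, _ in elicited}:
    counts = Counter(g for c, g in elicited if c == command)
    most_frequent[command] = counts.most_common(1)[0][0]

# naturalness ratings (1 = completely arbitrary ... 5 = completely natural, assumed scale)
ratings = {"close fist": [5, 4, 5, 4], "swipe left": [5, 5, 4], "circle wave": [2, 1, 2]}
mean_naturalness = {g: mean(r) for g, r in ratings.items()}

print(most_frequent)
print(mean_naturalness)
```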


2012, Vol 8 (5), pp. 842-845
Author(s): W. I. Sellers, J. Hepworth-Bell, P. L. Falkingham, K. T. Bates, C. A. Brassey, ...

Body mass is a critical parameter used to constrain biomechanical and physiological traits of organisms. Volumetric methods are becoming more common as techniques for estimating the body masses of fossil vertebrates. However, they are often accused of excessive subjective input when estimating the thickness of missing soft tissue. Here, we demonstrate an alternative approach where a minimum convex hull is derived mathematically from the point cloud generated by laser-scanning mounted skeletons. This has the advantage of requiring minimal user intervention and is thus more objective and far quicker. We test this method on 14 relatively large-bodied mammalian skeletons and demonstrate that it consistently underestimates body mass by 21 per cent with minimal scatter around the regression line. We therefore suggest that it is a robust method of estimating body mass where a mounted skeletal reconstruction is available and demonstrate its usage to predict the body mass of one of the largest, relatively complete sauropod dinosaurs: Giraffatitan brancai (previously Brachiosaurus) as 23,200 kg.
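
A minimal sketch of the hull-based workflow: compute the minimum convex hull of a scanned skeleton's point cloud, then convert hull volume to body mass. The density value and the simple correction for the reported 21 per cent underestimate are assumptions standing in for the paper's calibration regression over the 14 mammalian skeletons.

```python
# Sketch: convex-hull volume of a point cloud -> corrected body-mass estimate (assumed calibration).
import numpy as np
from scipy.spatial import ConvexHull

def estimate_body_mass(points: np.ndarray,
                       density_kg_m3: float = 1000.0,
                       underestimate_fraction: float = 0.21) -> float:
    """Body mass (kg) from an (N, 3) point cloud in metres."""
    hull_volume_m3 = ConvexHull(points).volume          # minimum convex hull volume
    hull_mass = hull_volume_m3 * density_kg_m3          # mass of the hull at assumed density
    return hull_mass / (1.0 - underestimate_fraction)   # correct the systematic underestimate

# toy example: a roughly 2 m x 1 m x 1 m cloud of points
rng = np.random.default_rng(1)
cloud = rng.uniform([0.0, 0.0, 0.0], [2.0, 1.0, 1.0], size=(2000, 3))
print(round(estimate_body_mass(cloud)), "kg")
```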


Genome, 1989, Vol 31 (1), pp. 422-425
Author(s): Reinhard Schuh, Herbert Jäckle

The conventional technique for assigning a particular genetic function to a cloned transcription unit has relied on the rescue of the mutant phenotype by germ line transformation. An alternative approach is to mimic a mutant phenotype by the use of antisense RNA injections to produce phenocopies. This approach has been successfully used to identify genes involved in early pattern-forming processes in the Drosophila embryo. At the time when antisense RNA is injected, the embryo develops as a syncytium composed of about 5000 nuclei which share a common cytoplasm. The gene interactions required to establish the body plan occur before cellularization at the blastoderm stage. Thus the nuclei and their exported transcripts are accessible to the injected antisense RNA. The antisense RNA interferes with the endogenous RNA by an as yet unidentified mechanism. The extent of interference is only partial and produces phenocopies with characteristics of weak mutant alleles. In our lab and others, this approach has been successfully used to identify several genes required for normal Drosophila pattern formation.
Key words: Drosophila segmentation, phenocopy, antisense RNA, Krüppel gene.


1982, Vol 26 (5), pp. 435-435
Author(s): Dennis B. Beringer, Susan R. Maxwell

Interest in optimized human-computer interfaces has resulted in the development of a number of interesting devices that allow the computer and human operator to interact through a common drawing surface. These devices include the lightpen, lightgun (Goodwin, 1975), and a variety of touch-sensitive display overlay devices. Although touch devices were being investigated as early as 1965 (Orr and Hopkin, circa 1966), behavioral and performance data are scarce in relation to other sources of human-machine interface data. Availability of these devices has increased in the last 10 years and it is now possible to retrofit such devices to a wide variety of video display terminals at a reasonable cost. With the possibility of increased use looming on the horizon, it would be quite useful to examine the ergonomics of such devices and the behavioral adaptation or maladaptation that occurs for each user. Performance data available at this point from previous studies suggest that some positive increments in performance can be expected for graphic-based tasks, while no serious decrements should be expected for discrete data entry tasks (Beringer, 1980; Stammers and Bird, 1980). The performance gains expected from this format of interaction are not to be won without some sacrifice elsewhere, however. Positioning of the display surface for optimum viewing may cause serious operator fatigue problems after extended use of the device if the device is to be used with relatively high frequency. The relationship of device positioning, device sensing resolution, and task type is being examined as these factors contribute to the commission of errors and the onset of fatigue. Experimentation was planned to examine how positioning of the device, or what can truly be called a “control/display unit”, affected the performance of visual discrimination tasks and manual designation tasks. Initial investigations used a single task to examine these questions by requiring the operator/subject to visually detect and manually designate the location of a break in one of 54 circles presented on a color c.r.t. display (essentially a Landolt C target). Responses were accepted by an infrared touch panel mounted on the display face. The c.r.t. was placed at four declinations during the blocks of trials: 90, 67, 45, and 35 degrees to the line of sight. Although a very strong learning effect was observed over the first 8 blocks of 25 trials each, performance leveled off, on the average, beginning with the ninth block of trials. No reliable effects of screen declination were found in the examination of response times or number of errors. Responses did tend to be located slightly lower than the target, however, for the greater declinations of the display surface. Subjective reports of physical difficulty of responding and fatigue did vary regularly with declination of the display. The relatively high location of the device resulted in shoulder and arm fatigue when the display was at 90 degrees and wrist fatigue when the display was at 35 degrees. Subsequent phases of the investigation will allow subjects to adjust parameters of height and declination (Brown and Schaum, 1980) and will use hand skin temperature and quantified postural information to assess the degree of fatigue incurred during device operation.

