Trip report: the University of Maryland Human-Computer Interaction Laboratory's 17th annual symposium and open house

2000 · Vol 9 (1) · pp. 31-34
Author(s): Peter Wasilko
interactions · 2013 · Vol 20 (5) · pp. 50-57
Author(s): Ben Shneiderman, Kent Norman, Catherine Plaisant, Benjamin B. Bederson, Allison Druin, ...

Sensors · 2020 · Vol 20 (8) · pp. 2308
Author(s): Dilana Hazer-Rau, Sascha Meudt, Andreas Daucher, Jennifer Spohrs, Holger Hoffmann, ...

In this paper, we present a multimodal dataset for affective computing research acquired in a human-computer interaction (HCI) setting. An experimental mobile and interactive scenario was designed and implemented based on a gamified generic paradigm for the induction of dialog-based, HCI-relevant emotional and cognitive load states. It consists of six experimental sequences, inducing Interest, Overload, Normal, Easy, Underload, and Frustration. Each sequence is followed by subjective feedback to validate the induction, a respiration baseline to level off the physiological reactions, and a summary of results. Further, prior to the experiment, three questionnaires related to emotion regulation (ERQ), emotional control (TEIQue-SF), and personality traits (TIPI) were collected from each subject to evaluate the stability of the induction paradigm. Based on this HCI scenario, the University of Ulm Multimodal Affective Corpus (uulmMAC), consisting of two homogeneous samples of 60 participants and 100 recording sessions, was generated. We recorded 16 sensor modalities, including 4 × video, 3 × audio, and 7 × biophysiological, depth, and pose streams. Additional labels and annotations were also collected. After recording, all data were post-processed and checked for technical and signal quality, resulting in the final uulmMAC dataset of 57 subjects and 95 recording sessions. The evaluation of the reported subjective feedback shows significant differences between the sequences, consistent with the induced states, and the analysis of the questionnaires shows stable results. In summary, our uulmMAC database is a valuable contribution to the fields of affective computing and multimodal data analysis: acquired in a mobile interactive scenario close to real HCI, it covers a large number of subjects and allows transtemporal investigations.
Validated via subjective feedback and checked for quality issues, it can be used for affective computing and machine learning applications.
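The six-sequence session structure described in this abstract can be modeled as a simple record. The sketch below is purely illustrative: the class, field names, and completeness check are assumptions for exposition, not the corpus's actual schema or API.

```python
from dataclasses import dataclass, field

# The six induced states named in the abstract, in sequence order.
SEQUENCES = ["Interest", "Overload", "Normal", "Easy", "Underload", "Frustration"]

@dataclass
class SessionRecord:
    """Hypothetical record for one uulmMAC-style recording session."""
    subject_id: str
    modalities: dict = field(default_factory=dict)  # e.g. {"video_1": "path/to/file"}
    feedback: dict = field(default_factory=dict)    # sequence name -> subjective rating

    def is_complete(self) -> bool:
        # A session is usable only if subjective feedback exists for all six sequences.
        return all(seq in self.feedback for seq in SEQUENCES)

session = SessionRecord(subject_id="S001")
for seq in SEQUENCES:
    session.feedback[seq] = 3  # placeholder rating
print(session.is_complete())  # True
```

A check like `is_complete` mirrors the paper's post-recording quality screening, which reduced the corpus from 60 to 57 usable subjects.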


Author(s): Sonia Franckel, Elizabeth Bonsignore, Allison Druin

Mobile technologies offer novel opportunities for children to express themselves in-context, seamlessly, without disrupting the flow of their formal learning activities or informal play. Most contemporary mobile devices are equipped with multimedia support that can be used to create multimodal stories that represent the rich life narratives children experience, imagine, and want to share. The authors investigated these issues over a 9-month series of participatory design sessions in the Human-Computer Interaction Lab (HCIL) at the University of Maryland. In this article, the authors describe their work with children in designing mobile tools for story creation and collaboration. Throughout this work, they asked the following questions: What stories do children want to tell, and how do they want to convey them in a mobile context? The findings suggest the need for mobile technology-based applications that support children's unique storytelling habits, particularly interruptibility and multimodality.


2011 · pp. 60-69
Author(s): Gary A. Berg

I come to the subject of this book from a very different path than most of those thinking about the use of computers in educational environments. My formal education focused originally on literature and film studies, and film production at the University of California at Berkeley, San Francisco State University, and the University of California at Los Angeles. I became professionally involved in educational administration through the backdoor of continuing education focused first on the entertainment industry, and then more broadly. It was after this combined experience of studying film and television and working in adult education that I began research in education and earned a doctorate in the field of higher education from Claremont Graduate University, with a special emphasis on distance learning. I hope that the different point of view I have developed from my eclectic background gives me the ability to make something of a unique contribution to this evolving new field. What follows is an attempt to spark a discussion that will lead to answers to the question of what are the most effective techniques for the design of computer learning environments. This is not a how-to book; we are too early in the evolutionary process of the medium to give such specific guidance. Rather, my intention is to offer some theories to elevate the thinking about computers in education. Because the subject is interdisciplinary, combining science with the humanities, the theoretical discussion draws from a broad range of disciplines: psychology, educational theory, film criticism, and computer science. The book looks at the notion of computer as medium and what such an idea might mean for education. I suggest that the understanding of computers as a medium may be a key to re-envisioning educational technology.
Oren (1995) argues that understanding computers as a medium means enlarging human-computer interaction (HCI) research to include issues such as the psychology of media, evolution of genre and form, and the societal implications of media, all of which are discussed here. Computers began to be used in educational environments much later than film, and I would have to agree with others who claim that the use of computers instructionally is still quite unsophisticated.


Author(s): Inguna Skadiņa, Didzis Goško

Human-computer interaction, especially in the form of dialogue systems and chatbots, has become extremely popular during the last decade. The dominant approach in the recent development of practical virtual assistants is the application of deep learning techniques. However, in the case of a less-resourced language (or domain), the application of deep learning can be very complicated due to the lack of necessary training data. In this paper, we discuss the possibility of applying a hybrid approach to dialogue modelling by combining a data-driven approach with a knowledge-based approach. Our hypothesis is that by combining different agents (a general-domain chatbot, a frequently-asked-questions module, and a goal-oriented virtual assistant) into a single virtual assistant, we can improve the adequacy and fluency of the conversation. We investigate the suitability of different widely used techniques in less-resourced settings. We demonstrate the feasibility of our approach for Latvian, a morphologically rich, less-resourced language, through an initial virtual assistant prototype for the student service of the University of Latvia.
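The hybrid architecture this abstract describes, multiple agents combined behind a single assistant, can be sketched as a simple cascade: try the goal-oriented assistant first, fall back to FAQ matching, then to a general-domain chatbot. All handler names, intents, and answers below are hypothetical stand-ins, not the authors' actual system.

```python
# Each agent returns an answer string, or None if it cannot handle the input.

def goal_oriented(utterance: str):
    # Task-specific assistant: answers only recognized task intents.
    if "register" in utterance.lower():
        return "To register for a course, open the student portal."
    return None

def faq_module(utterance: str):
    # Knowledge-based FAQ lookup via simple keyword matching.
    faqs = {"opening hours": "The student service is open 9:00-17:00."}
    for key, answer in faqs.items():
        if key in utterance.lower():
            return answer
    return None

def chatbot_fallback(utterance: str):
    # General-domain chatbot: always produces some reply.
    return "Could you rephrase that?"

def respond(utterance: str) -> str:
    # Cascade: first agent with a non-None answer wins.
    for agent in (goal_oriented, faq_module, chatbot_fallback):
        answer = agent(utterance)
        if answer is not None:
            return answer

print(respond("What are your opening hours?"))
```

Ordering the cascade from most specific to most general keeps the precise, knowledge-based answers ahead of the data-driven fallback, which matches the paper's motivation for hybridization in low-resource settings.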


Author(s): Vassilis Kostakos, Eamonn O’Neill

In this chapter, we present existing and ongoing research within the Human-Computer Interaction group at the University of Bath into the development of novel interaction techniques. With our research, we aim to improve the way in which users interact with mobile and pervasive systems. More specifically, we present work in three broad categories of interaction: stroke interaction, kinaesthetic interaction, and text entry. Finally, we describe some of our currently ongoing work as well as planned future work.


2017 · Vol 59 (5)
Author(s): Enkelejda Kasneci

Abstract: The human gaze provides paramount cues for communication and interaction. Following this insight, gaze-based interfaces have been proposed for human-computer interaction (HCI) since the early 1990s, with some believing that such interfaces will revolutionize the way we interact with our devices. Since then, gaze-based HCI in stationary scenarios (e.g., desktop computing) has been rapidly maturing, and the production costs of mainstream eye trackers have been steadily decreasing. In consequence, a variety of new applications with the ambitious goal of applying eye tracking to dynamic, real-world HCI tasks and scenarios have emerged. This article gives an overview of the research conducted by the Perception Engineering Group at the University of Tübingen.
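A staple gaze-based interaction technique of the kind this line of work builds on is dwell-time selection: a target counts as "clicked" once the gaze rests on it for a fixed duration. The sketch below is a generic illustration of that idea, not a description of the Perception Engineering Group's methods; the sample format and threshold are assumptions.

```python
DWELL_THRESHOLD = 0.5  # seconds of continuous fixation required to select

def detect_dwell(samples, threshold=DWELL_THRESHOLD):
    """samples: list of (timestamp_seconds, target_id or None) gaze samples.

    Returns the first target fixated continuously for at least `threshold`
    seconds, or None if no dwell completes.
    """
    current, start = None, None
    for t, target in samples:
        if target is not None and target == current:
            if t - start >= threshold:
                return target  # dwell completed on this target
        else:
            # Gaze moved to a new target (or off-target): restart the timer.
            current, start = target, t
    return None

# Gaze rests on button_A from t=0.0 to t=0.6, then leaves.
gaze = [(0.0, "button_A"), (0.2, "button_A"), (0.6, "button_A"), (0.7, None)]
print(detect_dwell(gaze))  # button_A
```

Choosing the dwell threshold is the classic trade-off in such interfaces: too short triggers unintended selections (the "Midas touch" problem), too long makes interaction sluggish.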

