The Quick Brown Fox Jumps Over the Lazy Dog: Perec, Description and the Scene of Everyday Computer Use

Author(s):  
Rowan Wilken

This chapter takes up Georges Perec’s call to ‘question the habitual’ and applies it to the scene of everyday computer use. My questioning of habituated computer use is framed, first, in relation to human-computer interaction (HCI) research on skilled typing and, second, in relation to computer-based typing and everyday computer use. The central argument of this chapter is that Perec’s use of description offers an innovative method for generating new insights into the material contexts and conditions of media use, and can assist us in grasping the fuller significance of our ‘infraordinary’ techno-somatic interactions with keyboards and screens, and of the places and situational contexts in which these interactions occur.

Author(s):  
Tanveer J. Siddiqui ◽  
Uma Shanker Tiwary

Spoken dialogue systems are a step towards the realization of human-like interaction with computer-based systems. This chapter focuses on issues related to spoken dialogue systems. It presents a general architecture for spoken dialogue systems for human-computer interaction, describes its components, and highlights the key research challenges they present. One important variation in the architecture is the modeling of knowledge as a separate component. This is unlike existing dialogue systems, in which knowledge is usually embedded within other components; the separation makes the architecture more general. The chapter also discusses some of the existing evaluation methods for spoken dialogue systems.
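The architectural idea the abstract highlights, keeping domain knowledge in its own component rather than embedding it in the dialogue manager, can be sketched minimally as follows. All class and method names here are illustrative assumptions, not the chapter’s actual design:

```python
# Hypothetical sketch: knowledge lives in a separate component, and the
# dialogue manager consults it only through a narrow query interface.

class KnowledgeComponent:
    """Domain knowledge kept apart from the other dialogue components."""
    def __init__(self, facts):
        self.facts = dict(facts)

    def query(self, key):
        return self.facts.get(key, "unknown")


class DialogueManager:
    """Decides the next system utterance; contains no domain facts itself."""
    def __init__(self, knowledge):
        self.knowledge = knowledge

    def respond(self, slot):
        answer = self.knowledge.query(slot)
        if answer == "unknown":
            return f"Sorry, I have no information about {slot}."
        return f"The {slot} is {answer}."


# Swapping the knowledge component retargets the system to a new domain
# without touching the dialogue manager -- the generality argued for above.
weather_kb = KnowledgeComponent({"temperature": "21 degrees"})
dm = DialogueManager(weather_kb)
print(dm.respond("temperature"))  # The temperature is 21 degrees.
```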


2009 ◽  
pp. 1436-1458
Author(s):  
Wolfgang Hürst ◽  
Khaireel A. Mohamed

This chapter focuses on HCI aspects to overcome problems arising from technologies and applications that may hinder the normal teaching process in ICT-ready classrooms. It investigates different input devices in terms of their usage and interactivity for classroom teaching, and argues that pen-based computing is the mode of choice for lecturing in modern lecture halls. It also discusses the software design of the interface, where digital ink, as a “first class” data type, is used to communicate visual contents and interact with the ICT.
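Treating digital ink as a “first class” data type means the interface stores and manipulates structured pen strokes rather than raw pixels. A minimal sketch of what such a type might look like, with field and method names that are purely illustrative assumptions:

```python
# Hypothetical ink-stroke type: a stroke is a sequence of timestamped pen
# samples that the interface can store, replay, and query geometrically.
from dataclasses import dataclass, field

@dataclass
class InkStroke:
    points: list = field(default_factory=list)  # (x, y, timestamp) pen samples

    def add_point(self, x, y, t):
        self.points.append((x, y, t))

    def bounding_box(self):
        """Axis-aligned extent of the stroke, e.g. for hit-testing or layout."""
        xs = [p[0] for p in self.points]
        ys = [p[1] for p in self.points]
        return (min(xs), min(ys), max(xs), max(ys))


stroke = InkStroke()
stroke.add_point(10, 20, 0.0)
stroke.add_point(30, 25, 0.1)
print(stroke.bounding_box())  # (10, 20, 30, 25)
```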


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Junhao Huang ◽  
Zhicheng Zhang ◽  
Guoping Xie ◽  
Hui He

Noncontact human-computer interaction has important value in wireless sensor networks. This work aims to achieve accurate interaction with a computer through automatic eye control, using a cheap webcam as the video source. A real-time, accurate human-computer interaction system based on eye state recognition, rough gaze estimation, and tracking is proposed. First, binary classification of the eye states (open or closed) is carried out using an SVM classifier with HOG features of the input eye image. Second, rough appearance-based gaze estimation is implemented with a simple CNN model, and the head pose is estimated to judge whether or not the user is facing the screen. Based on these recognition results, noncontact mouse-control and character-input methods are designed and developed to replace the standard mouse and keyboard hardware. The accuracy and speed of the proposed interaction system were evaluated with four subjects. The experimental results show that, using only a common monocular camera, users can achieve gaze estimation and tracking and most functions of real-time, precise human-computer interaction through automatic eye control.
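The first stage of the pipeline, open-vs-closed eye classification with an SVM over HOG features, can be sketched as below. This is only an illustrative reconstruction: real webcam eye crops are replaced with synthetic 32×32 grayscale patches, and the HOG and SVM parameters are assumptions, not the authors’ settings.

```python
# Hypothetical sketch of SVM-on-HOG eye-state classification.
# Synthetic data: "open" eyes get a dark circular pupil, "closed" eyes a
# thin horizontal lid line, both over light noise.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def synthetic_eye(open_eye):
    img = rng.normal(0.8, 0.05, (32, 32))
    if open_eye:
        yy, xx = np.mgrid[:32, :32]
        img[(yy - 16) ** 2 + (xx - 16) ** 2 < 36] = 0.1  # pupil
    else:
        img[15:17, 4:28] = 0.2  # closed lid
    return np.clip(img, 0, 1)

def features(img):
    # HOG over 8x8-pixel cells captures the gradient-orientation pattern.
    return hog(img, orientations=8, pixels_per_cell=(8, 8),
               cells_per_block=(1, 1))

X, y = [], []
for label in (0, 1):  # 0 = closed, 1 = open
    for _ in range(20):
        X.append(features(synthetic_eye(bool(label))))
        y.append(label)

clf = SVC(kernel="linear").fit(X, y)
pred_open = clf.predict([features(synthetic_eye(True))])[0]
print("predicted state:", "open" if pred_open == 1 else "closed")
```

On these cleanly separable synthetic patches a linear SVM suffices; on real webcam crops the paper’s later stages (gaze CNN, head-pose check) handle the ambiguity this toy setup avoids.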



2012 ◽  
Vol 3 (2) ◽  
pp. 48-67 ◽  
Author(s):  
Lena Quinto ◽  
William Forde Thompson

Most people communicate emotion through their voice, facial expressions, and gestures. However, it is often assumed that only “experts” can communicate emotion through music. The authors have developed a computer-based system that enables musically untrained users to select relevant acoustic attributes to compose emotional melodies. Nonmusicians (Experiment 1) and musicians (Experiment 3) were progressively presented with pairs of melodies that each differed in one acoustic attribute (e.g., intensity: loud vs. soft). For each pair, participants chose the melody that most strongly conveyed a target emotion (anger, fear, happiness, sadness, or tenderness). Once all decisions were made, a final melody combining all of the choices was generated. The system allowed both untrained and trained participants to compose a range of emotional melodies. New listeners successfully decoded the emotional melodies of nonmusicians (Experiment 2) and musicians (Experiment 4). The results indicate that human-computer interaction can facilitate the composition of emotional music by musically untrained and trained individuals.
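The pairwise selection procedure described above reduces composition to a series of two-alternative choices, one per acoustic attribute, whose winners are combined into the final melody. A minimal sketch of that loop, with attribute names and values that are illustrative assumptions rather than the study’s actual parameter set:

```python
# Hypothetical sketch of attribute-by-attribute melody composition:
# for each attribute the participant picks the variant that better
# conveys the target emotion; the final melody combines all winners.

ATTRIBUTE_PAIRS = {
    "intensity": ("loud", "soft"),
    "tempo": ("fast", "slow"),
    "pitch_register": ("high", "low"),
    "mode": ("major", "minor"),
}

def compose_melody(choose):
    """choose(attribute, option_a, option_b) -> the preferred option."""
    return {attr: choose(attr, a, b) for attr, (a, b) in ATTRIBUTE_PAIRS.items()}

# Simulated participant targeting "sadness": prefers soft, slow, low, minor.
sad_preferences = {"intensity": "soft", "tempo": "slow",
                   "pitch_register": "low", "mode": "minor"}
melody_spec = compose_melody(lambda attr, a, b: sad_preferences[attr])
print(melody_spec)
# {'intensity': 'soft', 'tempo': 'slow', 'pitch_register': 'low', 'mode': 'minor'}
```

Because each decision is independent, musically untrained users never need to reason about the melody as a whole, which is what makes the interaction tractable for nonmusicians.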

