Sketching in Knowledge Creation and Management

Author(s):  
Fernando Ferri ◽  
Patrizia Grifoni

A sketch is a schematic representation of an image containing a set of objects or concepts. When people need to express and communicate a new idea, they often sketch a rough picture to represent it. Drawing a sketch helps to develop and explore new ideas, and it enables useful reflection on an idea, elaborating possible alternatives and promoting its evolution. The development of different interaction and communication tools has drawn new attention to more natural interaction and communication modalities, including sketching. Hand-drawn sketching is an easy and intuitive way to communicate with others, and it also simplifies human-computer interaction.



Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2690 ◽  
Author(s):  
Won-Du Chang

Eye movements generate electric signals, which a user can employ to control his or her environment and communicate with others. This paper presents a review of previous studies on such electric signals, that is, electrooculograms (EOGs), from the perspective of human–computer interaction (HCI). EOGs are one of the easiest means of estimating eye movements with a low-cost device, and they have often been considered and utilized for HCI applications such as typing on a virtual keyboard, moving a mouse, or controlling a wheelchair. The objective of this study is to summarize the experimental procedures of previous studies and provide a guide for researchers interested in this field. The basic characteristics of EOGs, the associated measurements, and the signal processing and pattern recognition algorithms are briefly reviewed, and various applications reported in the existing literature are listed. It is expected that EOGs will be a useful source of communication in virtual reality environments and can act as a valuable communication tool for people with amyotrophic lateral sclerosis.
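The core idea behind many EOG-based interfaces is that a saccade produces a sharp deflection in the measured potential, which can be picked up by thresholding the signal's first difference. The following is a minimal sketch of that idea under assumed conditions (a single pre-filtered horizontal channel, a hand-chosen threshold); the function name and synthetic signal are illustrative, and real systems add band-pass filtering, baseline-drift removal, and per-user calibration.

```python
# Minimal sketch: detect left/right saccades in a 1-D horizontal EOG
# channel by thresholding the first difference of the signal.

def detect_saccades(eog, threshold):
    """Return (sample_index, direction) pairs for samples whose
    first difference exceeds the threshold in magnitude."""
    events = []
    for i in range(1, len(eog)):
        delta = eog[i] - eog[i - 1]
        if delta > threshold:
            events.append((i, "right"))   # positive deflection
        elif delta < -threshold:
            events.append((i, "left"))    # negative deflection
    return events

# Synthetic signal: flat baseline, one rightward then one leftward saccade.
signal = [0.0] * 5 + [0.5] * 5 + [0.0] * 5
print(detect_saccades(signal, 0.3))  # [(5, 'right'), (10, 'left')]
```

In a virtual-keyboard application, each detected event would be mapped to a cursor step or key selection; the threshold trades off sensitivity against false triggers from blinks and drift.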


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Lei Yu ◽  
Junyi Hou

Large-screen human-computer interaction technology appears in many aspects of daily life. The dynamic gesture tracking algorithms commonly used in recent large-screen interactive technologies demonstrate compelling results but suffer from accuracy and real-time problems. This paper addresses these issues systematically with a switching federated filter method that combines particle filtering and mean-shift algorithms based on a 3D sensor. Compared with several existing algorithms, the results show that both one-hand and two-hand large-screen gesture tracking based on the switching federated filtering algorithm work well, with no tracking failures or loss of the target. Therefore, the switching federated tracking and positioning algorithm can be applied to the design of human-computer interaction systems and provides new ideas for future human-computer interaction.
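The switching idea can be illustrated with a toy sketch: a fast mean-shift tracker runs by default, and the system would hand over to the particle filter when the mean-shift confidence drops. This is not the paper's implementation; the 1-D mean-shift mode seek, the inlier-ratio confidence measure, and the threshold are all simplifying assumptions, and the particle-filter branch is stubbed out.

```python
# Toy sketch of a switching tracker: mean shift while confident,
# particle filter (stubbed) as the fallback branch.

def mean_shift_1d(samples, start, bandwidth, iters=20):
    """Shift `start` toward the densest cluster of 1-D `samples`.
    Returns the converged position and an inlier-ratio confidence."""
    x = start
    window = []
    for _ in range(iters):
        window = [s for s in samples if abs(s - x) <= bandwidth]
        if not window:
            return x, 0.0                     # target lost: zero confidence
        x = sum(window) / len(window)
    return x, len(window) / len(samples)

def track(samples, start, bandwidth, conf_threshold=0.2):
    x, conf = mean_shift_1d(samples, start, bandwidth)
    if conf < conf_threshold:
        # In a federated scheme, the particle filter would take over here.
        raise NotImplementedError("switch to particle filter")
    return x

# Hand-centroid measurements clustered near 10.0, with one outlier at 40.0.
print(track([9.8, 10.1, 10.3, 9.9, 40.0], start=9.0, bandwidth=1.0))  # converges near 10.0
```

The switching logic is what makes the federated scheme robust: mean shift is cheap per frame, while the particle filter recovers the target when fast motion or occlusion empties the mean-shift window.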


Author(s):  
George Tzanetakis

The playing of a musical instrument is one of the most skilled and complex interactions between a human and an artifact. Professional musicians spend a significant part of their lives initially learning their instruments and then perfecting their skills. The production, distribution, and consumption of music have been profoundly transformed by digital technology. Today music is recorded and mixed using computers, distributed through online stores and streaming services, and heard on smartphones and portable music players. Computers have also been used to synthesize new sounds, generate music, and even create sound acoustically in the field of music robotics. Despite all these advances, the way musicians interact with computers has remained relatively unchanged in the last 20-30 years. Most interaction with computers in the context of music making still occurs either through the familiar mouse/keyboard/screen setup or through special digital musical instruments and controllers such as keyboards, synthesizers, and drum machines. The string, woodwind, and brass families of instruments do not have widely available digital counterparts, and in the few cases where they do, the digital version is nowhere near as expressive as the acoustic one. It is possible to retrofit and augment existing acoustic instruments with digital sensors in order to create what are termed hyper-instruments. These hyper-instruments allow musicians to interact naturally with their instrument as they are accustomed to, while at the same time transmitting information about what they are playing to computing systems. This approach requires significant alterations to the acoustic instrument, which many musicians are hesitant to make. In addition, hyper-instruments are typically one-of-a-kind research prototypes, making their wider adoption practically impossible.
In the past few years researchers have started exploring the use of non-invasive and minimally invasive sensing technologies that address these two limitations by allowing unmodified acoustic instruments to be used directly as digital controllers. This enables natural human-computer interaction with all the rich and delicate control of acoustic instruments, while retaining the wide array of possibilities that digital technology can provide. In this chapter, an overview of these efforts is provided, followed by more detailed case studies from research conducted by the author's group. This natural interaction blurs the boundaries between the virtual and physical world, something that will increasingly happen in other aspects of human-computer interaction beyond music. It also opens up new possibilities for computer-assisted music tutoring, cyber-physical ensembles, and assistive music technologies.


JURTEKSI ◽  
2019 ◽  
Vol 5 (1) ◽  
pp. 29-36
Author(s):  
I Komang Setia Buana

Abstract: Diffable, a term defined as "Differently Abled People," refers to people with disabilities. One example is people who have no hands, and who therefore must use their feet even to write. As computer technology advances, its role in serving human needs has also grown. One such field is human-computer interaction (in Indonesian, interaksi manusia dan komputer, IMK; in English, Human Computer Interaction, HCI). The keyboard, mouse, and joystick are among the mechanical hardware devices most often used for interaction between humans and computers. Although such equipment is accurate and reliable, the interaction model it supports is not as natural as the way humans interact with one another, and operating it requires direct contact between the user and the computer. For people with disabilities who have no hands, this is difficult. Computer-vision-based interaction techniques are candidates for natural interaction. The human head can replace the function of a mouse, moving the cursor up, down, left, or right, while a mouse click is performed by blinking an eye. Detection based on head movements has been widely applied in fields including entertainment, education, and security. A camera (webcam) is the tool used for head recognition, serving as the sensor that detects head movements. Head motion detection is implemented using OpenCV with Python.
Keywords: diffable, head, webcam, OpenCV, Python
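Once the head has been detected in a camera frame (the abstract uses OpenCV for this step), moving the cursor reduces to mapping the centre of the detected bounding box to screen coordinates. The sketch below shows only that mapping under stated assumptions: the detection step is taken as given, the function name and the mirror-flip convention are illustrative, and blink-based clicking is omitted.

```python
# Minimal sketch: map the centre of a detected head bounding box
# (x, y, w, h) in a camera frame to a cursor position on screen.

def head_to_cursor(box, frame_size, screen_size):
    """Map a head bounding box to screen coordinates, mirroring the
    x axis so that moving the head left moves the cursor left."""
    x, y, w, h = box
    fw, fh = frame_size
    sw, sh = screen_size
    cx = x + w / 2          # centre of the detected box
    cy = y + h / 2
    # Mirror horizontally: the webcam image is a mirror of the user.
    sx = (1 - cx / fw) * sw
    sy = (cy / fh) * sh
    return int(sx), int(sy)

# A head centred in a 640x480 frame maps to the centre of a 1920x1080 screen.
print(head_to_cursor((280, 200, 80, 80), (640, 480), (1920, 1080)))  # (960, 540)
```

Practical systems smooth the cursor over several frames and add a dead zone around the resting head position, since raw detections jitter by a few pixels from frame to frame.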


Author(s):  
Michael Weber ◽  
Marc Hermann

This chapter gives an overview of the broad range of interaction techniques for use in ubiquitous computing. It gives a short introduction to the fundamentals of human-computer interaction and traditional user interfaces, surveys multi-scale output devices, gives a general idea of hands-and-eyes input, specializes these techniques by merging the virtual and real world, and introduces attention and affection as ways of enhancing interaction with computers, especially with disappearing computers. The human-computer interaction techniques surveyed here help support Weiser's idea of ubiquitous computing (1991) and calm technology (Weiser & Brown, 1996), and they result in more natural interaction techniques than purely graphical user interfaces allow. This chapter will thus first introduce the basic principles of human-computer interaction from a cognitive perspective, but aimed at computer scientists. The human-computer interaction cycle brings us to a discussion of input and output devices and their characteristics within this cycle. The interrelation of the physical and virtual world as we see it in ubiquitous computing has its predecessors in the domain of virtual and augmented realities, where specific hands-and-eyes interaction techniques and technologies have been developed. The next step will be attentive and affective user interfaces and the use of tangible objects manipulated directly without dedicated I/O devices.


2011 ◽  
Vol 219-220 ◽  
pp. 1317-1320 ◽  
Author(s):  
Guo Wei Gao ◽  
Xin Yu Duan

In spite of the growing focus on human-computer interaction design among researchers and practitioners, one large user population remains generally overlooked: people with physical disabilities. Most HCI approaches for disabled users require wearing extra instruments, whereas camera-based HCI is easy, efficient, and comfortable. In this paper, we combine applied and basic research, both drawing from psychological research and contributing new ideas, to compare several camera-based HCI approaches. We discuss users, products, and technology, highlighting challenges, open issues, and emerging applications of camera-based HCI for disabled users.

