human computer interfaces
Recently Published Documents

TOTAL DOCUMENTS: 356 (FIVE YEARS: 51)
H-INDEX: 23 (FIVE YEARS: 1)

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
M. Thilagaraj ◽  
B. Dwarakanath ◽  
S. Ramkumar ◽  
K. Karthikeyan ◽  
A. Prabhu ◽  
...  

Human–computer interfaces (HCIs) allow people to control electronic devices such as computers, mice, wheelchairs, and keyboards through biosignal channels, bypassing the motor nervous system. These signals permit communication between people and electronically controllable devices, and the resulting HCIs can improve the lives of paralyzed patients whose cognitive functioning is intact. The main aim of this study is to test the feasibility of a nine-state HCI built with modern techniques to address the problems faced by paralyzed users. An Analog Digital Instrument T26 with a five-electrode system was used. Twenty subjects participated voluntarily in this study. The extracted signals were preprocessed with a 50 Hz notch filter to remove external interference, and features were extracted by applying the convolution theorem. The extracted features were then classified using an Elman recurrent neural network (ERNN) and a distributed time-delay neural network. Average classification accuracies of 90.82% and 90.56% were achieved with the two network models. Classifier accuracy was analyzed with single-trial analysis, and classifier performance was assessed using the bit transfer rate (BTR) for the twenty subjects to check the feasibility of the HCI design. The results showed that for most subjects the ERNN model has greater potential to classify, identify, and recognize the EOG signal than the distributed time-delay network. The signals generated by the classifiers were applied as control signals to navigate assistive devices such as a mouse, keyboard, and wheelchair for disabled people.
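The preprocessing step described in this abstract (a 50 Hz notch filter to suppress mains interference) can be sketched as follows. The sampling rate, quality factor, and function name are illustrative assumptions; the abstract reports only that a 50 Hz notch filter was applied.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def preprocess_eog(signal, fs=250.0, notch_freq=50.0, q=30.0):
    """Zero-phase notch filtering to suppress 50 Hz mains interference.

    fs (sampling rate) and q (quality factor) are assumed values, not
    parameters stated in the paper.
    """
    b, a = iirnotch(notch_freq, q, fs)  # narrow IIR notch at notch_freq
    return filtfilt(b, a, signal)       # forward-backward pass: no phase lag
```

Feature extraction (via the convolution theorem) and classification with the Elman and distributed time-delay networks would follow this step.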


2021 ◽  
Vol 10 (1) ◽  
pp. 7
Author(s):  
Jose A. Amezquita-Garcia ◽  
Miguel E. Bravo-Zanoguera ◽  
Roberto L. Avitia ◽  
Marco A. Reyna ◽  
Daniel Cuevas-González

Classifiers are commonly generated to control multifunctional prostheses or to serve as input devices in human–computer interfaces. Here, the complementary use of the open-access biomechanical simulation software OpenSim is demonstrated for visualizing hand-movement classification performance. A classifier was created from a previously captured database of 15 finger movements, acquired during synchronized hand-movement repetitions with an 8-electrode sensor array placed on the forearm; 92.89% recognition over complete movements was obtained. OpenSim's upper-limb wrist model is employed, with movement in each of the hand and finger joints. Several hand-motion visualizations were then generated: for the ideal hand movements, and for the best and the worst (53.03%) reproductions, to make the classification error in a specific task movement perceptible. This demonstrates the usefulness of this simulation tool before applying the classifier to a multifunctional prosthesis.


2021 ◽  
Vol 5 (10) ◽  
pp. 64
Author(s):  
Miguel Angel Garcia-Ruiz ◽  
Bill Kapralos ◽  
Genaro Rebolledo-Mendez

This paper describes an overview of olfactory displays (human–computer interfaces that generate and diffuse an odor to a user to stimulate their sense of smell) that have been proposed and researched for supporting education and training. Past research has shown that olfaction (the sense of smell) can support memorization of information, stimulate information recall, and help immerse learners and trainees into educational virtual environments, as well as complement and/or supplement other human sensory channels for learning. This paper begins with an introduction to olfaction and olfactory displays, and a review of techniques for storing, generating and diffusing odors at the computer interface. The paper proceeds with a discussion on educational theories that support olfactory displays for education and training, and a literature review on olfactory displays that support learning and training. Finally, the paper summarizes the advantages and challenges regarding the development and application of olfactory displays for education and training.


2021 ◽  
Author(s):  
Maria Poli

Human activities are now entering a stage of digitalization. The introduction of information technology, along with new forms of communication, influences many forms of human action and centers on the integration and convergence of the digital and physical worlds. According to studies on the adoption of new smart technologies, the use of more intelligent electronic solutions improves the lives of people around the world. Artificial and ambient intelligence are receiving more and more attention in the development of smart digital environments. Smart cities designed for all must aim to reduce disparities through smart technology, making cities both smart and accessible to a range of users regardless of their abilities or disabilities. The birth of artificial intelligence (AI) has facilitated the complex computations needed for reality simulation, and, combined with the new era of wireless 5G communication, has raised hopes for a better future: to reverse disability and empower humans with more capabilities, to be faster and stronger than they could ever be. This paper provides an overview of ambient intelligence and smart environments, of how technological advancements will benefit everyday device use in common spaces such as homes or offices, and of how devices will interact and serve as part of an intelligent ecosystem by bringing together resources such as networks, sensors, human–computer interfaces, and pervasive computing.


2021 ◽  
Vol 11 (18) ◽  
pp. 8531
Author(s):  
Tim Murray-Browne ◽  
Panagiotis Tigas

Most Human–Computer Interfaces are built on the paradigm of manipulating abstract representations. This can be limiting when computers are used in artistic performance or as mediators of social connection, where we rely on qualities of embodied thinking: intuition, context, resonance, ambiguity and fluidity. We explore an alternative approach to designing interaction that we call the emergent interface: interaction leveraging unsupervised machine learning to replace designed abstractions with contextually derived emergent representations. The approach offers opportunities to create interfaces bespoke to a single individual, to continually evolve and adapt the interface in line with that individual’s needs and affordances, and to bridge more deeply with the complex and imprecise interaction that defines much of our non-digital communication. We explore this approach through artistic research rooted in music, dance and AI with the partially emergent system Sonified Body. The system maps the moving body into sound using an emergent representation of the body derived from a corpus of improvised movement from the first author. We explore this system in a residency with three dancers. We reflect on the broader implications and challenges of this alternative way of thinking about interaction, and how far it may help users avoid being limited by the assumptions of a system’s designer.


2021 ◽  
Vol 2 ◽  
Author(s):  
Aalap Herur-Raman ◽  
Neil D. Almeida ◽  
Walter Greenleaf ◽  
Dorian Williams ◽  
Allie Karshenas ◽  
...  

In recent years, the advancement of eXtended Reality (XR) technologies, including Virtual and Augmented Reality (VR and AR, respectively), has created new human–computer interfaces that come increasingly closer to replicating natural human movements, interactions, and experiences. In medicine, there is a need for tools that accelerate learning and enhance the realism of training as medical procedures and responsibilities become increasingly complex and time constraints are placed on trainee work. XR and other novel simulation technologies are now being adapted for medical education and are enabling further interactivity, immersion, and safety in medical training. In this review, we investigate efforts to adopt XR into medical education curricula and simulation labs to help trainees enhance their understanding of anatomy, practice empathetic communication, rehearse clinical procedures, and refine surgical skills. Furthermore, we discuss the current state of the field of XR technology and highlight the advantages of virtual immersive teaching tools in light of the COVID-19 pandemic. Finally, we lay out a vision for the next generation of medical simulation labs using XR devices, summarizing best practices from our own and others' experiences.


Author(s):  
Amrita Maguire ◽  
Dan Odell ◽  
Christy Harper ◽  
Michael Bartha ◽  
Scott Openshaw ◽  
...  

There are many challenges that researchers face when adapting from academic backgrounds to industry. How do we train newcomers to this field to focus on goals in context of their business’s needs? How do we ensure impact early in their career? How do we learn to look beyond the process, methods, mindset, and story-telling, to delivering on corporations’ anticipated needs? What are the challenges when mandating practitioners’ research to translate to actionable items? How do practitioners drive impact that brings the desired value to their corporations? How does one encourage user experience (UX) as an integral process within corporations’ development plans? This panel of practitioners will share the trials and tribulations they have encountered while successfully navigating their respective Human Factors and Ergonomics (HFE) careers. This panel represents peers with diverse experiences from careers in technology, product design, human-computer interfaces (HCI), medical devices, usability testing, and human factors research.


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4810
Author(s):  
Cleiton Pons Ferreira ◽  
Carina Soledad González-González ◽  
Diana Francisca Adamatti

This article presents a systematic review of studies to answer the question: what research relates the learning process in (serious) business games to data collection with electroencephalogram (EEG) or eye-tracking signals? The PRISMA declaration method was used to guide the search and inclusion of works. The 19 references that survived critical evaluation initially point to a gap in investigations using these devices to monitor serious games for learning in organizational environments. A comparison with equivalent sensing studies in serious games for developing skills and competencies indicates that continuous monitoring measures, such as mental state and eye fixation, effectively identify players' attention levels. These studies also proved effective at tracking flow at different moments of the task, motivating and justifying their replication as a source of insights for the optimized design of business learning tools. This study is the first systematic review to consolidate the existing literature on user-experience analysis of business simulation games supported by human–computer interfaces.


2021 ◽  
Vol 18 (3) ◽  
pp. 1-22
Author(s):  
Charlotte M. Reed ◽  
Hong Z. Tan ◽  
Yang Jiao ◽  
Zachary D. Perez ◽  
E. Courtenay Wilson

Stand-alone devices for tactile speech reception serve a need as communication aids for persons with profound sensory impairments, as well as in applications such as human–computer interfaces and remote communication when the normal auditory and visual channels are compromised or overloaded. The current research concerns perceptual evaluations of a phoneme-based tactile speech communication device in which a unique tactile code was assigned to each of the 24 consonants and 15 vowels of English. The tactile phonemic display was conveyed through an array of 24 tactors that stimulated the dorsal and ventral surfaces of the forearm. Experiments examined the recognition of individual words as a function of the inter-phoneme interval (Study 1) and of two-word phrases as a function of the inter-word interval (Study 2). Following an average training period of 4.3 hours on phoneme and word recognition tasks, mean scores for the recognition of individual words in Study 1 fell from 87.7% to 74.3% correct as the inter-phoneme interval decreased from 300 to 0 ms. In Study 2, following an average of 2.5 hours of training on the two-word phrase task, both words in a phrase were identified with 75% accuracy using an inter-word interval of 1 s and an inter-phoneme interval of 150 ms. Effective transmission rates achieved on this task were estimated to be on the order of 30 to 35 words/min.
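Transfer rates for such selection-based interfaces are commonly estimated with the Wolpaw information-transfer-rate formula. The sketch below assumes that formula, applied to the 39-phoneme inventory (24 consonants + 15 vowels) and the 75% accuracy reported in the abstract; the study's exact estimation method is not given here.

```python
import math

def wolpaw_itr_bits(n_classes: int, accuracy: float) -> float:
    """Bits conveyed per selection under the Wolpaw ITR model.

    Assumes errors are spread evenly over the n_classes - 1 wrong
    classes; a standard estimate, not necessarily the method used
    in the study.
    """
    p, n = accuracy, n_classes
    if p >= 1.0:
        return math.log2(n)
    if p <= 1.0 / n:  # at or below chance, no information is conveyed
        return 0.0
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# 39 phoneme codes recognized at 75% correct -> roughly 3.2 bits/phoneme
bits_per_phoneme = wolpaw_itr_bits(39, 0.75)
```

Multiplying bits per phoneme by phonemes presented per minute gives an overall bit rate; converting that to words/min additionally requires the average number of phonemes per word.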

