computer input
Recently Published Documents


TOTAL DOCUMENTS: 199 (FIVE YEARS: 5)
H-INDEX: 19 (FIVE YEARS: 0)

Electronics, 2021, Vol 10 (24), pp. 3078
Author(s): Huanwei Wu, Yi Han, Yanyin Zhou, Xiangliang Zhang, Jibin Yin, ...

To improve the efficiency of computer input, extensive research has been conducted on hand movement in a spatial region. Most of it has focused on the technologies rather than on users’ spatial controllability. To assess this, we analyze a user’s common operational area through partitioning, including a one-dimensional layered array and a two-dimensional spatial region array. In addition, to determine the difference in spatial controllability between sighted and visually impaired people, we designed two experiments: target selection under a visual scenario and under a non-visual scenario. Furthermore, we explored two factors: the size and the position of the target. Results showed the following: the 5 × 5 target blocks, measuring 60.8 mm × 48 mm, could be easily controlled by both the sighted and the visually impaired participants; the sighted participants could most easily select the bottom-right area, whereas for the visually impaired participants the most easily selected area was the upper right. Based on these results on users’ spatial controllability, we propose two interaction techniques (non-visual selection and a spatial gesture recognition technique for surgery) and four spatial partitioning strategies for human-computer interaction designers, which can improve users’ spatial controllability.


2021, Vol 9 (1), pp. 1-12
Author(s): Hongyun Huang, Lin Chen, Michael Chopp, Wise Young, John Robert Bach, ...

COVID-19 was an emerging and rapidly evolving risk to people across the world in 2020. Facing this dangerous situation, many colleagues in Neurorestoratology did their best to avoid infection of themselves and their patients, and continued their work in the research areas described in the 2020 Yearbook of Neurorestoratology. Neurorestorative achievements and progress during 2020 included recent findings on the pathogenesis of neurological diseases, neurorestorative mechanisms, and clinical therapeutic achievements. Therapeutic progress during the year included advances in cell therapies, neurostimulation/neuromodulation, brain-computer interfaces (BCI), and pharmaceutical neurorestorative therapies, which improved neurological function and quality of life for patients. Four clinical guidelines or standards of Neurorestoratology were published in 2020. Milestone examples include: 1) a multicenter randomized, double-blind, placebo-controlled study of olfactory ensheathing cell treatment of chronic stroke showed functional improvements; 2) patients after transhumeral amputation experienced increased sensory acuity and improved effectiveness in work and other activities of daily life when using a prosthesis; 3) a patient with amyotrophic lateral sclerosis used a steady-state visual evoked potential (SSVEP)-based BCI to achieve accurate and speedy computer input; 4) a patient with complete chronic spinal cord injury recovered both motor function and touch sensation with a BCI, restoring the ability to detect objects by touch and several sensorimotor functions. We hope these achievements motivate and encourage other scientists and physicians to increase neurorestorative research and its therapeutic applications.


Author(s): Kurt Manal, Benjamin Gillette, Ira Lockwood

Abstract Manual dexterity is key to engaging one’s environment and interfacing with technologies for communication and personal computing. Individuals with marginal or no dexterity face obstacles and barriers limiting educational opportunity, workplace productivity, independent living, and community participation. We have developed an effective and intuitive Bluetooth tongue controller (a.k.a. the “Mouth Mouse”) designed to give people with severe upper limb impairment effective control of computers, smartphones, and tablets. The device is inherently portable, requiring no external hardware or supporting software, and thus can be used virtually anywhere. Preliminary testing has shown the Mouth Mouse to be an effective computer input device. In this paper we outline key design objectives and present preliminary data demonstrating the efficacy of the Mouth Mouse as a computer input device.


2017, Vol 8 (2)
Author(s): Sharon Oviatt

Current graphical keyboard and mouse interfaces are better suited to handling mechanical tasks, like email and text editing, than to supporting focused problem solving or complex learning tasks. One reason is that graphical interfaces limit users’ ability to fluidly express content involving different representational systems (e.g., symbols, diagrams) as they think through steps during complex problem solutions. We asked: Can interfaces be designed that actively stimulate students’ ability to “think on paper,” including providing better support for both ideation and convergent problem solving? In this talk, we will summarize new research on the affordances of different types of interface (e.g., pen-based, keyboard-based), and how these basic computer input capabilities function to substantially facilitate or impede people’s ideational fluency. We also will show data on the relation between interface support for communicative fluency (i.e., both linguistic and non-linguistic forms) and ideational fluency. In addition, we’ll discuss the relation between interface support for active marking (i.e., both formal structures like diagrams, and informal ones such as “thinking marks”) and successful problem solving. Finally, we’ll present new data on interfaces that improve support for learning and performance in lower-performing populations, and we will discuss how these new directions in interface media could play a role in improving their education and minimizing the persistent achievement gap between low- versus high-performing groups.


Author(s): Adam S Mouloua, Mustapha Mouloua, Peter Hancock, Daniel McConnell

The present study examined computer user handedness on a motor task using Fitts’s Law. Results indicated that right-handed participants were significantly faster than left-handed participants when performing the motor task, as measured by the Index of Performance. This finding could be partially attributed to a mouse design that is inconsistent with differential user handedness. It could also be partially attributed to the degree of training left-handed participants received relative to their right-handed counterparts. In short, right-handed users outperformed left-handed users, perhaps because of physical design biases or relative degree of training. The present findings have practical implications for computer input devices such as game controllers, joysticks, or mice that are physically designed for right-handed users.
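The Fitts’s Law metrics named in this abstract can be illustrated with a minimal sketch. The study does not publish its formulation or values, so the Shannon formulation of the index of difficulty and the sample trial numbers below are assumptions for illustration only:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts's index of difficulty, in bits:
    ID = log2(D/W + 1), where D is movement distance and W is target width."""
    return math.log2(distance / width + 1)

def index_of_performance(distance, width, movement_time):
    """Throughput (bits/s): index of difficulty divided by movement time."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical trial: cursor travels 240 px to a 30 px-wide target in 0.9 s.
id_bits = index_of_difficulty(240, 30)      # log2(9) ≈ 3.17 bits
throughput = index_of_performance(240, 30, 0.9)  # ≈ 3.52 bits/s
```

Comparing this throughput between left- and right-handed groups on identical targets is the kind of comparison the Index of Performance supports.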


2017, Vol 3 (1)
Author(s): L. Elizabeth Crawford, Dylan T. Vavra, Jonathan C. Corbin

Experimental psychology research commonly has participants respond to stimuli by pressing buttons or keys. Standard computer input devices constrain the range of motoric responses participants can make, even as the field advances theory about the importance of the motor system in cognitive and social information processing. Here we describe an inexpensive way to use an electromyographic (EMG) signal as a computer input device, enabling participants to control a computer by contracting muscles that are not usually used for that purpose, but which may be theoretically relevant. We tested this approach in a study of facial mimicry, a well-documented phenomenon in which viewing emotional faces elicits automatic activation of corresponding muscles in the face of the viewer. Participants viewed happy and angry faces and were instructed to indicate the emotion on each face as quickly as possible by either furrowing their brow or contracting their cheek. The mapping of motor response to judgment was counterbalanced, so that one block of trials required a congruent mapping (contract brow to respond “angry,” cheek to respond “happy”) and the other block required an incongruent mapping (brow for “happy,” cheek for “angry”). EMG sensors placed over the left corrugator supercilii muscle and left zygomaticus major muscle fed readings of muscle activation to a microcontroller, which sent a response to a computer when activation reached a pre-determined threshold. Response times were faster when the motor-response mapping was congruent than when it was incongruent, extending prior studies on facial mimicry. We discuss further applications of the method for research that seeks to expand the range of human-computer interaction beyond the button box.
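The core logic the abstract describes, a microcontroller that emits a response when a muscle channel’s activation reaches a pre-determined threshold, can be sketched as follows. The channel names, threshold values, and response mapping here are hypothetical and not taken from the study:

```python
# Minimal sketch of threshold-based EMG-to-response mapping.
# Activation levels are assumed normalized to [0, 1]; thresholds,
# channel names, and labels are illustrative assumptions.

THRESHOLDS = {"corrugator": 0.6, "zygomaticus": 0.6}

# Congruent mapping from the abstract: brow -> "angry", cheek -> "happy".
RESPONSE_MAP = {"corrugator": "angry", "zygomaticus": "happy"}

def poll(sample):
    """Given one reading per channel, return the response label for the
    first channel whose activation meets its threshold, else None."""
    for channel, level in sample.items():
        if level >= THRESHOLDS.get(channel, float("inf")):
            return RESPONSE_MAP[channel]
    return None
```

For the incongruent block, only `RESPONSE_MAP` would change (brow for “happy,” cheek for “angry”); the thresholding loop itself is mapping-agnostic, which mirrors how the counterbalancing is described.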

