Ergonomic qualities of graphic user interfaces (GUI): state and evolution

Author(s):  
Ivan V. Stepanyan

A growing number of workers spend most of the working shift interacting with graphic user interfaces. However, poor ergonomic qualities or incorrect use of a graphic user interface can put workers’ health at risk. The authors identified and classified typical scenarios of graphic user interface usage. Different types of graphic user interface and operator occupations are characterized by different levels of exertion, both biomechanical and psycho-physiological. Key attributes of a graphic user interface include the presence or absence of a mouse or joystick, intuitive clarity, a balanced color palette, fixed positions of graphic elements, and overall comfort level. A review of various graphic user interfaces and an analysis of their characteristics revealed a range of potential occupational risk factors. Some of the ergonomic problems identified stem from how graphic user interfaces are incorporated into various information technologies and systems. The authors described the role of ergonomic characteristics of graphic user interfaces in safe and effective operator work, and gave examples of algorithms for visualizing large volumes of information for easier comprehension and analysis. Correct use of interactive computer visualization tools, combined with competent design and adherence to ergonomic principles, can optimize mental work in innovative activity and preserve operators’ health. Promising directions in this field include ergonomic interfaces developed with attention to information hygiene principles, big data analysis technology, and automatically generated cognitive graphics.
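As a concrete example of one such ergonomic criterion, a “balanced palette” can be checked quantitatively with the WCAG contrast-ratio formula. The sketch below is an illustration based on the public WCAG 2.x definition, not an algorithm from the paper:

```typescript
// Illustrative sketch: WCAG 2.x contrast ratio, one way to quantify the
// "balanced palette" criterion mentioned above. Not taken from the paper.

/** Linearize an 8-bit sRGB channel per the WCAG relative-luminance formula. */
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

/** Relative luminance of an [r, g, b] color (0-255 per channel). */
function luminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

/** WCAG contrast ratio between two colors, in the range 1..21. */
function contrastRatio(
  a: [number, number, number],
  b: [number, number, number]
): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Example: black text on white far exceeds the WCAG AA threshold of 4.5:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
```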

Author(s):  
Lorenzo Susca
Ferruccio Mandorli
Caterina Rizzi
Umberto Cugini

The evolution of computer aided design (CAD) systems and related technologies has promoted the development of software for the automatic configuration of mechanical systems. This occurred with the introduction of knowledge aided engineering (KAE) systems that enable computers to support the designer during the decision-making process. This paper presents a knowledge-based application that allows the designer to automatically compute and evaluate mass properties of racing cars. The system consists of two main components: the computing core, which builds the car model, and the graphic user interface, which makes the system usable by nonprogrammers as well. The computing core creates the model of the car based on a tree structure, which contains all car subsystems (e.g., suspension and chassis). Different part–subpart relationships define the tree model and link an object (e.g., suspension) to its components (e.g., wishbones and wheel). The definition of independent parameters (including design variables) and of relationships allows the model to configure itself by evaluating all properties related to dimension, position, mass, etc. The graphic user interface allows the end user to interact with the car model by editing independent design parameters. It visualizes the main outputs of the model, which consist of numeric data (mass and center of mass of both the car and its subsystems) and graphic elements (3D representations of the car and its subsystems).
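A minimal sketch of the tree model described above, in which each node aggregates the mass and center of mass of its subparts. The type names and example values are assumptions for illustration, not the authors’ implementation:

```typescript
// Sketch of a part-subpart tree whose nodes aggregate mass properties.
// Names, structure, and values are illustrative assumptions.

interface Part {
  name: string;
  mass: number;                       // kg (0 for pure assemblies)
  position: [number, number, number]; // local center of mass, m
  subparts: Part[];
}

/** Total mass of a part and all of its subparts. */
function totalMass(p: Part): number {
  return p.mass + p.subparts.reduce((sum, s) => sum + totalMass(s), 0);
}

/** Mass-weighted center of mass of a part and its subparts
 *  (assumes the subtree contains at least one massive part). */
function centerOfMass(p: Part): [number, number, number] {
  const nodes: Array<[number, [number, number, number]]> = [[p.mass, p.position]];
  for (const s of p.subparts) nodes.push([totalMass(s), centerOfMass(s)]);
  const m = nodes.reduce((sum, [mi]) => sum + mi, 0);
  return [0, 1, 2].map(
    (i) => nodes.reduce((sum, [mi, pos]) => sum + mi * pos[i], 0) / m
  ) as [number, number, number];
}

const car: Part = {
  name: "car", mass: 0, position: [0, 0, 0],
  subparts: [
    { name: "chassis", mass: 45, position: [1.2, 0, 0.3], subparts: [] },
    {
      name: "front-suspension", mass: 0, position: [0, 0, 0],
      subparts: [
        { name: "wishbones", mass: 4, position: [0.4, 0.5, 0.25], subparts: [] },
        { name: "wheel", mass: 9, position: [0.4, 0.7, 0.3], subparts: [] },
      ],
    },
  ],
};
console.log(totalMass(car), centerOfMass(car)); // editing a parameter reconfigures all derived properties
```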


Sensors, 2021, Vol. 21 (13), pp. 4258
Author(s):  
Alice Krestanova
Martin Cerny
Martin Augustynek

A tangible user interface (TUI) connects physical objects and digital interfaces. It is more interactive and engaging for users than a classic graphic user interface. This article presents a descriptive overview of real-world TUI applications sorted into ten main application areas: teaching of traditional subjects, medicine and psychology, programming, database development, music and arts, modeling of 3D objects, modeling in architecture, literature and storytelling, adjustable TUI solutions, and commercial TUI smart toys. The paper focuses on TUIs’ technical solutions and on descriptions of the technical constructions that influence the applicability of TUIs in the real world. Based on the review, the technical concepts were divided into two main approaches: a sensory technical concept and technology based on computer vision algorithms. The sensory concept is examined with respect to the wireless technologies, sensors, and feedback options used in TUI applications. The computer vision approach is examined with respect to marker-based and markerless object recognition, the use of cameras, and the use of computer vision platforms in TUI applications.
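At its core, the “sensory” concept the review describes amounts to identifying a physical object (by tag or marker) and dispatching it to a digital behavior with feedback. A minimal, hypothetical sketch:

```typescript
// Illustrative sketch of the sensory TUI concept: physical objects are
// identified (e.g., by an RFID tag or a fiducial marker) and mapped to
// digital actions. Identifiers and actions here are hypothetical.

type ObjectId = string;

interface TangibleEvent {
  id: ObjectId;                 // tag or marker identifier
  position?: [number, number];  // optional tabletop coordinates
}

const bindings = new Map<ObjectId, (e: TangibleEvent) => void>([
  ["block-red", (e) => console.log("play note C at", e.position)],
  ["block-blue", (e) => console.log("play note G at", e.position)],
]);

/** Dispatch one detected object to its digital behavior, with feedback. */
function onObjectDetected(e: TangibleEvent): void {
  const action = bindings.get(e.id);
  if (action) action(e);
  else console.warn(`no binding for object ${e.id}`); // feedback for unknown tags
}

// A tag reader or vision pipeline would call this for each detection:
onObjectDetected({ id: "block-red", position: [0.3, 0.7] });
```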


2019, Vol. 11 (1)
Author(s):  
Agatha Maisie Tjandra

SIMIGAPI (Simulasi Mitigasi Gunung Berapi, a volcanic eruption mitigation simulator) is a serious game with a story line, built in virtual reality and experienced through a head-mounted display. SIMIGAPI has three parts, based on the stages of the mitigation process; the main focus of this paper is the evacuation part. In this part, users are given a mission to escape from volcanic ash by walking through the virtual world and passing a series of pin points. Briefings are given through text and graphic elements in a 3D graphic user interface. A poorly designed user interface, however, can undermine the sense of immersion, and children as users can easily become bored, which in turn can cause the transfer of evacuation mitigation information to the user to fail. This paper aims to explain the creation of the 3D user interface and the observation of user experience for educational purposes in the evacuation part of SIMIGAPI. The project uses a production method and a quantitative questionnaire test to learn users’ perspectives on the information SIMIGAPI conveys through its GUI.
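The evacuation mechanic (passing pin points in sequence) reduces to a per-frame proximity check. A small sketch, with hypothetical coordinates and trigger radius:

```typescript
// Sketch of the waypoint ("pin point") mechanic in the evacuation part:
// the player advances by reaching each point in order. The coordinates
// and the 2 m trigger radius are illustrative assumptions.

type Vec3 = [number, number, number];

const pinPoints: Vec3[] = [[0, 0, 0], [10, 0, 2], [25, 0, 5]];
let nextIndex = 0;

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

/** Called each frame with the player's position; advances past reached pins. */
function updateProgress(player: Vec3): void {
  if (nextIndex < pinPoints.length && distance(player, pinPoints[nextIndex]) < 2) {
    nextIndex += 1; // a GUI briefing for the next pin could be shown here
  }
}

updateProgress([0.5, 0, 0]); // within 2 m of pin 0
console.log(nextIndex);      // 1: the mission advances to the next pin point
```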


Information, 2021, Vol. 12 (4), pp. 162
Author(s):  
Soyeon Kim
René van Egmond
Riender Happee

In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly examined the effectiveness of user interfaces in automated driving. Our main selection criterion was how the user interface (UI) affected take-over performance at higher automation levels that allow drivers to take their eyes off the road (SAE Level 3 and 4). We categorized UI factors from an automated vehicle-related information perspective. Short take-over times are consistently associated with take-over requests (TORs) initiated through the auditory modality at high urgency levels. On the other hand, take-over requests displayed directly on non-driving-related task devices or in augmented reality do not affect take-over time. Additional explanations of the take-over situation, surrounding and vehicle information presented while driving, and take-over guidance information were found to improve situational awareness. Hence, we conclude that advanced user interfaces can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UIs, but a number showed no significant benefits, and a few showed negative effects, which may be associated with information overload. The occurrence of both positive and negative results for similar UI concepts across studies highlights the need for systematic UI testing across driving conditions and driver characteristics. Based on our findings, we propose that future UI studies of automated vehicles focus on trust calibration and on enhancing situation awareness in various scenarios.
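As a toy illustration of the review’s central finding (auditory, high-urgency TORs yield the shortest take-over times), a TOR dispatcher might select presentation modalities as follows. The modality names, urgency levels, and the 7 s time-budget threshold are assumptions, not values from the review:

```typescript
// Hedged sketch of a take-over request (TOR) dispatcher reflecting the
// finding that auditory, high-urgency TORs yield short take-over times.
// Modalities, urgency levels, and the threshold are illustrative.

type Modality = "auditory" | "visual" | "haptic";

interface TakeOverRequest {
  urgency: "low" | "high";
  timeBudgetS: number; // seconds until manual control is required (assumed field)
}

/** Choose presentation modalities for a TOR; always include audio when urgent. */
function chooseModalities(tor: TakeOverRequest): Modality[] {
  // High urgency or a short time budget: multimodal presentation with a
  // salient auditory cue, which the review associates with fast take-overs.
  if (tor.urgency === "high" || tor.timeBudgetS < 7) {
    return ["auditory", "visual", "haptic"];
  }
  // Low urgency: a less intrusive cue, e.g., on the non-driving-task display.
  return ["visual"];
}

console.log(chooseModalities({ urgency: "high", timeBudgetS: 5 }));
// -> ["auditory", "visual", "haptic"]
```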


Author(s):  
Randall Spain
Jason Saville
Barry Lui
Donia Slack
Edward Hill
...  

Because advances in broadband capabilities will soon allow first responders to access and use many forms of data when responding to emergencies, it is becoming critically important to design heads-up displays that present information to first responders in a manner that does not induce extraneous mental workload or cause undue interaction errors. Virtual reality offers a unique medium for envisioning and testing user interface concepts in a realistic and controlled environment. In this paper, we describe a virtual reality-based emergency response scenario designed to support user experience research for evaluating the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters’ feedback on and reactions to the VR scenario and to the prototype intelligent user interface that presented task-critical information through the VR headset. The paper concludes with lessons learned from our development process and a discussion of plans for future research.


2021, Vol. 5 (EICS), pp. 1-29
Author(s):  
Arthur Sluÿters
Jean Vanderdonckt
Radu-Daniel Vatavu

Intra-platform plasticity regularly assumes that the display of a computing platform remains fixed and rigid during interactions with the platform, in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for reconfiguration, concrete interaction, unit allocation, and widget selection, and we implement it in JavaScript. In a first experiment, we determine the display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
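A state-transition diagram of display configurations can be expressed directly as a transition table. A minimal sketch, with illustrative configuration and action names (the abstract does not give the paper’s exact states):

```typescript
// Sketch of the state-transition idea from the first experiment: display
// configurations as states, physical manipulations as transitions.
// State and action names are assumptions, not the paper's labels.

type Config = "folded" | "left-extended" | "right-extended" | "both-extended";

const transitions: Record<Config, Partial<Record<string, Config>>> = {
  "folded":         { slideLeftOut: "left-extended", slideRightOut: "right-extended" },
  "left-extended":  { slideLeftIn: "folded", slideRightOut: "both-extended" },
  "right-extended": { slideRightIn: "folded", slideLeftOut: "both-extended" },
  "both-extended":  { slideLeftIn: "right-extended", slideRightIn: "left-extended" },
};

/** Apply a physical reconfiguration action; unknown actions leave the state unchanged. */
function reconfigure(state: Config, action: string): Config {
  return transitions[state][action] ?? state;
}

let state: Config = "folded";
state = reconfigure(state, "slideLeftOut");  // -> "left-extended"
state = reconfigure(state, "slideRightOut"); // -> "both-extended"
console.log(state);
```

Reconfiguration rules of the kind the paper evaluates could then be attached to each transition, remapping the concrete UI whenever the configuration state changes.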


Author(s):  
Henry Larkin

Purpose – The purpose of this paper is to investigate the feasibility of creating a declarative user interface language suitable for rapid prototyping of mobile and Web apps. To that end, this paper presents a new framework for creating responsive user interfaces in JavaScript.
Design/methodology/approach – Very little existing research addresses JavaScript-specific declarative user interface (UI) languages for mobile Web apps. This paper introduces a new framework, along with several case studies that create modern responsive designs programmatically.
Findings – The fully implemented prototype verifies the feasibility of a JavaScript-based declarative user interface library. This paper demonstrates that with existing solutions it is unwieldy and cumbersome to dynamically create and adjust nodes within the visual syntax of program code.
Originality/value – This paper presents the Guix.js platform, a declarative UI library for rapid development of Web-based mobile interfaces in JavaScript.
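The core idea of a declarative UI language is that the interface is described as data rather than built through imperative construction calls. The sketch below conveys that idea generically; it is not Guix.js’s actual syntax, which the abstract does not show:

```typescript
// Generic illustration of a declarative UI description rendered to DOM
// nodes (runs in a browser). NOT Guix.js's API; purely conceptual.

interface UINode {
  tag: string;
  style?: Partial<CSSStyleDeclaration>;
  text?: string;
  children?: UINode[];
}

/** Build a DOM subtree from a declarative description. */
function render(spec: UINode): HTMLElement {
  const el = document.createElement(spec.tag);
  Object.assign(el.style, spec.style ?? {});
  if (spec.text) el.textContent = spec.text;
  for (const child of spec.children ?? []) el.appendChild(render(child));
  return el;
}

// Because the whole interface is data, it can be generated, inspected,
// and adjusted at runtime instead of hand-wiring DOM calls:
const app: UINode = {
  tag: "div",
  style: { display: "flex", flexDirection: "column" },
  children: [
    { tag: "h1", text: "Hello" },
    { tag: "button", text: "Tap me", style: { width: "100%" } },
  ],
};
document.body.appendChild(render(app));
```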


Robotica, 2007, Vol. 25 (5), pp. 521-527
Author(s):  
Harsha Medicherla
Ali Sekmen

SUMMARY
An understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial for creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to be able to learn the preferences and capabilities of the people with whom it interacts so that it can adapt its behaviors for more efficient and friendly interaction. Advances in human–computer interaction technologies have been widely used to improve human–robot interaction (HRI), and it is now possible to interact with robots via natural communication means such as speech. This paper describes an innovative approach to HRI via voice-controllable intelligent user interfaces, along with the design and implementation of such interfaces. The traditional approaches to human–robot user interface design are explained, and the advantages of the proposed approach are presented. The designed intelligent user interface, which learns user preferences and capabilities over time, can be controlled by voice. The system was successfully implemented and tested on a Pioneer 3-AT mobile robot. Twenty participants, who were assessed for spatial reasoning ability, directed the robot in spatial navigation tasks to evaluate the effectiveness of voice control in HRI; time to complete the task, number of steps, and errors were collected. Results indicated that spatial reasoning ability and voice control were reliable predictors of the efficiency of robot teleoperation: 75% of the subjects with high spatial reasoning ability preferred voice control over manual control, and the effect of spatial reasoning ability on teleoperation was lower with voice control than with manual control.
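The kind of voice-controllable navigation interface the paper describes ultimately maps recognized utterances to robot commands. A hedged sketch of that mapping; the grammar and command set are hypothetical, not the authors’ implementation:

```typescript
// Illustrative sketch: mapping recognized speech to navigation commands.
// The grammar and command set are hypothetical assumptions.

interface NavCommand {
  action: "move" | "turn" | "stop";
  value?: number; // meters for "move", degrees for "turn"
}

/** Parse a recognized utterance into a robot command, or null if not understood. */
function parseUtterance(utterance: string): NavCommand | null {
  const text = utterance.toLowerCase().trim();
  let m: RegExpMatchArray | null;
  if ((m = text.match(/^(?:go|move) forward (\d+(?:\.\d+)?) meters?$/))) {
    return { action: "move", value: parseFloat(m[1]) };
  }
  if ((m = text.match(/^turn (left|right) (\d+) degrees$/))) {
    const deg = parseInt(m[2], 10);
    return { action: "turn", value: m[1] === "left" ? -deg : deg };
  }
  if (text === "stop") return { action: "stop" };
  return null; // unknown utterance: the interface could ask the user to rephrase
}

console.log(parseUtterance("Move forward 2 meters")); // { action: "move", value: 2 }
console.log(parseUtterance("turn left 90 degrees"));  // { action: "turn", value: -90 }
```

An interface that learns user preferences, as the paper describes, could extend such a grammar over time, for instance by adding per-user synonyms for the command verbs.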


Robotica, 2011, Vol. 29 (6), pp. 843-852
Author(s):  
Wen-Tao Ma
Wei-Xin Yan
Zhuang Fu
Yan-Zheng Zhao

Cooking for themselves is an important but difficult part of daily life for elderly and disabled people. This paper presents a cooking robot for people who are confined to wheelchairs. The robot can automatically load ingredients, cook Chinese dishes, take cooked food out, deliver dishes to the table, clean itself, collect used ingredient-box components, and so on. Its structure and interface are designed according to barrier-free design principles. Elderly and disabled users need only click a single button in the friendly graphic user interface of a personal digital assistant (PDA) to launch the cooking process, and several classic Chinese dishes are placed in front of them one after another within a few minutes. Experiments show that the robot can meet their special needs and that the aid activities involved are easy and effective for elderly and disabled people.
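The one-button interaction described above amounts to a single GUI event triggering a fixed pipeline of robot steps. A sketch, with the step names taken from the abstract and the scheduling code assumed:

```typescript
// Sketch of the one-button flow: a single GUI event runs the whole
// cooking pipeline. Step names come from the abstract; the scheduling
// code is an illustrative assumption.

type Step = "loadIngredients" | "cook" | "takeOut" | "deliver" | "selfClean";

const pipeline: Step[] = ["loadIngredients", "cook", "takeOut", "deliver", "selfClean"];

async function runStep(step: Step): Promise<void> {
  console.log(`robot: ${step}`);
  await new Promise((resolve) => setTimeout(resolve, 100)); // stand-in for hardware
}

/** Handler for the single PDA button: run all steps in order. */
async function onCookButtonPressed(dish: string): Promise<void> {
  console.log(`starting: ${dish}`);
  for (const step of pipeline) await runStep(step);
  console.log(`${dish} served`);
}

onCookButtonPressed("Kung Pao chicken");
```

Keeping the entire sequence behind one button is what makes the interface barrier-free: the user issues a single, coarse intention and the robot handles every intermediate step.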

