The Effect of Experience-Based Tangible User Interface on Cognitive Load in Design Education

2020, Vol 4 (2), pp. 1-13
Author(s): Zahid Islam

Inclusion of tangible user interfaces can facilitate learning through contextual experience, interaction with the provided information, and epistemic actions, resulting in effective learning in design education. The goal of this study is to investigate how a tangible user interface (TUI) affects design learning through cognitive load. An extended reality (XR)-based TUI and a traditional desktop-based GUI were used to deliver the same information to two groups of students. The NASA TLX tool was used to measure students' perceived cognitive load after receiving information through the two modalities. Contemporary design pedagogy, the potential use of XR, design cognition, and today's design learners' experience-oriented lifestyles were combined into a theoretical framework for understanding how information delivery modalities affect design learning. The results reveal that the use of XR-based TUIs decreases cognitive load, resulting in an enhanced experience and effective learning in design studios.
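For context, the NASA TLX score used in the study is conventionally computed as a weighted average of six subscale ratings (0 to 100 each), with weights derived from 15 pairwise comparisons of the subscales. The sketch below shows this standard scoring procedure in Python; the example ratings and weights are invented for illustration, not data from the study.

```python
# Minimal sketch of the standard weighted NASA-TLX scoring procedure.
SUBSCALES = ("mental", "physical", "temporal",
             "performance", "effort", "frustration")

def nasa_tlx(ratings, weights):
    """Weighted NASA-TLX workload score.

    ratings -- each subscale rated 0..100 by the participant
    weights -- tally from the 15 pairwise comparisons (sums to 15)
    """
    assert set(ratings) == set(SUBSCALES) == set(weights)
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

# Invented example: a participant who found the task mentally demanding.
ratings = {"mental": 70, "physical": 10, "temporal": 40,
           "performance": 25, "effort": 55, "frustration": 30}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(nasa_tlx(ratings, weights))  # -> approximately 51.3
```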

2003, Vol 1 (2), pp. 153-168
Author(s): Taysheng Jeng, Chia-Hsun Lee

This paper presents an interactive CAD platform that uses a tangible user interface to visualize and modify 3D geometry through manipulation of physical artifacts. The tangible user interface attempts to move away from the commonly used non-intuitive desktop CAD environment to a 3D CAD environment that more accurately mimics traditional desktop drawing and pin-up situations. An important goal is to reduce the apparent complexity of CAD user interfaces and reduce the cognitive load on designers. Opportunities for extending tangible design media toward an interactive CAD platform are discussed.
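At the heart of such a platform is a mapping from the tracked pose of a physical artifact to a transform applied to the digital model. The sketch below illustrates one minimal version of that mapping for a tabletop setting; the pose representation and function names are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch: a tracked artifact's tabletop pose drives a 3D transform.
import numpy as np

def pose_to_matrix(x, y, rotation_deg):
    """4x4 transform from a tabletop pose: translation on the desk plane
    plus rotation about the vertical (z) axis."""
    c = np.cos(np.radians(rotation_deg))
    s = np.sin(np.radians(rotation_deg))
    return np.array([[c, -s, 0.0, x],
                     [s,  c, 0.0, y],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def update_geometry(vertices, x, y, rotation_deg):
    """Apply the artifact's transform to an (N, 3) array of model vertices."""
    m = pose_to_matrix(x, y, rotation_deg)
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homogeneous @ m.T)[:, :3]

# Invented example: rotate a unit square by 90 degrees around the origin.
square = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
print(update_geometry(square, 0.0, 0.0, 90.0).round(3))
```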


Author(s): Wolfgang Beer

Purpose: The aim of this paper is to present the architecture and a prototypical implementation of a context-sensitive software system that combines the tangible user interface approach with a mobile augmented reality (AR) application.
Design/methodology/approach: The work described in this paper follows a constructive approach, meaning that a prototypical implementation is used to gather further research results. The prototype allows ongoing tests of accuracy and of different context-sensitive threshold functions.
Findings: The implementation and practical use of tangible user interfaces for the outdoor selection of geographical objects are reported and discussed in detail.
Research limitations/implications: Further research is needed on context-sensitive, dynamically changing threshold functions, which would improve the accuracy of the selected tangible user interface approach.
Practical implications: Using tangible user interfaces in outdoor applications should improve the usability of AR applications.
Originality/value: Although a multitude of research results exists in the areas of gesture recognition and AR applications, this work focuses on the pointing gesture for selecting outdoor geographical objects.
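As an illustration of the pointing-gesture selection the paper reports, the sketch below computes the bearing from the user to each geographical object and accepts the best-aligned object within a context-sensitive angular threshold. The threshold function and all names are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: selecting an outdoor geographical object by pointing.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def angular_threshold(distance_m):
    """Context-sensitive threshold: distant objects subtend smaller angles,
    so the acceptance cone narrows with distance (illustrative function)."""
    return max(2.0, 15.0 - distance_m / 100.0)

def select_object(user_lat, user_lon, heading_deg, objects):
    """Return the object best aligned with the pointing direction, or None.

    objects -- iterable of (name, lat, lon, distance_m) tuples
    """
    best, best_err = None, float("inf")
    for name, lat, lon, dist in objects:
        err = abs((bearing_deg(user_lat, user_lon, lat, lon)
                   - heading_deg + 180.0) % 360.0 - 180.0)
        if err < angular_threshold(dist) and err < best_err:
            best, best_err = name, err
    return best
```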


Author(s): Manjit Singh Sidhu, Waleed Maqableh

Tangible user interfaces (TUIs) are an emerging human-machine interaction (HMI) style, and a significant number of new TUIs have been incorporated into the educational technology domain. This work presents the design of a new user interface for technology-assisted problem solving (TAPS) packages for the Engineering Mechanics subject at Universiti Tenaga Nasional (UNITEN). In this study, the TAPS packages were further enhanced by adopting a TUI and compared with the previous TAPS packages. The study found additional benefits of the TUI: it engaged human senses such as haptic and tactile (touch) feedback, making the packages more natural to use.


Author(s): Mikael Wiberg

Whether we think of interaction design as a design tradition aimed at giving form to our interaction with computational objects, or simply as user interface design, it is hard to escape the fact that the user interface to a large extent defines the scene and the form of the interaction. Without adopting a fully deterministic perspective, if the user interface is screen-based and graphical and the input modality is mouse-based, then the form of that interaction, that is, what the turn-taking looks like and what is demanded of the user, is likely to be very similar to other screen-based interfaces with similar input devices. However, the design space for the form of interaction is growing fast. While command-based and text-based interfaces more or less defined the whole design space in the 1970s, the development since then, including novel ways of bringing sensors, actuators, and smart materials to the user interface, has certainly opened up a broader design space for interaction design. And it is not only the range of materials that has been extended over the last few decades; we have also moved through a number of form paradigms for interaction design. With this as a point of departure, I will in this chapter reflect on how we have moved from the early days of command-based user interfaces, via the use of metaphors in the design of graphical user interfaces (GUIs), towards ways of interacting with the computer via tangible user interfaces (TUIs). Further on, I will describe how this movement towards TUIs was a first step away from building user interfaces based on representations and metaphors and a first step towards material interactions.


Sensors, 2021, Vol 21 (13), pp. 4258
Author(s): Alice Krestanova, Martin Cerny, Martin Augustynek

A tangible user interface (TUI) connects physical objects and digital interfaces. It is more interactive and engaging for users than a classic graphical user interface. This article presents a descriptive overview of TUIs' real-world applications, sorted into ten main application areas: teaching of traditional subjects, medicine and psychology, programming, database development, music and arts, modeling of 3D objects, modeling in architecture, literature and storytelling, adjustable TUI solutions, and commercial TUI smart toys. The paper focuses on TUIs' technical solutions and descriptions of the technical constructions that influence the applicability of TUIs in the real world. Based on the review, the technical concepts were divided into two main approaches: a sensor-based concept and technology based on computer vision algorithms. The sensor-based concept is discussed in terms of the wireless technologies, sensors, and feedback possibilities used in TUI applications. The image-processing approach is discussed in terms of marker-based and markerless object recognition, the cameras used, and the computer vision platforms used for TUI applications.
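As an illustration of the marker-based recognition approach the review describes, the following sketch detects fiducial markers in a camera frame and returns their image positions. It assumes OpenCV 4.7+ with its aruco module (older versions expose a different API); the function names around the library calls are illustrative.

```python
# Minimal sketch of marker-based tangible-object recognition with OpenCV.
import cv2

def detect_tangibles(frame):
    """Return {marker_id: (cx, cy)} image positions of fiducial markers."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary,
                                       cv2.aruco.DetectorParameters())
    corners, ids, _rejected = detector.detectMarkers(gray)
    found = {}
    if ids is not None:
        for marker_id, quad in zip(ids.flatten(), corners):
            cx, cy = quad[0].mean(axis=0)  # centroid of the four corners
            found[int(marker_id)] = (float(cx), float(cy))
    return found

# Typical use: grab a camera frame and map marker IDs to physical objects.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(detect_tangibles(frame))
cap.release()
```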


Information, 2021, Vol 12 (4), pp. 162
Author(s): Soyeon Kim, René van Egmond, Riender Happee

In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly examined the effectiveness of user interfaces in automated driving. Our main selection criterion was how the user interface (UI) affected take-over performance at higher automation levels that allow drivers to take their eyes off the road (SAE Levels 3 and 4). We categorized UI factors from the perspective of automated-vehicle-related information. Short take-over times are consistently associated with take-over requests (TORs) initiated through the auditory modality with high urgency levels. On the other hand, take-over requests displayed directly on non-driving-related task devices or through augmented reality do not affect take-over time. Additional explanations of the take-over situation, information about the surroundings and the vehicle while driving, and take-over guidance were found to improve situational awareness. Hence, we conclude that advanced user interfaces can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UIs, but a number of studies showed no significant benefits, and a few showed negative effects, which may be associated with information overload. The occurrence of both positive and negative results for similar UI concepts in different studies highlights the need for systematic UI testing across driving conditions and driver characteristics. Our findings suggest that future UI studies of automated vehicles should focus on trust calibration and on enhancing situation awareness in various scenarios.
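To make the TOR design space concrete, the sketch below shows one way a take-over request might escalate modality and urgency with the remaining time budget. The thresholds, messages, and names are illustrative assumptions, not values drawn from the reviewed studies.

```python
# Minimal sketch: escalating a take-over request (TOR) with the time budget.
from dataclasses import dataclass

@dataclass
class TakeOverRequest:
    modalities: tuple  # e.g. ("auditory", "visual")
    urgency: str       # "low" | "medium" | "high"
    message: str

def plan_tor(time_budget_s: float) -> TakeOverRequest:
    """Choose TOR presentation from the time remaining before the system limit."""
    if time_budget_s < 4.0:
        # Short budget: high-urgency auditory cue, which the review links
        # to the shortest take-over times.
        return TakeOverRequest(("auditory", "visual", "haptic"),
                               "high", "Take over now!")
    if time_budget_s < 10.0:
        return TakeOverRequest(("auditory", "visual"),
                               "medium", "Please take over soon.")
    # Long budget: low-urgency notice plus guiding information to help
    # rebuild the driver's situational awareness.
    return TakeOverRequest(("visual",), "low",
                           "Automation limit ahead; prepare to take over.")
```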


Author(s): Ana Guerberof Arenas, Joss Moorkens, Sharon O’Brien

This paper presents the results of a study on the effect of different translation modalities on users working with the Microsoft Word user interface. An experimental study was set up with 84 Japanese, German, Spanish, and English native speakers working with Microsoft Word in three modalities: the published translated version, a machine-translated (MT) version (with unedited MT strings incorporated into the MS Word interface), and the published English version. An eye-tracker measured cognitive load, and usability was assessed according to the ISO/TR 16982 guidelines (effectiveness, efficiency, and satisfaction), followed by a retrospective think-aloud protocol. The results show that users' effectiveness (number of tasks completed) does not differ significantly across translation modalities. However, their efficiency (time for task completion) and self-reported satisfaction are significantly higher when working with the released product as opposed to the unedited MT version, especially for less experienced participants. The eye-tracking results show that users experience a higher cognitive load when working with the MT and human-translated versions than with the English original. The results suggest that language and translation modality play a significant role in the usability of software products, whether or not users complete the given tasks, and even if they are unaware that MT was used to translate the interface.
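For reference, the three ISO/TR 16982-style measures used in the study (effectiveness, efficiency, and satisfaction) can be aggregated roughly as follows; the data layout is an assumed, minimal one, not the study's analysis code.

```python
# Minimal sketch of the three usability measures (per-participant sessions).
from statistics import mean

def usability_summary(sessions):
    """sessions -- list of dicts with keys:
         'completed'    : list of bools, one per task
         'task_times_s' : list of floats, seconds per completed task
         'satisfaction' : float, e.g. a 1..5 questionnaire score
    """
    effectiveness = mean(
        sum(s["completed"]) / len(s["completed"]) for s in sessions)
    efficiency = mean(
        mean(s["task_times_s"]) for s in sessions if s["task_times_s"])
    satisfaction = mean(s["satisfaction"] for s in sessions)
    return {"effectiveness": effectiveness,  # share of tasks completed
            "efficiency_s": efficiency,      # mean time per completed task
            "satisfaction": satisfaction}    # mean self-reported score
```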


Author(s): Randall Spain, Jason Saville, Barry Lui, Donia Slack, Edward Hill, ...

Because advances in broadband capabilities will soon allow first responders to access and use many forms of data when responding to emergencies, it is becoming critically important to design heads-up displays that present first responders with information in a manner that does not induce extraneous mental workload or cause undue interaction errors. Virtual reality offers a unique medium for envisioning and testing user interface concepts in a realistic and controlled environment. In this paper, we describe a virtual reality-based emergency response scenario designed to support user experience research for evaluating the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters' feedback on, and reactions to, the VR scenario and the prototype intelligent user interface that presented task-critical information through the VR headset. The paper concludes with lessons learned from our development process and a discussion of plans for future research.

