Tangible Design Media: Toward An Interactive CAD Platform

2003 ◽  
Vol 1 (2) ◽  
pp. 153-168 ◽  
Author(s):  
Taysheng Jeng ◽  
Chia-Hsun Lee

This paper presents an interactive CAD platform that uses a tangible user interface to visualize and modify 3D geometry through the manipulation of physical artifacts. The tangible user interface attempts to move away from the commonly used, non-intuitive desktop CAD environment toward a 3D CAD environment that more closely mimics traditional desktop drawing and pin-up situations. An important goal is to reduce both the apparent complexity of CAD user interfaces and the cognitive load on designers. Opportunities for extending tangible design media toward an interactive CAD platform are discussed.
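
The abstract does not describe the implementation; as a rough illustration of the coupling it names, the sketch below mirrors the tracked pose of a physical artifact onto the 3D model it stands for. All data shapes and names are hypothetical, not the paper's implementation.

```javascript
// Hypothetical sketch of the core idea of a tangible CAD interface:
// the pose of each tracked physical artifact drives the transform of
// the 3D geometry it represents. Data shapes are illustrative only.
const scene = {
  models: { "block-1": { position: { x: 0, y: 0, z: 0 }, rotationDeg: 0 } },
};

// One tracker sample: the artifact "block-1" was moved and rotated.
const trackedArtifacts = [
  { id: "block-1", pose: { position: { x: 12, y: 3, z: 0 }, rotationDeg: 45 } },
];

// Mirror every tracked artifact pose onto its bound CAD model.
function syncScene(scene, artifacts) {
  for (const a of artifacts) {
    const model = scene.models[a.id];
    if (!model) continue;
    model.position = { ...a.pose.position };
    model.rotationDeg = a.pose.rotationDeg;
  }
}

syncScene(scene, trackedArtifacts);
console.log(scene.models["block-1"]); // moved to (12, 3, 0), rotated 45°
```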

2020 ◽  
Vol 4 (2) ◽  
pp. 1-13
Author(s):  
Zahid Islam

The inclusion of tangible user interfaces can facilitate learning through contextual experience, interaction with the provided information, and epistemic actions, resulting in effective learning in design education. The goal of this study is to investigate how tangible user interfaces (TUIs) affect design learning through cognitive load. An extended reality (XR)-based TUI and a traditional desktop-based GUI were used to deliver the same information to two groups of students. The NASA TLX instrument was used to measure students' perceived cognitive load after receiving information through the two modalities. Contemporary design pedagogy, the potential of XR, design cognition, and today's design learners' experience-oriented lifestyles were combined into a theoretical framework for understanding how information delivery modalities affect design learning. The results reveal that XR-based TUIs decrease cognitive load, resulting in an enhanced experience and effective learning in design studios.
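
For readers unfamiliar with the instrument, the sketch below shows the standard NASA-TLX weighted scoring procedure: six subscales rated 0-100, weighted by counts from 15 pairwise comparisons. The ratings and weights shown are invented for illustration.

```javascript
// Minimal sketch of the standard NASA-TLX weighted scoring procedure:
// six subscales rated 0-100, weighted by counts from the 15 pairwise
// comparisons (weights sum to 15). Ratings and weights below are invented.
const SUBSCALES = [
  "mentalDemand", "physicalDemand", "temporalDemand",
  "performance", "effort", "frustration",
];

function nasaTlxScore(ratings, weights) {
  const totalWeight = SUBSCALES.reduce((sum, s) => sum + weights[s], 0);
  if (totalWeight !== 15) {
    throw new Error("pairwise-comparison weights must sum to 15");
  }
  // Overall workload is the weighted mean of the six subscale ratings.
  const weightedSum = SUBSCALES.reduce(
    (sum, s) => sum + ratings[s] * weights[s], 0);
  return weightedSum / 15;
}

// Example: one hypothetical participant.
const score = nasaTlxScore(
  { mentalDemand: 70, physicalDemand: 20, temporalDemand: 55,
    performance: 40, effort: 60, frustration: 35 },
  { mentalDemand: 5, physicalDemand: 1, temporalDemand: 3,
    performance: 2, effort: 3, frustration: 1 });
console.log(score.toFixed(1)); // 55.3
```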


Author(s):  
Wolfgang Beer

Purpose – The aim of this paper is to present the architecture and prototypical implementation of a context-sensitive software system that combines the tangible user interface approach with a mobile augmented reality (AR) application. Design/methodology/approach – The work described in this paper is based on a creational approach: a prototypical implementation is used to gather further research results. The prototype allows ongoing testing of accuracy and of different context-sensitive threshold functions. Findings – The implementation and practical use of tangible user interfaces for the outdoor selection of geographical objects is reported and discussed in detail. Research limitations/implications – Further research is necessary on context-sensitive, dynamically changing threshold functions, which would improve the accuracy of the selected tangible user interface approach. Practical implications – Using tangible user interfaces in outdoor applications should improve the usability of AR applications. Originality/value – Although a multitude of research results exist in the areas of gesture recognition and AR applications, this work focuses on the pointing gesture for selecting outdoor geographical objects.
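
The abstract does not spell out the selection algorithm. The following is a minimal sketch of one plausible pointing-based selection scheme, assuming a fixed angular threshold (making such thresholds context-sensitive is precisely the paper's open question); all names and values are illustrative.

```javascript
// Hypothetical pointing-based selection of geographic objects: an object
// is a candidate when the bearing from the user to the object lies within
// an angular threshold of the device's compass heading.
function bearingDeg(from, to) {
  // Flat-earth approximation, adequate for nearby outdoor objects.
  const dEast = (to.lon - from.lon) * Math.cos((from.lat * Math.PI) / 180);
  const dNorth = to.lat - from.lat;
  return ((Math.atan2(dEast, dNorth) * 180) / Math.PI + 360) % 360;
}

function selectPointedObject(user, headingDeg, objects, thresholdDeg = 5) {
  let best = null;
  for (const obj of objects) {
    // Signed angular difference folded into [-180, 180), then abs().
    const delta = Math.abs(
      ((bearingDeg(user, obj) - headingDeg + 540) % 360) - 180);
    if (delta <= thresholdDeg && (best === null || delta < best.delta)) {
      best = { obj, delta };
    }
  }
  return best ? best.obj : null;
}

// Example: the user points due north; only "tower" lies within 5 degrees.
const objects = [
  { name: "tower", lat: 48.3200, lon: 14.2860 },  // roughly due north
  { name: "bridge", lat: 48.3069, lon: 14.3000 }, // due east
];
console.log(selectPointedObject({ lat: 48.3069, lon: 14.2858 }, 0, objects));
```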


Author(s):  
Manjit Singh Sidhu ◽  
Waleed Maqableh

Tangible User Interfaces (TUIs) are an emerging human-machine interaction (HMI) style, and a significant number of new TUIs have been incorporated into the educational technology domain. This work presents the design of a new user interface for technology-assisted problem solving (TAPS) packages for the Engineering Mechanics subject at University Tenaga Nasional (UNITEN). In this study, the TAPS packages were further enhanced by adopting a TUI and were compared to the previous TAPS packages. The study found additional benefits of the TUI: by engaging human senses such as haptics and touch, it made the packages more natural to use.


Information ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 162
Author(s):  
Soyeon Kim ◽  
René van Egmond ◽  
Riender Happee

In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly examined the effectiveness of user interfaces in automated driving. Our main selection criterion was how the user interface (UI) affected take-over performance at higher automation levels that allow drivers to take their eyes off the road (SAE levels 3 and 4). We categorized UI factors from the perspective of automated-vehicle-related information. Short take-over times are consistently associated with take-over requests (TORs) initiated through the auditory modality with high urgency levels. In contrast, TORs displayed directly on non-driving-related task devices or through augmented reality do not affect take-over time. Additional explanations of the take-over situation, information about the surroundings and vehicle state while driving, and guidance during the take-over were found to improve situational awareness. Hence, we conclude that advanced user interfaces can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UIs, but a number of studies showed no significant benefit, and a few showed negative effects, which may be associated with information overload. The occurrence of both positive and negative results for similar UI concepts in different studies highlights the need for systematic UI testing across driving conditions and driver characteristics. Based on our findings, we propose that future UI studies in automated vehicles focus on trust calibration and on enhancing situation awareness in a variety of scenarios.
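
As a rough illustration of the review's central finding (auditory, high-urgency TORs yield the shortest take-over times), the sketch below shows a hypothetical TOR dispatcher that escalates modalities with urgency. The levels, timings, and the `ui` interface are invented; they are not drawn from any particular reviewed study.

```javascript
// Hypothetical take-over request (TOR) dispatcher: higher urgency adds
// modalities (auditory first among the additions) and repeats faster.
const TOR_PROFILES = {
  low:    { modalities: ["visual"],                       repeatMs: 4000 },
  medium: { modalities: ["visual", "auditory"],           repeatMs: 2000 },
  high:   { modalities: ["visual", "auditory", "haptic"], repeatMs: 1000 },
};

function issueTakeOverRequest(urgency, timeBudgetMs, ui) {
  const profile = TOR_PROFILES[urgency];
  // Repeat the alert on every modality until the driver has taken over.
  const timer = setInterval(() => {
    if (ui.driverHasTakenOver()) {
      clearInterval(timer);
      return;
    }
    for (const modality of profile.modalities) {
      ui.alert(modality, { urgency, timeBudgetMs });
    }
  }, profile.repeatMs);
  // Fire the first alert immediately rather than after one interval.
  for (const modality of profile.modalities) {
    ui.alert(modality, { urgency, timeBudgetMs });
  }
}
```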


Author(s):  
Ana Guerberof Arenas ◽  
Joss Moorkens ◽  
Sharon O’Brien

This paper presents the results of a study on the effect of different translation modalities on users working with the Microsoft Word user interface. An experimental study was set up with 84 Japanese, German, Spanish, and English native speakers working with Microsoft Word in three modalities: the published translated version, a machine-translated (MT) version (with unedited MT strings incorporated into the MS Word interface), and the published English version. An eye tracker measured cognitive load, and usability was measured according to the ISO/TR 16982 guidelines (effectiveness, efficiency, and satisfaction), followed by a retrospective think-aloud protocol. The results show that users' effectiveness (number of tasks completed) does not differ significantly across translation modalities. However, their efficiency (time for task completion) and self-reported satisfaction are significantly higher when working with the released product as opposed to the unedited MT version, especially for less experienced participants. The eye-tracking results show that users experience a higher cognitive load when working with the MT and human-translated versions than with the English original. The results suggest that language and translation modality play a significant role in the usability of software products, whether or not users complete the given tasks and even if they are unaware that MT was used to translate the interface.
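
The three usability measures named above reduce to simple per-condition computations. The following sketch shows one plausible formulation; the field names and the efficiency definition (completed tasks per minute) are assumptions for illustration, not the study's exact operationalization.

```javascript
// Minimal sketch of the effectiveness / efficiency / satisfaction triad,
// computed over the sessions of one experimental condition.
function usabilityMetrics(sessions) {
  const completed = sessions.filter((s) => s.taskCompleted);
  const totalMinutes = sessions.reduce((t, s) => t + s.durationSec, 0) / 60;
  return {
    // Effectiveness: share of tasks completed successfully.
    effectiveness: completed.length / sessions.length,
    // Efficiency: completed tasks per minute of working time.
    efficiency: completed.length / totalMinutes,
    // Satisfaction: mean self-reported rating (e.g., on a 1-5 scale).
    satisfaction:
      sessions.reduce((t, s) => t + s.satisfaction, 0) / sessions.length,
  };
}

// Example: two invented sessions from the unedited-MT condition.
console.log(usabilityMetrics([
  { taskCompleted: true, durationSec: 210, satisfaction: 3 },
  { taskCompleted: false, durationSec: 300, satisfaction: 2 },
]));
```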


Author(s):  
Randall Spain ◽  
Jason Saville ◽  
Barry Lui ◽  
Donia Slack ◽  
Edward Hill ◽  
...  

Because advances in broadband capabilities will soon allow first responders to access and use many forms of data when responding to emergencies, it is becoming critically important to design heads-up displays that present information in a manner that does not induce extraneous mental workload or cause undue interaction errors. Virtual reality offers a unique medium for envisioning and testing user interface concepts in a realistic and controlled environment. In this paper, we describe a virtual reality-based emergency response scenario that was designed to support user experience research for evaluating the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters' feedback and reactions to the VR scenario and to the prototype intelligent user interface that presented them with task-critical information through the VR headset. The paper concludes with lessons learned from our development process and a discussion of plans for future research.


2021 ◽  
Vol 5 (EICS) ◽  
pp. 1-29
Author(s):  
Arthur Sluÿters ◽  
Jean Vanderdonckt ◽  
Radu-Daniel Vatavu

Intra-platform plasticity regularly assumes that the display of a computing platform remains fixed and rigid during interactions with the platform, in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for reconfiguration, concrete interaction, unit allocation, and widget selection, and we implement it in JavaScript. In a first experiment, we determine the display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
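
As a rough, self-contained illustration of the abstract-to-concrete mapping step described above, the sketch below allocates a master-detail pair of interaction units to displays and selects widgets by attribute type. The rule formats and widget vocabulary are invented for illustration and are not the E3Screen implementation.

```javascript
// Simplified domain model (standing in for the UML class diagram).
const domainModel = {
  entity: "Task",
  attributes: [
    { name: "title", type: "string" },
    { name: "done", type: "boolean" },
  ],
};

// Abstract UI: a master-detail pair of interaction units (no widgets yet).
const abstractUI = {
  master: { unit: "collection", of: domainModel.entity },
  detail: { unit: "record", of: domainModel.entity },
};

// Widget-selection rules map attribute types to concrete widgets.
const widgetRules = { string: "textField", boolean: "checkbox" };

// Unit-allocation rules assign interaction units to displays depending
// on the current configuration of the lateral displays.
function concretize(abstract, config) {
  const detailWidgets = domainModel.attributes.map((a) => ({
    label: a.name,
    widget: widgetRules[a.type],
  }));
  return config.lateralDisplays > 0
    ? { center: { list: abstract.master }, right: { form: detailWidgets } }
    : { center: { list: abstract.master, form: detailWidgets } };
}

console.log(concretize(abstractUI, { lateralDisplays: 2 }));
```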


Author(s):  
Henry Larkin

Purpose – The purpose of this paper is to investigate the feasibility of creating a declarative user interface language suitable for rapid prototyping of mobile and Web apps. Moreover, this paper presents a new framework for creating responsive user interfaces using JavaScript. Design/methodology/approach – Very little existing research has been done on JavaScript-specific declarative user interface (UI) languages for mobile Web apps. This paper introduces a new framework, along with several case studies that create modern responsive designs programmatically. Findings – The fully implemented prototype verifies the feasibility of a JavaScript-based declarative user interface library. This paper demonstrates that existing solutions are unwieldy and cumbersome when dynamically creating and adjusting nodes within a visual syntax of program code. Originality/value – This paper presents the Guix.js platform, a declarative UI library for rapid development of Web-based mobile interfaces in JavaScript.
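
To make the idea of a declarative JavaScript UI concrete, here is a hypothetical miniature of such a layer, runnable in a browser. The h()/render() API is invented for illustration and is not the Guix.js API.

```javascript
// The interface is described as a plain data structure ("declarative"),
// then turned into DOM nodes by a single interpreter function.
function h(tag, props = {}, ...children) {
  return { tag, props, children };
}

function render(node, parent) {
  if (typeof node === "string") {
    parent.appendChild(document.createTextNode(node));
    return;
  }
  const el = document.createElement(node.tag);
  Object.assign(el.style, node.props.style || {});
  if (node.props.onClick) el.addEventListener("click", node.props.onClick);
  node.children.forEach((child) => render(child, el));
  parent.appendChild(el);
}

// Declaration of the UI: no imperative node creation or manual insertion.
const app = h("div", { style: { padding: "1em" } },
  h("h1", {}, "Tasks"),
  h("button", { onClick: () => alert("clicked") }, "Add task"),
);
render(app, document.body);
```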

