Tangible User Interfaces
Recently Published Documents

TOTAL DOCUMENTS: 179 (FIVE YEARS: 40)
H-INDEX: 16 (FIVE YEARS: 2)

2021
Author(s): Samantha Speer, Emily Hamner, Michael Tasota, Lauren Zito, Sarah K. Byrne-Houser

Author(s): Sara Ahmed Sayed Ali, Neila Chettaoui, Ayman Atia, Med Salim Bouhlel, Dalia Mohamed Abdel Mohaiman

2021
Author(s): David Silcock

CAAD research has frequently investigated public participation in large-scale urban re-development. Cultural shifts have created a need to involve end-users in these activities, and CAAD has offered many solutions, yet a recurring problem remains: the lay-person is often unable to interpret the information effectively or take a proactive part in the design process. To date, most existing research focuses on the development of designs in urban settings using high-tech devices that fundamentally require a high level of expertise, or an experienced 'guide', to navigate or create within these environments.

This thesis presents a novel application based on real-time virtual engines and extended reality (XR). The research discusses the role that tangible user interfaces (TUIs) can play in engaging the lay-person in the design process. It describes how the integration of interaction design (IxD) and augmented reality (AR) offers new opportunities, thanks to the growing availability of barrier-free technologies, to include lay-persons as active participants in the design process.

The AR markers developed within this project provide an intuitive way of addressing specific obstacles to lay-person engagement in urban design, for example, the appropriate positioning of a house on a section of land and the translation from 2D to 3D representation. These obstacles are managed through interaction with and manipulation of image-based targets encoded with Vuforia's 'virtual buttons' functionality. This method lets the lay-person cycle through different parametric design options with a degree of computational fluency they would not otherwise have. A first-person viewer is encouraged in unison with this interaction, providing a means to shift from organising space in a bird's-eye view to experiencing it from a more familiar street-view perspective.

In the early phases of the project, the focus was on establishing tools for lay-person engagement in urban design. The first iteration of the tool implemented a broadly scoped parametric augmented reality workflow, centred on transferring information from an interactive parametric program to an AR environment. The second iteration simplified this workflow, using a smartphone application that allowed easy transfer of data between the two platforms. After a refinement of scope, the final version of the tool focused purely on an AR environment that allowed accessible and proactive lay-person participation. The later stages of the project involved a more in-depth exploration of the tool's capabilities, testing in a more refined context, and critical reflection on the effectiveness of each phase. The project concludes with an overall critique and evaluation of the developed method, based on criteria outlined in similar research projects, and a framework for future research to aid the engagement of lay-people in urban design through participatory AR.
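The 'virtual buttons' interaction described in this abstract reduces to a small piece of state-machine logic: covering a button region on the printed marker advances the display to the next pre-generated design variant. The sketch below illustrates that logic in plain Python; the class, the method names, and the design options are all hypothetical stand-ins for what would, in the thesis itself, be a Unity/Vuforia script.

```python
# Logic sketch of the marker interaction: each AR marker carries a set
# of parametric design variants, and a Vuforia-style virtual button
# press advances to the next one. All names here are hypothetical.

class DesignMarker:
    def __init__(self, options):
        self.options = options          # pre-generated parametric variants
        self.index = 0                  # currently displayed variant

    def on_virtual_button_pressed(self):
        """Called when the user covers the button region on the marker."""
        self.index = (self.index + 1) % len(self.options)
        self.show(self.options[self.index])

    def show(self, option):
        # In the real tool this would swap the 3D model anchored to the
        # image target; here we just report the active variant.
        print(f"Displaying: {option}")

house = DesignMarker(["single storey", "two storey", "L-shaped plan"])
house.on_virtual_button_pressed()   # -> Displaying: two storey
house.on_virtual_button_pressed()   # -> Displaying: L-shaped plan
house.on_virtual_button_pressed()   # wraps around -> Displaying: single storey
```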


2021 · Vol. 11 (15) · pp. 6745
Author(s): José Francisco Díez-Pastor, Pedro Latorre-Carmona, José Luis Garrido-Labrador, José Miguel Ramírez-Sanz, Juan J. Rodríguez

Radar technology has evolved considerably in the last few decades. Radar systems are applied in many areas, including air traffic control at airports, ocean surveillance, and research systems, to name a few. Other types of sensors have recently appeared that can track sub-millimeter motion at high speed and with high accuracy. These millimeter-wave radars are giving rise to a myriad of new applications, from recognizing the material that close objects are made of, to recognizing hand gestures. They have also recently been used to identify how a person interacts with digital devices through the physical environment (tangible user interfaces, TUIs). In this case, the radar detects the orientation, movement, or distance of objects relative to the user's hands or the digital device. This paper presents a comparative analysis of different feature extraction techniques and classification strategies applied to a series of datasets covering problems such as material identification, element counting, and determining the orientation and distance of objects to the sensor. The results outperform previous works on these datasets, especially where accuracy was lowest, showing the benefit that feature extraction brings to classification performance.
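As a rough illustration of the kind of pipeline being compared, the sketch below pairs a hand-crafted feature extractor with two off-the-shelf scikit-learn classifiers and scores them by cross-validation. The feature set, the synthetic stand-in data, and the choice of classifiers are assumptions made for illustration; they are not the paper's actual features, datasets, or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def extract_features(signal):
    """Simple hand-crafted features from one radar sweep (illustrative)."""
    spectrum = np.abs(np.fft.rfft(signal))
    return np.array([
        signal.mean(), signal.std(),      # time-domain statistics
        np.abs(signal).max(),             # peak amplitude
        spectrum.argmax(),                # dominant frequency bin
        spectrum.mean(), spectrum.std(),  # spectral statistics
    ])

# Placeholder data standing in for real radar recordings:
# 200 sweeps of 512 samples, labelled with one of four materials.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 512))
y = rng.integers(0, 4, size=200)

X = np.array([extract_features(s) for s in X_raw])

for name, clf in [("k-NN", KNeighborsClassifier()),
                  ("Random forest", RandomForestClassifier(random_state=0))]:
    model = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```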


2021 · pp. 073563312110272
Author(s): Neila Chettaoui, Ayman Atia, Med Salim Bouhlel

Embodied learning pedagogy highlights the interconnections between the brain, the body, and the concrete environment. As a teaching method, it engages the physical body in multimodal learning experiences to develop students' cognitive processes. From this perspective, several research studies have introduced different interaction modalities to support the implementation of an embodied learning environment, such as tangible user interfaces and motion-based technologies. This paper evaluates the impact of motion-based interaction, tangible-based interaction, and multimodal interaction (merging tangible interfaces with motion-based technology) on students' learning performance. A controlled study was performed at a primary school with 36 participants (aged 7 to 9) to evaluate the educational potential of embodied interaction modalities compared to tablet-based learning. The results showed a significant difference in learning gains between the groups, as determined by a one-way ANOVA [F(3,32) = 6.32, p = .017], in favor of the multimodal learning interface. The findings suggest that a multimodal learning interface supporting richer embodied interaction, one that takes advantage of body movements and the manipulation of physical objects, may improve students' understanding of abstract concepts in educational contexts.
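For readers unfamiliar with the reported statistic, a one-way ANOVA over four groups of nine pupils yields exactly the degrees of freedom quoted above (between: 4 - 1 = 3; within: 36 - 4 = 32). The sketch below shows the mechanics with SciPy; the scores are invented purely to demonstrate the computation and are not the study's data.

```python
from scipy import stats

# One-way ANOVA across the four conditions of the study design.
# Scores are invented for illustration only: 9 pupils per group,
# reproducing the reported degrees of freedom, not the reported values.
tablet     = [52, 58, 61, 55, 60, 57, 54, 59, 56]
tangible   = [63, 60, 66, 62, 65, 61, 64, 67, 59]
motion     = [61, 64, 59, 62, 66, 60, 63, 58, 65]
multimodal = [72, 68, 75, 70, 74, 69, 73, 71, 76]

f_stat, p_value = stats.f_oneway(tablet, tangible, motion, multimodal)
print(f"F(3, 32) = {f_stat:.2f}, p = {p_value:.4f}")
```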


Sensors · 2021 · Vol. 21 (13) · pp. 4258
Author(s): Alice Krestanova, Martin Cerny, Martin Augustynek

A tangible user interface (TUI) connects physical objects with digital interfaces. It is more interactive and engaging for users than a classic graphical user interface. This article presents a descriptive overview of TUI applications in the real world, sorted into ten main application areas: teaching of traditional subjects, medicine and psychology, programming, database development, music and arts, modeling of 3D objects, modeling in architecture, literature and storytelling, adjustable TUI solutions, and commercial TUI smart toys. The paper focuses on TUIs' technical solutions and the technical constructions that influence their applicability in the real world. Based on the review, the technical concepts divide into two main approaches: a sensor-based concept and a concept based on computer vision algorithms. The sensor-based concept is discussed in terms of the wireless technologies, sensors, and feedback possibilities used in TUI applications. The computer vision approach is discussed in terms of marker-based and markerless object recognition, the use of cameras, and the use of computer vision platforms for TUI applications.
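As a concrete example of the marker-based recognition approach the review describes, the sketch below tracks printed ArUco fiducials with OpenCV (assuming OpenCV >= 4.7 with the contrib ArUco module available); the marker-to-object mapping is a hypothetical tabletop setup, not one taken from the paper.

```python
import cv2

# Marker-based object recognition: each tangible object carries a printed
# fiducial (here an ArUco tag); each camera frame is searched for tag
# corners and IDs, which map back to physical objects on the tabletop.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters()
detector = cv2.aruco.ArucoDetector(dictionary, params)

# Hypothetical mapping from marker ID to a tangible object.
OBJECTS = {0: "house block", 1: "tree token", 2: "road tile"}

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        for marker_id, quad in zip(ids.flatten(), corners):
            # quad holds the tag's four corner pixels; their mean is
            # the object's position on the interactive surface.
            cx, cy = quad[0].mean(axis=0)
            print(OBJECTS.get(int(marker_id), "unknown"), (int(cx), int(cy)))
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("TUI marker tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```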

