From Touchpad to Smart Lens

2013 ◽  
Vol 5 (2) ◽  
pp. 1-20 ◽  
Author(s):  
Matthias Baldauf ◽  
Peter Fröhlich ◽  
Jasmin Buchta ◽  
Theresa Stürmer

Today’s smartphones provide the technical means to serve as interfaces for public displays in various ways. Even though recent research has identified several new approaches to mobile-display interaction, inter-technique comparisons of the respective methods are scarce. The authors conducted an experimental user study on four currently relevant mobile-display interaction techniques (‘Touchpad’, ‘Pointer’, ‘Mini Video’, and ‘Smart Lens’) and learned that their suitability strongly depends on the task and use case at hand. The study results indicate that mobile-display interactions based on a traditional touchpad metaphor are time-consuming but highly accurate in standard target acquisition tasks. The direct interaction techniques Mini Video and Smart Lens had comparably good completion times, and Mini Video in particular appeared to be best suited for complex visual manipulation tasks such as drawing. Smartphone-based pointing turned out to be generally inferior to the other alternatives. Examples of applying these differentiated results to real-world use cases are provided.

Author(s):  
Matthias Baldauf ◽  
Peter Fröhlich

Today's smartphones provide the technical means to serve as interfaces for public displays in various ways. Even though recent research has identified several approaches to mobile-display interaction, inter-technique comparisons of the respective methods are scarce. In this chapter, the authors present an experimental user study on four currently relevant mobile-display interaction techniques (‘Touchpad’, ‘Pointer’, ‘Mini Video’, and ‘Smart Lens’). The results indicate that mobile-display interactions based on a traditional touchpad metaphor are time-consuming but highly accurate in standard target acquisition tasks. The direct interaction techniques Mini Video and Smart Lens had comparably good completion times, and Mini Video in particular appeared to be best suited for complex visual manipulation tasks such as drawing. Smartphone-based pointing turned out to be generally inferior to the other alternatives. Finally, the authors introduce state-of-the-art browser-based remote controls as one promising way towards more serendipitous mobile interactions and outline future research directions.


Author(s):  
Tuochao Chen ◽  
Yaxuan Li ◽  
Songyun Tao ◽  
Hyunchul Lim ◽  
Mose Sakashita ◽  
...  

Facial expressions are highly informative for computers to understand and interpret a person's mental and physical activities. However, continuously tracking facial expressions, especially when the user is in motion, is challenging. This paper presents NeckFace, a wearable sensing technology that can continuously track full facial expressions using a neck-piece embedded with infrared (IR) cameras. A customized deep learning pipeline called NeckNet, based on ResNet34, is developed to learn from the captured IR images of the chin and face and output 52 parameters representing the facial expressions. We demonstrated NeckFace on two common neck-mounted form factors, a necklace and a neckband (e.g., neck-mounted headphones), which we evaluated in a user study with 13 participants. The study results showed that NeckFace worked well when the participants were sitting, walking, or after remounting the device. We discuss the challenges and opportunities of using NeckFace in real-world applications.
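
The abstract pins down only the backbone (ResNet34) and the output size (52 expression parameters). The following PyTorch sketch shows what such a regressor could look like; the single-channel stem, input resolution, and layer choices are illustrative assumptions, not the published NeckNet architecture.

```python
# Minimal sketch of a NeckNet-style regressor: a ResNet34 backbone that
# maps one chin/face IR image to 52 facial-expression parameters.
# Illustrative assumptions: single-channel input, 224x224 frames.
import torch
import torch.nn as nn
from torchvision.models import resnet34

class NeckNetSketch(nn.Module):
    def __init__(self, num_params: int = 52):
        super().__init__()
        backbone = resnet34(weights=None)
        # IR frames are single-channel; swap out the 3-channel stem.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        # Regress the 52 expression parameters from the final features.
        backbone.fc = nn.Linear(backbone.fc.in_features, num_params)
        self.backbone = backbone

    def forward(self, ir_image: torch.Tensor) -> torch.Tensor:
        return self.backbone(ir_image)

model = NeckNetSketch()
frame = torch.randn(1, 1, 224, 224)  # one normalized IR frame (batch of 1)
params = model(frame)                # shape: (1, 52)
```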


2009 ◽  
pp. 984-997
Author(s):  
Anders Henrysson ◽  
Mark Ollila ◽  
Mark Billinghurst

Mobile phones are evolving into the ideal platform for augmented reality (AR). In this chapter, we describe how augmented reality applications can be developed for mobile phones and the interaction metaphors that are ideally suited for this platform. Several sample applications are described which explore different interaction techniques. User study results show that moving the phone to interact with virtual content is an intuitive way to select and position virtual objects. A collaborative AR game is also presented with an evaluation study. Users preferred playing with the collaborative AR interface over a non-AR interface and also found physical phone motion to be a very natural input method. The results discussed in this chapter should assist researchers in developing their own mobile phone-based AR applications.


Buildings ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 277
Author(s):  
Pierre Raimbaud ◽  
Ruding Lou ◽  
Florence Danglade ◽  
Pablo Figueroa ◽  
Jose Tiberio Hernandez ◽  
...  

Virtual reality (VR) is a computer-based technology that professionals in many different fields can use to simulate an environment with a high feeling of presence and immersion. Nonetheless, one main issue when designing such environments is providing user interactions that are adapted to the tasks performed by the users. We therefore propose a task-centred methodology to design and evaluate these user interactions. Our methodology determines user interaction designs based on previous VR studies and evaluates them through a task-related computation of usability. Here, we applied it to a hazard identification case study, since VR can be used in a preventive approach to improve worksite safety. Once this task and its related user interactions were analysed with our methodology, we obtained two possible designs of interaction techniques for the worksite exploration subtask. For their usability evaluation, we compared our task-centred evaluation approach to a non-task-centred one. Our hypothesis was that our approach could lead to different interpretations of user study results than a non-task-centred one. Our results confirmed this hypothesis: for our two interaction techniques, the weighted usability scores from our task-centred approach led to different interpretations than the unweighted ones.
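
The contrast between the task-centred and non-task-centred evaluations comes down to weighting per-criterion usability scores by their relevance to the task rather than averaging them uniformly. A minimal Python sketch of that contrast, with hypothetical criteria, weights, and scores (not the study's values):

```python
# Hedged sketch: task-centred (weighted) vs. non-task-centred (unweighted)
# usability scores. Criteria, weights, and scores are hypothetical
# placeholders, not the study's values.
criteria = ["effectiveness", "efficiency", "satisfaction"]
scores = {"technique_A": [0.9, 0.6, 0.7],
          "technique_B": [0.7, 0.8, 0.8]}
task_weights = [0.6, 0.3, 0.1]  # relevance of each criterion to the subtask

for name, vals in scores.items():
    unweighted = sum(vals) / len(vals)
    weighted = sum(w * v for w, v in zip(task_weights, vals))
    print(f"{name}: unweighted={unweighted:.2f}, weighted={weighted:.2f}")
# The two averages can rank the techniques differently, which is exactly
# the kind of interpretation shift the study investigates.
```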


2021 ◽  
Author(s):  
Marius Fechter ◽  
Benjamin Schleich ◽  
Sandro Wartzack

Abstract: Virtual and augmented reality allow the use of natural user interfaces, such as realistic finger interaction, even for purposes that were previously dominated by the WIMP paradigm. This new form of interaction is particularly suitable for applications involving manipulation tasks in 3D space, such as CAD assembly modeling. The objective of this paper is to evaluate the suitability of natural interaction for CAD assembly modeling in virtual reality. An advantage of natural interaction over conventional operation by computer mouse would indicate development potential for the user interfaces of current CAD applications. Our approach is based on two main elements. Firstly, a novel natural user interface for realistic finger interaction enables the user to interact with virtual objects as if they were physical ones. Secondly, an algorithm automatically detects constraints between CAD components based solely on their geometry and spatial location. In order to assess the usability of the natural CAD assembly modeling approach in comparison with the assembly procedure in current WIMP-operated CAD software, we present a comparative user study. Results show that the VR method including natural finger interaction significantly outperforms the desktop-based CAD application in terms of efficiency and ease of use.
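
The constraint-detection step can be pictured as a purely geometric test. The sketch below flags two cylindrical faces as a candidate concentric mate when their axes align and their radii match within tolerance; the data model and tolerances are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative geometric test for a candidate "concentric" constraint
# between two cylindrical faces: axes parallel, axes coincident, radii
# equal within tolerance. Data model and tolerances are assumptions,
# not the paper's algorithm.
import numpy as np

def concentric_candidate(axis_a, point_a, radius_a,
                         axis_b, point_b, radius_b,
                         angle_tol=1e-2, dist_tol=1e-3):
    a = np.asarray(axis_a, float); a /= np.linalg.norm(a)
    b = np.asarray(axis_b, float); b /= np.linalg.norm(b)
    # Axes parallel (or anti-parallel)?
    if 1.0 - abs(a @ b) > angle_tol:
        return False
    # Axis B must lie on axis A: check the perpendicular offset.
    offset = np.asarray(point_b, float) - np.asarray(point_a, float)
    perp = offset - (offset @ a) * a
    if np.linalg.norm(perp) > dist_tol:
        return False
    return abs(radius_a - radius_b) <= dist_tol

# A peg and a hole sharing the z-axis with equal radii -> True
print(concentric_candidate([0, 0, 1], [0, 0, 0], 5.0,
                           [0, 0, 1], [0, 0, 10], 5.0))
```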


2021 ◽  
pp. 1-27 ◽  
Author(s):  
Brandon de la Cuesta ◽  
Naoki Egami ◽  
Kosuke Imai

Abstract: Conjoint analysis has become popular among social scientists for measuring multidimensional preferences. When analyzing such experiments, researchers often focus on the average marginal component effect (AMCE), which represents the causal effect of a single profile attribute while averaging over the remaining attributes. What has been overlooked, however, is the fact that the AMCE critically relies upon the distribution of the other attributes used for the averaging. Although most experiments employ the uniform distribution, which weights each profile equally, both the actual distribution of profiles in the real world and the distribution of theoretical interest are often far from uniform. This mismatch can severely compromise the external validity of conjoint analysis. We empirically demonstrate that estimates of the AMCE can be substantially different when averaging over the target profile distribution instead of the uniform one. We propose new experimental designs and estimation methods that incorporate substantive knowledge about the profile distribution. We illustrate our methodology through two empirical applications, one using a real-world distribution and the other based on a counterfactual distribution motivated by a theoretical consideration. The proposed methodology is implemented through an open-source software package.
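
The abstract's central point is that the AMCE depends on the distribution used to average over the other attributes. The toy computation below makes that concrete with fabricated numbers: the same attribute effect, averaged under a uniform versus a skewed target distribution of a second attribute, yields different AMCEs.

```python
# Toy illustration: the AMCE of attribute x depends on the distribution
# used to average over another attribute z. All numbers are fabricated.
import numpy as np

def outcome(x, z):
    # Choice probability; the effect of x is larger when z = 1.
    return 0.2 + 0.1 * x + 0.4 * x * z

z_values = np.array([0, 1])
uniform_w = np.array([0.5, 0.5])  # uniform distribution over z
target_w = np.array([0.9, 0.1])   # a skewed "real-world" distribution

def amce(weights):
    effects = outcome(1, z_values) - outcome(0, z_values)
    return float(effects @ weights)

print(amce(uniform_w))  # 0.30 under the uniform distribution
print(amce(target_w))   # 0.14 under the target distribution
```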


2021 ◽  
pp. 193229682098654
Author(s):  
Chanika Alahakoon ◽  
Malindu Fernando ◽  
Charith Galappaththy ◽  
Peter Lazzarini ◽  
Joseph V. Moxon ◽  
...  

Introduction: The inter- and intra-observer reproducibility of measuring the Wound, Ischemia, and foot Infection (WIfI) score is unknown. The aims of this study were to compare the reproducibility, completion times, and ability to predict 30-day amputation of the WIfI, University of Texas Wound Classification System (UTWCS), Site, Ischemia, Neuropathy, Bacterial Infection and Depth (SINBAD), and Wagner classification systems using photographs of diabetes-related foot ulcers. Methods: Three trained observers independently scored the diabetes-related foot ulcers of 45 participants on two separate occasions using photographs. The inter- and intra-observer reproducibility were calculated using Krippendorff's α. The completion times were compared with Kruskal-Wallis and Dunn's post-hoc tests. The ability of the scores to predict 30-day amputation rates was assessed using receiver operating characteristic curves and areas under the curve. Results: There was excellent intra-observer agreement (α > 0.900) and substantial agreement between observers (α = 0.788) in WIfI scoring. There was moderate, substantial, or excellent agreement within the three observers (α > 0.599 in all instances except one) and fair or moderate agreement between observers (UTWCS α = 0.306, SINBAD α = 0.516, Wagner α = 0.374) for the other three classification systems. The WIfI score took significantly longer (P < .001) to complete than the other three scores (medians and interquartile ranges of the WIfI, UTWCS, SINBAD, and Wagner being 1.00 [0.88-1.00], 0.75 [0.50-0.75], 0.50 [0.50-0.50], and 0.25 [0.25-0.50] minutes, respectively). None of the classifications was predictive of 30-day amputation (P > .05 in all instances). Conclusion: The WIfI score can be completed with substantial agreement between trained observers but was not predictive of 30-day amputation.
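
For readers who want to reproduce the agreement analysis on their own data, Krippendorff's α is available in the open-source `krippendorff` Python package; the ratings below are made up, and the package choice is an assumption rather than the authors' stated tool.

```python
# Sketch of the agreement computation: Krippendorff's alpha across three
# observers scoring the same ulcers (pip install krippendorff). The
# ratings are made up; the package is an assumption, not the authors'
# stated tool.
import numpy as np
import krippendorff

# Rows: observers; columns: ulcers; values: ordinal scores (np.nan = missing).
ratings = np.array([
    [2, 3, 1, 2, 4],
    [2, 3, 1, 3, 4],
    [2, 2, 1, 2, 4],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha: {alpha:.3f}")
```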


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Elisabeth Gibert-Sotelo ◽  
Isabel Pujol Payet

Abstract: The interest in morphology and its interaction with the other grammatical components has increased in the last twenty years, with new approaches coming into play to provide more accurate analyses of the processes involved in morphological construal. This special issue is a valuable contribution to this field of study. It gathers a selection of five papers from the Morphology and Syntax workshop (University of Girona, July 2017) which, on the basis of Romance and Latin phenomena, discuss word structure and its decomposition into hierarchies of features. Even though the papers share a compositional view of lexical items, they adopt different formal theoretical approaches to the lexicon-syntax interface, thus showing the benefit of bearing in mind the possibilities that each framework provides. This introductory paper serves as a guide for the readers of this special collection and offers an overview of the topics dealt with in each contribution.


Robotics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 68
Author(s):  
Lei Shi ◽  
Cosmin Copot ◽  
Steve Vanlanduit

In gaze-based Human-Robot Interaction (HRI), it is important to determine human visual intention for interacting with robots. One typical HRI scenario is that a human selects an object by gaze and a robotic manipulator picks the object up. In this work, we propose an approach, GazeEMD, that can be used to detect whether a human is looking at an object in HRI applications. We use the Earth Mover's Distance (EMD) to measure the similarity between the hypothetical gazes at objects and the actual gazes. The similarity score is then used to determine whether the human visual intention is on the object. We compare our approach with a fixation-based method and HitScan with a run length in the scenario of selecting daily objects by gaze. Our experimental results indicate that the GazeEMD approach has higher accuracy and is more robust to noise than the other approaches. Hence, users can lessen their cognitive load by using our approach in real-world HRI scenarios.
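
For a 1D signal, the Earth Mover's Distance reduces to the Wasserstein distance available in SciPy. The sketch below is a deliberately simplified illustration of the GazeEMD idea: compare actual gaze samples against a hypothetical gaze distribution centred on the object, then threshold the score. The paper's 2D formulation and threshold value are not reproduced here.

```python
# Simplified 1D illustration of the GazeEMD idea: score how closely the
# recorded gaze matches a hypothetical gaze distribution centred on an
# object, using the Earth Mover's (Wasserstein) distance, then threshold.
# The paper's 2D formulation and threshold value are not reproduced here.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
object_centre = 0.0
hypothetical = rng.normal(object_centre, 0.5, 200)  # gaze if looking at object
actual = rng.normal(0.2, 0.6, 200)                  # recorded gaze samples

emd = wasserstein_distance(hypothetical, actual)
THRESHOLD = 0.5  # assumed decision threshold
print(f"EMD = {emd:.3f} -> looking at object: {emd < THRESHOLD}")
```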

