Emerging ExG-based NUI Inputs in Extended Realities: A Bottom-up Survey

2021 ◽ Vol 11 (2) ◽ pp. 1-49
Author(s): Kirill A. Shatilov ◽ Dimitris Chatzopoulos ◽ Lik-Hang Lee ◽ Pan Hui

Incremental and quantitative improvements of two-way interactions with extended realities (XR) are contributing toward a qualitative leap into a state of XR ecosystems that are efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR, among them computational and power limitations of portable hardware, social acceptance of novel interaction protocols, and the usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures via electromyography, and gaze-tracking electrooculography input interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural, intuitive actions, enriching interactions with XR. This survey provides a bottom-up overview starting from (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) a discussion of the social acceptance of such applications and technologies, and (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based natural user interface inputs can introduce to the area of XR.
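As a purely illustrative sketch of how one ExG modality might be turned into an XR input event (this is not the survey's or any cited system's implementation), the following Python code classifies a windowed surface-EMG signal into coarse gesture labels using a root-mean-square amplitude feature. The sampling rate, channel count, and thresholds are hypothetical values chosen for the example.

```python
import numpy as np

# Hypothetical parameters: 4-channel surface EMG sampled at 1 kHz,
# analysed in 200 ms windows (assumed values, not from the survey).
SAMPLE_RATE_HZ = 1000
WINDOW_MS = 200
REST_THRESHOLD = 0.05   # RMS below this on all channels -> no gesture (assumed)
PINCH_THRESHOLD = 0.20  # RMS above this on channel 0 -> "pinch" (assumed)

def rms(window: np.ndarray) -> np.ndarray:
    """Root-mean-square amplitude per channel for one analysis window."""
    return np.sqrt(np.mean(np.square(window), axis=0))

def classify_window(window: np.ndarray) -> str:
    """Map a (samples, channels) EMG window to a coarse gesture label."""
    features = rms(window)
    if features.max() < REST_THRESHOLD:
        return "rest"
    if features[0] > PINCH_THRESHOLD:
        return "pinch"
    return "open_hand"

if __name__ == "__main__":
    # Simulated window standing in for real ExG samples.
    samples = int(SAMPLE_RATE_HZ * WINDOW_MS / 1000)
    fake_window = 0.3 * np.random.randn(samples, 4)
    print(classify_window(fake_window))
```

In a real ExG pipeline the threshold rule would typically be replaced by a trained classifier and preceded by filtering and artifact removal; the sketch only shows where a gesture label would be produced for the XR input layer.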

2021 ◽ Vol 1 ◽ pp. 283-292
Author(s): Jakob Harlan ◽ Benjamin Schleich ◽ Sandro Wartzack

Abstract. The increased availability of affordable virtual reality hardware in recent years has boosted research and development of such systems for many fields of application. While extended reality systems are well established for the visualization of product data, immersive authoring tools that can create and modify that data are yet to see widespread productive use. Making use of building blocks, we see the possibility that such tools allow quick expression of spatial concepts, even for non-expert users. Optical hand-tracking technology allows the implementation of this immersive modeling using natural user interfaces, in which users manipulate the virtual objects with their bare hands. In this work, we present a systematic collection of natural interactions suited for immersive building-block-based modeling systems. The interactions are conceptually described and categorized by the task they fulfil.
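To make the idea of bare-hand, building-block manipulation more concrete, here is a minimal Python sketch of one such interaction, pinch-to-grab; it is not the authors' system. Fingertip positions are assumed to come from an optical hand tracker, and the pinch and reach distances are invented example values.

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import math

PINCH_THRESHOLD_M = 0.03  # fingertip distance counted as a pinch (assumed value)
REACH_RADIUS_M = 0.10     # how close the pinch must be to a block (assumed value)

@dataclass
class Block:
    position: Tuple[float, float, float]  # block centre in metres

def update_grab(blocks, thumb_tip, index_tip, held: Optional[Block]):
    """One frame of a pinch-to-grab interaction.

    When thumb and index fingertips come close, the nearest block within
    reach is grabbed; while the pinch is held, the block follows the
    midpoint of the two fingertips; releasing the pinch drops it.
    """
    pinching = math.dist(thumb_tip, index_tip) < PINCH_THRESHOLD_M
    midpoint = tuple((t + i) / 2 for t, i in zip(thumb_tip, index_tip))

    if pinching and held is None and blocks:
        candidate = min(blocks, key=lambda b: math.dist(b.position, midpoint))
        if math.dist(candidate.position, midpoint) < REACH_RADIUS_M:
            held = candidate          # grab the closest block within reach
    elif not pinching:
        held = None                   # release on pinch end

    if held is not None:
        held.position = midpoint      # block follows the hand
    return held

if __name__ == "__main__":
    blocks = [Block((0.0, 0.0, 0.5)), Block((0.2, 0.0, 0.5))]
    held = update_grab(blocks, (0.01, 0.0, 0.5), (0.02, 0.0, 0.5), None)
    print(held)  # first block grabbed and moved to the fingertip midpoint
```

A full collection of interactions, as described in the paper, would pair similar per-frame rules with other tasks such as creating, scaling, or deleting blocks.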


Semantic Web ◽ 2021 ◽ pp. 1-16
Author(s): Esko Ikkala ◽ Eero Hyvönen ◽ Heikki Rantala ◽ Mikko Koho

This paper presents a new software framework, Sampo-UI, for developing user interfaces for semantic portals. The goal is to provide the end-user with multiple application perspectives to Linked Data knowledge graphs, and a two-step usage cycle based on faceted search combined with ready-to-use tooling for data analysis. For the software developer, the Sampo-UI framework makes it possible to create highly customizable, user-friendly, and responsive user interfaces using current state-of-the-art JavaScript libraries and data from SPARQL endpoints, while saving substantial coding effort. Sampo-UI is published on GitHub under the open MIT License and has been utilized in several internal and external projects. The framework has been used thus far in creating six published and five forthcoming portals, mostly related to the Cultural Heritage domain, that have had tens of thousands of end-users on the Web.
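Sampo-UI itself is a JavaScript framework; purely as an illustration of the kind of SPARQL-endpoint access such portals are built on, the sketch below sends a SELECT query over the standard SPARQL 1.1 Protocol (HTTP GET with a `query` parameter and a JSON results Accept header). The endpoint and query are generic examples, not part of Sampo-UI.

```python
import requests

# Example public endpoint; any SPARQL 1.1 Protocol endpoint works the same way.
ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q5 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

def run_query(endpoint: str, query: str):
    """Execute a SELECT query and return the JSON result bindings."""
    response = requests.get(
        endpoint,
        params={"query": query},
        headers={"Accept": "application/sparql-results+json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["results"]["bindings"]

if __name__ == "__main__":
    for row in run_query(ENDPOINT, QUERY):
        print(row["itemLabel"]["value"])
```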


Author(s): Roman Bruch ◽ Paul M. Scheikl ◽ Ralf Mikut ◽ Felix Loosli ◽ Markus Reischl

Behavioral analysis of moving animals relies on a faithful recording and track analysis to extract relevant parameters of movement. To study group behavior and social interactions, often simultaneous analyses of individuals are required. To detect social interactions, for example to identify the leader of a group as opposed to followers, one needs an error-free segmentation of individual tracks throughout time. While automated tracking algorithms exist that are quick and easy to use, inevitable errors will occur during tracking. To solve this problem, we introduce a robust algorithm called epiTracker for segmentation and tracking of multiple animals in two-dimensional (2D) videos along with an easy-to-use correction method that allows one to obtain error-free segmentation. We have implemented two graphical user interfaces to allow user-friendly control of the functions. Using six labeled 2D datasets, the effort to obtain accurate labels is quantified and compared to alternative available software solutions. Both the labeled datasets and the software are publicly available.
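The epiTracker algorithm is not reproduced here; as a minimal sketch of the general segmentation-and-linking idea that multi-animal trackers build on, the following Python code thresholds each frame, labels connected foreground regions as animals, and links detections across frames by greedy nearest-neighbour matching of centroids. The threshold value and frame format are assumptions for the example.

```python
import numpy as np
from scipy import ndimage

THRESHOLD = 0.5  # intensity separating animals from background (assumed value)

def segment(frame: np.ndarray) -> np.ndarray:
    """Return centroids (n, 2) of connected foreground regions in one frame."""
    mask = frame > THRESHOLD
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.empty((0, 2))
    return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))

def link(prev_centroids: np.ndarray, centroids: np.ndarray) -> list:
    """Greedy nearest-neighbour assignment of current detections to previous ones."""
    assignments = []
    for c in centroids:
        if len(prev_centroids) == 0:
            assignments.append(None)
            continue
        d = np.linalg.norm(prev_centroids - c, axis=1)
        assignments.append(int(np.argmin(d)))
    return assignments

if __name__ == "__main__":
    frame1 = np.zeros((100, 100)); frame1[10:15, 10:15] = 1.0; frame1[60:65, 70:75] = 1.0
    frame2 = np.zeros((100, 100)); frame2[12:17, 12:17] = 1.0; frame2[58:63, 68:73] = 1.0
    print(link(segment(frame1), segment(frame2)))  # expected: [0, 1]
```

Greedy matching of this kind is exactly where identity swaps and other tracking errors arise when animals cross paths, which is why the paper pairs automated tracking with an interactive correction step.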


2015 ◽ Vol 25 (1) ◽ pp. 17-34
Author(s): Juan-Fernando Martin-SanJose ◽ M.-Carmen Juan ◽ Ramón Mollá ◽ Roberto Vivó

2011 ◽ Vol 464 ◽ pp. 57-60
Author(s): Yong Zhang ◽ Jun Fang Ni ◽ Peng Liu

Following an object-oriented design, a system for the reconstruction and display of 3D medical images has been designed and implemented. The overall software structure is built on VC++ 6.0 and the display capabilities of the Open Graphics Library (OpenGL). The functional modules, such as acquisition of encoded 3D data, pre-processing, reconstruction, and display, are implemented as customized classes. Finally, the software system provides user-friendly graphical user interfaces, efficient data processing and reconstruction, and rapid graphic display.
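The original system is written in VC++ with OpenGL; purely to illustrate the same modular pipeline (acquisition, pre-processing, surface reconstruction, display) in a compact form, here is a hedged Python sketch that extracts an iso-surface from a synthetic volume with marching cubes. The synthetic data, iso-level, and library choices are assumptions for the example, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage
from skimage import measure

class VolumePipeline:
    """Sketch mirroring acquisition, pre-processing, reconstruction, and display modules."""

    def acquire(self) -> np.ndarray:
        # Stand-in for loading encoded 3D scan data: a synthetic sphere volume.
        z, y, x = np.mgrid[-32:32, -32:32, -32:32]
        return (np.sqrt(x**2 + y**2 + z**2) < 20).astype(float)

    def preprocess(self, volume: np.ndarray) -> np.ndarray:
        # Simple Gaussian smoothing as a placeholder for real pre-processing.
        return ndimage.gaussian_filter(volume, sigma=1.0)

    def reconstruct(self, volume: np.ndarray):
        # Iso-surface extraction via marching cubes (iso-level 0.5 is assumed).
        verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5)
        return verts, faces, normals

    def display(self, verts, faces):
        # Display stub: a real system would hand the mesh to an OpenGL renderer.
        print(f"mesh with {len(verts)} vertices and {len(faces)} triangles")

if __name__ == "__main__":
    pipeline = VolumePipeline()
    vol = pipeline.preprocess(pipeline.acquire())
    verts, faces, _ = pipeline.reconstruct(vol)
    pipeline.display(verts, faces)
```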


Author(s): Shannon K. T. Bailey ◽ Daphne E. Whitmer ◽ Bradford L. Schroeder ◽ Valerie K. Sims

Human-computer interfaces are changing to meet the evolving needs of users and overcome limitations of previous generations of computer systems. The current state of computers consists largely of graphical user interfaces (GUI) that incorporate windows, icons, menus, and pointers (WIMPs) as visual representations of computer interactions controlled via user input on a mouse and keyboard. Although this model of interface has dominated human-computer interaction for decades, WIMPs require an extra step between the user’s intent and the computer action, imposing limitations on the interaction and introducing cognitive demands (van Dam, 1997). Alternatively, natural user interfaces (NUI) employ input methods such as speech, touch, and gesture commands. With NUIs, users can interact directly with the computer without using an intermediary device (e.g., mouse, keyboard). Using the body as an input device may be more “natural” because it allows the user to apply existing knowledge of how to interact with the world (Roupé, Bosch-Sijtsema, & Johansson, 2014). To utilize the potential of natural interfaces, research must first determine what interactions can be considered natural. For the purpose of this paper, we focus on the naturalness of gesture-based interfaces. The purpose of this study was to determine how people perform natural gesture-based computer actions. To answer this question, we first narrowed down potential gestures that would be considered natural for an action. In a previous study, participants (n=17) were asked how they would gesture to interact with a computer to complete a series of actions. After narrowing down the potential natural gestures by calculating the most frequently performed gestures for each action, we asked participants (n=188) to rate the naturalness of the gestures in the current study. Participants each watched 26 videos of gestures (3-5 seconds each) and were asked how natural or arbitrary they interpreted each gesture for the series of computer commands (e.g., move object left, shrink object, select object, etc.). The gestures in these videos included the 17 gestures that were most often performed in the previous study in which participants were asked what gesture they would naturally use to complete the computer actions. Nine gestures were also included that were created arbitrarily to act as a comparison to the natural gestures. By analyzing the ratings on a continuum from “Completely Arbitrary” to “Completely Natural,” we found that the natural gestures people produced in the first study were also interpreted as the intended action by this separate sample of participants. All the gestures that were rated as either “Mostly Natural” or “Completely Natural” by participants corresponded to how the object manipulation would be performed physically. For example, the gesture video that depicts a fist closing was rated as “natural” by participants for the action of “selecting an object.” All of the gestures that were created arbitrarily were interpreted as “arbitrary” when they did not correspond to the physical action. Determining how people naturally gesture computer commands and how people interpret those gestures is useful because it can inform the development of NUIs and contributes to the literature on what makes gestures seem “natural.”
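As a small, hedged illustration of the kind of rating aggregation the study describes (mapping an arbitrary-to-natural continuum to numbers and averaging per gesture), the Python sketch below uses an invented numeric coding and invented example ratings; neither the scale values nor the data come from the study.

```python
from statistics import mean

# Hypothetical numeric coding of the rating continuum (not the study's coding).
SCALE = {
    "Completely Arbitrary": 1,
    "Mostly Arbitrary": 2,
    "Mostly Natural": 3,
    "Completely Natural": 4,
}

# Invented example ratings: gesture -> list of participant labels.
ratings = {
    "fist_close_select": ["Completely Natural", "Mostly Natural", "Completely Natural"],
    "arbitrary_wave_shrink": ["Completely Arbitrary", "Mostly Arbitrary"],
}

def mean_naturalness(labels):
    """Average numeric naturalness score for one gesture."""
    return mean(SCALE[label] for label in labels)

for gesture, labels in ratings.items():
    score = mean_naturalness(labels)
    verdict = "natural" if score >= 3 else "arbitrary"
    print(f"{gesture}: {score:.2f} ({verdict})")
```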


2021 ◽ Vol 17 (1) ◽ pp. 247-255
Author(s): Konstantinos CHARISI ◽ Andreas TSIGOPOULOS ◽ Spyridon KINTZIOS ◽ Vassilis PAPATAXIARHIS

Abstract. The paper aims to introduce the ARESIBO project to a broader but targeted audience and to outline its main scope and achievements. ARESIBO stands for “Augmented Reality Enriched Situation awareness for Border security”. In recent years, border security has become one of the highest political priorities in the EU and needs the support of every Member State. The ARESIBO project is developed under the HORIZON 2020 EC Research and Innovation programme and is the joint effort of 20 participating entities from 11 countries. Scientific excellence and technological innovation are top priorities, as ARESIBO advances the current state of the art through technological breakthroughs in Mobile Augmented Reality and Wearables, Robust and Secure Telecommunications, Robot Swarming Techniques and Planning of Context-Aware Autonomous Missions, and Artificial Intelligence (AI), in order to implement user-friendly tools for border and coast guards. The system aims to improve the cognitive capabilities and perception of border guards through intuitive user interfaces that will help them acquire improved situation awareness by filtering the huge amount of available information from multiple sources. Ultimately, it will help them respond faster and more effectively when a critical situation occurs.

