Information visualizations
Recently Published Documents

Total documents: 84 (five years: 17)
H-index: 17 (five years: 2)

2021 · Vol 5 (12) · pp. 81
Author(s): Ismo Rakkolainen, Ahmed Farooq, Jari Kangas, Jaakko Hakulinen, Jussi Rantala, ...

When designing extended reality (XR) applications, it is important to consider multimodal interaction techniques, which employ several human senses simultaneously. Multimodal interaction can transform how people communicate remotely, practice for tasks, entertain themselves, process information visualizations, and make decisions based on the provided information. This scoping review summarized recent advances in multimodal interaction technologies for head-mounted display (HMD)-based XR systems. Our purpose was to provide a succinct yet clear, insightful, and structured overview of emerging, underused multimodal technologies beyond standard video and audio for XR interaction, and to identify research gaps. The review aimed to help XR practitioners apply multimodal interaction techniques and to help interaction researchers direct future efforts towards relevant issues in multimodal XR. We conclude with our perspective on promising research avenues for multimodal interaction technologies.


2021 · Vol 11 (2) · pp. 1-25
Author(s): Moritz Spiller, Ying-Hsang Liu, Md Zakir Hossain, Tom Gedeon, Julia Geissler, ...

Information visualizations are an efficient means of helping users understand large amounts of complex, interconnected data; user comprehension, however, depends on individual factors such as cognitive abilities. The research literature provides evidence that user-adaptive information visualizations positively impact users' performance in visualization tasks. This study contributes toward the development of a computational model that predicts users' success in visual search tasks from eye gaze data and can thereby drive such user-adaptive systems. State-of-the-art deep learning models for time series classification were trained on sequential eye gaze data obtained from 40 study participants' interaction with a circular and an organizational graph. The results suggest that such models yield higher accuracy than a baseline classifier and the models previously used for this purpose. In particular, a Multivariate Long Short-Term Memory Fully Convolutional Network (MLSTM-FCN) shows encouraging performance for use in online user-adaptive systems. Given this finding, such a computational model can infer a user's need for support during interaction with a graph and trigger appropriate interventions in user-adaptive information visualization systems. This facilitates the design of such systems, since additional interaction data, such as mouse clicks, are not required.
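The abstract gives no implementation details, but the two-branch architecture it names can be sketched in a few lines of Keras. The sketch below is a simplified, illustrative assumption rather than the authors' actual model: it omits the dimension shuffle and squeeze-and-excite blocks of the full MLSTM-FCN, and the input length, gaze features (e.g. fixation x, y, duration, pupil size), and hyperparameters are placeholders.

```python
# Minimal MLSTM-FCN-style sketch for classifying fixed-length, multivariate
# gaze sequences into "successful" vs "unsuccessful" visual search.
# All shapes and hyperparameters below are illustrative assumptions.
from tensorflow.keras import layers, models

def build_mlstm_fcn(timesteps=100, n_features=4, n_classes=2):
    """Two-branch model: an LSTM branch and a fully convolutional branch,
    concatenated before the softmax output (simplified variant)."""
    inputs = layers.Input(shape=(timesteps, n_features))

    # LSTM branch summarizes the gaze sequence into a single vector.
    lstm = layers.LSTM(64)(inputs)
    lstm = layers.Dropout(0.5)(lstm)

    # Fully convolutional branch: stacked 1D convolutions with batch norm,
    # followed by global average pooling over time.
    conv = layers.Conv1D(128, 8, padding="same", activation="relu")(inputs)
    conv = layers.BatchNormalization()(conv)
    conv = layers.Conv1D(256, 5, padding="same", activation="relu")(conv)
    conv = layers.BatchNormalization()(conv)
    conv = layers.Conv1D(128, 3, padding="same", activation="relu")(conv)
    conv = layers.BatchNormalization()(conv)
    conv = layers.GlobalAveragePooling1D()(conv)

    # Merge both branches and classify.
    merged = layers.concatenate([lstm, conv])
    outputs = layers.Dense(n_classes, activation="softmax")(merged)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_mlstm_fcn()
model.summary()
```

In an online user-adaptive setting, such a model would presumably be applied to a sliding window of recent fixations so that an intervention can be triggered while the visual search task is still in progress.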


AIDS Care · 2021 · pp. 1-7
Author(s): Samantha Stonbraker, Gabriella Flynn, Maureen George, Silvia Cunto-Amesty, Carmela Alcántara, ...

2020 · Vol 88 · pp. 103173
Author(s): Joseph K. Nuamah, Younho Seong, Steven Jiang, Eui Park, Daniel Mountjoy
