Visual Search Target Inference in Natural Interaction Settings with Machine Learning

Author(s):  
Michael Barz ◽  
Sven Stauden ◽  
Daniel Sonntag
2020 ◽  
pp. 1096-1117


Author(s):  
Rodrigo Ibañez ◽  
Alvaro Soria ◽  
Alfredo Raul Teyseyre ◽  
Luis Berdun ◽  
Marcelo Ricardo Campo

Progress and technological innovation achieved in recent years, particularly in the area of entertainment and games, have promoted the creation of more natural and intuitive human-computer interfaces. For example, natural interaction devices such as the Microsoft Kinect allow users to explore a more expressive way of human-computer communication by recognizing body gestures. In this context, several Supervised Machine Learning techniques have been proposed to recognize gestures; however, few works have offered a comparative study of the behavior of these techniques. Therefore, this chapter presents an evaluation of four Machine Learning techniques using the Microsoft Research Cambridge (MSRC-12) Kinect gesture dataset, which involves 30 people performing 12 different gestures. Accuracy was evaluated for each technique, with correct-recognition rates close to 100% in some cases. In short, the experiments performed in this chapter are likely to provide new insights into the application of Machine Learning techniques to facilitate the task of gesture recognition.
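As a rough illustration of such a comparison, here is a minimal sketch in scikit-learn. The abstract does not name the four techniques evaluated, so the classifiers below, the windowed skeleton features, and the synthetic stand-in for the MSRC-12 data are all illustrative assumptions rather than the chapter's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for MSRC-12 windows: 20 joints x 3 coordinates,
# flattened per frame and averaged over a sliding window; 12 gesture classes.
X = rng.normal(size=(600, 60))
y = rng.integers(0, 12, size=600)

models = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf"),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```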


2021 ◽  
Vol 11 (5) ◽  
pp. 2015
Author(s):  
Marc Kurz ◽  
Robert Gstoettner ◽  
Erik Sonnleitner

Since electronic components are constantly getting smaller, sensors and logic boards can be fitted into smaller enclosures. This miniaturization has led to the development of smart rings containing motion sensors. The sensors of such smart rings can be used to recognize hand and finger gestures, enabling natural interaction. Unlike vision-based systems, wearable systems do not require a special infrastructure to operate in. Smart rings are highly mobile and can communicate wirelessly with various devices. They could potentially serve as a touchless user interface for countless applications, possibly leading to new developments in many areas of computer science and human–computer interaction. Specifically, the accelerometer and gyroscope sensors of a custom-built smart ring and of a smartwatch are used to train multiple machine learning models. The accuracy of the models is compared to evaluate whether smart rings or smartwatches are better suited for gesture recognition tasks. All real-time data processing to predict 12 different gesture classes is done on a smartphone, which communicates wirelessly with the smart ring and the smartwatch. The system achieves accuracy scores of up to 98.8% using different machine learning models. Each model is trained with multiple different feature vectors in order to find optimal features for the gesture recognition task. A minimum accuracy threshold of 92% was derived from related research to show that the proposed system can compete with state-of-the-art solutions.
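For intuition, the following is a minimal sketch of the kind of per-window feature-vector construction the abstract alludes to for inertial data. The window length, the statistics chosen, and the classifier are assumptions for illustration; the authors' actual feature vectors and models may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def feature_vector(window):
    """window: (n_samples, 6) array; columns = accel x/y/z, gyro x/y/z."""
    feats = [
        window.mean(axis=0),   # per-axis mean
        window.std(axis=0),    # per-axis standard deviation
        window.min(axis=0),    # per-axis minimum
        window.max(axis=0),    # per-axis maximum
        np.abs(np.diff(window, axis=0)).mean(axis=0),  # mean absolute delta
    ]
    return np.concatenate(feats)  # 30-dimensional feature vector

# Synthetic stand-in for labelled ring/watch recordings (12 gesture classes).
rng = np.random.default_rng(1)
windows = rng.normal(size=(480, 50, 6))  # 480 windows of 50 samples each
labels = rng.integers(0, 12, size=480)
X = np.stack([feature_vector(w) for w in windows])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))  # predicted gesture classes for three windows
```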


Author(s):  
Joshua P Gallaher ◽  
Alexander J. Kamrud ◽  
Brett J. Borghetti

A commonly known cognitive bias is the confirmation bias: the overweighting of evidence supporting a hypothesis and the underweighting of evidence countering that hypothesis. Due to high-stress and fast-paced operations, military decisions can be affected by confirmation bias. One military decision task prone to confirmation bias is visual search. During a visual search, the operator scans an environment to locate a specific target. If confirmation bias causes the operator to scan the wrong portion of the environment first, the search is inefficient. This study has two primary goals: 1) detect inefficient visual search using machine learning and electroencephalography (EEG) signals, and 2) apply various mitigation techniques in an effort to improve the efficiency of searches. Early findings are presented showing how machine learning models can use EEG signals to detect when a person might be performing an inefficient visual search. Four mitigation techniques were evaluated: a nudge that indirectly slows search speed, a hint on how to search efficiently, an explanation of why the participant was receiving a nudge, and instructions directing the participant to search efficiently. These mitigation techniques are evaluated, revealing the nudge and hint to be the most effective.
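Below is a minimal sketch of one plausible pipeline for this detection task. The paper reports only early findings and does not specify its features or model in this abstract, so the band-power features, sampling rate, channel count, and logistic-regression classifier are hypothetical stand-ins.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 256  # Hz, assumed sampling rate
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands

def band_powers(epoch):
    """epoch: (n_channels, n_samples) -> per-channel mean power per band."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)  # PSD along the last axis
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)

# Synthetic stand-in: 200 two-second, 8-channel epochs with binary labels
# (1 = inefficient search). Real labels would come from behavioral data.
rng = np.random.default_rng(2)
epochs = rng.normal(size=(200, 8, 2 * FS))
y = rng.integers(0, 2, size=200)
X = np.stack([band_powers(e) for e in epochs])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```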


2021 ◽  
Vol 21 (3) ◽  
pp. 1-19
Author(s):  
Lucia Cascone ◽  
Aniello Castiglione ◽  
Michele Nappi ◽  
Fabio Narducci ◽  
Ignazio Passero

Social robots adopt an emotional touch to interact with users, inducing and transmitting humanlike emotions. Natural interaction with humans needs to happen in real time and be well grounded in full availability of information about the environment. These robots base their way of communicating on direct interaction (touch, hearing, sight), supported by a range of sensors that provide a radially central, and therefore partial, knowledge of the surrounding environment. Over the past few years, social robots have been demonstrated to implement different features, ranging from biometric applications to the machine-learning fusion of environmental information collected at the edge. This article describes the experiments performed and still ongoing, and characterizes a simulation environment developed for the social robot Pepper that aims to foresee the new scenarios and benefits that tactile connectivity will enable.


2020 ◽  
Vol 43 ◽  
Author(s):  
Myrthe Faber

Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides the variability in mental content and context necessary for abstraction.
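The data-augmentation side of the analogy can be made concrete with a small sketch: augmentation yields varied renderings of the same underlying content, which is the property the commentary maps onto mind wandering. Purely illustrative; not from the target article.

```python
import numpy as np

rng = np.random.default_rng(3)

def augment(x, n=5):
    """Return n perturbed variants of the same sample (jitter + rescaling)."""
    noise = rng.normal(scale=0.1, size=(n, *x.shape))  # additive jitter
    scale = rng.uniform(0.9, 1.1, size=(n, 1))         # mild rescaling
    return scale * (x + noise)

sample = np.linspace(0.0, 1.0, 8)  # one "experience" (a toy input vector)
variants = augment(sample)         # varied contexts, same underlying content
print(variants.shape)              # (5, 8)
```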

