Seamless Interfacing

Author(s):  
Stephan Puls ◽  
Heinz Wörn

Intuitive means of human-machine interaction are needed in order to facilitate seamless human-robot cooperation. Knowledge about human posture, whereabouts, and performed actions allows interpretation of the situation; thus, expectations towards system behavior can be inferred. This work demonstrates a system in an industrial setting that combines all this information in order to achieve situation awareness. Continuous human action recognition is based on hierarchical Hidden Markov Models. For identifying and predicting human location, an approach based on potential functions is presented. The recognition results and spatial information are used in combination with a Description Logics-based reasoning system for modeling semantic interrelations, dependencies, and situations.
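As an illustration of the location-prediction idea (not the authors' implementation), the following minimal sketch assumes a standard attractive/repulsive potential-field formulation; the goal positions, gains `k_att`/`k_rep`, influence distance `d0`, and step size are hypothetical placeholders.

```python
# Illustrative sketch only: assumes an attractive potential toward candidate
# goals and a repulsive potential near obstacles; all parameters are hypothetical.
import numpy as np

def potential(pos, goals, obstacles, k_att=1.0, k_rep=100.0, d0=1.5):
    """Scalar potential at `pos`: attractive toward the nearest candidate goal,
    repulsive near obstacles within influence distance d0."""
    u = min(0.5 * k_att * np.sum((pos - g) ** 2) for g in goals)
    for o in obstacles:
        d = np.linalg.norm(pos - o)
        if d < d0:
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

def predict_step(pos, goals, obstacles, step=0.1, eps=1e-3):
    """One gradient-descent step on the potential: the predicted motion."""
    grad = np.zeros(2)
    for i in range(2):
        dp = np.zeros(2); dp[i] = eps
        grad[i] = (potential(pos + dp, goals, obstacles)
                   - potential(pos - dp, goals, obstacles)) / (2 * eps)
    return pos - step * grad / (np.linalg.norm(grad) + 1e-9)

# Example: a person at (0, 0) heading toward a workstation at (4, 3),
# with a robot base at (2, 1.5) acting as a repulsive obstacle.
p = np.array([0.0, 0.0])
for _ in range(5):
    p = predict_step(p, goals=[np.array([4.0, 3.0])],
                     obstacles=[np.array([2.0, 1.5])])
print(p)
```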

Author(s):  
Kanchan Gaikwad ◽  
Vaibhav Narawade

Visual investigation of human activities concerns the detection, tracking, and recognition of people and, more generally, the understanding of human activities from image sequences. Recognizing human activities from image sequences is an active area of research in computer vision, and human activity recognition (HAR) research has been on the rise because of the rapid technological development of image-capturing software and hardware. In this paper, we propose a new approach for human action recognition from video, using a Hidden Markov Model (HMM) algorithm to recognize activities. The method produces good results and consumes less time compared with other methods. These results can be useful in the future development of automatic human-machine interaction.
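To make the HMM-based scheme concrete, here is a minimal sketch assuming per-frame feature vectors have already been extracted from the video (the paper's feature pipeline is not specified here). It trains one Gaussian HMM per action class with the hmmlearn library and classifies a new sequence by maximum log-likelihood; the state count and toy features are illustrative.

```python
# Minimal sketch: one GaussianHMM per action class; classification picks the
# class whose model gives the highest log-likelihood for the new sequence.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(sequences_by_class, n_states=4):
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                  # stack all sequences
        lengths = [len(s) for s in seqs]     # per-sequence lengths
        m = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    # Max log-likelihood over the per-class models.
    return max(models, key=lambda lbl: models[lbl].score(seq))

# Toy usage with random 8-dimensional frame features.
rng = np.random.default_rng(0)
train = {"walk": [rng.normal(0, 1, (30, 8)) for _ in range(5)],
         "wave": [rng.normal(2, 1, (30, 8)) for _ in range(5)]}
models = train_models(train)
print(classify(models, rng.normal(2, 1, (25, 8))))  # likely "wave"
```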


Inventions ◽  
2020 ◽  
Vol 5 (3) ◽  
pp. 49
Author(s):  
Nusrat Tasnim ◽  
Md. Mahbubul Islam ◽  
Joong-Hwan Baek

Human action recognition has turned into one of the most attractive and demanding fields of research in computer vision and pattern recognition for facilitating easy, smart, and comfortable ways of human-machine interaction. With massive improvements in research witnessed in recent years, several methods have been suggested for discriminating between different types of human actions using color, depth, inertial, and skeleton information. Despite the existence of several action identification methods using different modalities, classifying human actions using skeleton joint information in 3-dimensional space remains a challenging problem. In this paper, we conceive an efficacious method for action recognition using 3D skeleton data. First, large-scale 3D skeleton joint information was analyzed and meaningful pre-processing was applied. Then, a simple, straightforward deep convolutional neural network (DCNN) was designed for the classification of the desired actions in order to evaluate the effectiveness and robustness of the proposed system. We also evaluated established DCNN models such as ResNet18 and MobileNetV2, which outperform existing systems using human skeleton joint information.
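As a hedged illustration of one common skeleton-to-DCNN pipeline (not necessarily the paper's exact pre-processing or architecture), the sketch below maps (x, y, z) joint coordinates onto the three channels of a joints-by-frames pseudo-image and feeds it to a small convolutional classifier in PyTorch; layer sizes and the class count are assumptions.

```python
# Illustrative sketch: skeleton sequence -> pseudo-image -> small DCNN.
import torch
import torch.nn as nn

def skeleton_to_image(seq):
    """seq: (T, J, 3) joint coordinates -> (3, J, T) normalized tensor."""
    x = torch.as_tensor(seq, dtype=torch.float32)
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)   # scale to [0, 1]
    return x.permute(2, 1, 0)                        # channels-first

class SkeletonDCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Toy usage: one 40-frame sequence of 25 joints (e.g., Kinect-style skeleton).
img = skeleton_to_image(torch.randn(40, 25, 3)).unsqueeze(0)  # add batch dim
print(SkeletonDCNN()(img).shape)  # torch.Size([1, 10])
```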


2019 ◽  
Vol 5 (10) ◽  
pp. 82
Author(s):  
Mahmoud Al-Faris ◽  
John Chiverton ◽  
Yanyan Yang ◽  
David Ndzi

Human action recognition (HAR) is an important yet challenging task. This paper presents a novel method in which fuzzy weight functions are used in the computation of depth motion maps (DMMs), together with motion information over multiple temporal lengths. These features are referred to as fuzzy weighted multi-resolution DMMs (FWMDMMs). This formulation allows various aspects of individual actions to be emphasized and helps characterize the importance of the temporal dimension, which is needed to overcome, e.g., variations in the time over which a single type of action might be performed. A deep convolutional neural network (CNN) motion model is created and trained to extract discriminative and compact features. Transfer learning is also used to extract spatial information from RGB and depth data using the AlexNet network. Different late fusion techniques are then investigated to fuse the deep motion model with the spatial network, resulting in a spatio-temporal HAR model. The developed approach is capable of recognizing both human actions and human–object interactions. Three public domain datasets are used to evaluate the proposed solution. The experimental results demonstrate the robustness of this approach compared with state-of-the-art algorithms.
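A minimal sketch of the fuzzy-weighting idea follows, assuming a Gaussian membership function over the normalized frame index as the fuzzy temporal weight; the authors' exact weight functions and multi-resolution fusion scheme are not reproduced here.

```python
# Illustrative sketch: fuzzy-weighted depth motion maps over several window
# lengths. The Gaussian membership and window lengths are assumptions.
import numpy as np

def fuzzy_dmm(depth_seq, center=0.5, width=0.25):
    """depth_seq: (T, H, W). Accumulate absolute frame differences, weighting
    each difference by a Gaussian membership over normalized time."""
    T = len(depth_seq)
    t = np.linspace(0.0, 1.0, T - 1)
    w = np.exp(-0.5 * ((t - center) / width) ** 2)   # fuzzy membership
    diffs = np.abs(np.diff(depth_seq, axis=0))       # (T-1, H, W)
    return np.tensordot(w, diffs, axes=1)            # weighted sum -> (H, W)

def multi_resolution_dmms(depth_seq, lengths=(8, 16, 32)):
    """Compute one fuzzy DMM per temporal window length (multi-resolution)."""
    return [fuzzy_dmm(depth_seq[:L]) for L in lengths if L <= len(depth_seq)]

# Toy usage: 32 frames of 64x64 depth maps.
seq = np.random.rand(32, 64, 64)
maps = multi_resolution_dmms(seq)
print([m.shape for m in maps])  # [(64, 64), (64, 64), (64, 64)]
```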


Author(s):  
M. Favorskaya ◽  
D. Novikov ◽  
Y. Savitskaya

Human activity has been a persistent subject of interest over the last decade. On the one hand, video sequences provide a huge volume of motion information for recognizing human actions. On the other hand, spatial information about static human poses is valuable for human action recognition. Poselets were introduced as latent variables representing configurations of the mutual locations of body parts and allowing description from different views. In the current research, modifications of Speeded-Up Robust Features (SURF) invariant to affine geometric transforms and illumination changes were tested. First, a grid of rectangles is imposed on the object of interest in a still image. Second, a sparse descriptor based on Gauge-SURF (G-SURF), invariant to color/lighting changes, is constructed for each rectangle separately. A common Spatial POselet Descriptor (SPOD) aggregates the rectangles' descriptors, followed by random forest classification in order to obtain fast classification results. The proposed approach was tested on samples from the PASCAL Visual Object Classes (VOC) Dataset and Challenge 2010, providing accuracies of 61-68% for all possible 3D pose locations and 82-86% for frontal pose locations across nine action categories.
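To illustrate the grid-descriptor-plus-random-forest pipeline, the sketch below substitutes a simple per-cell intensity histogram for the G-SURF descriptor (which is not reimplemented); the grid size, histogram bins, and forest settings are illustrative.

```python
# Illustrative sketch: per-cell descriptors over a rectangle grid are
# concatenated (cf. the aggregated SPOD) and fed to a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def grid_descriptor(image, grid=(4, 4), bins=16):
    """Split the image into grid cells, describe each cell with a normalized
    intensity histogram, and concatenate the per-cell descriptors."""
    H, W = image.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = image[i * H // grid[0]:(i + 1) * H // grid[0],
                         j * W // grid[1]:(j + 1) * W // grid[1]]
            hist, _ = np.histogram(cell, bins=bins, range=(0.0, 1.0))
            feats.append(hist / (hist.sum() + 1e-8))
    return np.concatenate(feats)

# Toy usage: random "images" for two action categories.
rng = np.random.default_rng(1)
X = np.array([grid_descriptor(rng.random((64, 64))) for _ in range(40)])
y = np.array([0] * 20 + [1] * 20)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:2]))
```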


PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0243829
Author(s):  
Fatemeh Ziaeetabar ◽  
Jennifer Pomp ◽  
Stefan Pfeiffer ◽  
Nadiya El-Sourani ◽  
Ricarda I. Schubotz ◽  
...  

Predicting other people’s upcoming actions is key to successful social interactions. Previous studies have started to disentangle the various sources of information that action observers exploit, including objects, movements, contextual cues and features regarding the acting person’s identity. We here focus on the role of static and dynamic inter-object spatial relations that change during an action. We designed a virtual reality setup and tested recognition speed for ten different manipulation actions. Importantly, all objects had been abstracted by emulating them with cubes such that participants could not infer an action using object information. Instead, participants had to rely only on the limited information that comes from the changes in the spatial relations between the cubes. In spite of these constraints, participants were able to predict actions in, on average, less than 64% of the action’s duration. Furthermore, we employed a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates the information of different types of spatial relations: (a) objects’ touching/untouching, (b) static spatial relations between objects and (c) dynamic spatial relations between objects during an action. Assuming the eSEC as an underlying model, we show, using information theoretical analysis, that humans mostly rely on a mixed-cue strategy when predicting actions. Machine-based action prediction is able to produce faster decisions based on individual cues. We argue that the human strategy, though slower, may be particularly beneficial for the prediction of natural and more complex actions with more variable or partial sources of information. Our findings contribute to the understanding of how individuals infer observed actions’ goals even before full goal accomplishment, and may open new avenues for building robots for conflict-free human-robot cooperation.
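As a hedged illustration of the touching/untouching component of a semantic event chain (the full eSEC also encodes static and dynamic spatial relations, omitted here), the sketch below tracks which object pairs come into or out of contact over time; the distance threshold and cube trajectories are hypothetical.

```python
# Illustrative sketch: record the frames where a pairwise touching relation
# flips, producing the touching/untouching row of an event chain.
import numpy as np
from itertools import combinations

def touching_events(trajectories, threshold=1.0):
    """trajectories: dict name -> (T, 3) positions. Returns a list of
    (frame, pair, 'touch'|'untouch') events."""
    names = sorted(trajectories)
    T = len(next(iter(trajectories.values())))
    prev, events = {}, []
    for t in range(T):
        for a, b in combinations(names, 2):
            d = np.linalg.norm(trajectories[a][t] - trajectories[b][t])
            state = d < threshold
            if prev.get((a, b)) is not None and state != prev[(a, b)]:
                events.append((t, (a, b), "touch" if state else "untouch"))
            prev[(a, b)] = state
    return events

# Toy usage: a "hand" cube approaching and then leaving a "cup" cube.
hand = np.linspace([3, 0, 0], [-3, 0, 0], 30)   # moves through the cup
cup = np.zeros((30, 3))
print(touching_events({"hand": hand, "cup": cup}))
```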

