human actions recognition
Recently Published Documents

TOTAL DOCUMENTS: 24 (FIVE YEARS: 6)
H-INDEX: 4 (FIVE YEARS: 2)

2022 ◽ Vol 9 (1) ◽
Author(s): Débora Pereira ◽ Yuri De Pra ◽ Emidio Tiberi ◽ Vito Monaco ◽ Paolo Dario ◽ ...

Abstract
This paper presents a multivariate dataset of 2866 food-flipping movements, performed by 4 chefs and 5 home cooks with different grilled foods and two utensils (spatula and tweezers). The 3D trajectories of strategic points on the utensils were tracked using optoelectronic motion capture. The pinching force of the tweezers and the bending force and torsion torque of the spatula were also recorded, along with videos and the subjects' gaze. These data were collected using a custom experimental setup that allowed flipping movements to be executed with freshly cooked food without placing the sensors near the hazardous cooking area. In addition, the 2D position of the food was computed from the videos. The action of flipping food is indeed gaining the attention of both researchers and manufacturers of foodservice technology. The reported dataset contains valuable measurements (1) to characterize and model flipping movements as performed by humans, (2) to develop bio-inspired methods to control a cooking robot, or (3) to study new algorithms for human action recognition.
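A per-trial record in such a multivariate dataset might be organized as follows. This is a hypothetical sketch only: the class name, field names, and types are assumptions for illustration, not the authors' actual schema.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class FlippingTrial:
    """One flipping movement from the dataset (hypothetical schema)."""
    subject: str                                      # e.g. a chef or home-cook label (assumed naming)
    utensil: str                                      # "spatula" or "tweezers"
    trajectory_3d: List[Tuple[float, float, float]]   # tracked 3D points on the utensil
    forces: List[float]                               # pinching/bending force samples
    food_position_2d: List[Tuple[float, float]]       # food position computed from video frames


# minimal example trial with toy values
trial = FlippingTrial(
    subject="home_cook_1",
    utensil="tweezers",
    trajectory_3d=[(0.00, 0.00, 0.10), (0.00, 0.01, 0.12)],
    forces=[1.2, 1.5],
    food_position_2d=[(320.0, 240.0), (322.0, 238.0)],
)
```

Grouping the synchronized modalities (trajectories, forces, video-derived positions) per trial keeps each movement self-contained, which simplifies downstream modeling or classifier training.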


Author(s): Bogdan Alexandru Radulescu ◽ Victorita Radulescu

Abstract
Action recognition is a domain that has gained interest alongside the development of dedicated motion capture equipment, hardware, and processing power. Its many applications in domains such as national security and behavior analysis make it popular within the scientific community, especially given the ascending trend of machine learning methods. Approaches for solving real-life problems through human action recognition have therefore become increasingly interesting. There are mainly two approaches when building such a classifier: using RGB images or sensor data or, where possible, a combination of the two. Both methods have advantages, disadvantages, and domains of application in real-life problems solvable through action recognition. Using RGB input makes it possible to deploy a classifier on almost any infrastructure without specialized equipment, whereas combining video with sensor data provides higher accuracy, albeit at a higher cost. Neural networks, and especially convolutional neural networks, are the starting point for human action recognition. By their nature they recognize spatial and temporal features well, making them suitable for RGB images or sequences of RGB images. The present paper proposes a convolutional neural network architecture based on 2D kernels. Its structure is illustrated, along with metrics measuring its performance, advantages, and disadvantages. This solution based on 2D convolutions is fast, but has lower performance compared to other known solutions. The main problem when dealing with videos is extracting context from a sequence of frames. Video classification using 2D convolutional layers is performed either on the most significant frame or frame by frame, applying a probability distribution over the partial class predictions to obtain the final prediction.
Classifying actions is difficult, especially when the differences between them are subtle and the action occupies only a small part of the overall image. When classifying via key frames, the total accuracy obtained is around 10%. The other approach, classifying each frame individually, proved too computationally expensive for negligible gains.
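The frame-by-frame strategy described above can be sketched as follows: per-frame class probabilities from a 2D CNN are averaged into a single video-level distribution, and the final label is its argmax. This is a minimal illustration of the aggregation step only; the function name and toy probabilities are assumptions, and the abstract does not specify which aggregation the authors actually used.

```python
import numpy as np


def aggregate_frame_predictions(frame_probs):
    """Average per-frame class probabilities into one video-level prediction.

    frame_probs: array-like of shape (num_frames, num_classes), where each
    row is a softmax output for a single frame.
    Returns the predicted class index and the averaged distribution.
    """
    probs = np.asarray(frame_probs, dtype=float)
    video_probs = probs.mean(axis=0)        # mean over frames -> distribution over classes
    return int(np.argmax(video_probs)), video_probs


# toy example: 3 frames, 4 action classes
frames = [
    [0.1, 0.6, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.3, 0.3, 0.3, 0.1],
]
label, dist = aggregate_frame_predictions(frames)   # label is 1 here
```

Averaging the distributions (rather than taking a majority vote over per-frame argmaxes) preserves the classifier's confidence on each frame, which matters when individual frames are ambiguous.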


Author(s): Viacheslav V. Voronin ◽ Marina Zhdanova ◽ Evgenii Semenishchev ◽ Aleksander Zelensky ◽ Olga Tokareva

2019 ◽ Vol 56 ◽ pp. 223-232 ◽
Author(s): Fernando Itano ◽ Ricardo Pires ◽ Miguel Angelo de Abreu de Sousa ◽ Emilio Del-Moral-Hernandez

IEEE Access ◽ 2019 ◽ Vol 7 ◽ pp. 52532-52541 ◽
Author(s): Benyue Su ◽ Huang Wu ◽ Min Sheng ◽ Chuansheng Shen
