A 127mW 1.63TOPS sparse spatio-temporal cognitive SoC for action classification and motion tracking in videos

Author(s):  
Ching-En Lee ◽  
Thomas Chen ◽  
Zhengya Zhang

Author(s):  
A. Elhayek ◽  
C. Stoll ◽  
N. Hasler ◽  
K. I. Kim ◽  
H. Seidel ◽  
...  

2021 ◽  
Vol 11 (12) ◽  
pp. 5605
Author(s):  
Jose S. Velázquez ◽  
Arsenio M. Iznaga-Benítez ◽  
Amanda Robau-Porrúa ◽  
Francisco L. Sáez-Gutiérrez ◽  
Francisco Cavas

Gait is influenced by many factors, but one of the most prominent is shoe heel height. Optical motion tracking technology is widely used to analyze high-heeled gait, but it normally involves several high-quality cameras and licensed software, so clinics and researchers with low budgets cannot afford it. This article presents a simple, effective technique to measure the sagittal-plane rotation angles of the ankle (tibiotalar) and toe (metatarsophalangeal) joints when no shoes (0 cm heel) and high-heeled shoes (2, 6 and 10 cm heels) are worn. The foot’s position was determined by a set of equations based on its geometry and by video analysis with free software (Tracker). An evaluation of the spatio-temporal variables confirmed observations from previous studies: increasing heel height reduces gait cycle length and speed but does not change cadence. The range of movement at the tibiotalar joint progressively narrowed from 28° with no heel to 9° with a 10 cm heel, and the corresponding reduction for the metatarsophalangeal joint was from 30° to 5°. These findings align with previous studies and confirm that the proposed method accurately measures kinematic changes in the ankle–foot complex when high heels are worn.
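The article’s own geometric equations are not reproduced in this abstract; as a rough illustration of the kind of sagittal-plane angle calculation it describes, the Python sketch below computes a joint angle from three 2D marker coordinates such as those exported from the free Tracker software. The marker names and coordinate values are hypothetical, not taken from the study.

import numpy as np

def sagittal_joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` between the proximal and distal segments,
    with all points given as (x, y) coordinates in the sagittal plane."""
    p, j, d = map(np.asarray, (proximal, joint, distal))
    v1 = p - j  # vector along the proximal segment
    v2 = d - j  # vector along the distal segment
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical marker positions (in metres) for one video frame:
# a shank marker, the ankle (tibiotalar) joint, and a metatarsal head.
shank, ankle, metatarsal = (0.10, 0.45), (0.12, 0.10), (0.28, 0.04)
print(f"Tibiotalar angle: {sagittal_joint_angle(shank, ankle, metatarsal):.1f} deg")

Tracking the same three points frame by frame and repeating this calculation gives the joint-angle curves over the gait cycle from which ranges of movement like those reported above can be read off.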


Author(s):  
M. N. Al-Berry ◽  
Mohammed A.-M. Salem ◽  
H. M. Ebeid ◽  
A. S. Hussein ◽  
Mohamed F. Tolba

Human action recognition is a very active field in computer vision. Many important applications depend on accurate human action recognition, which in turn depends on accurate representation of the actions. These applications include surveillance, athletic performance analysis, driver assistance, robotics, and human-centered computing. This chapter presents a thorough review of the field, concentrating on recent action representation methods that use spatio-temporal information. In addition, the authors propose a stationary wavelet-based representation of natural human actions in realistic videos. The proposed representation uses the 3D Stationary Wavelet Transform to encode the directional multi-scale spatio-temporal characteristics of the motion present in a frame sequence. It was tested on the Weizmann and KTH datasets and produced good preliminary results with reasonable computational complexity compared to existing state-of-the-art methods.
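The abstract does not give implementation details of the proposed representation; as a minimal sketch of the core tool it names, the snippet below applies a 3D stationary wavelet transform to a synthetic frame stack using PyWavelets (an assumed library choice, not necessarily the authors’), which yields the kind of directional multi-scale spatio-temporal sub-bands such a descriptor is built from. The band-energy summary at the end is an illustrative placeholder, not the chapter’s feature set.

# Illustrative sketch only: 3D stationary wavelet transform of a frame stack.
import numpy as np
import pywt

level = 2
# Synthetic grayscale video: (frames, height, width); each axis length
# must be divisible by 2**level for the stationary (undecimated) transform.
video = np.random.rand(16, 64, 64).astype(np.float32)

# swtn returns one dict of sub-bands per decomposition level, keyed by an
# 'a'/'d' character per axis; e.g. 'add' is the approximation along time
# and detail along both spatial axes.
coeffs = pywt.swtn(video, wavelet="haar", level=level)

for i, band_dict in enumerate(coeffs):
    for band, c in band_dict.items():
        # Energy of each directional sub-band: a simple multi-scale
        # spatio-temporal summary of the motion in the sequence.
        print(f"coeff set {i}, band {band}: energy {np.sum(c**2):.2f}")

Because the transform is undecimated, every sub-band keeps the original frame-stack resolution, which is what makes per-location, per-scale motion descriptors straightforward to extract.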

