ATOMIC HUMAN ACTION SEGMENTATION AND RECOGNITION USING A SPATIO-TEMPORAL PROBABILISTIC FRAMEWORK

2007 ◽  
Vol 01 (02) ◽  
pp. 205-220
Author(s):  
DUAN-YU CHEN ◽  
HONG-YUAN MARK LIAO ◽  
SHENG-WEN SHIH

In this paper, a framework for automatic human action segmentation and recognition in continuous action sequences is proposed. A star figure enclosed by a bounding convex polygon is used to effectively represent the extremities of the silhouette of a human body. Each human action is thus recorded as a sequence of the star figure's parameters, which is used for action modeling. To model human actions in a compact manner while characterizing their spatio-temporal distributions, the star figure's parameters are represented by Gaussian mixture models (GMMs). In addition, to address the intrinsic temporal variations in a continuous action sequence, we transform the time sequence of star-figure parameters into the frequency domain with the discrete cosine transform (DCT) and use only the first few coefficients to represent the different temporal patterns with significant discriminating power. The experimental results show that the proposed framework can recognize continuous human actions efficiently.
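As an illustration of the temporal modeling step described above, the following sketch (not the authors' code; array shapes and parameter names are assumptions) applies a DCT along the time axis of a star-figure parameter sequence and keeps only the first few low-frequency coefficients as a compact temporal descriptor.

```python
# Minimal sketch, assuming a (T, D) star-figure parameter sequence;
# illustrative only, not the authors' implementation.
import numpy as np
from scipy.fft import dct

def temporal_descriptor(param_sequence: np.ndarray, n_coeffs: int = 8) -> np.ndarray:
    """param_sequence: (T, D) array, one row of star-figure parameters per frame.

    Returns an (n_coeffs * D,) descriptor built from the lowest-frequency
    DCT coefficients of each parameter channel.
    """
    # DCT-II along the time axis, orthonormal so energy is comparable across lengths
    coeffs = dct(param_sequence, type=2, norm="ortho", axis=0)
    return coeffs[:n_coeffs].ravel()

# Example: 60 frames of a 5-dimensional star-figure parameter vector
rng = np.random.default_rng(0)
seq = rng.standard_normal((60, 5))
print(temporal_descriptor(seq).shape)  # (40,)
```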

2021 ◽  
Author(s):  
Gabriel Borrageiro ◽  
Nick Firoozye ◽  
Paolo Barucca

We conduct a detailed experiment on major cash FX pairs, accurately accounting for transaction and funding costs. These sources of profit and loss, including the price trends that occur in the currency markets, are made available to our recurrent reinforcement learner via a quadratic utility, which learns to target a position directly. We improve upon earlier work by casting the problem of learning to target a risk position in an online learning context. This online learning occurs sequentially in time, but also in the form of transfer learning. We transfer the output of radial basis function hidden processing units, whose means, covariances and overall size are determined by Gaussian mixture models, to the recurrent reinforcement learner and a baseline momentum trader. Thus the intrinsic nature of the feature space is learnt and made available to the upstream models. The recurrent reinforcement learning trader achieves an annualised portfolio information ratio of 0.52 with a compound return of 9.3%, net of execution and funding cost, over a 7-year test set. This is despite forcing the model to trade at the close of the trading day (5pm EST), when trading costs are statistically the most expensive. These results are comparable with those of the momentum baseline trader, reflecting the low interest differential environment since the 2008 financial crisis and the very obvious currency trends since then. The recurrent reinforcement learner nevertheless maintains an important advantage, in that the model's weights can be adapted to reflect the different sources of profit and loss variation. This is demonstrated visually by a USDRUB trading agent, which learns to target different positions that reflect trading in the absence or presence of cost.
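A minimal sketch of the transfer-learning step described above, under the assumption that the RBF hidden units are built from a fitted Gaussian mixture and their activations are handed to a downstream model; a ridge regression stands in for the recurrent reinforcement learner, and the synthetic data are purely illustrative.

```python
# Minimal sketch, not the authors' implementation: RBF features whose
# centres, covariances and count come from a Gaussian mixture model,
# then transferred to a simple downstream learner.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
X = rng.standard_normal((500, 4))          # e.g. lagged FX returns (illustrative)
y = X @ np.array([0.3, -0.1, 0.2, 0.05]) + 0.1 * rng.standard_normal(500)

# 1) Fit a GMM: component means/covariances define the RBF hidden units
gmm = GaussianMixture(n_components=6, covariance_type="full", random_state=0).fit(X)

# 2) RBF hidden-unit activations = per-component responsibilities
Phi = gmm.predict_proba(X)

# 3) Transfer the hidden representation to the downstream trading model
model = Ridge(alpha=1.0).fit(Phi, y)
print("in-sample R^2:", round(model.score(Phi, y), 3))
```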


Author(s):  
Yaqing Hou ◽  
Hua Yu ◽  
Dongsheng Zhou ◽  
Pengfei Wang ◽  
Hongwei Ge ◽  
...  

In the study of human action recognition, two-stream networks have made excellent progress recently. However, there remain challenges in distinguishing similar human actions in videos. This paper proposes a novel local-aware spatio-temporal attention network with multi-stage feature fusion based on compact bilinear pooling for human action recognition. To elaborate, taking two-stream networks as our essential backbones, the spatial network first employs multiple spatial transformer networks in a parallel manner to locate the discriminative regions related to human actions. Then, we perform feature fusion between the local and global features to enhance the human action representation. Furthermore, the output of the spatial network and the temporal information are fused at a particular layer to learn the pixel-wise correspondences. After that, we bring together the three outputs to generate the global descriptors of human actions. To verify the efficacy of the proposed approach, comparison experiments are conducted with the traditional hand-engineered IDT algorithm, classical machine learning methods (i.e., SVM) and state-of-the-art deep learning methods (i.e., spatio-temporal multiplier networks). According to the results, our approach obtains the best performance among existing works, with accuracies of 95.3% and 72.9% on UCF101 and HMDB51, respectively. The experimental results thus demonstrate the superiority and significance of the proposed architecture in solving the task of human action recognition.
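To make the fusion step concrete, here is a minimal sketch of compact bilinear pooling in its Tensor Sketch form (Gao et al.); it is not the authors' implementation, and the feature dimensions and stream names are assumptions used only for illustration.

```python
# Minimal sketch of compact bilinear pooling via Tensor Sketch:
# approximates the outer product of two feature vectors with a
# fixed-size fused descriptor. Dimensions are illustrative.
import numpy as np

def count_sketch_params(dim: int, out_dim: int, rng: np.random.Generator):
    h = rng.integers(0, out_dim, size=dim)   # hash each input index to an output bin
    s = rng.choice([-1.0, 1.0], size=dim)    # random signs
    return h, s

def count_sketch(x: np.ndarray, h: np.ndarray, s: np.ndarray, out_dim: int) -> np.ndarray:
    y = np.zeros(out_dim)
    np.add.at(y, h, s * x)
    return y

def compact_bilinear(x1: np.ndarray, x2: np.ndarray, out_dim: int = 512, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    h1, s1 = count_sketch_params(x1.size, out_dim, rng)
    h2, s2 = count_sketch_params(x2.size, out_dim, rng)
    # Outer product approximated by circular convolution of the two sketches (via FFT)
    f1 = np.fft.rfft(count_sketch(x1, h1, s1, out_dim))
    f2 = np.fft.rfft(count_sketch(x2, h2, s2, out_dim))
    return np.fft.irfft(f1 * f2, n=out_dim)

# Example: fuse a spatial-stream and a temporal-stream feature vector
spatial_feat = np.random.default_rng(1).standard_normal(2048)
temporal_feat = np.random.default_rng(2).standard_normal(2048)
fused = compact_bilinear(spatial_feat, temporal_feat)
print(fused.shape)  # (512,)
```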


Author(s):  
M. Naveenkumar ◽  
S. Domnic

Skeleton-based action recognition has become popular with the recent developments in sensor technology and fast pose estimation algorithms. Existing research has attempted to address the action recognition problem by considering either the spatial or the temporal dynamics of the actions, yet both kinds of features contribute to solving the problem. In this paper, we address the action recognition problem using 3D skeleton data by introducing eight Joint Distance Maps, referred to as Spatio-Temporal Joint Distance Maps (ST-JDMs), to capture spatio-temporal variations from skeleton data for action recognition. Among these, four maps are defined in the spatial domain and the remaining four in the temporal domain. After construction of the ST-JDMs from an action sequence, they are encoded into color images. This representation enables us to fine-tune a Convolutional Neural Network (CNN) for action classification. The empirical results on two datasets, UTD-MHAD and NTU RGB+D, show that ST-JDMs outperform other state-of-the-art skeleton-based approaches, achieving recognition accuracies of 91.63% and 80.16%, respectively.
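A minimal sketch of how a joint distance map might be built from a 3D skeleton sequence and scaled for image encoding; the normalisation and array shapes are assumptions for illustration, not the paper's exact recipe.

```python
# Minimal sketch (illustrative, not the paper's exact construction):
# turn a 3D skeleton sequence into a distance-map image suitable for
# fine-tuning a CNN.
import numpy as np

def joint_distance_map(skeleton: np.ndarray) -> np.ndarray:
    """skeleton: (T, J, 3) array of 3D joint positions over T frames.

    Returns a (T, J*(J-1)//2) map of pairwise joint distances, scaled to
    [0, 255] so it can be encoded as an image.
    """
    T, J, _ = skeleton.shape
    iu, ju = np.triu_indices(J, k=1)
    # Euclidean distance between every joint pair, per frame
    dists = np.linalg.norm(skeleton[:, iu, :] - skeleton[:, ju, :], axis=-1)
    dmin, dmax = dists.min(), dists.max()
    return np.uint8(255 * (dists - dmin) / (dmax - dmin + 1e-8))

# Example: 40 frames, 20 joints
skel = np.random.default_rng(0).standard_normal((40, 20, 3))
print(joint_distance_map(skel).shape)  # (40, 190)
```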


2002 ◽  
Vol 4 (1) ◽  
pp. 130-141
Author(s):  
Abdullah Muhammad al-Shami

In Islamic law, judgements on any human action are usually evaluated in terms of the intention involved. Accordingly, the rules of substantive issues have to be accommodated under the basic principles of Islamic jurisprudence. The understanding of these principles by the juristic scholar is highly rewarding because it will lead the muftī to the right path in deriving legal opinions from the original sources. The basic principle of Islamic jurisprudence, which stipulates that ‘all actions depend on intentions,’ has played an important role in the construction of Islamic jurisprudence. Moreover, this rule has a special place in the theory of the Islamic legal contract. So what is the effect of intention on the validity of human actions and legal contracts? It is known that pure intention has significant effects on spiritual worship and legal contracts of transaction. It also gives guidance for earning rewards from Almighty Allah. This article concentrates on the effect of intention in perpetual worship, the concept of action and intention in Islamic legal works, the kinds of contract with all their components, and the jurists' views on the effects of intention in human action and legal contract, along with their discussion and counter-arguments.

