Person re-identification based on gait via Part View Transformation Model under variable covariate conditions

Author(s):  
Imen Chtourou ◽  
Emna Fendri ◽  
Mohamed Hammami
Author(s):  
Redouane Esbai ◽  
Fouad Elotmani ◽  
Fatima Zahra Belkadi

The growth of application architectures in all areas (e.g. astrology, meteorology, e-commerce, social networks, etc.) has resulted in an exponential increase in data volumes, now measured in petabytes. Managing these volumes of data has become a problem that relational databases can no longer handle because of their ACID properties. In response to this scaling up, new concepts such as NoSQL have emerged. In this paper, we show how to design and apply transformation rules to migrate from an SQL relational database to a Big Data solution within NoSQL. For this, we use the Model Driven Architecture (MDA) and transformation languages such as MOF 2.0 QVT (Meta-Object Facility 2.0 Query-View-Transformation) and Acceleo, which define the meta-models used to develop the transformation model. The transformation rules defined in this work can generate, from the class diagram, CQL code for creating a column-oriented NoSQL database.
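The class-diagram-to-CQL idea can be illustrated with a minimal sketch. The function, class name, and attribute map below are hypothetical stand-ins, not the paper's actual QVT/Acceleo rules, which operate on full meta-models:

```python
# Hypothetical sketch of one transformation rule: map a UML class
# (name, attribute->CQL-type mapping, key columns) to a CQL CREATE TABLE
# statement for a column-oriented store such as Cassandra.
def class_to_cql(class_name, attributes, primary_key):
    """Emit a CQL CREATE TABLE statement from a simple class description."""
    cols = ",\n  ".join(f"{name} {ctype}" for name, ctype in attributes.items())
    return (
        f"CREATE TABLE {class_name.lower()} (\n"
        f"  {cols},\n"
        f"  PRIMARY KEY ({', '.join(primary_key)})\n"
        f");"
    )

# Illustrative class "Customer" with three attributes
cql = class_to_cql(
    "Customer",
    {"customer_id": "uuid", "name": "text", "email": "text"},
    ["customer_id"],
)
print(cql)
```

In a real MDA pipeline the source model would be a platform-independent class diagram and the rule would be expressed declaratively in QVT, with Acceleo handling the text generation; the sketch only shows the shape of the mapping.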


Author(s):  
Yasushi Makihara ◽  
Ryusuke Sagawa ◽  
Yasuhiro Mukaigawa ◽  
Tomio Echigo ◽  
Yasushi Yagi

Sensors ◽  
2012 ◽  
Vol 12 (4) ◽  
pp. 4431-4446 ◽  
Author(s):  
Chien-Chuan Lin ◽  
Ming-Shi Wang

Author(s):  
Maodi Hu ◽  
Yunhong Wang ◽  
Zhaoxiang Zhang

Considering that it is difficult to guarantee that at least one continuous, complete gait cycle is captured in real applications, we address the multi-view gait recognition problem with short probe sequences. With unified multi-view population hidden Markov models (umvpHMMs), the gait pattern is represented as fixed-length multi-view stances. By incorporating the multi-stance dynamics, the well-known view transformation model (VTM) is extended into a multi-linear projection model in a fourth-order tensor space, so that a view-independent, stance-independent identity vector (VSIV) can be extracted. The main advantage is that the proposed VSIV is stable for each subject regardless of the camera location or the sequence length. Experiments show that our algorithm achieves encouraging performance for cross-view gait recognition even with short probe sequences.
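The multi-linear projection idea can be sketched in a few lines of NumPy. This is only an illustration of projecting view and stance modes out of a fourth-order data tensor via HOSVD-style factor matrices; the paper's actual umvpHMM training and VTM factorization are considerably more involved, and all dimensions here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gait data tensor: subjects x views x stances x features
n_subj, n_view, n_stance, n_feat = 5, 4, 3, 16
D = rng.standard_normal((n_subj, n_view, n_stance, n_feat))

def unfold(T, mode):
    """Mode-n unfolding: move the given mode to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# HOSVD-style factor matrices for the view and stance modes
U_view, _, _ = np.linalg.svd(unfold(D, 1), full_matrices=False)    # (views x views)
U_stance, _, _ = np.linalg.svd(unfold(D, 2), full_matrices=False)  # (stances x stances)

# Rotate the view and stance modes into their factor bases, then average
# them away: a crude stand-in for extracting a view-independent,
# stance-independent identity vector (VSIV) per subject.
core = np.einsum('svpf,vw,pq->swqf', D, U_view, U_stance)
vsiv = core.mean(axis=(1, 2))  # one feature vector per subject
print(vsiv.shape)
```

The point of the sketch is only the structure: identity lives in one tensor mode, while view and stance variation live in separate modes that can be projected out.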

