Composite Feature: Recently Published Documents

TOTAL DOCUMENTS: 39 (FIVE YEARS: 11)
H-INDEX: 6 (FIVE YEARS: 1)

Author(s): Robert Worden

Bayesian formulations of learning imply that whenever the evidence for a correlation between events in an animal's habitat is sufficient, the correlation is learned. This implies that regularities can be learned rapidly, from small numbers of learning examples. Learning at this speed gives the maximum possible fitness; no faster learning is possible. There is evidence in many domains that animals and people can learn at nearly Bayesian-optimal speeds. These domains include associative conditioning, and the more complex domains of navigation and language. There are computational models of learning which learn at near-Bayesian speeds in complex domains, and which scale well, learning thousands of pieces of knowledge (i.e., relations and associations). These are not neural net models. They can be defined in computational terms, as algorithms and data structures at David Marr's [1] Level Two. Their key data structures are composite feature structures, which are graphs of multiple linked nodes. This leads to the hypothesis that animal learning results not from deep neural nets (which typically require thousands of training examples), but from neural implementations of the Level Two models of fast learning; and that neurons provide the facilities needed to implement those models at Marr's Level Three. The required facilities include feature structures, dynamic binding, one-shot memory for many feature structures, pattern-based associative retrieval, and unification and generalization of feature structures. These may be supported by multiplexing of data and metadata in the same neural fibres.
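As a rough illustration of the composite feature structures described above (graphs of linked nodes supporting unification and generalization), the following sketch represents them as nested Python dictionaries. The representation, function names, and example data are illustrative assumptions only, not the Level Two models referenced in the abstract.

```python
# Hedged sketch: composite feature structures as nested dicts, with
# unification (merge two structures if compatible) and generalization
# (keep only the features they share). Illustrative only.

def unify(fs1, fs2):
    """Return the unification of two feature structures, or None on conflict."""
    if not isinstance(fs1, dict) or not isinstance(fs2, dict):
        return fs1 if fs1 == fs2 else None           # atomic values must match
    result = dict(fs1)
    for key, value in fs2.items():
        if key in result:
            merged = unify(result[key], value)
            if merged is None:
                return None                           # conflicting sub-structures
            result[key] = merged
        else:
            result[key] = value                       # new feature, just add it
    return result

def generalize(fs1, fs2):
    """Return the structure of features shared by both inputs."""
    if not isinstance(fs1, dict) or not isinstance(fs2, dict):
        return fs1 if fs1 == fs2 else None            # drop differing atoms
    shared = {}
    for key in fs1.keys() & fs2.keys():
        common = generalize(fs1[key], fs2[key])
        if common is not None:
            shared[key] = common
    return shared

# Example: two observed episodes unify into one consistent structure,
# and generalize to the regularity they share.
a = {"agent": {"type": "bird", "colour": "red"}, "action": "sings"}
b = {"agent": {"type": "bird"}, "action": "sings", "time": "dawn"}
print(unify(a, b))       # all features combined, no conflicts
print(generalize(a, b))  # {'agent': {'type': 'bird'}, 'action': 'sings'}
```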


2020 · Vol 8 (6) · pp. 1556-1566

Human Action Recognition is a key research direction and a trending topic in several fields, including machine learning and computer vision. The main objective of this research is to recognize human actions in images or video. However, existing approaches have limitations such as low recognition accuracy and a lack of robustness. Hence, this paper develops a novel and robust Human Action Recognition framework. In this framework, we propose a new feature extraction technique based on the Gabor Transform and the Dual Tree Complex Wavelet Transform. These two techniques extract highly discriminative features by which the actions present in an image or video are correctly recognized. The proposed framework then employs the Support Vector Machine algorithm as a classifier. Simulation experiments are conducted on two standard datasets, KTH and Weizmann. Experimental results reveal that the proposed framework achieves better performance than state-of-the-art recognition methods.
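A minimal sketch of the kind of pipeline the abstract outlines is given below, assuming the scikit-image Gabor filter, the dtcwt package for the Dual Tree Complex Wavelet Transform, and a scikit-learn SVM. The pooled statistics (mean and standard deviation per subband), the parameter values, and the variable names in the usage comment are illustrative assumptions, not the authors' exact method.

```python
# Hedged sketch of a Gabor + DTCWT feature extractor with an SVM classifier.
import numpy as np
from skimage.filters import gabor
from dtcwt import Transform2d
from sklearn.svm import SVC

def extract_features(frame, frequencies=(0.1, 0.3), nlevels=3):
    """Concatenate Gabor and dual-tree complex wavelet statistics for one grayscale frame."""
    feats = []
    # Gabor responses at a few frequencies and orientations
    for f in frequencies:
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(frame, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]
    # Dual Tree Complex Wavelet Transform subband statistics
    pyramid = Transform2d().forward(frame, nlevels=nlevels)
    for highpass in pyramid.highpasses:           # one complex array per level
        mag = np.abs(highpass)
        feats += [mag.mean(), mag.std()]
    return np.array(feats)

# Usage sketch (hypothetical variables): train_frames / test_frames are
# grayscale numpy arrays, train_labels the corresponding action labels.
# X_train = np.stack([extract_features(f) for f in train_frames])
# clf = SVC(kernel="rbf").fit(X_train, train_labels)
# predictions = clf.predict(np.stack([extract_features(f) for f in test_frames]))
```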


Steganographic tools available on the internet and other commercial steganographic tools are preferred by unlawful groups over customized steganographic tools developed from scratch. Hence, a clue regarding the steganographic tool deployed in the covert communication process can save time for the steganalyst in the crucial active steganalysis phase. Signature analysis can lead to success in targeted steganalysis, but tool detection must proceed from a single suspicious stego image in hand, with no additional details available. In such scenarios, statistical steganalysis comes to the rescue, but with issues to be addressed such as the huge dimensionality of feature sets and complex ensemble classifiers. This work accomplishes tool detection with a specific composite feature set identified to distinguish one stego tool from the others, together with a weighted decision function that enhances the role of that feature set when it votes for a particular class. A tool detection accuracy of 85.25% has been achieved while simultaneously addressing feature set dimensionality and the complexity of ensemble classifiers, and a comparison with a benchmark procedure has been made.
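The weighted decision function is not specified in detail in the abstract; the sketch below shows one plausible reading, assuming per-tool classifiers trained on tool-specific composite feature subsets whose probability votes are up-weighted when they favour their own tool. The structure, names, and boost factor are assumptions for illustration only.

```python
# Hedged sketch of weighted voting for stego-tool detection.
# Each candidate tool has its own classifier trained on a feature subset
# chosen to separate that tool from the rest; its vote is boosted when it
# favours its own class. Weighting scheme and structure are assumptions.
import numpy as np

def detect_tool(feature_vector, detectors, boost=2.0):
    """Return the tool name with the highest weighted vote.

    detectors: dict mapping tool name -> (feature_indices, fitted classifier),
    where the classifier's predict_proba scores tools from its feature subset.
    """
    scores = {}
    for tool, (idx, clf) in detectors.items():
        proba = clf.predict_proba(feature_vector[idx].reshape(1, -1))[0]
        for cls, p in zip(clf.classes_, proba):
            # the detector built on tool-specific features counts extra
            # when it votes for its own tool (illustrative weighting)
            weight = boost if cls == tool else 1.0
            scores[cls] = scores.get(cls, 0.0) + weight * p
    return max(scores, key=scores.get)
```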


2019 · Vol 9 (1) · pp. 3807-3813
Author(s): A. Alsubari, S. A. Hannan, M. Alzahrani, R. J. Ramteke

Fusion of palm-print and iris biometric traits is implemented in this paper. The region of interest (ROI) of a palm is extracted using a valley detection algorithm, and the ROI of an iris is extracted based on the neighbor-pixels value algorithm (NPVA). A statistical local binary pattern (SLBP) is applied to extract the local features of the palm and iris. To enhance the palm features, a combination of the histogram of oriented gradients (HOG) and the discrete cosine transform (DCT) is applied. Gabor-Zernike moments are used to extract the iris features. The experiments were carried out in two modes: verification and identification. Euclidean distance is used in the verification system. In the identification system, a fuzzy-based classifier was proposed along with built-in classification functions in MATLAB. CASIA palm and iris datasets were used in this research work. The accuracy of the proposed system was found to be satisfactory.
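As a rough illustration of the verification mode described above, the sketch below extracts local binary pattern histograms from the palm and iris ROIs, fuses them by concatenation, and compares templates with Euclidean distance. The LBP parameters, the concatenation-based fusion, and the acceptance threshold are assumptions for illustration, not the authors' exact pipeline (which also uses HOG, DCT, and Gabor-Zernike moments).

```python
# Hedged sketch of feature-level fusion of palm and iris ROIs with
# LBP histograms and Euclidean-distance verification.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(roi, points=8, radius=1):
    """Uniform LBP histogram of one region of interest (grayscale array)."""
    codes = local_binary_pattern(roi, points, radius, method="uniform")
    n_bins = points + 2                                 # uniform patterns + "other"
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def fused_template(palm_roi, iris_roi):
    """Feature-level fusion: concatenate the two modality histograms."""
    return np.concatenate([lbp_histogram(palm_roi), lbp_histogram(iris_roi)])

def verify(template_enrolled, template_probe, threshold=0.25):
    """Accept the claimed identity if the Euclidean distance is small enough."""
    distance = np.linalg.norm(template_enrolled - template_probe)
    return distance < threshold
```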

