Non-linear data structure extraction using simple Hebbian networks

1995 ◽  
Vol 72 (6) ◽  
pp. 533-541 ◽  
Author(s):  
Colin Fyfe ◽  
Roland Baddeley

2005 ◽  
Vol 12 (5) ◽  
pp. 661-670 ◽  
Author(s):  
S. S. P. Rattan ◽  
B. G. Ruessink ◽  
W. W. Hsieh

Abstract. Complex principal component analysis (CPCA) is a useful linear method for dimensionality reduction of data sets characterized by propagating patterns, where the CPCA modes are linear functions of the complex principal component (CPC), consisting of an amplitude and a phase. The use of non-linear methods, such as the neural-network-based circular non-linear principal component analysis (NLPCA.cir) and the recently developed non-linear complex principal component analysis (NLCPCA), may provide a more accurate description of the data when the lower-dimensional structure is non-linear. NLPCA.cir extracts non-linear phase information without amplitude variability, while NLCPCA is capable of extracting both. NLCPCA can thus be viewed as a non-linear generalization of CPCA. In this article, NLCPCA is applied to bathymetry data from the sandy barred beaches at Egmond aan Zee (Netherlands), the Hasaki coast (Japan) and Duck (North Carolina, USA) to examine how effective this new method is, in comparison to CPCA and NLPCA.cir, in representing propagating phenomena. At Duck, the underlying low-dimensional data structure is found to have linear phase and amplitude variability only and, accordingly, CPCA performs as well as NLCPCA. At Egmond, the reduced data structure contains non-linear spatial patterns (asymmetric bar/trough shapes) without much temporal amplitude variability and, consequently, is about equally well modelled by NLCPCA and NLPCA.cir. Finally, at Hasaki, the data structure displays not only non-linear spatial variability but also considerable temporal amplitude variability, and NLCPCA outperforms both CPCA and NLPCA.cir. Because it is difficult to know in advance which of the three models best suits the structure of a given data set, the generalized NLCPCA model can be used in each situation.
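As context for the linear baseline described above, the following is a minimal Python sketch (not from the paper) of Hilbert-transform-based CPCA: each spatial time series is complexified with the Hilbert transform and the complex anomaly matrix is decomposed with an SVD. The neural-network-based NLCPCA and NLPCA.cir models are not reproduced here, and the array shapes and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

# Sketch of linear CPCA: complexify each spatial time series with the Hilbert
# transform, then take the SVD of the complex anomaly matrix.
def cpca(X, n_modes=2):
    """X: (time, space) array of real-valued anomalies."""
    Z = hilbert(X, axis=0)                     # analytic signal: X + i*H(X)
    Z = Z - Z.mean(axis=0)                     # remove the temporal mean
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]         # complex PCs (amplitude + phase)
    modes = Vh[:n_modes]                       # complex spatial patterns
    return pcs, modes

# A travelling wave is captured almost entirely by the first CPCA mode.
t = np.linspace(0, 10, 200)[:, None]
x = np.linspace(0, 1, 50)[None, :]
pcs, modes = cpca(np.cos(2 * np.pi * (x - 0.3 * t)))
amplitude, phase = np.abs(pcs[:, 0]), np.angle(pcs[:, 0])
```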


Author(s):  
Nur Adila Azram et al.

The volume of scientific data produced by laboratory instruments is growing rapidly. Because different instruments store their data in different structures and formats, this heterogeneity complicates data management and analysis. This paper proposes a metadata structure that standardizes instrument-produced scientific data into a common structure and format. The paper explains the methodology and the use of the proposed metadata structure, then summarizes the implementation and the analysis of its results. The proposed metadata structure extraction shows promising results in the evaluation and validation that were conducted.
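The paper's concrete schema is not given above; as a purely illustrative sketch of what a standardized metadata record for instrument output might look like, the Python snippet below defines one possible record and serializes it to a common exchange format. All field names are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass, field, asdict
from typing import Dict
import json

# Hypothetical standardized metadata record for instrument-produced data files.
# Field names are illustrative only; they are not taken from the paper.
@dataclass
class InstrumentMetadata:
    instrument_id: str
    instrument_type: str          # e.g. "spectrometer", "chromatograph"
    acquired_at: str              # ISO 8601 timestamp
    operator: str
    source_format: str            # original vendor file format
    data_file: str                # path or URI of the raw data file
    parameters: Dict[str, str] = field(default_factory=dict)

    def to_json(self) -> str:
        """Serialize the record to a common exchange format."""
        return json.dumps(asdict(self), indent=2)

record = InstrumentMetadata(
    instrument_id="HPLC-01",
    instrument_type="chromatograph",
    acquired_at="2021-03-15T09:30:00Z",
    operator="lab-user",
    source_format="vendor-binary",
    data_file="runs/sample_042.dat",
    parameters={"column": "C18", "flow_rate_ml_min": "1.0"},
)
print(record.to_json())
```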


Author(s):  
Danilo Machado Lawinscky da Silva ◽  
Fabrício Nogueira Corrêa ◽  
Breno Pinheiro Jacob

The objective of this work is to present the implementation of a contact model that represents, during a non-linear dynamic analysis of floating offshore systems, the contact of lines with the platform, the contact between different lines and, possibly, the contact between two different platforms in the same model. Traditional contact models use, for instance, a generalized scalar element consisting of two nodes linked by a non-linear gap spring. In this work, the contact model is geometrically defined by volumes that cannot interpenetrate. A penetration stiffness can be defined for each volume, and lateral friction can also be considered. An appropriate data structure is used to define the volumes and to guarantee the efficiency of the algorithm through an optimized search. The application of the presented contact model is demonstrated by case studies of actual offshore applications: pipelines in S-Lay installation operations, where the contact is complex and is specified only at some points of the ramp and stinger; offloading floating hoses that may collide with the hull of the ship; and catlines in lift operations.
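The paper's own volume-based data structure is not reproduced above; as a minimal illustrative sketch of the general idea, the Python snippet below applies a penalty (penetration) stiffness between non-interpenetrating spherical volumes and uses a uniform spatial hash so that candidate contact pairs are found without an all-pairs search. The function name, cell size and stiffness value are assumptions for the example only.

```python
import numpy as np
from collections import defaultdict

# Illustrative sketch only: penalty contact between spherical volumes, with a
# uniform spatial hash so candidate pairs are found without an O(n^2) search.
# The cell size is assumed to be at least the largest sphere diameter.
def contact_forces(centers, radii, stiffness, cell=1.0):
    grid = defaultdict(list)
    for i, c in enumerate(centers):                     # bin each volume by cell
        grid[tuple((c // cell).astype(int))].append(i)

    forces = np.zeros_like(centers)
    offsets = [np.array(o) for o in np.ndindex(3, 3, 3)]
    for key, members in grid.items():
        for off in offsets:                             # visit neighbouring cells
            for j in grid.get(tuple(np.array(key) + off - 1), []):
                for i in members:
                    if i >= j:                          # handle each pair once
                        continue
                    d = centers[i] - centers[j]
                    dist = np.linalg.norm(d)
                    pen = radii[i] + radii[j] - dist
                    if pen > 0.0 and dist > 0.0:        # volumes interpenetrate
                        f = stiffness * pen * d / dist  # penalty force
                        forces[i] += f
                        forces[j] -= f
    return forces

centers = np.array([[0.0, 0.0, 0.0], [0.8, 0.0, 0.0], [5.0, 0.0, 0.0]])
radii = np.array([0.5, 0.5, 0.5])
print(contact_forces(centers, radii, stiffness=1e4))
```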


2018 ◽  
Vol 10 (6) ◽  
pp. 1
Author(s):  
Rodrigo Nobre Fernandez ◽  
Felipe Garcia Ribeiro ◽  
Jean Marcel Del Ponte Duarte

This study investigates the effects of software piracy on economic growth around the world for the years 2000 to 2014, using a panel data structure with fixed effects and year dummies to capture this relationship. Our findings suggest, in general, that software piracy has a negative impact on growth and that this relationship appears to be non-linear.
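The authors' exact specification is not given in the abstract; the Python snippet below is a hedged sketch of one way such a two-way fixed-effects panel regression with a quadratic (non-linear) piracy term could be estimated by OLS with entity and year dummies. All variable names and the synthetic data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

# Illustrative sketch (not the authors' specification): growth regressed on
# piracy and piracy squared, with country and year dummies added for
# two-way fixed effects, estimated by ordinary least squares.
def fe_growth_regression(df):
    """df has columns: country, year, growth, piracy."""
    df = df.copy()
    df["piracy_sq"] = df["piracy"] ** 2          # quadratic term for non-linearity
    X = pd.get_dummies(df[["piracy", "piracy_sq", "country", "year"]],
                       columns=["country", "year"], drop_first=True)
    X = X.astype(float)
    X.insert(0, "const", 1.0)
    y = df["growth"].to_numpy(dtype=float)
    beta, *_ = np.linalg.lstsq(X.to_numpy(), y, rcond=None)
    return pd.Series(beta, index=X.columns)

# Tiny synthetic panel with a concave piracy-growth relationship.
rng = np.random.default_rng(0)
rows = []
for c in ["A", "B", "C"]:
    for y in range(2000, 2015):
        p = rng.uniform(20, 90)
        rows.append({"country": c, "year": y, "piracy": p,
                     "growth": 4 - 0.001 * (p - 40) ** 2 + rng.normal(0, 0.2)})
coefs = fe_growth_regression(pd.DataFrame(rows))
print(coefs[["piracy", "piracy_sq"]])
```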


Author(s):  
J. Henrard ◽  
J. -L. Hainaut ◽  
J. -M. Hick ◽  
D. Roland ◽  
V. Englebert

Image forgery is an illegal activity under cyber laws. Among the various types of forgery, copy-move forgery, in which part of an image is copied and pasted elsewhere in the same image, can destroy the image's integrity or authenticity. A number of popular detection techniques exist, such as SIFT and SURF, but they have high computational complexity. Here we propose a method to detect forgeries that compromise image integrity or authenticity. In the proposed method, descriptor matching is implemented with a trie data structure, which reduces the complexity of the matching step to O(n log n). A keypoint-based approach allows the integrity of the image to be verified, but extracting features with keypoints is a computationally expensive task. The KAZE method addresses this: its non-linear diffusion filtering requires solving a series of PDEs, which cannot be done analytically, forcing KAZE to use a numerical method, an additive operator splitting (AOS) scheme, to solve them. Because this process is itself computationally costly, an accelerated version of KAZE was created: Accelerated-KAZE (AKAZE), which builds the non-linear scale space through Fast Explicit Diffusion (FED) to reduce the complexity of feature extraction.
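The paper's trie-based matcher is not reproduced above; as a hedged Python sketch of the overall pipeline, the snippet below extracts AKAZE keypoints and descriptors with OpenCV and flags pairs of identical descriptors that lie far apart in the same image as crude copy-move candidates. A lexicographic sort stands in for the trie index here, keeping the matching step at O(n log n); the function name and the min_shift threshold are assumptions for the example.

```python
import cv2
import numpy as np

# Illustrative sketch only (not the paper's implementation): AKAZE features
# plus sorted-descriptor matching as a crude copy-move forgery indicator.
def copy_move_candidates(image_path, min_shift=30):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    akaze = cv2.AKAZE_create()                     # non-linear scale space (FED)
    keypoints, desc = akaze.detectAndCompute(img, None)

    order = np.lexsort(desc.T[::-1])               # sort descriptors, O(n log n)
    pairs = []
    for a, b in zip(order[:-1], order[1:]):        # compare sorted neighbours
        if np.array_equal(desc[a], desc[b]):       # identical descriptors
            pa, pb = np.array(keypoints[a].pt), np.array(keypoints[b].pt)
            if np.linalg.norm(pa - pb) > min_shift:    # far apart in the image
                pairs.append((tuple(pa), tuple(pb)))
    return pairs

# print(copy_move_candidates("suspect.jpg"))
```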

