Human latent-state generalization through prototype learning with discriminative attention

2021 ◽  
Author(s):  
Warren Woodrich Pettine ◽  
Dhruva V. Raman ◽  
A. David Redish ◽  
John D. Murray

People cannot access the latent causes giving rise to experience. How then do they approximate the high-dimensional feature space of the external world with lower-dimensional internal models that generalize to novel examples or contexts? Here, we developed and tested a theoretical framework that internally identifies states by feature regularity (i.e., prototype states) and selectively attends to features according to their informativeness for discriminating between likely states. To test theoretical predictions, we developed experimental tasks in which human subjects first learn, through reward feedback, internal models of the latent states governing the actions associated with multi-feature stimuli. We then analyzed subjects’ response patterns to novel examples and contexts. These combined theoretical and experimental results reveal that the human ability to generalize actions involves the formation of prototype states with flexible deployment of top-down attention to discriminative features. These cognitive strategies underlie the human ability to generalize learned latent states in high-dimensional environments.
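The core idea of the framework, classifying a novel stimulus by its nearest prototype under feature weights that emphasize discriminative features, can be sketched minimally as follows. This is an illustrative toy, not the authors' actual model; the prototypes, stimuli, and the particular attention rule (weights proportional to between-prototype feature spread) are assumptions.

```python
import numpy as np

# Hypothetical prototypes for two latent states over 3 stimulus features.
prototypes = np.array([
    [1.0, 0.0, 0.5],   # state A
    [0.0, 1.0, 0.5],   # state B
])

# Attention weights: up-weight features whose values differ between the
# likely states (discriminative), down-weight shared (uninformative) ones.
spread = prototypes.max(axis=0) - prototypes.min(axis=0)
attention = spread / spread.sum()

def classify(stimulus):
    """Assign a stimulus to the nearest prototype under attention-weighted distance."""
    d = np.sqrt(((prototypes - stimulus) ** 2 * attention).sum(axis=1))
    return int(np.argmin(d))

# A novel stimulus that matches state A on the discriminative features:
print(classify(np.array([0.9, 0.1, 0.0])))  # → 0
```

Note that the third feature, identical across prototypes, receives zero attention, so even a large deviation on it (0.0 vs. 0.5) does not affect the state assignment.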

Author(s):  
Qing Zhang ◽  
Heng Li ◽  
Xiaolong Zhang ◽  
Haifeng Wang

To achieve higher fault diagnosis accuracy from multi-domain features of vibration signals, it is significant and challenging to refine the most representative and intrinsic feature components from the original high-dimensional feature space. A novel dimensionality reduction method for fault diagnosis is proposed based on local Fisher discriminant analysis (LFDA), which takes both label information and the local geometric structure of the high-dimensional features into consideration. A multi-kernel trick is introduced into the LFDA to improve its performance in mapping the nonlinear high-dimensional feature space into a lower-dimensional one. To obtain optimal diagnosis accuracy from the reduced low-dimensional features, the binary particle swarm optimization (BPSO) algorithm is used to search for the most appropriate kernel parameters and K-nearest neighbor (kNN) recognition model. Labeled samples are used to train the optimal multi-kernel LFDA and kNN (OMKLFDA-kNN) fault diagnosis model and obtain the optimal transformation matrix. The trained fault diagnosis model then recognizes machinery health conditions from the most representative feature space of the vibration signals. A bearing fault diagnosis experiment verifies the effectiveness of the proposed approach; performance comparisons with other methods confirm its improvements in several respects.
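The overall pipeline, supervised discriminant reduction followed by kNN recognition on the reduced features, can be sketched with standard tools. This is a simplified analogue, assuming plain (global) LDA in place of multi-kernel LFDA, synthetic data in place of vibration features, and fixed hyperparameters in place of BPSO search:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled multi-domain vibration features.
X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Supervised reduction to (n_classes - 1) dimensions, then kNN recognition.
model = make_pipeline(
    LinearDiscriminantAnalysis(n_components=2),
    KNeighborsClassifier(n_neighbors=5),
)
model.fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 2))
```

In the paper's method, the LDA step would be replaced by multi-kernel LFDA (which also respects local geometry and nonlinearity), and the kernel and kNN parameters would be selected by BPSO rather than fixed.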


2018 ◽  
Vol 7 (4.5) ◽  
pp. 159
Author(s):  
Vaibhav A. Hiwase ◽  
Dr. Avinash J Agrawa

The growth of life insurance depends mainly on the risk of insured people. These risks are unevenly distributed across the population and can be captured from different characteristics and lifestyles. This unknown distribution needs to be estimated from historical data and used for underwriting and policy-making in the life insurance industry. Traditionally, risk is calculated from selected features known as risk factors, but today it is important to identify these risk factors in a high-dimensional feature space. Clustering in a high-dimensional feature space is a challenging task, mainly because of the curse of dimensionality and noisy features. Hence, data mining and machine learning techniques should be applied to uncover interesting patterns and behaviors. This will help protect both the insured person and the life insurance company from financial loss. This paper focuses on analyzing hidden correlations among features and using them to calculate the risk of an individual customer.


2021 ◽  
Vol 50 (1) ◽  
pp. 138-152
Author(s):  
Mujeeb Ur Rehman ◽  
Dost Muhammad Khan

Recently, anomaly detection has attracted growing attention from data mining researchers as its reputation has steadily increased across practical domains such as product marketing, fraud detection, medical diagnosis, and fault detection. High-dimensional data subjected to outlier detection poses exceptional challenges for data mining experts because of the inherent curse of dimensionality and the growing resemblance between distant and adjacent points. Traditional algorithms operate on the full feature space and concentrate largely on low-dimensional data, and are therefore ineffective at discovering anomalies in data sets with a high number of dimensions. Digging out the anomalies present in a high-dimensional data set becomes very difficult and tiresome when all subspace projections need to be explored. All data points in high-dimensional data behave like similar observations because of an intrinsic property of such spaces: the contrast between distances to observations approaches zero as the number of dimensions extends toward infinity. This research work proposes a novel technique that measures the deviation among all data points and embeds its findings inside well-established density-based techniques. It opens a new line of research toward resolving the inherent problems of high-dimensional data in which outliers reside within clusters of different densities. A high-dimensional data set from the UCI Machine Learning Repository is chosen to test the proposed technique, and its results are compared with those of density-based techniques to evaluate its efficiency.
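The distance-concentration effect described above, distances losing contrast as dimensionality grows, is easy to demonstrate empirically. The sketch below (illustrative, with uniform random data and an arbitrary sample size) measures the relative contrast between the nearest and farthest neighbour of a query point:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(dim, n=500):
    """Ratio (d_max - d_min) / d_min over distances from a query point to n points."""
    points = rng.random((n, dim))
    d = np.linalg.norm(points - points[0], axis=1)[1:]
    return (d.max() - d.min()) / d.min()

# The contrast between nearest and farthest neighbour shrinks as dimension grows,
# which is why full-space distance-based outlier scores degrade in high dimensions.
for dim in (2, 20, 200):
    print(dim, round(relative_contrast(dim), 2))
```

This is the motivation for subspace- and density-aware approaches like the one proposed: when every point is roughly equidistant from every other, outlierness must be sought in lower-dimensional projections or local density structure instead.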


2021 ◽  
Vol 15 (5) ◽  
pp. 356-371
Author(s):  
Cláudio M. F. Leite ◽  
Carlos E. Campos ◽  
Crislaine R. Couto ◽  
Herbert Ugrinowitsch

Interacting with the environment requires a remarkable ability to control, learn, and adapt motor skills to ever-changing conditions. The intriguing complexity involved in the process of controlling, learning, and adapting motor skills has led to the development of many theoretical approaches to explain and investigate motor behavior. This paper presents a theoretical approach built upon the top-down mode of motor control that shows substantial internal coherence and has a large and growing body of empirical evidence: the Internal Models. Internal Models are representations of the external world within the CNS, which learn to predict this external world, simulate behaviors based on sensory inputs, and transform these predictions into motor actions. We present the Internal Models' background based on two main structures, Inverse and Forward models, explain how they work, and present some of their applications.
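The division of labor between the two structures can be sketched in a minimal form. This is an illustrative toy assuming a one-dimensional linear plant, not a model from the paper: the forward model predicts the sensory outcome of a motor command, and the inverse model computes the command needed to reach a desired outcome.

```python
# Assumed plant dynamics: x_next = A*x + B*u (1-D linear system, chosen for illustration).
A, B = 0.9, 0.5

def forward_model(x, u):
    """Forward model: predict the next state from current state x and motor command u."""
    return A * x + B * u

def inverse_model(x, x_desired):
    """Inverse model: compute the motor command that drives the plant to x_desired."""
    return (x_desired - A * x) / B

x, target = 0.0, 1.0
u = inverse_model(x, target)       # inverse model selects the command
prediction = forward_model(x, u)   # forward model predicts its outcome
print(prediction == target)        # → True
```

When the forward model's prediction disagrees with actual sensory feedback, that prediction error can drive both online correction and learning of the models themselves, which is the adaptive loop the paper reviews.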


2021 ◽  
Vol 9 ◽  
Author(s):  
Xiangwan Fu ◽  
Mingzhu Tang ◽  
Dongqun Xu ◽  
Jun Yang ◽  
Donglin Chen ◽  
...  

To address the difficulty of modeling the nonlinear relations in steam coal data, this article proposes a method for forecasting the price of steam coal based on robust regularized kernel regression and empirical mode decomposition. A robust regularized kernel regression model is constructed from a polynomial kernel function, a robust loss function, and an L2 regularization term. The polynomial kernel function has no bandwidth parameter to tune and can mine global regularities in the dataset, which improves the forecasting stability of the kernel model. The method maps the features into a high-dimensional space with the polynomial kernel function, transforming the nonlinear relations in the original feature space into linear relations in the high-dimensional space, where they can be learned by a linear model. The Huber loss function is selected to reduce the influence of abnormal noise in the dataset on model performance, and the L2 regularization term reduces the risk of overfitting. A combined model based on empirical mode decomposition (EMD) and the autoregressive integrated moving average (ARIMA) model compensates for the error of the robust regularized kernel regression model, making up for the limitations of a single forecasting model. Finally, the proposed model is verified on a steam coal dataset; after evaluation with indices such as RMSE, MAE, and mean absolute percentage error, it achieves the best index values among the contrast models.
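The regression component, a polynomial feature map combined with Huber loss and L2 regularization, can be sketched with standard tools. This is a simplified analogue, not the authors' model: it uses an explicit polynomial feature expansion (the linear-space counterpart of a polynomial kernel) on synthetic data, and omits the EMD-ARIMA error-compensation stage.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200).reshape(-1, 1)
y = 0.5 * x.ravel() ** 3 - x.ravel() + rng.normal(0, 0.1, 200)
y[::25] += 5.0  # inject outliers that would distort a squared-loss fit

# Explicit degree-3 polynomial map makes the cubic relation linear in the
# expanded space; Huber loss resists the outliers; alpha is the L2 penalty.
model = make_pipeline(
    PolynomialFeatures(degree=3),
    StandardScaler(),
    HuberRegressor(alpha=1e-3),
)
model.fit(x, y)
print(round(float(model.predict([[1.0]])[0]), 1))
```

The prediction at x = 1 stays close to the true value of -0.5 despite the injected outliers, which is the robustness the Huber loss buys over a squared-loss fit.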


2020 ◽  
pp. 584-618
Author(s):  
Dariusz Jacek Jakóbczak

The method of Probabilistic Features Combination (PFC) enables interpolation and modeling of high-dimensional data using combinations of features and different coefficients γ: polynomial, sinusoidal, cosinusoidal, tangent, cotangent, logarithmic, exponential, arcsine, arccosine, arctangent, arccotangent, or power functions. The function used to calculate γ is chosen individually for each modeling task and is treated as an N-dimensional probability distribution function: γ depends on the initial requirements and the features' specifications. The PFC method supports data interpolation for handwriting or signature identification and image retrieval via a discrete set of feature vectors in an N-dimensional feature space. PFC thus combines two important problems, interpolation and modeling, in the context of image retrieval or writer identification. A main feature of the PFC method is that its interpolation generalizes linear interpolation in multidimensional feature spaces to other functions treated as N-dimensional probability distribution functions.


2020 ◽  
Vol 20 ◽  
pp. S207
Author(s):  
Muharrem Muftuoglu ◽  
Po Yee Mak ◽  
Vivian Ruvolo ◽  
Yuki Nishida ◽  
Peter Ruvolo ◽  
...  

Author(s):  
Colin Ware ◽  
Roland Arsenault

Objective: The objective was to evaluate the use of a spatially aware handheld chart display in a comparison with a track-up fixed display configuration and to investigate how cognitive strategies vary when performing the task of matching chart symbols with environmental features under different display geometries and task constraints. Background: Small-screen devices containing both accelerometers and magnetometers support the development of spatially aware handheld maps. These can be designed so that symbols representing targets in the external world appear in a perspective view determined by the orientation of the device. Method: A panoramic display was used to simulate a marine environment. The task involved matching targets in the scene to symbols on simulated chart displays. In Experiment 1, a spatially aware handheld chart display was compared to a fixed track-up chart display. In Experiment 2, a gaze monitoring system was added and the distance between the chart display and the scene viewpoint was varied. Results: All respondents were faster with the handheld device. Novices were much more accurate with the handheld device. People allocated their gaze very differently if they had to move between a map display and a view of the environment. Conclusion: There may be important benefits to spatially aware handheld displays in reducing errors relating to common navigation tasks. Application: Both the difficulty of spatial transformations and the allocation of attention should be considered in the design of chart displays.

