Incidentally Activated Knowledge and Stereotype Based Judgments: A Consideration of Primed Construct-Target Attribute Match

2000 ◽  
Vol 18 (4) ◽  
pp. 377-399 ◽  
Author(s):  
Olivier Corneille ◽  
Theresa K. Vescio ◽  
Charles M. Judd
2019 ◽  
Vol 2019 ◽  
pp. 1-17
Author(s):  
Pelin Yıldırım ◽  
Ulaş K. Birant ◽  
Derya Birant

Learning the latent patterns of historical data efficiently, so as to model the behaviour of a system, is essential for making sound decisions. To this end, machine learning has already made promising marks in transportation, as well as in many other areas such as marketing, finance, education, and health. However, many classification algorithms in the literature assume that the target attribute values in a dataset are unordered, and so discard the inherent order among the class values. To overcome this problem, this study proposes a novel ensemble-based ordinal classification (EBOC) approach, which applies bagging and boosting (the AdaBoost algorithm) to the ordinal classification problem in the transportation sector. The article also compares the proposed EBOC approach with the ordinal class classifier and with traditional tree-based classification algorithms (i.e., C4.5 decision tree, RandomTree, and REPTree) in terms of accuracy. The results indicate that the proposed EBOC approach achieves better classification performance than the conventional solutions.
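The abstract does not spell out the EBOC construction, but one standard way to combine an ordinal decomposition with an ensemble is the Frank and Hall binary decomposition with one bagged tree ensemble per class threshold. The following is a minimal sketch along those lines (not the authors' exact implementation), assuming scikit-learn 1.2 or later for the `estimator` keyword and integer-coded ordinal labels:

```python
# Sketch: ordinal classification via Frank & Hall binary decomposition,
# with each binary subproblem handled by a bagged decision-tree ensemble.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

class OrdinalBaggedClassifier:
    """For K ordered classes, train K-1 bagged models, where model k
    estimates P(y > k); class probabilities are recovered by differencing."""

    def __init__(self, n_estimators=50, random_state=0):
        self.n_estimators = n_estimators
        self.random_state = random_state

    def fit(self, X, y):
        self.classes_ = np.sort(np.unique(y))   # assumes ordinal-coded labels
        self.models_ = []
        for k in self.classes_[:-1]:            # one binary task per threshold
            clf = BaggingClassifier(
                estimator=DecisionTreeClassifier(),
                n_estimators=self.n_estimators,
                random_state=self.random_state,
            )
            clf.fit(X, (y > k).astype(int))
            self.models_.append(clf)
        return self

    def predict(self, X):
        # P(y > k) for each threshold k
        gt = np.column_stack([m.predict_proba(X)[:, 1] for m in self.models_])
        # P(y = k) = P(y > k-1) - P(y > k), with boundary terms 1 and 0
        probs = np.hstack([1 - gt[:, :1], gt[:, :-1] - gt[:, 1:], gt[:, -1:]])
        return self.classes_[np.argmax(probs, axis=1)]
```

Swapping `BaggingClassifier` for scikit-learn's `AdaBoostClassifier` would give the boosting variant of the same decomposition.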


2012 ◽  
Vol 532-533 ◽  
pp. 1046-1050
Author(s):  
Xiao Dong Wu ◽  
Wei Min Li ◽  
Lin Zhang

Starting from the need to counter the hypersonic near-space target (HNST), a multi-attribute evaluation method for HNST threat based on RAG-TOPSIS is proposed. The target attribute weights are determined using RAG, which avoids the opaqueness of attribute selection that users face in traditional TOPSIS and makes the determination of the target attribute weights more rigorous. A multi-attribute evaluation model of HNST threat is established, and the rationality and effectiveness of the method are then verified with an example.
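The RAG weighting scheme is not detailed in the abstract, but the TOPSIS ranking step it feeds into is standard. A minimal sketch, with the `weights` vector assumed to come from RAG and the target attributes chosen purely for illustration:

```python
# Sketch of the standard TOPSIS ranking step (the RAG weighting itself is
# not specified here; `weights` is assumed to be its output).
import numpy as np

def topsis(decision_matrix, weights, benefit_mask):
    """Rank alternatives (rows) on attributes (columns) by relative
    closeness to the ideal solution.

    decision_matrix : (n_targets, n_attributes) raw attribute values
    weights         : attribute weights summing to 1 (e.g., from RAG)
    benefit_mask    : True where larger values mean greater threat
    """
    X = np.asarray(decision_matrix, dtype=float)
    # Vector-normalize each column, then apply the attribute weights.
    V = weights * X / np.linalg.norm(X, axis=0)
    # Ideal and anti-ideal solutions, respecting benefit/cost direction.
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # closeness: higher = more threatening

# Hypothetical example: 3 targets, attributes (speed, altitude, distance).
scores = topsis(
    [[6.0, 40, 800], [5.0, 35, 500], [7.0, 50, 1200]],
    weights=np.array([0.5, 0.2, 0.3]),
    benefit_mask=np.array([True, True, False]),  # closer targets threaten more
)
print(np.argsort(scores)[::-1])  # indices sorted by descending threat
```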


1995 ◽  
Vol 58 (1) ◽  
pp. 39-48 ◽  
Author(s):  
A. VAN LOEY ◽  
L. LUDIKHUYZE ◽  
M. HENDRICKX ◽  
S. DE CORDT ◽  
P. TOBBACK

The allowed difference in z-value between a single-component time/temperature integrator (SCTTI) and the target attribute, such that the impact of a thermal process is measured with a given accuracy, was examined theoretically. For isothermal heating profiles, both the occurrence and the degree of over- or underestimation of the actual process value can be predicted as a function of the z-value difference and the reference temperature. The closer the processing temperature is to the reference temperature, the larger the allowed difference in z-value. For target attributes characterized by a higher z-value, the allowed z-value difference between the SCTTI and the target attribute increases. For non-isothermal heating profiles, over- or underestimation additionally depends on the temperature history of the product. In order to obtain a safe estimate of the impact value, whatever the shape of the time/temperature profile, a new approach is suggested, based on an SCTTI with a z-value below the target z-value in combination with a reference temperature above or equal to the maximum processing temperature.
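As an illustration of the suggested approach: the impact of a thermal process on an attribute with z-value z at reference temperature T_ref is the process value F = ∫ 10^((T(t) − T_ref)/z) dt. The sketch below uses a hypothetical non-isothermal profile to show that an SCTTI with a z-value below the target z-value, evaluated at a reference temperature at or above the maximum processing temperature, underestimates the true impact and so errs on the safe side:

```python
# Illustration (all numbers hypothetical) of the standard process-value
# integral and of the safe-estimate property described in the abstract.
import numpy as np

def process_value(temps_c, times_min, z, t_ref):
    """Trapezoidal approximation of F = integral of 10^((T(t)-T_ref)/z) dt."""
    lethal_rate = 10.0 ** ((np.asarray(temps_c, float) - t_ref) / z)
    return np.trapz(lethal_rate, times_min)

t = np.linspace(0, 30, 301)                  # 30 min process
profile = 90 + 30 * np.sin(np.pi * t / 30)   # non-isothermal, peaks at 120 °C

# SCTTI with z below the target z, read out at T_ref >= max processing
# temperature (121.1 °C here), gives a conservative estimate regardless
# of the profile shape.
f_target = process_value(profile, t, z=10.0, t_ref=121.1)  # target attribute
f_sctti = process_value(profile, t, z=8.0, t_ref=121.1)    # SCTTI estimate
print(f"target: {f_target:.2f} min, SCTTI: {f_sctti:.2f} min")
```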


2020 ◽  
Author(s):  
Hugo Manuel Proença ◽  
Peter Grünwald ◽  
Thomas Bäck ◽  
Matthijs van Leeuwen

The task of subgroup discovery (SD) is to find interpretable descriptions of subsets of a dataset that stand out with respect to a target attribute. To address the problem of mining large numbers of redundant subgroups, subgroup set discovery (SSD) has been proposed. State-of-the-art SSD methods have their limitations, though, as they typically rely heavily on heuristics and/or user-chosen hyperparameters. We propose a dispersion-aware problem formulation for subgroup set discovery that is based on the minimum description length (MDL) principle and subgroup lists. We argue that the best subgroup list is the one that best summarizes the data given the overall distribution of the target. We restrict our focus to a single numeric target variable and show that our formalization coincides with an existing quality measure when finding a single subgroup, but that, in addition, it allows trading off subgroup quality against the complexity of the subgroup. We then propose SSD++, a heuristic algorithm for which we empirically demonstrate that it returns outstanding subgroup lists: non-redundant sets of compact subgroups that stand out by having strongly deviating means and small spread.
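The paper's exact MDL formulation is not reproduced in the abstract. As a rough illustration of what "dispersion-aware" means for a numeric target, one can score a subgroup by its size times the KL divergence between a Gaussian fit to the subgroup and a Gaussian fit to the whole dataset, which rewards both a deviating mean and a small spread. A sketch under that simplification (not the paper's measure):

```python
# Sketch of a dispersion-aware subgroup quality measure for a numeric
# target: size-weighted KL divergence between the subgroup's Gaussian fit
# and the overall dataset's Gaussian fit.
import numpy as np

def gaussian_kl(mu1, s1, mu2, s2):
    """KL( N(mu1, s1^2) || N(mu2, s2^2) )."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def subgroup_quality(target, member_mask):
    """Higher = subgroup deviates more (in mean and/or spread), scaled by size."""
    y = np.asarray(target, float)
    sub = y[member_mask]
    return sub.size * gaussian_kl(sub.mean(), sub.std() + 1e-9,
                                  y.mean(), y.std() + 1e-9)

# Example: a planted subgroup with a strongly deviating mean and small spread.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 1000)
y[:100] = rng.normal(3.0, 0.2, 100)
mask = np.zeros(1000, dtype=bool)
mask[:100] = True
print(subgroup_quality(y, mask))
```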


2019 ◽  
Vol 56 (12) ◽  
pp. 122901
Author(s):  
谢若晗 Ruohan Xie ◽  
何思远 Siyuan He ◽  
朱国强 Guoqiang Zhu ◽  
张云华 Yunhua Zhang

2020 ◽  
Author(s):  
Peder Mortvedt Isager

This article suggests a modification to the conception of test validity put forward by Borsboom, Mellenbergh and van Heerden (2004). According to the original definition, a test is only valid if test outcomes are caused by variation in the target attribute. According to the d-connection definition of test validity, a test is valid for measuring an attribute if (a) the attribute exists, and (b) variation in the attribute is d-connected to variation in the measurement outcomes. In other words, a test is valid whenever test outcomes inform us either about what has happened to the target attribute in the past, or about what will happen to the target attribute in the future. Thus, the d-connection definition expands the number of scenarios in which a test can be considered valid. Defining test validity as d-connection between target and measured attribute situates the validity concept squarely within the structural causal modeling framework of Pearl (2009).
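As a toy illustration (not from the article) of why d-connection is weaker than direct causation, the simulation below compares a test outcome caused by the attribute itself with one caused by an earlier state of the attribute. In both structures the attribute and the outcome are d-connected, so the outcome is informative about the attribute either way:

```python
# Toy simulation: a test can satisfy the d-connection definition even when
# the current attribute does not itself cause the outcome.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Structure 1 (original definition holds): attribute -> outcome.
attr = rng.normal(size=n)
outcome1 = attr + rng.normal(scale=0.5, size=n)

# Structure 2 (d-connection only): past_attr -> attr, past_attr -> outcome.
past_attr = rng.normal(size=n)
attr2 = past_attr + rng.normal(scale=0.5, size=n)
outcome2 = past_attr + rng.normal(scale=0.5, size=n)

# Outcomes are informative about the attribute in both cases, although only
# in the first case does the attribute cause the outcome.
print(np.corrcoef(attr, outcome1)[0, 1])   # ~0.89
print(np.corrcoef(attr2, outcome2)[0, 1])  # ~0.80
```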


Author(s):  
I.A. Borisova ◽  
O.A. Kutnenko

Outlier detection is an important problem in the data mining of biomedical datasets, particularly when objects may be misclassified because of diagnostic pitfalls at the data collection stage. The occurrence of such objects complicates and slows down dataset processing, distorts and corrupts detected regularities, and reduces accuracy scores. We propose a censoring algorithm that detects misclassified objects, which are then either removed from the dataset or have their class attribute corrected. The correction procedure keeps the volume of the analyzed dataset as large as possible, a useful property when analyzing small datasets, where every bit of information can be important. The core concept of the presented work is a measure of the similarity of an object to its surroundings. To evaluate the local similarity of an object to its closest neighbors, a ternary relative measure called the function of rival similarity (FRiS-function) is used. The mean similarity value over all objects in the dataset quantifies class separability: how close objects of the same class are to each other and how far they are from objects of different classes (with different diagnoses) in the attribute space. Misclassified objects are assumed to be more similar to objects of rival classes than to their own class, so their elimination from the dataset, or the correction of their target attribute, should increase the data separability value. The procedure for filtering and correcting misclassified objects is based on observing the change in the estimated data separability before and after each correction of the dataset, and the censoring process continues until the inflection point of the separability function is reached. The proposed algorithm was tested on a wide range of model tasks of varying complexity, as well as on biomedical tasks such as the Pima Indians Diabetes, Breast Cancer, and Parkinson datasets. On these tasks, the censoring algorithm showed high sensitivity to misclassification. The increase in accuracy scores and the preservation of dataset volume after the censoring procedure confirm our underlying assumptions and the effectiveness of the algorithm.
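The abstract does not give the FRiS formula in full. In its simplest nearest-neighbor form, the rival similarity of an object compares r1, the distance to its nearest same-class neighbor, with r2, the distance to the nearest object of a rival class, as F = (r2 − r1)/(r2 + r1) in [−1, 1]. A minimal sketch of the flagging step under that simplification (the separability tracking and inflection-point stopping rule of the full censoring algorithm are omitted):

```python
# Sketch: flag objects that are more similar to a rival class than to
# their own class using a nearest-neighbor FRiS score.
import numpy as np
from scipy.spatial.distance import cdist

def fris_scores(X, y):
    """F < 0 means the object looks more like a rival class than its own."""
    X, y = np.asarray(X, float), np.asarray(y)
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)          # exclude the object itself
    scores = np.empty(len(y))
    for i in range(len(y)):
        r1 = D[i, y == y[i]].min()       # nearest same-class neighbor
        r2 = D[i, y != y[i]].min()       # nearest rival-class object
        scores[i] = (r2 - r1) / (r2 + r1)
    return scores

# Objects with negative FRiS are candidates for removal or label correction:
# suspects = np.where(fris_scores(X, y) < 0)[0]
```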

