target attributes: Recently Published Documents

Total documents: 27 (five years: 6)
H-index: 7 (five years: 0)

2021 ◽  
Vol 2006 (1) ◽  
pp. 012032
Author(s):  
Jianan Wang ◽  
Bo Wu ◽  
Zhaojun Wang ◽  
Nana Yao ◽  
Jiajun Wu ◽  
...  

2020 ◽  
Vol 24 (6) ◽  
pp. 1403-1439
Author(s):  
Marvin Meeng ◽  
Harm de Vries ◽  
Peter Flach ◽  
Siegfried Nijssen ◽  
Arno Knobbe

Subgroup Discovery is a supervised, exploratory data mining paradigm that aims to identify subsets of a dataset that show interesting behaviour with respect to some designated target attribute. The way in which such distributional differences are quantified varies with the target attribute type. This work concerns continuous targets, which are important in many practical applications. For such targets, differences are often quantified using z-score and similar measures that compare simple statistics such as the mean and variance of the subset and the data. However, most distributions are not fully determined by their mean and variance alone. As a result, measures of distributional difference solely based on such simple statistics will miss potentially interesting subgroups. This work proposes methods to recognise distributional differences in a much broader sense. To this end, density estimation is performed using histogram and kernel density estimation techniques. In the spirit of Exceptional Model Mining, the proposed methods are extended to deal with multiple continuous target attributes, such that comparisons are not restricted to univariate distributions, but are available for joint distributions of any dimensionality. The methods can be incorporated easily into existing Subgroup Discovery frameworks, so no new frameworks are developed.
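As a concrete illustration of why mean/variance-based measures can miss interesting subgroups, the sketch below compares a histogram density of a subgroup's target values against that of the full data. The total-variation distance used here is one possible choice; the paper's exact histogram and kernel-based quality measures may differ.

```python
import numpy as np

def hist_quality(subgroup, data, bins=10):
    """Quality of a subgroup as the total-variation distance between
    the histogram density of the subgroup's target values and that of
    the whole dataset (a sketch of one histogram-based measure)."""
    edges = np.histogram_bin_edges(data, bins=bins)
    p, _ = np.histogram(subgroup, bins=edges)
    q, _ = np.histogram(data, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 5000)
# A subgroup with roughly the same mean and variance as the data,
# but a bimodal shape that a z-score style measure would not see:
sub = np.concatenate([rng.normal(-1.0, 0.1, 500),
                      rng.normal(1.0, 0.1, 500)])
score = hist_quality(sub, data)
```

The subgroup's mean is close to the data's mean, yet the histogram distance is large, which is exactly the kind of distributional difference the abstract argues for detecting.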


2020 ◽  
Vol 12 (23) ◽  
pp. 3863
Author(s):  
Chenwei Wang ◽  
Jifang Pei ◽  
Zhiyong Wang ◽  
Yulin Huang ◽  
Junjie Wu ◽  
...  

With recent advances in deep learning, automatic target recognition (ATR) of synthetic aperture radar (SAR) has achieved superior performance. Rather than being limited to the target category, a SAR ATR system can benefit from the simultaneous extraction of multifarious target attributes. In this paper, we propose a new multi-task learning approach for SAR ATR that obtains the accurate category and precise shape of targets simultaneously. By introducing deep learning theory into multi-task learning, we first propose a novel multi-task deep learning framework with two main structures: an encoder and a decoder. The encoder is constructed to extract sufficient image features at different scales for the decoder, while the decoder is a task-specific structure that employs these extracted features adaptively and optimally to meet the different feature demands of recognition and segmentation. The proposed framework therefore has the ability to achieve superior recognition and segmentation performance. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset show the superiority of the proposed framework in terms of recognition and segmentation.
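The shared-encoder, task-specific-decoder layout can be sketched structurally as below. All layers here are stand-ins with no learning, and the shapes and operations are illustrative assumptions, not the paper's architecture; the point is only that one encoder's multi-scale features feed two separate task heads.

```python
import numpy as np

class MultiTaskSketch:
    """Toy illustration of a shared encoder feeding a recognition
    head and a segmentation head (structural sketch only)."""

    def __init__(self, num_classes=10, seed=0):
        rng = np.random.default_rng(seed)
        self.num_classes = num_classes
        # Hypothetical linear recognition head over 2 pooled features.
        self.w = rng.normal(size=(2, num_classes))

    def encode(self, image):
        # Two "scales": full resolution and a 2x-downsampled copy.
        return image, image[::2, ::2]

    def recognize(self, features):
        fine, coarse = features
        pooled = np.array([fine.mean(), coarse.mean()])
        return pooled @ self.w            # class logits

    def segment(self, features):
        fine, _ = features
        # Per-pixel binary mask at input resolution (toy threshold).
        return (fine > fine.mean()).astype(np.uint8)

model = MultiTaskSketch()
img = np.random.default_rng(1).normal(size=(64, 64))
feats = model.encode(img)      # computed once, shared by both heads
logits = model.recognize(feats)
mask = model.segment(feats)
```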


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Hui Liu ◽  
Jingqing Jiang ◽  
Yaowei Hou ◽  
Jie Song

Cities in the big data era hold massive urban data from which valuable information and digitally enhanced services can be created. Sources of urban data are generally categorized into three types: official, social, and sensorial, coming from government and enterprises, citizens' social networks, and sensor networks, respectively. These types typically differ significantly from each other but are consolidated together for smart urban services. Given these sophisticated consolidation approaches, we argue that a new challenge, fragment complexity, is ignored in state-of-the-art urban data management: a well-integrated dataset may have an appropriate but fragmentary schema that is difficult to query. Compared with a predefined, rigid schema, a fragmentary schema means a dataset contains millions of attributes distributed nonorthogonally among tables, and the values of these attributes are likewise massive. For a query, the first problem encountered is locating where these attributes are stored, to which traditional value-based query optimization contributes nothing. To address this problem, we propose an index on massive attributes as an attributes-oriented optimization, namely, the attribute index. The attribute index is a secondary index for locating the files in which the target attributes are stored. It contains three parts: ATree for searching keys, DTree for locating keys among files, and ADLinks as a mapping table between ATree and DTree. In this paper, the index architecture, logical structure and algorithms, implementation details, creation process, integration into an existing key-value store, and the urban application scenario are described. Experiments show that, in comparison with B+-Tree, LSM-Tree, and AVL-Tree, the query time of ATree is 1.1x, 1.5x, and 1.2x faster, respectively. Finally, we integrate our proposition with HBase as UrbanBase, whose query performance is 1.3x faster than that of the original HBase.
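The three-part layout named in the abstract (ATree, DTree, ADLinks) can be sketched with plain dictionaries. The real structures are search trees, so this only illustrates the lookup path from an attribute name to the files that store it:

```python
class AttributeIndex:
    """Sketch of a secondary index on attributes: ATree maps an
    attribute name to a key, DTree maps that key to the files storing
    the attribute, and ADLinks ties the two together. Plain dicts
    stand in for the paper's tree structures."""

    def __init__(self):
        self.atree = {}    # attribute name -> key id
        self.dtree = {}    # key id -> set of file names
        self.adlinks = {}  # attribute name -> key id (ATree<->DTree map)

    def add(self, attribute, filename):
        key = self.atree.setdefault(attribute, len(self.atree))
        self.adlinks[attribute] = key
        self.dtree.setdefault(key, set()).add(filename)

    def locate(self, attribute):
        """Return the files in which `attribute` is stored."""
        key = self.adlinks.get(attribute)
        return self.dtree.get(key, set()) if key is not None else set()

# Hypothetical urban attributes and file names, for illustration only:
idx = AttributeIndex()
idx.add("traffic.speed", "file_0007")
idx.add("traffic.speed", "file_0042")
idx.add("air.pm25", "file_0042")
files = idx.locate("traffic.speed")
```

A query engine would consult `locate` first to prune the files to scan, which is the attributes-oriented optimization the abstract contrasts with value-based query optimization.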


2020 ◽  
Vol 62 (11) ◽  
pp. 4223-4253
Author(s):  
Panagiotis Mandros ◽  
Mario Boley ◽  
Jilles Vreeken

We consider the task of discovering functional dependencies in data for target attributes of interest. To solve it, we have to answer two questions: How do we quantify the dependency in a way that is model-agnostic, interpretable, and reliable against sample-size and dimensionality biases? And how can we efficiently discover the exact or α-approximate top-k dependencies? We address the first question by adopting information-theoretic notions. Specifically, we consider the mutual information score, for which we propose a reliable estimator that enables robust optimization in high-dimensional data. To address the second question, we systematically explore the algorithmic implications of using this measure for optimization. We show the problem is NP-hard, justifying worst-case exponential-time as well as heuristic search methods. We propose two bounding functions for the estimator, which we use as pruning criteria in branch-and-bound search to efficiently mine dependencies with approximation guarantees. Empirical evaluation shows that the derived estimator has desirable statistical properties, that the bounding functions lead to effective exact and greedy search algorithms, and that, when combined, the framework indeed discovers highly informative dependencies.
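The naive plug-in mutual information estimator, which the paper's reliable estimator corrects for sample-size and dimensionality biases, can be written in a few lines. A functional dependency X → Y makes I(X;Y) equal H(Y), which is why mutual information is a natural dependency score:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in (empirical) mutual information I(X;Y) in bits between
    two discrete sequences. This is the naive estimator the paper
    builds on, not its bias-corrected version."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        # log2( p(x,y) / (p(x) * p(y)) ), with counts cleared of 1/n.
        mi += p_xy * log2(c * n / (px[x] * py[y]))
    return mi

# Y is a function of X, so I(X;Y) equals the entropy H(Y):
xs = [0, 0, 1, 1, 2, 2]
ys = [x % 2 for x in xs]
score = mutual_information(xs, ys)
```

On small samples this plug-in score systematically overestimates dependency for high-cardinality X, which is precisely the reliability problem the abstract addresses.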


2018 ◽  
Author(s):  
Robert Klassen ◽  
Lisa Kim

This article reports three phases of development and administration of an online situational judgment test (SJT) designed to screen candidates for selection into an initial teacher education (ITE) program. Phase 1 describes the development of the test content. Phase 2 reports the online administration of a prototype SJT to 3341 applicants as part of the process used to select candidates for an intensive day-long assessment center. Phase 3 reports the administration of a revised version of the SJT to 587 participants. Results showed that the revised SJT was internally reliable and significantly related to other screening methods and to assessment center outcomes. High scorers on the SJT scored significantly higher at the assessment center. There were no significant SES differences on the SJT, but females scored significantly higher than males. The factor structure of the test was unidimensional and did not cleanly reflect the six target attributes that underpinned the development of the test content. We suggest that the teacher selection SJT could be a reliable, valid, and efficient screening tool for entrance into ITE.
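Internal reliability of the kind reported here is commonly quantified with Cronbach's alpha; the abstract does not name the statistic used, so treating it as alpha is an assumption. A minimal sketch:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly consistent items (every respondent ranks the same way)
# give alpha = 1; made-up scores, for illustration only:
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```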


2018 ◽  
Vol 2018 ◽  
pp. 1-18 ◽  
Author(s):  
Gesu Li ◽  
Zhipeng Cai ◽  
Guisheng Yin ◽  
Zaobo He ◽  
Madhuri Siddula

Recommender systems are mainly used on e-commerce platforms. With the development of the Internet, social networks and e-commerce networks have broken each other's boundaries, and users also post information about their favorite movies or books on social networks. As privacy awareness grows, however, many users limit the personal information they release publicly. For settings where item ratings are absent but some user information is known, we propose a novel recommendation method that produces a list of recommendations for target attributes based on community detection together with known user attributes and links. Because the recommendation list and published user information may be exploited by an attacker to infer other sensitive information and so threaten users' privacy, we propose CDAI (Infer Attributes based on Community Detection), a method that finds a balance between utility and privacy and provides users with safer recommendations.
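A minimal sketch of community-based attribute inference, assuming a majority-vote rule over community peers with known values; the data layout and voting rule are illustrative assumptions, not the CDAI algorithm itself:

```python
from collections import Counter

def infer_attribute(user, communities, known):
    """Guess a user's target attribute by majority vote among members
    of the same community whose values are known. `communities` maps
    user -> community id; `known` maps user -> attribute value."""
    cid = communities[user]
    peers = [known[u] for u, c in communities.items()
             if c == cid and u != user and u in known]
    if not peers:
        return None   # nothing to vote with
    return Counter(peers).most_common(1)[0][0]

# Hypothetical users and favorite-genre attribute, for illustration:
communities = {"a": 1, "b": 1, "c": 1, "d": 2}
known = {"b": "sci-fi", "c": "sci-fi", "d": "romance"}
guess = infer_attribute("a", communities, known)
```

The same mechanism is what an attacker could exploit, which motivates the abstract's utility/privacy trade-off: the more predictive the communities, the more sensitive attributes leak.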


2018 ◽  
Vol 24 (4) ◽  
pp. 492-499 ◽  
Author(s):  
Won Seok Lee ◽  
Joon-Kyu Lee ◽  
Joonho Moon

The main purpose of this study is to investigate the attributes of capsule hotels preferred by individuals. To this end, a choice experiment (CE) was adopted; a CE is a systematic method for determining individual preferences with regard to goods and services. A well-known advantage of CEs is their ability to capture a pecuniary value for target attributes in the form of marginal willingness to pay (MWTP). By comparing the sizes of MWTPs, the order of preference among attributes can be recognized. Amazon Mechanical Turk was used to collect the study data. We examined the magnitudes of preference for “additional services provided,” “accessibility,” and “price.” The findings indicate that price is negatively associated with capsule hotel choice, whereas accessibility and service are positively associated with it.
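MWTP in a choice experiment is typically computed as an attribute's coefficient divided by the negative of the price coefficient from a conditional-logit fit. The coefficient values below are made up for illustration, not the study's estimates:

```python
# Hypothetical conditional-logit coefficients (not the study's):
beta = {"price": -0.04, "accessibility": 0.60, "service": 0.35}

def mwtp(attribute, coefs):
    """Marginal willingness to pay for one unit of `attribute`."""
    return coefs[attribute] / -coefs["price"]

mwtp_access = mwtp("accessibility", beta)   # 0.60 / 0.04 = 15.0
mwtp_service = mwtp("service", beta)        # 0.35 / 0.04 = 8.75
```

Comparing the two MWTPs gives the preference ordering the abstract describes: the larger the MWTP, the more the attribute is valued relative to price.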


2016 ◽  
Vol 2016 ◽  
pp. 1-18
Author(s):  
Fan Deng ◽  
Li-Yong Zhang ◽  
Bo-Yu Zhou ◽  
Jia-Wei Zhang ◽  
Hong-Yang Cao

If there are many redundancies in the policies loaded on the policy decision point (PDP) in an authorization access control model, the system occupies more resources in operation and consumes considerable evaluation time and storage space. To detect and eliminate policy redundancies and thus improve the evaluation performance of the PDP, this paper proposes an engine for detecting and eliminating redundancy related to combining algorithms. The engine can not only detect and eliminate redundancy related to combining algorithms but also evaluate access requests. A Resource Brick Wall is constructed by the engine according to the resource attribute of a policy’s target attributes. Based on the Resource Brick Wall and the policy/rule combining algorithms, three theorems for detecting redundancies related to combining algorithms are proposed. The evaluation performance of the proposed engine is compared with that of Sun PDP. Experimental results show that the evaluation performance of the PDP can be markedly improved by eliminating redundancy related to combining algorithms.
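One simple redundancy pattern of the kind such an engine targets can be sketched as follows, assuming a first-applicable combining algorithm and a (resource, effect) rule format; the paper's three theorems cover more cases and combining algorithms than this single check:

```python
def redundant_rules(rules, combining="first-applicable"):
    """Toy detection of one redundancy pattern: under the
    first-applicable combining algorithm, any rule whose resource
    target duplicates an earlier rule's target can never fire, so it
    can be eliminated without changing evaluation results."""
    seen = set()
    redundant = []
    for i, (resource, effect) in enumerate(rules):
        if combining == "first-applicable" and resource in seen:
            redundant.append(i)
        seen.add(resource)
    return redundant

# Hypothetical policy, for illustration only:
rules = [("/report", "Permit"),
         ("/admin", "Deny"),
         ("/report", "Deny")]   # shadowed by rule 0, never reached
dead = redundant_rules(rules)
```

Removing the shadowed rules before loading the policy set is what shrinks the PDP's evaluation time, since fewer targets must be matched per request.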

