frequent items
Recently Published Documents

Total documents: 189 (five years: 29)
H-index: 18 (five years: 2)

2021, Vol 9 (2), pp. 208-215
Author(s): Luky Fabrianto, Novianti Madhona Faizah, Johan Hendri Prasetyo, Bobby Suryo Prakoso, Gani Wiharso

A popular data mining method for finding relationships between items is association rule mining with the Apriori algorithm; it is well suited to generating rules that relate the types of items sold, based on sales data. The support values of frequent items and the confidence of the resulting rules can provide actionable insights that minimarket managers, cooperatives, and similar retailers can follow up on. Minimarkets carry many product categories and record a very large number of transactions per year, yet each transaction contains only a few item types, so the threshold values cannot be set high. In this study, the association rule method was applied per event or period related to Muslim holidays; the strongest rule obtained was Makanan ringan => Sembako (snacks => basic groceries) with 46% confidence and 16% support, occurring in the month of Ramadan.
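As a rough illustration of the support and confidence measures this abstract relies on, the sketch below computes both quantities for a single rule over a small list of toy transactions; the item names and baskets are invented for illustration and are not the study's data.

```python
# Minimal sketch: support and confidence of one association rule.
def support(itemset, transactions):
    """Fraction of transactions that contain every item in `itemset`."""
    itemset = set(itemset)
    hits = sum(1 for t in transactions if itemset <= set(t))
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """support(antecedent ∪ consequent) / support(antecedent)."""
    return (support(set(antecedent) | set(consequent), transactions)
            / support(antecedent, transactions))

# Toy minimarket baskets (hypothetical, not the study's data).
transactions = [
    {"snacks", "staples"},
    {"snacks", "staples", "drinks"},
    {"snacks", "drinks"},
    {"staples"},
    {"snacks", "staples"},
]

print(support({"snacks", "staples"}, transactions))       # joint support = 0.6
print(confidence({"snacks"}, {"staples"}, transactions))   # rule confidence = 0.75
```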


2021
Author(s): Aopeng Xu, Tao Xu, Xiaqing Ma, Zixiang Zhang

Author(s): P. Naresh, R. Suguna

Recent statistics show drastic growth in the online business sector, with ever more customers intending to purchase items. As a result, retailers accumulate huge volumes of data from day-to-day operations and are keen to analyze it to observe customer behavior towards items, which strengthens business promotions and catalog management. Such analysis reveals customer interest and frequent items in large data. Known algorithms exist for this task that handle static and dynamic data, but some of them are slow, memory-consuming, and involve unnecessary processing. This paper implements an efficient incremental pre-ordered coded tree (IPOC) that is updated as data arrives and applies a frequent itemset generation algorithm on the tree. During incremental construction, new data items are linked to existing nodes in the tree by increasing their support counts. This removes the lag found in existing algorithms, avoids mining from scratch, and further reduces time and memory consumption through the use of a nodeset data structure. The results of the proposed method were observed and compared with existing methods; the proposed method shows improved results in terms of generated itemsets, time, and memory.
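The abstract's core idea of linking new data to existing tree nodes by increasing their support counts can be illustrated with a simplified prefix-tree sketch; this is not the authors' IPOC algorithm or their nodeset structure, only an FP-tree-style approximation of incremental insertion.

```python
# Simplified sketch of incremental insertion into a prefix tree:
# each new transaction reuses existing nodes where possible and
# increments their counts, so the tree is never rebuilt from scratch.
class Node:
    def __init__(self, item):
        self.item = item
        self.count = 0
        self.children = {}            # item -> Node

class IncrementalTree:
    def __init__(self):
        self.root = Node(None)

    def insert(self, transaction):
        """Insert one transaction; items are sorted for a canonical path."""
        node = self.root
        for item in sorted(transaction):
            child = node.children.get(item)
            if child is None:         # create a new branch only where needed
                child = Node(item)
                node.children[item] = child
            child.count += 1          # existing nodes simply gain support
            node = child

tree = IncrementalTree()
for batch in ([{"a", "b"}, {"a", "c"}], [{"a", "b", "c"}]):   # data arriving in increments
    for t in batch:
        tree.insert(t)
print(tree.root.children["a"].count)   # support of item 'a' so far -> 3
```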


Author(s): Weiyi Liu, Kun Yue, Jianyu Li, Jie Li, Jin Li, ...

2021, Vol 12
Author(s): Chunhua Peng, Caizhen Yue, Andrew Avitt, Youguo Chen

The Zimbardo Time Perspective Inventory (ZTPI) is one of the most well-known and widely used measures of time perspective. Various short versions have been proposed to resolve the psychometric problems of the ZTPI. The present study conducted a systematic review to obtain 25 short versions, calculated how frequently each ZTPI item appears in these short versions, and hypothesized that the more frequently an item is retained, the more robust it is. The hypothesis was tested by assessing the structural validity and internal consistency of short forms built from high-, medium-, and low-frequency items in Chinese samples (575 children, 407 undergraduates, and 411 older adults). Structural validity and internal consistency analyses showed that the form with more frequent items had better psychometric properties; item frequencies were positively correlated with factor loadings. The results suggest that a systematic review is an effective approach for identifying the robust items of the ZTPI. The approach is general and can serve as a basis for improving the psychometric properties of scales in social science.
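Two of the reported checks, internal consistency and the correlation between item frequency and factor loadings, can be sketched generically as follows; the response matrix, item frequencies, and loadings below are random placeholders, not the study's Chinese samples or its fitted model.

```python
# Generic sketch: Cronbach's alpha for a candidate short form, and the
# correlation between item frequency (across short versions) and loadings.
import numpy as np

def cronbach_alpha(responses):
    """responses: (n_respondents, n_items) array of item scores."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    item_var_sum = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
short_form = rng.integers(1, 6, size=(400, 15))     # hypothetical 15-item form, 1-5 scale
print(cronbach_alpha(short_form))

item_frequency = rng.integers(1, 26, size=15)        # times each item appears in 25 short versions
factor_loading = rng.uniform(0.3, 0.9, size=15)      # hypothetical loadings
print(np.corrcoef(item_frequency, factor_loading)[0, 1])
```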


2021
Author(s): Hao Wang, Huan Wang

Abstract Differential privacy has made significant progress in protecting numerical data. Non-numerical data (e.g., entity objects) are also widely used in intelligent processing tasks, and they may reveal even more of a user's privacy. Recently, researchers have attempted to use the exponential mechanism of differential privacy to address this challenge. Nonetheless, the exponential mechanism has a drawback when protecting correlated data: it cannot achieve the expected degree of privacy. To remedy this issue, this paper proposes an effective release mechanism for correlated non-numerical data by defining the notion of Correlation-Indistinguishability and designing a correlated exponential mechanism that realizes Correlation-Indistinguishability in practice. Inspired by the concept of indistinguishability, Correlation-Indistinguishability guarantees that, to an adversary, the correlations in the probability distribution of the output match those of the original data. In addition, rather than using independent exponential variables, we pass two Gaussian white-noise samples through a designed filter to realize the definition of Correlation-Indistinguishability. Experimental evaluation demonstrates that our mechanism outperforms current schemes in terms of security and utility for frequent item mining.
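For context, the baseline this work builds on is the standard exponential mechanism, which selects an output with probability proportional to exp(ε·u/(2Δu)). The sketch below shows only that independent baseline, not the proposed correlated variant or its Gaussian filter construction; the item counts are toy values.

```python
# Standard (independent) exponential mechanism for selecting one item
# under differential privacy, with utility = item count (sensitivity 1).
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity):
    """Pick one candidate with probability proportional to
    exp(epsilon * utility / (2 * sensitivity))."""
    weights = [math.exp(epsilon * utility[c] / (2 * sensitivity)) for c in candidates]
    total = sum(weights)
    return random.choices(candidates, weights=[w / total for w in weights])[0]

# Toy utilities: counts of each item in a dataset (hypothetical).
counts = {"bread": 120, "milk": 95, "eggs": 40}
print(exponential_mechanism(list(counts), counts, epsilon=1.0, sensitivity=1.0))
```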


2021, Vol 9 (2), pp. 885-893
Author(s): Divvela Srinivasa Rao, et al.

Data analytics plays an important role in decision making. The insights obtained from pattern analysis provide many benefits, such as cost cutting, better revenue, and competitive advantage. At the same time, hidden frequent itemset patterns take longer to extract as data grows over time, and mining them demands considerable memory because of the heavy computation involved. Therefore, an efficient algorithm is needed that mines hidden frequent itemset patterns with low memory usage and short run time. This paper presents a review of different algorithms for finding frequent patterns so that a more efficient algorithm for finding frequent itemsets can be developed.


Rheumatology, 2021, Vol 60 (Supplement_1)
Author(s): Lucy M Carter, Caroline Gordon, Chee-Seng Yee, Ian Bruce, David A Isenberg, ...

Abstract Background/Aims: The BILAG-2004 index is required to prescribe and monitor biologics in SLE. It is more comprehensive and responsive than the SLEDAI and is widely used in clinical trials. However, it can be time-consuming and requires training for accurate use. The original format requires a separate index form, glossary, and scoring algorithm. Further, the eventual scores from A (highly active) to E (no disease involvement), which are required to make treatment decisions, can be difficult to calculate during routine clinical practice. The Easy-BILAG project aimed to develop and validate a simplified tool to score the original BILAG-2004 index more rapidly and with fewer errors, for use in routine clinical care.

Methods: The BILAG group identified four areas to address: (i) many items must be scored, but most are rare; (ii) glossary definitions are not always followed; (iii) the final score is not easily calculated at the time of assessment; (iv) training is time-consuming. Data from the BILAG-Biologics Registry (BILAG-BR) were used to measure the frequency of each of the 97 BILAG-2004 items in an active SLE population. These data and a series of prototypes were used to design a new tool for simplified scoring of the BILAG-2004 index, the "Easy-BILAG". The instrument content was tested using exemplar paper cases. A validation study was then designed to compare the Easy-BILAG with the standard BILAG-2004 scoring method for completion time and accuracy.

Results: 2395 assessments from the BILAG-BR were analysed. There was marked variation in item frequency. The 7 most frequent items were each present in more than 20% of records: arthralgia (72%), mild skin eruption (47%), moderate arthritis (38%), mild mucosal ulceration (34%), mild alopecia (34%), pleurisy/pericarditis (22%). A further 16 items were scored in 5-20% of assessments, 36 items in 1-5%, and 25 items in <1%. The Easy-BILAG was designed to capture items occurring in more than 5% of assessments on a rapid single-page form. Items are arranged in a logical sequence of clinical assessment, and an abridged glossary definition is cited immediately adjacent to each item. A new colour-coding system (compatible with colour-blindness) directs clinicians instantly to the overall A-E score for each domain. This single page covered 68% of all assessments of biologic-treated patients. The remaining items are scored on a back page only where necessary, as indicated by screening questions on the main page. The overall accuracy and usability of the Easy-BILAG template is now undergoing validation against a test series of standardized case vignettes by a sample of consultants and specialty trainees with a range of experience across England and Wales.

Conclusion: Easy-BILAG allows rapid scoring of the BILAG-2004 in routine clinical practice. Following completion of validation, it will be made widely available to clinicians.

Disclosure: L.M. Carter: None. C. Gordon: None. C. Yee: None. I. Bruce: None. D.A. Isenberg: None. S. Skeoch: None. E.M. Vital: None.


2021, Vol 6 (1), pp. 514
Author(s): Maggie Baird

This paper presents a phonological learner that derives frequency effects, i.e. the propensity of more frequent items to undergo deletion and reduction processes at higher rates. The model is a bidirectional Maximum Entropy grammar with two distinct learning steps: one maps from UR to SR, and another maps back from SR to UR using Bayesian inference. The model is tested on the case of t/d deletion in English and correctly derives the frequency-based pattern of deletion without access to surface patterns.
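The two mappings described, a MaxEnt grammar from UR to SR and a Bayesian inversion from SR back to UR, can be sketched schematically as below; the constraint names, weights, candidate forms, and frequency prior are invented for illustration and are not the paper's trained model.

```python
# Schematic sketch: MaxEnt grammar P(SR | UR) ∝ exp(-Σ weighted violations),
# then Bayesian inversion P(UR | SR) ∝ P(SR | UR) * P(UR).
import math

weights = {"Max": 2.0, "*Ct": 1.0}          # hypothetical constraint weights

def p_sr_given_ur(candidates):
    """candidates: {SR: {constraint: violation count}} for one UR."""
    scores = {sr: math.exp(-sum(weights[c] * v for c, v in viols.items()))
              for sr, viols in candidates.items()}
    z = sum(scores.values())
    return {sr: s / z for sr, s in scores.items()}

# UR /west/ with a faithful candidate and a t-deleted one.
candidates = {"west": {"Max": 0, "*Ct": 1}, "wes": {"Max": 1, "*Ct": 0}}
forward = p_sr_given_ur(candidates)

# Bayesian inversion for the surface form [wes]: which UR produced it?
p_ur = {"west": 0.8, "wes": 0.2}             # frequency-based prior (hypothetical)
lik = {"west": forward["wes"], "wes": 1.0}   # P([wes] | UR); /wes/ assumed fully faithful
post = {ur: lik[ur] * p_ur[ur] for ur in p_ur}
z = sum(post.values())
print({ur: p / z for ur, p in post.items()})
```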

