Dynamic Databases
Recently Published Documents


TOTAL DOCUMENTS: 68 (five years: 11)
H-INDEX: 10 (five years: 1)

Author(s):  
Meng Han ◽  
Ni Zhang ◽  
Le Wang ◽  
Xiaojuan Li ◽  
Haodong Cheng

Author(s):  
Jimmy Ming-Tai Wu ◽  
Qian Teng ◽  
Shahab Tayeb ◽  
Jerry Chun-Wei Lin

Abstract: High average-utility itemset mining (HAUIM) was established to provide a fairer measure than generic high-utility itemset mining (HUIM) for revealing satisfying and interesting patterns. In practical applications, a database changes dynamically as insertion and deletion operations are performed on it. Several works were designed to handle the insertion process, but fewer studies have focused on the deletion process for knowledge maintenance. In this paper, we develop a PRE-HAUI-DEL algorithm that applies the pre-large concept to HAUIM for handling transaction deletion in dynamic databases. The pre-large concept serves as a buffer on HAUIM that reduces the number of database scans when the database is updated, particularly under transaction deletion. Two upper-bound values are also established to prune unpromising candidates early, which reduces the computational cost. Experimental results show that the designed PRE-HAUI-DEL algorithm performs well compared with the Apriori-like model in terms of runtime, memory, and scalability on dynamic databases.
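The measure underlying HAUIM is length-normalized: an itemset's average utility is its total utility across the transactions containing it, divided by the number of items in it. A brute-force sketch in Python (the toy database and threshold are illustrative, not from the paper; the actual PRE-HAUI-DEL algorithm avoids this exhaustive enumeration via its upper bounds):

```python
from itertools import combinations

# Hypothetical toy database: each transaction maps an item to its
# utility in that transaction (quantity x unit profit).
DB = [
    {"a": 4, "b": 2, "c": 6},
    {"a": 3, "c": 1, "d": 5},
    {"b": 2, "c": 3, "d": 2},
]

def average_utility(itemset, db):
    """HAUIM measure: total utility of `itemset` over the
    transactions containing it, divided by the itemset's length."""
    total = sum(
        sum(t[i] for i in itemset)
        for t in db
        if all(i in t for i in itemset)
    )
    return total / len(itemset)

def mine_hauis(db, min_aau):
    """Brute-force enumeration of all high average-utility itemsets."""
    items = sorted({i for t in db for i in t})
    return {
        cand: average_utility(cand, db)
        for k in range(1, len(items) + 1)
        for cand in combinations(items, k)
        if average_utility(cand, db) >= min_aau
    }
```

With this toy database and a threshold of 6, {a, c} qualifies (average utility 7.0) while {c, d} does not (5.5): the division by length keeps larger itemsets from qualifying merely by accumulating utility.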


Author(s):  
Komallapalli Kaushik ◽  
P. Krishna Reddy ◽  
Anirban Mondal ◽  
Akhil Ralla

Author(s):  
Mikhail Vasilevich Lyakhovets ◽  
Georgiy Valentinovich Makarov ◽  
Alexandr Sergeevich Salamatin

The article addresses the synthesis of combined full-scale/model realizations of data series from natural data, for modeling controllable and uncontrollable influences in the study of operating and projected control systems, as well as in computer-based training systems. It shows that model effects can be formed by jointly using multivariate dynamic databases and a natural-data simulator. The dynamic databases store information characterizing typical, representative system situations in the form of special generating functions. The multiple variability of the dynamic databases is determined by the type of generating function selected, the methods for obtaining its parameters (coefficients), and the chosen accuracy of approximation. The situation models recovered from the generating functions are used as base components (trends) when forming the resulting full-scale/model realizations and are fed into the natural-data simulator. For each variant of the initial natural data, the simulator forms a realization of the perturbation signal with given statistical properties on a given simulation interval, bounded by the initial natural realization. This is achieved with a two-circuit structure: the first circuit evaluates and corrects the initial properties of the natural signal, and the second iteratively corrects deviations of the final realization's properties from the specified ones. The resulting realizations retain the properties of their full-scale components, which are difficult to describe with analytical models, and are supplemented with model values that incrementally correct the properties toward the specified ones. This approach makes it possible to form a set of process-course variants from a single situation, with different prescribed degrees of uncertainty and operating conditions.
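The two-circuit correction can be sketched as follows: a base trend (recovered from a generating function) is perturbed with noise, and the deviations are then iteratively rescaled until the realization's statistics match the specified ones. A minimal sketch, assuming the specified properties are a mean and a standard deviation (the authors' actual simulator handles richer statistical properties; all names here are illustrative):

```python
import random
import statistics

def make_realization(trend, target_mean, target_std, iters=20, seed=1):
    """Superimpose a perturbation on the base (trend) series, then
    iteratively correct the realization's mean and standard deviation
    toward the specified targets."""
    rng = random.Random(seed)
    series = [t + rng.gauss(0.0, 1.0) for t in trend]
    for _ in range(iters):
        m = statistics.fmean(series)
        s = statistics.pstdev(series)
        scale = target_std / s if s else 1.0
        # second circuit: correct deviations of the achieved
        # statistics from the specified ones, keeping the shape
        series = [target_mean + (x - m) * scale for x in series]
    return series
```

The shape of the natural trend survives the correction, since every point is shifted and scaled by the same affine map; only the aggregate statistics are forced to the targets.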


2020 ◽  
Vol 111 ◽  
pp. 143-158
Author(s):  
Jongseong Kim ◽  
Unil Yun ◽  
Eunchul Yoon ◽  
Jerry Chun-Wei Lin ◽  
Philippe Fournier-Viger

2020 ◽  
Vol 50 (11) ◽  
pp. 3788-3807
Author(s):  
Jerry Chun-Wei Lin ◽  
Matin Pirouz ◽  
Youcef Djenouri ◽  
Chien-Fu Cheng ◽  
Usman Ahmed

Abstract: High-utility itemset mining (HUIM) is considered an emerging approach to detecting high-utility patterns in databases. Most existing HUIM algorithms consider only the itemset utility, regardless of length; this limitation inflates the utility as the itemset grows. High average-utility itemset mining (HAUIM) takes the size of the itemset into account, providing a more balanced average-utility measure for decision-making. Several algorithms have been presented to efficiently mine the set of high average-utility itemsets (HAUIs), but most of them handle only static databases. In the past, a fast-updated (FUP)-based algorithm was developed to handle the incremental problem efficiently, but it still has to re-scan the database when an itemset is small in the original database yet is a high average-utility upper-bound itemset (HAUUBI) in the newly inserted transactions. In this paper, an efficient framework called PRE-HAUIMI is developed for transaction insertion in dynamic databases; it relies on average-utility-list (AUL) structures and applies the pre-large concept to HAUIM. The pre-large concept speeds up mining by ensuring that, if the total utility of the newly inserted transactions is within a safety bound, itemsets that are small in the original database cannot become large after the update. This, in turn, avoids recurring database scans while still obtaining the correct HAUIs. Experiments demonstrate that PRE-HAUIMI outperforms the state-of-the-art batch-mode HAUI-Miner and the state-of-the-art incremental IHAUPM and FUP-based algorithms in terms of runtime, memory, number of assessed patterns, and scalability.
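The pre-large concept that both abstracts rely on keeps a buffer of "almost large" itemsets between two thresholds, so most updates can be processed without touching the original database. A minimal sketch of the bookkeeping, written here for fractional support thresholds (the HAUIM variants apply the same idea to average utilities; thresholds and names are illustrative, and the safety bound shown is the classic pre-large insertion bound of Hong et al., not the utility-based bound of PRE-HAUIMI):

```python
import math

def classify(value, lower, upper):
    """Pre-large three-way split: 'large' at or above the upper
    threshold, 'pre-large' between the two thresholds, else 'small'."""
    if value >= upper:
        return "large"
    if value >= lower:
        return "pre-large"
    return "small"

def needs_rescan(old_class, new_class):
    """Only an itemset that was small in the original database but is
    large in the new transactions can force a database rescan; the
    pre-large buffer absorbs every other transition."""
    return old_class == "small" and new_class == "large"

def safety_bound(db_size, lower, upper):
    """Number of inserted transactions that can be absorbed before a
    rescan may be needed, for fractional support thresholds."""
    return math.floor((upper - lower) * db_size / upper)
```

For example, with a lower threshold of 0.3, an upper threshold of 0.5, and 100 original transactions, up to 40 insertions can be absorbed before a rescan might be required.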


2020 ◽  
Vol 103 ◽  
pp. 58-78 ◽  
Author(s):  
Unil Yun ◽  
Hyoju Nam ◽  
Jongseong Kim ◽  
Heonho Kim ◽  
Yoonji Baek ◽  
...  

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 140122-140144 ◽  
Author(s):  
Xiang Li ◽  
Jiaxuan Li ◽  
Philippe Fournier-Viger ◽  
M. Saqib Nawaz ◽  
Jie Yao ◽  
...  

The multi-factor organizational and economic model developed at the Research and Design Center "City Development" is based on a detailed analysis of renovation processes (new construction, resettlement, demolition) at the scale of the city as a whole and of individual quarters; on specially formed dynamic databases; on a synthesis of the factors affecting the organization of renovation; and on a comparison of costs and results for economic estimates. The model is designed to solve key problems of construction optimization and planning: determining the total duration of building renovation in Moscow under specified parameters; calculating city-wide planned input-demolition-resettlement indicators by year; determining the order in which quarters enter the renovation process according to various criteria, taking into account the availability of launch sites; determining the structural characteristics of renovation for each quarter (the volume of new construction for resettlement, for the purchase of additional housing by migrants, for migrants from neighboring quarters, and for sale on the real-estate market); and calculating the economic and financial indicators of the investment renovation project. For each problem, special algorithms have been developed that transform planning from a set of random solutions into a clear calculation within the organizational and economic model.
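One of the subtasks listed above, ordering quarters subject to launch-site availability, can be illustrated with a greedy sketch. This is entirely hypothetical: the data structures, the priority criterion, and the assumption that a finished quarter frees launch sites for its neighbours are illustrative stand-ins, not the center's actual algorithms:

```python
def order_quarters(quarters, neighbours):
    """quarters: name -> (priority, launch_site_ready);
    neighbours: name -> list of adjacent quarter names.
    Greedily pick the highest-priority quarter whose launch site is
    available; finishing it frees sites for its neighbours."""
    order, done = [], set()
    ready = {q for q, (_, site) in quarters.items() if site}
    while len(done) < len(quarters):
        available = [q for q in ready if q not in done]
        if not available:
            break  # remaining quarters lack launch sites
        best = max(available, key=lambda q: quarters[q][0])
        order.append(best)
        done.add(best)
        # new housing in the finished quarter serves as a launch
        # site for its neighbours
        ready.update(neighbours.get(best, []))
    return order
```

Note how a high-priority quarter without a launch site (B below) must wait until an adjacent quarter is completed, even though it would otherwise be chosen first.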

