Promote Replica Management based on Data Mining Techniques

2018 ◽  
Vol 7 (4.19) ◽  
pp. 838
Author(s):  
Rafah M. Almuttairi ◽  
Mahdi S. Almhanna ◽  
Mohammed Q. Mohammed ◽  
Saif Q Muhamed

Data grid technology has evolved largely to share data among multiple geographically distributed stations across different sites, improving data access and increasing transmission speed. Performance and resource availability are taken into account: when a number of sites hold a copy of a file, there is considerable benefit in selecting the best set of replica sites to cooperate in a data transfer job. In this paper, a new selection strategy is proposed to reduce the total transfer time of the required files. The Pincer-Search algorithm is used to explore the common characteristics of sites and select uncongested replica sites.
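The following Python sketch illustrates the frequent-itemset idea behind such a selection strategy. It assumes hypothetical transfer logs recording which sites were observed uncongested during past transfers, and it shows only the bottom-up (Apriori-style) pass; the full Pincer-Search additionally maintains a maximum frequent candidate set pruned top-down, which is omitted here.

from itertools import combinations

def frequent_site_sets(transfer_logs, min_support):
    # transfer_logs: list of sets, each holding the sites observed uncongested
    # during one past transfer (hypothetical log format).
    # min_support: minimum fraction of logs in which a site set must appear.
    n = len(transfer_logs)
    items = sorted({s for log in transfer_logs for s in log})
    frequent = {}
    k = 1
    current = [frozenset([i]) for i in items]
    while current:
        counts = {c: sum(1 for log in transfer_logs if c <= log) for c in current}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(level)
        # Join step: build (k+1)-site candidates from the frequent k-site sets.
        keys = list(level)
        current = list({a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1})
        k += 1
    return frequent

logs = [{"siteA", "siteB"}, {"siteA", "siteB", "siteC"}, {"siteB", "siteC"}]
print(frequent_site_sets(logs, min_support=0.6))

Site sets that frequently appear uncongested together would then be the candidates for cooperative transfer.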

Author(s):  
S. K. Saravanan ◽  
G. N. K. Suresh Babu

Nowadays, most secure data transfer takes place over the internet, and the risk involved in such transfers grows at the same time. With the rise and rapid progress of e-commerce, the use of credit cards (CC) for online transactions has also increased dramatically. Using a credit card for a safe balance transfer has become a requirement of the times. Credit card fraud detection is therefore of major importance, as fraudsters multiply every day. The intention of this survey is to examine the issues associated with credit card fraud behavior using data mining methodologies. Data mining is a well-defined procedure that takes data as input and produces output in the form of models or patterns. This investigation is useful for any credit card provider choosing a suitable solution for their problem, and for researchers seeking a comprehensive assessment of the literature in this field.
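As a minimal illustration of the kind of data-mining pipeline such surveys cover, the sketch below trains a classifier on a handful of synthetic, labelled transactions; the feature fields, the model choice, and the use of scikit-learn are illustrative assumptions, not taken from any surveyed paper.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical transaction records: [amount, hour_of_day, merchant_risk_score]
X = [[120.0, 14, 0.1], [9500.0, 3, 0.8], [45.5, 19, 0.2],
     [7800.0, 2, 0.9], [60.0, 11, 0.1], [8200.0, 4, 0.7]]
y = [0, 1, 0, 1, 0, 1]  # 1 = fraudulent, 0 = legitimate (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print("predicted labels:", model.predict(X_test))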


Author(s):  
А.В. МИРОШНИЧЕНКО ◽  
И.А. ТАТАРЧУК ◽  
С.С. ШАВРИН ◽  
Э.Я. ФАЛЬКОВ

Digital communication standards are being introduced in civil aviation with practically no interaction with the international telecommunications standardization organizations. At the same time, digital communication is primarily intended to ensure the safety of aircraft flights. Each aircraft broadcasts its position report messages over a radio channel, thus providing situational awareness for the crews of other aircraft and for air traffic control staff. Since the number of passenger and cargo aircraft is growing, and the number of unmanned aircraft that must be integrated into the common airspace has recently multiplied, it is time to review the existing digital aviation communication standards and perform a comparative analysis of their parameters. This article presents a comparative analysis of the physical and link layers of the VDL Mode 4 and 1090ES standards and proposes criteria for evaluating the quality of data transfer using Automatic Dependent Surveillance-Broadcast (ADS-B) technology. The comparison is based on the results of modeling the standards under conditions of high airspace congestion.
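As a rough illustration of why congestion matters for 1090ES, the sketch below applies a textbook pure-ALOHA approximation to the probability that a single extended squitter is received without overlap; the message rate and duration are indicative defaults, and this is not the simulation model used in the article.

import math

def squitter_success_probability(n_aircraft, msgs_per_sec=2.0, msg_duration_s=120e-6):
    # Pure-ALOHA style estimate: a report survives if no other transmission
    # starts within one message duration on either side of it.
    offered_load = n_aircraft * msgs_per_sec * msg_duration_s
    return math.exp(-2.0 * offered_load)

for n in (100, 500, 1000, 2000):
    print(n, "aircraft ->", round(squitter_success_probability(n), 3))

Even this simplified model shows the success probability falling as the number of transmitters grows, which is the kind of congestion effect the comparison with the self-organizing, slotted access of VDL Mode 4 addresses.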


Stroke ◽  
2017 ◽  
Vol 48 (suppl_1) ◽  
Author(s):  
Tzu-Ching Wu ◽  
Navdeep Sangha ◽  
Feryal N Elorr ◽  
Edgar Olivas ◽  
Christy M Ankrom ◽  
...  

Background: The transfer process for patients with large vessel occlusions from a community hospital to an intra-arterial therapy (IAT)-capable center often involves multiple teams of physicians and administrative personnel, leading to delays in care. Objective: We compared time metrics for spoke drip-and-ship telemedicine (TM) patients transferred for IAT to comprehensive stroke centers (CSC) in two different health systems: Kaiser Permanente (KP), an integrated health care system of spokes within a 50-mile range using ambulances for transfer, vs UTHealth (UTH), where patients are transferred by helicopter from varying health systems up to 200 miles from the hub. Methods: We retrospectively identified patients in the KP and UTH networks transferred from TM spokes to the CSC (KP, 6 spokes; UTH, 17 spokes). From 9/15 to 4/16, a total of 79 TM patients (KP, 28 patients; UTH, 51 patients) were transferred to the respective hubs for evaluation of IAT. Baseline clinical data, transfer metrics, and IAT metrics were abstracted. Results: On average, it takes ~90 minutes for a TM patient to arrive at the CSC hub once accepted by the transfer center. Patients in the KP network arrive at the hub faster than UTH patients, but IAT metrics and outcomes are comparable. Over 50% of the patients did not undergo IAT on hub arrival, mostly due to lack of clot on CTA (20/45) or symptom improvement (9/45). Conclusion: In two large yet different TM networks, the transfer time from spoke to hub needs to be shortened. Areas for improvement include spoke arrival to transfer acceptance, and transfer acceptance to hub arrival. A prospective study is underway to develop best-practice time parameters for this complex process of identifying and transferring patients eligible for IAT.


Biometrics ◽  
2017 ◽  
pp. 1543-1561 ◽  
Author(s):  
Mrutyunjaya Panda ◽  
Aboul Ella Hassanien ◽  
Ajith Abraham

The evolutionary harmony search algorithm is used for its capability to search the solution space both locally and globally. Wavelet-based feature selection, in turn, provides localized frequency information about a function or signal, which makes it promising for efficient classification. Research in this direction indicates that a wavelet-based neural network may become trapped in a local minimum, whereas a fuzzy harmony-search-based algorithm effectively addresses that problem and is able to obtain a near-optimal solution. Here, a hybrid wavelet-based radial basis function (RBF) neural network (WRBF) and a feature-subset harmony-search-based fuzzy discernibility classifier (HSFD) are proposed as data mining techniques for image-segmentation-based classification. The authors use the Lena RGB image, a magnetic resonance (MR) image, and a computed tomography (CT) image for analysis. The simulation results show that the wavelet-based RBF neural network outperforms the harmony-search-based fuzzy discernibility classifier.
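The sketch below gives a generic picture of the wavelet-plus-RBF combination: one-level wavelet coefficients of small image patches are used as features for a tiny RBF network whose output weights are fit by least squares. The wavelet ('db1'), the kernel width, the synthetic patches, and the use of NumPy and PyWavelets are assumptions; this is not the authors' WRBF configuration.

import numpy as np
import pywt

def wavelet_features(patch, wavelet="db1"):
    # One-level 2D DWT; flatten approximation and detail sub-bands into one vector.
    cA, (cH, cV, cD) = pywt.dwt2(patch, wavelet)
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

def rbf_design_matrix(X, centers, gamma=0.1):
    # Gaussian RBF activations between every sample and every center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
patches = rng.random((40, 8, 8))                          # synthetic 8x8 "image patches"
labels = (patches.mean(axis=(1, 2)) > 0.5).astype(float)  # toy two-class target

X = np.stack([wavelet_features(p) for p in patches])
centers = X[rng.choice(len(X), size=6, replace=False)]    # pick 6 RBF centers
Phi = rbf_design_matrix(X, centers)
weights, *_ = np.linalg.lstsq(Phi, labels, rcond=None)    # fit the output layer
preds = (Phi @ weights > 0.5).astype(float)
print("training accuracy:", (preds == labels).mean())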


2019 ◽  
Vol 29 (1) ◽  
pp. 1441-1452 ◽  
Author(s):  
G.K. Shailaja ◽  
C.V. Guru Rao

Privacy-preserving data mining (PPDM) is an approach that has emerged to address privacy issues. The intention of PPDM is to develop data mining techniques without raising the risk of mishandling the data used to generate those models. Conventional work includes numerous techniques, most of which apply some form of transformation to the original data to guarantee privacy preservation. However, these schemes are quite complex and memory intensive, which limits their adoption. Hence, this paper develops a novel PPDM technique involving two phases, data sanitization and data restoration. Initially, association rules are extracted from the database before proceeding with the two phases. In both the sanitization and restoration processes, key extraction plays a major role; the key is selected optimally using the Opposition Intensity-based Cuckoo Search Algorithm, a modified form of the Cuckoo Search Algorithm. Four evaluation measures, namely hiding failure rate, information preservation rate, false rule generation, and degree of modification, are optimized by the adopted sanitization and restoration processes.
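To make the optimization step concrete, the following is a plain Cuckoo Search skeleton on a toy objective, showing the Levy-flight update and nest-abandonment steps that the optimized key selection builds on. The opposition-intensity modification and the real sanitization fitness (combining hiding failure, information preservation, false rules, and degree of modification) are not reproduced here.

import math
import numpy as np

def levy_step(size, beta=1.5, rng=None):
    # Mantegna's algorithm for Levy-distributed step lengths.
    rng = rng if rng is not None else np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(fitness, dim=8, n_nests=15, iters=200, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.random((n_nests, dim))
    scores = np.array([fitness(n) for n in nests])
    best = nests[scores.argmin()].copy()
    for _ in range(iters):
        # New candidate solutions by Levy flight around the current best nest.
        new = np.clip(nests + 0.01 * levy_step((n_nests, dim), rng=rng) * (nests - best), 0, 1)
        new_scores = np.array([fitness(n) for n in new])
        improved = new_scores < scores
        nests[improved], scores[improved] = new[improved], new_scores[improved]
        # Abandon a fraction pa of nests and rebuild them at random positions.
        abandon = rng.random(n_nests) < pa
        nests[abandon] = rng.random((int(abandon.sum()), dim))
        scores[abandon] = [fitness(n) for n in nests[abandon]]
        best = nests[scores.argmin()].copy()
    return best, scores.min()

# Toy objective standing in for the real sanitization fitness.
best_key, best_score = cuckoo_search(lambda x: ((x - 0.3) ** 2).sum())
print("best fitness found:", round(float(best_score), 6))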


Author(s):  
Antonio Congiusta ◽  
Domenico Talia ◽  
Paolo Trunfio

Knowledge discovery is a compute- and data-intensive process that finds patterns, trends, and models in large datasets. The Grid can be effectively exploited for deploying knowledge discovery applications because of the high performance it offers and its distributed infrastructure. For effective use of Grids in knowledge discovery, middleware is critical to support data management, data transfer, data mining, and knowledge representation. For this purpose, we designed the Knowledge Grid, a high-level environment providing Grid-based knowledge discovery tools and services. These services allow users to create and manage complex knowledge discovery applications, composed as workflows that integrate data sources and data mining tools provided as distributed Grid services. This chapter presents the Knowledge Grid architecture and describes how its components can be used to design and implement distributed knowledge discovery applications. It then describes how the Knowledge Grid services can be made accessible using the Open Grid Services Architecture (OGSA) model.
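A toy illustration of the workflow idea follows: a knowledge-discovery application is described as a small dependency graph of steps (data source, preprocessing, mining, presentation) and executed in topological order. The step names and the in-memory stand-in "services" are invented; the Knowledge Grid defines its own service interfaces and workflow formalism.

from graphlib import TopologicalSorter

# Workflow graph: each step lists the steps it depends on.
workflow = {
    "load_dataset": [],
    "clean_dataset": ["load_dataset"],
    "mine_rules": ["clean_dataset"],
    "render_model": ["mine_rules"],
}

# In-memory stand-ins for what would be distributed Grid services.
services = {
    "load_dataset": lambda _: [[1, 2, 3], [1, 3], [2, 3]],
    "clean_dataset": lambda data: [sorted(set(t)) for t in data],
    "mine_rules": lambda data: {"frequent_pairs": [(1, 3), (2, 3)]},
    "render_model": lambda model: "model: " + str(model),
}

results = {}
for step in TopologicalSorter(workflow).static_order():
    upstream = [results[d] for d in workflow[step]]
    results[step] = services[step](upstream[0] if upstream else None)
print(results["render_model"])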


Author(s):  
Shyue-Liang Wang ◽  
Ju-Wen Shen ◽  
Tzung-Pei Hong

Mining functional dependencies (FDs) from databases has been identified as an important database analysis technique and has received considerable research interest in recent years. However, most current data mining techniques for determining functional dependencies deal only with crisp databases. Although various forms of fuzzy functional dependencies (FFDs) have been proposed for fuzzy databases, they emphasize conceptual viewpoints and only a few mining algorithms have been given. In this research, we propose methods to validate and incrementally search for FFDs in similarity-based fuzzy relational databases. For a given pair of attributes, the validation of FFDs is based on fuzzy projection and fuzzy selection operations. In addition, FFDs are shown to be monotonic in the sense that r1 ⊆ r2 implies FDα(r1) ⊇ FDα(r2). An incremental search algorithm for FFDs based on this property is then presented. Experimental results showing the behavior of the search algorithm are discussed.
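A minimal check of one FFD in a similarity-based relation might look like the following sketch: whenever two tuples agree on the left-hand attribute to at least the similarity threshold, they must also agree on the right-hand attribute to at least that threshold. The numeric similarity measure, the threshold value, and the toy relation are illustrative assumptions, not the operators defined in the paper.

def similarity(a, b):
    # Toy similarity on numeric values scaled to [0, 1]; 1.0 means identical.
    return 1.0 - min(1.0, abs(a - b))

def ffd_holds(relation, x, y, alpha=0.8):
    # X -> Y holds at level alpha if tuples similar on X are also similar on Y.
    for i, t1 in enumerate(relation):
        for t2 in relation[i + 1:]:
            if similarity(t1[x], t2[x]) >= alpha and similarity(t1[y], t2[y]) < alpha:
                return False
    return True

r = [{"experience": 0.90, "salary": 0.85},
     {"experience": 0.88, "salary": 0.90},
     {"experience": 0.20, "salary": 0.30}]
print(ffd_holds(r, "experience", "salary", alpha=0.8))  # True for this toy relation

Removing tuples from the relation can only remove violating pairs, which is the monotonic behaviour the incremental search algorithm exploits.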


Author(s):  
Hai Wang ◽  
Shouhong Wang

Surveys are one of the common data acquisition methods for data mining (Brin, Rastogi & Shim, 2003). In data mining one can rarely find a survey data set that contains complete entries for every observation on all of the variables. Commonly, surveys and questionnaires are only partially completed by respondents. The possible reasons for incomplete data are numerous, including negligence, deliberate avoidance for privacy, ambiguity of the survey question, and aversion. The extent of the damage caused by missing data is unknown when it is virtually impossible to return the survey or questionnaire to the data source for completion, yet it is one of the most important pieces of knowledge for data mining to discover. In fact, missing data is an important and debatable issue in the knowledge engineering field (Tseng, Wang, & Lee, 2003).
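As a generic illustration of the two simplest responses to such incomplete records (not a method proposed by the authors), the sketch below contrasts complete-case deletion with mean imputation on an invented survey table, assuming the pandas library.

import pandas as pd

# Invented, partially completed survey responses.
survey = pd.DataFrame({
    "age":       [34, None, 29, 41, None],
    "income":    [52000, 48000, None, 61000, 39000],
    "satisfied": [1, 0, 1, None, 0],
})

complete_cases = survey.dropna()                          # discard partially answered rows
imputed = survey.fillna(survey.mean(numeric_only=True))   # keep them, fill gaps with column means

print(len(survey), "responses ->", len(complete_cases), "complete cases")
print(imputed)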


2012 ◽  
Vol 500 ◽  
pp. 598-602
Author(s):  
Jun Ma ◽  
Dong Dong Zhang

Since remote sensing data are massive and come from multiple sources, common data mining algorithms cannot effectively discover the knowledge people want to know. Spatial association rules, however, can address this inefficiency in remote sensing data mining. This paper gives an algorithm that computes the frequent itemsets through a method similar to calculating vector inner products, and the algorithm applies pruning throughout its execution, effectively reducing time and resource consumption.
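The inner-product idea can be sketched as follows: each item is represented by a 0/1 occurrence vector over the transactions, so the support count of an item pair is simply the dot product of the two vectors, and only pairs built from frequent single items are scored (the pruning step). The remote-sensing attribute names are invented.

import numpy as np
from itertools import combinations

# Invented remote-sensing "transactions" (attributes observed together per cell).
transactions = [{"high_ndvi", "wetland"}, {"high_ndvi", "urban"},
                {"high_ndvi", "wetland"}, {"wetland"}]
items = sorted({i for t in transactions for i in t})
occ = np.array([[1 if i in t else 0 for t in transactions] for i in items])

min_count = 2
frequent_items = [k for k in range(len(items)) if occ[k].sum() >= min_count]
for a, b in combinations(frequent_items, 2):   # pruning: pair only the frequent 1-itemsets
    pair_support = int(occ[a] @ occ[b])        # inner product = co-occurrence count
    if pair_support >= min_count:
        print(items[a], items[b], pair_support)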

