data backup
Recently Published Documents


TOTAL DOCUMENTS: 128 (FIVE YEARS: 33)

H-INDEX: 9 (FIVE YEARS: 1)

Data Security, 2022, pp. 61-64
Author(s): Thomas H. Lenhard

2021, pp. 19-26
Author(s): Yana Chumburidze, Tatiana Omelchenko

Data loss resulting from threats or natural disasters can lead not only to huge financial losses but also to damage to a company's reputation. The most effective way to protect data from loss is backup. The purpose of this study is to select the most appropriate data backup method and to develop a software tool based on it. We discussed the main data backup methods: full backup, incremental backup, differential backup, reverse incremental backup, and synthetic backup. We identified the following criteria for determining the most appropriate backup method: backup speed, restore speed, backup repository, reliability, network workload, and redundancy. A comparative analysis based on these criteria revealed that the most appropriate data backup method is reverse incremental backup. A functional model, architecture, and interface of the software tool have been designed. The main purpose of the tool is to implement reverse incremental backup in order to prevent information loss. The goal is considered achieved when the backup data produced by a reverse incremental backup matches the current state of the system. We conducted a series of experiments showing that the backup copy corresponds to the current state of the system.
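To make the reverse incremental idea concrete, the Python sketch below keeps the most recent full state of the data in a "current" directory in the repository and, on each run, moves the previous versions of changed or deleted files into a per-run reverse delta. The directory layout, function names, and change detection by content hash are illustrative assumptions, not the tool described in the paper.

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return a SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def reverse_incremental_backup(source: Path, repo: Path, run_id: str) -> None:
    """Synchronize repo/current with `source`, saving overwritten or
    deleted files as a reverse delta under repo/deltas/<run_id>."""
    current = repo / "current"
    delta = repo / "deltas" / run_id
    current.mkdir(parents=True, exist_ok=True)
    delta.mkdir(parents=True, exist_ok=True)

    source_files = {p.relative_to(source) for p in source.rglob("*") if p.is_file()}
    current_files = {p.relative_to(current) for p in current.rglob("*") if p.is_file()}

    # Files that changed or were deleted: move the old version into the delta
    # so the previous state can still be reconstructed.
    for rel in current_files:
        src, cur = source / rel, current / rel
        if rel not in source_files or file_digest(src) != file_digest(cur):
            saved = delta / rel
            saved.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(cur), str(saved))

    # Copy new or changed files so "current" always mirrors the live system.
    for rel in source_files:
        src, cur = source / rel, current / rel
        if not cur.exists():
            cur.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, cur)
```

Restoring the latest state then amounts to copying the "current" directory, while an older state can be rebuilt by applying the reverse deltas from newest to oldest.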


2021, Vol 10 (5), pp. 2707-2715
Author(s): Prakai Nadee, Preecha Somwang

Data communication and computer networks have grown enormously in every aspect of business. Computer networks are used to offer instantaneous access to information in online libraries around the world. The popularity and importance of data communication have produced strong demand in all sectors for people with computer networking expertise. Companies need workers who can plan, use, and manage the security aspects of database systems. The security policy must cover data stored in a computer system as well as information transferred over a network. This paper aimed to define incremental data backup policies using Unison as the file-synchronization tool and load-balancing file synchronization management (LFSM) for traffic management. Under this policy, a full backup is performed only once, at the start, to obtain the initial copy of the data; subsequent backups transfer and restore only the data that has changed, so that only correct, up-to-date information is processed. As a result, the new synchronization technique was able to improve the performance of data backup and of the computer security system.
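A minimal Python sketch of the "full backup once, then changes only" policy is given below; it stands in for the Unison/LFSM setup used in the paper, and change detection by modification time and size is an assumption, not the authors' implementation.

```python
import shutil
from pathlib import Path

def incremental_sync(source: Path, backup: Path) -> list[Path]:
    """Copy a file only when it is new or its modification time/size differs,
    mimicking the 'full backup once, then changes only' policy."""
    copied = []
    if not backup.exists():
        # First run: the only full backup, producing the initial copy.
        shutil.copytree(source, backup)
        return [p for p in backup.rglob("*") if p.is_file()]

    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        dst = backup / rel
        src_stat = src.stat()
        if (not dst.exists()
                or dst.stat().st_mtime < src_stat.st_mtime
                or dst.stat().st_size != src_stat.st_size):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            copied.append(dst)
    return copied
```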


2021, Vol 18 (2), p. 216
Author(s): Kadek Surya Mahedy

The library information system of Universitas Pendidikan Ganesha stores its database on the information system server, and this database can be accessed online by the university's academic community. Technical factors, whether hardware or software, sometimes cause the database to be damaged or data to be lost, so one way to secure the data is to implement several data backup methods. This research aims to back up the library information system database and the library information system web files. The system was developed using the prototyping method with the following stages: 1. gathering requirements, 2. building the prototype (model), 3. evaluating the prototype, 4. coding the system, 5. testing the system, 6. evaluating the system, 7. implementing and using the system. The result of this research is a system that performs fully automatic backups of the database and mirror backups of the library information system web files.
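The two backup tasks described above can be sketched roughly as follows in Python. The sketch assumes a MySQL database dumped with mysqldump and local filesystem paths, neither of which is stated in the abstract; a real deployment would run these functions on a schedule (for example via cron).

```python
import subprocess
import shutil
from datetime import datetime
from pathlib import Path

def backup_database(db_name: str, user: str, password: str, out_dir: Path) -> Path:
    """Dump the whole database to a timestamped SQL file (full, automatic backup).
    Assumes a MySQL server reachable with mysqldump; the abstract does not name the DBMS."""
    out_dir.mkdir(parents=True, exist_ok=True)
    dump_file = out_dir / f"{db_name}-{datetime.now():%Y%m%d-%H%M%S}.sql"
    with dump_file.open("wb") as fh:
        subprocess.run(
            ["mysqldump", f"--user={user}", f"--password={password}", db_name],
            stdout=fh, check=True,
        )
    return dump_file

def mirror_web_files(web_root: Path, mirror_root: Path) -> None:
    """Mirror backup: replace the mirror with an exact copy of the current web files."""
    if mirror_root.exists():
        shutil.rmtree(mirror_root)
    shutil.copytree(web_root, mirror_root)
```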


2021, Vol 2 (2), pp. 1-16
Author(s): Ru Yang, Yuhui Deng, Yi Zhou, Ping Huang

Restoring data is the main purpose of data backup in storage systems. The fragmentation issue, caused by physically scattering logically continuous data across a variety of disk locations, has a negative impact on the restore performance of a deduplication system. Rewriting algorithms alleviate the fragmentation problem and improve the restore speed of a deduplication system. However, rewriting methods significantly sacrifice the deduplication ratio, leading to substantial wasted storage space. Furthermore, traditional backup approaches treat file metadata and chunk metadata the same way, which causes frequent on-disk metadata accesses. In this article, we start by analyzing the storage characteristics of backup metadata. An intriguing finding shows that with 10 million files, the file metadata takes up only approximately 340 MB. Motivated by this finding, we propose a Classified-Metadata based Restoring method (CMR) that classifies backup metadata into file metadata and chunk metadata. Because the file metadata takes up only a meager amount of space, CMR maintains all file metadata in memory, whereas chunk metadata are aggressively prefetched to memory in a greedy manner. A deduplication system with CMR in place exhibits three salient features: (i) it avoids the additional overhead of rewriting algorithms by reducing the number of disk reads in a restore process, (ii) it increases the restore throughput without sacrificing the deduplication ratio, and (iii) it thoroughly leverages the hardware resources to boost restore performance. To quantitatively evaluate the performance of CMR, we compare it against two state-of-the-art approaches, a history-aware rewriting method (HAR) and a context-based rewriting scheme (CAP). The experimental results show that compared to HAR and CAP, CMR reduces the restore time by 27.2% and 29.3%, respectively. Moreover, the deduplication ratio is improved by 1.91% and 4.36%, respectively.
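The classification idea can be illustrated with a small Python sketch: file metadata (path mapped to an ordered list of chunk fingerprints) is held entirely in memory, while chunk data is fetched a whole container at a time into an LRU cache, so consecutive chunks from the same container cost only one disk read. All names and data structures here are illustrative, not the authors' implementation.

```python
from collections import OrderedDict

class ClassifiedRestorer:
    def __init__(self, file_metadata, chunk_index, container_store, cache_size=64):
        # File metadata (path -> ordered chunk fingerprints) is small,
        # so it is kept entirely in memory.
        self.file_metadata = file_metadata
        # Chunk metadata maps a fingerprint to the container holding the chunk.
        self.chunk_index = chunk_index
        self.container_store = container_store   # container_id -> {fingerprint: bytes}
        self.container_cache = OrderedDict()     # LRU cache of prefetched containers
        self.cache_size = cache_size
        self.disk_reads = 0

    def _load_container(self, container_id):
        """Prefetch a whole container so later chunks from it are memory hits."""
        if container_id not in self.container_cache:
            self.container_cache[container_id] = self.container_store[container_id]
            self.disk_reads += 1
            if len(self.container_cache) > self.cache_size:
                self.container_cache.popitem(last=False)   # evict the oldest container
        else:
            self.container_cache.move_to_end(container_id)
        return self.container_cache[container_id]

    def restore_file(self, path):
        """Reassemble a file from its chunks using the in-memory file metadata."""
        data = bytearray()
        for fingerprint in self.file_metadata[path]:
            container_id = self.chunk_index[fingerprint]
            data += self._load_container(container_id)[fingerprint]
        return bytes(data)
```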


Author(s): Mohammad M. Alshammari, Ali A. Alwan, Azlin Nordin, Abedallah Zaid Abualkishik

Cloud computing has become a desirable choice for storing and sharing large amounts of data among several users. The two main concerns with cloud storage are data recovery and storage cost. This article discusses the issue of data recovery in case of a disaster in a multi-cloud environment. The research proposes a preventive approach for data backup and recovery that aims to minimize the number of replicas while ensuring high data reliability during disasters. The approach, named Preventive Disaster Recovery Plan with Minimum Replica (PDRPMR), reduces the number of replications in the cloud without compromising data reliability. PDRPMR takes preventive action by checking the availability of replicas and monitoring for denial-of-service attacks in order to maintain data reliability. Several experiments were conducted to evaluate the effectiveness of PDRPMR, and the results demonstrated that it uses only one-third to two-thirds of the storage space required by typical 3-replica replication strategies.
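The preventive check can be sketched in Python as a periodic task that counts the reachable replicas of each object and re-replicates only when the count drops below a minimum target, instead of always holding three copies. The minimum of two replicas, the availability probe, and the catalog structure below are hypothetical illustrations, not PDRPMR itself.

```python
import random

MIN_REPLICAS = 2   # hypothetical target, lower than the typical 3-replica scheme

def replica_available(cloud: str, object_id: str) -> bool:
    """Placeholder availability probe; a real system would call the cloud provider's API."""
    return random.random() > 0.05

def preventive_check(catalog: dict[str, list[str]], clouds: list[str]) -> None:
    """Ensure every object keeps at least MIN_REPLICAS reachable replicas."""
    for object_id, locations in catalog.items():
        live = [c for c in locations if replica_available(c, object_id)]
        missing = MIN_REPLICAS - len(live)
        if missing > 0:
            # Re-replicate to clouds that do not yet hold this object.
            candidates = [c for c in clouds if c not in live]
            for target in candidates[:missing]:
                print(f"re-replicating {object_id} to {target}")
                locations.append(target)
```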

