data centres
Recently Published Documents


TOTAL DOCUMENTS: 576 (five years: 219)
H-INDEX: 21 (five years: 5)

2022
Author(s): Arash Mahboubi, Keyvan Ansari, Seyit Camtepe, Jarek Duda, Paweł Morawiecki, ...

Unwanted data encryption, such as a ransomware attack, continues to be a significant cybersecurity threat. Ransomware is a preferred weapon of cybercriminals who target small to large organizations' computer systems and data centres. It is malicious software that infects a victim's computer system and encrypts all its valuable data files. The victim needs to pay a ransom, often in cryptocurrency, in return for a decryption key. Many solutions detect ransomware code by inspecting file signatures, runtime process behaviours, API calls, and network traffic. However, unwanted data encryption is still a top threat. This paper presents the first immunity solution, called the digital immunity module (DIM). DIM focuses on protecting valuable business-related data files from unwanted encryption rather than on detecting malicious code or processes. We show that methods such as file entropy and fuzzy hashing can effectively sense unwanted encryption on a protected file, triggering our novel source coding method to paralyze malicious manipulation of data such as ransomware encryption. Specifically, maliciously encrypted data blocks consume exponentially larger space and longer writing time on the DIM-protected file system. As a result, DIM creates enough time for system/human intervention and forensic analysis. Unlike existing solutions, DIM protects the data regardless of ransomware family or variant. Additionally, DIM can defend against multiple simultaneously active ransomware, including the most recent fileless ones, which are hard to detect and stop. We tested our solution on 39 ransomware families, including the most recent ransomware attacks. DIM successfully defended our sample file dataset (1335 PDF, JPG, and TIFF files) against those ransomware attacks with zero file loss.
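
The entropy-based sensing mentioned in the abstract can be illustrated with a short, self-contained sketch (not the authors' DIM code): the Shannon entropy of a write buffer approaches 8 bits per byte when the data is encrypted, so a guard can flag writes whose entropy crosses a threshold. The block size and threshold below are illustrative assumptions.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 for empty input, max 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# Illustrative threshold: encrypted (or well-compressed) blocks sit near
# 8 bits/byte, while typical documents usually stay noticeably lower.
ENTROPY_THRESHOLD = 7.5  # assumption; would be tuned per file type in practice

def looks_encrypted(block: bytes, threshold: float = ENTROPY_THRESHOLD) -> bool:
    """Heuristic sensor: flag a write block whose entropy suggests encryption."""
    return shannon_entropy(block) >= threshold

if __name__ == "__main__":
    import os
    plaintext = b"The quick brown fox jumps over the lazy dog. " * 100
    random_like = os.urandom(4096)  # stands in for ciphertext
    print(looks_encrypted(plaintext))    # False: natural text, low entropy
    print(looks_encrypted(random_like))  # True: near-uniform bytes, high entropy
```

In DIM, such a trigger would activate the source-coding response described in the abstract; this sketch only demonstrates the sensing step.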


The investigation of the characteristics of access to and use of resources in different distributed environments in the network space aims to determine optimal levels for the basic parameters of the supported processes. On the other hand, with the development of the possibilities of the digital space and the significant change in the level of informatization of society, measures must be taken to ensure secure access to information resources, and in particular to profiles of personal data. In this respect, the purpose of the article is to propose an organization of a heterogeneous environment with resources stored in different places (own memories and cloud data centres). A general architecture and the functionality of the main sub-systems are presented. A deterministic model investigation using the Petri net apparatus, based on a preliminary formalization, is provided to analyse the effectiveness of the processes for regulated and secure access to resources.
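
The kind of deterministic Petri-net analysis described here can be sketched in a few lines. The toy net below models regulated access to a resource; the place and transition names are assumptions for illustration, not the paper's model. A transition fires only when every input place holds a token, which is what makes the access flow deterministic and checkable.

```python
# Minimal place/transition Petri net (hypothetical names, not the paper's model).
petri_net = {
    # transition: (input places, output places)
    "request_access": (["user_ready", "resource_free"], ["access_pending"]),
    "authorize":      (["access_pending", "policy_ok"], ["access_granted"]),
    "release":        (["access_granted"],              ["user_ready", "resource_free"]),
}

marking = {"user_ready": 1, "resource_free": 1, "policy_ok": 1,
           "access_pending": 0, "access_granted": 0}

def enabled(transition: str) -> bool:
    """A transition is enabled when every input place holds at least one token."""
    inputs, _ = petri_net[transition]
    return all(marking[p] >= 1 for p in inputs)

def fire(transition: str) -> None:
    """Consume one token from each input place, produce one in each output place."""
    assert enabled(transition), f"{transition} is not enabled"
    inputs, outputs = petri_net[transition]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

for t in ["request_access", "authorize", "release"]:
    fire(t)
    print(t, "->", {p: n for p, n in marking.items() if n})
```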


Author(s): Shesagiri Taminana, Lalitha Bhaskari, Arwa Mashat, Dragan Pamučar, ...

With the present-day increasing demand for higher performance, application developers have started considering cloud computing and cloud-based data centres as one of the prime options for hosting applications. A number of parallel research outcomes have shown that, for a data centre to be secure, its infrastructure must go through an auditing process. During the auditing process, auditors can access VMs, applications, and data deployed on the virtual machines. The downside is that the data in the VMs can be highly sensitive, and granting permissions based on audit requests is highly complex and can increase the total time taken to complete the tasks. Hence, selective and adaptive auditing is the need of current research. However, existing outcomes are criticised for high time complexity and low accuracy. Thus, this work proposes a predictive method that analyses the characteristics of the VM applications and the characteristics of the auditors, and finally grants access to the virtual machine by building a predictive regression model. The proposed algorithm demonstrates 50% lower time complexity than other parallel research, making the cloud-based application development industry a safer and faster place.
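
As a rough illustration of the predictive approach, here is a minimal sketch of a regression model that grants or denies an auditor's access request. The feature set, training data, and use of scikit-learn's logistic regression are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per audit request:
# [data_sensitivity (0-1), auditor_trust_score (0-1), scope_breadth (0-1)]
X_train = np.array([
    [0.9, 0.2, 0.8],   # sensitive data, low-trust auditor, broad scope
    [0.1, 0.9, 0.2],   # public data, trusted auditor, narrow scope
    [0.8, 0.9, 0.3],
    [0.7, 0.3, 0.9],
    [0.2, 0.8, 0.4],
    [0.6, 0.7, 0.2],
])
y_train = np.array([0, 1, 1, 0, 1, 1])  # 1 = grant access, 0 = deny

model = LogisticRegression().fit(X_train, y_train)

def grant_audit_access(sensitivity: float, trust: float, scope: float) -> bool:
    """Predict whether an audit request can be granted without manual review."""
    return bool(model.predict([[sensitivity, trust, scope]])[0])

print(grant_audit_access(0.3, 0.9, 0.2))  # likely True: low-risk request
print(grant_audit_access(0.9, 0.1, 0.9))  # likely False: high-risk request
```

With real audit logs in place of the toy data, the same pattern would pre-screen requests and pass only ambiguous ones to a human, which is where the claimed time savings would come from.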


Sensors, 2021, Vol. 21 (24), p. 8212
Author(s): Andrei-Alin Corodescu, Nikolay Nikolov, Akif Quddus Khan, Ahmet Soylu, Mihhail Matskin, ...

The emergence of the edge computing paradigm has shifted data processing from centralised infrastructures to heterogeneous and geographically distributed infrastructures. Therefore, data processing solutions must consider data locality to reduce the performance penalties from data transfers among remote data centres. Existing big data processing solutions provide limited support for handling data locality and are inefficient in processing the small and frequent events specific to edge environments. This article proposes a novel architecture and a proof-of-concept implementation for software container-centric big data workflow orchestration that puts data locality at the forefront. The proposed solution considers the available data locality information, leverages long-lived containers to execute workflow steps, and handles the interaction with different data sources through containers. We compare the proposed solution with Argo Workflows and demonstrate a significant improvement in execution speed for processing the same data units. Finally, we carry out experiments with the proposed solution under different configurations and analyse individual aspects affecting the performance of the overall solution.
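
A minimal sketch of the data-locality idea, assuming a simple catalogue of which site stores which dataset (the site names, datasets, and sizes are illustrative, not the article's implementation): each workflow step is dispatched to the site that already holds the largest share of its input data, so cross-site transfers are minimised.

```python
# Hypothetical locality-aware placement: choose the execution site that
# already stores the largest share of a step's input data.

def choose_site(step_inputs, site_catalog, sites):
    """
    step_inputs: list of dataset names a workflow step reads.
    site_catalog: dict mapping dataset name -> {site: bytes stored locally}.
    sites: all candidate execution sites (edge nodes, cloud data centres).
    """
    local_bytes = {site: 0 for site in sites}
    for dataset in step_inputs:
        for site, size in site_catalog.get(dataset, {}).items():
            local_bytes[site] += size
    # Prefer the site with the most local input data; ties and missing
    # locality info fall back to the first site listed.
    return max(sites, key=lambda s: local_bytes[s])

sites = ["edge-A", "edge-B", "cloud-1"]
site_catalog = {
    "sensor-batch-42": {"edge-A": 512_000_000},
    "model-weights":   {"cloud-1": 2_000_000_000, "edge-A": 2_000_000_000},
}
print(choose_site(["sensor-batch-42", "model-weights"], site_catalog, sites))
# -> edge-A: it holds both inputs locally, so no cross-site transfer is needed
```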


2021, Vol. 22 (4), pp. 463-468
Author(s): Adrian Spataru

This article surveys the literature in search of systems and components that use Blockchain or Smart Contracts to manage computational resources, store data, and execute services following the Cloud paradigm. This paradigm has extended from warehouse-scale data centres to the edge of the network and in between, giving rise to the domains of Edge and Fog Computing. The Cloud Continuum encompasses these three fields and focuses on the management of applications composed of connected services that span from one end of the computational spectrum to the other. Several components that are commanded by Smart Contracts are identified and compared with respect to their functionality. Two important research directions emerge: the experimental evaluation of the identified platforms, and the identification of standards that can accelerate the adoption of Blockchain-based Fog platforms.


2021
Author(s): Salam Ismaeel

Increasing power efficiency is one of the most important operational factors for any data centre provider. In this context, one of the most useful approaches is to reduce the number of utilized Physical Machines (PMs) through optimal distribution and re-allocation of Virtual Machines (VMs) without affecting the Quality of Service (QoS). Dynamic VM provisioning makes use of monitoring tools, historical data, prediction techniques, and placement algorithms to improve VM allocation and migration. Consequently, the energy efficiency of the data centre increases.

In this thesis, we propose an efficient real-time dynamic provisioning framework to reduce energy consumption in heterogeneous data centres. This framework consists of efficient workload preprocessing, systematic VM clustering, multivariate prediction, and an optimal Virtual Machine Placement (VMP) algorithm. Additionally, it takes into consideration VM and user behaviours along with the existing state of the PMs. The framework is organized as a pipeline of successive subsystems, which can be used separately or combined to improve the accuracy, efficiency, and speed of workload clustering, prediction, and provisioning.

The pre-processing and clustering subsystems use the current state and historical workload data to create efficient VM clusters. Efficient VM clustering entails lower resource consumption, faster computation, and improved accuracy. A modified multivariate Extreme Learning Machine (ELM)-based predictor is used to forecast the number of VMs in each cluster for the subsequent period. The prediction subsystem takes user behaviour into consideration to exclude unpredictable VM requests.

The placement subsystem is a multi-objective placement algorithm based on a novel Machine Condition Index (MCI). MCI represents a group of weighted components covering the data centre network, PMs, storage, power system, and facilities used in any data centre. In this study, it is used to measure the extent to which a PM is suitable for handling a new and/or consolidated VM in large-scale heterogeneous data centres. It is an efficient tool for comparing server energy consumption, used to augment the efficiency and manageability of data centre resources.

The proposed framework components are tested and evaluated separately with both synthetic and realistic data traces. Simulation results show that the proposed subsystems achieve efficient results compared to existing algorithms.
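
The abstract does not give the MCI formula, so the following is only a hedged sketch of what such a weighted index could look like: each PM receives a score from normalised condition components, and placement picks the highest-scoring PM that can still fit the VM. The component names, weights, and capacity check are assumptions for illustration.

```python
# Hypothetical sketch of a weighted Machine Condition Index (MCI); the actual
# components and weights in the thesis may differ.

MCI_WEIGHTS = {"network": 0.25, "compute": 0.35, "storage": 0.20, "power": 0.20}

def mci(pm):
    """Weighted sum of normalised (0-1) condition components for one PM."""
    return sum(MCI_WEIGHTS[k] * pm[k] for k in MCI_WEIGHTS)

def place_vm(vm_cpu, pms):
    """Pick the highest-MCI physical machine with enough spare CPU for the VM."""
    candidates = [pm for pm in pms if pm["free_cpu"] >= vm_cpu]
    return max(candidates, key=mci, default=None)

pms = [
    {"name": "pm-1", "network": 0.9, "compute": 0.7, "storage": 0.8, "power": 0.6, "free_cpu": 8},
    {"name": "pm-2", "network": 0.6, "compute": 0.9, "storage": 0.5, "power": 0.9, "free_cpu": 2},
]
best = place_vm(vm_cpu=4, pms=pms)
print(best["name"] if best else "no PM can host this VM")  # -> pm-1
```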


