Machine Learning based Presaging Technique for Multi-user Utility Pattern Rooted Cloud Service Negotiation for Providing Efficient Service

Author(s):  
Bhavan Kumar ◽  
Ayngaran Krishnamurhty ◽  
R.M Mohan
Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 395
Author(s):  
Héctor D. Menéndez ◽  
David Clark ◽  
Earl T. Barr

Malware detection is in a coevolutionary arms race where the attackers and defenders are constantly seeking advantage. This arms race is asymmetric: detection is harder and more expensive than evasion. White hats must be conservative to avoid false positives when searching for malicious behaviour. Most of the time, black hats need only make incremental changes to evade detection. On occasion, white hats make a disruptive move and find a new technique that forces black hats to work harder. Examples include system calls, signatures and machine learning. We seek to redress this imbalance. We present a method, called Hothouse, that combines simulation and search to accelerate the white hat’s ability to counter the black hat’s incremental moves, thereby forcing black hats to perform disruptive moves more often. To realise Hothouse, we evolve EEE, an entropy-based polymorphic packer for Windows executables. Playing the role of a black hat, EEE uses evolutionary computation to disrupt the creation of malware signatures. We enter EEE into the detection arms race with VirusTotal, the most prominent cloud service for running anti-virus tools on software. During our 6-month study, we continually improved EEE in response to VirusTotal, eventually learning a packer that produces packed malware whose median detection rate falls from an initial 51.8% to 19.6%. We report both how well VirusTotal learns to detect EEE-packed binaries and how well VirusTotal forgets in order to reduce false positives. VirusTotal’s tools learn and forget fast, in about 3 days. We also show where VirusTotal focuses its detection efforts by analysing EEE’s variants.
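The abstract names two mechanisms, byte-level entropy measurement and evolutionary search over packer parameters, without giving implementation detail. The sketch below is purely illustrative and is not the authors' EEE: it hill-climbs an XOR key schedule toward a target entropy, and the names `shannon_entropy`, `xor_pack`, and `evolve_key` are invented for this example. Real packers mutate far more than a key schedule.

```python
import math
import random

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 .. 8.0)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def xor_pack(payload: bytes, key: bytes) -> bytes:
    """Reversible toy 'packing': XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

def evolve_key(payload: bytes, target: float, generations: int = 200,
               key_len: int = 8, seed: int = 0) -> bytes:
    """Hill-climb a key so the packed payload's entropy approaches `target`."""
    rng = random.Random(seed)
    key = bytes(rng.randrange(256) for _ in range(key_len))
    best_err = abs(shannon_entropy(xor_pack(payload, key)) - target)
    for _ in range(generations):
        mutant = bytearray(key)
        mutant[rng.randrange(key_len)] = rng.randrange(256)
        err = abs(shannon_entropy(xor_pack(payload, bytes(mutant))) - target)
        if err <= best_err:  # keep mutations that move entropy toward target
            key, best_err = bytes(mutant), err
    return key
```

Because XOR packing is an involution, unpacking with the evolved key recovers the payload exactly, which is the minimum any polymorphic packer must guarantee.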


2021 ◽  
Vol 17 (4) ◽  
pp. 75-88
Author(s):  
Padmaja Kadiri ◽  
Seshadri Ravala

Security threats are unforeseen attacks on the services provided by a cloud service provider. Depending on the type of attack, the cloud service and its associated features become unavailable, so the mitigation time is an integral part of attack recovery. This research paper explores the different parameters that aid in predicting the mitigation time after an attack on cloud services, and presents kernel-based machine learning models that can predict the average mitigation time during security attacks. Analysis of the results shows that the kernel-based models achieve 87% accuracy in predicting the mitigation time. Furthermore, the paper evaluates the performance of the kernel-based machine learning models against regression-based predictive models, using a regression model as a benchmark for the machine learning-based predictive models in predicting mitigation time in the wake of an attack.
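The abstract does not specify the kernel models beyond "kernel-based", so the general shape of such a predictor can be illustrated with a minimal RBF kernel ridge regressor written from scratch. Everything here is illustrative: the feature (a single attack-severity value) and the training data are invented, not the paper's.

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel on scalar features."""
    return math.exp(-gamma * (x - y) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a dense linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

class KernelRidge:
    """Minimal RBF kernel ridge regressor (toy stand-in for a kernel model)."""
    def __init__(self, gamma=1.0, alpha=1e-3):
        self.gamma, self.alpha = gamma, alpha
    def fit(self, X, y):
        # Regularized Gram matrix: K + alpha * I
        K = [[rbf(a, b, self.gamma) + (self.alpha if i == j else 0.0)
              for j, b in enumerate(X)] for i, a in enumerate(X)]
        self.X, self.coef = X, solve(K, y)
        return self
    def predict(self, X):
        return [sum(c * rbf(x, xi, self.gamma) for c, xi in zip(self.coef, self.X))
                for x in X]
```

A linear regression fitted to the same points would serve as the benchmark model the paper describes; the kernel model's advantage appears when mitigation time grows nonlinearly with attack severity.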


Procedia CIRP ◽  
2019 ◽  
Vol 86 ◽  
pp. 185-191
Author(s):  
M. Schreiber ◽  
J. Klöber-Koch ◽  
J. Bömelburg-Zacharias ◽  
S. Braunreuther ◽  
G. Reinhart

The trend of moving applications, together with the data they consume and the data they generate, to the cloud is growing because of the advantages of cloud computing. These advantages accrue to application owners and application consumers, and at the same time to the cloud datacentre owners, that is, the cloud service providers. Since IT operations are vital for business continuity, a data centre generally incorporates redundant or backup components and infrastructure for power supply, data communication connections, environmental controls, and various security devices. A large data centre is an industrial-scale operation that can use as much power as a small community. The primary benefits of pushing applications onto cloud-based data centres are low infrastructure maintenance and significant cost reduction for the application owners, and high profitability for the data centre cloud service providers. During migration to cloud data centres, however, the data and some components of the application become exposed to certain users. In addition, applications hosted on cloud data centres must comply with certain standards to be accepted by various application consumers. To achieve the standard certifications, the applications and the data must be audited by auditing companies. In some cases the auditors are hired by the data centre owners; in others they are engaged by the application consumers. In both situations the auditor is a third party, so the risk of exposing the business logic in the applications and the data always persists. Moreover, in a data centre environment it is highly difficult to ensure that data is isolated from auditors who do not have the right to audit it.
A significant number of studies have attempted to provide a generic solution to this problem. However, those solutions are criticized by the research community for making generic assumptions during the permission-verification process. This work therefore proposes a novel machine learning-based algorithm that assigns or grants audit access permissions to specific auditors on demand, without other approvals, based on the characteristics of the virtual machine on which the application and data are deployed and of the auditing user entity. The results of the proposed algorithm are highly satisfactory, demonstrating nearly 99% accuracy in data-characteristics analysis, nearly 98% accuracy in user-characteristics analysis, and 100% accuracy in the secure auditor-selection process.
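The abstract does not disclose the algorithm beyond "machine learning based", so the sketch below only illustrates the general idea: deciding an audit permission from VM and auditor characteristics, here with a toy k-nearest-neighbours vote over historical grant decisions. The feature names, scaling, and records are invented for illustration.

```python
from collections import Counter

def knn_grant(history, candidate, k=3):
    """Toy k-NN over past audit decisions.

    Each history record is ((vm_sensitivity, data_class, auditor_trust), decision),
    with features scaled to [0, 1] and decision in {"grant", "deny"}.
    Returns the majority decision among the k closest past cases.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(history, key=lambda rec: dist(rec[0], candidate))[:k]
    votes = Counter(decision for _, decision in nearest)
    return votes.most_common(1)[0][0]
```

In this framing, "data characteristics" and "user characteristics" simply become coordinates of the feature vector, and secure auditor selection becomes picking the candidate whose vector lands nearest the granted cluster.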


Cloud computing has evolved over a decade and thrived to the point that Information Technology cannot survive without it; it has become embedded in everyone's life. Although the cloud is now an integral part of nearly every new software product, gaps remain in its security features, and customers keenly examine multiple aspects of a cloud service provider's offering before choosing their appropriate CSP. Beyond cost and features, a Cloud Service Provider must gain the trust of its clients to win more business. Many factors build client confidence, such as transparency, cost, and security. How flexibly the provider can support deployment on lightweight devices, such as those in the emerging Internet of Things (IoT) industry, also matters. Clients likewise take a cloud service's capability to work in collaborative fashion, and its machine learning features, into account when choosing their best-fit Cloud Service Provider.
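The selection factors this passage lists (trust, security, cost, IoT support) lend themselves to a simple weighted-sum ranking. The sketch below is one illustrative way a client might combine them; the criteria names, weights, and scores are invented, not taken from any provider.

```python
def rank_providers(providers, weights):
    """Rank CSPs by a weighted sum of normalized criteria (higher is better).

    providers: {name: {criterion: score in [0, 1]}}
    weights:   {criterion: relative importance}
    """
    total = sum(weights.values())
    scored = [(sum(weights[c] * scores[c] for c in weights) / total, name)
              for name, scores in providers.items()]
    return [name for _, name in sorted(scored, reverse=True)]
```

A client who values trust above all would simply raise that weight; the ranking then reflects the passage's point that trust, not cost alone, drives CSP choice.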


2021 ◽  
Author(s):  
Mohanasundaram R ◽  
Rishikesh Y Mule ◽  
Gowrison Gengavel ◽  
Muhammad Rukunuddin Ghalib ◽  
Achyut Shankar ◽  
...  

Abstract A surveillance system is a means of securing resources and preventing loss of life from fire, gas leakage, intruders, earthquakes, and weather. People today own homes, farms, factories, and offices, and monitoring all of them has become crucial for securing resources and preventing loss of life from these threats. Monitoring the weather is also an essential part of surveillance: climate change and agriculture are interrelated processes, and today's sophisticated commercial farming suffers from a lack of precision in weather monitoring, which results in huge losses on farms. Monitoring residential and commercial areas throughout is an efficient technique to decrease personal and property losses due to fire, gas leakage, and earthquake catastrophes. The Internet of Things makes this possible and can be implemented separately for each thing or site, but it is very difficult to monitor each site individually and to have centralized access to all of them across the world. This raises the need for a heterogeneous system that monitors all the IoT deployments and performs decision making accordingly. IoT is itself a large-scale undertaking: even a single IoT application uses many sensors, and these sensors generate thousands of records per instant of time, some of which are valuable while others merely require analysis. This huge amount of data on servers requires better data processing and analytics, and maintenance is also a critical task. The cloud extends these functionalities, but storing all the data in the cloud entails users paying a tremendous cost to the cloud service providers. This problem is catered for by the "CoTsurF" framework. This paper presents the novel and cost-effective "CoTsurF" framework, a CoT-enabled robust surveillance system using fog machine learning: a proof-of-concept implementation of a heterogeneous and robust surveillance system based on the Internet of Things and cloud computing that leverages the concept of fog machine learning, that is, fog computing combined with machine learning in the Cloud of Things.
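The cost argument above, that shipping every sensor record to the cloud is prohibitively expensive, is exactly what fog-side filtering addresses. Below is a minimal sketch of such a filter, a rolling z-score gate that keeps routine readings at the edge and forwards only anomalies upstream. The threshold, window, and data are illustrative and are not the CoTsurF design.

```python
import statistics

def fog_filter(readings, threshold=2.0, window=10):
    """Forward only anomalous readings to the cloud.

    `readings` is an iterable of (timestamp, value). A reading is uploaded when
    it deviates from the rolling mean of recent values by more than `threshold`
    standard deviations; everything else stays at the fog node.
    """
    uploaded, buffer = [], []
    for t, value in readings:
        if len(buffer) >= 3:                       # need some history first
            mu = statistics.fmean(buffer)
            sigma = statistics.pstdev(buffer) or 1e-9
            if abs(value - mu) / sigma > threshold:
                uploaded.append((t, value))        # anomaly: send to cloud
        buffer.append(value)                       # routine data stays local
        if len(buffer) > window:
            buffer.pop(0)
    return uploaded
```

On a stream of mostly steady sensor values, only the rare spike crosses the gate, so cloud storage and transfer costs scale with the number of events rather than the number of raw records.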


2022 ◽  
Vol 2161 (1) ◽  
pp. 012020
Author(s):  
Sohit Kummar ◽  
Asutosh Mohanty ◽  
Jyotsna ◽  
Sudeshna Chakraborty

Abstract The Coronavirus (Covid-19) pandemic has impacted the whole world and forced health emergencies internationally, and its effects have fallen on almost all development sectors. Many precautionary measures have been taken to control the spread of Covid-19, among which wearing a face mask is essential; wearing a face mask correctly is key to controlling Covid-19 transmission. This research aims to detect face masks with fine-grained wearing states: face with a correctly worn mask and face without a mask. The work involves two challenging tasks: handling the augmented datasets available in the online market and training on large datasets. This paper presents a mobile application for face mask detection. The fully automated machine learning cloud service Google Cloud ML API is used to train the model in TensorFlow file format. The paper highlights the efficiency of the ML model and, additionally, examines the advantages of cloud technology for machine learning over traditional coding methods.


2022 ◽  
Author(s):  
Zhiheng Zhong ◽  
Minxian Xu ◽  
Maria Alejandra Rodriguez ◽  
Chengzhong Xu ◽  
Rajkumar Buyya

Containerization is a lightweight application virtualization technology, providing high environmental consistency, operating system distribution portability, and resource isolation. Existing mainstream cloud service providers have prevalently adopted container technologies in their distributed system infrastructures for automated application management. To handle the automation of deployment, maintenance, autoscaling, and networking of containerized applications, container orchestration is proposed as an essential research problem. However, the highly dynamic and diverse nature of cloud workloads and environments considerably raises the complexity of orchestration mechanisms. Machine learning algorithms are accordingly employed by container orchestration systems for behavior modelling and prediction of multi-dimensional performance metrics. Such insights could further improve the quality of resource provisioning decisions in response to changing workloads under complex environments. In this paper, we present a comprehensive literature review of existing machine learning-based container orchestration approaches. Detailed taxonomies are proposed to classify the current research by its common features. Moreover, the evolution of machine learning-based container orchestration technologies from 2016 to 2021 is traced in terms of their objectives and metrics. A comparative analysis of the reviewed techniques is conducted according to the proposed taxonomies, with emphasis on their key characteristics. Finally, various open research challenges and potential future directions are highlighted.
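As a concrete illustration of the "behavior modelling and prediction" that such orchestration systems apply, here is a deliberately simple proactive autoscaler: a moving-average workload forecast drives the replica count. The surveyed systems use far richer models; the function names, headroom factor, and capacity figures below are invented for this sketch.

```python
import math

def forecast_load(history, window=3):
    """Naive workload forecast: moving average of the last `window` samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def desired_replicas(history, capacity_per_replica,
                     min_r=1, max_r=20, headroom=1.2):
    """Size the deployment for the forecast load plus safety headroom.

    The result is clamped to [min_r, max_r] so a forecast spike or lull
    cannot scale the deployment outside its configured bounds.
    """
    need = forecast_load(history) * headroom / capacity_per_replica
    return max(min_r, min(max_r, math.ceil(need)))
```

Swapping `forecast_load` for a learned model (an ARIMA fit, a regression over recent metrics, or a neural predictor) turns this reactive skeleton into the predictive autoscaling the survey classifies.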

