Automated quality assurance as an intelligent cloud service using machine learning

Procedia CIRP ◽  
2019 ◽  
Vol 86 ◽  
pp. 185-191
Author(s):  
M. Schreiber ◽  
J. Klöber-Koch ◽  
J. Bömelburg-Zacharias ◽  
S. Braunreuther ◽  
G. Reinhart
2021 ◽  
Vol 3 (2) ◽  
pp. 392-413
Author(s):  
Stefan Studer ◽  
Thanh Binh Bui ◽  
Christian Drescher ◽  
Alexander Hanuschkin ◽  
Ludwig Winkler ◽  
...  

Machine learning is an established and frequently used technique in industry and academia, but a standard process model to improve the success and efficiency of machine learning applications is still missing. Project organizations and machine learning practitioners face manifold challenges and risks when developing machine learning applications and need guidance to meet business expectations. This paper therefore proposes a process model for the development of machine learning applications, covering six phases from defining the scope to maintaining the deployed machine learning application. Business and data understanding are executed simultaneously in the first phase, as both have considerable impact on the feasibility of the project. The next phases comprise data preparation, modeling, evaluation, and deployment. Special focus is applied to the last phase, as a model running in changing real-time environments requires close monitoring and maintenance to reduce the risk of performance degradation over time. For each task of the process, this work proposes a quality-assurance methodology suitable for addressing challenges in machine learning development that are identified in the form of risks. The methodology is drawn from practical experience and scientific literature, and has proven to be general and stable. The process model expands on CRISP-DM, a data mining process model that enjoys strong industry support but fails to address machine learning-specific tasks. The presented work proposes an industry- and application-neutral process model tailored for machine learning applications, with a focus on technical tasks for quality assurance.
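The monitoring task in the final phase can be made concrete with a small sketch. The class below is purely illustrative and not taken from the paper: it tracks a deployed model's rolling accuracy over a sliding window of labelled outcomes and flags when performance drops below a threshold, which are hypothetical choices standing in for the paper's more general quality-assurance tasks.

```python
from collections import deque


class DriftMonitor:
    """Track a deployed model's rolling accuracy and flag degradation.

    Illustrative sketch only: the window size, threshold, and class
    design are assumptions, not details from the process model.
    """

    def __init__(self, window: int = 100, threshold: float = 0.85):
        self.window = deque(maxlen=window)  # most recent correctness flags
        self.threshold = threshold          # minimum acceptable accuracy

    def record(self, prediction, ground_truth) -> None:
        # Store whether the deployed model got this instance right.
        self.window.append(prediction == ground_truth)

    def rolling_accuracy(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 1.0

    def needs_retraining(self) -> bool:
        # Only alarm once the window is full, to avoid noisy early flags.
        return (len(self.window) == self.window.maxlen
                and self.rolling_accuracy() < self.threshold)
```

In a real deployment the ground truth often arrives with a delay, so such a monitor would typically run asynchronously alongside the serving path.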


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 395
Author(s):  
Héctor D. Menéndez ◽  
David Clark ◽  
Earl T. Barr

Malware detection is in a coevolutionary arms race where the attackers and defenders are constantly seeking advantage. This arms race is asymmetric: detection is harder and more expensive than evasion. White hats must be conservative to avoid false positives when searching for malicious behaviour. Most of the time, black hats need only make incremental changes to evade detection. On occasion, white hats make a disruptive move and find a new technique that forces black hats to work harder. Examples include system calls, signatures, and machine learning. We seek to redress this imbalance. We present a method, called Hothouse, that combines simulation and search to accelerate the white hat’s ability to counter the black hat’s incremental moves, thereby forcing black hats to perform disruptive moves more often. To realise Hothouse, we evolve EEE, an entropy-based polymorphic packer for Windows executables. Playing the role of a black hat, EEE uses evolutionary computation to disrupt the creation of malware signatures. We enter EEE into the detection arms race with VirusTotal, the most prominent cloud service for running anti-virus tools on software. During our 6-month study, we continually improved EEE in response to VirusTotal, eventually learning a packer that produces packed malware whose median detection rate drops from an initial 51.8% to 19.6%. We report both how well VirusTotal learns to detect EEE-packed binaries and how well VirusTotal forgets in order to reduce false positives. VirusTotal’s tools learn and forget fast, in about 3 days. We also show where VirusTotal focuses its detection efforts by analysing EEE’s variants.
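The statistic at the heart of entropy-based packing is Shannon byte entropy: packed or encrypted sections tend toward the maximum of 8 bits per byte, which is exactly the signal an entropy-aware packer like EEE shapes to evade heuristics. The helper below is a generic sketch of that measurement, not code from the paper.

```python
import math
from collections import Counter


def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0..8).

    High values suggest compressed or encrypted content; entropy-based
    detectors use this as a packing heuristic, and entropy-shaping
    packers deliberately lower it.
    """
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A uniform byte distribution yields exactly 8 bits per byte, while a constant run yields 0, so a detector might flag sections whose entropy exceeds some threshold near 7.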


2021 ◽  
Vol 17 (4) ◽  
pp. 75-88
Author(s):  
Padmaja Kadiri ◽  
Seshadri Ravala

Security threats are unforeseen attacks on the services provided by a cloud service provider. Depending on the type of attack, the cloud service and its associated features may become unavailable, and the mitigation time is an integral part of attack recovery. This research paper explores the parameters that aid in predicting the mitigation time after an attack on cloud services, and presents kernel-based machine learning models that predict the average mitigation time during security attacks. Analysis of the results shows that the kernel-based models achieve 87% accuracy in predicting the mitigation time. Furthermore, the paper evaluates the kernel-based machine learning models against regression-based predictive models, using the regression model as a benchmark for analyzing the performance of the machine learning-based predictors in the prediction of mitigation time in the wake of an attack.
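A kernel-based predictor of this kind can be sketched in a few lines. The snippet below uses Nadaraya-Watson regression with a Gaussian (RBF) kernel, a kernel-weighted average of previously observed mitigation times; the feature encoding, the `gamma` bandwidth, and the data are hypothetical, and the paper's exact model is not reproduced here.

```python
import math


def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel between two numeric feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))


def predict_mitigation_time(train_X, train_y, query, gamma=0.5):
    """Kernel-weighted average of observed mitigation times.

    Illustrative sketch of a kernel-based predictor in the spirit of
    the paper: features such as attack type or affected-service count
    would be encoded numerically in train_X; train_y holds the
    mitigation times observed for past attacks.
    """
    weights = [rbf_kernel(x, query, gamma) for x in train_X]
    total = sum(weights)
    # Attacks most similar to the query dominate the prediction.
    return sum(w * y for w, y in zip(weights, train_y)) / total
```

The benchmark comparison described in the abstract would then pit such a kernel model against an ordinary linear regression fitted on the same features.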


The practice of moving applications, together with the data they consume and the data they generate, to the cloud continues to grow, driven by the advantages of cloud computing. These advantages accrue to application owners and application consumers, and at the same time to the cloud datacentre owners, i.e., the cloud service providers. Because IT operations are vital for business continuity, a datacentre generally includes redundant or backup components and infrastructure for power supply, data communication links, environmental controls, and various security devices; a large datacentre is an industrial-scale operation that can consume as much power as a small town. The primary benefits of hosting applications in cloud datacentres are low infrastructure maintenance and significant cost reduction for the application owners, and high profitability for the datacentre cloud service providers. During migration to a cloud datacentre, however, the data and some components of an application become exposed to certain users. Moreover, applications hosted in cloud datacentres must comply with certain standards to be accepted by various application consumers, and achieving the corresponding certifications requires the applications and the data to be audited by auditing companies. In some cases the auditors are hired by the datacentre owners; in others they are engaged by the application consumers. In both situations the auditors are third parties, so the risk of exposing the business logic embedded in the applications and the data always persists. Furthermore, in a datacentre environment it is highly difficult to ensure that data is isolated from auditors who do not have the right to audit it.
A significant number of studies have attempted to provide a generic solution to this problem. However, these solutions have been criticized by the research community for making generic assumptions during the permission-verification process. This work therefore proposes a novel machine learning-based algorithm to grant audit access permissions to specific auditors in arbitrary situations, without further approvals, based on the characteristics of the virtual machine on which the application and the data are deployed and of the auditing user entity. The results of the proposed algorithm are highly satisfactory, demonstrating nearly 99% accuracy in data-characteristics analysis, nearly 98% accuracy in user-characteristics analysis, and 100% accuracy in the secure auditor-selection process.
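The decision the abstract describes, granting audit access from virtual machine and auditor characteristics, can be illustrated with a minimal learner. The sketch below uses 1-nearest-neighbour over past access records; the feature layout, the distance metric, and the data are assumptions for illustration, not the paper's actual algorithm.

```python
def grant_audit_access(history, vm_features, auditor_features):
    """Decide audit access by 1-nearest-neighbour over past records.

    Illustrative sketch only: `history` holds tuples of
    (vm_features, auditor_features, granted) from previously approved
    or denied audits, with both feature sets encoded as numeric
    tuples (e.g., data sensitivity, tenancy, auditor clearance).
    """
    query = tuple(vm_features) + tuple(auditor_features)

    def sq_dist(record):
        past = tuple(record[0]) + tuple(record[1])
        return sum((a - b) ** 2 for a, b in zip(past, query))

    # Reuse the decision made for the most similar past case.
    nearest = min(history, key=sq_dist)
    return nearest[2]
```

A production version would use a trained classifier with calibrated confidence rather than a single neighbour, and would log each automated grant for later review.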

