Autonomic Security Management for IoT Smart Spaces

2021 ◽  
Vol 2 (4) ◽  
pp. 1-20
Author(s):  
Changyuan Lin ◽  
Hamzeh Khazaei ◽  
Andrew Walenstein ◽  
Andrew Malton

Embedded sensors and smart devices have turned the environments around us into smart spaces that can automatically evolve according to user needs and adapt to new conditions. While smart spaces are beneficial and desirable in many respects, they can be compromised, exposing privacy and security risks or rendering the whole environment a hostile space in which regular tasks can no longer be accomplished. Ensuring the security of smart spaces is a very challenging task due to the heterogeneity of devices, the vast attack surface, and device resource limitations. The key objective of this study is to minimize the manual work required to enforce the security of smart spaces by leveraging the autonomic computing paradigm in the management of IoT environments. More specifically, we strive to build an autonomic manager that can monitor the smart space continuously, analyze the context, plan and execute countermeasures to maintain the desired level of security, and reduce the liability and risk of security breaches. We follow the microservice architecture pattern and propose a generic ontology named Secure Smart Space Ontology (SSSO) for describing dynamic contextual information in security-enhanced smart spaces. Based on SSSO, we build a four-layer autonomic security manager that continuously monitors the managed spaces, analyzes contextual information and events, and automatically plans and implements adaptive security policies. For the evaluation, focusing on a current BlackBerry customer problem, we deployed the proposed autonomic security manager to maintain the security of a smart conference room with 32 devices and 66 services. The high performance of the proposed solution was also evaluated on a large-scale deployment with over 1.8 million triples.
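
To make the control-loop idea concrete, the sketch below shows one MAPE-K-style monitor–analyze–plan–execute pass over a toy set of contextual triples; the triple contents, policy names, and helper functions are illustrative assumptions, not the actual SSSO vocabulary or the manager's API.

```python
# A minimal sketch of a MAPE-K-style autonomic security loop over a
# hypothetical triple-based context (subject, predicate, object).
from dataclasses import dataclass
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

@dataclass
class SecurityPolicy:
    target: str   # device or service the policy applies to
    action: str   # e.g. "block", "isolate", "require_auth"

def monitor(space: List[Triple]) -> List[Triple]:
    """Collect contextual triples describing the managed smart space."""
    return space  # in practice: sensors, network probes, device agents

def analyze(context: List[Triple]) -> List[str]:
    """Flag entities whose reported state violates the desired security level."""
    return [s for (s, p, o) in context
            if p == "hasState" and o in ("unpatched", "untrusted_firmware")]

def plan(violations: List[str]) -> List[SecurityPolicy]:
    """Map each violation to an adaptive countermeasure."""
    return [SecurityPolicy(target=v, action="isolate") for v in violations]

def execute(policies: List[SecurityPolicy]) -> None:
    for p in policies:
        print(f"enforcing {p.action} on {p.target}")

# One pass of the loop over a toy context.
space = [("camera-1", "hasState", "unpatched"),
         ("lock-2", "hasState", "healthy")]
execute(plan(analyze(monitor(space))))
```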

2021 ◽  
Author(s):  
Vahid Arabnejad

Basic science is becoming ever more computationally intensive, increasing the need for large-scale compute and storage resources, be they within a high-performance computing cluster or, more recently, within the cloud. Commercial clouds have increasingly become a viable platform for hosting scientific analyses and computation due to their elasticity, recent introduction of specialist hardware, and pay-as-you-go cost model. This computing paradigm therefore presents a low-capital, low-barrier alternative to operating dedicated eScience infrastructure. Indeed, commercial clouds now enable universal access to capabilities previously available only to large, well-funded research groups. While the potential benefits of cloud computing are clear, there are still significant technical hurdles associated with obtaining the best execution efficiency whilst trading off cost. In most cases, large-scale scientific computation is represented as a workflow for scheduling and runtime provisioning. Such scheduling becomes an even more challenging problem on cloud systems due to the dynamic nature of the cloud, in particular the elasticity, the pricing models (both static and dynamic), the non-homogeneous resource types, and the vast array of services. This mapping of workflow tasks onto a set of provisioned instances is an example of the general scheduling problem and is NP-complete. In addition, certain runtime constraints, the most typical being the cost of the computation and the time that computation requires to complete, must be met. This thesis addresses the scientific workflow scheduling problem in the cloud, which is to schedule workflow tasks on cloud resources in a way that users meet their defined constraints, such as budget and deadline, and providers maximize profits and resource utilization. Moreover, it explores different mechanisms and strategies for distributing defined constraints over a workflow and investigates their impact on the overall cost of the resulting schedule.
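
As an illustration of the constraint-distribution idea mentioned above, the following sketch splits a user deadline across workflow levels in proportion to each level's slowest task; the workflow, runtimes, and strategy are illustrative assumptions, not the algorithms developed in the thesis.

```python
# A minimal sketch of proportional deadline distribution over workflow levels.
from typing import Dict, List

def distribute_deadline(levels: List[List[str]],
                        min_runtime: Dict[str, float],
                        deadline: float) -> Dict[int, float]:
    """Return a sub-deadline per level, proportional to its critical task."""
    level_cost = [max(min_runtime[t] for t in level) for level in levels]
    total = sum(level_cost)
    return {i: deadline * c / total for i, c in enumerate(level_cost)}

# Toy workflow: level 0 has two parallel tasks, level 1 depends on both.
levels = [["preprocess_a", "preprocess_b"], ["aggregate"]]
min_runtime = {"preprocess_a": 10.0, "preprocess_b": 14.0, "aggregate": 6.0}
print(distribute_deadline(levels, min_runtime, deadline=60.0))
# -> {0: 42.0, 1: 18.0}
```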


2012 ◽  
Vol 8 (4) ◽  
pp. 102 ◽  
Author(s):  
Claudia Canali ◽  
Riccardo Lancellotti

The recent growth in demand for modern applications, combined with the shift to the cloud computing paradigm, has led to the establishment of large-scale cloud data centers. The increasing size of these infrastructures represents a major challenge in terms of monitoring and management of the system resources. Available solutions typically consider every Virtual Machine (VM) as a black box, each with independent characteristics, and face scalability issues by reducing the number of monitored resource samples, in most cases considering only average CPU usage sampled at a coarse time granularity. We claim that scalability issues can be addressed by leveraging the similarity between VMs in terms of resource usage patterns. In this paper we propose an automated methodology to cluster VMs depending on the usage of multiple resources, both system- and network-related, assuming no knowledge of the services executed on them. This innovative methodology exploits the correlation between resource usage metrics to cluster similar VMs together. We evaluate the methodology through a case study with data coming from an enterprise data center, and we show that high performance may be achieved in automatic VM clustering. Furthermore, we estimate the reduction in the amount of data collected, showing that our proposal may simplify the monitoring requirements and help administrators make decisions on the resource management of cloud computing data centers.
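
A minimal sketch of the correlation-based clustering idea: each VM is summarized by the pairwise correlations among its resource usage time series, and similar VMs are grouped hierarchically. The metric names, synthetic data, and choice of hierarchical clustering are assumptions for illustration, not the paper's methodology.

```python
# Cluster VMs by the correlation structure of their resource usage time series.
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
metrics = ["cpu", "mem", "net_in", "net_out"]

def correlation_features(samples: np.ndarray) -> np.ndarray:
    """samples: (n_metrics, n_samples) -> vector of pairwise metric correlations."""
    corr = np.corrcoef(samples)
    idx = list(combinations(range(len(metrics)), 2))
    return np.array([corr[i, j] for i, j in idx])

# Toy data: 6 VMs, 4 metrics, 200 samples each.
vms = [rng.standard_normal((len(metrics), 200)) for _ in range(6)]
features = np.vstack([correlation_features(v) for v in vms])

# Hierarchical clustering on the correlation-based feature vectors.
labels = fcluster(linkage(features, method="ward"), t=2, criterion="maxclust")
print(labels)
```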


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
E. A. Huerta ◽  
Asad Khan ◽  
Edward Davis ◽  
Colleen Bushell ◽  
William D. Gropp ◽  
...  

Abstract Significant investments to upgrade and construct large-scale scientific facilities demand commensurate investments in R&D to design algorithms and computing approaches to enable scientific and engineering breakthroughs in the big data era. Innovative Artificial Intelligence (AI) applications have powered transformational solutions for big data challenges in industry and technology that now drive a multi-billion dollar industry, and which play an ever-increasing role in shaping human social patterns. As AI continues to evolve into a computing paradigm endowed with statistical and mathematical rigor, it has become apparent that single-GPU solutions for training, validation, and testing are no longer sufficient for computational grand challenges brought about by scientific facilities that produce data at a rate and volume that outstrip the computing capabilities of available cyberinfrastructure platforms. This realization has been driving the confluence of AI and high performance computing (HPC) to reduce time-to-insight and to enable a systematic study of domain-inspired AI architectures and optimization schemes for data-driven discovery. In this article we present a summary of recent developments in this field and describe specific advances that the authors of this article are spearheading to accelerate and streamline the use of HPC platforms to design and apply accelerated AI algorithms in academia and industry.
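
A toy illustration of the data-parallel pattern that motivates combining AI with HPC: each worker computes gradients on its own data shard, and the averaged gradient (an allreduce in a real HPC deployment) drives a shared parameter update. This single-process NumPy sketch is only a conceptual stand-in for the distributed frameworks discussed in the article.

```python
# Data-parallel gradient averaging, simulated in one process for clarity.
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)                       # shared model parameters
X = rng.standard_normal((1024, 3))    # full dataset
y = X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.standard_normal(1024)

def local_gradient(w, Xs, ys):
    """Least-squares gradient on one worker's shard."""
    return 2.0 * Xs.T @ (Xs @ w - ys) / len(ys)

n_workers, lr = 4, 0.1
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))
for step in range(200):
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]  # parallel on HPC
    w -= lr * np.mean(grads, axis=0)                          # allreduce + update
print(w)  # approaches [2.0, -1.0, 0.5]
```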


Author(s):  
Y. Yang ◽  
C. Toth ◽  
D. Brzezinska

Abstract. Indoor positioning technologies represent a fast-developing field of research due to the rapidly increasing need for indoor location-based services (ILBS), in particular for applications using personal smart devices. Recently, progress in indoor mapping, including 3D modeling and semantic labeling, has started to offer benefits to indoor positioning algorithms, mainly in terms of accuracy. This work presents a method for efficient and robust indoor localization that supports applications in large-scale environments. To achieve high performance, the proposed concept integrates two main indoor localization techniques: Wi-Fi fingerprinting and deep learning-based visual localization using a 3D map. The robustness and efficiency of the technique are demonstrated with real-world experiments.
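
As a concrete example of the Wi-Fi fingerprinting half of such a pipeline, the sketch below matches an observed RSSI vector against an offline survey database with distance-weighted k-nearest neighbours; the access points, survey data, and the fusion with the visual branch are illustrative assumptions rather than the authors' implementation.

```python
# Weighted k-NN Wi-Fi fingerprinting against a small offline survey.
import numpy as np

# Offline survey: (x, y) positions and RSSI readings from 3 access points.
survey_pos = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
survey_rssi = np.array([[-40, -70, -65],
                        [-70, -42, -75],
                        [-66, -74, -41],
                        [-72, -68, -55]], dtype=float)

def knn_position(observed: np.ndarray, k: int = 3) -> np.ndarray:
    """Estimate position as the distance-weighted mean of the k closest fingerprints."""
    d = np.linalg.norm(survey_rssi - observed, axis=1)
    nearest = np.argsort(d)[:k]
    weights = 1.0 / (d[nearest] + 1e-6)
    return (weights @ survey_pos[nearest]) / weights.sum()

print(knn_position(np.array([-45.0, -69.0, -63.0])))  # position estimate (x, y)
```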


2020 ◽  
Author(s):  
João Paulo Cardoso De Lima ◽  
Leandro Buss Becker ◽  
Frank Siqueira ◽  
Analucia Schaffino Morales ◽  
Gustavo Medeiros De Araújo

The growing development of smart devices makes it possible to create new distributed applications targeted at smart spaces. The design of intelligent spaces assumes that there is an infrastructure to support the applications' requirements. Many academic works have proposed middleware that provides an abstraction for the use of network services. The network services of a smart space, such as an automated home, can have different communication interfaces. Accordingly, we developed a middleware called UDP4US (Universal Device Pipe for Ubiquitous Services), designed to abstract different communication patterns while keeping the discovery of device services on the local network. In this paper, we present a new UDP4US architecture component that aims to expose local network device services to the Internet. The new component was developed with REST technology, so device services can be discovered and accessed over the Internet. The new component was exhaustively tested in order to find the limits of its effectiveness. The evaluation of the new component was performed by measuring its discovery and execution times plus the success rate of executing the services exposed over the Internet. The results of the present work are important to guide a better design of distributed applications for smart places.
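
To illustrate how such a component might expose locally discovered services over the Internet, the sketch below serves a toy service registry through two REST endpoints; the endpoint paths, registry contents, and use of Flask are hypothetical and are not the actual UDP4US component.

```python
# A hypothetical REST facade over a locally discovered service registry.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# In the real middleware this registry would be filled by local discovery.
services = {
    "lamp-1/switch": {"device": "lamp-1",   "interface": "zigbee", "state": "off"},
    "sensor-3/temp": {"device": "sensor-3", "interface": "ble",    "value": 21.5},
}

@app.route("/services")
def list_services():
    """Discovery: list every service currently known on the local network."""
    return jsonify(sorted(services.keys()))

@app.route("/services/<path:name>")
def get_service(name):
    """Inspection: return the description of one exposed service."""
    if name not in services:
        abort(404)
    return jsonify(services[name])

if __name__ == "__main__":
    app.run(port=8080)
```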


Author(s):  
Eliu Huerta ◽  
Asad Khan ◽  
Edward Davis ◽  
Colleen Bushell ◽  
William Gropp ◽  
...  

Abstract Significant investments to upgrade and construct large-scale scientific facilities demand commensurate investments in R&D to design algorithms and computing approaches to enable scientific and engineering breakthroughs in the big data era. Innovative Artificial Intelligence (AI) applications have powered transformational solutions for big data challenges in industry and technology that now drive a multi-billion dollar industry, and which play an ever-increasing role in shaping human social patterns. As AI continues to evolve into a computing paradigm endowed with statistical and mathematical rigor, it has become apparent that single-GPU solutions for training, validation, and testing are no longer sufficient for AI applications that aim to provide novel solutions for big-data challenges posed by scientific facilities that produce data at a rate and volume that outstrip the computing capabilities of available cyberinfrastructure platforms. This realization has been driving the confluence of AI and high performance computing (HPC), which is critical to reduce time-to-insight and to enable a systematic study of domain-inspired AI architectures and optimization schemes for data-driven discovery. In this article we present a summary of recent developments in this field and discuss avenues to accelerate and streamline the use of HPC platforms to design accelerated AI algorithms.


2020 ◽  
Vol 2 (3) ◽  
pp. 173-180
Author(s):  
Dr. M. Duraipandian

The Internet of Things (IoT) has gained more attention in recent years, and its influence on the future Internet is projected to grow as a promising technology. IoT enables sensors to merge with smart devices to monitor, observe, and analyse real-time data. These features make IoT a suitable technology for smart applications. On the other hand, the cloud offers a better computing paradigm to store and analyse the data. The cloud reduces the complexities of day-to-day life with its novel applications and services in an efficient manner. However, present IoT and cloud solutions are focused on centralized architectures, which limits user capacity. To enrich the benefits of cloud-integrated IoT, flexible large-scale data collection and analysis is introduced as crowdsourcing, which provides a new dimension in data mining applications. This research work presents a cloud computing crowdsourced data analysis model implemented over IoT to obtain better computation speed with improved sensitivity, specificity, and accuracy.
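
For reference, the evaluation metrics named above can be computed from confusion-matrix counts as in the short sketch below; the counts shown are illustrative, not results from this work.

```python
# Sensitivity, specificity, and accuracy from confusion-matrix counts.
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),              # true positive rate
        "specificity": tn / (tn + fp),              # true negative rate
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
    }

print(classification_metrics(tp=90, tn=80, fp=20, fn=10))
```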


Author(s):  
Pavel Klavík ◽  
A. Cristiano I. Malossi ◽  
Costas Bekas ◽  
Alessandro Curioni

Power awareness is fast becoming immensely important in computing, ranging from traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations, which finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics, and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grained level. In addition, we verify previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications.
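
One common way to combine low- and high-precision arithmetic for linear systems is iterative refinement: solve cheaply in low precision, then correct the residual in high precision. The NumPy sketch below illustrates that general technique under an assumed well-conditioned system; it is not the authors' implementation or power-profiling tooling.

```python
# Mixed-precision iterative refinement for Ax = b.
import numpy as np

def mixed_precision_solve(A: np.ndarray, b: np.ndarray, iters: int = 5) -> np.ndarray:
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)  # cheap solve
    for _ in range(iters):
        r = b - A @ x                                    # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))   # correction in float32
        x += d.astype(np.float64)                        # refine in float64
    return x  # in practice the float32 factorization would be reused, not recomputed

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)  # well-conditioned system
b = rng.standard_normal(200)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))  # residual shrinks toward float64 accuracy
```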


Author(s):  
C.K. Wu ◽  
P. Chang ◽  
N. Godinho

Recently, the use of refractory metal silicides as low-resistivity, high-temperature, and high-oxidation-resistance gate materials in large-scale integrated circuits (LSI) has become an important approach in advanced MOS process development (1). This research is a systematic study of the structure and properties of molybdenum silicide thin films and their applicability to high-performance LSI fabrication.

