A Survey on Machine-Learning Based Security Design for Cyber-Physical Systems

2021 ◽  
Vol 11 (12) ◽  
pp. 5458
Author(s):  
Sangjun Kim ◽  
Kyung-Joon Park

A cyber-physical system (CPS) integrates a physical system in the real world with control applications in a computing system, interacting through a communications network. Network technology connecting physical systems and computing systems enables the simultaneous control of many physical systems and provides intelligent applications for them. However, enhanced connectivity extends the attack surface, allowing attackers to trespass on the network and launch cyber-physical attacks that remotely disrupt the CPS. Therefore, extensive studies into cyber-physical security are being conducted in various domains, such as physical, network, and computing systems. Moreover, large-scale and complex CPSs make it difficult to analyze and detect cyber-physical attacks, and thus, machine learning (ML) techniques have recently been adopted for cyber-physical security. In this survey, we provide an extensive review of the threats and ML-based security designs for CPSs. First, we present a CPS structure that classifies the functions of the CPS into three layers: the physical system, the network, and software applications. Then, we discuss the taxonomy of cyber-physical attacks on each layer, and in particular, we analyze attacks based on the dynamics of the physical system. We review existing studies on detecting cyber-physical attacks with various ML techniques from the perspectives of the physical system, the network, and the computing system. Furthermore, we discuss future research directions for ML-based cyber-physical security in the context of real-time constraints, resiliency, and dataset generation for learning possible attacks.
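Detection grounded in the dynamics of the physical system is often illustrated with a residual-based detector: a model of the plant predicts the next sensor reading, and a large prediction residual flags a possible cyber-physical attack. A minimal sketch follows; the scalar plant, observer gain, noise levels, threshold, and bias attack are all illustrative assumptions, not taken from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)

a, c = 0.9, 1.0                  # scalar plant: x' = a*x + w,  y = c*x + v
sigma_w, sigma_v = 0.01, 0.01    # process and measurement noise
threshold = 0.1                  # alarm when |residual| exceeds this

x, x_hat = 1.0, 1.0
alarms = []
for k in range(200):
    x = a * x + rng.normal(0, sigma_w)   # true physical state evolves
    y = c * x + rng.normal(0, sigma_v)   # sensor measurement
    if k >= 100:                         # attacker injects a sensor bias
        y += 0.5
    y_pred = c * (a * x_hat)             # model-based one-step prediction
    r = y - y_pred                       # residual
    alarms.append(abs(r) > threshold)
    x_hat = a * x_hat + 0.5 * r          # simple observer update

first_alarm = alarms.index(True)         # coincides with the attack onset
print(first_alarm)
```

The residual stays near the noise floor while the physics match the model, and jumps as soon as the injected bias violates the plant dynamics.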

2012 ◽  
Vol 488-489 ◽  
pp. 881-885
Author(s):  
Ji Yeon Kim ◽  
Hyung Jong Kim ◽  
Jin Myoung Kim ◽  
Won Tae Kim

The term “cyber-physical system (CPS)” refers to a computing system that integrates physical processes and computational devices via a network. There are many physical and computational devices in a CPS, which can function automatically through inter-device interactions. Because a CPS is usually used for large-scale complex systems, its design and execution should be verified through simulations to ensure reliable operation. For CPS simulation, a communication protocol should be established for data transmission between physical systems and the corresponding simulation models during the simulation, including control algorithms for regulating differences between the two systems. First, because physical systems advance in real time while simulation models advance in logical time, time regulation methods should be included in the control algorithm. Second, to simulate various types of physical systems, a flexible simulation environment is required that is independent of the operating environment, such as the type of communication middleware. In this paper, we propose a communication protocol for data transmission between physical systems and simulation models via a middle layer that contains the policies for handling the two different clocks of each system: virtual and real. The proposed communication protocol can be used not only for communication between the two systems but also for overcoming the problems caused by the differences in their operating environments. The contribution of this work is that it defines a communication protocol and proposes methods for controlling different types of systems.
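The time-regulation idea behind such a middle layer can be sketched as a mapping between wall-clock (real) time and the simulator's logical time through a scale factor, deciding when the simulation may advance; the class and method names below are illustrative, not from the paper:

```python
import time

class TimeRegulator:
    """Middle layer mapping real (wall-clock) time to logical simulation time."""

    def __init__(self, scale=1.0):
        self.scale = scale             # logical seconds per real second
        self.start = time.monotonic()  # real-time origin

    def logical_now(self):
        """Logical time corresponding to the current wall-clock time."""
        return (time.monotonic() - self.start) * self.scale

    def may_advance(self, next_event_time):
        """The simulator may process an event only once real time has caught up."""
        return next_event_time <= self.logical_now()

    def wait_until(self, next_event_time):
        """Block a faster-than-real-time simulation until the physical side catches up."""
        lag = next_event_time / self.scale - (time.monotonic() - self.start)
        if lag > 0:
            time.sleep(lag)

# A simulator allowed to run twice as fast as real time:
reg = TimeRegulator(scale=2.0)
reg.wait_until(0.2)   # logical t = 0.2 corresponds to real t = 0.1
print(reg.may_advance(0.2))
```

Pacing in the opposite direction (a simulation slower than real time) would instead drop or batch physical-system samples, which is one of the policy decisions the paper assigns to the middle layer.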


2008 ◽  
Vol 20 (5) ◽  
pp. 750-756
Author(s):  
Shingo Nakamura ◽  
Shuji Hashimoto

We describe the adaptive modeling of a physical system using the affine transform and its application to machine learning. We previously proposed a method to implement machine learning on physical hardware, in which we built a simulator based on actual hardware input/output and used it to optimize a controller. The method reduces stress on the hardware because the controller is optimized in software via the simulator. Moreover, it requires neither specific physical information about the hardware nor a formulation of the hardware kinematics. When the hardware changes, however, optimization must be redone to rebuild the simulator, which is clearly inefficient. We therefore considered reusing previous optimization results when reoptimizing for new hardware. In a physical system, the overall shape of the phase space does not vary much as long as the system structure remains the same. We applied an affine transform to the phase space of the physical system to remodel the simulator for new hardware characteristics caused by parameter changes, and used the remodeled simulator in machine learning to reoptimize the controller. In experiments on the swing-up pendulum problem, we compared the proposed and original methods and found that our proposal accelerates reoptimization.
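The remodeling step can be sketched as fitting an affine map between corresponding phase-space samples of the old and new hardware by least squares; the 2-D pendulum-like state and the sample data below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def fit_affine(X_old, X_new):
    """Fit A, b minimizing ||X_old @ A.T + b - X_new|| by least squares."""
    n = X_old.shape[0]
    H = np.hstack([X_old, np.ones((n, 1))])   # homogeneous coordinates
    W, *_ = np.linalg.lstsq(H, X_new, rcond=None)
    return W[:-1].T, W[-1]                    # A is (d, d), b is (d,)

rng = np.random.default_rng(1)
X_old = rng.normal(size=(50, 2))              # (angle, angular velocity) samples
A_true = np.array([[1.1, 0.2], [0.0, 0.9]])   # hypothetical hardware change
b_true = np.array([0.05, -0.1])
X_new = X_old @ A_true.T + b_true             # new hardware's phase-space points

A_fit, b_fit = fit_affine(X_old, X_new)
print(np.allclose(A_fit, A_true), np.allclose(b_fit, b_true))
```

With the fitted map, trajectories recorded on the old hardware can be transformed and reused instead of collecting a full new dataset, which is the source of the speed-up the paper reports.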


2000 ◽  
Vol 23 (3) ◽  
pp. 405-406
Author(s):  
V. K. Jirsa ◽  
J. A. S. Kelso

Nunez's description of the brain as a medium capable of wave propagation has provided some fundamental insights into its dynamics. However, this approach soon reaches the descriptive limits of the brain as a physical system. We point out some biological constraints that differentiate the brain from physical systems and elaborate on their consequences for future research.


2021 ◽  
Author(s):  
Louis Hickman ◽  
Rachel Saef ◽  
Vincent Ng ◽  
Sang Eun Woo ◽  
Louis Tay ◽  
...  

Organizations are increasingly relying on people analytics to aid human resources decision-making. One application involves using machine learning to automatically infer applicant characteristics from employment interview responses. However, management research has provided scant validity evidence to guide organizations’ decisions about whether and how best to implement these algorithmic approaches. To address this gap, we use closed vocabulary text mining on mock video interviews to train and test machine learning algorithms for predicting interviewees’ self-reported personality traits (automatic personality recognition) and interviewer-rated personality traits (automatic personality perception). We use 10-fold cross-validation to test the algorithms’ accuracy for predicting Big Five personality traits across both rating sources. The cross-validated accuracy for predicting self-reports was lower than that reported in large-scale investigations using language in social media posts as predictors. The cross-validated accuracy for predicting interviewer ratings of personality was more than double that found for predicting self-reports. We discuss implications for future research and practice.
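The validation design can be sketched as a manual 10-fold cross-validation of a linear model on synthetic closed-vocabulary features; the word-category counts, trait scores, and model below are illustrative assumptions, not the study's actual features or algorithms:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 5
X = rng.poisson(3.0, size=(n, d)).astype(float)   # word-category counts per interview
w_true = np.array([0.4, -0.3, 0.2, 0.0, 0.1])
y = X @ w_true + rng.normal(0, 0.5, n)            # a self-reported trait score

folds = np.array_split(rng.permutation(n), 10)    # 10 disjoint held-out folds
preds = np.empty(n)
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    Xtr = np.hstack([X[train_idx], np.ones((len(train_idx), 1))])
    w, *_ = np.linalg.lstsq(Xtr, y[train_idx], rcond=None)   # fit on 9 folds
    preds[test_idx] = np.hstack([X[test_idx], np.ones((len(test_idx), 1))]) @ w

r = np.corrcoef(preds, y)[0, 1]   # cross-validated accuracy as a correlation
print(round(r, 2))
```

Every prediction is made by a model that never saw that interview during training, which is what makes the reported correlation an out-of-sample accuracy estimate.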


2018 ◽  
Vol 7 (2) ◽  
pp. 235-267 ◽  
Author(s):  
Christina Wasson ◽  
Melanie Medina ◽  
Miyoung Chong ◽  
Brittany LeMay ◽  
Emma Nalin ◽  
...  

This article explores the challenges of designing large-scale computing systems for multiple, diverse user groups. Such computing systems house large, complex datasets, and often provide analytic tools to interpret the data. They are increasingly central to activities in industry, science, and government agencies, and are often associated with “big data,” data warehousing, and/or scientific “cyberinfrastructure”.  A key characteristic of these systems is the diversity and multiplicity of their intended user groups, which may range from various scientific disciplines, to assorted business functions, to government officials and citizen groups. These user groups occupy structurally different positions in local and global political economies, and bring different forms of expertise to the data housed in the computing system. We argue that design anthropologists can contribute to the usefulness of such systems by engaging in collaborative ethnographic research with the targeted user groups, and communicating findings to the designers and developers creating these systems. 


Author(s):  
Clare Horsman ◽  
Susan Stepney ◽  
Rob C. Wagner ◽  
Viv Kendon

Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell whether a given physical system is acting as a computer, leading to confusion over novel computational devices and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation, and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’ and its critical role in defining when computing is taking place in physical systems.
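One way to state the interaction between the abstract and physical levels is as a commuting condition between the two evolutions; the notation below is a sketch of the general abstraction/representation idea, not the paper's exact formalism:

```latex
% A physical system p evolves under the device dynamics H; a representation
% relation R maps physical states to abstract states; C is the abstract
% (computational) evolution. The device acts as a computer when, within an
% error \varepsilon, predicting abstractly and observing physically agree:
\[
  \mathcal{R}\bigl(H(p)\bigr) \;\approx_{\varepsilon}\; C\bigl(\mathcal{R}(p)\bigr)
\]
% i.e., representing the evolved physical state yields the same abstract
% state as evolving the representation of the initial physical state.
```

When this condition fails, the physical process is merely evolving, not computing, which is how the framework excludes the claim that every physical event is a computation.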


2020 ◽  
Author(s):  
Mengyao Jiang ◽  
Yuxia Ma ◽  
Siyi Guo ◽  
Liuqi Jin ◽  
Lin Lv ◽  
...  

BACKGROUND Pressure injury (PI) is a common and preventable problem, yet it is a challenge for at least two reasons. First, the nurse shortage is a worldwide phenomenon. Second, the majority of nurses have insufficient PI-related knowledge. Machine learning (ML) technologies can contribute to lessening the burden on medical staff by improving the prognosis and diagnostic accuracy of PI. To the best of our knowledge, there is no existing systematic review that evaluates how the current ML technologies are being used in PI management. OBJECTIVE The objective of this review was to synthesize and evaluate the literature regarding the use of ML technologies in PI management, and identify their strengths and weaknesses, as well as to identify improvement opportunities for future research and practice. METHODS We conducted an extensive search on PubMed, EMBASE, Web of Science, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Cochrane Library, China National Knowledge Infrastructure (CNKI), the Wanfang database, the VIP database, and the China Biomedical Literature Database (CBM) to identify relevant articles. Searches were performed in June 2020. Two independent investigators conducted study selection, data extraction, and quality appraisal. Risk of bias was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). RESULTS A total of 32 articles met the inclusion criteria. Twelve of those articles (38%) reported using ML technologies to develop predictive models to identify risk factors, 11 (34%) reported using them in posture detection and recognition, and 9 (28%) reported using them in image analysis for tissue classification and measurement of PI wounds. These articles presented various algorithms and measured outcomes. The overall risk of bias was judged as high. CONCLUSIONS There is an array of emerging ML technologies being used in PI management, and their results in the laboratory show great promise. 
Future research should apply these technologies on a large scale with clinical data to further verify and improve their effectiveness, as well as to improve the methodological quality.


Author(s):  
И.М. Соколинская ◽  
Л.Б. Соколинский

The paper is devoted to a new method for solving large-scale linear programming (LP) problems, called the apex-method. The apex-method uses the predictor–corrector framework. The predictor step finds a point lying on the boundary of the <em>n</em>-dimensional polytope that defines the feasible region of the LP problem. The corrector step then runs an iterative process that builds a sequence of points converging to the exact solution of the LP problem. The paper gives a formal description of the apex-method and provides information about its parallel implementation in C++ using the MPI library. The results of large-scale computational experiments on a cluster computing system studying the scalability of the apex-method are discussed.
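The corrector's idea of driving a point toward the feasible polytope can be illustrated with cyclic projections onto the constraint half-spaces. This is a classic feasibility routine used here as a stand-in, not the authors' exact pseudo-projection operator, and the toy constraints are assumptions:

```python
import numpy as np

# Feasible region {x : A x <= b}: a toy 2-D polytope (a triangle).
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])

def project_halfspace(x, a, beta):
    """Orthogonal projection of x onto the half-space {y : a.y <= beta}."""
    viol = a @ x - beta
    return x - (viol / (a @ a)) * a if viol > 0 else x

x = np.array([3.0, 2.0])      # infeasible starting point
for _ in range(100):          # cyclic projections converge to a feasible point
    for a_row, beta in zip(A, b):
        x = project_halfspace(x, a_row, beta)

print(np.all(A @ x <= b + 1e-9))
```

Each pass moves the point only as far as needed to satisfy one violated constraint; the sequence of such points converges into the polytope, mirroring the corrector's construction of a convergent point sequence.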


2021 ◽  
Vol 18 (6) ◽  
pp. 1941-1970
Author(s):  
Christopher Holder ◽  
Anand Gnanadesikan

Abstract. A key challenge for biological oceanography is relating the physiological mechanisms controlling phytoplankton growth to the spatial distribution of those phytoplankton. Physiological mechanisms are often isolated by varying one driver of growth, such as nutrients or light, in a controlled laboratory setting, producing what we call “intrinsic relationships”. We contrast these with the “apparent relationships” that emerge in the environment in climatological data. Although previous studies have found that machine learning (ML) can find apparent relationships, there has yet to be a systematic study examining when and why these apparent relationships diverge from the underlying intrinsic relationships found in the lab, and how and why this may depend on the method applied. Here we conduct a proof-of-concept study with three scenarios in which biomass is, by construction, a function of time-averaged phytoplankton growth rate. In the first scenario, the inputs and outputs of the intrinsic and apparent relationships vary over the same monthly timescales. In the second, the intrinsic relationships relate averages of drivers that vary on hourly timescales to biomass, but the apparent relationships are sought between monthly averages of these inputs and monthly-averaged output. In the third scenario we apply ML to the output of an actual Earth system model (ESM). Our results demonstrated that when intrinsic and apparent relationships operate on the same spatial and temporal scales, neural network ensembles (NNEs) were able to extract the intrinsic relationships when provided only with the apparent relationships, whereas random forests (RFs) diverged from the true response because of colimitation and their inability to extrapolate.
When intrinsic and apparent relationships operated on different timescales (as little separation as hourly versus daily), NNEs fed with apparent relationships in time-averaged data produced responses with the right shape but underestimated the biomass. This was because when the intrinsic relationship was nonlinear, the response to a time-averaged input differed systematically from the time-averaged response. Although the limitations found by NNEs were overestimated, they were able to produce more realistic shapes of the actual relationships compared to multiple linear regression. Additionally, NNEs were able to model the interactions between predictors and their effects on biomass, allowing for a qualitative assessment of the colimitation patterns and the nutrient causing the most limitation. Future research may be able to use this type of analysis for observational datasets and other ESMs to identify apparent relationships between biogeochemical variables (rather than spatiotemporal distributions only) and identify interactions and colimitations without having to perform (or at least performing fewer) growth experiments in a lab. From our study, it appears that ML can extract useful information from ESM output and could likely do so for observational datasets as well.
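The systematic offset introduced by time-averaging a nonlinear intrinsic relationship can be checked directly: for a saturating (concave) growth response, the response to the averaged input exceeds the average of the responses. The Michaelis-Menten form and the values below are illustrative, not the paper's scenarios:

```python
import numpy as np

def growth(nutrient, k=0.5):
    """Saturating Michaelis-Menten-style intrinsic response."""
    return nutrient / (k + nutrient)

rng = np.random.default_rng(3)
hourly = rng.uniform(0.1, 2.0, size=24 * 30)    # hourly nutrient over a month

monthly_mean_input = hourly.mean()
response_of_mean = growth(monthly_mean_input)   # what monthly-averaged data implies
mean_of_response = growth(hourly).mean()        # what actually accumulated hourly

print(response_of_mean > mean_of_response)      # concave response: always True
```

This is Jensen's inequality at work: a model trained on monthly averages sees `response_of_mean` while the biomass reflects `mean_of_response`, producing exactly the kind of right-shape-but-biased fit described above.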


Author(s):  
Rajani Chetan ◽  
Ramesh Shahabadkar

<em>‘Internet of Things (IoT)’</em> emerged as intelligent collaborative computation and communication among a set of objects capable of providing on-demand services to other objects anytime, anywhere. Large-scale deployment of data-driven cloud applications, together with automated physical things carrying embedded electronics, software, sensors, and network connectivity, enables ubiquitous and pervasive internet-based computing systems capable of interacting with each other in an IoT. The IoT, a well-known term and a growing trend in the IT arena, certainly brings a highly connected global network structure providing many benefits to users in terms of business productivity, lifestyle improvement, government efficiency, etc. It also generates enormous heterogeneous and homogeneous data that must be analyzed properly to gain insight into valuable information. However, adopting this new reality by integrating it with the internet invites certain challenges from a security and privacy perspective. At present, much effort has been put toward strengthening IoT security, yet optimal solutions to the current security flaws have not been found. Therefore, the prime aim of this study is to investigate the qualitative aspects of conventional security solution approaches in the IoT. It also extracts some open research problems that could affect the future research track of the IoT arena.

