Distributed Embedded Systems
Recently Published Documents


TOTAL DOCUMENTS: 327 (FIVE YEARS: 19)
H-INDEX: 23 (FIVE YEARS: 1)

2021, Vol 20 (5s), pp. 1-23
Author(s): Vipin Kumar Kukkala, Sooryaa Vignesh Thiruloga, Sudeep Pasricha

Modern vehicles can be thought of as complex distributed embedded systems that run a variety of automotive applications with real-time constraints. Recent advances in the automotive industry towards greater autonomy are driving vehicles to be increasingly connected with various external systems (e.g., roadside beacons, other vehicles), which makes emerging vehicles highly vulnerable to cyber-attacks. Additionally, the increased complexity of automotive applications and in-vehicle networks results in poor attack visibility, which makes detecting such attacks particularly challenging in automotive systems. In this work, we present a novel anomaly detection framework called LATTE to detect cyber-attacks in Controller Area Network (CAN)-based networks within automotive platforms. Our proposed LATTE framework uses a stacked Long Short-Term Memory (LSTM) predictor network with novel attention mechanisms to learn the normal operating behavior at design time. Subsequently, a novel detection scheme (also trained at design time) is used to detect various cyber-attacks (as anomalies) at runtime. We evaluate our proposed LATTE framework under different automotive attack scenarios and present a detailed comparison with the best-known prior works in this area to demonstrate the potential of our approach.
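
The abstract does not provide implementation details, so the following is only a minimal sketch of prediction-based anomaly detection in the spirit of LATTE: an LSTM predictor is trained on attack-free CAN signal windows, and at runtime a large prediction error is flagged as an anomaly. The stacked-attention architecture, layer sizes, window length, and error threshold below are illustrative assumptions, not the authors' configuration.

import torch
import torch.nn as nn

class SignalPredictor(nn.Module):
    """Predicts the next CAN signal vector from a window of past values."""
    def __init__(self, n_signals=8, hidden=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_signals, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, n_signals)

    def forward(self, window):            # window: (batch, time, n_signals)
        out, _ = self.lstm(window)
        return self.head(out[:, -1, :])   # predicted next signal vector

def is_anomalous(model, window, next_obs, threshold=0.1):
    # The threshold would be calibrated on normal (attack-free) traffic at design time.
    with torch.no_grad():
        err = torch.norm(model(window) - next_obs, dim=-1)
    return err > threshold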


2021, Vol 26 (5), pp. 1-38
Author(s): Eunjin Jeong, Dowhan Jeong, Soonhoi Ha

Existing software development methodologies mostly assume that an application runs on a single device, without concern for the non-functional requirements of an embedded system such as latency and resource consumption. In addition, embedded software is usually developed after the hardware platform is determined, since a non-negligible portion of the code depends on the hardware platform. In this article, we present a novel model-based software synthesis framework for parallel and distributed embedded systems. An application is specified as a set of tasks with given rules for execution and communication. Having such rules enables static analysis that checks for certain software errors at compile time, reducing the verification difficulty. Platform-specific programs are synthesized automatically after the mapping of tasks onto processing elements is determined. The proposed framework can easily be extended to support new hardware platforms, and the communication code synthesis method is extensible and flexible enough to support various communication methods between devices. In addition, fault tolerance can be added by automatically modifying the task graph according to the fault-tolerance configuration selected by the user. The viability of the proposed software development methodology is evaluated with a real-life surveillance application that runs on six processing elements.
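
As an illustration of the kind of intermediate representation such a synthesis flow might manipulate, the sketch below models tasks with explicit execution and communication rules, a mapping onto processing elements, a compile-time consistency check, and a choice of communication primitive. All names and rules here are hypothetical; the abstract does not describe the framework's actual data model or code generators.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    period_ms: int                                 # execution rule: periodic activation
    outputs: list = field(default_factory=list)    # names of consumer tasks

@dataclass
class Mapping:
    task_to_pe: dict                               # task name -> processing element id

def check_mapping(tasks, mapping):
    # Static, compile-time check: every task is mapped, every edge is well-formed.
    names = {t.name for t in tasks}
    for t in tasks:
        assert t.name in mapping.task_to_pe, f"unmapped task {t.name}"
        for dst in t.outputs:
            assert dst in names, f"{t.name} sends to unknown task {dst}"

def synthesize_channel(src, dst, mapping):
    # Choose a communication primitive depending on whether both tasks share a PE.
    same_pe = mapping.task_to_pe[src] == mapping.task_to_pe[dst]
    return "intra-PE queue" if same_pe else "inter-device channel"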


2021
Author(s): Junjie Shi, Jiang Bian, Jakob Richter, Kuan-Hsun Chen, Jörg Rahnenführer, ...

The predictive performance of a machine learning model depends strongly on the corresponding hyper-parameter setting, so hyper-parameter tuning is often indispensable. Normally, such tuning requires the machine learning model to be trained and evaluated on centralized data to obtain a performance estimate. In a distributed machine learning scenario, however, it is not always possible to collect all the data from all nodes due to privacy concerns or storage limitations. Moreover, if data has to be transferred over low-bandwidth connections, the time available for tuning is reduced. Model-Based Optimization (MBO) is a state-of-the-art method for tuning hyper-parameters, but its application to distributed machine learning models or federated learning has received little research attention. This work proposes a framework, MODES, that allows MBO to be deployed on resource-constrained distributed embedded systems. Each node trains an individual model based on its local data, and the goal is to optimize the combined prediction accuracy. The presented framework offers two optimization modes: (1) MODES-B considers the whole ensemble as a single black box and optimizes the hyper-parameters of each individual model jointly, and (2) MODES-I considers all models as clones of the same black box, which allows the optimization to be parallelized efficiently in a distributed setting. We evaluate MODES by conducting experiments on the optimization of the hyper-parameters of a random forest and a multi-layer perceptron. The experimental results demonstrate that MODES outperforms the baseline, i.e., tuning with MBO on each node individually using its local sub-dataset, with improvements in mean accuracy (MODES-B), run-time efficiency (MODES-I), and statistical stability for both modes.
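
For readers unfamiliar with MBO, the sketch below shows a generic surrogate-based tuning loop under the MODES-B view, where the distributed ensemble is treated as one black box whose objective is the combined prediction accuracy reported by the nodes. The Gaussian-process surrogate, upper-confidence-bound acquisition, and random candidate search are simplifying assumptions and do not reproduce the paper's setup.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def ensemble_accuracy(hyperparams):
    # Placeholder black box: train one model per node with these
    # hyper-parameters and return the combined validation accuracy.
    raise NotImplementedError

def mbo(low, high, n_init=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(low, high, size=(n_init, len(low)))
    y = np.array([ensemble_accuracy(x) for x in X])
    for _ in range(n_iter):
        surrogate = GaussianProcessRegressor().fit(X, y)     # model of the black box
        cand = rng.uniform(low, high, size=(256, len(low)))  # random candidate configs
        mu, sigma = surrogate.predict(cand, return_std=True)
        x_next = cand[np.argmax(mu + sigma)]                 # upper-confidence-bound pick
        X = np.vstack([X, x_next])
        y = np.append(y, ensemble_accuracy(x_next))
    return X[np.argmax(y)], y.max()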


Sensors, 2021, Vol 21 (2), pp. 672
Author(s): Sara Blanc, José-Luis Bayo-Montón, Senén Palanca-Barrio, Néstor X. Arreaga-Alvarado

This paper presents a solution to support service discovery for edge-choreography-based distributed embedded systems. The Internet of Things (IoT) edge architectural layer is composed of Raspberry Pi machines, each hosting different services organized according to the choreography collaborative paradigm. The solution adds three message-passing models to the choreography middleware so that it remains coherent and compatible with current IoT messaging protocols. It aims to support blind hot-plugging of new machines and to help with service load balancing. The discovery mechanism is implemented as a broker service and supports regular expressions (regex) in the message scope to discern both the publishing patterns offered by data providers and the needs of client services. Results compare Central Processing Unit (CPU) usage in request-response and data-centric configurations and analyze regex interpreter latency against a traditional message structure, as well as its impact on CPU and memory consumption.
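
The matching step such a discovery broker could perform might look like the sketch below: data providers register the topic patterns they publish, and client services query with a regular expression describing what they need. The class, method names, and example topics are assumptions for illustration; protocol bindings (e.g., MQTT) are not shown.

import re

class DiscoveryBroker:
    def __init__(self):
        self.offers = {}                            # topic -> provider address

    def register(self, topic, provider):
        self.offers[topic] = provider               # hot-plugged machine announces itself

    def discover(self, client_regex):
        # Return providers whose published topics match the client's regex.
        pattern = re.compile(client_regex)
        return {t: p for t, p in self.offers.items() if pattern.fullmatch(t)}

broker = DiscoveryBroker()
broker.register("camera/north/frames", "raspi-01:1883")
broker.register("camera/south/frames", "raspi-02:1883")
print(broker.discover(r"camera/.*/frames"))         # both providers match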


2021, Vol 12 (1), pp. 53-74
Author(s): Fateh Boutekkouk

Self-adaptive distributed embedded systems can automatically adjust their behavior and/or structure at run time in response to predictable or unpredictable events. Architecture description languages (ADLs), in turn, are a convenient way to model a system's architecture as a set of components with well-defined interfaces and links, and they have been well studied and applied in many engineering areas beyond software and hardware engineering. This research work reviews the most relevant ADL taxonomies and surveys from 2000 to the present, selects the ADLs most suitable for self-adaptive embedded systems, and compares standard and non-standard ADLs against a set of key criteria. To do this, a search methodology enabling a systematic review was followed. Results show that only a few standard ADLs have been accepted by the embedded industry, which favors domain-specific ADLs with proven support for adaptivity, real time, energy consumption, and security.


Author(s): Ridha Mehalaine, Fateh Boutekkouk

The objective of this work is to present a new heuristic for solving the problem of fault tolerance in real-time distributed embedded systems. The proposed idea is to model the distributed embedded architecture after the renin-angiotensin-aldosterone system (RAAS), a biological system that plays a major role in the pathophysiology of the cardiovascular system in terms of pressure regulation and vascular, cardiac, and nephrological remodeling. The proposed heuristic deals with uncertain information about a set of periodic tasks that run on multiple processors and must satisfy temporal and energy constraints, from which the scheduling and distribution of these tasks across the processors are derived. To respect the energy constraints, this article introduces energy consumption into dynamic task scheduling by using the dynamic voltage scaling (DVS) technique. The authors found that introducing a detection/prevention mechanism against potential errors into the proposed algorithm is essential for good results.
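
The DVS idea the heuristic builds on can be illustrated with a small sketch: choose the lowest processor frequency at which a periodic task still meets its deadline, since dynamic power scales roughly with the square of the supply voltage times the frequency. The frequency levels and task parameters below are invented, and the RAAS-inspired distribution and error detection/prevention logic of the paper are not reproduced.

FREQ_LEVELS = [0.5, 0.75, 1.0]                # normalized frequencies (1.0 = max)

def pick_frequency(wcet_at_max, deadline):
    # Lowest frequency whose scaled execution time still fits the deadline.
    for f in FREQ_LEVELS:                     # ascending: prefer lower energy first
        if wcet_at_max / f <= deadline:
            return f
    return None                               # infeasible even at full speed

# A task with a 4 ms WCET at full speed and a 6 ms deadline can run at 0.75.
print(pick_frequency(wcet_at_max=4.0, deadline=6.0))   # -> 0.75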


2020, Vol 21 (2), pp. 309-321
Author(s): Abdelkader Aroui, Abou Elhassan Benyamina, Pierre Boulet, Kamel Benhaoua, Amit Kumar Singh

The Network-on-Chip (NoC) is an alternative communication paradigm considered an emerging technology for distributed embedded systems. The traditional use of multiple cores increases computational performance but stresses network communication, causing congestion at nodes and thereby decreasing the global performance of the NoC. To alleviate this phenomenon, several strategies have been proposed to reduce or prevent congestion, such as network status metrics, new routing algorithms, packet injection control, and switching strategies. In this paper, we study congestion in a 2D mesh network through detailed simulations, focusing on the congestion metrics most commonly used in NoCs. According to our experiments and simulations under different traffic scenarios, these metrics are not very representative and do not give a true picture of the state of the NoC nodes at a given cycle. Our study shows that using complementary information about node state and network traffic flow in the design of a new metric can substantially improve the results. We therefore put forward a novel metric that takes the overall operating state of a router into account in the design of an adaptive XY routing algorithm, aiming to improve routing decisions and network performance. We compare the throughput, latency, resource utilization, and congestion occurrence of the proposed metric against three published metrics on two specific traffic patterns across varied packet injection rates. Our results indicate that adaptive XY routing based on the novel metric overcomes congestion and significantly improves resource utilization through load balancing, achieving an average improvement of up to 40% compared to adaptive XY routing based on the previous congestion metrics.
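
A congestion-aware adaptive XY decision of the kind described above can be sketched as follows: when both a minimal X move and a minimal Y move exist, forward the packet toward the neighbouring router reporting the lower congestion value. The composite router-state metric proposed in the paper is abstracted here as an opaque congestion(node) score; this is an illustrative sketch, not the authors' implementation.

def next_hop(cur, dst, congestion):
    cx, cy = cur
    dx, dy = dst
    options = []
    if dx != cx:
        options.append((cx + (1 if dx > cx else -1), cy))   # minimal X move
    if dy != cy:
        options.append((cx, cy + (1 if dy > cy else -1)))   # minimal Y move
    if not options:
        return cur                                          # already at destination
    return min(options, key=congestion)                     # least-congested neighbour

# Example: the lightly loaded column neighbour is preferred over a congested row neighbour.
load = {(1, 0): 0.9, (0, 1): 0.2}
print(next_hop((0, 0), (2, 2), lambda n: load.get(n, 0.0)))  # -> (0, 1)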

