The Recovery Language Approach

Author(s): Vincenzo De Florio

Having discussed the general approach of fault-tolerance languages and their main features, the focus now turns to one particular case: the ARIEL recovery language. ARIEL underpins an approach towards resilient computing that is accordingly dubbed the "recovery language approach" (ReL). In this chapter, the main elements of ReL are first introduced in general terms, coupling each concept to the technical foundations behind it. A fairly extensive description of ARIEL and of a compliant architecture then follows. The target applications for this architecture are distributed programs with non-strict real-time requirements, written in a procedural language such as C and executed on distributed or parallel computers consisting of a predefined (fixed) set of processing nodes. The special emphasis given to ARIEL and its approach stems not from any special qualities of theirs but from the first-hand experience of the author, who conceived, designed, and implemented ARIEL in the course of his studies and can therefore offer the reader what may be considered a practical exercise in system and fault modeling and in application-level fault-tolerance design, recalling and applying several of the concepts introduced earlier.
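To make the core idea concrete, the following is a minimal Python sketch of the notion behind a recovery language: recovery strategies expressed as guarded rules (a condition on the detected system state paired with a recovery action), kept separate from the functional code. The rule representation and node model here are invented for illustration and are not ARIEL's actual syntax.

```python
# Illustrative sketch (not ARIEL syntax): recovery expressed as guarded rules,
# evaluated against the detected state of the system's processing nodes.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Node:
    name: str
    faulty: bool = False

@dataclass
class RecoveryRule:
    guard: Callable[[Node], bool]    # condition on the detected state
    action: Callable[[Node], None]   # recovery action to apply

def restart(node: Node) -> None:
    print(f"restarting {node.name}")
    node.faulty = False

# The classic recovery-language idiom: IF a node is faulty THEN restart it.
rules = [RecoveryRule(guard=lambda n: n.faulty, action=restart)]

nodes = [Node("n1"), Node("n2", faulty=True)]
for node in nodes:
    for rule in rules:
        if rule.guard(node):
            rule.action(node)
```

The point of the separation is that such rules can be changed or recompiled without touching the application's functional code.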

2021 · Vol 1 (1) · Author(s): E. Bertino, M. R. Jahanshahi, A. Singla, R.-T. Wu

Abstract. This paper addresses the problem of efficient and effective data collection and analytics for applications such as civil infrastructure monitoring and emergency management. This problem requires techniques by which data acquisition devices, such as IoT devices, can: (a) perform local analysis of collected data; and (b) based on the results of such analysis, autonomously decide on further data acquisition. The ability to perform local analysis is critical for reducing transmission costs and latency, as the results of an analysis are usually smaller in size than the original data. For example, under strict real-time requirements the analysis results can be transmitted in real time, whereas the actual collected data can be uploaded later. The ability to autonomously decide about further data acquisition enhances scalability and reduces the need for real-time human involvement in data acquisition processes, especially in contexts with critical real-time requirements. The paper focuses on deep neural networks and discusses techniques for supporting transfer learning and pruning, so as to reduce both the time required to train the networks and the size of the networks deployed on IoT devices. We also discuss approaches based on reinforcement learning techniques for enhancing the autonomy of IoT devices.
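As a concrete illustration of the two techniques named above, the sketch below combines transfer learning (freeze a pretrained backbone, retrain only a small task head) with magnitude-based weight pruning in PyTorch. The model choice, layer names, and pruning ratio are assumptions for illustration, not the authors' actual pipeline.

```python
# Hedged sketch: transfer learning + magnitude pruning to shrink a network
# for IoT deployment. Model and pruning ratio are illustrative assumptions.
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

model = models.mobilenet_v2(weights="IMAGENET1K_V1")  # pretrained backbone

# Transfer learning: freeze the feature extractor, retrain a small new head
# (e.g. a hypothetical 2-class infrastructure-damage classifier).
for param in model.features.parameters():
    param.requires_grad = False
model.classifier[1] = nn.Linear(model.last_channel, 2)

# Pruning: zero out 50% of the smallest-magnitude weights per conv layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent
```

The pruned, small-headed network is then fine-tuned on the target data before being exported to the device.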


2021 · Vol 20 (3) · pp. 1-22 · Author(s): David Langerman, Alan George

High-resolution, low-latency applications in computer vision are ubiquitous in today's world of mixed-reality devices. These innovations provide a platform that can leverage improving depth sensors and embedded accelerators to enable higher-resolution, lower-latency processing of 3D scenes using depth-upsampling algorithms. This research demonstrates that filter-based upsampling algorithms are feasible for mixed-reality applications on low-power hardware accelerators. We parallelized and evaluated a depth-upsampling algorithm on two different devices: a reconfigurable-logic FPGA embedded within a low-power SoC, and a fixed-logic embedded graphics processing unit. We demonstrate that both accelerators can meet the 11 ms latency requirement of real-time mixed-reality applications.
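For readers unfamiliar with filter-based depth upsampling, here is a minimal CPU sketch of the general technique: naively upsample the low-resolution depth map to the color image's resolution, then apply a joint bilateral filter guided by the color image so that depth discontinuities align with image edges. The parameter values and the use of OpenCV's ximgproc module (opencv-contrib-python) are illustrative assumptions, not the parallel implementation evaluated in the paper.

```python
# Minimal joint-bilateral depth upsampling sketch (illustrative parameters).
import cv2
import numpy as np

def upsample_depth(depth_lo: np.ndarray, guide_rgb: np.ndarray) -> np.ndarray:
    h, w = guide_rgb.shape[:2]
    # Step 1: bring depth to guide resolution (nearest keeps hard edges).
    depth_hi = cv2.resize(depth_lo, (w, h), interpolation=cv2.INTER_NEAREST)
    # Step 2: joint bilateral filter guided by the color image, so smoothed
    # depth edges follow intensity edges in the guide.
    return cv2.ximgproc.jointBilateralFilter(
        guide_rgb.astype(np.float32), depth_hi.astype(np.float32),
        d=9, sigmaColor=25.0, sigmaSpace=9.0)

guide = np.random.randint(0, 256, (480, 640, 3), np.uint8)  # stand-in image
depth = np.random.rand(120, 160).astype(np.float32)         # stand-in depth
print(upsample_depth(depth, guide).shape)                   # -> (480, 640)
```

Each output pixel is a weighted average of nearby depth samples, with weights falling off both with spatial distance and with color difference in the guide image; that second term is what preserves object boundaries.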


2020 · Vol 13 (1) · pp. 89 · Author(s): Manuel Carranza-García, Jesús Torres-Mateo, Pedro Lara-Benítez, Jorge García-Gutiérrez

Object detection using remote sensing data is a key task of the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability for a particular scenario such as autonomous driving. In this work, we assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy and are also more reliable in the detection of minority classes. Faster R-CNN with Res2Net-101 achieves the best speed/accuracy trade-off but needs lower-resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.
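The transfer-learning recipe behind such comparisons typically looks like the torchvision sketch below: a COCO-pretrained Faster R-CNN has its box predictor replaced by a fresh head for the three target classes plus background. The exact configuration is an assumption for illustration, not the authors' training setup.

```python
# Hedged sketch: adapting a pretrained Faster R-CNN to vehicles, pedestrians,
# and cyclists via transfer learning (torchvision detection API).
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 4  # 3 target classes + background
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the COCO-pretrained box predictor with a fresh 4-class head.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# ...then fine-tune on the target dataset with a standard detection loop.
```

Only the new head starts from random weights; the pretrained backbone and region proposal network give the model a strong starting point with far less training data than training from scratch would require.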


2014 · Vol 18 (11) · pp. 4467-4484 · Author(s): B. Revilla-Romero, J. Thielen, P. Salamon, T. De Groeve, G. R. Brakenridge

Abstract. One of the main challenges for global hydrological modelling is the limited availability of observational data for calibration and model verification. This is particularly the case for real-time applications. This problem could potentially be overcome if discharge measurements based on satellite data were sufficiently accurate to substitute for ground-based measurements. The aim of this study is to test the potential and constraints of converting the remote sensing signal of the Global Flood Detection System into river discharge values. The study uses data for 322 river measurement locations in Africa, Asia, Europe, North America and South America. Satellite discharge measurements were calibrated for these sites and a validation analysis with in situ discharge was performed. The locations with very good performance will be used in a future project in which satellite discharge measurements are obtained on a daily basis to fill the gaps where real-time ground observations are not available; these include several international river locations in Africa: the Niger, Volta and Zambezi rivers. Analysis of the potential factors affecting the satellite signal was based on a classification decision tree (random forest) and showed that mean discharge, climatic region, land cover and upstream catchment area are the dominant variables determining good or poor performance of the measurement sites. In general terms, higher skill scores were obtained for locations with one or more of the following characteristics: a river width greater than 1 km; a large floodplain area and, in flooded forest, a potential flooded area greater than 40%; sparse vegetation, croplands or grasslands, and closed-to-open or open forest; leaf area index > 2; a tropical climatic area; and no hydraulic infrastructure. Locations where river ice cover is seasonally present also obtained higher skill scores. This work provides guidance on the best locations, and the limitations, of estimating discharge values from these daily satellite signals.
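A hedged sketch of the kind of analysis described: a random-forest classifier over the four dominant site descriptors, whose feature importances indicate which variables drive good or poor performance. The feature encoding and data below are random placeholders, not the study's 322-site dataset.

```python
# Illustrative sketch: random forest relating site descriptors to whether a
# site yields good satellite-based discharge estimates. Data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns (assumed encoding): mean discharge, climate-region id,
# land-cover id, upstream catchment area.
X = rng.random((322, 4))        # stand-in for the 322 sites
y = rng.integers(0, 2, 322)     # 1 = good skill score, 0 = poor

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # predictive skill estimate
print(dict(zip(["discharge", "climate", "landcover", "area"],
               clf.fit(X, y).feature_importances_.round(3))))
```

The feature-importance ranking is what lets such a study name mean discharge, climatic region, land cover, and catchment area as the dominant factors.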


Sensors · 2021 · Vol 21 (12) · pp. 4045 · Author(s): Alessandro Sassu, Jose Francisco Saenz-Cogollo, Maurizio Agelli

Edge computing is the best approach for meeting the exponential demand and the real-time requirements of many video analytics applications. Since most recent advances in extracting information from images and video rely on computation-heavy deep learning algorithms, there is a growing need for solutions that allow new models to be deployed and used on scalable and flexible edge architectures. In this work, we present Deep-Framework, a novel open-source framework for developing edge-oriented real-time video analytics applications based on deep learning. Deep-Framework has a scalable multi-stream architecture based on Docker and abstracts away from the user the complexity of cluster configuration, orchestration of services, and GPU resource allocation. It provides Python interfaces for integrating deep learning models developed with the most popular frameworks, as well as high-level APIs based on standard HTTP and WebRTC interfaces for consuming the extracted video data on clients running in browsers or on any other web-based platform.
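As an example of what consuming such a high-level HTTP API might look like, here is a deliberately hypothetical Python client; the host, port, endpoint path, and JSON fields are invented for illustration and are not Deep-Framework's documented API.

```python
# Hypothetical client sketch: endpoint and JSON shape are invented for
# illustration and are NOT Deep-Framework's documented interface.
import requests

EDGE_HOST = "http://edge-node:8080"  # assumed address of the edge cluster

# Assumed endpoint returning per-frame analytics results for one stream.
resp = requests.get(f"{EDGE_HOST}/streams/cam0/results", timeout=5)
resp.raise_for_status()

for result in resp.json():           # assumed: list of per-frame results
    print(result.get("timestamp"), result.get("objects"))
```

The design point such APIs serve is that clients only ever handle compact extracted data (detections, tracks, attributes) over standard web protocols, never the raw video or the GPU-side model execution.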


2015 · Vol 738-739 · pp. 1105-1110 · Author(s): Yuan Qing Qin, Ying Jie Cheng, Chun Jie Zhou

This paper surveys the state of the art in real-time communication for industrial wireless local area networks (WLANs) and identifies suitable approaches for meeting real-time requirements in the future. First, the paper summarizes the features of industrial WLANs and the challenges they face. Then, for each real-time problem of industrial WLANs, the fundamental mechanism of each recent representative solution is analyzed in detail, and the characteristics and performance of these solutions are compared. Finally, the paper summarizes the current state of research and discusses the future development of industrial WLANs.

