Time-Fluid Field-Based Coordination through Programmable Distributed Schedulers

2021 ◽  
Volume 17, Issue 4 ◽
Author(s):  
Danilo Pianini ◽  
Roberto Casadei ◽  
Mirko Viroli ◽  
Stefano Mariani ◽  
Franco Zambonelli

Emerging application scenarios, such as cyber-physical systems (CPSs), the Internet of Things (IoT), and edge computing, call for coordination approaches addressing openness, self-adaptation, heterogeneity, and deployment agnosticism. Field-based coordination is one such approach, promoting the idea of programming system coordination declaratively from a global perspective, in terms of functional manipulation and evolution in "space and time" of distributed data structures called fields. More specifically regarding time, in field-based coordination (as in many other distributed approaches to coordination) it is assumed that local activities in each device are regulated by a fair and unsynchronised fixed clock working at the platform level. In this work, we challenge this assumption, and propose an alternative approach where scheduling is programmed in a natural way (along with usual field-based coordination) in terms of causality fields, each enacting a programmable distributed notion of a computation "cause" (why and when a field computation has to be locally computed) and how it should change across time and space. Starting from low-level platform triggers, such causality fields can be organised into multiple layers, up to high-level, collectively-computed time abstractions, to be used at the application level. This reinterpretation of time in terms of articulated causality relations allows us to express what we call "time-fluid" coordination, where scheduling can be finely tuned so as to select the triggers to react to, generally allowing performance (system reactivity) and cost (resource usage) of computations to be balanced adaptively. We formalise the proposed scheduling framework for field-based coordination in the context of the field calculus, discuss an implementation in the aggregate computing framework, and finally evaluate the approach via simulation on several case studies.
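
To make the notion of programmable causality concrete, here is a minimal Python sketch of a device whose computation rounds are driven by composable triggers rather than a fixed clock. It is not the authors' field calculus implementation; the Cause, periodic, and value_changed names are illustrative assumptions.

```python
# A minimal sketch (not the authors' field calculus implementation) of the idea
# that a device's rounds are driven by composable, programmable "causes"
# rather than by a fixed platform clock.

import random
import time
from typing import Callable


class Cause:
    """A programmable trigger: fire() returns True when a new round is due."""

    def __init__(self, predicate: Callable[[], bool]):
        self.predicate = predicate

    def fire(self) -> bool:
        return self.predicate()

    def __or__(self, other: "Cause") -> "Cause":
        # Causes compose: react when either sub-cause fires.
        return Cause(lambda: self.fire() or other.fire())


def periodic(period_s: float) -> Cause:
    """Low-level platform trigger: fire at most once per period."""
    last = {"t": time.monotonic()}

    def due() -> bool:
        now = time.monotonic()
        if now - last["t"] >= period_s:
            last["t"] = now
            return True
        return False

    return Cause(due)


def value_changed(read: Callable[[], float], threshold: float) -> Cause:
    """Higher-level trigger: fire when a sensed value drifts beyond a threshold."""
    prev = {"v": read()}

    def changed() -> bool:
        v = read()
        if abs(v - prev["v"]) >= threshold:
            prev["v"] = v
            return True
        return False

    return Cause(changed)


def run_rounds(cause: Cause, compute_round: Callable[[], None], steps: int) -> None:
    """Evaluate the causality condition and run a round only when it fires."""
    for _ in range(steps):
        if cause.fire():
            compute_round()
        time.sleep(0.01)  # cheap platform-level polling


if __name__ == "__main__":
    sensor = lambda: random.gauss(20.0, 1.0)
    trigger = periodic(0.1) | value_changed(sensor, 2.0)
    run_rounds(trigger, lambda: print("round executed"), steps=100)
```

Tuning which causes are composed into the trigger is what lets reactivity be traded against resource usage.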

Author(s):  
Lichao Xu ◽  
Szu-Yun Lin ◽  
Andrew W. Hlynka ◽  
Hao Lu ◽  
Vineet R. Kamat ◽  
...  

Abstract: There has been a strong need for simulation environments capable of modeling the deep interdependencies between complex systems encountered during natural hazards, such as the interactions and coupled effects between civil infrastructure system response, human behavior, and social policies, for improved community resilience. Coupling such complex components in an integrated simulation requires continuous data exchange between the different simulators simulating separate models during the entire simulation process. This can be implemented by means of distributed simulation platforms or data passing tools. In order to provide a systematic reference for simulation tool choice and to facilitate the development of compatible distributed simulators for studying deep interdependencies in the context of natural hazards, this article focuses on generic tools suitable for integrating simulators from different fields rather than on platforms used mainly within specific fields. With this aim, the article provides a comprehensive review of the most commonly used generic distributed simulation platforms (Distributed Interactive Simulation (DIS), High Level Architecture (HLA), Test and Training Enabling Architecture (TENA), and Distributed Data Services (DDS)) and data passing tools (Robot Operating System (ROS) and Lightweight Communication and Marshalling (LCM)) and compares their advantages and disadvantages. Three specific limitations of existing platforms are identified from the perspective of natural hazard simulation. To mitigate the identified limitations, two platform design recommendations are provided, namely message exchange wrappers and hybrid communication, to help improve data passing capabilities in existing solutions and to provide guidance for the design of a new domain-specific distributed simulation framework.
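
As an illustration of the "message exchange wrapper" recommendation, the sketch below wraps each simulator behind a neutral publish/subscribe interface so that the underlying transport (HLA, DDS, ROS, LCM, ...) could be swapped out. The MessageBus and SimulatorWrapper classes and the topic names are illustrative assumptions, not part of any of the reviewed standards.

```python
# A minimal sketch of the "message exchange wrapper" idea: each simulator is
# wrapped behind a common publish/subscribe interface so that data exchange
# does not depend on any one middleware. All class and message names here are
# illustrative assumptions.

import json
from typing import Callable, Dict, List


class MessageBus:
    """In-process stand-in for the shared transport (could be DDS, LCM, ...)."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = {}

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self._subscribers.get(topic, []):
            handler(payload)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers.setdefault(topic, []).append(handler)


class SimulatorWrapper:
    """Translates a simulator's native state into shared messages and back."""

    def __init__(self, name: str, bus: MessageBus) -> None:
        self.name = name
        self.bus = bus

    def push_state(self, topic: str, native_state: dict) -> None:
        # Force a neutral, JSON-serialisable copy before publishing.
        neutral = json.loads(json.dumps(native_state))
        self.bus.publish(topic, {"source": self.name, "data": neutral})

    def on_state(self, topic: str, callback: Callable[[dict], None]) -> None:
        self.bus.subscribe(topic, callback)


if __name__ == "__main__":
    bus = MessageBus()
    structure_sim = SimulatorWrapper("structural_response", bus)
    behaviour_sim = SimulatorWrapper("human_behaviour", bus)

    behaviour_sim.on_state("building/drift", lambda msg: print("behaviour model received", msg))
    structure_sim.push_state("building/drift", {"storey": 3, "drift_ratio": 0.012})
```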


2021 ◽  
Vol 54 (2) ◽  
pp. 1-35
Author(s):  
Chenning Li ◽  
Zhichao Cao ◽  
Yunhao Liu

With the development of the Internet of Things (IoT), many kinds of wireless signals (e.g., Wi-Fi, LoRa, RFID) now fill our living and working spaces. Beyond communication, wireless signals can sense the status of surrounding objects, known as wireless sensing, through their reflection, scattering, and refraction while propagating in space. In the last decade, many sophisticated wireless sensing techniques and systems have been widely studied for various applications (e.g., gesture recognition, localization, and object imaging). Recently, deep Artificial Intelligence (AI), also known as Deep Learning (DL), has shown great success in computer vision, and some works have initially shown that deep AI can benefit wireless sensing as well, leading to a brand-new step toward ubiquitous sensing. In this survey, we focus on the evolution of wireless sensing enhanced by deep AI techniques. We first present a general workflow of Wireless Sensing Systems (WSSs), which consists of signal pre-processing, high-level feature extraction, and sensing model formulation. For each module, existing deep AI-based techniques are summarized and compared with traditional approaches. Then, we provide a view of the issues and challenges induced by combining deep AI and wireless sensing. Finally, we discuss future trends in using deep AI to enable ubiquitous wireless sensing.
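
As a concrete illustration of that three-stage workflow, here is a minimal sketch assuming a PyTorch environment: amplitude-based pre-processing of CSI-like input, convolutional feature extraction, and a classification head as the sensing model. The tensor shapes and the six-gesture label set are illustrative assumptions, not drawn from any particular system in the survey.

```python
# A minimal sketch of the survey's three-stage workflow: signal pre-processing,
# high-level feature extraction, and sensing-model formulation. Shapes and the
# gesture-label set are illustrative assumptions.

import torch
import torch.nn as nn


def preprocess_csi(raw_csi: torch.Tensor) -> torch.Tensor:
    """Signal pre-processing: take the amplitude and normalise per sample."""
    amplitude = raw_csi.abs()                        # drop phase for simplicity
    mean = amplitude.mean(dim=-1, keepdim=True)
    std = amplitude.std(dim=-1, keepdim=True) + 1e-6
    return (amplitude - mean) / std


class GestureSensingModel(nn.Module):
    """Feature extraction (1-D convolutions) + sensing-model head (classifier)."""

    def __init__(self, n_subcarriers: int = 30, n_classes: int = 6) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_subcarriers, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, subcarriers, time)
        return self.head(self.features(x).squeeze(-1))


if __name__ == "__main__":
    raw = torch.randn(8, 30, 256, dtype=torch.cfloat)  # fake CSI: batch x subcarriers x time
    model = GestureSensingModel()
    logits = model(preprocess_csi(raw))
    print(logits.shape)  # torch.Size([8, 6])
```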


Author(s):  
Rutvik Solanki

Abstract: Technological advancements such as the Internet of Things (IoT) and Artificial Intelligence (AI) are helping to boost the global agricultural sector, which is expected to grow by around seventy percent in the next two decades. Sensor-based systems are in place to keep track of the plants and the surrounding environment. This technology allows farmers to watch and control farm operations from afar, but it has a few limitations: for farmers, these technologies are prohibitively expensive and demand a high level of technological competence. In addition, climate change has a significant impact on crops, because increased temperatures and changes in precipitation patterns increase the likelihood of disease outbreaks, resulting in crop losses and potentially irreversible plant destruction. Because of recent advancements in IoT and cloud computing, new applications built on highly innovative and scalable service platforms are now being developed. The use of IoT solutions holds enormous promise for improving the quality and safety of agricultural products. Precision farming's telemonitoring systems rely heavily on IoT platforms; therefore, this article briefly reviews the most common IoT platforms used in precision agriculture, highlighting both their key benefits and their drawbacks.
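
For context on what such sensor-based telemonitoring looks like in practice, the sketch below sends a single soil-moisture reading to a cloud endpoint as JSON. The endpoint URL, field names, and the read_soil_moisture() stub are hypothetical; a real deployment would use the ingestion API of whichever IoT platform is chosen.

```python
# A minimal sketch of a sensor-to-cloud telemetry call of the kind supported by
# the reviewed IoT platforms. The endpoint, field names, and sensor stub are
# illustrative assumptions, not tied to any specific platform in the article.

import json
import time
import urllib.request

TELEMETRY_URL = "https://example-iot-platform.invalid/api/telemetry"  # hypothetical


def read_soil_moisture() -> float:
    """Stand-in for a real sensor driver; returns volumetric water content (%)."""
    return 23.7


def send_telemetry(device_id: str, moisture: float) -> None:
    payload = json.dumps({
        "device_id": device_id,
        "timestamp": time.time(),
        "soil_moisture_pct": moisture,
    }).encode("utf-8")
    request = urllib.request.Request(
        TELEMETRY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        print("platform replied:", response.status)


if __name__ == "__main__":
    send_telemetry("field-node-01", read_soil_moisture())
```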


2021 ◽  
Vol 39 (4) ◽  
pp. 1-33
Author(s):  
Fulvio Corno ◽  
Luigi De Russis ◽  
Alberto Monge Roffarello

In the Internet of Things era, users are willing to personalize the joint behavior of their connected entities, i.e., smart devices and online services, by means of trigger-action rules such as “IF the entrance Nest security camera detects a movement, THEN blink the Philips Hue lamp in the kitchen.” Unfortunately, the spread of newly supported technologies makes the number of possible combinations between triggers and actions grow continuously, thus motivating the need to assist users in discovering new rules and functionality, e.g., through recommendation techniques. To this end, we present a semantic Conversational Search and Recommendation (CSR) system able to suggest pertinent IF-THEN rules that can be easily deployed in different contexts starting from an abstract user need. By exploiting a conversational agent, the user can communicate her current personalization intention by specifying a set of functionality at a high level, e.g., to decrease the temperature of a room when she leaves it. Stemming from this input, the system implements a semantic recommendation process that takes into account (a) the current user's intention, (b) the connected entities owned by the user, and (c) the user's long-term preferences revealed by her profile. If not satisfied with the suggestions, the user can converse with the system to provide further feedback, i.e., a short-term preference, thus allowing the system to provide refined recommendations that better align with the original intention. We evaluate the system by running different offline experiments with simulated users and real-world data. First, we test the recommendation process in different configurations, and we show that recommendation accuracy and similarity with target items increase as the interaction between the algorithm and the user proceeds. Then, we compare the system with similar baseline recommender systems. Results are promising and demonstrate its effectiveness in recommending IF-THEN rules that satisfy the user's current personalization intention.
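
A minimal sketch of the three recommendation signals named above, using a simple weighted-sum scoring rather than the authors' semantic CSR pipeline; the rule set, tags, and weights are illustrative assumptions.

```python
# A candidate IF-THEN rule is scored against (a) the stated intention,
# (b) the entities the user actually owns, and (c) long-term preferences.
# Illustrative assumption: a weighted sum, not the authors' algorithm.

from dataclasses import dataclass
from typing import List, Set


@dataclass
class TriggerActionRule:
    trigger: str              # e.g. "presence sensor reports room empty"
    action: str               # e.g. "set thermostat to eco mode"
    required_entities: Set[str]
    tags: Set[str]            # high-level functionality labels


def score_rule(rule: TriggerActionRule,
               intention_tags: Set[str],
               owned_entities: Set[str],
               preferred_tags: Set[str]) -> float:
    # (a) overlap with the current intention, (b) deployability with owned
    # entities, (c) alignment with the long-term profile.
    intention = len(rule.tags & intention_tags) / max(len(intention_tags), 1)
    deployable = 1.0 if rule.required_entities <= owned_entities else 0.0
    preference = len(rule.tags & preferred_tags) / max(len(rule.tags), 1)
    return 0.5 * intention + 0.3 * deployable + 0.2 * preference


if __name__ == "__main__":
    rules: List[TriggerActionRule] = [
        TriggerActionRule("presence sensor reports room empty",
                          "set thermostat to eco mode",
                          {"presence_sensor", "thermostat"},
                          {"temperature", "energy_saving"}),
        TriggerActionRule("security camera detects movement",
                          "blink the kitchen lamp",
                          {"camera", "smart_lamp"},
                          {"security", "lighting"}),
    ]
    ranked = sorted(
        rules,
        key=lambda r: score_rule(r, {"temperature"},
                                 {"presence_sensor", "thermostat"},
                                 {"energy_saving"}),
        reverse=True,
    )
    print([r.action for r in ranked])
```

Conversational feedback would then adjust the short-term preference terms before re-ranking.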


2015 ◽  
Vol 16 (2) ◽  
pp. 189-235 ◽  
Author(s):  
DANIELA INCLEZAN ◽  
MICHAEL GELFOND

Abstract: The paper introduces a new modular action language, $\mathcal{ALM}$, and illustrates the methodology of its use. It is based on the approach of Gelfond and Lifschitz (1993, Journal of Logic Programming 17, 2–4, 301–321; 1998, Electronic Transactions on AI 3, 16, 193–210) in which a high-level action language is used as a front end for a logic programming system description. The resulting logic programming representation is used to perform various computational tasks. The methodology based on existing action languages works well for small and even medium size systems, but is not meant to deal with larger systems that require structuring of knowledge. $\mathcal{ALM}$ is meant to remedy this problem. Structuring of knowledge in $\mathcal{ALM}$ is supported by the concepts of module (a formal description of a specific piece of knowledge packaged as a unit), module hierarchy, and library, and by the division of a system description of $\mathcal{ALM}$ into two parts: theory and structure. A theory consists of one or more modules with a common theme, possibly organized into a module hierarchy based on a dependency relation. It contains declarations of sorts, attributes, and properties of the domain together with axioms describing them. Structures are used to describe the domain's objects. These features, together with the means for defining classes of a domain as special cases of previously defined ones, facilitate the stepwise development, testing, and readability of a knowledge base, as well as the creation of knowledge representation libraries.
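
To make the theory/structure division more tangible, here is a minimal Python sketch that models a system description as modules (with sorts, attributes, and axioms) plus a structure naming the domain's objects. It is an illustrative data-structure analogy, not $\mathcal{ALM}$ syntax; the "moving" module and its axiom text are assumptions.

```python
# A sketch of the knowledge-structuring idea: a system description splits into
# a theory (modules of sorts, attributes, axioms, with a dependency relation)
# and a structure (the concrete objects of the domain).

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Module:
    name: str
    depends_on: List[str] = field(default_factory=list)   # module hierarchy
    sorts: Dict[str, str] = field(default_factory=dict)   # sort -> parent sort
    attributes: Dict[str, str] = field(default_factory=dict)
    axioms: List[str] = field(default_factory=list)        # kept as plain text


@dataclass
class Theory:
    modules: List[Module]


@dataclass
class Structure:
    objects: Dict[str, str]   # object name -> its sort


@dataclass
class SystemDescription:
    theory: Theory
    structure: Structure


if __name__ == "__main__":
    moving = Module(
        name="moving",
        sorts={"move": "actions", "agent": "things", "area": "things"},
        attributes={"actor": "move -> agent", "destination": "move -> area"},
        axioms=["occurs(M) causes loc_in(actor(M)) = destination(M)"],
    )
    description = SystemDescription(
        theory=Theory(modules=[moving]),
        structure=Structure(objects={"john": "agent", "kitchen": "area",
                                     "go_to_kitchen": "move"}),
    )
    print(description.theory.modules[0].name, "->", description.structure.objects)
```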


Author(s):  
John P.T. Mo ◽  
Ronald C. Beckett

Since the announcement of Industry 4.0 in 2012, multiple variants of this industry paradigm have emerged, built on the common platform of the Internet of Things. Traditional engineering-driven industries such as aerospace and automotive are able to align with Industry 4.0 and operate on the requirements of the Internet of Things platform. Process-driven industries such as water treatment and food processing are more influenced by societal perspectives and evolve into Water 4.0 or Dairy 4.0. In essence, the main outcomes of these X4.0 paradigms (where X can be any one of Quality, Water, or a combination of these) are facilitating communication between socio-technical systems and accumulating large amounts of data. As the X4.0 paradigms are researched, defined, developed, and applied, many real examples in industry have demonstrated a lack of system-of-systems design consideration; for example, training based on the use of digital twins to simulate operation scenarios and maintenance faults may lag behind events triggered in the hostile real-world environment. This paper examines, from a high-level system-of-systems perspective, how transdisciplinary engineering can incorporate data quality into the often neglected system elements of people and process while adapting applications to operate within the X4.0 paradigms.


Author(s):  
Anton Dries ◽  
Angelika Kimmig ◽  
Jesse Davis ◽  
Vaishak Belle ◽  
Luc de Raedt

The ability to solve probability word problems, such as those found in introductory discrete mathematics textbooks, is an important cognitive and intellectual skill. In this paper, we develop a two-step, end-to-end, fully automated approach that answers probability exercises formulated in natural language. In the first step, a question formulated in natural language is analysed and transformed into a high-level model specified in a declarative language. In the second step, a solution to the high-level model is computed using a probabilistic programming system. On a dataset of 2160 probability problems, our solver is able to correctly answer 97.5% of the questions given a correct model. On the end-to-end evaluation, we are able to answer 12.5% of the questions (or 31.1% if we exclude examples not supported by design).
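
As a toy illustration of the second step (solving a declarative probability model), the sketch below answers an urn-style question by exhaustive enumeration; the example problem and function names are assumptions, and the authors' system instead relies on a probabilistic programming language.

```python
# A minimal sketch of solving a declarative probability model by plain
# enumeration. The example problem ("draw two marbles without replacement")
# is an illustrative assumption.

from fractions import Fraction
from itertools import permutations


def probability_both_red(reds: int, blues: int) -> Fraction:
    """P(both drawn marbles are red) when drawing 2 without replacement."""
    marbles = ["red"] * reds + ["blue"] * blues
    outcomes = list(permutations(range(len(marbles)), 2))   # ordered draws
    favourable = sum(
        1 for i, j in outcomes if marbles[i] == "red" and marbles[j] == "red"
    )
    return Fraction(favourable, len(outcomes))


if __name__ == "__main__":
    # "An urn contains 3 red and 2 blue marbles; two are drawn at random
    #  without replacement. What is the probability that both are red?"
    print(probability_both_red(3, 2))   # 3/10
```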


2021 ◽  
Author(s):  
Benjamin Secker

Use of the Internet of Things (IoT) is poised to be the next big advancement in environmental monitoring. We present the high-level software side of a proof-of-concept that demonstrates an end-to-end environmental monitoring system, replacing Greater Wellington Regional Council's expensive data loggers with low-cost, IoT-centric embedded devices, and its supporting cloud platform. The proof-of-concept includes a MicroPython-based software stack running on an ESP32 microcontroller. The device software includes a built-in web server that hosts a responsive web app for configuring the device. Telemetry data is sent over Vodafone's NB-IoT network and stored in Azure IoT Central, where it can be visualised and exported. While future development is required for a production-ready system, the proof-of-concept justifies the use of modern IoT technologies for environmental monitoring. The open-source nature of the project means that the knowledge gained can be re-used and modified to suit the use cases of other organisations.
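
A minimal MicroPython-flavoured sketch of such a telemetry loop, assuming an ESP32 board with an analogue sensor on pin 34 and the urequests HTTP client; the placeholder endpoint and the scaling factor are assumptions standing in for the NB-IoT/Azure IoT Central path used by the actual proof-of-concept.

```python
# A sketch of the device telemetry loop: read a sensor, package a JSON message,
# hand it to the network layer. Endpoint, pin number, and scaling are
# illustrative assumptions.

import json
import time

from machine import ADC, Pin     # MicroPython modules, available on ESP32
import urequests                  # micropython-lib HTTP client

TELEMETRY_URL = "https://example-telemetry-endpoint.invalid/ingest"  # hypothetical
LEVEL_SENSOR = ADC(Pin(34))       # e.g. an analogue water-level sensor


def read_level_mm():
    """Convert the raw ADC count into millimetres (illustrative scaling)."""
    return LEVEL_SENSOR.read() * 0.5


def send_reading():
    payload = json.dumps({"ts": time.time(), "water_level_mm": read_level_mm()})
    response = urequests.post(
        TELEMETRY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    response.close()


while True:
    send_reading()
    time.sleep(15 * 60)           # one reading every 15 minutes
```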


Author(s):  
W. Wang ◽  
Z. He ◽  
D. Huang ◽  
X. Zhang

The application of the Internet of Things in the surveying and mapping industry is still at an exploratory stage and has not yet formed a unified standard. The Chongqing Institute of Surveying and Mapping (CQISM) launched the research project "Research on the Technology of Internet of Things for Smart City". The project focuses on the key technologies of information transmission and exchange on the Internet of Things platform. Data standards for the Internet of Things are designed, and real-time acquisition, mass storage, and distributed data services for large numbers of sensors are realized. On this basis, CQISM deploys a prototype Internet of Things platform. A simulated Connected Car application demonstrates that the platform design is sound and practical.


2011 ◽  
Vol 12 (1-2) ◽  
pp. 127-156 ◽  
Author(s):  
JOACHIM SCHIMPF ◽  
KISH SHEN

Abstract: ECLiPSe is a Prolog-based programming system, aimed at the development and deployment of constraint programming applications. It is also used for teaching most aspects of combinatorial problem solving, for example, problem modelling, constraint programming, mathematical programming and search techniques. It uses an extended Prolog as its high-level modelling and control language, complemented by several constraint solver libraries, interfaces to third-party solvers, an integrated development environment and interfaces for embedding into host environments. This paper discusses language extensions, implementation aspects, components, and tools that we consider relevant on the way from Logic Programming to Constraint Logic Programming.
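
To illustrate the model-and-search style of combinatorial problem solving that ECLiPSe is used to teach, here is a tiny cryptarithmetic puzzle solved by brute-force enumeration in plain Python; it is not ECLiPSe code and does not reflect how its constraint solvers work.

```python
# A sketch of "state the constraints, then search": find distinct digits,
# with T and F nonzero, such that TWO + TWO = FOUR. Brute-force enumeration
# stands in for constraint propagation and directed search.

from itertools import permutations


def solve_two_plus_two() -> dict:
    letters = "TWOFUR"
    for digits in permutations(range(10), len(letters)):
        env = dict(zip(letters, digits))
        if env["T"] == 0 or env["F"] == 0:
            continue
        two = 100 * env["T"] + 10 * env["W"] + env["O"]
        four = 1000 * env["F"] + 100 * env["O"] + 10 * env["U"] + env["R"]
        if 2 * two == four:
            return env
    return {}


if __name__ == "__main__":
    print(solve_two_plus_two())
```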

