PIM: A Novel Architecture for Coordinating Behavior of Distributed Systems

AI Magazine ◽  
2010 ◽  
Vol 31 (2) ◽  
pp. 9 ◽  
Author(s):  
Kenneth M. Ford ◽  
James Allen ◽  
Niranjan Suri ◽  
Patrick J. Hayes ◽  
Robert Morris

Process integrated mechanisms (PIM) offer a new approach to the problem of coordinating the activity of physically distributed systems or devices. Current approaches to coordination all have well-recognized strengths and weaknesses. We propose a novel architecture to add to the mix, called the Process Integrated Mechanism (PIM), which enjoys the advantages of having a single controlling authority while avoiding the structural difficulties that have traditionally led to its rejection in many complex settings. In many situations, PIMs improve on previous models with regard to coordination, security, ease of software development, robustness, and communication overhead. In the PIM architecture, the components are conceived as parts of a single mechanism, even when they are physically separated and operate asynchronously.

The PIM model offers promise as an effective infrastructure for handling tasks that require a high degree of time-sensitive coordination between components, as well as a clean mechanism for coordinating the high-level goals of loosely coupled systems. PIM models enable coordination without the fragility and high communication overhead of centralized control, but also without the uncertainty associated with the system-level behavior of a multi-agent system (MAS). The PIM model offers ease of programming, with advantages over both multi-agent systems and centralized architectures. It has the robustness of a multi-agent system without the significant complexity and overhead required for inter-agent communication and negotiation. In contrast to centralized approaches, it does not require managing the large amounts of data that the coordinating process needs to compute a global view. In a PIM, the process moves to the data and may perform computations on the components where the data is locally available, sharing only the information needed for coordination of the other components.

While there are many remaining research issues to be addressed, we believe that PIMs offer an important and novel technique for the control of distributed systems.
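The "process moves to the data" idea above can be sketched in a few lines. This is a minimal illustration only, not the published PIM implementation; the names `Component` and `PIMProcess` and the max-reading task are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Component:
    """A physically separate device holding local sensor data."""
    name: str
    readings: list

class PIMProcess:
    """A single coordinating process that migrates to each component,
    computes on locally available data, and carries only the small
    shared state needed for coordination."""
    def __init__(self):
        self.shared_state = {"max_reading": float("-inf"), "source": None}

    def run_on(self, component: Component):
        # Computation happens where the data lives; only the summary
        # (shared_state) moves between components, not the raw readings.
        local_max = max(component.readings)
        if local_max > self.shared_state["max_reading"]:
            self.shared_state = {"max_reading": local_max,
                                 "source": component.name}
        return self.shared_state

components = [
    Component("uav-1", [3.2, 4.1]),
    Component("uav-2", [9.7, 1.0]),
    Component("uav-3", [5.5, 5.6]),
]
process = PIMProcess()
for c in components:          # the single process cycles through the components
    process.run_on(c)
print(process.shared_state)   # {'max_reading': 9.7, 'source': 'uav-2'}
```

Note that each component only ever sees the compact `shared_state`, which is the sense in which communication overhead stays low relative to shipping all raw data to a central coordinator.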

2021 ◽  
Author(s):  
Tadaaki Hosaka

Background: Integrated Information Theory (IIT) has been attracting attention as a theory of consciousness. The latest version, IIT 3.0, is still at the stage of accumulating knowledge concerning fundamental networks. This paper evaluates the system-level integrated conceptual information of a major complex, ΦMax, associated with the center of consciousness, for a small-scale network containing two small loops, in accordance with the IIT 3.0 framework. We focus on the following parameters characterizing the system model: (1) the number of nodes in the loop, (2) the frustration of the loop, and (3) the temperature controlling the stochastic fluctuation of the state transition. Specifically, assuming that the two loops are coupled systems, such as cerebral hemispheres, we investigate the effect of these parameters on the values of ΦMax and on the conditions under which major complexes are formed by a single loop rather than the entire network.

Results: Our first finding is that the parity of the number of nodes forming a loop has a strong effect on the integrated conceptual information ΦMax. For loops with an even number of nodes, the number of concepts tends to decrease, and ΦMax becomes smaller. When the loop is formed with an odd number of nodes, the system without frustration and the system with two frustrated loops can have exactly the same ΦMax. It is also shown that, although counterintuitive, the value of ΦMax can be maximized in the presence of stochastic fluctuations. Our second finding is that a major complex is more likely to be formed by a small number of nodes under small stochastic fluctuations. In particular, this tendency is enhanced for larger numbers of nodes constituting a loop. On the other hand, the entire network can easily become a major complex under larger stochastic fluctuations, and this tendency can be reinforced by frustration.

Conclusions: Our results indicate that the entire network dominates and maintains a high level of consciousness in the presence of a certain degree of fluctuation and frustration, which may qualitatively correspond to actual neural behavior. These results are expected to contribute to future verification of the consistency of IIT with the actual nervous system.
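Two of the notions above, loop frustration and temperature-controlled stochastic transitions, can be made concrete with a toy model. This is a hedged sketch only: the Glauber-style update rule and ±1 node states are standard statistical-physics conventions chosen for illustration, not the exact IIT 3.0 system model used in the paper.

```python
import math
import random

def is_frustrated(couplings):
    """A loop is frustrated when the product of its couplings is negative,
    i.e. no state assignment can satisfy every coupling simultaneously."""
    prod = 1
    for j in couplings:
        prod *= j
    return prod < 0

def glauber_step(state, couplings, temperature, rng):
    """Update one randomly chosen +/-1 node on a loop; higher temperature
    means more stochastic fluctuation in the state transition."""
    n = len(state)
    i = rng.randrange(n)
    # local field from the two loop neighbours (couplings[i] links i to i+1)
    h = couplings[i - 1] * state[i - 1] + couplings[i] * state[(i + 1) % n]
    p_up = 1.0 / (1.0 + math.exp(-2.0 * h / temperature))
    state[i] = 1 if rng.random() < p_up else -1
    return state

print(is_frustrated([+1, +1, -1]))  # True: odd number of negative couplings
```

As `temperature` grows, `p_up` approaches 0.5 regardless of the neighbours, which is the regime of large stochastic fluctuations discussed in the Results.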


Author(s):  
Supriya Ghosh

Now that we have discussed today’s information enterprise, this chapter turns to the larger concept of interoperability. The word is not simple to define, and much has been written about the topic, so this chapter begins by providing an objective definition of the subject matter. It then defines particular types of interoperability and how these types are measured. It defines the concept of loosely coupled systems and explains how to obtain greater interoperability through looser coupling. It describes an objective way to measure interoperability based on the LISI profile in use by the DoD and NATO, and then provides an understanding of architecture strategies for achieving greater interoperability. The chapter ends by discussing interoperability measures within large-scale distributed systems.
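The LISI profile mentioned above grades interoperability on five maturity levels, 0 through 4. The level names below follow the DoD LISI reference model; the dictionary layout and the weakest-link helper are illustrative assumptions, not part of the chapter.

```python
# The five LISI maturity levels (descriptions paraphrased).
LISI_LEVELS = {
    0: "Isolated (manual re-entry, no direct connection)",
    1: "Connected (peer-to-peer electronic exchange)",
    2: "Functional (distributed, minimal common functions)",
    3: "Domain (shared data models within a domain)",
    4: "Enterprise (universal, shared global information space)",
}

def lisi_level(pairwise_levels: list) -> int:
    """The interoperability of a set of systems is bounded by its weakest
    pairwise link, so the overall level is the minimum."""
    return min(pairwise_levels)

print(lisi_level([4, 3, 2]))  # 2: the pair at level 2 caps the whole
```

This weakest-link behaviour is one reason looser coupling helps: raising the floor of the worst pairing raises the interoperability of the whole.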


2018 ◽  
Vol 73 (4) ◽  
pp. 491-503 ◽  
Author(s):  
Matthias Spitzmuller ◽  
Guihyun Park

2020 ◽  
Vol 14 ◽  
Author(s):  
S. Mahima ◽  
N. Rajendran

Mobile ad hoc networks (MANETs) comprise numerous mobile computing devices that communicate with one another without centralized control. Because of the inherent features of MANETs, such as dynamic topology and constraints on bandwidth, energy, and computing resources, routing protocols must be designed efficiently. Controlled flooding manages traffic by using only chosen nodes to relay data from one node to another. This paper develops a new Cluster-Based Flooding using Fuzzy Logic Scheme (CBF2S). To construct clusters and choose proper cluster heads (CHs), a fuzzy logic approach is applied using three parameters: link quality, node mobility, and node degree. The presented model considerably minimizes the number of retransmissions in the network. Cluster members (CMs) flood packets inside a cluster (intra-cluster flooding), and CHs flood packets among the clusters (inter-cluster flooding). In addition, a gateway sends a packet to another gateway to minimize unwanted data retransmissions when it falls under a different CH. CBF2S is simulated using the NS2 tool under varying hop counts. The CBF2S model outperforms the other methods in terms of communication overhead, traffic load, packet delivery ratio, and end-to-end delay.
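The fuzzy cluster-head selection described above can be sketched as follows. The three inputs (link quality, mobility, node degree) come from the abstract, but the triangular membership functions, their breakpoints, and the equal-weight aggregation are illustrative guesses, not the actual CBF2S rule base.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ch_fitness(link_quality, mobility, degree, max_degree=10):
    """Score a node's suitability as cluster head in [0, 1]: favour high
    link quality, low mobility, and high (normalised) node degree."""
    good_link = tri(link_quality, 0.3, 1.0, 1.7)        # peak at perfect link
    low_mobility = tri(mobility, -0.7, 0.0, 0.7)        # peak at stationary
    high_degree = tri(degree / max_degree, 0.3, 1.0, 1.7)
    return (good_link + low_mobility + high_degree) / 3.0

nodes = {
    "a": ch_fitness(0.9, 0.1, 8),   # strong link, nearly still, well connected
    "b": ch_fitness(0.5, 0.6, 3),   # weak on all three criteria
    "c": ch_fitness(0.8, 0.2, 9),
}
head = max(nodes, key=nodes.get)    # the best-scoring node becomes CH
print(head)  # a
```

A real fuzzy controller would combine memberships through an explicit rule table and defuzzification; the averaged score here just conveys the selection idea.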


Aerospace ◽  
2021 ◽  
Vol 8 (3) ◽  
pp. 61
Author(s):  
Dominik Eisenhut ◽  
Nicolas Moebs ◽  
Evert Windels ◽  
Dominique Bergmann ◽  
Ingmar Geiß ◽  
...  

Recently, the new Green Deal policy initiative was presented by the European Union. The EU aims to achieve a sustainable future and be the first climate-neutral continent by 2050. It targets all of the continent’s industries, meaning aviation must contribute to these changes as well. By employing a systems engineering approach, this high-level task can be split into different levels to get from the vision to the relevant system or product itself. Part of this iterative process involves the aircraft requirements, which make the goals more achievable on the system level and allow validation of whether the designed systems fulfill these requirements. Within this work, the top-level aircraft requirements (TLARs) for a hybrid-electric regional aircraft for up to 50 passengers are presented. Apart from performance requirements, other requirements, like environmental ones, are also included. To check whether these requirements are fulfilled, different reference missions were defined which challenge various extremes within the requirements. Furthermore, figures of merit are established, providing a way of validating and comparing different aircraft designs. The modular structure of these aircraft designs ensures the possibility of evaluating different architectures and adapting these figures if necessary. Moreover, different criteria can be accounted for, or their calculation methods or weighting can be changed.
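The figures of merit described above boil down to a weighted comparison between candidate designs. A minimal sketch follows; the criteria names (`energy_per_pax_km`, `doc` for direct operating cost, `noise`) and the weights are illustrative assumptions, not the project's actual figures of merit.

```python
def figure_of_merit(design: dict, weights: dict) -> float:
    """Weighted sum of normalised criteria scores in [0, 1]; higher is
    better. Swapping the weights re-prioritises the comparison, which is
    the adaptability the modular structure above is meant to support."""
    return sum(weights[k] * design[k] for k in weights)

weights = {"energy_per_pax_km": 0.4, "doc": 0.3, "noise": 0.3}  # sum to 1
design_a = {"energy_per_pax_km": 0.8, "doc": 0.6, "noise": 0.7}
design_b = {"energy_per_pax_km": 0.6, "doc": 0.9, "noise": 0.5}

better = max((design_a, design_b), key=lambda d: figure_of_merit(d, weights))
print(round(figure_of_merit(design_a, weights), 2))  # 0.71
```

Normalising each criterion before weighting keeps environmental and performance requirements on a common scale, so their calculation methods or weights can be changed without reworking the comparison.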


2021 ◽  
Vol 37 (1-4) ◽  
pp. 1-30
Author(s):  
Vincenzo Agate ◽  
Alessandra De Paola ◽  
Giuseppe Lo Re ◽  
Marco Morana

Multi-agent distributed systems are characterized by autonomous entities that interact with each other to provide, and/or request, different kinds of services. In several contexts, especially when a reward is offered according to the quality of service, individual agents (or coordinated groups) may act in a selfish way. To prevent such behaviours, distributed Reputation Management Systems (RMSs) provide every agent with the capability of computing the reputation of the others according to direct past interactions, as well as indirect opinions reported by their neighbourhood. This reliance on gossiped information introduces a weakness that makes RMSs vulnerable to malicious agents intent on disseminating false reputation values. Given the variety of application scenarios in which RMSs can be adopted, as well as the multitude of behaviours that agents can implement, designers need RMS evaluation tools that allow them to predict the robustness of the system to security attacks before its actual deployment. To this aim, we present a simulation software for the vulnerability evaluation of RMSs and illustrate three case studies in which this tool was effectively used to model and assess state-of-the-art RMSs.
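The direct-plus-indirect reputation computation described above can be sketched as a simple weighted blend. This is a generic textbook-style rule, not a specific RMS from the article; the parameter name `trust_in_gossip` and the averaging of neighbour reports are assumptions for illustration.

```python
def update_reputation(direct: float, gossip: list,
                      trust_in_gossip: float = 0.3) -> float:
    """Blend an agent's own observations with neighbours' reports.

    Keeping trust_in_gossip low bounds the damage that lying neighbours
    (badmouthing or self-promotion attacks) can inflict on the score."""
    if not gossip:
        return direct
    indirect = sum(gossip) / len(gossip)
    return (1 - trust_in_gossip) * direct + trust_in_gossip * indirect

# An honest agent observed at 0.9 directly, badmouthed by two slanderers:
print(update_reputation(0.9, [0.0, 0.0]))  # stays at 0.63, not dragged to 0
```

A vulnerability simulator of the kind the article presents would sweep parameters like `trust_in_gossip` and the fraction of colluding agents to see where a given RMS breaks down.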


2021 ◽  
Vol 10 (2) ◽  
pp. 27
Author(s):  
Roberto Casadei ◽  
Gianluca Aguzzi ◽  
Mirko Viroli

Research and technology developments on autonomous agents and autonomic computing promote a vision of artificial systems that are able to resiliently manage themselves and autonomously deal with issues at runtime in dynamic environments. Indeed, autonomy can be leveraged to unburden humans from mundane tasks (cf. driving and autonomous vehicles), from the risk of operating in unknown or perilous environments (cf. rescue scenarios), or to support timely decision-making in complex settings (cf. data-centre operations). Beyond the results that individual autonomous agents can carry out, a further opportunity lies in the collaboration of multiple agents or robots. Emerging macro-paradigms provide an approach to programming whole collectives towards global goals. Aggregate computing is one such paradigm, formally grounded in a calculus of computational fields enabling functional composition of collective behaviours that could be proved, under certain technical conditions, to be self-stabilising. In this work, we address the concept of collective autonomy, i.e., the form of autonomy that applies at the level of a group of individuals. As a contribution, we define an agent control architecture for aggregate multi-agent systems, discuss how the aggregate computing framework relates to both individual and collective autonomy, and show how it can be used to program collective autonomous behaviour. We exemplify the concepts through a simulated case study, and outline a research roadmap towards reliable aggregate autonomy.
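A standard self-stabilising building block in aggregate computing is the gradient field: every device repeatedly computes its distance to the nearest source from neighbour values, and the field converges regardless of the starting state. The sequential fixed-point simulation below is a hedged illustration of that idea, not the paper's field calculus or its agent control architecture.

```python
def gradient(neighbours: dict, sources: set, rounds: int = 10) -> dict:
    """Compute a hop/distance field over a network.

    neighbours: {node: {neighbour: link_distance}}.
    Each round, every non-source node takes the minimum of
    (neighbour's value + link distance); repeated local updates
    converge to the true distances, i.e. the field self-stabilises."""
    field = {n: (0.0 if n in sources else float("inf")) for n in neighbours}
    for _ in range(rounds):
        for n in neighbours:
            if n in sources:
                continue
            field[n] = min(
                (field[m] + d for m, d in neighbours[n].items()),
                default=float("inf"),
            )
    return field

# A three-node line: a -- b -- c, with a as the source.
net = {"a": {"b": 1}, "b": {"a": 1, "c": 1}, "c": {"b": 1}}
print(gradient(net, sources={"a"}))  # {'a': 0.0, 'b': 1.0, 'c': 2.0}
```

Collective behaviours are then built by functional composition over such fields (e.g. broadcasting along a gradient, or accumulating values towards the source), which is the composition property the abstract refers to.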


2021 ◽  
Vol 54 (3) ◽  
pp. 1-33
Author(s):  
Blesson Varghese ◽  
Nan Wang ◽  
David Bermbach ◽  
Cheol-Ho Hong ◽  
Eyal De Lara ◽  
...  

Edge computing is the next Internet frontier that will leverage computing resources located near users, sensors, and data stores to provide more responsive services. Therefore, it is envisioned that a large-scale, geographically dispersed, and resource-rich distributed system will emerge and play a key role in the future Internet. However, given the loosely coupled nature of such complex systems, their operational conditions are expected to change significantly over time. In this context, the performance characteristics of such systems will need to be captured rapidly, which is referred to as performance benchmarking, for application deployment, resource orchestration, and adaptive decision-making. Edge performance benchmarking is a nascent research avenue that has started gaining momentum over the past five years. This article first reviews articles published over the past three decades to trace the history of performance benchmarking from tightly coupled to loosely coupled systems. It then systematically classifies previous research to identify the system under test, techniques analyzed, and benchmark runtime in edge performance benchmarking.
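At its simplest, the performance benchmarking discussed above means repeatedly timing a system under test and summarising the latency distribution. The sketch below shows the shape of such a measurement loop; the stand-in workload and the reported statistics are illustrative assumptions, not a benchmark from the surveyed literature.

```python
import statistics
import time

def benchmark(workload, runs: int = 50) -> dict:
    """Time repeated invocations of a workload and report median and
    p95 latency, the kind of tail metric edge deployments care about."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()                       # the system under test
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "median_s": statistics.median(samples),
        "p95_s": samples[int(0.95 * (len(samples) - 1))],
    }

result = benchmark(lambda: sum(range(10_000)))  # stand-in CPU workload
print(result["median_s"] <= result["p95_s"])    # True: p95 bounds the median
```

Against a real edge system, the workload would be a network request to the service under test, and the benchmark would be rerun as operational conditions change, which is why rapid capture matters.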


2021 ◽  
pp. 1-14
Author(s):  
Debo Dong ◽  
Dezhong Yao ◽  
Yulin Wang ◽  
Seok-Jun Hong ◽  
Sarah Genon ◽  
...  

Background: Schizophrenia has been primarily conceptualized as a disorder of high-order cognitive functions, with deficits in executive brain regions. Yet, owing to increasing reports of early sensory processing deficits, recent models focus more on the developmental effects of impaired sensory processing on high-order functions. The present study examined whether this pathological interaction relates to an overarching system-level imbalance, specifically a disruption in the macroscale hierarchy affecting integration and segregation of unimodal and transmodal networks.

Methods: We applied a novel combination of connectome gradient and stepwise connectivity analysis to resting-state fMRI to characterize the sensorimotor-to-transmodal cortical hierarchy organization (96 patients v. 122 controls).

Results: We demonstrated compression of the cortical hierarchy organization in schizophrenia, with a prominent compression from the sensorimotor region and a less prominent compression from the frontal-parietal region, resulting in a diminished separation between sensory and fronto-parietal cognitive systems. Further analyses suggested that this reduced differentiation relates to an atypical functional connectome transition from unimodal to transmodal brain areas. Specifically, we found hypo-connectivity within unimodal regions and hyper-connectivity between unimodal regions and fronto-parietal and ventral attention regions along the classical sensation-to-cognition continuum (voxel-level corrected, p < 0.05).

Conclusions: The compression of cortical hierarchy organization represents a novel and integrative system-level substrate underlying the pathological interaction of early sensory and cognitive function in schizophrenia. This abnormal organization suggests that cascading impairments from the disruption of the somatosensory-motor system, together with inefficient integration of bottom-up sensory information with attentional demands and executive control processes, may partially account for the high-level cognitive deficits characteristic of schizophrenia.

