Functional observability and target state estimation in large-scale networks

Proceedings of the National Academy of Sciences ◽ 
2022 ◽ 
Vol 119 (1) ◽ 
pp. e2113750119
Author(s):  
Arthur N. Montanari ◽  
Chao Duan ◽  
Luis A. Aguirre ◽  
Adilson E. Motter

The quantitative understanding and precise control of complex dynamical systems can only be achieved by observing their internal states via measurement and/or estimation. In large-scale dynamical networks, it is often difficult or physically impossible to have enough sensor nodes to make the system fully observable. Even if the system is in principle observable, high dimensionality poses fundamental limits on the computational tractability and performance of a full-state observer. To overcome the curse of dimensionality, we instead require the system to be functionally observable, meaning that a targeted subset of state variables can be reconstructed from the available measurements. Here, we develop a graph-based theory of functional observability, which leads to highly scalable algorithms to 1) determine the minimal set of required sensors and 2) design the corresponding state observer of minimum order. Compared with the full-state observer, the proposed functional observer achieves the same estimation quality with substantially less sensing and fewer computational resources, making it suitable for large-scale networks. We apply the proposed methods to the detection of cyberattacks in power grids from limited phase measurement data and the inference of the prevalence rate of infection during an epidemic under limited testing conditions. The applications demonstrate that the functional observer can significantly scale up our ability to explore otherwise inaccessible dynamical processes on complex networks.
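As a rough illustration of the graph-based idea, the sketch below greedily places sensors so that every target state has a directed path to at least one sensor node in the dynamics graph. This reachability test is only a simplified structural proxy, not the authors' algorithm; the function name, the greedy strategy, and the toy graph are illustrative assumptions, and the code relies on the networkx library.

```python
import networkx as nx

def greedy_sensor_placement(G, targets):
    # Reachable set of each target (including itself); a directed path
    # from target t to node v is used here as a structural proxy for
    # "a sensor at v can help reconstruct t" (simplified assumption).
    reach = {t: nx.descendants(G, t) | {t} for t in targets}
    uncovered, sensors = set(targets), []
    while uncovered:
        # greedily pick the node covering the most uncovered targets
        best = max(G.nodes, key=lambda v: sum(v in reach[t] for t in uncovered))
        sensors.append(best)
        uncovered -= {t for t in uncovered if best in reach[t]}
    return sensors

# Toy chain 0 -> 1 -> 2 -> 3 with targets {0, 2}: a single sensor
# at node 2 (or 3) covers both targets.
G = nx.DiGraph([(0, 1), (1, 2), (2, 3)])
print(greedy_sensor_placement(G, [0, 2]))  # -> [2]
```

The set-cover flavor of this toy version hints at why scalability matters: the paper's contribution is precisely that the full structural conditions can still be checked with highly scalable algorithms.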

Author(s):  
Shyam D. Bawankar ◽ 
Sonal B. Bhople ◽ 
Vishal D. Jaiswal

Large-scale networks of wireless sensors are becoming an active topic of research. We review the key elements of the emergent technology of “Smart Dust” and outline the research challenges they present to the mobile networking and systems community, which must provide coherent connectivity to large numbers of mobile network nodes co-located within a small volume. Smart Dust sensor networks – consisting of cubic-millimeter-scale sensor nodes capable of limited computation, sensing, and passive optical communication with a base station – are envisioned to fulfil complex large-scale monitoring tasks in a wide variety of application areas. RFID technology can realize “smart-dust” applications for the sensor network community. RFID sensor networks (RSNs), which consist of RFID readers and RFID sensor nodes (WISPs), extend RFID to include sensing and bring the advantages of small, inexpensive, and long-lived RFID tags to wireless sensor networks. In many potential Smart Dust applications, such as object detection and tracking, fine-grained node localization plays a key role.
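To make the localization point concrete, here is a minimal least-squares trilateration sketch of the kind such systems often build on. The anchor positions, noise-free ranges, and function name are hypothetical assumptions; a real Smart Dust deployment would have to work with noisy RSSI or time-of-flight measurements rather than exact distances.

```python
import numpy as np

def trilaterate(anchors, dists):
    # Linearize ||x - a_i||^2 = d_i^2 against the last anchor to get a
    # linear system A x = b, then solve it in the least-squares sense.
    a = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    A = 2 * (a[:-1] - a[-1])
    b = (d[-1]**2 - d[:-1]**2) + np.sum(a[:-1]**2 - a[-1]**2, axis=1)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0, 0), (10, 0), (0, 10)]  # hypothetical base-station positions
true = np.array([3.0, 4.0])
dists = [np.linalg.norm(true - np.array(p)) for p in anchors]
print(trilaterate(anchors, dists))    # ~ [3. 4.]
```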


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1732
Author(s):  
Sun-Ho Choi ◽  
Yoonkyung Jang ◽  
Hyowon Seo ◽  
Bum Il Hong ◽  
Intae Ryoo

In this paper, we present an efficient way to find a gateway deployment for a given sensor network topology. We assume that expired sensors and gateways can be replaced and that the gateway locations are chosen from among the given sensor nodes. The objective is to find a gateway deployment that minimizes the cost per unit time, which consists of the maintenance and installation costs. The proposed algorithm creates a cost reference and uses it to find the optimal deployment via a divide-and-conquer algorithm. Comparing all cases is the most reliable way to find the optimal gateway deployment, but it is practically infeasible, since its computation time increases exponentially with the number of nodes. The computation time of the proposed method increases only linearly, making it suitable for large-scale networks. Additionally, compared to stochastic algorithms such as the genetic algorithm, this methodology has advantages in computational speed and accuracy for a large number of nodes. We verify our methodology through several numerical experiments.
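For intuition about the objective, the toy sketch below evaluates a cost-per-unit-time function over all gateway subsets of a small topology by brute force. The cost model (amortized installation plus hop-distance-weighted maintenance) and all constants are assumptions for illustration only; the paper's contribution is precisely to replace this exponential enumeration with a linear-time divide-and-conquer search built on a cost reference.

```python
from itertools import combinations

def cost_per_unit_time(S, dist, install=10.0, lifetime=100.0, maint=0.5):
    # amortized installation cost plus maintenance that grows with each
    # node's hop distance to its nearest gateway (illustrative model)
    amortized = install * len(S) / lifetime
    upkeep = sum(maint * min(dist[v][g] for g in S) for v in dist)
    return amortized + upkeep

def brute_force(dist, k):
    # exhaustive search: exponential in the number of nodes, which is
    # exactly the blow-up the proposed linear-time method avoids
    return min(combinations(list(dist), k),
               key=lambda S: cost_per_unit_time(S, dist))

# 4-node path 0-1-2-3 with hop-count distances
dist = {i: {j: abs(i - j) for j in range(4)} for i in range(4)}
print(brute_force(dist, k=1))  # -> (1,), a central node wins
```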


1999 ◽  
Vol 2 (04) ◽  
pp. 368-376 ◽  
Author(s):  
H.A. Tchelepi ◽  
L.J. Durlofsky ◽  
W.H. Chen ◽  
A. Bernath ◽  
M.C.H. Chien

Summary

Scale up and parallel reservoir simulation represent two distinct approaches for the simulation of highly detailed geological or geostatistical reservoir models. In this paper, we discuss the complementary use of these two approaches for practical, large scale reservoir simulation problems. We first review our recently developed approaches for upscaling and parallel reservoir simulation. Then, several practical large scale modeling problems, which include simulations of multiple realizations of a waterflood pattern element, a four well sector model, and a large, 130 well segment model, are addressed. It is shown that, for the pattern waterflood model, significantly coarsened models provide reliable results for many aspects of the reservoir flow. However, the simulation of at least some of the fine scale geostatistical realizations, accomplished using our parallel reservoir simulation technology, is useful in determining the appropriate level of scale up. For models with a large number of wells, the upscaled models can lose accuracy as the grid is coarsened. In these cases, although field-wide performance can still be predicted with reasonable accuracy, parallel reservoir simulation is required to maintain sufficiently refined models capable of accurate flow results on a well by well basis. Finally, some issues concerning the use of highly detailed models in practical simulation studies are discussed.

Introduction

Reservoir description and flow modeling capabilities continue to benefit from advances in computing hardware and software technologies. However, the level of detail typically included in reservoir characterizations continues to exceed the capabilities of traditional reservoir flow simulators by a significant margin. This resolution gap, due to the much larger computational requirements of flow simulation, has driven the development of two specific technologies: scale up and parallel reservoir simulation. These two technologies represent very distinct approaches: scale up methods attempt to coarsen the simulation model to fit the hardware, while parallel reservoir simulation technology attempts to extend computing capabilities to accommodate the detailed model. The purpose of this paper is to present and discuss ways in which to utilize these two technologies in a complementary fashion for the solution of practical large scale reservoir simulation problems. Toward this end, we first discuss our previously developed capabilities for scale up [1, 2] and parallel reservoir simulation [3]. Next, the two technologies are applied to several reservoirs represented via highly detailed (i.e., on the order of 1 million cells) geostatistical models. Various production scenarios are considered. It will be shown how the direct simulation of the highly detailed models (using parallel reservoir simulation technology on an IBM SP) can be used to assess and guide the scale up procedure and to establish the appropriate level of coarsening allowable. We will show that, once this level is established, upscaled models can be used to evaluate multiple geostatistical realizations. We additionally apply the detailed simulation results to develop general guidelines for the degree of scale up allowable for various types of simulation models; e.g., pattern, sector, and large segment models. Our general conclusion is that our scale up technology, as currently used, is quite reliable when sufficient refinement is maintained in the coarsened model.
We show that when many wells are to be simulated, the upscaled models can begin to lose accuracy, particularly when well by well production is considered. This is due in part to the fact that, in the coarse models, wells are separated by very few grid blocks, and degradation in accuracy results. There have been many previous studies directed toward the development of parallel reservoir simulation technology and many studies aimed at the development of scale up techniques. To our knowledge, this is the first effort that considers the complementary use of both. Here we will very briefly review the recent literature on both parallel reservoir simulation and upscaling techniques. For more complete discussions of previous work, refer to Refs. 1-3. Traditional techniques for upscaling rely on the use of pseudorelative permeabilities. Although often applied in practice, the use of pseudorelative permeabilities can lead to inaccuracies in some cases [4, 5]. This is largely due to the high degree of process dependency inherent in the pseudorelative permeability approach; i.e., pseudorelative permeability curves are really only appropriate for the conditions for which they are generated. The deficiencies in the traditional pseudorelative permeability methodology have motivated work in several areas. This includes the generation of more robust pseudorelative permeabilities [6, 7], the use of higher moments of the fine scale variables [5], and the nonuniform coarsening approach applied in this study (discussed in Nonuniform Coarsening Method for Scale Up). Generalizations of the nonuniform coarsening approach described in Refs. 1 and 2 have also been presented [8, 9]. Parallel reservoir simulation is an area of active research. Recent publications emphasize the development of scalable algorithms designed to run efficiently on a variety of parallel platforms [10-13]. Most recent implementations involve distributed memory platforms such as a cluster of workstations. The typical size of a simulation model run in parallel is on the order of 1 (or a few) million grid blocks, though results for a 16.5 million cell model have been reported [11]. Most parallel implementations are based on message passing techniques such as the message passing interface standard (MPI). Several of the parallel simulation algorithms, including our own, are based on a multilevel domain decomposition approach. This entails communication between domains in a manner analogous to that used in standard domain decomposition approaches.
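As a point of reference for what "scale up" means in practice, the sketch below coarsens a synthetic 2-D permeability field with a textbook single-phase averaging rule (harmonic mean along the assumed flow direction, arithmetic mean across layers). This is a generic scheme, not the nonuniform coarsening method of Refs. 1 and 2; the synthetic field, block factor, and units are assumptions.

```python
import numpy as np

def upscale_permeability(k_fine, factor):
    # Partition the fine grid into (factor x factor) blocks, take the
    # harmonic mean along the assumed flow (x) direction within each
    # block row (series flow), then the arithmetic mean across rows
    # (parallel layers). Standard single-phase rule of thumb.
    ny, nx = k_fine.shape
    by, bx = ny // factor, nx // factor
    blocks = k_fine[:by * factor, :bx * factor].reshape(by, factor, bx, factor)
    k_harm = factor / np.sum(1.0 / blocks, axis=3)  # series flow in x
    return np.mean(k_harm, axis=1)                  # parallel layers in y

rng = np.random.default_rng(0)
k_fine = rng.lognormal(mean=3.0, sigma=1.0, size=(64, 64))  # synthetic field, mD
print(upscale_permeability(k_fine, factor=8).shape)          # (8, 8)
```

The loss of well-by-well accuracy discussed above follows naturally from such averaging: once a block spans the region between two wells, the contrast that controls their individual rates is smoothed away.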


2005 ◽  
Vol 33 (1) ◽  
pp. 38-62 ◽  
Author(s):  
S. Oida ◽  
E. Seta ◽  
H. Heguri ◽  
K. Kato

Abstract

Vehicles such as agricultural tractors, construction vehicles, mobile machinery, and 4-wheel drive vehicles are often operated on unpaved ground. In many cases, the ground is deformable; therefore, the deformation should be taken into consideration in order to assess the off-the-road performance of a tire. Recent progress in computational mechanics has enabled us to simulate large-scale coupling problems, in which the deformation of the tire structure and of the surrounding medium can be considered interactively. Using this technology, hydroplaning phenomena and tire traction on snow have been predicted. In this paper, a simulation methodology for tire/soil coupling problems is developed for pneumatic tires of arbitrary tread patterns. The Finite Element Method (FEM) and the Finite Volume Method (FVM) are used for structural and for soil-flow analysis, respectively. The soil is modeled as an elastoplastic material with a specified yield criterion and a nonlinear elasticity. The material constants are calibrated against measurement data so that the cone penetration resistance and the shear resistance are reproduced. Finally, the traction force of the tire in a cultivated field is predicted, and a good correlation with experiments is obtained.
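Since the abstract does not name the yield criterion used, the sketch below evaluates the Drucker-Prager function, one common pressure-dependent criterion for soils, at a given stress state. The material constants, the sign convention, and the stress tensor are purely illustrative assumptions.

```python
import numpy as np

def drucker_prager_yield(stress, alpha=0.3, k=50e3):
    # f = alpha*I1 + sqrt(J2) - k for a 3x3 Cauchy stress tensor (Pa);
    # f < 0 means elastic, f >= 0 means plastic flow (constants illustrative)
    s = np.asarray(stress, float)
    I1 = np.trace(s)                   # first stress invariant
    dev = s - I1 / 3.0 * np.eye(3)     # deviatoric part of the stress
    J2 = 0.5 * np.sum(dev * dev)       # second deviatoric invariant
    return alpha * I1 + np.sqrt(J2) - k

# hydrostatic compression (negative in this sign convention) plus shear
sigma = np.array([[-80e3,  20e3,   0.0],
                  [ 20e3, -80e3,   0.0],
                  [  0.0,   0.0, -80e3]])
print(drucker_prager_yield(sigma))  # < 0: still elastic at this state
```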


Author(s):  
S. Pragati ◽  
S. Kuldeep ◽  
S. Ashok ◽  
M. Satheesh

One of the challenges in the treatment of disease is the delivery of efficacious medication of appropriate concentration to the site of action in a controlled and continual manner. Nanoparticles represent an important particulate carrier system developed for this purpose. Nanoparticles are solid colloidal particles ranging in size from 1 to 1000 nm and composed of macromolecular material. Nanoparticles can be polymeric or lipidic (SLNs). Industry estimates suggest that approximately 40% of lipophilic drug candidates fail due to solubility and formulation stability issues, prompting significant research activity in advanced lipophile delivery technologies. Solid lipid nanoparticle technology represents a promising new approach to lipophile drug delivery. Solid lipid nanoparticles (SLNs) are an important advancement in this area. The bioacceptable and biodegradable nature of SLNs makes them less toxic than polymeric nanoparticles. Their small size, which prolongs circulation time in blood, the feasibility of scale-up for large-scale production, and the absence of a burst effect make them interesting candidates for study. In the present review, this new approach is discussed in terms of preparation, advantages, characterization, and special features.


2020 ◽  
Vol 27 (2) ◽  
pp. 105-110 ◽  
Author(s):  
Niaz Ahmad ◽  
Muhammad Aamer Mehmood ◽  
Sana Malik

In recent years, microalgae have emerged as an alternative platform for the large-scale production of recombinant proteins for different commercial applications. As a production platform, microalgae have several advantages, including rapid growth, easy scale-up, and the ability to grow with or without an external carbon source. Genetic transformation of several species has been established. Of these, Chlamydomonas reinhardtii has become particularly attractive for its potential to express foreign proteins inexpensively. All three of its genomes – nuclear, mitochondrial, and chloroplastic – have been sequenced. As a result, a wealth of information is available about its genetic machinery and protein expression mechanisms (transcription, translation, and post-translational modifications). Over the years, various molecular tools have been developed for the manipulation of all these genomes. Various studies show that transformation of the chloroplast genome has several advantages over nuclear transformation from the biopharming point of view. According to a recent survey, over 100 recombinant proteins have been expressed in algal chloroplasts. However, the expression levels achieved in algal chloroplasts are generally lower than those in the chloroplasts of higher plants. Work is therefore needed to make algal chloroplast transformation commercially competitive. In this review, we discuss examples from algal research that could help make algal chloroplast expression commercially successful.


2021 ◽  
Author(s):  
Miguel Dasilva ◽  
Christian Brandt ◽  
Marc Alwin Gieselmann ◽  
Claudia Distler ◽  
Alexander Thiele

Abstract

Top-down attention, controlled by frontal cortical areas, is a key component of cognitive operations. How different neurotransmitters and neuromodulators flexibly change cellular and network interactions as attention demands vary remains poorly understood. While acetylcholine and dopamine are critically involved, glutamatergic receptors have also been proposed to play important roles. To understand their contribution to attentional signals, we investigated how ionotropic glutamatergic receptors in the frontal eye field (FEF) of male macaques contribute to neuronal excitability and attentional control signals in different cell types. Broad-spiking and narrow-spiking cells both required N-methyl-D-aspartate (NMDA) and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptor activation for normal excitability, thereby affecting ongoing or stimulus-driven activity. However, attentional control signals did not depend on either glutamatergic receptor type in broad- or narrow-spiking cells. A further subdivision of cell types into different functional types, using cluster analysis based on spike waveforms and spiking characteristics, did not change the conclusions. This can be explained by a model in which local blockade of specific ionotropic receptors is compensated by the embedding of cells in large-scale networks. It sets the glutamatergic system apart from the cholinergic system in FEF and demonstrates that a reduction in excitability is not sufficient to induce a reduction in attentional control signals.

