Integrated Information in Process-Algebraic Compositions

Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 805
Author(s):  
Tommaso Bolognesi

Integrated Information Theory (IIT) is most typically applied to Boolean nets, a state transition model in which system parts cooperate by sharing state variables. By contrast, in Process Algebra, whose semantics can also be formulated in terms of (labeled) state transitions, system parts—“processes”—cooperate by sharing transitions with matching labels, according to interaction patterns expressed by suitable composition operators. Despite this substantial difference, questioning how much additional information is provided by the integration of the interacting partners above and beyond the sum of their independent contributions appears perfectly legitimate with both types of cooperation. In fact, we collect statistical data about ϕ—integrated information—relative to pairs of Boolean nets that cooperate by three alternative mechanisms: shared variables—the standard choice for Boolean nets—and two forms of shared transition, inspired by two process algebras. We name these mechanisms α, β and γ. Quantitative characterizations of all of them are obtained by considering three alternative execution modes, namely synchronous, asynchronous and “hybrid”, by exploring the full range of possible coupling degrees in all three cases, and by considering two possible definitions of ϕ based on two alternative notions of distribution distance.
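A minimal sketch of the φ-style comparison described above, assuming a toy two-node Boolean net (one node copies the other, the second computes AND) and a simple "cut" partition; it is not the paper's α/β/γ mechanisms, but it shows two alternative notions of distribution distance (KL divergence and the Wasserstein distance) applied to the same whole-versus-parts comparison.

```python
# Illustrative sketch (not the paper's exact alpha/beta/gamma mechanisms): a
# phi-like quantity for a 2-node Boolean net, comparing the whole-system
# next-state distribution with the one produced after "cutting" the
# connections between the parts, under two distribution distances.
import numpy as np
from itertools import product
from scipy.stats import entropy, wasserstein_distance

NOISE = 0.1  # probability that a node's deterministic update is flipped

def node_prob(target, value):
    """P(node ends up in `value`) when its deterministic target is `target`."""
    return 1.0 - NOISE if value == target else NOISE

def whole_distribution(a, b):
    """Joint next-state distribution: A copies B, B computes A AND B."""
    return np.array([node_prob(b, na) * node_prob(a & b, nb)
                     for na, nb in product([0, 1], repeat=2)])

def cut_distribution(a, b):
    """Each node loses access to the other's state (averaged over it)."""
    p_a = [0.5 * (node_prob(0, na) + node_prob(1, na)) for na in (0, 1)]          # A cut from B
    p_b = [0.5 * (node_prob(0 & b, nb) + node_prob(1 & b, nb)) for nb in (0, 1)]  # B cut from A
    return np.array([p_a[na] * p_b[nb] for na, nb in product([0, 1], repeat=2)])

state = (1, 1)
whole, cut = whole_distribution(*state), cut_distribution(*state)
print("KL divergence:       ", entropy(whole, cut))
print("Wasserstein distance:", wasserstein_distance(np.arange(4), np.arange(4), whole, cut))
```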

2019 ◽  
Vol 76 (11) ◽  
pp. 3455-3484 ◽  
Author(s):  
Carsten Abraham ◽  
Adam H. Monahan

Abstract The atmospheric nocturnal stable boundary layer (SBL) can be classified into two distinct regimes: the weakly stable boundary layer (wSBL) with sustained turbulence and the very stable boundary layer (vSBL) with weak and intermittent turbulence. A hidden Markov model (HMM) analysis of the three-dimensional state-variable space of Reynolds-averaged mean dry static stability, mean wind speed, and wind speed shear is used to classify the SBL into these two regimes at nine different tower sites, in order to study long-term regime occupation and transition statistics. Both Reynolds-averaged mean data and measures of turbulence intensity (eddy variances) are separated in a physically meaningful way. In particular, fluctuations of the vertical wind component are found to be much smaller in the vSBL than in the wSBL. HMM analyses of these data using more than two SBL regimes do not yield robust results across measurement locations. To identify which meteorological state variables carry the information about regime occupation, the HMM analyses are repeated using different state-variable subsets. Reynolds-averaged measures of turbulence intensity (such as turbulence kinetic energy) at any observed altitude hold almost the same information as the original set but add no further information. In contrast, both stratification and shear depend on surface information to capture regime transitions accurately. Use of information only in the bottom 10 m of the atmosphere is sufficient for HMM analyses to capture important information about regime occupation and transition statistics. It follows that the commonly measured 10-m wind speed is potentially a good indicator of regime occupation.
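As a rough illustration of the HMM regime classification, the sketch below fits a two-state Gaussian HMM to synthetic stand-in data for the three state variables (stability, wind speed, shear) and reads off occupation and transition statistics; the hmmlearn dependency and the synthetic numbers are assumptions, not the authors' setup.

```python
# Minimal sketch of the regime-classification idea (not the authors' code).
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed third-party dependency

rng = np.random.default_rng(0)
# Synthetic stand-in for Reynolds-averaged tower data: columns are
# dry static stability, mean wind speed, and wind speed shear.
wSBL = rng.normal([0.5, 6.0, 0.02], [0.2, 1.0, 0.01], size=(500, 3))
vSBL = rng.normal([2.0, 2.0, 0.05], [0.5, 0.8, 0.02], size=(500, 3))
X = np.vstack([wSBL, vSBL])

model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200)
model.fit(X)
states = model.predict(X)

occupation = np.bincount(states) / len(states)   # long-term regime occupation
print("occupation:", occupation)
print("transition matrix:\n", model.transmat_)   # regime transition statistics
```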


1992 ◽  
Vol 02 (03) ◽  
pp. 451-482 ◽  
Author(s):  
WALTER J. FREEMAN

The classical models most widely used by neurobiologists to explain the dynamics of neurons and neuron populations, and by modelers to implement artificial neural networks, are reviewed. Each neuron has input fibers called dendrites that integrate and an axon that transmits the output. The differing fiber architectures reflect these dissimilar dynamic operations. The basic tools to describe them are the RC model of the membrane, the core conductor model of the fibers, the Hodgkin–Huxley model of the trigger zone, and the modifiable synapse. Populations additionally require description of macroscopic state variables, the types of nonlinearity (most importantly the sigmoid curve and the dynamic range compression at the input to the cortex), and the types and strengths of connections. The properties of these neural masses can be characterized with the tools of nonlinear dynamics. These include description of point, limit cycle, and chaotic attractors for the cerebral cortex, as well as the types and mechanisms of the state transitions between basins of attraction during learning and perception.
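The following sketch illustrates two of the basic tools listed above in generic form: an RC-membrane leaky integrator and a sigmoid static nonlinearity for a neural mass. The equations and parameters are generic textbook choices, not Freeman's specific formulations.

```python
# Generic illustration (not Freeman's specific equations): an RC-membrane
# "leaky integrator" driven by synaptic input, passed through a sigmoid
# static nonlinearity to give a population pulse-density output.
import numpy as np

def rc_membrane(inputs, tau=5.0, dt=0.1):
    """Integrate dv/dt = (-v + input) / tau with forward Euler."""
    v = np.zeros(len(inputs))
    for t in range(1, len(inputs)):
        v[t] = v[t - 1] + dt * (-v[t - 1] + inputs[t - 1]) / tau
    return v

def sigmoid(v, q_max=5.0, gain=1.0):
    """Saturating input-output curve for a neural mass."""
    return q_max / (1.0 + np.exp(-gain * v))

t = np.arange(0, 100, 0.1)
drive = ((t > 20) & (t < 60)).astype(float)   # a step of synaptic input
v = rc_membrane(drive)
pulse_density = sigmoid(v - 0.5)
print(pulse_density[::100])                   # coarse view of the response
```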


2021 ◽  
Author(s):  
Jake Hanson ◽  
Sara Imari Walker

Integrated Information Theory is currently the leading mathematical theory of consciousness. The core of the theory relies on the calculation of a scalar mathematical measure of consciousness, Φ, which is deduced from the phenomenological axioms of the theory. Here, we show that despite its widespread use, Φ is not a well-defined mathematical concept in the sense that the value it specifies is neither unique nor specific. This problem, occasionally referred to as “undetermined qualia”, is the result of degeneracies in the optimization routine used to calculate Φ, which lead to ambiguities in determining the consciousness of systems under study. As a demonstration, we first apply the mathematical definition of Φ to a simple AND+OR logic gate system and show that 83 non-unique Φ values result, spanning a substantial portion of the range of possibilities. We then introduce a Python package called PyPhi-Spectrum which, unlike currently available packages, delivers the entire spectrum of possible Φ values for a given system. We apply this to a variety of examples of recently published calculations of Φ and show how virtually all Φ values from the sampled literature are chosen arbitrarily from a set of non-unique possibilities, the full range of which often includes both conscious and unconscious predictions. Lastly, we review proposed solutions to this degeneracy problem and find that none provides a satisfactory solution, either because they fail to specify a unique Φ value or because they yield Φ = 0 for systems that are clearly integrated. We conclude with a discussion of requirements moving forward for scientifically valid theories of consciousness that avoid these degeneracy issues.
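As a concrete picture of the kind of system discussed above, the sketch below builds the state-by-node transition probability matrix (TPM) of a two-node AND+OR gate network, the usual input format for IIT software such as PyPhi; the exact wiring is an assumption for illustration and may differ from the paper's gate system.

```python
# Hedged sketch: a two-node network in which one node computes AND and the
# other OR of the two current node states, written as a state-by-node TPM.
import numpy as np
from itertools import product

def and_or_tpm():
    """One row per current state (a, b); columns give next values of (AND node, OR node)."""
    rows = []
    for a, b in product([0, 1], repeat=2):
        rows.append([a & b, a | b])
    return np.array(rows)

tpm = and_or_tpm()
print(tpm)
# The non-uniqueness discussed above enters later, when a minimum-information
# partition is chosen for this TPM: ties in that optimization admit many
# different values of Phi for the very same system and state.
```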


Author(s):  
A. N. Trofimov

Introduction: A suboptimal random coding exponent Er*(R; ψ) for a wide class of finite-state channel models using a mismatched decoding function ψ was obtained and presented in the first part of this work. The function ψ was represented as a product of a posteriori probabilities of non-overlapping input subblocks of length 2B+1 relative to the overlapping output subblocks of length 2W+1. It was shown that the computation of the function Er*(R; ψ) reduces to the calculation of the largest eigenvalue of a square non-negative matrix of an order depending on the values of B and W. Purpose: To illustrate the approach developed in the first part of this study by applying it to various channels modelled as probabilistic finite-state machines. Results: We consider channels with state transitions that do not depend on the input symbol (channels with freely evolving states), and channels with deterministic state transitions, in particular intersymbol interference channels. We present and discuss numerical results of calculating this random coding exponent over the full range of code rates for several channel models for which similar results had not been obtained before. Practical computations were carried out for relatively small values of B and W. Nevertheless, even for small values of these parameters, good correspondence with some known results for optimal decoding was shown.
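To make the computational core concrete, the sketch below recovers the largest eigenvalue (Perron root) of a non-negative matrix by power iteration, the kind of calculation to which Er*(R; ψ) is reduced; the matrix itself is a toy stand-in, not one built from actual channel statistics.

```python
# Largest eigenvalue of a square non-negative matrix via power iteration.
import numpy as np

def perron_root(A, tol=1e-12, max_iter=10_000):
    """Dominant eigenvalue (Perron root) of a non-negative square matrix."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam

# Toy non-negative matrix standing in for the one built from B, W and the channel.
A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.7, 0.2],
              [0.3, 0.3, 0.4]])
print(perron_root(A), np.max(np.linalg.eigvals(A).real))  # the two should agree
```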


Author(s):  
Gheorghe Muresan

In this chapter, we describe and discuss a methodological framework that integrates analysis of interaction logs with the conceptual design of the user interaction. It is based on (i) formalizing the functionality that is supported by an interactive system and the valid interactions that can take place; (ii) deriving schemas for capturing the interactions in activity logs; (iii) deriving log parsers that reveal the system states and the state transitions that took place during the interaction; and (iv) analyzing the user activities and the system’s state transitions in order to describe the user interaction or to test research hypotheses. This approach is particularly useful for studying user behavior with highly interactive systems. We present the details of the methodology and exemplify its use in a mediated retrieval experiment in which the focus is on the information-seeking process and on finding interaction patterns.
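A minimal sketch of steps (iii) and (iv) above, assuming a hypothetical log format and event-to-state schema: a parser maps raw activity-log lines onto system states and tallies the state transitions.

```python
# Hedged sketch of a log parser for state-transition analysis. Event names,
# the log line format, and the state mapping are hypothetical.
from collections import Counter

EVENT_TO_STATE = {                 # hypothetical schema from step (ii)
    "query_submitted": "SEARCHING",
    "results_shown": "BROWSING",
    "doc_opened": "READING",
    "doc_saved": "COLLECTING",
}

def parse_transitions(log_lines):
    """Return a counter of (previous state, next state) pairs from a session log."""
    transitions = Counter()
    prev = "START"
    for line in log_lines:
        event = line.strip().split()[-1]     # assumes "timestamp user event" lines
        state = EVENT_TO_STATE.get(event)
        if state is None:
            continue                         # ignore events outside the schema
        transitions[(prev, state)] += 1
        prev = state
    return transitions

log = ["10:01 u1 query_submitted", "10:02 u1 results_shown",
       "10:03 u1 doc_opened", "10:05 u1 doc_saved", "10:06 u1 query_submitted"]
print(parse_transitions(log))
```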


2014 ◽  
Vol 142 (11) ◽  
pp. 4017-4035 ◽  
Author(s):  
Yu-Chieng Liou ◽  
Jian-Luen Chiou ◽  
Wei-Hao Chen ◽  
Hsin-Yu Yu

Abstract This research combines an advanced multiple-Doppler radar synthesis technique with the thermodynamic retrieval method originally proposed by Gal-Chen and a moisture/temperature adjustment scheme, and formulates them into a sequential procedure. The focus is on applying this procedure to improve the model quantitative precipitation nowcasting (QPN) skill at the convective scale for up to 3 hours. A series of observing system simulation experiment (OSSE)-type tests and a real case study are conducted to investigate the performance of this algorithm under different conditions. It is shown that by using the retrieved three-dimensional wind, thermodynamic, and microphysical parameters to reinitialize a fine-resolution numerical model, its QPN skill can be significantly improved. Since the Gal-Chen method requires the horizontal average properties of the weather system at each altitude, utilization of in situ radiosonde(s) to obtain this additional information for the retrieval is tested. When sounding data are not available, it is demonstrated that using the model results to replace the role played by observing devices is also a feasible choice. The moisture field is obtained through a simple but effective adjustment scheme and is found to be beneficial to the rainfall forecast within the first hour after the reinitialization of the model. Since this algorithm retrieves the unobserved state variables instantaneously from the wind measurements and directly uses them to reinitialize the model, fewer radar data and a shorter model spinup time are needed to correct the rainfall forecasts, in comparison with other data assimilation techniques such as four-dimensional variational data assimilation (4DVAR) or ensemble Kalman filter (EnKF) methods.
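One step of the procedure can be illustrated concretely: Gal-Chen-type retrievals determine thermodynamic perturbations only up to the unknown horizontal mean at each altitude, which the sounding (or model) profile supplies. The sketch below shows that combination on stand-in arrays; the field names and numbers are assumptions, not the study's data.

```python
# Hedged illustration: combine retrieved perturbations with a mean profile.
import numpy as np

def add_horizontal_mean(theta_pert, sounding_profile):
    """Combine retrieved perturbations (nz, ny, nx) with a 1-D mean profile (nz,)."""
    # Remove any residual horizontal mean from the retrieval at each level,
    # then add the independently observed (or modelled) mean profile.
    pert = theta_pert - theta_pert.mean(axis=(1, 2), keepdims=True)
    return pert + sounding_profile[:, None, None]

nz, ny, nx = 20, 50, 50
theta_pert = np.random.randn(nz, ny, nx)       # stand-in retrieved perturbation field
sounding = 300.0 + 3.0 * np.arange(nz)         # stand-in potential temperature profile
theta_full = add_horizontal_mean(theta_pert, sounding)
print(theta_full.mean(axis=(1, 2))[:3])        # equals the sounding at each level
```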


2017 ◽  
Vol 41 (S1) ◽  
pp. S27-S27
Author(s):  
N. Maric ◽  
S. Andric ◽  
A. Raballo ◽  
M. Rojnic Kuzman ◽  
J. Klosterkötter ◽  
...  

In the last two decades, both early detection (ED) and early intervention (EI) programs and services have gradually become important and innovative components of contemporary mental health care. However, it is unclear whether ED/EI programs have consistently been implemented throughout Europe. Here, we report the results of the EPA Survey on ED/EI Programs in Europe in 2016. A 16-item questionnaire was sent to representatives (presidents and secretariats) of 40 EPA National Societies/Associations (NPAs), representing 37 countries. The representatives were also invited to recommend a person who could provide additional information about ED/EI services/programs in the country. The response rate was 59.4% (22 NPAs). Fifteen out of 28 NPAs were from developed economies, and 7 out of 8 from economies in transition. ED/EI services have been implemented in 54.5% of the included countries, with a mean duration of 10.0 ± 4.9 years. In most cases, neither ED was separated from EI, nor adult from adolescent services. National plans to develop ED/EI were reported in four countries. Although national guidelines for schizophrenia exist in most of the countries (73.9%), specific chapters focusing on ED/EI and/or at-risk mental states were not included in the majority of them. The duration of untreated psychosis was unknown in 63.6%. Among those who gave an estimate, it ranged from 12 to 100 weeks (median in weeks: 33 for developed economies; 44 for economies in transition). The fields of ED/EI have been unequally developed across Europe. Still, many NPAs are without development plans. EPA and its Sections should address the identified gaps and suggest how to harmonize services for the full range of assessments and interventions. Disclosure of interest: The authors have not supplied their declaration of competing interest.


Author(s):  
Jérôme Naturel ◽  
Thomas Epsztein ◽  
Thierry Gavouyère

Unbonded flexible pipes used for offshore field development are usually composed of different layers of polymer and steel, each layer having a specific function during the product service life. This multi-layer characteristic makes it possible to tailor the cross-section of the pipe to meet project-specific requirements and to optimize the cost of the product for each application. In particular, the main function of the thermoplastic pressure sheath is to guarantee the sealing of the product. The material and the thickness of this pressure sheath mainly depend on the pressure and temperature of the bore, and the design choice is driven by the creep of the sheath into the interstices of the pressure vault: it must be limited with regard to sheath thickness reduction, as per the API 17J design requirements. Consequently, when developing a new material for pressure sheath applications, early prediction of the creep performance over the full range of the targeted application is crucial. For this reason, before any full-scale test, a test campaign is required to evaluate the creep of the material on small-scale material samples. In this development context, the use of advanced finite-element simulation for predicting the creep behavior is quite useful to amplify the benefit of test campaign results and to give additional information on material performance. As long as the modelling is validated by correlation with small-scale tests, the numerical tool can be used to multiply virtual creep test configurations. This paper will focus on the numerical challenges of developing such a creep simulation, based on the ABAQUS commercial software. Firstly, the identification of the viscoelastoplastic parameters of the polymer material law will be presented. This material law is a nonlinear viscoelastoplastic model consisting of multiple networks connected in parallel. The number of parameters of such a law is not limited, but a compromise between law precision and identification robustness must be found. Then, the correlation process between small-scale tests and finite-element results will be detailed. In particular, the influence of the experimental protocol has to be determined. Finally, a sensitivity study of the most influential parameters, based on a parametric FEA model, will be presented to highlight the benefit of such a model. This benefit is not limited to correlation with small-scale tests: as the material model is intrinsic, the same law can also be used to study the creep behavior in very different geometrical configurations.
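As a hedged illustration of the parallel-network idea (not the paper's calibrated ABAQUS law), the sketch below creeps the simplest possible assembly, an elastic spring in parallel with a Maxwell network, under constant stress; the material parameters are placeholders.

```python
# Minimal "parallel network" creep model: both networks share the strain and
# their stresses add up to the applied constant stress. Eliminating the
# Maxwell stress gives d(eps)/dt = (sigma - E_a*eps) / (tau*(E_a + E_b)).
import numpy as np

def creep_strain(sigma, E_a=800.0, E_b=1200.0, tau=50.0, t_end=500.0, dt=0.1):
    """Strain history under constant stress for a spring in parallel with a Maxwell network."""
    t = np.arange(0.0, t_end, dt)
    eps = np.empty_like(t)
    eps[0] = sigma / (E_a + E_b)     # instantaneous elastic response
    for i in range(1, len(t)):
        eps[i] = eps[i - 1] + dt * (sigma - E_a * eps[i - 1]) / (tau * (E_a + E_b))
    return t, eps

t, eps = creep_strain(sigma=10.0)
print(eps[0], eps[-1])   # creeps from sigma/(E_a+E_b) toward sigma/E_a
```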


SPE Journal ◽  
2020 ◽  
Vol 25 (06) ◽  
pp. 3317-3331
Author(s):  
Pipat Likanapaisal ◽  
Hamdi A. Tchelepi

Summary In general, a probabilistic framework for a modeling process involves two uncertainty spaces: model parameters and state variables (or predictions). The two uncertainty spaces in reservoir simulation are connected by the governing equations of flow and transport in porous media in the form of a reservoir simulator. In a forward problem (or a predictive run), the reservoir simulator directly maps the uncertainty space of the model parameters to the uncertainty space of the state variables. Conversely, an inverse problem (or history matching) aims to improve the descriptions of the model parameters by using the measurements of state variables. However, we cannot solve the inverse problem directly in practice. Numerous algorithms, including Kriging-based inversion and the ensemble Kalman filter (EnKF) and its many variants, simplify the system by using a linear assumption. The purpose of this paper is to improve the integration of measurement errors in the history-matching algorithms that rely on the linear assumption. The statistical moment equation (SME) approach with the Kriging-based inversion algorithm is used to illustrate several practical examples. In the Motivation section, an example of pressure conditioning involves a measurement that contains no additional information because of its significant measurement error. This example highlights the inadequacy of the current method, which underestimates the conditional uncertainty for both model parameters and predictions. Accordingly, we derive a new formula that recognizes the absence of additional information and preserves the unconditional uncertainty. We believe this to be the consistent way to integrate measurement errors. Other examples are used to validate the new formula with both linear and nonlinear (i.e., the saturation equation) problems, with single and multiple measurements, and with different configurations of measurement errors. For broader applications, we also develop an equivalent formula for algorithms in the Monte Carlo simulation (MCS) approach, such as EnKF and ensemble smoother (ES).
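The limiting behaviour argued for above can be illustrated with a minimal linear-Gaussian (Kalman/Kriging-style) update, not the paper's SME formulation: as the measurement-error variance grows, the gain vanishes and the posterior reverts to the prior, i.e., a very noisy measurement carries no additional information. All numbers below are hypothetical.

```python
# One-parameter, one-measurement linear update under increasing measurement error.
def linear_update(prior_mean, prior_var, cross_cov, obs_var, meas_err_var, innovation):
    """Kriging/Kalman-style conditioning of one model parameter on one measurement."""
    gain = cross_cov / (obs_var + meas_err_var)
    post_mean = prior_mean + gain * innovation
    post_var = prior_var - gain * cross_cov
    return post_mean, post_var

prior_mean, prior_var = 0.25, 0.04   # hypothetical model-parameter statistics
cross_cov, obs_var = 0.15, 1.00      # covariances supplied by the forward model
innovation = 0.8                     # measured minus predicted state variable

for meas_err_var in [0.01, 1.0, 100.0, 1e6]:
    m, v = linear_update(prior_mean, prior_var, cross_cov, obs_var, meas_err_var, innovation)
    print(f"R={meas_err_var:>9}: posterior mean={m:.4f}, variance={v:.4f}")
# As R grows, the gain vanishes and (mean, variance) return to the prior values.
```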


2017 ◽  
Vol 56 (2) ◽  
pp. 263-282 ◽  
Author(s):  
Maximilian Maahn ◽  
Ulrich Löhnert

Abstract Retrievals of ice-cloud properties from cloud-radar observations are challenging because the retrieval methods are typically underdetermined. Here, the authors investigate whether additional information can be obtained from higher-order moments of the radar Doppler spectrum, such as skewness and kurtosis, as well as from the slopes of the Doppler peak. To estimate this additional information content quantitatively, a generalized Bayesian retrieval framework based on optimal estimation is developed. Real and synthetic cloud-radar observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC) dataset obtained around Barrow, Alaska, are used in this study. The state vector consists of the microphysical (particle-size distribution, mass–size relation, and cross section–area relation) and kinematic (vertical wind and turbulence) quantities required to forward model the moments and slopes of the radar Doppler spectrum. It is found that, for a single radar frequency, more information can be retrieved when including higher-order moments and slopes than when using only reflectivity and mean Doppler velocity but two radar frequencies. When using all moments and slopes with two or even three frequencies, the uncertainties of all state variables, including the mass–size relation, can be considerably reduced with respect to the prior knowledge.
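A common way to quantify additional information content in an optimal-estimation retrieval is the degrees of freedom for signal, the trace of the averaging kernel. The sketch below computes it for two hypothetical observation vectors of different size; the Jacobians and covariances are illustrative placeholders, not the ISDAC forward model.

```python
# Degrees of freedom for signal in a linear optimal-estimation retrieval:
# A = (K^T S_e^-1 K + S_a^-1)^-1 K^T S_e^-1 K, and dfs = trace(A).
import numpy as np

def degrees_of_freedom(K, S_a, S_e):
    """Trace of the averaging kernel for Jacobian K, prior S_a, error covariance S_e."""
    Se_inv = np.linalg.inv(S_e)
    Sa_inv = np.linalg.inv(S_a)
    gain_core = K.T @ Se_inv @ K
    A = np.linalg.inv(gain_core + Sa_inv) @ gain_core
    return np.trace(A)

n_state, n_obs_low, n_obs_high = 5, 2, 6     # e.g. two moments vs. all moments and slopes
rng = np.random.default_rng(1)
S_a = np.eye(n_state)                        # prior covariance of the state vector
K_low = rng.normal(size=(n_obs_low, n_state))
K_high = rng.normal(size=(n_obs_high, n_state))

print("dfs, reflectivity + mean Doppler velocity:", degrees_of_freedom(K_low, S_a, 0.1 * np.eye(n_obs_low)))
print("dfs, all moments and slopes:              ", degrees_of_freedom(K_high, S_a, 0.1 * np.eye(n_obs_high)))
```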

