Formalizing falsification for theories of consciousness across computational hierarchies

2021 ◽  
Vol 2021 (2) ◽  
Author(s):  
Jake R Hanson ◽  
Sara I Walker

Abstract The scientific study of consciousness is currently undergoing a critical transition in the form of a rapidly evolving scientific debate over whether currently proposed theories can be assessed for their scientific validity. At the forefront of this debate is Integrated Information Theory (IIT), widely regarded as the preeminent theory of consciousness because it quantifies subjective experience in a scalar mathematical measure, Φ, that is in principle measurable. Epistemological issues in the form of the “unfolding argument” have provided a concrete refutation of IIT by demonstrating that it permits functionally identical systems to differ in their predicted consciousness. The implication is that IIT, and any other theory based on a physical system’s causal structure, may already be falsified even in the absence of experimental refutation. However, many of the arguments surrounding the epistemological foundations of falsification, such as the unfolding argument, have so far been too abstract to determine the full scope of their implications. Here, we make these abstract arguments concrete by providing a simple example of functionally equivalent machines, realizable with table-top electronics, that take the form of isomorphic digital circuits with and without feedback. This allows us to demonstrate explicitly the different levels of abstraction at which a theory of consciousness can be assessed. Within this computational hierarchy, we show how IIT is simultaneously falsified at the finite-state automaton level and unfalsifiable at the combinatorial-state automaton level. We use this example to illustrate a more general set of falsification criteria for theories of consciousness: to avoid being already falsified, or conversely unfalsifiable, scientific theories of consciousness must be invariant with respect to changes that leave the inference procedure fixed at a particular level in a computational hierarchy.
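The flavour of the unfolding argument can be illustrated with a toy example (a minimal sketch of my own, not the circuits from the paper): two parity computers with identical input-output behaviour, one recurrent and one feedforward. At the finite-state-automaton level they are indistinguishable, yet a causal-structure measure such as Φ can assign them different values because only the first contains feedback.

```python
from functools import reduce
from operator import xor

def recurrent_parity(bits):
    """Feedback circuit: a single XOR gate whose output is routed back
    through a register, so the causal structure is recurrent."""
    state, outputs = 0, []
    for x in bits:
        state ^= x          # feedback: the new state depends on the old state
        outputs.append(state)
    return outputs

def unfolded_parity(bits):
    """Feedforward 'unfolded' circuit: each output is computed directly
    from the input prefix, with no internal feedback loop."""
    return [reduce(xor, bits[:k + 1]) for k in range(len(bits))]

# Functionally identical at the input/output (FSA) level,
# yet their causal structures differ.
seq = [1, 0, 1, 1, 0, 1]
assert recurrent_parity(seq) == unfolded_parity(seq)
```

Any fixed-length input-output mapping realized by the recurrent circuit can be reproduced this way by a purely feedforward one, which is the crux of the argument.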

VLSI Design ◽  
1995 ◽  
Vol 3 (3-4) ◽  
pp. 249-265 ◽  
Author(s):  
Zafar Hasan ◽  
Maciej J. Ciesielski

Here we present a new method for the decomposition of a Finite State Machine (FSM) into a network of interacting FSMs and a framework for the functional verification of the FSM network at different levels of abstraction. The problem of decomposition is solved by output partitioning and state space decomposition using a multiway graph partitioning technique. The number of submachines is determined dynamically during the partitioning process. The verification algorithm can be used to verify (a) the result of FSM decomposition on a behavioral level, (b) the encoded FSM network, and (c) the FSM network after logic optimization. Our verification technique is based on an efficient enumeration-simulation method which involves traversal of the state transition graph of the prototype machine and simulation of the decomposed machine network. Both the decomposition and verification/simulation algorithms have been implemented as part of an interactive FSM synthesis system and tested on a set of benchmark examples.
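The enumeration-simulation idea behind the verification step can be sketched as follows (a hand-built example of my own, not the authors' implementation): a prototype mod-4 counter FSM is checked against a decomposition into two interacting 2-state submachines by simulating both in lock-step over all short input sequences.

```python
from itertools import product

# Prototype machine: a mod-4 counter over the input alphabet {0, 1} (1 = count).
def proto_step(s, x):
    return (s + x) % 4

# Decomposed network: two interacting 2-state submachines.
# M1 holds the low bit; M2 holds the high bit and toggles when M1 wraps.
def network_step(state, x):
    lo, hi = state
    carry = x and lo == 1        # interaction signal sent from M1 to M2
    lo = lo ^ x
    hi = hi ^ (1 if carry else 0)
    return (lo, hi)

def decode(state):
    lo, hi = state
    return lo + 2 * hi

# Enumeration-simulation check: traverse every input sequence up to a bounded
# length from the reset states and compare the decoded network behaviour
# against the prototype's state transition graph.
def equivalent(depth=6):
    for n in range(1, depth + 1):
        for seq in product([0, 1], repeat=n):
            s, t = 0, (0, 0)
            for x in seq:
                s, t = proto_step(s, x), network_step(t, x)
                if decode(t) != s:
                    return False
    return True

assert equivalent()
```

For machines this small, exhaustive traversal suffices; the point of the paper's method is to make the same prototype-versus-network comparison efficient at behavioral, encoded, and post-optimization levels.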


2021 ◽  
Author(s):  
Michael H. Herzog ◽  
Aaron Schurger ◽  
Adrien Doerig

We recently put forward an argument, the Unfolding Argument (UA), that integrated information theory (IIT) and other causal structure theories are either already falsified or unfalsifiable, which provoked significant criticism. It seems that we and the critics agree that the main question in this debate is whether pure first-person experience, independent of third-person measurements, is a sufficient foundation for theories of consciousness. Here, we show, first, that the use of pure first-person experience relies on non-scientific, neo-Cartesian reasoning. Second, even if this reasoning is accepted, it leads to consciousness being entirely epiphenomenal, with absolutely no causal power. Third, consciousness would be fully detached from the content of reports about subjective experience: a human may report perceiving X while the content of their consciousness is Y. Hence, IIT and other causal structure theories end up in a form of dissociative epiphenomenalism, invalidating pure first-person experience as a viable foundation.


2009 ◽  
Vol 30 (5) ◽  
pp. 1343-1369 ◽  
Author(s):  
DANNY CALEGARI ◽  
KOJI FUJIWARA

Abstract A function on a discrete group is weakly combable if its discrete derivative with respect to a combing can be calculated by a finite-state automaton. A weakly combable function is bicombable if it is Lipschitz in both the left- and right-invariant word metrics. Examples of bicombable functions on word-hyperbolic groups include: (1) homomorphisms to ℤ; (2) word length with respect to a finite generating set; (3) most known explicit constructions of quasimorphisms (e.g. the Epstein–Fujiwara counting quasimorphisms). We show that bicombable functions on word-hyperbolic groups satisfy a central limit theorem: if $\overline{\phi}_n$ is the value of φ on a random element of word length n (in a certain sense), there are E and σ for which there is convergence in the sense of distribution $n^{-1/2}(\overline{\phi}_n - nE) \to N(0,\sigma)$, where N(0,σ) denotes the normal distribution with standard deviation σ. As a corollary, we show that if $S_1$ and $S_2$ are any two finite generating sets for G, there is an algebraic number $\lambda_{1,2}$ depending on $S_1$ and $S_2$ such that almost every word of length n in the $S_1$ metric has word length $n\cdot\lambda_{1,2}$ in the $S_2$ metric, with error of size $O(\sqrt{n})$.
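The central limit theorem can be probed numerically in the simplest word-hyperbolic case (a sketch of my own, not from the paper): in the free group F₂, the exponent sum of the generator a is a homomorphism to ℤ and hence bicombable, and here E = 0 by symmetry. Sampling uniformly random reduced words of length n and rescaling by $n^{-1/2}$ should give an approximately normal distribution.

```python
import random
import statistics

# Generators of the free group F2 (word-hyperbolic): a, A = a^-1, b, B = b^-1.
GENS = "aAbB"
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}
PHI = {"a": 1, "A": -1, "b": 0, "B": 0}   # homomorphism F2 -> Z: exponent sum of a

def random_reduced_word(n, rng):
    """Non-backtracking random walk: a uniformly random reduced word of length n."""
    word = [rng.choice(GENS)]
    for _ in range(n - 1):
        word.append(rng.choice([g for g in GENS if g != INV[word[-1]]]))
    return word

rng = random.Random(0)
n, samples = 400, 2000
values = [sum(PHI[g] for g in random_reduced_word(n, rng)) for _ in range(samples)]
scaled = [v / n ** 0.5 for v in values]   # n^{-1/2}(phi_n - n*E) with E = 0

print(round(statistics.mean(scaled), 2), round(statistics.stdev(scaled), 2))
```

With this rescaling the empirical mean sits near 0 and the empirical standard deviation stabilizes as n grows, consistent with convergence to some N(0, σ).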


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5136
Author(s):  
Bassem Ouni ◽  
Christophe Aussagues ◽  
Saadia Dhouib ◽  
Chokri Mraidha

Sensor-based digital systems for Instrumentation and Control (I&C) of nuclear reactors are quite complex in terms of architecture and functionality. A high-level framework is needed to pre-evaluate the system’s performance, check the consistency between different levels of abstraction, and address the concerns of various stakeholders. In this work, we integrate the development process of I&C systems and the involvement of stakeholders within a model-driven methodology. The proposed approach introduces a new architectural framework that defines various concepts allowing system implementation and encompassing the different development phases, all actors, and system concerns. In addition, we define a new I&C Modeling Language (ICML) and a set of methodological rules needed to build the different architectural framework views. To illustrate the methodology, we extend the open-source systems engineering tool Eclipse Papyrus to carry out a number of automation and verification steps at different levels of abstraction. The modeling capabilities of the architectural framework are validated on a realistic use-case system for the protection of nuclear reactors. The proposed framework is able to reduce the overall system development cost by improving the links between different specification tasks and providing a high abstraction level of system components.


2016 ◽  
Author(s):  
Arnold Gehlen

Moral and Hypermoral, Arnold Gehlen's final book-length publication, is an elaboration on basic theses initially brought forward in Gehlen's anthropological magnum opus "Der Mensch". In this respect, this draft of a "pluralistic ethics" is conceived as both an elaboration on and a concretion of his doctrine of man. In this book, Gehlen set himself the task of combining anthropology, behavioral science, and sociology in a “genealogy of morality”, thus exposing four interdependent forms of ethics: from an ethos of "reciprocity" via “eudaimonism” and “humanitarianism” to an ethos of institutions, including the state. Gehlen took a decisive stand against the "abstract ethics of the Enlightenment": systematically, his book is primarily an anthropological justification of ethics, conceived as a "majority of moral authorities" and "social regulations." These are not subject to an evolutionary interpretation, that is, one of progress from an ethics of proximity to a world-encompassing morality. Moralities, whether based on instinct or arising from the needs of particular institutions, are always culturally shaped and set at different levels of abstraction. With its broad scope, the book belongs in the context of the basic philosophical-sociological research known as philosophical anthropology.


2013 ◽  
Vol 2013 ◽  
pp. 1-14 ◽  
Author(s):  
Yanlong Sun ◽  
Hongbin Wang

According to the data-frame theory, sensemaking is a macrocognitive process in which people try to make sense of or explain their observations by processing a number of explanatory structures called frames until the observations and frames become congruent. During the sensemaking process, the parietal cortex has been implicated in various cognitive tasks for the functions related to spatial and temporal information processing, mathematical thinking, and spatial attention. In particular, the parietal cortex plays important roles by extracting multiple representations of magnitudes at the early stages of perceptual analysis. By a series of neural network simulations, we demonstrate that the dissociation of different types of spatial information can start early with a rather similar structure (i.e., sensitivity on a common metric), but accurate representations require specific goal-directed top-down controls due to the interference in selective attention. Our results suggest that the roles of the parietal cortex rely on the hierarchical organization of multiple spatial representations and their interactions. The dissociation and interference between different types of spatial information are essentially the result of the competition at different levels of abstraction.

