A General Theory of Information Cost Incurred by Successful Search

Author(s):  
William A. Dembski ◽  
Winston Ewert ◽  
Robert J. Marks II
2022 ◽  
Vol 6 (1) ◽  
pp. 7
Author(s):  
Rao Mikkilineni

All living beings use autopoiesis and cognition to manage their “life” processes from birth through death. Autopoiesis enables them to use the specification in their genomes to instantiate themselves through matter and energy transformations. They reproduce, replicate, and manage their stability. Cognition allows them to process information into knowledge and use it to manage the interactions among the constituent parts within the system and its interaction with the environment. Currently, various attempts are underway to make modern computers mimic the resilience and intelligence of living beings using symbolic and sub-symbolic computing. We discuss here the limitations of classical computer science for implementing autopoietic and cognitive behaviors in digital machines. We propose a new architecture applying the general theory of information (GTI) and pave the way to make digital automata mimic living organisms by exhibiting autopoietic and cognitive behaviors. The new science, based on GTI, asserts that information is a fundamental constituent of the physical world and that living beings convert information into knowledge by means of physical structures that use matter and energy. Our proposal uses the tools derived from GTI to derive a common knowledge representation from existing symbolic and sub-symbolic computing structures to implement autopoietic and cognitive behaviors.
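As a purely illustrative aside, a common knowledge representation of the kind proposed here might resemble the minimal Python sketch below, in which a single node carries both a symbolic assertion and a sub-symbolic confidence score so that rule-based and learned components can be read through one interface. Every name in the sketch is hypothetical; none of it is taken from the paper.

```python
# Illustrative sketch only: a "knowledge node" combining a symbolic triple
# (entity, attribute, value) with a sub-symbolic confidence score.
from dataclasses import dataclass, field

@dataclass
class KnowledgeNode:
    entity: str                      # what the knowledge is about
    attribute: str                   # symbolic attribute, e.g. "cpu_load"
    value: object                    # symbolic value or measurement
    confidence: float = 1.0          # sub-symbolic score, e.g. from a model
    relations: dict = field(default_factory=dict)  # links to other nodes

    def assert_fact(self):
        """Return the node as a symbolic triple plus its confidence."""
        return (self.entity, self.attribute, self.value, self.confidence)

# Example: a sensor reading scored by a learned anomaly detector.
cpu = KnowledgeNode("server-42", "cpu_load", 0.93, confidence=0.87)
print(cpu.assert_fact())   # ('server-42', 'cpu_load', 0.93, 0.87)
```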


Author(s):  
Mark Burgin

The general theory of information is a synthetic approach that organizes and encompasses all main directions in information theory. It is developed on three levels: conceptual, methodological, and theoretical. On the conceptual level, the concept of information is purified and information operations are separated and described. On the methodological level, it is formulated as a system of principles explaining what information is and how to measure it. On the theoretical level, mathematical models of information are constructed and studied. The goal of this paper is to clarify the concept of information and discuss its mathematical models, establishing relations with physics as the most developed science.
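For a concrete instance of the methodological question of how information is measured, classical Shannon entropy (one of the directions the general theory of information is said to encompass) can serve as an example; the formula below is the standard Shannon definition, not a construct specific to GTI.

```latex
% Shannon entropy: the expected information content of a discrete source X
% with outcomes x_1, ..., x_n occurring with probabilities p_1, ..., p_n.
H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i .
% A fair coin (p_1 = p_2 = 1/2) gives H(X) = 1 bit; a certain outcome gives 0 bits.
```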


Author(s):  
Rao Mikkilineni ◽  
Mark Burgin

The General Theory of Information (GTI) tells us that information is represented, processed, and communicated using physical structures. The physical universe is made up of structures combining matter and energy. According to GTI, “Information is related to knowledge as energy is related to matter.” GTI also provides tools for dealing with the transformation of information and knowledge. We present here the application of these tools to the design of digital autopoietic machines with higher efficiency, resiliency, and scalability than information processing systems based on Turing machines. We discuss the utilization of these machines for building autopoietic and cognitive applications in a multi-cloud infrastructure.
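To illustrate what autopoietic behavior in a digital system can mean in practice, the following minimal Python sketch shows a manager that rebuilds a failing component from its specification. It is an invented toy that only mimics self-maintenance, not the authors' structural-machine design, and all class and function names are hypothetical.

```python
# Illustrative sketch only: an "autopoietic manager" that monitors a
# computational component and re-instantiates it from its specification
# (its "genome") when it degrades.
import random

class Component:
    def __init__(self, spec):
        self.spec = spec          # how to rebuild this component
        self.healthy = True

    def run(self):
        # Simulate occasional failure of the managed process.
        if random.random() < 0.3:
            self.healthy = False
        return self.healthy

class AutopoieticManager:
    def __init__(self, spec):
        self.spec = spec
        self.component = Component(spec)

    def supervise(self, cycles=5):
        for cycle in range(cycles):
            if not self.component.run():
                # Self-repair: rebuild the component from its specification.
                self.component = Component(self.spec)
                print(f"cycle {cycle}: component failed, re-instantiated from spec")
            else:
                print(f"cycle {cycle}: component healthy")

AutopoieticManager(spec={"service": "worker", "replicas": 1}).supervise()
```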


Author(s):  
Waseem Afzal

Information imperfections of various kinds are present around us, and information asymmetry is one such kind. The phrase “information imperfection” denotes information that is less than ideal for many conceivable reasons. The concept of “information asymmetry” is different and indicates the presence of more information at one end of an informational distribution. The purpose of this chapter is not to provide a literature review of information asymmetry but to (1) build on previous work, (2) suggest a set of concepts, and (3) describe examples of information asymmetries in order to propose a framework for a general theory of information asymmetry. To this end, this chapter provides a brief overview of the concepts of information asymmetry and information imperfection. It also proposes a set of four concepts considered to be of importance in understanding information asymmetry; describes two major categories of information asymmetries; discusses different types of informational disturbances; and finally discusses the potential effects of information asymmetries.
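As one hedged illustration (not taken from the chapter), information asymmetry can be made concrete by comparing the uncertainty two parties face about the same quantity; the toy Python example below does this with Shannon entropy, and both the scenario and the numbers are invented.

```python
# Illustrative only: quantify asymmetry as the difference in uncertainty
# (Shannon entropy) that two parties have about the same quantity.
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A seller knows a product's quality exactly; the buyer only knows that
# "good" and "bad" are equally likely.
seller_belief = [1.0]        # certainty: zero residual uncertainty
buyer_belief = [0.5, 0.5]    # maximal uncertainty over two outcomes

asymmetry_bits = entropy(buyer_belief) - entropy(seller_belief)
print(f"information asymmetry: {asymmetry_bits} bit(s)")   # 1.0 bit
```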


Author(s):  
William A. Dembski ◽  
Winston Ewert ◽  
Robert J. Marks II

Needle-in-the-haystack problems look for small targets in large spaces. In such cases, blind search stands no hope of success. Conservation of information dictates that any search technique will work, on average, only as well as blind search. Success therefore requires an assisted search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level “search for a search” is any easier than the original search. We prove two results: (1) the Horizontal No Free Lunch Theorem, which shows that the average relative performance of searches never exceeds that of unassisted or blind searches, and (2) the Vertical No Free Lunch Theorem, which shows that the difficulty of searching for a successful search increases exponentially with respect to the minimum allowable active information being sought.
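For readers unfamiliar with the term, active information is defined in the authors' earlier conservation-of-information work roughly as follows, with p the success probability of blind (uniform) search over the space and q the success probability of the assisted search; the worked numbers below are illustrative only.

```latex
% Endogenous, exogenous, and active information for a target T in a space \Omega:
% p is the success probability of blind (uniform) search, q that of the assisted search.
I_{\Omega} = -\log_2 p, \qquad
I_{S} = -\log_2 q, \qquad
I_{+} = I_{\Omega} - I_{S} = \log_2 \frac{q}{p}.
% Illustrative numbers: if p = 2^{-20} and q = 2^{-10}, then the assistance
% contributes I_{+} = 10 bits of active information.
```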

