From Data Processing to Knowledge Processing: Working with Operational Schemas by Autopoietic Machines

2021 ◽  
Vol 5 (1) ◽  
pp. 13
Author(s):  
Mark Burgin ◽  
Rao Mikkilineni

Knowledge processing is an important feature of intelligence in general and of artificial intelligence in particular. To develop computing systems that work with knowledge, it is necessary to elaborate means of working with knowledge representations (as opposed to data), because knowledge is an abstract structure. There are different forms of knowledge representation derived from data. One of the basic forms is called a schema, which can belong to one of three classes: operational, descriptive, and representation schemas. The goal of this paper is the development of theoretical and practical tools for processing operational schemas. To achieve this goal, we use schema representations elaborated in the mathematical theory of schemas and employ structural machines as a powerful theoretical tool for modeling parallel and concurrent computational processes. We describe the schema of autopoietic machines as physical realizations of structural machines. An autopoietic machine is a technical system capable of regenerating, reproducing, and maintaining itself through the production, transformation, and destruction of its components and the networks of downstream processes contained in them. We present the theory and practice of designing and implementing autopoietic machines as information processing structures integrating both symbolic computing and neural networks. Autopoietic machines use knowledge structures that capture the behavioral evolution of the system and its interactions with the environment to maintain stability by counteracting fluctuations.
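As a purely illustrative sketch (not the paper's formalism), the following Python fragment shows one way an operational schema could be rendered as named operations wired over data nodes, together with a minimal autopoietic maintenance step in which the system restores lost components from a blueprint of itself; all names, types, and the firing rule are assumptions of the example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical, minimal rendering of an operational schema: named operations
# over data nodes. Illustrative only; not the mathematical theory of schemas.

@dataclass
class Operation:
    name: str
    inputs: List[str]        # data nodes consumed
    outputs: List[str]       # data nodes produced
    apply: Callable[[Dict[str, object]], Dict[str, object]]

@dataclass
class OperationalSchema:
    nodes: Dict[str, object] = field(default_factory=dict)
    operations: List[Operation] = field(default_factory=list)

    def step(self) -> None:
        """Fire every operation whose inputs are available -- a crude
        stand-in for one parallel step of a structural machine."""
        for op in self.operations:
            if all(k in self.nodes for k in op.inputs):
                self.nodes.update(op.apply({k: self.nodes[k] for k in op.inputs}))

    def regenerate(self, blueprint: List[Operation]) -> None:
        """Toy autopoietic maintenance: restore any operation that has been
        lost, using the schema's own blueprint of itself."""
        present = {op.name for op in self.operations}
        for op in blueprint:
            if op.name not in present:
                self.operations.append(op)

# Usage: a schema that normalizes a reading, then thresholds it.
blueprint = [
    Operation("normalize", ["raw"], ["norm"],
              lambda d: {"norm": d["raw"] / 100.0}),
    Operation("classify", ["norm"], ["label"],
              lambda d: {"label": "high" if d["norm"] > 0.5 else "low"}),
]
schema = OperationalSchema(nodes={"raw": 72.0}, operations=list(blueprint))
schema.operations.pop()          # simulate loss of a component
schema.regenerate(blueprint)     # the machine restores itself
schema.step()
schema.step()
print(schema.nodes["label"])     # -> "high"
```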


2021 ◽  
pp. medethics-2020-106820 ◽  
Author(s):  
Juan Manuel Durán ◽  
Karin Rolanda Jongsma

The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy, and compromised trust arise with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we argue that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By showing that more transparency in algorithms is not always necessary, and that computational processes are indeed methodologically opaque to humans, we argue that the reliability of algorithms provides reasons for trusting the outcomes of medical artificial intelligence (AI). To this end, we explain how computational reliabilism, which does not require transparency yet supports the reliability of algorithms, justifies the belief that the results of medical AI are to be trusted. We also argue that several ethical concerns remain with black box algorithms even when their results are trustworthy. Having justified knowledge from reliable indicators is therefore necessary but not sufficient for normatively justifying physicians to act. This means that deliberation about the results of reliable algorithms is required to determine what a desirable action is. Understood this way, such challenges should not lead us to dismiss the use of black box algorithms altogether, but should instead inform the way in which these algorithms are designed and implemented. When physicians are trained to acquire the necessary skills and expertise, and collaborate with medical informaticians and data scientists, black box algorithms can contribute to improving medical care.


10.2196/15511 ◽  
2019 ◽  
Vol 21 (11) ◽  
pp. e15511 ◽  
Author(s):  
Bach Xuan Tran ◽  
Son Nghiem ◽  
Oz Sahin ◽  
Tuan Manh Vu ◽  
Giang Hai Ha ◽  
...  

Background Artificial intelligence (AI)–based technologies are developing rapidly and have myriad applications in medicine and health care. However, there is a lack of comprehensive reporting on the productivity, workflow, topics, and research landscape of AI in this field. Objective This study aimed to evaluate the global development of scientific publications and to construct interdisciplinary research topics on the theory and practice of AI in medicine from 1977 to 2018. Methods We obtained bibliographic data and abstract contents of publications published between 1977 and 2018 from the Web of Science database. A total of 27,451 eligible articles were analyzed. Research topics were classified by latent Dirichlet allocation, and principal component analysis was used to identify the construct of the research landscape. Results The applications of AI have mainly affected clinical settings (enhanced prognosis and diagnosis, robot-assisted surgery, and rehabilitation), data science and precision medicine (collecting individual data for precision medicine), and policy making (raising ethical and legal issues, especially regarding privacy and confidentiality of data). However, AI applications have not been commonly used in resource-poor settings owing to limited infrastructure and human resources. Conclusions The application of AI in medicine has grown rapidly and focuses on three leading platforms: clinical practices, clinical material, and policies. AI might be one way to narrow the inequality in health care and medicine between developing and developed countries. Technology transfer and support from developed countries are essential measures for the advancement of AI applications in health care in developing countries.
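As a hedged illustration of the kind of pipeline described (not the authors' exact code or parameters), the sketch below classifies abstracts into topics with latent Dirichlet allocation and then projects the topic mixtures with principal component analysis using scikit-learn; the toy corpus and the choice of 10 topics and 2 components are assumptions made only for the example.

```python
# Illustrative topic-modeling pipeline in the spirit of the study's methods:
# LDA to classify research topics, then PCA to map the research landscape.
# Corpus, topic count, and component count are placeholder assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation, PCA

abstracts = [
    "deep learning improves diagnosis of diabetic retinopathy",
    "robot assisted surgery outcomes and rehabilitation",
    "privacy and confidentiality of patient data in machine learning",
    # ... in the study, ~27,451 Web of Science abstracts (1977-2018)
]

# Bag-of-words representation of the abstracts
vectorizer = CountVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(abstracts)

# Latent Dirichlet allocation assigns each abstract a mixture over topics
lda = LatentDirichletAllocation(n_components=10, random_state=0)
doc_topics = lda.fit_transform(X)          # shape: (n_docs, n_topics)

# Principal component analysis reveals the main axes ("constructs")
# along which topic mixtures vary across the literature
pca = PCA(n_components=2)
landscape = pca.fit_transform(doc_topics)  # shape: (n_docs, 2)

print(pca.explained_variance_ratio_)
```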


Author(s):  
Will W. K. Ma

The concept of knowledge sharing finds historical support in theories on the acquisition and creation of knowledge. While knowledge sharing depends on frequent and regular social interaction, the rapid development of the Internet has greatly expanded the social interaction that can take place among individuals at any time, in any place, and with any person. Through a review of the literature, this chapter defines online knowledge sharing, discusses the effects of intrinsic and extrinsic motivational factors in explaining online knowledge-sharing behavior, explores the various forms of knowledge sharing in different online learning environments, and reviews the measurement of online knowledge sharing. The chapter also discusses online knowledge-sharing issues that should be addressed in the future.


2015 ◽  
Vol 6 (1) ◽  
pp. 24-39 ◽  
Author(s):  
Max Talanov ◽  
Alexander Toschev

Turing's genius anticipated current research in the AI field by 65 years when he stated that the idea of intelligent machines "cannot be wholly ignored, because the idea of 'intelligence' is itself emotional rather than mathematical". This is the second article dedicated to the foundations of emotional thinking. In the first article, the authors created an overall picture and proposed a framework for computational emotional thinking, building on three foundations: from AI, the six-level model of thinking described in the book "The Emotion Machine"; from evolutionary psychology, the "Wheel of Emotions" model; and from neuroscience, Lovheim's neurotransmitter-based theory of emotions, the "Cube of Emotions". Based on the impact of neurotransmitters, the authors proposed a way to model emotional computing systems. The current work addresses three aspects left undescribed in the first article: appraisal (the algorithm and predicates by which an inbound stimulus is estimated in order to trigger the proper emotional response), coping (the way a human deals with the emotional state triggered by stimulus appraisal and with further thinking processes), and the impact of high-level emotions on the system and its computational processes.
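As a purely illustrative sketch (not the authors' appraisal algorithm), the code below maps three monoamine levels onto one of the eight basic affects at the corners of Lovheim's cube; the corner-to-emotion table follows the commonly cited form of the model, and the 0.5 threshold and stimulus handling are assumptions of the example.

```python
# Toy appraisal step in the spirit of Lovheim's "Cube of Emotions":
# three neurotransmitter axes (serotonin, dopamine, noradrenaline) are
# thresholded to low/high, and the resulting cube corner names an affect.
# Corner labels follow the commonly cited version of the model; the
# threshold and the stimulus-to-level mapping are illustrative only.

LOVHEIM_CORNERS = {
    # (serotonin, dopamine, noradrenaline): affect
    (0, 0, 0): "shame/humiliation",
    (0, 0, 1): "distress/anguish",
    (0, 1, 0): "fear/terror",
    (0, 1, 1): "anger/rage",
    (1, 0, 0): "contempt/disgust",
    (1, 0, 1): "surprise",
    (1, 1, 0): "enjoyment/joy",
    (1, 1, 1): "interest/excitement",
}

def appraise(serotonin: float, dopamine: float, noradrenaline: float,
             threshold: float = 0.5) -> str:
    """Estimate the emotional response to one inbound stimulus, given its
    effect on the three monoamine levels (each assumed to lie in [0, 1])."""
    corner = tuple(int(level > threshold)
                   for level in (serotonin, dopamine, noradrenaline))
    return LOVHEIM_CORNERS[corner]

# Usage: a stimulus that raises dopamine and noradrenaline but not serotonin
print(appraise(serotonin=0.2, dopamine=0.8, noradrenaline=0.9))  # anger/rage
```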


Author(s):  
Alma-Delia Cuevas-Rasgado ◽  
Adolfo Guzman-Arenas

Ontologies are becoming important repositories of information useful for business transactions and operations, since they are amenable to knowledge processing using artificial intelligence techniques. They offer the potential of amassing large amounts of relevant information, but until now the fusion or merging of ontologies, needed for knowledge buildup and its exploitation by machines, has been done manually or through computer-aided ontology editors. Thus, attaining large ontologies has been expensive and slow. This chapter offers a new, automatic method of joining two ontologies to obtain a third one. The method works well in spite of inconsistencies, redundancies, and differing granularity of information.
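The following toy sketch (not the chapter's actual method) illustrates the general task of automatically joining two small ontologies, represented here simply as dictionaries of concept relations, while dropping exact redundancies and recording direct conflicts for later resolution; the data structures and the conflict rule are assumptions of the example.

```python
# Toy ontology fusion: each ontology maps a concept to a set of
# (relation, target) pairs. Merging unions the concepts and relations,
# deduplicates redundancies automatically, and flags direct conflicts
# instead of failing. Illustration of the task, not the chapter's algorithm.

def merge_ontologies(a, b):
    """Return (merged ontology, list of detected conflicts)."""
    merged = {concept: set(rels) for concept, rels in a.items()}
    conflicts = []
    for concept, relations in b.items():
        merged.setdefault(concept, set())
        for relation, target in relations:
            # flag contradictory single-valued relations such as "is-a"
            existing = {t for r, t in merged[concept] if r == relation}
            if relation == "is-a" and existing and target not in existing:
                conflicts.append((concept, relation, existing, target))
            merged[concept].add((relation, target))
    return merged, conflicts

cats_a = {"cat": {("is-a", "mammal"), ("has-part", "tail")}}
cats_b = {"cat": {("is-a", "animal"), ("has-part", "tail")},   # coarser granularity
          "mammal": {("is-a", "animal")}}

merged, conflicts = merge_ontologies(cats_a, cats_b)
print(merged["cat"])   # union of relations; duplicate ("has-part", "tail") kept once
print(conflicts)       # [('cat', 'is-a', {'mammal'}, 'animal')]
```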

