A Liar’s Epistemology

Qui Parle ◽  
2021 ◽  
Vol 30 (1) ◽  
pp. 119-157
Author(s):  
Brett Zehner

Abstract This methodologically important essay aims to trace a genealogical account of Herbert Simon’s media philosophy and to contest the histories of artificial intelligence that overlook the organizational capacities of computational models. As Simon’s work demonstrates, humans’ subjection to large-scale organizations and divisions of labor is at the heart of artificial intelligence. As such, questions of procedures are key to understanding the power assumed by institutions wielding artificial intelligence. Most media-historical accounts of the development of contemporary artificial intelligence stem from the work of Warren S. McCulloch and Walter Pitts, especially the 1943 essay “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Yet Simon’s revenge is perhaps that reinforcement learning systems adopt his prescriptive approach to algorithmic procedures. Computer scientists criticized Simon for the performative nature of his artificially intelligent systems, mainly for his positivism, but he defended his positivism based on his belief that symbolic computation could stand in for any reality and in fact shape that reality. Simon was not looking to actually re-create human intelligence; he was using coercion, bad faith, and fraud as tactical weapons in the reordering of human decision-making. Artificial intelligence was the perfect medium for his explorations.

Author(s):  
Ephraim Nissan

Multimedia tools exist for visualizing argumentation, but the most advanced aspects of the computational modeling of arguments lie in the models and tools upstream of visualization: the visualization tools themselves are an interface. Computer models of argumentation come in three categories: logic-based (highly theoretical), probabilistic, and pragmatic ad hoc treatments. Theoretical formalisms of argumentation were either developed by logicists within artificial intelligence (and were implemented, often reusably beyond their original applications) or are rooted in the work of philosophers. We cite some such work but focus on tools that support argumentation visually. Argumentation arises in a wide spectrum of everyday situations, including professional ones. Computational models of argumentation have found application in tutoring systems, tools for marshalling legal evidence, and models of multiagent communication. Intelligent systems and other computer tools stand to benefit as well. Multimedia are applied to argumentation (in visualization tools) and are also a promising field of application for it (in tutoring systems). Network design could likewise benefit if communication is modeled using multiagent technology.


AI Magazine ◽  
2017 ◽  
Vol 38 (3) ◽  
pp. 25-36 ◽  
Author(s):  
Katie Atkinson ◽  
Pietro Baroni ◽  
Massimiliano Giacomin ◽  
Anthony Hunter ◽  
Henry Prakken ◽  
...  

The field of computational models of argument is emerging as an important aspect of artificial intelligence research. This reflects the recognition that if we are to develop robust intelligent systems, they must handle incomplete and inconsistent information in a way that emulates how humans tackle such a complex task. One of the key ways humans do this is through argumentation: either internally, by evaluating arguments and counterarguments, or externally, for instance by entering into a discussion or debate where arguments are exchanged. As we report in this review, recent developments in the field are leading to technology for artificial argumentation in the legal, medical, and e-government domains, and interesting tools for argument mining, debating technologies, and argumentation solvers are emerging.
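To make "argumentation solvers" concrete, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework, the kind of core inference such solvers perform. It is a minimal illustration only; the arguments and attack relation are invented, and the review is not tied to this particular algorithm.

```python
# A minimal sketch of an abstract argumentation solver: compute the
# grounded extension of a Dung-style framework by iterating the
# characteristic function from the empty set.

def grounded_extension(arguments, attacks):
    """Iteratively accept arguments all of whose attackers are defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            # An argument is acceptable once every attacker is defeated.
            if attackers <= defeated:
                accepted.add(a)
                # Everything an accepted argument attacks is defeated.
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# Hypothetical example: c attacks b, b attacks a.
args = {"a", "b", "c"}
atts = {("b", "a"), ("c", "b")}
print(grounded_extension(args, atts))  # {'a', 'c'}
```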


2020 ◽  
Author(s):  
Tore Pedersen ◽  
Christian Johansen

Artificial intelligence (AI) receives attention in the media as well as in academe and business. In media coverage and reporting, AI is predominantly described in contrasting terms: either the ultimate solution to all human problems or the ultimate threat to all human existence. In academe, computer scientists focus on developing systems that function, whereas philosophers theorize about the implications of this functionality for human life. At the interface between technology and philosophy, however, one imperative aspect of AI has yet to be articulated: How do intelligent systems make inferences? We use the overarching concept "Artificial Intelligent Behaviour," which includes both cognition/processing and judgment/behaviour. We argue that, owing to the complexity and opacity of artificial inference, systematic empirical studies of artificial intelligent behaviour need to be initiated, similar to those previously conducted on human cognition, judgment, and decision making. This would provide valid knowledge, beyond what current computer science methods can offer, about the judgments and decisions made by intelligent systems. Moreover, outside academe, in the public as well as the private sector, expertise in epistemology, critical thinking, and reasoning is crucial to ensure human oversight of artificial intelligent judgments and decisions, because only competent human insight into AI inference processes will ensure accountability. Such insight requires systematic studies of AI behaviour founded on the natural sciences and philosophy, as well as the employment of methodologies from the cognitive and behavioral sciences.
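As one concrete picture of such a systematic empirical study, the sketch below probes a trained classifier the way behavioural scientists probe human judges: one input factor is swept across controlled levels while the others are held fixed, and the model's graded judgments are recorded. The model, features, and factor levels are hypothetical stand-ins, not the authors' design.

```python
# A hedged sketch: treat a trained model as an experimental subject by
# systematically varying one input factor and recording its judgments.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Train an arbitrary stand-in model on synthetic data.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# "Experimental design": sweep factor 0 across levels, hold the rest at 0.
levels = np.linspace(-2, 2, 9)
probe = np.zeros((len(levels), 3))
probe[:, 0] = levels

# Record the model's graded judgment at each level.
judgments = model.predict_proba(probe)[:, 1]
for lvl, p in zip(levels, judgments):
    print(f"factor0={lvl:+.1f} -> P(positive)={p:.2f}")
```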


2019 ◽  
Vol 9 (2) ◽  
pp. 110 ◽  
Author(s):  
Meng-Leong HOW ◽  
Wei Loong David HUNG

Artificial intelligence-enabled adaptive learning systems (AI-ALS) are increasingly being deployed in education to address the learning needs of students. However, educational stakeholders are required by policy-makers to conduct an independent evaluation of an AI-ALS using a small sample size in a pilot study before the AI-ALS can be approved for large-scale deployment. Rather than simply taking on trust the information provided by the AI-ALS supplier, educational stakeholders need to independently understand the pattern of pedagogical characteristics that underlies the AI-ALS. Laudable efforts have been made by researchers to develop frameworks for the evaluation of AI-ALS. Nevertheless, these highly technical techniques often require advanced mathematical knowledge or computer programming skills. The extant literature lacks a more intuitive way for educational stakeholders, rather than computer scientists, to carry out an independent evaluation of an AI-ALS and understand how it could provide opportunities to educe the problem-solving abilities of students so that they can successfully learn the subject matter. This paper proffers an approach by which educational stakeholders can employ Bayesian networks to simulate predictive hypothetical scenarios with controllable parameters, to better inform themselves about the suitability of the AI-ALS for their students.
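A minimal sketch of the kind of Bayesian-network "what if" simulation proposed here might look as follows, assuming the pgmpy library. The network structure, node names, and probability tables are hypothetical illustrations, not the authors' model of any actual AI-ALS.

```python
# Hypothetical structure: prior knowledge and AI-ALS scaffolding both
# influence problem-solving, which in turn influences topic mastery.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Prior", "Solving"), ("Scaffold", "Solving"),
                         ("Solving", "Mastery")])

cpd_prior = TabularCPD("Prior", 2, [[0.6], [0.4]])        # low / high
cpd_scaffold = TabularCPD("Scaffold", 2, [[0.5], [0.5]])  # weak / strong
cpd_solving = TabularCPD(
    "Solving", 2,
    [[0.9, 0.6, 0.5, 0.2],   # P(Solving=low  | Prior, Scaffold)
     [0.1, 0.4, 0.5, 0.8]],  # P(Solving=high | Prior, Scaffold)
    evidence=["Prior", "Scaffold"], evidence_card=[2, 2])
cpd_mastery = TabularCPD(
    "Mastery", 2,
    [[0.8, 0.3],   # P(Mastery=low  | Solving)
     [0.2, 0.7]],  # P(Mastery=high | Solving)
    evidence=["Solving"], evidence_card=[2])
model.add_cpds(cpd_prior, cpd_scaffold, cpd_solving, cpd_mastery)

# Controllable-parameter scenario: strong scaffolding, low prior knowledge.
infer = VariableElimination(model)
print(infer.query(["Mastery"], evidence={"Scaffold": 1, "Prior": 0}))
```

A stakeholder can rerun the query with different evidence settings to compare hypothetical scenarios without any knowledge of the AI-ALS internals.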


Author(s):  
José-Antonio Cervantes ◽  
Luis-Felipe Rodríguez ◽  
Sonia López ◽  
Félix Ramos ◽  
Francisco Robles

There is a great variety of theoretical models of cognition whose main purpose is to explain the inner workings of the human brain. Researchers from areas such as neuroscience, psychology, and physiology have proposed these models. Nevertheless, most of them are based on empirical studies and on experiments with humans, primates, and rodents. In fields such as cognitive informatics and artificial intelligence, these cognitive models may be translated into computational implementations and incorporated into the architectures of intelligent autonomous agents (AAs). The main assumption in this work is thus that knowledge in those fields can serve as a design approach contributing to the development of intelligent systems capable of displaying very believable, human-like behaviors. Decision-making (DM) is one of the most investigated and most frequently implemented cognitive functions. The literature reports several computational models that enable AAs to make decisions that help achieve their personal goals and needs. However, most models disregard crucial aspects of human decision-making such as other agents' needs, ethical values, and social norms. In this paper, the authors present a set of criteria and mechanisms for developing a biologically inspired computational model of moral decision-making (MDM). To make the moral decision-making process believable, the authors propose a cognitive function that determines the importance of each criterion based on the mood and emotional state of the AA; the main objective of the model is to enable AAs to make decisions based on ethical and moral judgment.
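The central mechanism can be pictured as a weighted multi-criteria choice in which the weights are modulated by the agent's mood. The sketch below is a hypothetical illustration under that reading; the criteria, moods, and numbers are invented, not taken from the authors' model.

```python
# A hedged sketch: criterion weights depend on the agent's mood, and the
# agent picks the option with the best mood-weighted score.
from dataclasses import dataclass

CRITERIA = ("own_need", "others_need", "social_norm", "ethical_value")

# Hypothetical mood-dependent weightings of each criterion.
MOOD_WEIGHTS = {
    "calm":     {"own_need": 0.2, "others_need": 0.3, "social_norm": 0.2, "ethical_value": 0.3},
    "stressed": {"own_need": 0.5, "others_need": 0.2, "social_norm": 0.1, "ethical_value": 0.2},
}

@dataclass
class Option:
    name: str
    scores: dict  # criterion -> how well the option satisfies it, in [0, 1]

def choose(options, mood):
    """Return the option maximizing the mood-weighted sum of criterion scores."""
    weights = MOOD_WEIGHTS[mood]
    return max(options, key=lambda o: sum(weights[c] * o.scores[c] for c in CRITERIA))

options = [
    Option("help_other", {"own_need": 0.1, "others_need": 0.9, "social_norm": 0.8, "ethical_value": 0.9}),
    Option("self_first", {"own_need": 0.9, "others_need": 0.1, "social_norm": 0.4, "ethical_value": 0.3}),
]
print(choose(options, "calm").name)      # help_other
print(choose(options, "stressed").name)  # self_first, under these weights
```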


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Mariem Gandouz ◽  
Hajo Holzmann ◽  
Dominik Heider

Abstract Machine learning and artificial intelligence have entered biomedical decision-making for diagnostics, prognostics, and therapy recommendations. However, these methods need to be interpreted with care because of the severe consequences for patients. In contrast to human decision-makers, computational models typically deliver a decision even when their confidence is low. Machine learning with abstention better reflects human decision-making by introducing a reject option for samples with low confidence. The abstention intervals are typically symmetric intervals around the decision boundary. In the current study, we use asymmetric abstention intervals, which we demonstrate to be better suited for biomedical data, which are typically highly imbalanced. We evaluate symmetric and asymmetric abstention on three real-world biomedical datasets and show that both approaches can significantly improve classification performance. However, asymmetric abstention rejects as many or fewer samples than symmetric abstention and should thus be preferred for imbalanced data.
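A minimal sketch of classification with a reject option, contrasting a symmetric abstention interval around the 0.5 decision boundary with an asymmetric one, might look as follows. The data, model, and thresholds are hypothetical; the paper's actual procedure for selecting the intervals is not reproduced.

```python
# A hedged sketch: predict only when the model is confident enough,
# with either a symmetric or an asymmetric abstention interval.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Imbalanced synthetic "biomedical" data: roughly 10% positives.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
p = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

def predict_with_reject(p, lo, hi):
    """Predict 1 above hi, 0 below lo, abstain (-1) in between."""
    out = np.full(len(p), -1)
    out[p >= hi] = 1
    out[p <= lo] = 0
    return out

# Symmetric margins around 0.5 vs. an asymmetric interval with a smaller
# margin on the majority (negative) side, so fewer samples are rejected.
for name, lo, hi in [("symmetric", 0.35, 0.65), ("asymmetric", 0.45, 0.65)]:
    pred = predict_with_reject(p, lo, hi)
    kept = pred != -1
    acc = (pred[kept] == y_te[kept]).mean()
    print(f"{name}: rejected {(~kept).mean():.1%}, accuracy on kept {acc:.3f}")
```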


10.2196/19104 ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. e19104 ◽  
Author(s):  
Aya Sedky Adly ◽  
Afnan Sedky Adly ◽  
Mahmoud Sedky Adly

Background Artificial intelligence (AI) and the Internet of Intelligent Things (IIoT) are promising technologies to prevent the concerningly rapid spread of coronavirus disease (COVID-19) and to maximize safety during the pandemic. With the exponential increase in the number of COVID-19 patients, it is highly possible that physicians and health care workers will not be able to treat all cases. Thus, computer scientists can contribute to the fight against COVID-19 by introducing more intelligent solutions to achieve rapid control of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes the disease. Objective The objectives of this review were to analyze the current literature, discuss the applicability of reported ideas for using AI to prevent and control COVID-19, and build a comprehensive view of how current systems may be useful in particular areas. This may be of great help to many health care administrators, computer scientists, and policy makers worldwide. Methods We conducted an electronic search of articles in the MEDLINE, Google Scholar, Embase, and Web of Knowledge databases to formulate a comprehensive review that summarizes different categories of the most recently reported AI-based approaches to prevent and control the spread of COVID-19. Results Our search identified the 10 most recent AI approaches that were suggested to provide the best solutions for maximizing safety and preventing the spread of COVID-19. These approaches included detection of suspected cases, large-scale screening, monitoring, interactions with experimental therapies, pneumonia screening, use of the IIoT for data and information gathering and integration, resource allocation, predictions, modeling and simulation, and robotics for medical quarantine. Conclusions We found few or almost no studies regarding the use of AI to examine COVID-19 interactions with experimental therapies, the use of AI for resource allocation to COVID-19 patients, or the use of AI and the IIoT for COVID-19 data and information gathering/integration. Moreover, the adoption of other approaches, including use of AI for COVID-19 prediction, use of AI for COVID-19 modeling and simulation, and use of AI robotics for medical quarantine, should be further emphasized by researchers because these important approaches lack sufficient numbers of studies. Therefore, we recommend that computer scientists focus on these approaches, which are still not being adequately addressed.


Author(s):  
M. G. Koliada ◽  
T. I. Bugayova

The article discusses the history of the use of artificial intelligence systems in education and pedagogy. Two directions of development are shown, "Computational Pedagogy" and "Educational Data Mining", and poorly studied aspects of the internal mechanisms by which artificial intelligence systems function in this field are revealed. The main task is the problem of interfacing the system's kernel with blocks of pedagogical and thematic databases, as well as with blocks for the pedagogical diagnostics of student and teacher. The role of pedagogical diagnosis as a visible reflection of the complex influence of factors and causes is shown: it provides the intelligent system with timely and reliable information on how various causes intertwine in interaction, which of them are currently dangerous, and where declines in performance are to be expected. All components of the teaching and educational system are subject to diagnosis; without it, no pedagogical situation can be managed optimally. The means of obtaining information about students are considered, as are the "mechanisms" by which intelligent systems work, based on innovative ideas from advanced pedagogical experience in diagnosing a teacher's professionalism. Ways of assessing a teacher's skill on the basis of ideas developed by American scientists are shown, among them the approaches of researchers D. Rajonz and U. Bronfenbrenner, who put at the forefront the teacher's attitude toward students and the students' views and intellectual and emotional characteristics. An assessment of the teacher's work according to N. Flanders's system, the so-called "Interaction Analysis", is also proposed, through the mechanism of recording such elements as the teacher's verbal behavior and the events of the lesson and their sequence (see the sketch after this abstract). A system for assessing a teacher's professionalism according to B. O. Smith and M. O. Meux is examined, through the study of the logic of teaching and the logical operations used in the lesson. Samples of the forms by which the intelligent system communicates externally with the learning environment are given. It is indicated that the delivery of the productive solutions found can take the form most acceptable and comfortable for both students and teacher, along three approaches: the first, artificial intelligence represented as a robotized being in human form; the second, confining oneself to specially organized input-output systems for the targeted delivery of effective methodological recommendations and instructions to students and teachers alike; the third, entirely new hybrid forms of interaction between the two sides, in the form of interactive educational environments somewhat resembling the educational spaces of virtual reality.
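The bookkeeping behind Flanders's Interaction Analysis can be sketched briefly: classroom talk is coded at fixed short intervals into ten categories (teacher talk 1-7, student talk 8-9, silence or confusion 10), and the coded sequence is tallied as transition pairs in a 10-by-10 matrix. The coded sequence below is invented for illustration.

```python
# A small sketch of Flanders-style interaction analysis: tally coded
# classroom talk into a transition matrix and derive simple ratios.
import numpy as np

# One Flanders category code per observation tick (invented data).
observed = [5, 5, 4, 8, 3, 4, 8, 8, 2, 5, 5, 10]

matrix = np.zeros((10, 10), dtype=int)
for a, b in zip(observed, observed[1:]):
    matrix[a - 1, b - 1] += 1  # row = earlier code, column = later code

teacher_talk = sum(c <= 7 for c in observed) / len(observed)
print(f"teacher talk ratio: {teacher_talk:.0%}")

i, j = np.unravel_index(matrix.argmax(), matrix.shape)
print(f"most frequent transition: category {i + 1} -> category {j + 1}")
```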


2020 ◽  
Vol 27 ◽  
Author(s):  
Zaheer Ullah Khan ◽  
Dechang Pi

Background: S-sulfenylation (S-sulphenylation, or sulfenic acid formation) is a special kind of post-translational modification of proteins that plays an important role in various physiological and pathological processes such as cytokine signaling, transcriptional regulation, and apoptosis. Given this significance, and to complement existing wet-lab methods, several computational models have been developed for predicting sulfenylation cysteine (SC) sites. However, the performance of these models has not been satisfactory, owing to inefficient feature schemes, severe class imbalance, and the lack of an intelligent learning engine. Objective: Our motivation in this study is to establish a strong and novel computational predictor for discriminating sulfenylation from non-sulfenylation sites. Methods: We report an innovative predictor, named DeepSSPred, in which the encoded features are obtained via an n-segmented hybrid feature scheme, and the synthetic minority oversampling technique (SMOTE) is employed to cope with the severe imbalance between SC sites (minority class) and non-SC sites (majority class). A state-of-the-art 2D convolutional neural network is then trained and validated with a rigorous 10-fold jackknife cross-validation technique. Results: The proposed framework, with its strong discrete representation of the feature space, its learning engine, and its unbiased presentation of the underlying training data, yielded a model that outperforms all established existing studies, scoring 6% higher in MCC than the previous best method (which did not provide sufficient details for comparison on an independent dataset). Compared with the second-best method, the model obtained increases of 7.5% in accuracy, 1.22% in Sn, 12.91% in Sp, and 13.12% in MCC on the training data, and 12.13% in accuracy, 27.25% in Sn, 2.25% in Sp, and 30.37% in MCC on an independent dataset. These empirical analyses show the superior performance of the proposed model over existing studies on both the training and independent datasets. Conclusion: In this research, we have developed a novel sequence-based automated predictor for SC sites, called DeepSSPred. Empirical results on a training dataset and an independent validation dataset demonstrate the efficacy of the proposed model. The good performance of DeepSSPred stems from several factors: the novel discriminative feature encoding schemes, the SMOTE technique, and the careful construction of the prediction model through the tuned 2D-CNN classifier. We believe this work provides insight for the further prediction of S-sulfenylation characteristics and functionality, and we hope the developed predictor will be significantly helpful for large-scale discrimination of unknown SC sites in particular and for the design of new pharmaceutical drugs in general.
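A skeleton of the pipeline the abstract describes, SMOTE rebalancing followed by a small 2D CNN, might look as follows, assuming the imbalanced-learn and TensorFlow/Keras libraries. The feature encoding here is a random placeholder; the paper's n-segmented hybrid features and tuned architecture are not reproduced.

```python
# A hedged skeleton: rebalance minority SC sites with SMOTE, then train
# a small 2D CNN on the (placeholder) feature maps.
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow import keras

rng = np.random.default_rng(0)

# Placeholder encodings: 500 sites, flattened 16x16 feature maps, ~10% positive.
X = rng.normal(size=(500, 16 * 16)).astype("float32")
y = (rng.random(500) < 0.1).astype(int)

# SMOTE synthesizes minority-class samples in the flattened feature space.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
X_bal = X_bal.reshape(-1, 16, 16, 1)  # back to 2D maps for the CNN

model = keras.Sequential([
    keras.layers.Input(shape=(16, 16, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_bal, y_bal, epochs=3, batch_size=32, verbose=0)
```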

