Common-sense reasoning, theories of

Author(s):  
John Horty

The task of formalizing common-sense reasoning within a logical framework can be viewed as an extension of the programme of formalizing mathematical and scientific reasoning that has occupied philosophers throughout much of the twentieth century. The most significant progress in applying logical techniques to the study of common-sense reasoning has been made, however, not by philosophers, but by researchers in artificial intelligence, and the logical study of common-sense reasoning is now a recognized sub-field of that discipline. The work involved in this area is similar to what one finds in philosophical logic, but it tends to be more detailed, since the ultimate goal is to encode the information that would actually be needed to drive a reasoning agent. Still, the formal study of common-sense reasoning is not just a matter of applied logic, but has led to theoretical advances within logic itself. The most important of these is the development of a new field of ‘non-monotonic’ logic, in which the conclusions supported by a set of premises might have to be withdrawn as the premise set is supplemented with new information.
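The withdrawal of conclusions described above can be made concrete with a toy example. The following is a minimal Python sketch, assuming the classic "birds normally fly" default; the predicate names and the single hard-coded rule are illustrative only and are not drawn from the article.

```python
# A minimal sketch of non-monotonic inference: a default conclusion is drawn
# from the current premises and withdrawn when contrary information arrives.
# The facts and the rule are invented for illustration.

def conclusions(facts):
    """Derive conclusions; the default is blocked by contrary information."""
    derived = set(facts)
    if "bird(tweety)" in derived and "not_flies(tweety)" not in derived:
        derived.add("flies(tweety)")          # default conclusion
    return derived

kb = {"bird(tweety)"}
print(conclusions(kb))                        # includes 'flies(tweety)'

kb.add("not_flies(tweety)")                   # premise set is supplemented
print(conclusions(kb))                        # 'flies(tweety)' is withdrawn
```

Adding a premise shrinks the set of supported conclusions, which is exactly the behaviour that classical (monotonic) logic rules out.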

Author(s):  
Carme Torras ◽  
Ramon López de Mántaras

Robotics and artificial intelligence are two scientific research fields that receive considerable attention from the media and, consequently, from society. Unfortunately, many advances are reported to the general public in sensationalist (or even alarmist) terms, raising false hopes or unjustified fears and diverting attention from other key issues. For instance, recent successes in artificial intelligence, amplified by the media, have produced a mistaken perception of the discipline's state of the art. The reality is that artificial intelligence is still far from achieving many high-level cognitive skills, in particular common-sense reasoning.


2020 ◽  
Vol 29 (4) ◽  
pp. 436-451
Author(s):  
Yilang Peng

Applications in artificial intelligence such as self-driving cars may profoundly transform our society, yet emerging technologies are frequently met with suspicion or even hostility. Meanwhile, public opinions about scientific issues are increasingly polarized along ideological lines. By analyzing a nationally representative panel in the United States, we reveal an emerging ideological divide in public reactions to self-driving cars. Compared with liberals and Democrats, conservatives and Republicans express more concern about autonomous vehicles and more support for their restrictive regulation. This ideological gap is largely driven by social conservatism. Moreover, both familiarity with driverless vehicles and scientific literacy reduce respondents' concerns over driverless vehicles and their support for regulation policies. Still, the effects of familiarity and scientific literacy are weaker among social conservatives, indicating that people may assimilate new information in a biased manner that reinforces their worldviews.


2021 ◽  
pp. 337-350
Author(s):  
Vincent Wolters

In this work I will lend support to the theory of «dynamic efficiency», as outlined by Prof. Huerta de Soto in The Theory of Dynamic Efficiency (2010a). Whereas Huerta de Soto connects economics with ethics, I will take a different approach. Since I have a background in Artificial Intelligence (A.I.), I will show that this and related fields have yielded insights that, when applied to the study of economics, may call for a different way of looking at the economy and its processes.

At first glance, A.I. and economics do not seem to have a lot in common. The former is thought to attempt to build a human being; the latter is supposed to deal with depressions, growth, inflation, etc. That view is too simplistic; in fact there are strong similarities. First, economics is based on (inter-)acting individuals, i.e. on human action, while A.I. tries to understand and simulate human (and animal) behavior. Second, economics deals with information processing, such as how the allocation of resources can best be organized, and A.I. also investigates information processing, whether in specific systems such as the brain or the evolutionary process, or in purely abstract form. Finally, A.I. tries to answer more philosophical questions: What is intelligence? What is a mind? What is consciousness? Is there free will? These topics play a less prominent role in economics, but are sometimes touched upon, together with the related topic of the «entrepreneurial function».

The paradigm that was dominant in the early days of A.I. is static in nature. Reaching a solution proceeds in separate steps: first, gathering all necessary information; second, processing this information; finally, drawing a clear conclusion from this process. Each step is entirely separate: during information gathering no processing is done, and during processing no new information is added. The conclusion reached is final and cannot change later on. This paradigm mostly deals with logical problems, finding ways in which a computer can perform deductions on information represented as logical statements. Other applications are optimization problems and so-called «Expert Systems», developed to perform the work of a judge reaching a verdict, or of a medical doctor making a diagnosis based on a patient's symptoms. This paradigm is also called «top-down», because information flows to a central point where it is processed, or «symbolic processing», referring to deduction in formal logic.1

In economics there is a similar paradigm, and it is still the dominant one. This is the part of economics that deals with optimization of resources: given costs and given prices, what is the allocation that will lead to the highest profit? Also belonging to this paradigm are the equilibrium models, in which demand and supply curves are supposed to be knowable and unchangeable and the price is a necessary outcome. The culmination is central planning, which supposes all necessary information, such as demand and supply curves and available resources, to be known. Based on this, the central planner determines prices.
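The «top-down» paradigm sketched above (gather all information, process it by deduction, output a fixed conclusion) can be illustrated with a toy rule-based system. This is a minimal sketch only; the facts and rules are invented and stand in for the expert systems the author mentions, not for any system discussed in the text.

```python
# A minimal sketch of the static, 'top-down' symbolic paradigm:
# all information is gathered first, then processed by rule-based deduction,
# yielding a final, unchanging conclusion. Facts and rules are invented.

FACTS = {"fever", "cough"}                     # step 1: gather information
RULES = [                                      # if all premises hold, add the conclusion
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Step 2: process. Apply rules until no new conclusion can be drawn."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(FACTS, RULES))             # step 3: the final conclusion
```

Note how the three steps are entirely separate: no new facts enter once processing starts, and the output never changes afterwards, which is exactly the static character the author contrasts with a dynamic view of the economy.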


Author(s):  
Marie Bernert ◽  
Fano Ramparany

Artificial Intelligence applications often need to maintain a knowledge base about the observed environment. In particular, when the current knowledge is inconsistent with new information, it has to be updated. Such inconsistency can be due to erroneous assumptions or to changes in the environment. Here we consider the second case and develop a knowledge update algorithm based on event logic that takes into account constraints on how the environment can evolve. These constraints take the form of events that modify the environment in a well-defined manner. The belief update triggered by a new observation is thus explained by a sequence of events. We then apply this algorithm to the problem of locating people in a smart home and show that taking into account past information and movement constraints improves location inference.
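The idea of explaining a belief update by a sequence of constrained events can be sketched in a few lines. The following is a minimal illustration, not the authors' algorithm: a conflicting location observation is explained by the shortest sequence of hypothetical 'move' events that respects room adjacency, with an invented floor plan.

```python
# A minimal sketch: a new observation that conflicts with the current belief
# about a person's location is explained by a sequence of 'move' events
# respecting room adjacency. The floor plan and event names are invented.

from collections import deque

ADJACENT = {                                   # which rooms connect to which
    "kitchen": ["hall"],
    "hall": ["kitchen", "bedroom", "bathroom"],
    "bedroom": ["hall"],
    "bathroom": ["hall"],
}

def explain_update(believed, observed):
    """Return a shortest sequence of move events from the believed to the observed room."""
    if believed == observed:
        return []
    queue, seen = deque([(believed, [])]), {believed}
    while queue:
        room, events = queue.popleft()
        for nxt in ADJACENT[room]:
            if nxt in seen:
                continue
            path = events + [f"move({room} -> {nxt})"]
            if nxt == observed:
                return path
            seen.add(nxt)
            queue.append((nxt, path))
    return None                                # the observation cannot be explained

print(explain_update("kitchen", "bedroom"))    # ['move(kitchen -> hall)', 'move(hall -> bedroom)']
```

If no event sequence can link the old belief to the new observation, the update is rejected as unexplainable, which is the role the evolution constraints play in the abstract above.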


Author(s):  
Kaisheng Wu ◽  
Liangda Fang ◽  
Liping Xiong ◽  
Zhao-Rong Lai ◽  
Yong Qiao ◽  
...  

Strategy representation and reasoning has recently received much attention in artificial intelligence. Impartial combinatorial games (ICGs) are an elementary and fundamental class of games in game theory. One of the challenging problems of ICGs is constructing winning strategies, in particular generalized winning strategies for possibly infinitely many instances of an ICG. In this paper, we investigate synthesizing generalized winning strategies for ICGs. To this end, we first propose a logical framework to formalize ICGs based on the linear integer arithmetic fragment of the numeric part of PDDL. We then propose an approach to generating the winning formula that exactly captures the states in which the player can force a win. Furthermore, we compute winning strategies for ICGs based on the winning formula. Experimental results on several games demonstrate the effectiveness of our approach.
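To make the notion of a winning formula concrete, the sketch below encodes two classical closed-form results from combinatorial game theory, the single-pile subtraction game and Nim. These formulas are textbook results stated here by hand; they are not output of the paper's synthesis method and are included only as an illustration of what such a formula captures.

```python
# Classical 'winning formulas' for two impartial combinatorial games:
# each function returns True exactly on the states from which the player
# to move can force a win. These closed forms are standard results.

from functools import reduce
from operator import xor

def subtraction_game_win(n, k):
    """Single pile of n stones, remove 1..k per turn; the mover wins iff n % (k+1) != 0."""
    return n % (k + 1) != 0

def nim_win(piles):
    """Nim: the mover can force a win iff the XOR of all pile sizes is nonzero."""
    return reduce(xor, piles, 0) != 0

print(subtraction_game_win(8, 3))    # False: 8 is a multiple of k+1, a losing position
print(nim_win([1, 2, 3]))            # False: 1 ^ 2 ^ 3 == 0, the mover cannot force a win
```

A generalized winning strategy then amounts to a move rule that, from any state satisfying the formula, moves to a state falsifying it.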


Author(s):  
Troels Andreasen ◽  
Henrik Bulskov ◽  
Jørgen Fischer Nilsson

This paper describes principles and structure for a software system that implements a dialect of natural logic for knowledge bases. Natural logics are formal logics that resemble stylized natural language fragments and whose reasoning rules reflect common-sense reasoning. Natural logics may be seen as forms of extended syllogistic logic. The paper proposes and describes the realization of deductive querying functionalities using a previously specified natural logic dialect called Natura-Log. The focus here is the engineering of an inference engine that employs relational database operations as a key feature, so that inference steps are computed in bulk and scale up to large knowledge bases. Accordingly, the system is eventually to be realized as a general-purpose database application package, with the database serving as a logical knowledge base.
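The following is a minimal sketch, not Natura-Log itself, of how syllogistic inference can be carried out in bulk with relational database operations: 'every X is a Y' facts are stored in a table and the transitive closure is derived by repeated SQL self-joins. The facts are invented for illustration.

```python
# A minimal sketch of bulk inference over a relational table of syllogistic
# facts: repeated self-joins derive all 'every X is a Z' consequences.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE isa (sub TEXT, sup TEXT, UNIQUE(sub, sup))")
con.executemany("INSERT INTO isa VALUES (?, ?)",
                [("cat", "mammal"), ("mammal", "animal"), ("animal", "organism")])

# Derive new facts in bulk until a fixpoint is reached.
while True:
    cur = con.execute("""
        INSERT OR IGNORE INTO isa
        SELECT a.sub, b.sup FROM isa a JOIN isa b ON a.sup = b.sub
    """)
    if cur.rowcount == 0:        # no new rows inserted: closure is complete
        break

# Deductive query: is every cat an organism?
print(con.execute(
    "SELECT EXISTS(SELECT 1 FROM isa WHERE sub = 'cat' AND sup = 'organism')"
).fetchone()[0])                 # 1, i.e. yes
```

Each join step processes the whole relation at once, which is the bulk-computation property the paper relies on for scaling to large knowledge bases.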


1998 ◽  
Vol 10 (1) ◽  
pp. 85-100
Author(s):  
Miloš Dokulil

At the end of the second millennium, we seem to be somewhat nervous again. Twentieth-century scientific developments have opened up fascinating new fields of study in both the micro- and macrocosmos. Yet none of the new codes, paradigms, and ideologies appear to bring us nearer to some new and generally shared creed. Life without work for many, not only in the Third World, the successful integration of Europe, armed conflicts on local battlefields, as well as superficialities on TV screens, are our near-to-be contemporaneity. The seemingly unlimited technical possibilities of artificial intelligence, the relativization of civic values, and a cartoon-like culture portend risks for the future. Yet, while secular and lacking a binding sense of responsibility, postmodern society epitomizes spiritual hunger. Nurtured by good family traditions, the spiritual quest promises an open-ended, post-Godotian future.


2021 ◽  
Author(s):  
Christopher Kadow ◽  
David Hall ◽  
Uwe Ulbrich

Historical temperature measurements are the basis of global climate datasets like HadCRUT4. This dataset contains many missing values, particularly for periods before the mid-twentieth century, although recent years are also incomplete. Here we demonstrate that artificial intelligence can skilfully fill these observational gaps when combined with numerical climate model data. We show that recently developed image inpainting techniques perform accurate monthly reconstructions via transfer learning using either 20CR (Twentieth-Century Reanalysis) or the CMIP5 (Coupled Model Intercomparison Project Phase 5) experiments. The resulting global annual mean temperature time series exhibit high Pearson correlation coefficients (≥0.9941) and low root mean squared errors (≤0.0547 °C) as compared with the original data. These techniques also provide advantages relative to state-of-the-art kriging interpolation and principal component analysis-based infilling. When applied to HadCRUT4, our method restores a missing spatial pattern of the documented El Niño from July 1877. With respect to the global mean temperature time series, a HadCRUT4 reconstruction by our method points to a cooler nineteenth century, a less apparent hiatus in the twenty-first century, an even warmer 2016 as the warmest year on record, and a stronger global trend between 1850 and 2018 relative to previous estimates. We propose image inpainting as an approach to reconstruct missing climate information and thereby reduce uncertainties and biases in climate records.

From: Kadow, C., Hall, D. M. & Ulbrich, U. Artificial intelligence reconstructs missing climate information. Nature Geoscience 13, 408–413 (2020). https://doi.org/10.1038/s41561-020-0582-5

The presentation will recount the journey of turning an image AI into a climate research application.
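The evaluation reported above compares reconstructed and original global annual mean temperature series using the Pearson correlation coefficient and the root mean squared error. The sketch below shows how such a comparison can be computed; the series are synthetic placeholders, not HadCRUT4 data, and the numbers it prints have nothing to do with the published results.

```python
# A minimal sketch of the evaluation metrics mentioned in the abstract:
# Pearson correlation and RMSE between an original and a reconstructed
# global annual mean temperature series. The data below are synthetic.

import numpy as np

rng = np.random.default_rng(0)
original = np.linspace(-0.4, 0.9, 169)              # stand-in anomaly series, 1850-2018
reconstructed = original + rng.normal(0, 0.03, original.size)

pearson_r = np.corrcoef(original, reconstructed)[0, 1]
rmse = np.sqrt(np.mean((original - reconstructed) ** 2))
print(f"Pearson r = {pearson_r:.4f}, RMSE = {rmse:.4f} °C")
```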

