Proof Theory of the Cut Rule

Author(s):
J. R. B. Cockett
R. A. G. Seely

This chapter describes the categorical proof theory of the cut rule, a very basic component of any sequent-style presentation of a logic, assuming a minimum of structural rules and connectives, in fact, starting with none. It is shown how logical features can be added to this basic logic in a modular fashion, at each stage showing the appropriate corresponding categorical semantics of the proof theory, starting with multicategories, and moving to linearly distributive categories and *-autonomous categories. A key tool is the use of graphical representations of proofs (“proof circuits”) to represent formal derivations in these logics. This is a powerful symbolism, which on the one hand is a formal mathematical language, but crucially, at the same time, has an intuitive graphical representation.
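As a point of reference (standard textbook notation, not specific to this chapter's circuit formalism), the cut rule in a two-sided sequent presentation composes two derivations by eliminating an intermediate formula:

```latex
\[
\frac{\Gamma \vdash \Delta, A \qquad A, \Gamma' \vdash \Delta'}
     {\Gamma, \Gamma' \vdash \Delta, \Delta'}
\ (\mathrm{cut})
\]
```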

2010
Vol 13 (2)
pp. 765-776
Author(s):
Cristina Cambra
Aurora Leal
Núria Silvestre

The understanding of a television story can differ greatly depending on the age of the viewer, their background knowledge, the content of the programme and the way in which they combine the information gathered from linguistic, audio and visual elements. This study explores the different ways of interpreting an audiovisual document, considering that, due to a hearing impairment, visual, audio and linguistic information may be perceived very differently from the way it is perceived by hearing people. The study involved 20 deaf and 20 hearing adolescents, aged 12 to 19 years, who, after watching a fragment of a television series, were asked to draw a picture of what had happened in the story. The results show that the graphical representation of the film is similar for both groups in terms of the number of scenes, but the deaf group's drawings show a greater profusion of details about the context and characters, and there are differences in their interpretations of some of the sequences in the story.


Axioms
2020
Vol 9 (3)
pp. 84
Author(s):
Sopo Pkhakadze
Hans Tompits

Default logic is one of the basic formalisms for nonmonotonic reasoning, a well-established area of logic-based artificial intelligence dealing with the representation of rational conclusions, characterised by the feature that the inference process may require retracting prior conclusions when additional premisses are given. This nonmonotonic aspect is in contrast to valid inference relations, which are monotonic. Although nonmonotonic reasoning has been extensively studied in the literature, only a few works exist dealing with a proper proof theory for specific logics. In this paper, we introduce sequent-type calculi for two variants of default logic, viz., on the one hand, three-valued default logic due to Radzikowska, and, on the other hand, disjunctive default logic due to Gelfond, Lifschitz, Przymusinska, and Truszczyński. The first variant employs Łukasiewicz’s three-valued logic as the underlying base logic, and the second generalises defaults by allowing a selection of consequents in defaults. Both versions were introduced to address certain representational shortcomings of standard default logic. The calculi we introduce axiomatise brave reasoning for these versions of default logic, i.e., the task of determining whether a given formula is contained in some extension of a given default theory. Our approach follows the sequent method first introduced in the context of nonmonotonic reasoning by Bonatti, which employs a rejection calculus for axiomatising invalid formulas, taking care of expressing the consistency condition of defaults.
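For orientation (standard Reiter-style notation, illustrative rather than taken from this paper), a default consists of a prerequisite α, justifications β₁, …, βₙ and a consequent γ; disjunctive default logic generalises the consequent to a disjunction of alternatives:

```latex
\[
\frac{\alpha : \beta_1, \ldots, \beta_n}{\gamma}
\qquad\text{vs.}\qquad
\frac{\alpha : \beta_1, \ldots, \beta_n}{\gamma_1 \mid \cdots \mid \gamma_k}
\]
```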


Author(s):
Menaouer Brahami
Baghdad Atmani
Nada Matta

Companies show a growing interest in making better use of their information, knowledge and competencies. They have a capital of knowledge (tacit and explicit) that is often poorly exploited. These information resources include the knowledge and information useful and necessary for the execution of business processes, which can be captured and formalized using knowledge engineering methods such as knowledge mapping techniques. In this context, the authors present a new approach to the dynamic fusion of knowledge maps for a process of activities. It builds, on the one hand, on the graphical representation of the knowledge map and the boolean modelling of the graph (MBG), and, on the other hand, on the authors' map-fusion algorithm, which relies on an "index" notion that determines the type of node to merge, together with the boolean modelling of the knowledge maps. The authors implemented this algorithm and obtained experimental results. The result can be used as a decision-support tool, whether individual or collective.
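A minimal sketch of the boolean-modelling idea, with knowledge maps reduced to adjacency sets and fused by union (a hypothetical simplification; the authors' index-driven algorithm additionally distinguishes node types before merging):

```python
def fuse(map_a, map_b):
    """Union-merge two knowledge maps given as {node: set of successor nodes}.

    Boolean modelling here means an edge either exists or it does not, so
    fusing two maps amounts to taking the union of node and edge sets.
    """
    nodes = set(map_a) | set(map_b)
    return {n: map_a.get(n, set()) | map_b.get(n, set()) for n in nodes}
```

Usage: `fuse({"A": {"B"}}, {"A": {"C"}, "B": set()})` yields a map in which node `A` points to both `B` and `C`.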


2017 ◽  
Vol 17 (4) ◽  
pp. 316-334
Author(s):
Pere Millán-Martínez
Pedro Valero-Mora

The search for an efficient method to enhance data cognition is especially important when managing data from multidimensional databases. Open data policies have dramatically increased not only the volume of data available to the public, but also the need to automate the translation of data into efficient graphical representations. Graphic automation involves producing an algorithm whose inputs are derived from the type of data; a set of rules is then applied to combine the input variables and produce a graphical representation. Automated systems, however, fail to provide an efficient graphical representation because they either consider only a one-dimensional characterization of variables, which leads to an overwhelmingly large number of available solutions; use a compositional algebra that leads to a single solution; or require the user to predetermine the graphical representation. Therefore, we propose a multidimensional characterization of statistical variables that, when complemented with a catalog of graphical representations matching each combination, presents the user with a more specific set of suitable graphical representations to choose from. Cognitive studies can then determine the most efficient perceptual procedures to further shorten the path to the most efficient graphical representations. The examples used herein are limited to graphical representations with three variables, given that the number of combinations increases drastically as the number of selected variables increases.
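A hypothetical sketch of the catalog idea: combinations of characterized variable types map to a shortlist of suitable graphical representations. The type names and chart lists below are illustrative, not the paper's actual catalog:

```python
# Illustrative catalog: tuples of characterized variable types -> chart shortlist.
CATALOG = {
    ("categorical", "quantitative"): ["bar chart", "box plot"],
    ("quantitative", "quantitative"): ["scatter plot", "line chart"],
    ("categorical", "categorical", "quantitative"): ["grouped bar chart", "heatmap"],
}

def suggest(*var_types):
    """Return the shortlist of representations for the given type combination."""
    return CATALOG.get(tuple(var_types), [])
```

A richer characterization (cardinality, ordering, units) would shrink the shortlist further, which is the multidimensional refinement the abstract argues for.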


Author(s):
Vinh T Nguyen
Kwanghee Jung
Vibhuti Gupta

Data visualization blends art and science to convey stories from data via graphical representations. Considering different problems, applications, requirements, and design goals, it is challenging to combine these two components at their full force. While the art component involves creating visually appealing and easily interpreted graphics for users, the science component requires accurate representations of a large amount of input data. Lacking the science component, visualization cannot serve its role of creating correct representations of the actual data, thus leading to incorrect perception, interpretation, and decisions. It might be even worse if incorrect visual representations were intentionally produced to deceive viewers. To address common pitfalls in graphical representations, this paper focuses on identifying and understanding the root causes of misinformation in graphical representations. We reviewed misleading data-visualization examples in scientific publications collected from indexing databases and then projected them onto the fundamental units of visual communication, such as color, shape, size, and spatial orientation. Moreover, a text-mining technique was applied to extract practical insights from common visualization pitfalls. Cochran’s Q test and McNemar’s test were conducted to examine whether there is any difference in the proportions of common errors among color, shape, size, and spatial orientation. The findings showed that the pie chart is the most misused graphical representation and that size is the most critical issue. It was also observed that there were statistically significant differences in the proportions of errors among color, shape, size, and spatial orientation.
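As an aside, McNemar's test compares paired binary outcomes through the discordant-pair counts b and c; a minimal sketch (without continuity correction, and with hypothetical counts) is:

```python
# Minimal McNemar's test sketch: b and c are the two discordant-pair
# counts from a paired 2x2 contingency table (hypothetical values below).
from math import erfc, sqrt

def mcnemar(b, c):
    """Return the chi-square statistic (1 df) and its p-value."""
    stat = (b - c) ** 2 / (b + c)
    p = erfc(sqrt(stat / 2))  # survival function of chi-square with 1 df
    return stat, p

stat, p = mcnemar(b=15, c=5)
```

With b=15 and c=5 the statistic is 5.0, significant at the usual 0.05 level.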


2014 ◽  
Vol 8 (2) ◽  
pp. 61-73
Author(s):
Matías Arce
Tomás Ortega

High school students’ deficiencies in plotting graphs of functions

This paper deals with the concept of function, a basic notion in mathematical analysis, and, in particular, with its graphical representation. We focus on aspects related to form, that is, on how the graphs are plotted. We analysed the graphical representations of functions found in the mathematics notebooks of students from several first-year Bachillerato (upper secondary) classrooms. We encountered plotting deficiencies, repeated across a large number of students, related to the concepts of function and asymptote, the use of scales on the axes of the Cartesian diagram, and the characteristics of some functions. In addition, we discuss the technical limitations and the didactic and cognitive difficulties that may give rise to these deficiencies, and make some didactic recommendations for teachers.

Handle: http://hdl.handle.net/10481/29576
No. of WOS citations (2017): 1 (second-order citations, 0)


Author(s):
Ashesh Nandy

The exponential growth of repositories of biological sequence data has generated an urgent need to store, retrieve and analyse the data efficiently and effectively, for which the standard practice of using alignment procedures is not adequate due to its high demand on computing resources and time. Graphical representation of sequences has become one of the most popular alignment-free strategies for analysing biological sequences, in which each basic unit of the sequence (the bases adenine, cytosine, guanine and thymine for DNA/RNA, and the 20 amino acids for proteins) is plotted on a multi-dimensional grid. The resulting curve in 2D and 3D space, and the implied graph in higher dimensions, provide a perception of the underlying information of the sequences through visual inspection; numerical analyses of the plots, in geometrical or matrix terms, provide a measure of comparison between sequences and thus enable the study of sequence hierarchies. The new approach has also enabled comparisons of DNA sequences over many thousands of bases and provided new insights into the structure of the base compositions of DNA sequences. In this article we briefly review the origins and applications of graphical representations and highlight future perspectives in this field.
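A minimal sketch of a 2D graphical representation under one common axis convention (A steps in the negative x direction, G positive x, C positive y, T negative y); the cumulative walk traces the curve that is then inspected visually or compared numerically:

```python
# 2D alignment-free walk for a DNA sequence (one common axis convention;
# other representations assign the bases to axes differently).
STEPS = {"A": (-1, 0), "G": (1, 0), "C": (0, 1), "T": (0, -1)}

def walk(seq):
    """Return the list of (x, y) points visited while reading seq."""
    x = y = 0
    points = [(0, 0)]
    for base in seq.upper():
        dx, dy = STEPS[base]
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

print(walk("ATGC"))
```

The point list can be plotted directly, or summarized (e.g., by the coordinates of its geometric centre) to compare sequences numerically.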


1980
Vol 26 (94)
pp. 501-505
Author(s):
H. J. Körner

The energy-line method results from the graphical representation of the energy law as applied in hydraulics. It makes it easier to understand the entire process of the movement of an avalanche from where it breaks away to where it is deposited. The technique, the energy lines, and the average slope of avalanches, as well as the theoretical energy lines of the one-coefficient and two-coefficient models of avalanche dynamics, are described. The method is explained by means of an example.
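A hedged sketch of the one-coefficient idea: the energy line falls from the release point with a constant slope given by a friction coefficient, and the avalanche is assumed to stop where the line meets the terrain profile. The profile and coefficient below are illustrative, not data from the paper:

```python
# One-coefficient energy-line sketch: the energy line starts at the
# release point and drops mu units of height per unit of horizontal
# distance; runout is where it first meets the terrain again.
def runout_index(terrain, mu):
    """terrain: list of (x, z) points along the path, release point first."""
    x0, z0 = terrain[0]
    for i, (x, z) in enumerate(terrain[1:], start=1):
        energy_line = z0 - mu * (x - x0)
        if energy_line <= z:
            return i  # first profile point at or above the energy line
    return None  # energy line never intersects the given profile
```

The two-coefficient models described in the paper bend the theoretical energy line instead of keeping it straight.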


2020
Vol 2
Author(s):
Evdoxia Taka
Sebastian Stein
John H. Williamson

Bayesian probabilistic modeling is supported by powerful computational tools like probabilistic programming and efficient Markov Chain Monte Carlo (MCMC) sampling. However, the results of Bayesian inference are challenging for users to interpret in tasks like decision-making under uncertainty or model refinement. Decision-makers need simultaneous insight into both the model's structure and its predictions, including uncertainty in inferred parameters. This enables better assessment of risk over all possible outcomes compatible with the observations, and thus more informed decisions. To support this, we see a need for visualization tools that make probabilistic programs interpretable by revealing the interdependencies in probabilistic models and their inherent uncertainty. We propose the automatic transformation of Bayesian probabilistic models, expressed in a probabilistic programming language, into an interactive graphical representation of the model's structure at varying levels of granularity, with seamless integration of uncertainty visualization. This interactive graphical representation supports the exploration of the prior and posterior distributions of MCMC samples. The interpretability of Bayesian probabilistic programming models is thereby enhanced, providing human users with more informative, transparent, and explainable probabilistic models. We present a concrete implementation that translates probabilistic programs into interactive graphical representations and show illustrative examples for a variety of Bayesian probabilistic models.
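A toy sketch of the transformation step: a hypothetical declarative model description is reduced to a dependency graph whose nodes and edges could then drive an interactive graphical rendering. The model and representation here are illustrative, not the paper's implementation:

```python
# Hypothetical model description: each variable maps to the variables it
# depends on (comments give a plausible generative reading).
model = {
    "mu": [],              # mu ~ Normal(0, 1)
    "sigma": [],           # sigma ~ HalfNormal(1)
    "y": ["mu", "sigma"],  # y ~ Normal(mu, sigma), observed
}

def edges(model):
    """Return (parent, child) pairs of the model's dependency graph."""
    return [(parent, node) for node, parents in model.items() for parent in parents]
```

Each edge becomes an arrow in the rendered graph; attaching the MCMC samples of a node's distribution to that node gives the integrated uncertainty view the abstract describes.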

