Intuiciones en lógica: una propuesta moderada

Author(s):  
Diego Tajer

Intuitions play a significant role in debates about logic. In this paper, I analyze how legitimate that practice is. In the first part of the paper, I distinguish between theoretical and pretheoretical intuitions, and argue that some pretheoretical intuitions are not to be taken into account in logic. In particular, our pretheoretical intuitions about the concept of validity are not of much importance, since we do not have a uniform or clear concept of validity in natural language to be elucidated. Nevertheless, I argue that, since logical connectives are used more homogeneously in ordinary speech, we can appeal to pretheoretical intuitions to establish their meaning in a logical theory. In the second part of the paper, I consider and reply to four objections to this moderate proposal. Two of them try to show that, if this position is adopted, the pretheoretical intuitions about the connectives become completely unreliable and useless. Another argues that this mixed position is unstable: pretheoretical intuitions about the connectives are also pretheoretical intuitions about validity. The last problem concerns the definition of validity and the possibility of revising it.

2021 ◽  
Vol 4 (4) ◽  
pp. 99-136
Author(s):  
Ibrahiem Mohammed Abdullah ◽  

The research paper aims to highlight the STEM approach as one of the modern integrated approaches in the field of mathematics education. STEM, which refers to the integration of Science, Technology, Engineering, and Math, plays a significant role in the development of curricula in the Arab world in general and of mathematics curricula in particular. This paper addresses the definition of STEM, the justifications for its emergence, and the reasons for the attention it has recently received. Moreover, the paper sheds light on its objectives, content, related teaching strategies, educational activities, evaluation, characteristics, advantages, and the obstacles to its application.


1995 ◽  
Vol 06 (03) ◽  
pp. 203-234 ◽  
Author(s):  
YUKIYOSHI KAMEYAMA

This paper studies an extension of inductive definitions in the context of a type-free theory. It is a kind of simultaneous inductive definition of two predicates where the defining formulas are monotone with respect to the first predicate, but not monotone with respect to the second predicate. We call this inductive definition half-monotone, by analogy with Allen's term half-positive. We can regard this definition as a variant of monotone inductive definitions by introducing a refined order between tuples of predicates. We give a general theory for half-monotone inductive definitions in a type-free first-order logic. We then give a realizability interpretation to our theory, and prove its soundness by extending Tatsuta's technique. The mechanism of half-monotone inductive definitions is shown to be useful in interpreting many theories, including the Logical Theory of Constructions and Martin-Löf's Type Theory. We can also formalize naturally the provability relation "a term p is a proof of a proposition P". As an application of this formalization, several techniques of program/proof improvement can be formalized in our theory, and we can make use of this fact to develop programs in the paradigm of Constructive Programming. A characteristic point of our approach is that we can extract an optimization program, since our theory enjoys the program extraction theorem.
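To make the shape of such a definition concrete, the following schematic rendering (notation invented for this summary, not the paper's own formalism) shows a simultaneous definition of two predicates that is monotone in the first but not necessarily in the second:

```latex
% Schematic half-monotone simultaneous inductive definition
% (illustrative notation only, not the paper's formalism).
\begin{align*}
  P(x) &\iff \varphi(P, Q, x), \\
  Q(x) &\iff \psi(P, Q, x),
\end{align*}
% where the defining formulas \varphi and \psi contain only positive
% (monotone) occurrences of P, but may contain negative occurrences of Q.
```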


2019 ◽  
Vol 8 (4) ◽  
pp. 10289-10293

Sentiment analysis is a tool for determining the polarity or emotion of a sentence. It is a field of Natural Language Processing which focuses on the study of opinions. In this study, the researchers addressed one key challenge in sentiment analysis: taking into account the ending punctuation marks present in a sentence. Ending punctuation marks play a significant role in emotion recognition and intensity level recognition. The research made use of tweets expressing opinions about Philippine President Rodrigo Duterte. These downloaded tweets served as the inputs and were first subjected to a pre-processing stage to prepare the sentences for processing. A language model was created to serve as the classifier for determining the scores of the tweets; the scores give the polarity of a sentence. Accuracy is very important in sentiment analysis, so to increase the chance of correctly identifying the polarity of the tweets, the input underwent intensity level recognition, which identifies the intensifiers and negations within the sentences. The system was evaluated with an overall performance of 80.27%.
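As a rough illustration of this kind of scoring, the toy sketch below combines a polarity lexicon with intensifier, negation, and ending-punctuation adjustments; the lexicon, weights, and rules are invented for the example and are not the study's actual language model.

```python
# Toy punctuation-aware polarity scorer (illustrative only; the lexicon,
# weights, and rules are assumptions, not the study's language model).

POLARITY = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}
INTENSIFIERS = {"very": 1.5, "really": 1.5, "slightly": 0.5}
NEGATIONS = {"not", "never", "no"}

def score_tweet(text: str) -> float:
    tokens = text.lower().rstrip("!?.").split()
    score, boost, negate = 0.0, 1.0, False
    for tok in tokens:
        if tok in INTENSIFIERS:
            boost = INTENSIFIERS[tok]
        elif tok in NEGATIONS:
            negate = True
        elif tok in POLARITY:
            value = POLARITY[tok] * boost
            score += -value if negate else value
            boost, negate = 1.0, False
    # An exclamation mark at the end raises the intensity of the whole tweet.
    if text.endswith("!"):
        score *= 1.5
    return score

print(score_tweet("The new policy is really good!"))  # amplified positive
print(score_tweet("The traffic plan is not good."))   # negated positive
```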


2020 ◽  
pp. 73-79
Author(s):  
Alina Aidarovna-Kamalova ◽  
Dinara Lenarovna-Kurbangalieva

There are many interpretations of the term 'reputation'. Most authors refer to the general definition of reputation, considering reputation (from the Latin 'reputatio': reflection, reasoning) as "a common opinion about the merits and demerits of someone or something". The nature and essence of an enterprise's business reputation are multifaceted, and each researcher's interpretation depends on the discipline studied. Just as the concept of reputation is interpreted differently across fields of science, so the methods of its assessment differ. In this article, we analyze the economic assessment of the dependence of an organization's competitiveness on reputation capital, and consider the tools that form reputation capital and the methods for assessing it. Based on a review of the existing factors in the formation of reputation capital and of valuation methods, we identify key focuses for further research. In the course of the analysis, we found that the internal factors in the formation of reputation capital, namely the organization's personnel and its corporate culture, play a significant role in ensuring competitiveness.


2021 ◽  
Vol 38 (38) ◽  
pp. 122-137
Author(s):  
Darko Trifunovic ◽  
Juliusz Piwowarski

This article consists of two parts. The first is a theoretical approach to the phenomenon of terrorism, including international terrorism. Within the first part, a distinct definition of the concept of security science is given, without which it is not possible to properly perceive or investigate security threats and risks, among which terrorism is one of the most significant. The second part deals with models of terrorist activities, with special attention to the webspace and the significant role that terrorists attach to the increasing use of the Internet for their purposes. The theoretical part leads to the conclusion that there are five essential elements whose presence, if detected in a territory or state, indicates the existence of a mechanism that produces new jihad warriors. The paper also gives a forecast of the degree of endangerment, illustrated on the example of a territory, which offers scientists who investigate these threats a new direction of research.


Author(s):  
Paula Estrella ◽  
Nikos Tsourakis

When it comes to the evaluation of natural language systems, it is well acknowledged that there is a lack of common evaluation methodologies, making the fair comparison of such systems a difficult task. Many attempts to standardize this process have used a quality model based on the ISO/IEC 9126 standards. The authors have also used these standards to define a weighted quality model for the evaluation of a medical speech translator, showing the relative importance of the system's features depending on the potential user (patient, doctor, or developer). More recently, ISO/IEC 9126 has been replaced by a new series of standards, the 25000 or SQuaRE series, meaning that the model should be migrated to the new series in order to maintain compliance with current standards. This chapter demonstrates how to migrate from ISO/IEC 9126 to ISO 25000, using the authors' previous work as a use case.
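The minimal sketch below illustrates what such a weighted quality model looks like computationally; the characteristic names, weights, and scores are placeholders rather than the authors' actual model or the ISO-defined characteristics.

```python
# Placeholder weighted quality model: per-user-profile weights over
# quality characteristics (names and numbers are invented for the sketch;
# the real characteristics come from ISO/IEC 9126 or the SQuaRE series).

SCORES = {"accuracy": 0.9, "usability": 0.7, "efficiency": 0.8}

WEIGHTS = {
    "patient":   {"accuracy": 0.5, "usability": 0.4, "efficiency": 0.1},
    "doctor":    {"accuracy": 0.6, "usability": 0.2, "efficiency": 0.2},
    "developer": {"accuracy": 0.3, "usability": 0.2, "efficiency": 0.5},
}

def weighted_quality(profile: str) -> float:
    """Weighted sum of characteristic scores for one user profile."""
    weights = WEIGHTS[profile]
    return sum(weights[c] * SCORES[c] for c in SCORES)

for profile in WEIGHTS:
    print(profile, round(weighted_quality(profile), 3))
```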


2016 ◽  
Vol 8 (1) ◽  
pp. 41-62
Author(s):  
Imre Kilián

The backward-chaining inference strategy of Prolog is inefficient for a number of problems. The article proposes Contralog: a Prolog-conform, forward-chaining language and an inference engine that is implemented as a preprocessor-compiler to Prolog. The target model is Prolog, which ensures mutual switching from Contralog to Prolog and back. The Contralog compiler is implemented using Prolog's de facto standardized macro expansion capability. The article goes into detail regarding the target model. We first introduce a simple application example for Contralog. The next section then shows how the recursive definition of certain problems is executed automatically in a dynamic-programming manner by their Contralog definition. Two examples are shown here: the well-known matrix chain multiplication problem and the Warshall algorithm. After this, the inferential target model of Prolog/Contralog programs is introduced, and the possibility of implementing the ReALIS natural language parsing technology, relying heavily on Contralog's forward-chaining inference engine, is described. Finally, the article discusses some practical questions of Contralog program development.
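Contralog itself is Prolog-conform, but the dynamic-programming effect described above can be illustrated in a language-neutral way; the sketch below memoizes the plain recursive matrix-chain definition so that each subproblem is solved only once (an analogy, not Contralog code).

```python
# Analogy only: the recursive matrix chain multiplication definition,
# memoized so each subproblem is evaluated once, mirroring the
# dynamic-programming behaviour that Contralog obtains via forward chaining.
from functools import lru_cache

def matrix_chain_cost(dims):
    # Matrix i has dimensions dims[i] x dims[i + 1].
    @lru_cache(maxsize=None)
    def cost(i, j):
        if i == j:
            return 0
        return min(
            cost(i, k) + cost(k + 1, j) + dims[i] * dims[k + 1] * dims[j + 1]
            for k in range(i, j)
        )
    return cost(0, len(dims) - 2)

print(matrix_chain_cost((30, 35, 15, 5, 10, 20, 25)))  # 15125 scalar multiplications
```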


2008 ◽  
Vol 34 (4) ◽  
pp. 597-614 ◽  
Author(s):  
Trevor Cohn ◽  
Chris Callison-Burch ◽  
Mirella Lapata

Automatic paraphrasing is an important component in many natural language processing tasks. In this article we present a new parallel corpus with paraphrase annotations. We adopt a definition of paraphrase based on word alignments and show that it yields high inter-annotator agreement. As Kappa is suited to nominal data, we employ an alternative agreement statistic which is appropriate for structured alignment tasks. We discuss how the corpus can be usefully employed in evaluating paraphrase systems automatically (e.g., by measuring precision, recall, and F1) and also in developing linguistically rich paraphrase models based on syntactic structure.
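One simple agreement statistic suited to alignment data, given here purely as an illustration and not necessarily the measure adopted in the article, is the F1 overlap between the alignment links produced by two annotators for the same sentence pair:

```python
# Illustrative agreement measure for word-alignment annotations
# (an assumption for this summary, not necessarily the article's statistic):
# F1 overlap between two annotators' sets of alignment links.

def alignment_f1(links_a: set, links_b: set) -> float:
    if not links_a and not links_b:
        return 1.0
    overlap = len(links_a & links_b)
    precision = overlap / len(links_a) if links_a else 0.0
    recall = overlap / len(links_b) if links_b else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

annotator_1 = {(0, 0), (1, 2), (2, 1)}  # (source index, target index) links
annotator_2 = {(0, 0), (1, 2), (3, 3)}
print(alignment_f1(annotator_1, annotator_2))  # 0.666...
```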


2014 ◽  
pp. 297-323
Author(s):  
Paolo Arcaini ◽  
Angelo Gargantini ◽  
Elvinia Riccobene ◽  
Patrizia Scandurra

Domain Specific Languages (DSLs) are often defined in terms of metamodels capturing the abstract syntax of the language. For a complete definition of a DSL, both syntactic and semantic aspects of the language have to be specified. Metamodeling environments support syntactic definition issues, but they do not provide any help in defining the semantics of metamodels, which is usually given in natural language. In this chapter, the authors present an approach to formally defining the semantics of metamodel-based languages. It is based on a translational technique that attaches to the language metamodel a precise and executable semantics expressed in terms of the Abstract State Machine formal method. The chapter also shows how different techniques can be used for the formal analysis of models (i.e., instances of the language metamodel). The authors exemplify the use of their approach on a language for Petri nets.
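As a rough, language-neutral sketch of what an executable semantics for a Petri net language amounts to, the snippet below expresses the firing rule as a state-transition step over markings; the data structures and names are invented here and are not the chapter's ASM encoding.

```python
# Rough sketch in the spirit of an Abstract State Machine step for Petri nets:
# the state is the marking (tokens per place), and one update rule fires any
# enabled transition. Structures and names are illustrative, not the chapter's.

TRANSITIONS = {
    # transition name -> (input places with token demand, output places with tokens produced)
    "t1": ({"p1": 1}, {"p2": 1}),
    "t2": ({"p2": 1}, {"p1": 1, "p3": 1}),
}

def enabled(marking: dict, t: str) -> bool:
    inputs, _ = TRANSITIONS[t]
    return all(marking.get(p, 0) >= n for p, n in inputs.items())

def fire(marking: dict, t: str) -> dict:
    inputs, outputs = TRANSITIONS[t]
    new = dict(marking)
    for p, n in inputs.items():
        new[p] -= n
    for p, n in outputs.items():
        new[p] = new.get(p, 0) + n
    return new

marking = {"p1": 1}
for t in ("t1", "t2"):
    if enabled(marking, t):
        marking = fire(marking, t)
print(marking)  # {'p1': 1, 'p2': 0, 'p3': 1}
```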


2021 ◽  
Vol 227 ◽  
pp. 04005
Author(s):  
Abdusali Suyunov ◽  
Shukhrat Suyunov ◽  
Malika Aminjanova ◽  
Kamola Rakhmatullaeva

To improve the quality of construction and increase the durability of engineering structures under construction, comprehensive geodetic work should be performed, including geodetic observations of the deformations of structures. These observations are carried out during the construction of buildings and structures and during their operation, mainly before the period of deformation stabilization. In this regard, a reliable statistical determination of deformations close to the limit is necessary, based on the data of geodetic observations. The research helps to improve the determination of structural deformations using Fisher's F-test and the Foster-Stuart test, based on an analysis of the horizontal and vertical monitoring measurements of industrial structures. According to the results, the intensity of the subsidence plays a more significant role than its absolute value; thus, the value of the deformation intensity is of primary importance in justifying the periodicity of observations.
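As a purely illustrative sketch of how an F-test can flag a significant change between two observation epochs (the readings, significance level, and test configuration below are assumptions, not the paper's procedure):

```python
# Illustrative only: Fisher's F-test comparing the dispersion of settlement
# readings from two observation epochs; data and configuration are invented.
import numpy as np
from scipy.stats import f

def f_test(epoch_1, epoch_2, alpha=0.05):
    s1, s2 = np.var(epoch_1, ddof=1), np.var(epoch_2, ddof=1)
    stat = max(s1, s2) / min(s1, s2)                  # larger variance on top
    df_num = (len(epoch_1) if s1 >= s2 else len(epoch_2)) - 1
    df_den = (len(epoch_2) if s1 >= s2 else len(epoch_1)) - 1
    p_value = min(1.0, 2 * (1 - f.cdf(stat, df_num, df_den)))  # two-sided
    return stat, p_value, p_value < alpha

epoch_1 = [0.20, 0.30, 0.10, 0.20, 0.25]  # settlement readings, mm (made up)
epoch_2 = [0.80, 1.10, 0.90, 1.30, 1.00]
print(f_test(epoch_1, epoch_2))
```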

