From Discursive Practice to Logic? Remarks on Logical Expressivism

2020 ◽  
Vol 11 (2) ◽  
Author(s):  
Rodger Kibble

This paper investigates Robert Brandom's programme of logical expressivism and in the process attempts to clarify his use of the term practice, by means of a comparison with the works of the sociologist and anthropologist Pierre Bourdieu. The key claim of logical expressivism is the idea that logical terms serve to make explicit the inferential relations between statements which already hold implicitly in a discursive practice that lacks such terms in its vocabulary. Along with this, it is claimed that the formal validity of an argument is derivative on so-called material inference, in that an inference is taken to be logically valid only if it is a materially good inference and cannot be made into a bad inference by substituting nonlogical for nonlogical vocabulary in its premises and conclusion. We note that no systematic account of logical validity employing this substitutional method has been offered to date; rather, proposals by e.g. Lance and Kremer, Piwek, Kibble and Brandom himself have followed the more conventional path of developing a formally defined system which is informally associated with natural language examples. We suggest a number of refinements to Brandom's account of conditionals and of validity, supported by analysis of linguistic examples including material from the SNLI and MultiNLI corpora and a review of relevant literature. The analysis suggests that Brandom's expressivist programme faces formidable challenges once exposed to a wide range of linguistic data, and may not in fact be realisable owing to the pervasive context-dependence of linguistic expressions, including 'logical' vocabulary. A further claim of this paper is that a purely assertional practice may not provide an adequate basis for conditional reasoning, but that a more promising route is provided by the introduction of imperatives, as in so-called "pseudo-imperatives" such as "Get individuals to invest their time and the funding will follow". We conclude that the resulting dialogical analysis of conditional reasoning is faithful to Brandom's Sellarsian intuition of linguistic practice as a game of giving and asking for reasons, and conjecture that language is best analysed not as a system of rules but as a Wittgensteinian repertoire of evolving micro-practices.
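The substitutional criterion described above can be given a schematic rendering (our notation; as the abstract notes, no systematic account along these lines has actually been worked out). Writing Γ ⊨_P φ for a materially good inference in practice P, and Σ for the set of uniform substitutions of nonlogical for nonlogical vocabulary:

\[
\Gamma \vDash_{\mathrm{log}} \varphi
\quad\Longleftrightarrow\quad
\Gamma \vDash_{P} \varphi
\ \text{ and }\
\forall \sigma \in \Sigma :\ \sigma(\Gamma) \vDash_{P} \sigma(\varphi)
\]

On this gloss, logical validity is material goodness that is robust under every replacement of nonlogical vocabulary, which is what the paper argues has yet to be turned into a systematic account.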

2021 ◽  
Vol 30 (1) ◽  
pp. 774-792
Author(s):  
Mazin Abed Mohammed ◽  
Dheyaa Ahmed Ibrahim ◽  
Akbal Omran Salman

Abstract: Spam electronic mails (emails) are unwanted, and often harmful, commercial emails sent to corporate bodies or individuals. Even though such mails are often used to advertise services and products, they sometimes contain links to malware or phishing websites through which private information can be stolen. This study shows how an adaptive intelligent learning approach, based on a visual anti-spam model for multiple natural languages, can be used to detect abnormal situations effectively; its application here is spam filtering. With adaptive intelligent learning, high performance is achieved alongside a low false detection rate. The approach works in three main phases to ascertain intelligently whether an email is legitimate, based on knowledge gathered previously during training. It comprises two models for identifying phishing emails. The first model identifies the language of an email: a new trainable model based on a Naive Bayes classifier is trained on three languages (Arabic, English and Chinese), and the resulting language label is passed to the next model. The second model is built per language, using two classes (phishing and normal email) as training data; this second Naive Bayes classifier identifies phishing emails and delivers the final decision of the proposed approach. The proposed strategy is implemented using the Java environment and the JADE agent platform. The performance of the AIA learning model was tested on a dataset of 2,000 emails, and the results demonstrated the efficiency of the model in accurately detecting and filtering a wide range of spam emails. Our results suggest that the Naive Bayes classifier performed best when tested on the largest database (achieving an overall accuracy of 98.4%, a false positive rate of 0.08% and a false negative rate of 2.90%), indicating that the algorithm should also work viably when applied to a real-world database, which is the more common case even if not the largest.
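A minimal sketch of the two-stage pipeline described above, written with scikit-learn's MultinomialNB purely for illustration (the paper's own implementation uses Java and the JADE agent platform; the toy training corpora here are placeholders):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder corpora; the paper trains on Arabic, English and Chinese email.
lang_texts = ["please verify your account", "يرجى التحقق من حسابك", "请验证您的账户"]
lang_labels = ["en", "ar", "zh"]
per_language = {
    "en": (["verify your password now", "minutes of the meeting"], ["phishing", "normal"]),
    "ar": (["تحقق من كلمة المرور الآن", "محضر الاجتماع"], ["phishing", "normal"]),
    "zh": (["立即验证您的密码", "会议纪要"], ["phishing", "normal"]),
}

# Stage 1: language identification with a Naive Bayes classifier.
# Character n-grams work across scripts without word segmentation.
lang_clf = make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 3)),
                         MultinomialNB())
lang_clf.fit(lang_texts, lang_labels)

# Stage 2: one phishing-vs-normal Naive Bayes classifier per language.
phish_clf = {}
for lang, (texts, labels) in per_language.items():
    clf = make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 3)),
                        MultinomialNB())
    phish_clf[lang] = clf.fit(texts, labels)

def classify_email(text):
    """Route an email through language ID, then the matching phishing model."""
    lang = lang_clf.predict([text])[0]
    return lang, phish_clf[lang].predict([text])[0]

print(classify_email("please verify your password"))

Character n-grams are an assumption on our part (the paper does not specify its feature set); they sidestep word segmentation, which matters when one pipeline must handle Chinese alongside Arabic and English.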


Author(s):  
Clifford Nangle ◽  
Stuart McTaggart ◽  
Margaret MacLeod ◽  
Jackie Caldwell ◽  
Marion Bennie

ABSTRACT
Objectives: The Prescribing Information System (PIS) datamart, hosted by NHS National Services Scotland, receives around 90 million electronic prescription messages per year from GP practices across Scotland. Prescription messages contain information including drug name, quantity and strength stored as coded, machine-readable data, while prescription dose instructions are unstructured free text and difficult to interpret and analyse in volume. The aim, using Natural Language Processing (NLP), was to extract drug dose amount, unit and frequency metadata from freely typed text in dose instructions to support calculation of the intended number of days' treatment. This then allows comparison with actual prescription frequency, treatment adherence and the impact upon prescribing safety and effectiveness.
Approach: An NLP algorithm was developed using the Ciao implementation of Prolog to extract dose amount, unit and frequency metadata from dose instructions held in the PIS datamart for drugs used in the treatment of gastrointestinal, cardiovascular and respiratory disease. Accuracy estimates were obtained by randomly sampling 0.1% of the distinct dose instructions from source records and comparing these with the metadata extracted by the algorithm; an iterative approach was used to modify the algorithm to increase accuracy and coverage.
Results: The NLP algorithm was applied to 39,943,465 prescription instructions issued in 2014, consisting of 575,340 distinct dose instructions. For drugs used in the gastrointestinal, cardiovascular and respiratory systems (i.e. chapters 1, 2 and 3 of the British National Formulary (BNF)), the NLP algorithm successfully extracted drug dose amount, unit and frequency metadata from 95.1%, 98.5% and 97.4% of prescriptions respectively. However, instructions containing terms such as 'as directed' or 'as required' reduce the usability of the metadata by making it difficult to calculate the total dose intended for a specific time period: 7.9%, 0.9% and 27.9% of dose instructions contained terms meaning 'as required', while 3.2%, 3.7% and 4.0% contained terms meaning 'as directed', for drugs used in BNF chapters 1, 2 and 3 respectively.
Conclusion: The NLP algorithm developed can extract dose, unit and frequency metadata from text found in prescriptions issued to treat a wide range of conditions, and this information may be used to support calculation of treatment durations, medicines adherence and cumulative drug exposure. The presence of terms such as 'as required' and 'as directed' has a negative impact on the usability of the metadata, and further work is required to determine the level of impact this has on calculating treatment durations and cumulative drug exposure.
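The paper's algorithm is written in Prolog; as a rough illustration of the kind of extraction involved, here is a minimal regex-based sketch in Python (the patterns, dose vocabulary and frequency table are our simplified assumptions, not the authors' grammar):

import re

# Toy patterns standing in for the paper's Prolog grammar (our simplification).
AMOUNT = r"(?P<amount>\d+(?:\.\d+)?|one|two|three|four)"
UNIT = r"(?P<unit>tablets?|capsules?|ml|mg|puffs?)"
FREQUENCY = {"once daily": 1, "twice daily": 2, "three times daily": 3,
             "every morning": 1, "at night": 1}

def parse_dose_instruction(text):
    """Extract dose amount, unit and doses-per-day from a free-text instruction."""
    text = text.lower()
    m = re.search(AMOUNT + r"\s+" + UNIT, text)
    per_day = next((n for phrase, n in FREQUENCY.items() if phrase in text), None)
    if "as required" in text or "when required" in text:
        qualifier = "as required"   # blocks days'-treatment calculation
    elif "as directed" in text:
        qualifier = "as directed"
    else:
        qualifier = None
    return {"amount": m.group("amount") if m else None,
            "unit": m.group("unit") if m else None,
            "per_day": per_day,
            "qualifier": qualifier}

print(parse_dose_instruction("Take TWO tablets twice daily"))
# -> {'amount': 'two', 'unit': 'tablets', 'per_day': 2, 'qualifier': None}

The qualifier field mirrors the paper's observation that 'as required' and 'as directed' instructions defeat any attempt to compute a total intended dose for a time period.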


Author(s):  
Paolo Santorio

On a traditional view, the semantics of natural language makes essential use of a context parameter, i.e. a set of coordinates that represents the situation of speech. In classical frameworks, this parameter plays two roles: it contributes to determining the content of utterances and it is used to define logical consequence. This paper argues that recent empirical proposals about context shift in natural language, which are supported by an increasing body of cross-linguistic data, are incompatible with this traditional view. The moral is that context has no place in semantic theory proper. We should return to so-called multiple-indexing frameworks that were developed by Montague and others, and relegate context to the postsemantic stage of a theory of meaning.
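For orientation, the classical two-role picture can be stated in Kaplan-style double indexing (a textbook sketch, not the paper's own formalism). A sentence φ is evaluated relative to a context c and an index i:

\[
[\![\varphi]\!]^{c,i} \in \{0,1\}, \qquad
\mathrm{content}_{c}(\varphi) = \lambda i.\,[\![\varphi]\!]^{c,i}, \qquad
\varphi \text{ is a logical truth iff } [\![\varphi]\!]^{c,\,i_{c}} = 1 \text{ for every context } c,
\]

where i_c is the index determined by c. A multiple-indexing framework keeps tuples of indices (world, time, and so on) but drops the privileged context coordinate, leaving its work to the postsemantic stage the paper advocates.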


Author(s):  
Neal Jean ◽  
Sherrie Wang ◽  
Anshul Samar ◽  
George Azzari ◽  
David Lobell ◽  
...  

Geospatial analysis lacks methods like the word vector representations and pre-trained networks that significantly boost performance across a wide range of natural language and computer vision tasks. To fill this gap, we introduce Tile2Vec, an unsupervised representation learning algorithm that extends the distributional hypothesis from natural language — words appearing in similar contexts tend to have similar meanings — to spatially distributed data. We demonstrate empirically that Tile2Vec learns semantically meaningful representations for both image and non-image datasets. Our learned representations significantly improve performance in downstream classification tasks and, similarly to word vectors, allow visual analogies to be obtained via simple arithmetic in the latent space.
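Tile2Vec's training signal is a triplet loss over tiles: an anchor tile, a spatial neighbor (assumed semantically similar under the distributional hypothesis), and a distant tile. A minimal PyTorch sketch of that objective (our illustration of the general technique; the encoder, tile size and margin are placeholder assumptions, not the authors' settings):

import torch
import torch.nn as nn
import torch.nn.functional as F

def triplet_loss(z_anchor, z_neighbor, z_distant, margin=1.0):
    """Pull the anchor toward its neighbor and push it away from the distant tile."""
    d_pos = torch.norm(z_anchor - z_neighbor, dim=1)  # anchor-neighbor distance
    d_neg = torch.norm(z_anchor - z_distant, dim=1)   # anchor-distant distance
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage: a placeholder encoder mapping small flattened tiles to embeddings.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 32))
anchor, neighbor, distant = (torch.randn(8, 3, 16, 16) for _ in range(3))
loss = triplet_loss(encoder(anchor), encoder(neighbor), encoder(distant))
loss.backward()  # gradients flow into the encoder as in ordinary training

Once trained, the embeddings can feed downstream classifiers, and simple vector arithmetic in the latent space supports the visual analogies the abstract mentions.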


Author(s):  
Manuel Lama ◽  
Eduardo Sánchez

In recent years, the growth of the Internet has opened the door to new ways of learning and new educational methodologies. Furthermore, the appearance of different tools and applications has increased the need for interoperable as well as reusable learning content, teaching resources and educational tools (Wiley, 2000). Driven by this new environment, several metadata specifications describing learning resources, such as IEEE LOM (LTCS, 2002) or Dublin Core (DCMI, 2004), and learning design processes (Rawlings et al., 2002) have appeared. In this context, the term learning design describes the method that enables learners to achieve learning objectives after a set of activities are carried out using the resources of an environment. Among the proposed specifications, IMS (IMS, 2003) has emerged as the de facto standard that facilitates the representation of any learning design, which can be based on a wide range of pedagogical techniques. Metadata specifications are useful for describing educational resources in order to favour interoperability and reuse between learning software platforms. However, the majority of metadata standards focus only on determining the vocabulary used to represent the different aspects of the learning process, while the meaning of the metadata elements is usually described in natural language. Although such descriptions are easy for the learning participants to understand, they are not appropriate for software programs designed to process the metadata. To solve this issue, ontologies (Gómez-Pérez, Fernández-López, and Corcho, 2004) could be used to describe the structure and meaning of the metadata elements formally and explicitly; that is, an ontology would semantically describe the metadata concepts. Furthermore, both metadata and ontologies emphasize that their descriptions must be shared (or standardized) within a given community. In this paper, we present a short review of the main ontologies developed in recent years in the field of education, focusing on the use that authors have made of them. As we will show, ontologies address the inconsistencies that arise from natural language descriptions and support consensus on the semantics of a given specification.
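As a small illustration of the kind of formal description at stake, here is a sketch using Python's rdflib (the namespace, class and property names are invented for the example, not drawn from IEEE LOM or IMS):

from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/learning#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Declare metadata elements as ontology classes and properties with an
# explicit, machine-processable structure, instead of a natural language gloss.
g.add((EX.LearningActivity, RDF.type, RDFS.Class))
g.add((EX.LearningObjective, RDF.type, RDFS.Class))
g.add((EX.hasObjective, RDF.type, RDF.Property))
g.add((EX.hasObjective, RDFS.domain, EX.LearningActivity))
g.add((EX.hasObjective, RDFS.range, EX.LearningObjective))
g.add((EX.LearningObjective, RDFS.comment,
       Literal("The objective a learner achieves by carrying out activities.")))

print(g.serialize(format="turtle"))

Because the domain and range constraints are stated formally rather than in prose, a software agent can check and reason over them, which is precisely what natural language element descriptions do not permit.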


Author(s):  
Kit Fine

A number of philosophers have flirted with the idea of impossible worlds and some have even become enamored of it. But it has not met with the same degree of acceptance as the more familiar idea of a possible world. Whereas possible worlds have played a broad role in specifying the semantics for natural language and for a wide range of formal languages, impossible worlds have had a much more limited role; and there has not even been general agreement as to how a reasonable theory of impossible worlds is to be developed or applied. This chapter provides a natural way of introducing impossible states into the framework of truthmaker semantics and shows how their introduction permits a number of useful applications.
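For orientation, the basic machinery can be sketched as follows (a standard presentation of the truthmaker setting, not the chapter's own definitions). A modalized state space is a triple

\[
\langle S,\; S^{\Diamond},\; \sqsubseteq \rangle,
\]

where S is a set of states partially ordered by the parthood relation ⊑ and S^◊ ⊆ S is the subset of possible states, closed under parts (if s ⊑ t and t ∈ S^◊ then s ∈ S^◊). Impossible states are then simply the members of S ∖ S^◊, and the clauses stating which states verify or falsify a sentence can be defined over all of S rather than over the possible states alone.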


AI Magazine ◽  
2015 ◽  
Vol 36 (1) ◽  
pp. 99-102
Author(s):  
Tiffany Barnes ◽  
Oliver Bown ◽  
Michael Buro ◽  
Michael Cook ◽  
Arne Eigenfeldt ◽  
...  

The AIIDE-14 Workshop program was held Friday and Saturday, October 3–4, 2014 at North Carolina State University in Raleigh, North Carolina. The workshop program included five workshops covering a wide range of topics. The titles of the workshops held Friday were Games and Natural Language Processing, and Artificial Intelligence in Adversarial Real-Time Games. The titles of the workshops held Saturday were Diversity in Games Research, Experimental Artificial Intelligence in Games, and Musical Metacreation. This article presents short summaries of those events.


2004 ◽  
Vol 10 (1) ◽  
pp. 57-89 ◽  
Author(s):  
MARJORIE MCSHANE ◽  
SERGEI NIRENBURG ◽  
RON ZACHARSKI

The topic of mood and modality (MOD) is a difficult aspect of language description because, among other reasons, the inventory of modal meanings is not stable across languages, moods do not map neatly from one language to another, modality may be realised morphologically or by free-standing words, and modality interacts in complex ways with other modules of the grammar, like tense and aspect. Describing MOD is especially difficult if one attempts to develop a unified approach that not only provides cross-linguistic coverage, but is also useful in practical natural language processing systems. This article discusses an approach to MOD that was developed for and implemented in the Boas Knowledge-Elicitation (KE) system. Boas elicits knowledge about any language, L, from an informant who need not be a trained linguist. That knowledge then serves as the static resources for an L-to-English translation system. The KE methodology used throughout Boas is driven by a resident inventory of parameters, value sets, and means of their realisation for a wide range of language phenomena. MOD is one of those parameters, whose values are the inventory of attested and not yet attested moods (e.g. indicative, conditional, imperative), and whose realisations include flective morphology, agglutinating morphology, isolating morphology, words, phrases and constructions. Developing the MOD elicitation procedures for Boas amounted to wedding the extensive theoretical and descriptive research on MOD with practical approaches to guiding an untrained informant through this non-trivial task. We believe that our experience in building the MOD module of Boas offers insights not only into cross-linguistic aspects of MOD that have not previously been detailed in the natural language processing literature, but also into KE methodologies that could be applied more broadly.
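A toy rendering of the parameter/value/realisation inventory the abstract describes (our guess at a convenient data structure; Boas's actual internal representation is not specified here):

from dataclasses import dataclass, field

@dataclass
class Parameter:
    """One elicitation parameter: its value inventory and possible realisations."""
    name: str
    values: list[str]
    realisations: list[str]
    elicited: dict[str, str] = field(default_factory=dict)  # value -> realisation in L

MOD = Parameter(
    name="mood-and-modality",
    values=["indicative", "conditional", "imperative"],  # extensible to unattested moods
    realisations=["flective morphology", "agglutinating morphology",
                  "isolating morphology", "words", "phrases", "constructions"],
)

# During elicitation, the informant records how language L realises each value.
MOD.elicited["conditional"] = "free-standing particle plus indicative verb form"
print(MOD)

The point of such an inventory is that an untrained informant only ever chooses among predefined values and realisation types, which is what lets the elicited knowledge feed directly into the L-to-English translation system.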

