Alternative Semantics for Normative Reasoning with an Application to Regret and Responsibility

2021 ◽  
Vol 30 (4) ◽  
pp. 653-679
Author(s):  
Daniela Glavaničová ◽  
Matteo Pascucci

We provide a fine-grained analysis of notions of regret and responsibility (such as agent-regret and individual responsibility) in terms of a language of multimodal logic. This language undergoes a detailed semantic analysis via two sorts of models: (i) relating models, which are equipped with a relation of propositional pertinence, and (ii) synonymy models, which are equipped with a relation of propositional synonymy. We specify a class of strictly relating models and show that each synonymy model can be transformed into an equivalent strictly relating model. Moreover, we define an axiomatic system that captures the notion of validity in the class of all strictly relating models.
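As a rough illustration of the relating models described in this abstract, the sketch below evaluates a toy "relating conjunction" that is true only when both conjuncts hold and the pair stands in a pertinence relation R. The formula encoding, names, and connective are invented for illustration and do not reproduce the authors' formal language.

```python
# Toy evaluator for a relating connective: truth of "A and_r B" requires
# truth of both conjuncts AND pertinence R(A, B). Illustrative only.
def holds(formula, valuation, R):
    kind = formula[0]
    if kind == "atom":
        return valuation[formula[1]]
    if kind == "not":
        return not holds(formula[1], valuation, R)
    if kind == "and_r":
        # Relating conjunction: classical truth alone is not enough.
        _, a, b = formula
        return holds(a, valuation, R) and holds(b, valuation, R) and (a, b) in R
    raise ValueError(kind)

v = {"p": True, "q": True}
p, q = ("atom", "p"), ("atom", "q")
print(holds(("and_r", p, q), v, {(p, q)}))  # True: truth plus pertinence
print(holds(("and_r", p, q), v, set()))     # False: truth without pertinence
```

The second call shows what distinguishes relating semantics from classical conjunction: both atoms are true, yet the formula fails because the propositions are not related.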

Semantic Web ◽  
2020 ◽  
pp. 1-16
Author(s):  
Francesco Beretta

This paper addresses the interoperability of data generated by historical research and heritage institutions, with the aim of making them reusable for new research agendas according to the FAIR principles. After introducing the symogih.org project’s ontology, it describes the essential aspects of the process of historical knowledge production. It then develops an epistemological and semantic analysis of conceptual data modelling applied to factual historical information, based on the foundational ontologies Constructive Descriptions and Situations and DOLCE, and discusses the reasons for adopting the CIDOC CRM as a core ontology for the field of historical research, while extending it with some relevant, missing high-level classes. Finally, it shows how collaborative data modelling carried out in the ontology management environment OntoME makes it possible to elaborate a communal, fine-grained and adaptive ontology of the domain, provided an active research community engages in this process. With this in mind, the Data for History consortium was founded in 2017; it promotes the adoption of a shared conceptualization in the field of historical research.


Terminology ◽  
2014 ◽  
Vol 20 (2) ◽  
pp. 279-303 ◽  
Author(s):  
Ann Bertels ◽  
Dirk Speelman

This paper presents an innovative approach, within the framework of distributional semantics, for the exploration of semantic similarity in a technical corpus. In complement to a previous quantitative semantic analysis conducted in the same domain of machining terminology, this paper sets out to discover fine-grained semantic distinctions in an attempt to explore the semantic heterogeneity of a number of technical items. Multidimensional scaling analysis (MDS) was carried out in order to cluster first-order co-occurrences of a technical node with respect to shared second-order and third-order co-occurrences. By taking into account the association values between relevant first and second-order co-occurrences, semantic similarities and dissimilarities between first-order co-occurrences could be determined, as well as proximities and distances on a graph. In our discussion of the methodology and results of statistical clustering techniques for semantic purposes, we pay special attention to the linguistic and terminological interpretation.
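As a rough sketch of the clustering step described in this abstract, the example below profiles four invented first-order co-occurrences of a technical node by their shared higher-order co-occurrents, then embeds the cosine dissimilarities in two dimensions with classical (Torgerson) MDS so that semantic proximities become distances on a plot. All items and counts are invented; the original study uses richer association measures.

```python
import numpy as np

# Rows: hypothetical first-order co-occurrences of a machining term;
# columns: counts of shared second-/third-order co-occurrents.
cooc = np.array([
    [4.0, 0.0, 3.0, 1.0],   # "cutting"
    [5.0, 1.0, 2.0, 0.0],   # "milling"
    [0.0, 6.0, 0.0, 5.0],   # "surface"
    [1.0, 4.0, 1.0, 6.0],   # "finish"
])

profiles = cooc / np.linalg.norm(cooc, axis=1, keepdims=True)
dissim = 1.0 - profiles @ profiles.T          # cosine dissimilarity
np.fill_diagonal(dissim, 0.0)

# Classical MDS: double-centre the squared dissimilarities,
# then keep the two leading eigendirections as coordinates.
n = dissim.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (dissim ** 2) @ J
vals, vecs = np.linalg.eigh(B)
top = np.argsort(vals)[::-1][:2]
coords = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
print(coords.shape)  # (4, 2)
```

On this toy matrix, "cutting" and "milling" share co-occurrence profiles and land close together, while "surface" and "finish" form a separate cluster.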


2018 ◽  
Vol 45 (2) ◽  
pp. 259-280 ◽  
Author(s):  
Bilal Abu-Salih ◽  
Pornpit Wongthongtham ◽  
Kit Yan Chan ◽  
Dengya Zhu

The widespread use of big social data has influenced the research community in several significant ways. In particular, the notion of social trust has attracted a great deal of attention from information processors and computer scientists as well as information consumers and formal organisations. This attention is embodied in the various shapes social trust has taken, such as its use in recommendation systems, viral marketing and expertise retrieval. Hence, it is essential to implement frameworks that are able to temporally measure a user’s credibility in all categories of big social data. To this end, this article proposes CredSaT (Credibility incorporating Semantic analysis and Temporal factor), a fine-grained credibility analysis framework for big social data. A novel metric that includes both new and current features, as well as the temporal factor, is harnessed to establish the credibility ranking of users. Experiments on real-world datasets demonstrate the efficacy and applicability of our model in determining highly domain-based trustworthy users. Furthermore, CredSaT may also be used to identify spammers and other anomalous users.
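To make the temporal factor concrete, here is a minimal sketch of a temporally weighted credibility score: per-period feature scores are combined with exponential decay so that recent behaviour counts more. The decay scheme, user names, and scores are invented for illustration and do not reproduce CredSaT's actual metric or features.

```python
from math import exp

# Toy temporally weighted credibility score (illustrative only;
# CredSaT's actual metric and feature set differ).
def credibility(scores_by_period, decay=0.5):
    """scores_by_period: (age_in_periods, feature_score) pairs,
    where smaller age = more recent activity, weighted more."""
    weights = [exp(-decay * age) for age, _ in scores_by_period]
    weighted = sum(w * s for w, (_, s) in zip(weights, scores_by_period))
    return weighted / sum(weights)

users = {
    "alice": [(0, 0.9), (1, 0.8), (2, 0.7)],  # consistently credible
    "bob":   [(0, 0.2), (1, 0.9), (2, 0.9)],  # credible before, not recently
}
ranking = sorted(users, key=lambda u: credibility(users[u]), reverse=True)
print(ranking)  # ['alice', 'bob']
```

Without the decay both users would average similarly; the temporal weighting is what pushes the recently untrustworthy user down the ranking.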


Author(s):  
Abdessamad Benlahbib ◽  
El Habib Nfaoui

Customer reviews are a valuable source of information from which we can extract very useful data about different online shopping experiences. For trendy items (products, movies, TV shows, hotels, services, etc.), the number of available user and customer opinions can easily surpass thousands. Therefore, online reputation systems can aid potential customers in making the right decision (buying, renting, booking, etc.) by automatically mining textual reviews and their ratings. This paper presents MTVRep, a movie and TV show reputation system that incorporates fine-grained opinion mining and semantic analysis to generate and visualize reputation toward movies and TV shows. Unlike previous studies on reputation generation that treat sentiment analysis as a binary classification problem (positive, negative), the proposed system identifies sentiment strength during the sentiment classification phase by using fine-grained sentiment analysis to separate movie and TV show reviews into five discrete classes: strongly negative, weakly negative, neutral, weakly positive and strongly positive. In addition, it employs embeddings from language models (ELMo) representations to extract semantic relations between reviews. The contribution of this paper is threefold. First, movie and TV show reviews are separated into five groups based on their sentiment orientation. Second, a custom score is computed for each opinion group. Finally, a numerical reputation value is produced for the target movie or TV show. The efficacy of the proposed system is illustrated by conducting several experiments on a real-world movie and TV show dataset.
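The three-step aggregation described above (five sentiment groups, a score per group, one reputation value) can be sketched as follows. The class-to-value mapping, the 0-10 scale, and the example labels are invented for illustration; MTVRep's actual per-group scoring formulas differ.

```python
# Illustrative five-class reputation aggregation (not MTVRep's exact formulas).
CLASS_VALUE = {
    "strongly_negative": 0.0,
    "weakly_negative": 0.25,
    "neutral": 0.5,
    "weakly_positive": 0.75,
    "strongly_positive": 1.0,
}

def reputation(labels):
    """labels: predicted sentiment class per review; returns a 0-10 value."""
    groups = {c: 0 for c in CLASS_VALUE}
    for lab in labels:                 # step 1: separate into five groups
        groups[lab] += 1
    n = len(labels)
    # steps 2-3: score each group by its class value, then aggregate
    return 10 * sum(CLASS_VALUE[c] * cnt for c, cnt in groups.items()) / n

labels = ["strongly_positive"] * 6 + ["weakly_positive"] * 3 + ["weakly_negative"]
print(round(reputation(labels), 2))  # 8.5
```

The point of five classes over two is visible here: a weakly positive review contributes 0.75 rather than collapsing to the same weight as a strongly positive one.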


2021 ◽  
Vol 12 ◽  
Author(s):  
Changcheng Wu ◽  
Junyi Li ◽  
Ye Zhang ◽  
Chunmei Lan ◽  
Kaiji Zhou ◽  
...  

Nowadays, most courses in massive open online course (MOOC) platforms are xMOOCs, which are based on the traditional instruction-driven principle. The course lecture is still the key component of the course. Thus, analyzing the lectures of xMOOC instructors would be helpful for evaluating course quality and providing feedback to instructors and researchers. The current study aimed to portray the lecture styles of instructors in MOOCs from the perspective of natural language processing. Specifically, 129 course transcripts were downloaded from two major MOOC platforms. Two semantic analysis tools (Linguistic Inquiry and Word Count and Coh-Metrix) were used to extract semantic features including self-reference, tone, affect, cognitive words, cohesion, complex words, and sentence length. On the basis of students' comments, course video reviews, and the results of cluster analysis, we found four different lecture styles: “perfect,” “communicative,” “balanced,” and “serious.” Significant differences were found between the lecture styles within different disciplines for note taking, discussion posts, and overall course satisfaction. Future studies could use fine-grained log data to verify our results and explore how the results of natural language processing can be used to improve instructors' lecturing in both MOOCs and traditional classes.
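As a minimal sketch of the feature-extraction step, the example below computes two of the semantic features mentioned above (self-reference rate and mean sentence length) from a transcript string. The word list and tokenization are simplified inventions; LIWC and Coh-Metrix compute far richer, validated versions of these features.

```python
import re

# Toy transcript features: self-reference rate and mean sentence length.
# The pronoun list below is an invented simplification of LIWC's categories.
SELF_WORDS = {"i", "me", "my", "mine", "we", "our", "us"}

def lecture_features(transcript):
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    words = re.findall(r"[a-z']+", transcript.lower())
    self_rate = sum(w in SELF_WORDS for w in words) / len(words)
    mean_len = len(words) / len(sentences)
    return {"self_reference": self_rate, "mean_sentence_length": mean_len}

demo = "I think we should start. The derivative measures change."
print(lecture_features(demo))
```

Feature vectors like this, one per transcript, are the kind of input a cluster analysis would group into lecture styles.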


2019 ◽  
Vol 40 (2) ◽  
pp. 355-400
Author(s):  
Georgios Ioannou

This is a diachronic corpus-based semantic analysis of the verb plēróō in Ancient Greek, from the 6th c. BCE to the 2nd c. CE. Adopting the usage-based profile approach, it inquires into the relation between formal features and themes of discourse, following the methodological consequences of two theoretical assumptions: first, that textual themes have a prototypical structure; second, that formal features, as a conceptual schematicity underlying the elaborated and situated level of discourse, are immanent to these themes. Methodologically, it implements a Multiple Correspondence Analysis for each century, exploring the contribution of the formal and a fine-grained range of semantic features to the variation, as well as the strength of association between them. In order to test the plausibility of the immanence hypothesis, the analysis implements a Hierarchical Agglomerative Clustering for the totality of data over the eight-century period, comparing the results of the latter with the individual MCA analyses.
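To illustrate the Hierarchical Agglomerative Clustering step in isolation, here is a minimal average-linkage HAC over toy 2-D feature profiles (one could think of the rows as period-level usage profiles; the data and the choice of average linkage are invented for illustration, not taken from the study).

```python
import numpy as np

# Minimal average-linkage agglomerative clustering (illustrative sketch).
def hac(points, n_clusters):
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average pairwise distance between the two clusters
                d = np.mean([np.linalg.norm(points[i] - points[j])
                             for i in clusters[a] for j in clusters[b]])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)  # merge the closest pair
    return clusters

# Toy period-level feature profiles: two clearly separated groups.
pts = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 1.1], [1.1, 1.0]])
print(hac(pts, 2))  # two clusters: {0, 1} and {2, 3}
```

In the study's design, clusters obtained this way over the whole eight-century dataset are compared against the per-century MCA solutions.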


2018 ◽  
Vol 1060 ◽  
pp. 012028
Author(s):  
Bopeng Liu ◽  
Hong Yang ◽  
Zhengkui Lin ◽  
Menglu You ◽  
Zhaohui Wang

2009 ◽  
Vol 34 ◽  
pp. 443-498 ◽  
Author(s):  
E. Gabrilovich ◽  
S. Markovitch

Adequate representation of natural language semantics requires access to vast amounts of common sense and domain-specific world knowledge. Prior work in the field was based on purely statistical techniques that did not make use of background knowledge, on limited lexicographic knowledge bases such as WordNet, or on huge manual efforts such as the CYC project. Here we propose a novel method, called Explicit Semantic Analysis (ESA), for fine-grained semantic interpretation of unrestricted natural language texts. Our method represents meaning in a high-dimensional space of concepts derived from Wikipedia, the largest encyclopedia in existence. We explicitly represent the meaning of any text in terms of Wikipedia-based concepts. We evaluate the effectiveness of our method on text categorization and on computing the degree of semantic relatedness between fragments of natural language text. Using ESA results in significant improvements over the previous state of the art in both tasks. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.
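The core ESA idea (texts as weighted vectors over explicit encyclopedia concepts, relatedness as cosine between those vectors) can be sketched as below. The word-to-concept weights are invented stand-ins; real ESA derives them from TF-IDF over the full Wikipedia article collection.

```python
import math

# Toy ESA sketch: words map to weighted Wikipedia-style concepts
# (weights invented); a text's meaning is the summed concept vector.
WORD_CONCEPTS = {
    "bank":  {"Bank_(finance)": 0.8, "River": 0.3},
    "money": {"Bank_(finance)": 0.9, "Currency": 0.7},
    "river": {"River": 0.9, "Water": 0.6},
}

def esa_vector(text):
    vec = {}
    for word in text.lower().split():
        for concept, w in WORD_CONCEPTS.get(word, {}).items():
            vec[concept] = vec.get(concept, 0.0) + w
    return vec

def relatedness(a, b):
    """Cosine similarity between two concept vectors."""
    va, vb = esa_vector(a), esa_vector(b)
    dot = sum(va[c] * vb.get(c, 0.0) for c in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

print(relatedness("bank money", "money") > relatedness("bank money", "river"))
```

Because the dimensions are named Wikipedia concepts rather than latent factors, a judgment like this one can be explained to a user by listing the shared concepts.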


2003 ◽  
Vol 27 (2) ◽  
pp. 287-322 ◽  
Author(s):  
Cliff Goddard

This paper undertakes a fine-grained semantic analysis of some of the multiple uses of the polyfunctional verbal prefix ter- in Malay (Bahasa Melayu), the national language of Malaysia. The analysis is conducted within the natural semantic metalanguage (NSM) framework originated by Anna Wierzbicka, supported by examples drawn from a large corpus of naturally occurring Malay texts. The main goals are to accurately describe the full range of meanings, and to decide to what extent apparent differences are contextually induced as opposed to being semantically encoded. In the end, seven distinct but interrelated lexico-semantic schemas are identified, constituting a network of grammatical polysemy.


Linguistics ◽  
2020 ◽  
Vol 58 (5) ◽  
pp. 1285-1322
Author(s):  
Ileana Paul ◽  
Baholisoa Simone Ralalaoherivony ◽  
Henriëtte de Swart

Malagasy is a language with non-culminating accomplishments. There is, however, a specific prefix (maha-) which appears to entail culmination. Moreover, verbs prefixed with maha- display a range of interpretations: causative, abilitive, ‘manage to’, and unintentionality. This paper accounts for these two aspects of the prefix with a unified semantic analysis. In particular, maha- encodes double prevention. The double prevention configuration is associated with a circumstantial modal base, which leads to culminating readings in the past and future, but not in the present tense. The embedding of double prevention in a force-theoretic framework leads to a more fine-grained theory of causation, which the Malagasy data show to have empirical relevance.

