Towards Automated Semantic Explainability of Multimedia Feature Graphs

Information ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 502
Author(s):  
Stefan Wagenpfeil ◽  
Paul Mc Kevitt ◽  
Matthias Hemmje

Multimedia feature graphs are employed to represent features of images, video, audio, or text. Various techniques exist to extract such features from multimedia objects. In this paper, we describe the extension of such a feature graph to represent the meaning of multimedia features and introduce a formal context-free PS-grammar (Phrase Structure grammar) to automatically generate human-understandable natural language expressions based on those features. To achieve this, we define a semantic extension to syntactic multimedia feature graphs and introduce a set of production rules for phrases of natural English expressions. This explainability, which is founded on a semantic model, provides the opportunity to represent any multimedia feature in a human-readable and human-understandable form, largely closing the gap between the technical representation of such features and their semantics. We show how this explainability can be formally defined and demonstrate the corresponding implementation based on our generic multimedia analysis framework. Furthermore, we show how this semantic extension can be employed to improve effectiveness in precision and recall experiments.
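The generation step described in this abstract can be sketched with a toy context-free PS-grammar: starting from a start symbol, production rules are expanded recursively until only terminal words remain. The non-terminals, productions, and vocabulary below are invented for illustration and are not the paper's actual grammar.

```python
import random

# Toy PS-grammar: each non-terminal maps to a list of alternative
# productions; anything not in the table is a terminal word.
# (Illustrative rules only, not the paper's grammar.)
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["image"], ["object"], ["feature"]],
    "V":   [["contains"], ["depicts"]],
}

def generate(symbol="S", rng=random):
    """Expand a non-terminal by picking one production at random;
    terminals pass through unchanged."""
    if symbol not in GRAMMAR:
        return [symbol]
    words = []
    for part in rng.choice(GRAMMAR[symbol]):
        words.extend(generate(part, rng))
    return words

print(" ".join(generate()))  # e.g. "the image depicts a feature"
```

With this grammar every derivation has the shape Det N V Det N, so each generated expression is exactly five words long.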

2021 ◽  
Vol 3 (2) ◽  
pp. 215-244
Author(s):  
Diego Gabriel Krivochen

Abstract Proof-theoretic models of grammar are based on the view that an explicit characterization of a language comes in the form of the recursive enumeration of strings in that language. That recursive enumeration is carried out by a procedure which strongly generates a set of structural descriptions Σ and weakly generates a set of strings S; a grammar is thus a function that pairs an element of Σ with elements of S. Structural descriptions are obtained by means of Context-Free phrase structure rules or via recursive combinatorics and structure is assumed to be uniform: binary branching trees all the way down. In this work we will analyse natural language constructions for which such a rigid conception of phrase structure is descriptively inadequate and propose a solution for the problem of phrase structure grammars assigning too much or too little structure to natural language strings: we propose that the grammar can oscillate between levels of computational complexity in local domains, which correspond to elementary trees in a lexicalised Tree Adjoining Grammar.


1998 ◽  
Vol 4 (4) ◽  
pp. 287-307 ◽  
Author(s):  
RONALDO TEIXEIRA MARTINS ◽  
RICARDO HASEGAWA ◽  
MARIA DAS GRAÇAS VOLPE NUNES ◽  
GISELE MONTILHA ◽  
OSVALDO NOVAIS DE OLIVEIRA

This paper presents a number of linguistic and computational issues identified during the implementation of a general use grammar checker for contemporary Brazilian Portuguese, ReGra, that has been incorporated in the word processor REDATOR by Itautec/Philco (Brazil). Two main strategies were employed in the implementation of correction rules: an error-driven, localist approach based on the identification of patterns indicative of grammatical mistakes; and a more generic approach that requires automatic syntactic analysis. In this discussion, particular emphasis is given to the development of a parser based on a phrase structure grammar comprising over 600 production rules. As for the computational performance, ReGra permits texts to be revised at a rate of ca. 200 words per second.
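The error-driven, localist strategy described above can be sketched as a scan for surface patterns that signal likely mistakes. The example patterns below are invented for illustration; ReGra's actual rule base (a parser with over 600 production rules plus correction rules) is far larger and also relies on full syntactic analysis.

```python
import re

# Minimal sketch of error-driven, localist checking: match surface
# patterns indicative of mistakes. These Portuguese-flavoured
# patterns are illustrative inventions, not ReGra's rules.
ERROR_PATTERNS = [
    (re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE), "repeated word"),
    (re.compile(r"\bmais\s+melhor\b", re.IGNORECASE), "redundant comparative"),
]

def check(text):
    """Return (matched span, message) for every pattern hit."""
    hits = []
    for pattern, message in ERROR_PATTERNS:
        for m in pattern.finditer(text):
            hits.append((m.group(0), message))
    return hits

print(check("O texto texto tem erros mais melhor evitados."))
```

The localist approach needs no parse tree at all, which is what makes it fast; the generic approach mentioned in the abstract kicks in where a mistake is only visible in the syntactic analysis.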


A traditional concern of grammarians has been the question of whether the members of given pairs of expressions belong to the same or different syntactic categories. Consider the following example sentences: (a) I think Fido destroyed the kennel. (b) The kennel, I think Fido destroyed. Are the two underlined expressions members of the same syntactic category or not? The generative grammarians of the last quarter century have, almost without exception, taken the answer to be affirmative. In the present paper I explore the implications of taking the answer to be negative. The changes consequent upon this negative answer turn out to be very far-reaching: (i) it becomes as simple to state rules for constructions of the general type exemplified in (b) as it is for the canonical NP VP construction in (a); (ii) we immediately derive an explanation for a range of coordination facts that have remained quite mysterious since they were discovered by J. R. Ross some 15 years ago; (iii) our grammars can entirely dispense with the class of rules known as transformations; (iv) our grammars can be shown to be formally equivalent to what are known as the context-free phrase structure grammars; (v) this latter consequence has the effect of making potentially relevant to natural language grammars a whole literature of mathematical results on the parsability and learnability of context-free phrase structure grammars.


1984 ◽  
Vol 7 (2) ◽  
pp. 115-143 ◽  
Author(s):  
Robin Cooper

Swedish noun-phrases of the form (Det) (Adj)* N are examined in the light of recent work in generalized phrase-structure grammar. It is argued that simple generalizations about the phrase-structure of these NPs are lost by trying to account for the precise morphological possibilities by using phrase-structure rules mentioning categories marked with morphological features. What could be accounted for by two rules must be broken down into subcases which need seven rules, thereby obscuring the overall syntactic structure of the NPs. An alternative is suggested which maintains the simple syntax which generates morphologically ill-formed NPs but only allows morphologically well-formed ones to be interpreted. It is suggested that this system can be constrained so as to generate only context-free languages.
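The overgeneration-plus-filtering idea in this abstract can be sketched as follows: a single simple phrase-structure rule licenses any (Det) (Adj)* N string, and a separate morphological check decides which strings are interpretable. The miniature Swedish lexicon and feature values below are simplified illustrations, not the paper's analysis.

```python
# Overgenerate-then-filter sketch: syntax freely builds Det Adj* N
# strings; a separate check filters morphologically ill-formed NPs.
# (Simplified lexicon and features, for illustration only.)
LEXICON = {
    "det":   {"cat": "Det", "gender": "neuter"},
    "den":   {"cat": "Det", "gender": "common"},
    "stora": {"cat": "Adj", "gender": None},   # weak form, unmarked
    "huset": {"cat": "N",   "gender": "neuter"},
    "bilen": {"cat": "N",   "gender": "common"},
}

def syntactically_ok(words):
    """One simple NP rule: Det Adj* N, with no feature checking."""
    cats = [LEXICON[w]["cat"] for w in words]
    return (len(cats) >= 2 and cats[0] == "Det" and cats[-1] == "N"
            and all(c == "Adj" for c in cats[1:-1]))

def interpretable(words):
    """Filter: all morphologically marked gender features must agree."""
    genders = {LEXICON[w]["gender"] for w in words} - {None}
    return syntactically_ok(words) and len(genders) <= 1

print(interpretable(["det", "stora", "huset"]))  # True
print(interpretable(["den", "stora", "huset"]))  # False (gender clash)
```

The point mirrors the abstract: the single structural rule stays simple, and the morphological subcases that would otherwise multiply the phrase-structure rules live entirely in the filter.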


Author(s):  
Ronald M. Kaplan

This article introduces some of the phenomena that theories of natural language syntax aim to explain. It briefly discusses a few of the formal approaches to syntax that have figured prominently in computational research and implementation. The fundamental problem of syntax is to characterize the relation between semantic predicate-argument relations and the superficial word and phrase configurations by which a language expresses them. The major task of syntactic theory is to define an explicit notation for writing grammars. This article details a framework called transformational grammar that combines a context-free phrase-structure grammar with another component of transformations that specify how trees of a given form can be transformed into other trees in a systematic way. Finally, it mentions briefly two syntactic systems that are of linguistic and computational interest, namely, generalized phrase structure grammar and tree-adjoining grammars.
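The two-component architecture described here can be sketched in miniature: a context-free base produces a tree, and a transformation maps trees of a given shape onto other trees in a systematic way. The topicalization rule below is an invented toy example, not a rule from any particular theory.

```python
# Toy transformational component. Base trees are nested tuples
# (label, child, ...); the rule below is an illustrative example
# of a structure-dependent mapping between trees.
def topicalize(tree):
    """Map S(NP_subj, VP(V, NP_obj)) onto S(NP_obj, S(NP_subj, VP(V, t))),
    fronting the object and leaving a trace t; other trees pass through."""
    if tree[0] == "S" and len(tree) == 3:
        _, subj, vp = tree
        if vp[0] == "VP" and len(vp) == 3 and vp[2][0] == "NP":
            verb, obj = vp[1], vp[2]
            return ("S", obj, ("S", subj, ("VP", verb, ("NP", "t"))))
    return tree

base = ("S", ("NP", "Fido"),
             ("VP", ("V", "destroyed"), ("NP", "the kennel")))
print(topicalize(base))
```

The base tree here is exactly the kind of structure a context-free phrase-structure grammar generates; the transformation is the extra component that relates it to the topicalized form.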


2021 ◽  
Vol 55 (1) ◽  
pp. 231-263
Author(s):  
Timothy Osborne

Abstract The so-called ‘Big Mess Construction’ (BMC) frustrates standard assumptions about the structure of nominal groups. The normal position of an attributive adjective is after the determiner and before the noun, but in the BMC, the adjective precedes the determiner, e.g. that strange a sound, so big a scandal, too lame an excuse. Previous accounts of the BMC are couched in ‘Phrase Structure Grammar’ (PSG) and view the noun or the determiner (or the preposition of) as the root/head of the BMC phrase. In contrast, the current approach, which is couched in a ‘Dependency Grammar’ (DG) model, argues that the adjective is in fact the root/head of the phrase. A number of insights point to the adjective as the root/head, the most important of which is the optional appearance of the preposition of, e.g. that strange of a sound, so big of a scandal, too lame of an excuse.
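The competing analyses can be made concrete by encoding each as a dependency structure, here a plain mapping from each word to its head, with None marking the root. This encoding is an illustrative convention, not the paper's formalism.

```python
# Two head assignments for "too lame an excuse": a PSG-style analysis
# with the noun as root versus the DG-style analysis with the
# adjective as root. (Illustrative encoding only.)
psg_style = {"excuse": None, "an": "excuse", "lame": "excuse", "too": "lame"}
dg_style  = {"lame": None, "too": "lame", "excuse": "lame", "an": "excuse"}

def root(analysis):
    """Return the word that has no head."""
    return next(w for w, h in analysis.items() if h is None)

print(root(psg_style), root(dg_style))  # excuse lame
```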


Linguistics ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Frank Van Eynde

Abstract It is commonly assumed that participles show a mixture of verbal and adjectival properties, but the issue of how this mixed nature can best be captured is anything but settled. Analyses range from the purely adjectival to the purely verbal, with various shades in between. This lack of consensus is at least partly due to the fact that participles are used in a variety of ways and that an analysis which fits one of these uses is not necessarily equally plausible for the others. In an effort to overcome the resulting fragmentation, this paper proposes an analysis that covers all uses of the participles, from the adnominal through the predicative to the free adjunct uses, including also the nominalized ones. To keep the task feasible, we focus on one language, namely Dutch. With the help of a treebank we first identify the uses of the Dutch participles and describe their properties in informal terms. In a second step we provide an analysis in the notation of Head-driven Phrase Structure Grammar. A key property of the analysis is the differentiation between core uses and grammaticalized uses. The treatment of the latter is influenced by insights from Grammaticalization Theory.

