distinct word: Recently Published Documents

TOTAL DOCUMENTS: 13 (five years: 0)
H-INDEX: 4 (five years: 0)

2018 · Vol 115 (32) · pp. E7595–E7604
Author(s): Aliette Lochy, Corentin Jacques, Louis Maillard, Sophie Colnat-Coulbois, Bruno Rossion, …

We report a comprehensive cartography of selective responses to visual letters and words in the human ventral occipito-temporal cortex (VOTC) with direct neural recordings, clarifying key aspects of the neural basis of reading. Intracerebral recordings were performed in a large group of patients (n = 37) presented with visual words inserted periodically in rapid sequences of pseudofonts, nonwords, or pseudowords, enabling classification of responses at three levels of word processing: letter, prelexical, and lexical. While letter-selective responses are found in much of the VOTC, with a higher proportion in left posterior regions, prelexical/lexical responses are confined to the middle and anterior sections of the left fusiform gyrus. This region overlaps with and extends more anteriorly than the visual word form area typically identified with functional magnetic resonance imaging. In this region, prelexical responses provide evidence for populations of neurons sensitive to the statistical regularity of letter combinations independently of lexical responses to familiar words. Despite extensive sampling in anterior ventral temporal regions, there is no hierarchical organization between prelexical and lexical responses in the left fusiform gyrus. Overall, distinct word processing levels depend on neural populations that are spatially intermingled rather than organized according to a strict postero-anterior hierarchy in the left VOTC.


2018 · Vol 3 (1) · pp. 39
Author(s): Ksenia Ershova

This paper presents an analysis of word formation in West Circassian, a polysynthetic language. I argue that while verbs and nouns superficially share a similar morphological profile, they are in fact constructed through two distinct word formation strategies: while verbal morphology is concatenated via syntactic head movement, the noun phrase is pronounced as a single word due to rules of syntax-to-prosody mapping. Such a division of labor provides an account for why only nouns, and not verbs, exhibit productive noun incorporation in the language: West Circassian noun incorporation is prosodic, rather than syntactic. The evidence for this two-fold approach to word formation comes from morpheme ordering in nominalizations.


2017 · pp. 18–26
Author(s): Petro Bilousenko, Svitlana Sablina

Based on reconstructed derivatives of the Proto-Slavic era, a lexical-derivational taxonomy of feminine nouns with the suffix -va is carried out, and the origin and typology of the derivational suffix -va in such derivatives are established. It is shown that the suffix -va combined mostly with verbal and nominal bases. The low frequency with which Proto-Slavic nouns were formed by attaching the suffix -va to base stems, together with the varied semantic-derivational specifics of such derivatives, supports the assumption that this formant's semantic-derivational behavior was variable and that it lacked a primary word-formation function in the Proto-Slavic era. The formant -va did, however, show a distinct word-formation potential in the creation of Proto-Slavic abstract deverbative nouns, and in deverbal formations it sometimes marked collectivity. Denominal derivatives of the Proto-Slavic era do not uniquely identify the primary function of this suffix, since it also formed names of plants, household utensils, somatisms, and collective nouns. The lexical-derivational typology of deadjectival derivatives with the suffix -va comprises two groups: names of persons and names of animals.


2016 · Vol 2 (1) · pp. 195–214
Author(s): Agata Rozumko

The category of epistemic adverbs has recently received increased attention in both Anglophone and Polish linguistics, but English–Polish contrastive research in this area has so far been rather fragmentary. English and Polish grammars differ considerably in the ways they classify epistemic adverbs. The differences largely result from different understandings of adverbs as a category, which English grammar tends to present as broad and heterogeneous, while Polish grammar treats it as rather narrow and uniform. Polish equivalents of English epistemic adverbs are classified as particles, a distinct word class with its own characteristic properties. This paper presents an overview of approaches to epistemic adverbs taken in Anglophone and Polish linguistics with the aim of identifying their convergent points and suggesting a framework for a contrastive analysis. In the case of Anglophone research, the focus is largely on discourse studies because epistemic adverbs are usually seen as a discourse category. In Polish linguistics, however, they are analysed within different theoretical frameworks, which is why the discussion is not limited to one specific methodological school. Reference is also made to more general issues, such as the treatment of adverbs as a category.


2014 · Vol 42 (1) · pp. 180–195
Author(s): Julie M. Estis, Brenda L. Beverly

Fast mapping weaknesses in children with specific language impairment (SLI) may be explained by differences in disambiguation, mapping an unknown word to an unnamed object. The impact of language ability and linguistic stimulus on disambiguation was investigated. Sixteen children with SLI (8 preschool, 8 school-age) and sixteen typically developing age-matched children selected referents given familiar and unfamiliar object pairs in three ambiguous conditions: phonologically distinct word (PD), phonologically similar word (PS), no word (NW). Preschoolers with SLI did not disambiguate, unlike typically developing age-matched participants, who consistently selected unfamiliar objects given PD. School-age children with SLI disambiguated given PD. Delays in disambiguation for young children with SLI suggest limitations in processes that facilitate word learning for typically developing children. School-age children with SLI consistently selected familiar objects for PS, unlike typically developing children, suggesting differences in phonological activation for word learning.


2014 · Vol 9
Author(s): Marco Baroni, Raffaella Bernardi, Roberto Zamparelli

The lexicon of any natural language encodes a huge number of distinct word meanings. Just to understand this article, you will need to know what thousands of words mean. The space of possible sentential meanings is infinite: in this article alone, you will encounter many sentences that express ideas you have never heard before, we hope. Statistical semantics has addressed the vastness of word meaning by proposing methods to harvest meaning automatically from large collections of text (corpora). Formal semantics in the Fregean tradition has developed methods to account for the infinity of sentential meaning based on the crucial insight of compositionality, the idea that the meaning of a sentence is built incrementally by combining the meanings of its constituents. This article sketches a new approach to semantics that brings together ideas from statistical and formal semantics to account, in parallel, for the richness of lexical meaning and the combinatorial power of sentential semantics. We adopt, in particular, the idea from statistical semantics that word meaning can be approximated by the patterns of co-occurrence of words in corpora, and the idea from formal semantics that compositionality can be captured in terms of a syntax-driven calculus of function application.
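The distributional half of this program can be illustrated with a toy sketch. The corpus, window size, and similarity measure below are illustrative assumptions for this listing, not choices made in the article itself:

```python
from collections import Counter
import math

# A tiny stand-in corpus; statistical semantics would use millions of words.
corpus = "the cat sat on the mat . the dog sat on the rug . a cat chased a dog".split()

# Approximate each word's meaning by its co-occurrence counts
# within a +/-2-word window.
window = 2
vecs = {w: Counter() for w in set(corpus)}
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            vecs[w][corpus[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in set(u) | set(v))
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm

# 'cat' and 'dog' occur in similar contexts, so their vectors are close.
print(cosine(vecs["cat"], vecs["dog"]))
```

The compositional half of the program would then treat a functor word (for instance an adjective) as a linear map applied to vectors like these; that syntax-driven step is omitted from this sketch.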


Author(s): Andrew Spencer

Based on the notion of a lexicon with default inheritance, I address the problem of how to provide a template for lexical representations that allows us to capture the relatedness between inflected word forms and canonically derived lexemes within a broadly realizational-inferential model of morphology. To achieve this we need to be able to represent a whole host of intermediate types of lexical relatedness that are much less frequently discussed in the literature. These include transpositions such as deverbal participles, in which a word's morphosyntactic class changes (e.g. verb ⇒ adjective) but no semantic predicate is added to the semantic representation and the derived word remains, in an important sense, a "form" of the base lexeme (e.g. the 'present participle form of the verb'). I propose a model in which morphological properties are inherited by default from syntactic properties and syntactic properties are inherited from semantic properties, such as ontological category (the Default Cascade). Relatedness is defined in terms of a Generalized Paradigm Function (perhaps in reality a relation), a generalization of the Paradigm Function of Paradigm Function Morphology (Stump 2001). The GPF has four components which deliver respectively specifications of a morphological form, syntactic properties, semantic representation and a lexemic index (LI) unique to each individuated lexeme in the lexicon. In principle, therefore, the same function delivers derived lexemes as well as inflected forms. In order to ensure that a newly derived lexeme of a distinct word class can be inflected I assume two additional principles. First, I assume an Inflectional Specifiability Principle, which states that the form component of the GPF (which defines inflected word forms of a lexeme) is dependent on the specification of the lexeme's morpholexical signature, a declaration of the properties that the lexeme is obliged to inflect for (defined by default on the basis of morpholexical class).
I then propose a Category Erasure Principle, which states that 'lower' attributes are erased when the GPF introduces a non-trivial change to a 'higher' attribute (e.g. a change to the semantic representation entails erasure of syntactic and morphological information). The required information is then provided by the Default Cascade, unless overridden by specific declarations in the GPF. I show how this model can account for a variety of intermediate types of relatedness which cannot easily be treated as either inflection or derivation, and conclude with a detailed illustration of how the system applies to a particularly interesting type of transposition in the Samoyedic language Sel'kup, in which a noun is transposed to a similitudinal adjective whose form is in paradigmatic opposition to case-marked noun forms, and which is therefore a kind of inflection.
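As a rough, interpretive sketch (a toy of this listing's own construction, not Spencer's formalism), the four components of the GPF can be mimicked by a function that maps a lexeme plus a property set to a morphological form, a syntactic category, a semantic representation, and a lexemic index; a transposition changes the category while leaving the semantics and the LI intact:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Lexeme:
    stem: str
    category: str  # syntactic class (here stored directly on the entry)
    meaning: str   # stand-in for a semantic representation
    li: int        # lexemic index, unique per lexeme

def gpf(lexeme, props):
    """Toy Generalized Paradigm Function: returns (form, syntax, semantics, LI)."""
    if props.get("participle"):
        # A transposition: the word class changes (V => A), but no semantic
        # predicate is added and the lexemic index is preserved, so the
        # participle remains a form of the base lexeme.
        return (lexeme.stem + "ing", "A", lexeme.meaning, lexeme.li)
    if props.get("tense") == "past":
        # Ordinary inflection: only the form component changes.
        return (lexeme.stem + "ed", lexeme.category, lexeme.meaning, lexeme.li)
    return (lexeme.stem, lexeme.category, lexeme.meaning, lexeme.li)

walk = Lexeme("walk", "V", "WALK", li=1)
print(gpf(walk, {"participle": True}))  # ('walking', 'A', 'WALK', 1)
```

In the full model the category and meaning slots would be filled by the Default Cascade rather than copied from a flat record as they are here.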


Author(s): Marcelo A. Montemurro

Human language evolved by natural mechanisms into an efficient system capable of coding and transmitting highly structured information [12, 13, 14]. As a remarkably complex system it allows many levels of description across its organizational hierarchy [1, 11, 18]. In this context, statistical analysis stands as a valuable tool for revealing robust structural patterns that may have resulted from its long evolutionary history. In this chapter we shall address the statistical regularities of human language at its most basic level of description, namely the rank-frequency distribution of words. Around 1932 the philologist George Zipf [6, 19, 20] noted the manifestation of several robust power-law distributions arising in different realms of human activity. Among them, the most striking was undoubtedly the one referring to the distribution of word frequencies in human languages. The best way to introduce Zipf's law for words is by means of a concrete example. Let us take a literary work, say, James Joyce's Ulysses, and perform some basic statistics on it, which simply consist in counting all the words present in the text and noting how many occurrences each distinct word form has. For this particular text we arrive at the following numbers: the total number of words N = 268,112, and the number of different word forms V = 28,838. We can now order the list of different words according to decreasing number of occurrences, and assign to each word a rank index s equal to its position in the list, starting from the most frequent word. Some general features of the rank-ordered list of words can be mentioned at this point. First, the top-rank words are functional components of language devoid of direct meaning, such as the article the and prepositions, for instance. A few ranks down the list, words more related to the contents of the text start to appear.
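The counting procedure just described can be sketched in a few lines of Python, using a toy sentence in place of Ulysses:

```python
from collections import Counter
import re

def rank_frequency(text):
    """Count each distinct word form and return (word, count) pairs
    sorted by decreasing frequency; rank s is the position in the list."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common()

text = "the cat and the dog saw the bird and the bird saw the cat"
ranked = rank_frequency(text)
N = sum(count for _, count in ranked)  # total number of words
V = len(ranked)                        # number of distinct word forms
print(N, V, ranked[0])  # 14 6 ('the', 5): rank 1 is a function word
```

For a real text such as Ulysses, plotting log frequency against log rank s would reveal the approximately straight line characteristic of Zipf's power law.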

