lexical database
Recently Published Documents

TOTAL DOCUMENTS: 177 (FIVE YEARS: 36)
H-INDEX: 15 (FIVE YEARS: 2)

Author(s):  
Oksana Smirnova ◽  
Sigita Rackevičienė ◽  
Liudmila Mockienė

The article shows how the theory of Frame Semantics and the resources of the lexical database FrameNet can be used for teaching and learning the terminology of specialised domains. It discusses the principles of Frame Semantics and presents a use case in which a frame-based methodology is applied to develop a classification of the terminology of a selected financial subdomain for learning/teaching purposes. The use case focuses on terms denoting concepts that compose the ‘CAUSE-RISK’ frame, which was developed on the basis of several related frames in the FrameNet database. The stages of the use case and its outcomes are described in detail, and the benefits of applying the methodology to the learning/teaching of specialised vocabulary are outlined. The insights provided are intended to give teachers of foreign languages for specific purposes ideas for developing effective terminology teaching/learning techniques.
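A frame-based grouping of terms like the one described can be sketched as a simple data structure that files each term under a frame element. The frame elements and financial terms below are hypothetical illustrations, not the article's actual ‘CAUSE-RISK’ inventory:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A simplified semantic frame: a name plus frame elements (FEs)
    that group the terms filling each element."""
    name: str
    elements: dict[str, list[str]] = field(default_factory=dict)

    def add_term(self, element: str, term: str) -> None:
        self.elements.setdefault(element, []).append(term)

# Hypothetical CAUSE-RISK frame for financial terminology; the FE
# names and terms are invented for illustration.
cause_risk = Frame("CAUSE-RISK")
cause_risk.add_term("Protagonist", "investor")
cause_risk.add_term("Asset", "collateral")
cause_risk.add_term("Harmful_event", "default")
cause_risk.add_term("Harmful_event", "insolvency")

# Group terms by frame element, e.g. for a classroom handout.
for fe, terms in cause_risk.elements.items():
    print(f"{fe}: {', '.join(terms)}")
```

Grouping vocabulary by frame element in this way is what lets learners see which terms fill the same conceptual slot.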


2021 ◽  
Author(s):  
Yosra Hamdoun Bghiyel

This article discusses the lemmatisation of Old English adverbs inflected for the superlative from a corpus-based perspective. The study has been conducted on the basis of a semi-automatic methodology: the inflectional forms have been automatically extracted from The York-Toronto-Helsinki Parsed Corpus of Old English Prose and The York-Toronto-Helsinki Parsed Corpus of Old English Poetry, whereas the task of assigning a lemma has been completed manually. The list of adverbial lemmas, amounting to 1,755, has been provided by the lexical database of Old English Nerthus. Additionally, the resulting lemmatised list has been checked against the lemmatised forms compiled by the Dictionary of Old English and against Seelig’s (1930) work on Old English comparative and superlative adjectives and adverbs. Through this comparison, it has been possible to verify doubtful forms and incorporate new ones unattested in the YCOE. This pilot study implements, for the first time, a methodology for the lemmatisation of a non-verbal class, and it can be further applied to the categories that remain unlemmatised, namely nouns and adjectives.
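The semi-automatic workflow described above, automatic extraction followed by lemma assignment against a reference list, with unmatched forms flagged for manual checking, can be sketched as follows. The Old English forms and lemmas are illustrative only, not entries from Nerthus:

```python
# Hypothetical mini reference list mapping inflected superlative
# adverb forms to lemmas (illustrative, not Nerthus data).
LEMMA_LIST = {"oftost": "oft", "lengest": "lange", "swiþost": "swiþe"}

def lemmatise(forms):
    """Return (lemmatised, doubtful): forms found in the reference
    list get a lemma automatically; the rest are flagged for manual
    checking, e.g. against the DOE and Seelig (1930)."""
    lemmatised, doubtful = {}, []
    for form in forms:
        if form in LEMMA_LIST:
            lemmatised[form] = LEMMA_LIST[form]
        else:
            doubtful.append(form)
    return lemmatised, doubtful

done, to_check = lemmatise(["oftost", "swiþost", "raþost"])
print(done)      # forms with an assigned lemma
print(to_check)  # forms needing manual verification
```

The manual-verification queue is what makes the method "semi-automatic": only forms absent from the reference list require a human decision.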


2021 ◽  
Vol 12 ◽  
pp. 7-30
Author(s):  
Agnė Bielinskienė ◽  
Jolanta Kovalevskaitė ◽  
Erika Rimkutė

This paper describes the grammatical patterning of two parts of speech, nouns and adjectives, included in the corpus-driven “Lexical Database of Lithuanian as a Foreign Language”. The lexical database is a lexicographic application of the Lithuanian Pedagogic Corpus (approx. 620,000 tokens), which was used to develop headword lists and to collect word-usage information in the form of corpus patterns. In this project, we adopted a partially automated inductive procedure of Corpus Pattern Analysis for 207 verbs, 386 nouns, 87 adjectives, and 41 adverbs. The detected corpus patterns reflect different meanings of the headword; each pattern presents information on the grammatical, semantic, and lexical levels, and manually selected examples illustrate all pattern components. In this paper, 673 patterns with nouns and 99 patterns with adjectives are analysed, discussing their syntactic behaviour in detail and commenting on the lexis-grammar interface. The majority of patterns with nouns and adjectives are minimal patterns, which include only the closest syntactic partners; this result is influenced by the different procedures used to describe patterns with nouns, adjectives, and adverbs on the one hand and patterns with verbs on the other. Due to the rich grammatical information, there are several similar patterns with one main (usually the most frequent) type and its variants. Pattern variants show that the grammatical characteristics of a specific word usage are rather individual.
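A "minimal pattern" of the kind described, one covering only a headword's closest syntactic partners, can be sketched as a sequence of slot constraints checked against POS-tagged tokens. The pattern and the tagged Lithuanian sentence below are invented for illustration; the real database encodes grammatical, semantic, and lexical information per slot:

```python
def matches(pattern, tagged_tokens):
    """True if some run of consecutive tokens satisfies the
    per-slot POS constraints of the pattern."""
    n = len(pattern)
    for i in range(len(tagged_tokens) - n + 1):
        window = tagged_tokens[i:i + n]
        if all(tag == slot for (_, tag), slot in zip(window, pattern)):
            return True
    return False

# Minimal pattern: adjective immediately followed by a noun.
pattern = ["ADJ", "NOUN"]
sentence = [("didelis", "ADJ"), ("miestas", "NOUN"), ("auga", "VERB")]
print(matches(pattern, sentence))  # True
```

Pattern variants (e.g. allowing different case forms in a slot) would add alternatives per slot rather than new patterns, which is one way to keep one main type with variants.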


Author(s):  
Archibald Michiels

DEFI is a prototype computer tool aimed at ranking (from most to least relevant) the French translations of an English lexical item in context. This paper deals with the strategies used by DEFI to recognize multi-word units (mwus) in running text. Any lexical unit included in the lexical database used in the project (a merge of the Oxford/Hachette and Robert/Collins English-to-French dictionaries) and longer than a single word is submitted to a surface parser, and the same process is applied to the user’s text. A program written in Prolog assesses the quality of the match between the parsed user’s text and candidate mwus retrieved from the project’s lexical database. The matcher is able to account for some of the distortions undergone by the mwu, e.g. movement of a constituent as a result of relativization or passivization.
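The matcher's tolerance for distortions can be sketched as a scoring function over lemmatised tokens that rewards coverage of the mwu's constituents but discounts matches where they appear out of canonical order (as under passivization). The scoring scheme and example are invented for illustration, not DEFI's actual Prolog matcher:

```python
def match_score(mwu_lemmas, text_lemmas):
    """Fraction of the mwu's lemmas present in the text, discounted
    when they do not occur in the mwu's canonical order."""
    present = [l for l in mwu_lemmas if l in text_lemmas]
    if not present:
        return 0.0
    coverage = len(present) / len(mwu_lemmas)
    positions = [text_lemmas.index(l) for l in present]
    in_order = positions == sorted(positions)
    return coverage if in_order else coverage * 0.8  # distortion penalty

# "spill the beans", passivised: "the beans were spilled by ..."
mwu = ["spill", "the", "bean"]
text = ["the", "bean", "be", "spill", "by", "someone"]
print(match_score(mwu, text))  # full coverage, but out of order
```

A graded score rather than a yes/no match is what allows candidate mwus to be ranked from most to least plausible.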


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Guillaume Segerer ◽  
Martine Vanhove

Of all the semantic domains, colour terms have attracted the most attention, notably from a typological point of view. However, there is much more to be discovered. A search of RefLex, a cross-linguistic lexical database of African languages, reveals several previously undetected areal colexification patterns and shared lexico-constructional patterns in a genetically balanced sample of 401 languages. In this paper, we illustrate several areal characteristics of colour terms: (i) the spread of an areal feature due to a common extra-linguistic setting (the locust bean, Parkia biglobosa, as the lexical source of ‘yellow’); (ii) two convergence phenomena, one based on a shared lexico-constructional pattern including a term for water, and one based on shared colexifications (‘red’ and ‘ripe’ vs. ‘green’ and ‘unripe’); and (iii) an areal pattern of lexical diffusion of colour ideophones, a category which has thus far been considered difficult to borrow.
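Colexification detection of the kind the survey relies on can be sketched over a toy lexicon: a sense pair counts as colexified in a language when both senses map onto the same word form. The language names and forms below are invented, not RefLex data:

```python
def colexifies(lexicon, sense_a, sense_b):
    """Languages whose word for sense_a is the same form as for sense_b."""
    return [lang for lang, senses in lexicon.items()
            if senses.get(sense_a)
            and senses.get(sense_a) == senses.get(sense_b)]

# Toy lexicon: language -> {sense: word form} (invented data).
toy_lexicon = {
    "lang1": {"red": "bana", "ripe": "bana", "green": "kulu"},
    "lang2": {"red": "sira", "ripe": "mola", "green": "kete"},
    "lang3": {"green": "futu", "unripe": "futu", "red": "doko"},
}
print(colexifies(toy_lexicon, "red", "ripe"))      # ['lang1']
print(colexifies(toy_lexicon, "green", "unripe"))  # ['lang3']
```

Running such a check across a genetically balanced sample, and then mapping the hits geographically, is what distinguishes an areal pattern from an inherited one.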


Entrelinhas ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 130-144
Author(s):  
Jessica Braun de Moraes

Among the various challenges of distance education is the need to reduce the student dropout rate. In this sense, the present research aimed to contribute to the design of a lexical database focused on emotions and opinions that can be incorporated into dropout-prediction software. For the database design, we used the Scup tool to collect 150 tweets containing distance education students’ opinions and analyzed them in the light of Martin and White’s Appraisal Framework, along with five resources related to the sentiment analysis field, taken from Liu’s work. In addition, we used the Aulete dictionary to describe the lexical units found in our corpus so as to fit them better into the analysis categories. Results showed 220 opinion tokens, which were identified and labeled according to their polarity. Moreover, these tokens were included in the domains of attitude (judgment and appreciation) and graduation (sharp and strong) from the linguistic framework used. The results also indicated the need for an additional resource to help identify figurative language, slang, and extralinguistic elements such as GIFs and emojis.
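The polarity-labelling step can be sketched as a lookup of opinion tokens against a sentiment lexicon. The Portuguese entries and the sample tweet below are illustrative, not the study's actual database contents:

```python
# Toy sentiment lexicon (illustrative entries, not the real database).
POLARITY = {"ótimo": "positive", "flexível": "positive",
            "difícil": "negative", "abandonar": "negative"}

def label_tokens(tokens):
    """Return (token, polarity) pairs for tokens found in the lexicon."""
    return [(t, POLARITY[t]) for t in tokens if t in POLARITY]

tweet = ["curso", "ótimo", "mas", "difícil"]
print(label_tokens(tweet))
```

A lexicon lookup like this is exactly where figurative language, slang, and emojis slip through, which is the gap the additional resource mentioned above would fill.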


Author(s):  
Raquel Mateo Mendaza

The aim of this article is to identify the Old English exponent for the semantic prime LIVE, following the principles of the Natural Semantic Metalanguage theory (Wierzbicka 1996, Goddard & Wierzbicka 2002, Goddard 2011). The methodology applied in the study is based on previous research into Old English semantic primes. First, a search is made for the Old English words conveying the meaning of the semantic prime LIVE; it selects the verbs (ge)buan, drohtian, (ge)eardian, (ge)libban, and wunian as candidate exponents. These verbs are then analysed in terms of morphological, textual, semantic, and syntactic criteria. For this purpose, relevant information on these words has been gathered from different lexicographical and textual sources in Old English, such as the Dictionary of Old English, the Dictionary of Old English Corpus, and the lexical database of Old English Nerthus. After the analysis of these verbs, the conclusion is drawn that the verb (ge)libban is the prime exponent, as it satisfies the requirements proposed by each criterion.
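The selection step, keeping only the candidate that satisfies all four criteria, can be sketched as a filter over boolean scores. The per-verb scores below are invented for illustration and do not reproduce the study's analysis:

```python
criteria = ["morphological", "textual", "semantic", "syntactic"]

# Invented boolean scores per candidate verb (illustrative only).
candidates = {
    "(ge)buan":   {"morphological": True, "textual": False,
                   "semantic": True,      "syntactic": True},
    "(ge)libban": {"morphological": True, "textual": True,
                   "semantic": True,      "syntactic": True},
    "wunian":     {"morphological": True, "textual": True,
                   "semantic": False,     "syntactic": True},
}

# The exponent must satisfy every criterion.
exponents = [v for v, scores in candidates.items()
             if all(scores[c] for c in criteria)]
print(exponents)
```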


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 872
Author(s):  
Jih-Jeng Huang

The analytic hierarchy process (AHP) is a well-known approach in decision-making because of its simplicity and rationality. However, conventional AHP cannot account for correlation effects between criteria. In this paper, we use the lexical database WordNet to calculate the similarity between the criteria set by a decision-maker. We then use the similarity matrix to perform factor analysis and obtain independent factors, each composed of its correlated criteria. Finally, the weights of the factors are derived to evaluate the alternatives. We illustrate the proposed method with a case study of online shopping and compare the result with that of conventional AHP.
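The final weighting step of any AHP variant derives a priority vector from a pairwise comparison matrix; a standard approximation to the principal eigenvector is the normalised row geometric mean. The 3x3 matrix below is an invented example, not the paper's case-study data:

```python
import math

def ahp_weights(matrix):
    """Row-geometric-mean approximation of the AHP priority vector."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Reciprocal pairwise comparison matrix over three factors
# (entry [i][j] = how much more important factor i is than j).
pairwise = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])
```

The paper's contribution sits upstream of this step: WordNet similarity and factor analysis produce the independent factors whose comparison matrix is then weighted this way.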


Author(s):  
Richard Beckwith ◽  
Christiane Fellbaum ◽  
Derek Gross ◽  
George A. Miller

Author(s):  
Zed Sevcikova Sehyr ◽  
Naomi Caselli ◽  
Ariel M Cohen-Goldberg ◽  
Karen Emmorey

ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We report on the expanded database (ASL-LEX 2.0), which contains 2,723 ASL signs. For each sign, ASL-LEX now includes a more detailed phonological description, phonological density and complexity measures, frequency ratings (from deaf signers), iconicity ratings (from hearing non-signers and deaf signers), transparency (“guessability”) ratings (from non-signers), sign and video-clip durations, lexical class, and more. We document the steps used to create ASL-LEX 2.0, describe the distributional characteristics of sign properties across the lexicon, and examine the relationships among lexical and phonological properties of signs. Correlation analyses revealed that frequent signs were less iconic and phonologically simpler than infrequent signs, and that iconic signs tended to be phonologically simpler than less iconic signs. The complete ASL-LEX dataset and supplementary materials are available at https://osf.io/zpha4/ and an interactive visualization of the entire lexicon can be accessed on the ASL-LEX page: http://asl-lex.org/.
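The frequency-iconicity relationship reported above is a standard Pearson correlation over per-sign ratings. The toy ratings below are invented values chosen to mimic the reported direction of the effect, not ASL-LEX 2.0 data:

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two rating vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-sign ratings: higher frequency, lower iconicity.
frequency = [6.1, 5.4, 4.8, 3.2, 2.5, 1.9]
iconicity = [1.8, 2.2, 2.9, 3.5, 4.1, 4.6]
r = pearson_r(frequency, iconicity)
print(round(r, 3))  # strongly negative: frequent signs rated less iconic
```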

