Are Atypical Things More Popular?

2018 ◽  
Vol 29 (7) ◽  
pp. 1178-1184 ◽  
Author(s):  
Jonah Berger ◽  
Grant Packard

Why do some cultural items become popular? Although some researchers have argued that success is random, we suggest that how similar items are to each other plays an important role. Using natural language processing of thousands of songs, we examined the relationship between lyrical differentiation (i.e., atypicality) and song popularity. Results indicated that the more different a song’s lyrics are from its genre, the more popular it becomes. This relationship is weaker in genres where lyrics matter less (e.g., dance) or where differentiation matters less (e.g., pop) and occurs for lyrical topics but not style. The results shed light on cultural dynamics, why things become popular, and the psychological foundations of culture more broadly.
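The abstract describes measuring how different a song's lyrics are from its genre. A minimal sketch of one way to operationalize this is below: score a song's atypicality as one minus the cosine similarity between its word distribution and the genre's average distribution. This is an illustrative simplification; the paper itself uses natural language processing of lyrical topics, not necessarily this exact distance.

```python
from collections import Counter
from math import sqrt

def word_dist(text):
    """Normalized word-frequency distribution of a lyric."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse distributions."""
    num = sum(v * q.get(w, 0.0) for w, v in p.items())
    den = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return num / den if den else 0.0

def atypicality(song, genre_songs):
    """1 - cosine similarity between a song's word distribution
    and the average word distribution of its genre."""
    genre_avg = Counter()
    for s in genre_songs:
        genre_avg.update(word_dist(s))
    n = len(genre_songs)
    genre_avg = {w: v / n for w, v in genre_avg.items()}
    return 1.0 - cosine(word_dist(song), genre_avg)
```

A song sharing its genre's vocabulary scores low; one with entirely disjoint vocabulary scores 1.0.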

Author(s):  
Tianyuan Zhou ◽  
João Sedoc ◽  
Jordan Rodu

Many tasks in natural language processing require the alignment of word embeddings, and embedding alignment relies on the geometric properties of the manifold of word vectors. This paper focuses on supervised linear alignment and studies how the shape of the target embedding affects alignment quality. We assess the performance of aligned word vectors on semantic similarity tasks and find that the isotropy of the target embedding is critical to the alignment. Furthermore, aligning with isotropic noise can deliver satisfactory results. We provide a theoretical framework and guarantees that aid the understanding of the empirical results.
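Supervised linear alignment of the kind described above is typically solved as an orthogonal Procrustes problem: given paired source and target vectors, find the orthogonal map minimizing the Frobenius distance between them. In high dimensions the solution uses an SVD; as a self-contained sketch, the two-dimensional rotation-only case has a closed form, shown here with stdlib Python only (the 2-D restriction is my simplification, not the paper's setting).

```python
from math import atan2, cos, sin

def rotate(p, theta):
    """Rotate a 2-D point by theta radians."""
    c, s = cos(theta), sin(theta)
    return (p[0] * c - p[1] * s, p[0] * s + p[1] * c)

def align_rotation(X, Y):
    """Best rotation angle minimizing sum ||rotate(x, theta) - y||^2
    over paired 2-D point sets (orthogonal Procrustes restricted to
    rotations). Closed form: theta = atan2(sum cross, sum dot)."""
    A = sum(x[0] * y[0] + x[1] * y[1] for x, y in zip(X, Y))  # sum of dot products
    B = sum(x[0] * y[1] - x[1] * y[0] for x, y in zip(X, Y))  # sum of cross products
    return atan2(B, A)
```

If the target really is a rotated copy of the source, the angle is recovered exactly; with noisy targets the same formula gives the least-squares rotation.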


Author(s):  
Rachid Ammari ◽  
Ahbib Zenkoua

Our work presents an Amazigh pronominal morphological analyzer (APMorph) based on Xerox's finite-state transducer toolkit (XFST). Our system revolves around a large lexicon named "APlex", which includes pronouns affixed to nouns and verbs together with the characteristics of each lemma. A set of rules is added to define the inflectional behavior and morphosyntactic links of each entry, as well as the relationships between the different lexical units. The implementation and evaluation of our approach are detailed in this article. XFST remains a relevant choice in that the platform supports both analysis and generation. The robustness of our system allows it to be integrated into other natural language processing (NLP) applications, especially spell-checking, machine translation, and machine learning. This paper continues our previous work on the automatic processing of Amazigh nouns and verbs.
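The key property the abstract highlights is that a lexical transducer runs in both directions: analysis (surface form to lemma plus tags) and generation (lemma plus tags to surface form). A toy sketch of that idea follows, using a tiny hypothetical English-like lexicon and suffix rules purely for illustration; the real system uses XFST and the APlex Amazigh lexicon, neither of which is reproduced here.

```python
# Toy bidirectional analyzer in the spirit of a lexicon-plus-rules
# transducer. LEXICON and SUFFIX_RULES are hypothetical toy data.
LEXICON = {"cat": "Noun", "walk": "Verb"}
SUFFIX_RULES = [("s", "+Pl"), ("ed", "+Past"), ("", "+Sg")]

def analyze(surface):
    """Analysis direction: surface form -> lemma+POS+tag readings."""
    readings = []
    for suffix, tag in SUFFIX_RULES:
        stem = surface[: len(surface) - len(suffix)] if suffix else surface
        if surface.endswith(suffix) and stem in LEXICON:
            readings.append(f"{stem}+{LEXICON[stem]}{tag}")
    return readings

def generate(analysis):
    """Generation direction: lemma+POS+tag -> surface form."""
    stem, _pos, tag = analysis.split("+")
    for suffix, t in SUFFIX_RULES:
        if t == "+" + tag:
            return stem + suffix
    return None
```

In XFST both directions come for free from a single transducer; here they are two functions over the same rule table, which is the conceptual point.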


Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 181 ◽  
Author(s):  
Pablo Gamallo ◽  
José Ramom Pichel ◽  
Iñaki Alegria

Phylogenetics is a sub-field of historical linguistics whose aim is to classify a group of languages by considering their distances within a rooted tree that stands for their historical evolution. A few European languages do not belong to the Indo-European family or are otherwise isolated in the European rooted tree. Although it is not possible to establish phylogenetic links using basic strategies, it is possible to calculate the distances between these isolated languages and the rest using simple corpus-based techniques and natural language processing methods. The objective of this article is to select several isolated languages and measure the distances between them and the other European languages, so as to shed light on the linguistic distances and proximities of these controversial languages without considering phylogenetic issues. The experiments were carried out with 40 European languages, including six that are isolated in their corresponding families: Albanian, Armenian, Basque, Georgian, Greek, and Hungarian.
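One common family of "simple corpus-based techniques" for language distance compares character n-gram profiles of text samples. A minimal sketch is below: build relative-frequency trigram profiles and take cosine distance between them. This is an illustrative stand-in; the article's exact distance measure may differ (perplexity-based measures over comparable corpora are also common).

```python
from collections import Counter
from math import sqrt

def ngram_profile(text, n=3):
    """Relative frequencies of character n-grams, padded with spaces:
    a simple corpus-based signature of a language sample."""
    text = " " + text.lower() + " "
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def profile_distance(p, q):
    """Cosine distance between two n-gram profiles, in [0, 1]."""
    num = sum(v * q.get(g, 0.0) for g, v in p.items())
    den = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return 1.0 - (num / den if den else 0.0)
```

Samples from the same or closely related languages share many n-grams and score low; unrelated samples score near 1.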


2021 ◽  
Author(s):  
Flurina M. Wartmann ◽  
Olga Koblet ◽  
Ross S. Purves

Context: Identifying tranquil areas is important for landscape planning and policy-making. Research has demonstrated discrepancies between modelled potential tranquil areas and where people experience tranquillity based on field surveys. Because surveys are resource-intensive, user-generated text data offers potential for extracting where people experience tranquillity.

Objectives: We explore and model the relationship between landscape ecological measures and experienced tranquillity extracted from user-generated text descriptions.

Methods: Georeferenced, user-generated landscape descriptions from Geograph.UK were filtered using keywords related to tranquillity. We stratify the resulting tranquil locations according to dominant land cover and quantify the influence of landscape characteristics, including diversity and naturalness, on explaining the presence of tranquillity. Finally, we apply natural language processing to identify terms linked to tranquillity keywords and compare the similarity of these terms across land cover classes.

Results: Evaluation of potential keywords yielded six keywords associated with experienced tranquillity, resulting in 15,350 extracted tranquillity descriptions. The two most common land cover classes associated with tranquillity were arable and horticulture, and improved grassland, followed by urban and suburban. In the logistic regression model across all land cover classes, freshwater, elevation and naturalness were positive predictors of tranquillity, while built-up area was a negative predictor. Descriptions of tranquillity were most similar between improved grassland and arable and horticulture, and most dissimilar between arable and horticulture and urban.

Conclusions: This study highlights the potential of applying natural language processing to extract experienced tranquillity from text, and demonstrates links between landscape ecological measures and tranquillity as a perceived landscape quality.
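The first two method steps, keyword filtering of descriptions and stratification by dominant land cover, can be sketched in a few lines. The keyword set and records below are hypothetical placeholders (the study selected its six keywords empirically), so this shows only the shape of the pipeline, not the study's actual keyword list.

```python
from collections import Counter

# Hypothetical keyword set; the study derived six keywords empirically.
TRANQUILLITY_KEYWORDS = {"tranquil", "peaceful", "quiet", "calm", "serene", "still"}

def extract_tranquil(records):
    """Keep (description, land_cover) records whose description mentions
    a tranquillity keyword, tallying hits by land cover class."""
    hits, by_cover = [], Counter()
    for description, land_cover in records:
        words = set(description.lower().split())
        if words & TRANQUILLITY_KEYWORDS:
            hits.append((description, land_cover))
            by_cover[land_cover] += 1
    return hits, by_cover
```

The resulting per-class tallies are what a logistic regression over landscape characteristics would then model.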


Author(s):  
Y. Losieva

The article surveys state-of-the-art vector representations of words in natural language processing. Three main types of vector representation are described: static word embeddings, word representations produced by deep neural networks, and dynamic (contextual) word embeddings based on the surrounding text. This is a highly active and in-demand area in natural language processing, computational linguistics, and artificial intelligence in general. Several models for the vector representation of words (word embeddings) are considered, from the simplest (representations of text that describe the occurrence of words within a document, or models that learn the relationship between pairs of words) to multilayer neural networks and deep bidirectional transformers for language understanding; the models are described chronologically, in order of their appearance. Improvements over previous models are described, along with the advantages and disadvantages of each model and the cases or tasks for which one model is preferable to another.
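The simplest representation the survey mentions, describing the occurrence of words within a document, is the bag-of-words document-term matrix. A minimal stdlib sketch:

```python
from collections import Counter

def doc_term_matrix(docs):
    """Bag-of-words vectors: each document becomes a row of word counts
    over a shared, sorted vocabulary (the simplest 'occurrence'
    representation of text)."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    rows = []
    for d in docs:
        row = [0] * len(vocab)
        for w, c in Counter(d.lower().split()).items():
            row[index[w]] = c
        rows.append(row)
    return vocab, rows
```

Every later model in the survey's chronology (word2vec-style pair prediction, deep networks, contextual transformers) replaces these sparse count rows with dense learned vectors.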


AI Magazine ◽  
2017 ◽  
Vol 1 (1) ◽  
pp. 11 ◽  
Author(s):  
Barbara J. Grosz

Two premises, reflected in the title, underlie the perspective from which I will consider research in natural language processing in this article. First, progress on building computer systems that process natural languages in any meaningful sense (i.e., systems that interact reasonably with people in natural language) requires considering language as part of a larger communicative situation. Second, as the phrase “utterance and objective” suggests, regarding language as communication requires consideration of what is said literally, what is intended, and the relationship between the two.


2022 ◽  
Vol 355 ◽  
pp. 03028
Author(s):  
Saihan Li ◽  
Zhijie Hu ◽  
Rong Cao

Natural language inference refers to the problem of determining the relationship between a premise and a hypothesis; it is an emerging area of natural language processing. This paper uses deep learning methods to complete the natural language inference task. The data comprise a 3GPP dataset and the SNLI dataset. The Gensim library is used to obtain word embeddings; two methods, word2vec and doc2vec, map each sentence to an array. Two deep learning models, DNNClassifier and an attention-based model, are implemented separately to classify the relationship between proposals from the telecommunication-domain dataset. The highest accuracy in the experiments is 88%, and we found that the quality of the dataset determines the upper bound of the achievable accuracy.
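The "map the sentence to an array" step can be illustrated without Gensim: average the word vectors of a sentence, then combine the premise and hypothesis vectors into one feature array for a classifier. The toy embedding table and the concatenate-plus-difference composition below are my assumptions for illustration, not necessarily the paper's exact encoding.

```python
# Toy 2-D word vectors standing in for word2vec output (hypothetical values).
EMB = {"a": [1.0, 0.0], "cat": [0.0, 1.0], "sat": [0.5, 0.5], "dog": [0.2, 0.8]}
DIM = 2

def sentence_vector(sentence):
    """Average the vectors of in-vocabulary words (word2vec-style
    sentence-to-array mapping)."""
    vecs = [EMB[w] for w in sentence.lower().split() if w in EMB]
    if not vecs:
        return [0.0] * DIM
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]

def pair_features(premise, hypothesis):
    """One common NLI input encoding: premise vector, hypothesis vector,
    and their element-wise difference, concatenated."""
    p, h = sentence_vector(premise), sentence_vector(hypothesis)
    return p + h + [pi - hi for pi, hi in zip(p, h)]
```

The resulting fixed-length array is what a DNN classifier would consume to predict entailment, contradiction, or neutrality.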


2020 ◽  
pp. 3-17
Author(s):  
Peter Nabende

Natural language processing for under-resourced languages is now a mainstream research area. However, there are limited studies on natural language processing applications for many indigenous East African languages. As a contribution toward filling this knowledge gap, this paper evaluates the application of well-established machine translation methods for one heavily under-resourced indigenous East African language, Lumasaaba. Specifically, we review the most common machine translation methods in the context of Lumasaaba, including both rule-based and data-driven methods. We then apply a state-of-the-art data-driven machine translation method to learn models for automating translation between Lumasaaba and English using a very limited data set of parallel sentences. Automatic evaluation results show that a transformer-based neural machine translation model architecture leads to consistently better BLEU scores than recurrent neural network-based models. Moreover, the automatically generated translations can be comprehended to a reasonable extent and are usually associated with the source language input.
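BLEU, the evaluation metric used above, is the geometric mean of clipped n-gram precisions times a brevity penalty. A simplified single-reference, unsmoothed sentence-level sketch (production evaluations use corpus-level BLEU with smoothing, e.g. via sacreBLEU or NLTK):

```python
from collections import Counter
from math import exp, log

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty. Single reference,
    no smoothing; returns 0.0 if any precision is zero."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = sum(cand_ngrams.values())
        if total == 0 or overlap == 0:
            return 0.0
        precisions.append(overlap / total)
    # Brevity penalty discourages short candidates.
    bp = 1.0 if len(cand) >= len(ref) else exp(1 - len(ref) / len(cand))
    return bp * exp(sum(log(p) for p in precisions) / max_n)
```

A candidate identical to the reference scores 1.0; a shortened candidate with perfect n-gram precision is still penalized by the brevity term.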

