Assessing experienced tranquillity through natural language processing and landscape ecology measures

2021
Author(s):  
Flurina M. Wartmann ◽  
Olga Koblet ◽  
Ross S. Purves

Abstract

Context: Identifying tranquil areas is important for landscape planning and policy-making. Research has demonstrated discrepancies between modelled potential tranquil areas and where people experience tranquillity based on field surveys. Because surveys are resource-intensive, user-generated text data offer potential for extracting where people experience tranquillity.

Objectives: We explore and model the relationship between landscape ecological measures and experienced tranquillity extracted from user-generated text descriptions.

Methods: Georeferenced, user-generated landscape descriptions from Geograph.UK were filtered using keywords related to tranquillity. We stratified the resulting tranquil locations according to dominant land cover and quantified the influence of landscape characteristics, including diversity and naturalness, on explaining the presence of tranquillity. Finally, we applied natural language processing to identify terms linked to tranquillity keywords and compared the similarity of these terms across land cover classes.

Results: Evaluation of potential keywords yielded six keywords associated with experienced tranquillity, resulting in 15,350 extracted tranquillity descriptions. The two most common land cover classes associated with tranquillity were arable and horticulture, and improved grassland, followed by urban and suburban. In the logistic regression model across all land cover classes, freshwater, elevation and naturalness were positive predictors of tranquillity, while built-up area was a negative predictor. Descriptions of tranquillity were most similar between improved grassland and arable and horticulture, and most dissimilar between arable and horticulture and urban.

Conclusions: This study highlights the potential of applying natural language processing to extract experienced tranquillity from text, and demonstrates links between landscape ecological measures and tranquillity as a perceived landscape quality.
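A minimal sketch of the two analysis steps this abstract describes: keyword-based filtering of georeferenced descriptions, followed by a logistic regression on landscape measures. The keyword list, file name, and column names below are illustrative assumptions, not the paper's exact choices (the paper reports six evaluated keywords but does not list them here).

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical keyword set; the study's six evaluated keywords are not listed in the abstract.
TRANQUILLITY_KEYWORDS = {"tranquil", "tranquillity", "peaceful", "quiet", "calm", "serene"}

def mentions_tranquillity(text: str) -> bool:
    """Flag a description that contains at least one tranquillity keyword."""
    return bool(set(text.lower().split()) & TRANQUILLITY_KEYWORDS)

# Assumed layout: one row per Geograph description, with landscape measures per location.
df = pd.read_csv("descriptions.csv")
df["tranquil"] = df["description"].apply(mentions_tranquillity)

# Landscape ecological predictors (assumed column names mirroring the abstract).
X = df[["freshwater", "elevation", "naturalness", "built_up"]]
model = LogisticRegression().fit(X, df["tranquil"])

# Coefficient signs indicate positive vs. negative predictors of tranquillity.
print(dict(zip(X.columns, model.coef_[0])))
```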

2020
Author(s):  
David DeFranza ◽  
Himanshu Mishra ◽  
Arul Mishra

Language provides an ever-present context for our cognitions and has the ability to shape them. Languages across the world can be gendered (language in which the form of noun, verb, or pronoun is presented as female or male) versus genderless. In an ongoing debate, one stream of research suggests that gendered languages are more likely to display gender prejudice than genderless languages. However, another stream of research suggests that language does not have the ability to shape gender prejudice. In this research, we contribute to the debate by using a Natural Language Processing (NLP) method which captures the meaning of a word from the context in which it occurs. Using text data from Wikipedia and the Common Crawl project (which contains text from billions of publicly facing websites) across 45 world languages, covering the majority of the world’s population, we test for gender prejudice in gendered and genderless languages. We find that gender prejudice occurs more in gendered rather than genderless languages. Moreover, we examine whether genderedness of language influences the stereotypic dimensions of warmth and competence utilizing the same NLP method.
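The abstract does not name its exact method, but "capturing the meaning of a word from the context in which it occurs" describes distributional word embeddings, and a standard way to quantify prejudice in embeddings is a WEAT-style association test. The sketch below is that generic technique under stated assumptions (word lists and embedding source are illustrative), not the authors' exact procedure.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, attrs_a, attrs_b, emb):
    """Mean similarity of `word` to attribute set A minus to attribute set B."""
    a = np.mean([cosine(emb[word], emb[w]) for w in attrs_a])
    b = np.mean([cosine(emb[word], emb[w]) for w in attrs_b])
    return a - b

def weat_effect(targets_x, targets_y, attrs_a, attrs_b, emb):
    """Effect size: positive when X leans toward A and Y toward B
    (e.g. career words toward male terms, family words toward female terms)."""
    x = [association(w, attrs_a, attrs_b, emb) for w in targets_x]
    y = [association(w, attrs_a, attrs_b, emb) for w in targets_y]
    pooled_sd = np.std(x + y, ddof=1)
    return (np.mean(x) - np.mean(y)) / pooled_sd

# `emb` would map words to vectors trained on Wikipedia/Common Crawl text for a
# given language (e.g. loaded via gensim KeyedVectors); word lists would follow
# established association-test templates, translated per language.
```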


Vector representations of language have been shown to be useful in a number of natural language processing tasks. In this paper, we investigate the effectiveness of word vector representations for the problem of sentiment analysis. In particular, we target three sub-tasks: sentiment word extraction, detection of the polarity of sentiment words, and text sentiment prediction. We investigate the effectiveness of vector representations over different text data and evaluate the quality of domain-dependent vectors. Vector representations are used to compute various vector-based features, and we conduct systematic experiments to demonstrate their effectiveness. Using simple vector-based features can achieve better results for text sentiment analysis of app data.
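One common vector-based feature of the kind this abstract evaluates is to represent a document as the average of its word vectors and feed that to a linear classifier. A minimal sketch, assuming a pre-trained word-vector model (e.g. gensim KeyedVectors trained on domain text); the feature choice here is illustrative, not the paper's exact feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def doc_vector(text, wv, dim):
    """Mean of the vectors of in-vocabulary tokens (zero vector if none found)."""
    vecs = [wv[t] for t in text.lower().split() if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def featurize(texts, wv, dim):
    """Stack one averaged document vector per input text."""
    return np.vstack([doc_vector(t, wv, dim) for t in texts])

# wv: assumed word-vector model; train_texts/train_labels: assumed labelled corpus.
# clf = LogisticRegression().fit(featurize(train_texts, wv, 300), train_labels)
# preds = clf.predict(featurize(test_texts, wv, 300))
```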


Author(s):  
Nazmun Nessa Moon ◽  
Imrus Salehin ◽  
Masuma Parvin ◽  
Md. Mehedi Hasan ◽  
Iftakhar Mohammad Talha ◽  
...  

In this study, we describe the process of identifying unnecessary videos using an advanced combined method of natural language processing and machine learning. The system also includes a framework containing analytics databases, which helps to establish statistical accuracy and to detect, accept, or reject unnecessary and unethical video content. In our video detection system, we extract text data from video content in two steps: first from video to MPEG-1 Audio Layer 3 (MP3), and then from MP3 to WAV format. We used the text-processing part of natural language processing to analyze and prepare the data set. We use both Naive Bayes and logistic regression classification algorithms in this detection system to determine the best accuracy for our system. In our research, the video MP4 data was converted to plain text data using Python library functions. This brief study discusses the identification of unauthorized, unsocial, unnecessary, unfinished, and malicious videos from their spoken audio content. By analyzing our data sets through this model, we can decide which videos should be accepted or rejected for further action.
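A hedged sketch of the pipeline as described: MP4 to MP3 to WAV conversion, speech-to-text, then a text classifier. The abstract does not name the authors' exact libraries, so the ffmpeg CLI, the SpeechRecognition package, and scikit-learn below are assumptions standing in for them.

```python
import subprocess
import speech_recognition as sr
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def video_to_text(mp4_path: str) -> str:
    """MP4 -> MP3 -> WAV with ffmpeg, then transcribe the WAV to plain text."""
    subprocess.run(["ffmpeg", "-y", "-i", mp4_path, "audio.mp3"], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", "audio.mp3", "audio.wav"], check=True)
    recognizer = sr.Recognizer()
    with sr.AudioFile("audio.wav") as source:
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio)

# train_texts / train_labels: assumed transcripts labelled accept (0) / reject (1).
clf = make_pipeline(CountVectorizer(), MultinomialNB())
# clf.fit(train_texts, train_labels)
# clf.predict([video_to_text("upload.mp4")])
```

A logistic regression classifier could be swapped in for `MultinomialNB` to reproduce the paper's second model for comparison.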


2018
Author(s):  
Jeremy Petch ◽  
Jane Batt ◽  
Joshua Murray ◽  
Muhammad Mamdani

BACKGROUND: The increasing adoption of electronic health records (EHRs) in clinical practice holds the promise of improving care and advancing research by serving as a rich source of data, but most EHRs allow clinicians to enter data in a text format without much structure. Natural language processing (NLP) may reduce reliance on manual abstraction of these text data by extracting clinical features directly from unstructured clinical digital text data and converting them into structured data.

OBJECTIVE: This study aimed to assess the performance of a commercially available NLP tool for extracting clinical features from free-text consult notes.

METHODS: We conducted a pilot, retrospective, cross-sectional study of the accuracy of NLP from dictated consult notes from our tuberculosis clinic, with manual chart abstraction as the reference standard. Consult notes for 130 patients were extracted and processed using NLP. We extracted 15 clinical features from these consult notes and grouped them a priori into categories of simple, moderate, and complex for analysis.

RESULTS: For the primary outcome of overall accuracy, NLP performed best for features classified as simple, achieving an overall accuracy of 96% (95% CI 94.3-97.6). Performance was slightly lower for features of moderate clinical and linguistic complexity at 93% (95% CI 91.1-94.4), and lowest for complex features at 91% (95% CI 87.3-93.1).

CONCLUSIONS: The findings of this study support the use of NLP for extracting clinical features from dictated consult notes in the setting of a tuberculosis clinic. Further research is needed to fully establish the validity of NLP for this and other purposes.
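For readers reproducing this kind of evaluation, accuracy figures like "96% (95% CI 94.3-97.6)" come from comparing NLP output against the chart-abstraction gold standard per feature group. The Wilson score interval below is one standard way to compute such a binomial confidence interval; the paper does not state which interval method it used, so this is an assumption, and the counts are illustrative.

```python
import math

def wilson_ci(correct: int, total: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion (z = 1.96)."""
    p = correct / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

# Illustrative: e.g. features x patients comparisons pooled within one group.
lo, hi = wilson_ci(correct=1872, total=1950)
print(f"accuracy {1872 / 1950:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```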


Sentiment classification is one of the best-known and most popular domains of machine learning and natural language processing, in which an algorithm is developed to understand the opinion of an entity in a manner similar to human beings. This article presents such an approach. Concepts of natural language processing are used for text representation, and a novel word-embedding model is then proposed for effective classification of the data. TF-IDF and common bag-of-words (BoW) representation models are considered for representing the text data; the importance of these models is discussed in the respective sections. The proposed model is tested on the IMDB dataset, using a 50% training and 50% testing split with three random shufflings of the dataset for evaluation.
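The evaluation protocol described here (BoW and TF-IDF representations, a 50/50 train/test split, three random shufflings) can be sketched as follows. The classifier choice and the `load_imdb` helper are assumptions for illustration; the paper's proposed embedding model is not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts, labels = load_imdb()  # hypothetical loader: 50k reviews, 0/1 polarity labels

for seed in (0, 1, 2):  # three random shufflings of the dataset
    X_tr, X_te, y_tr, y_te = train_test_split(
        texts, labels, test_size=0.5, shuffle=True, random_state=seed)
    for name, vec in (("BoW", CountVectorizer()), ("TF-IDF", TfidfVectorizer())):
        clf = make_pipeline(vec, LogisticRegression(max_iter=1000))
        clf.fit(X_tr, y_tr)
        print(f"{name} (shuffle {seed}): accuracy {clf.score(X_te, y_te):.3f}")
```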


2021
Author(s):  
Minoru Yoshida ◽  
Kenji Kita

Both words and numerals are tokens found in almost all documents, but they have different properties. Relatively little attention, however, has been paid to numerals found in texts, and many systems have treated the numbers in a document in ad-hoc ways: regarding them as mere strings in the same way as words, normalizing them to zeros, or simply ignoring them. The recent growth of natural language processing (NLP) research has changed this situation, and more and more attention is being paid to numeracy in documents. In this survey, we provide a quick overview of the history and recent advances of research on mining the relations between numerals and words found in text data.
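The three ad-hoc treatments the survey mentions can be made concrete as token-level preprocessing choices. An illustrative sketch:

```python
import re

NUM = re.compile(r"^\d+(\.\d+)?$")  # matches integer and decimal tokens

def handle_numerals(tokens, strategy="keep"):
    if strategy == "keep":   # treat numbers as mere strings, like any other word
        return tokens
    if strategy == "zero":   # normalize every digit to zero: "1984" -> "0000"
        return [re.sub(r"\d", "0", t) for t in tokens]
    if strategy == "drop":   # simply ignore numerals
        return [t for t in tokens if not NUM.match(t)]
    raise ValueError(f"unknown strategy: {strategy}")

print(handle_numerals(["price", "rose", "12.5", "percent"], "zero"))
# ['price', 'rose', '00.0', 'percent']
```

Each choice discards numeric information in a different way, which is precisely the gap the numeracy research surveyed here aims to close.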


2020
Author(s):  
Tianyong Hao ◽  
Zhengxing Huang ◽  
Likeng Liang ◽  
Heng Weng ◽  
Buzhou Tang

With the rapid growth of information technology, the necessity of processing massive amounts of health and medical data using advanced information technologies has also grown. A large amount of valuable data exists as natural text, such as free-text diagnoses, discharge summaries, online health discussions, and the eligibility criteria of clinical trials. Health natural language processing automatically analyzes the commonalities and differences of large amounts of text data and recommends appropriate actions on behalf of domain experts to assist medical decision making. This editorial shares methodological innovations in health natural language processing and its applications in the medical domain.


Author(s):  
Mitta Roja

Abstract: Cyberbullying is a major problem encountered on the internet that affects teenagers as well as adults. It has led to serious outcomes such as depression and suicide. Regulating content on social media platforms has become a growing need. The following study uses data from two different forms of cyberbullying, hate speech tweets from Twitter and comments based on personal attacks from Wikipedia forums, to build a model for detecting cyberbullying in text data using natural language processing and machine learning. Three methods for feature extraction and four classifiers are studied to outline the best approach. For the tweet data the model provides accuracies above 90%, and for the Wikipedia data it gives accuracies above 80%.

Keywords: Cyberbullying, Hate speech, Personal attacks, Machine learning, Feature extraction, Twitter, Wikipedia
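The "several feature extractors crossed with several classifiers" comparison this study describes is naturally expressed as a grid search. The abstract does not list its three extraction methods or four classifiers, so the specific pairings below are assumptions chosen from common practice.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

extractors = {
    "bow": CountVectorizer(),
    "tfidf": TfidfVectorizer(),
    "char-ngram": TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
}
classifiers = {
    "naive-bayes": MultinomialNB(),
    "logreg": LogisticRegression(max_iter=1000),
    "linear-svm": LinearSVC(),
}

def grid_accuracy(texts, labels):
    """Cross-validated accuracy for every extractor/classifier pairing."""
    for ename, ext in extractors.items():
        for cname, clf in classifiers.items():
            score = cross_val_score(make_pipeline(ext, clf), texts, labels, cv=5).mean()
            print(f"{ename:>10} + {cname:<11} accuracy={score:.3f}")

# texts, labels: tweets or Wikipedia comments with bullying/non-bullying labels.
```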

