Prior Beliefs Modulate Projection

Open Mind ◽  
2021 ◽  
pp. 1-12
Author(s):  
Judith Degen ◽  
Judith Tonhauser

Abstract
Beliefs about the world affect language processing and interpretation in several empirical domains. In two experiments, we tested whether subjective prior beliefs about the probability of utterance content modulate projection, that is, listeners’ inferences about speaker commitment to that content. We find that prior beliefs predict projection at both the group and the by-participant level: the higher the prior belief in a content, the more speakers are taken to be committed to it. This result motivates the integration of formal analyses of projection with cognitive theories of language understanding.


Author(s):  
Tian-Shun Yao

Based on a word-based theory of natural language processing, a word-based Chinese language understanding system has been developed. The theory, informed by psycholinguistic analysis and the features of the Chinese language, is presented together with a description of the computer programs that implement it. At the heart of the system are a Total Information Dictionary and the World Knowledge Source it draws on. The goal of this research is a system that understands not only individual Chinese sentences but whole texts.


Author(s):  
Jelena Luketina ◽  
Nantas Nardelli ◽  
Gregory Farquhar ◽  
Jakob Foerster ◽  
Jacob Andreas ◽  
...  

To be successful in real-world tasks, Reinforcement Learning (RL) needs to exploit the compositional, relational, and hierarchical structure of the world, and learn to transfer it to the task at hand. Recent advances in representation learning for language make it possible to build models that acquire world knowledge from text corpora and integrate this knowledge into downstream decision making problems. We thus argue that the time is right to investigate a tight integration of natural language understanding into RL in particular. We survey the state of the field, including work on instruction following, text games, and learning from textual domain knowledge. Finally, we call for the development of new environments as well as further investigation into the potential uses of recent Natural Language Processing (NLP) techniques for such tasks.


Author(s):  
Annie Zaenen

Hearers and readers make inferences on the basis of what they hear or read. These inferences are partly determined by the linguistic form that the writer or speaker chooses to give to her utterance. The inferences can be about the state of the world that the speaker or writer wants the hearer or reader to conclude are pertinent, or they can be about the attitude of the speaker or writer vis-à-vis this state of affairs. The focus here is on inferences of the first type. Research in semantics and pragmatics has isolated a number of linguistic phenomena that make specific contributions to the process of inference. Broadly, entailments of asserted material, presuppositions (e.g., factive constructions), and invited inferences (especially scalar implicatures) can be distinguished. While we make these inferences all the time, they have been studied only piecemeal in theoretical linguistics. When attempts are made to build natural language understanding systems, the need for a more systematic and wholesale approach to the problem is felt. Some of the approaches developed in Natural Language Processing are based on linguistic insights, whereas others use methods that do not require (full) semantic analysis. In this article, I give an overview of the main linguistic issues and of a variety of computational approaches, especially those stimulated by the RTE challenges first proposed in 2004.
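One concrete way to probe such inferences automatically, in the spirit of the RTE challenges mentioned above, is to query an off-the-shelf natural language inference model. A minimal sketch (assuming the Hugging Face transformers library and the public roberta-large-mnli checkpoint; the premise/hypothesis pair is an invented presupposition example, not from the article):

```python
# Sketch: testing whether a hypothesis is inferred from a premise with an
# off-the-shelf NLI model (pip install torch transformers).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "Kim stopped smoking."   # presupposition trigger: implies prior smoking
hypothesis = "Kim used to smoke."

inputs = tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# The model predicts CONTRADICTION / NEUTRAL / ENTAILMENT for the pair.
print(model.config.id2label[logits.argmax(-1).item()])
```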


2021 ◽  
Vol 21 (S2) ◽  
Author(s):  
Feihong Yang ◽  
Xuwen Wang ◽  
Hetong Ma ◽  
Jiao Li

Abstract
Background: Transformer is an attention-based architecture that has proven state of the art in natural language processing (NLP). To lower the barrier to using transformer-based models for medical language understanding, and to extend the scikit-learn toolkit's reach into deep learning, we propose an easy-to-learn Python toolkit named transformers-sklearn. By wrapping the interfaces of transformers in only three functions (i.e., fit, score, and predict), transformers-sklearn combines the advantages of the transformers and scikit-learn toolkits.
Methods: In transformers-sklearn, three Python classes were implemented: BERTologyClassifier for the classification task, BERTologyNERClassifier for the named entity recognition (NER) task, and BERTologyRegressor for the regression task. Each class provides three methods: fit for fine-tuning transformer-based models on the training dataset, score for evaluating the performance of the fine-tuned model, and predict for predicting the labels of the test dataset. transformers-sklearn is a user-friendly toolkit that (1) is customizable via a few parameters (e.g., model_name_or_path and model_type), (2) supports multilingual NLP tasks, and (3) requires less coding. The input data format is generated automatically by transformers-sklearn from the annotated corpus; newcomers only need to prepare the dataset, while the model framework and training procedures are predefined.
Results: We collected four open-source medical language datasets: TrialClassification for Chinese medical trial text multi-label classification, BC5CDR for English biomedical named entity recognition, DiabetesNER for Chinese diabetes entity recognition, and BIOSSES for English biomedical sentence similarity estimation. Across the four medical NLP tasks, the average size of our scripts is 45 lines per task, one-sixth the size of the corresponding transformers scripts. The experimental results show that transformers-sklearn with pretrained BERT models achieved macro F1 scores of 0.8225, 0.8703, and 0.6908 on the TrialClassification, BC5CDR, and DiabetesNER tasks, respectively, and a Pearson correlation of 0.8260 on the BIOSSES task, consistent with the results of transformers.
Conclusions: The proposed toolkit helps newcomers address medical language understanding tasks in the familiar scikit-learn coding style. The code and tutorials of transformers-sklearn are available at https://doi.org/10.5281/zenodo.4453803. In the future, more medical language understanding tasks will be supported to broaden the applications of transformers-sklearn.
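As a rough illustration of the three-method interface described above (a sketch based only on the class, method, and parameter names given in this abstract; the import path, exact call signatures, and toy data are assumptions):

```python
# Illustrative sketch of the fit/score/predict interface described above.
# BERTologyClassifier, model_type, and model_name_or_path come from the
# abstract; the import path, signatures, and toy dataset are hypothetical.
from transformers_sklearn import BERTologyClassifier

X_train = ["Patients were randomized to the treatment arm.",
           "The study evaluates long-term drug safety."]
y_train = ["allocation", "safety"]

clf = BERTologyClassifier(model_type="bert",
                          model_name_or_path="bert-base-uncased")
clf.fit(X_train, y_train)           # fine-tune on the annotated corpus
print(clf.score(X_train, y_train))  # evaluate the fine-tuned model
print(clf.predict(["Adverse events were recorded for six months."]))
```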


2021 ◽  
Vol 11 (7) ◽  
pp. 3095
Author(s):  
Suhyune Son ◽  
Seonjeong Hwang ◽  
Sohyeun Bae ◽  
Soo Jun Park ◽  
Jang-Hwan Choi

Multi-task learning (MTL) approaches are actively used for various natural language processing (NLP) tasks, and the Multi-Task Deep Neural Network (MT-DNN) has contributed significantly to improving performance on natural language understanding (NLU) tasks. One drawback, however, is that confusion between the language representations of the various tasks arises during training of the MT-DNN model. Inspired by the internal-transfer weighting of MTL in medical imaging, we introduce a Sequential and Intensive Weighted Language Modeling (SIWLM) scheme. SIWLM consists of two stages: (1) sequential weighted learning (SWL), which trains the model on all tasks sequentially and concentrically, and (2) intensive weighted learning (IWL), which lets the model focus on the central task. We apply this scheme to the MT-DNN model and call the result MTDNN-SIWLM. Our model achieves higher performance than the existing reference algorithms on six of the eight GLUE benchmark tasks and outperforms MT-DNN by 0.77 points on average across all tasks. Finally, we conduct a thorough empirical investigation to determine the optimal weight for each GLUE task.
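The abstract leaves the weighting mechanics unspecified; as one illustrative reading (hypothetical task names and weights, not the authors' implementation), the two stages can be pictured as different weightings of a combined multi-task loss:

```python
# Illustrative weighted multi-task loss; the task names and weight values
# are invented, not taken from the paper.
import torch

def weighted_mtl_loss(task_losses, weights):
    """Combine per-task losses with stage-specific weights: spread the
    weights across tasks in an SWL-like stage, concentrate them on the
    central task in an IWL-like stage."""
    return sum(w * task_losses[t] for t, w in weights.items())

losses = {"mnli": torch.tensor(0.90), "sst2": torch.tensor(0.40)}
swl = {"mnli": 0.5, "sst2": 0.5}   # sequential stage: balanced weights
iwl = {"mnli": 0.8, "sst2": 0.2}   # intensive stage: focus the central task
print(weighted_mtl_loss(losses, swl), weighted_mtl_loss(losses, iwl))
```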


2020 ◽  
Author(s):  
David DeFranza ◽  
Himanshu Mishra ◽  
Arul Mishra

Language provides an ever-present context for our cognitions and has the ability to shape them. Languages across the world can be gendered (languages in which nouns, verbs, or pronouns are marked as female or male) or genderless. In an ongoing debate, one stream of research suggests that gendered languages are more likely to display gender prejudice than genderless languages; another stream suggests that language does not have the ability to shape gender prejudice. In this research, we contribute to the debate using a Natural Language Processing (NLP) method that captures the meaning of a word from the contexts in which it occurs. Using text data from Wikipedia and the Common Crawl project (which contains text from billions of publicly facing websites) across 45 world languages, covering the majority of the world’s population, we test for gender prejudice in gendered and genderless languages. We find that gender prejudice occurs more in gendered than in genderless languages. Moreover, we examine whether the genderedness of a language influences the stereotypic dimensions of warmth and competence using the same NLP method.
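The specific measure the authors use is not given here; bias tests in this general family compare how strongly a target word associates with female versus male attribute terms in embedding space. A toy sketch with random stand-in vectors (hypothetical; a real analysis would use embeddings trained on Wikipedia or Common Crawl text, as in the study):

```python
# Toy embedding-association gap; the vectors are random stand-ins, so the
# output is meaningless except as a demonstration of the computation.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_gap(target, female_attrs, male_attrs, vec):
    """Mean cosine similarity of a target word to female vs. male attribute
    words; a nonzero gap suggests a gendered association."""
    f = np.mean([cosine(vec[target], vec[a]) for a in female_attrs])
    m = np.mean([cosine(vec[target], vec[a]) for a in male_attrs])
    return f - m

rng = np.random.default_rng(0)
vec = {w: rng.normal(size=50) for w in ["career", "she", "her", "he", "him"]}
print(association_gap("career", ["she", "her"], ["he", "him"], vec))
```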


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Kazi Nabiul Alam ◽  
Md Shakib Khan ◽  
Abdur Rab Dhruba ◽  
Mohammad Monirujjaman Khan ◽  
Jehad F. Al-Amri ◽  
...  

The COVID-19 pandemic has had a devastating effect on many people, creating severe anxiety, fear, and complicated emotions. After the initiation of vaccinations against the coronavirus, people’s feelings have become more diverse and complex. Our aim in this research is to understand and unravel those sentiments using deep learning techniques. Social media is currently among the best ways to express feelings and emotions, and with the help of Twitter one can get a better idea of what is trending and on people’s minds. Our motivation for this research was to understand people’s diverse sentiments regarding the vaccination process. The collected tweets span the timeline from December 21 to July 21 and contain information about the most common vaccines recently available across the world. Sentiments regarding vaccines of all sorts were assessed using the natural language processing (NLP) tool Valence Aware Dictionary and sEntiment Reasoner (VADER). Classifying the obtained sentiment polarities into three groups (positive, negative, and neutral) helped us visualize the overall scenario; our findings included 33.96% positive, 17.55% negative, and 48.49% neutral responses. In addition, we analyzed the timeline of the tweets, as sentiments fluctuated over time. Recurrent neural network- (RNN-) oriented architectures, namely long short-term memory (LSTM) and bidirectional LSTM (Bi-LSTM), were used to assess the performance of the predictive models, with LSTM achieving an accuracy of 90.59% and Bi-LSTM achieving 90.83%. Other performance metrics such as precision, F1-score, and a confusion matrix were also used to validate our models and findings. This study improves understanding of public opinion on COVID-19 vaccines and supports the aim of eradicating the coronavirus.
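As a point of reference for the scoring step described above, here is a minimal sketch using the vaderSentiment package; the tweet text is invented, and the ±0.05 compound-score thresholds are the ones conventionally used with VADER, not necessarily the exact ones used in this study.

```python
# Minimal VADER scoring sketch (pip install vaderSentiment).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
tweet = "Got my second dose today, feeling relieved and grateful!"
scores = analyzer.polarity_scores(tweet)  # {'neg', 'neu', 'pos', 'compound'}

# Conventional VADER thresholds on the compound score.
if scores["compound"] >= 0.05:
    label = "positive"
elif scores["compound"] <= -0.05:
    label = "negative"
else:
    label = "neutral"
print(label, scores)
```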


10.2196/21504 ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. e21504
Author(s):  
Angela Chang ◽  
Peter Johannes Schulz ◽  
ShengTsung Tu ◽  
Matthew Tingchi Liu

Background: Information about a new coronavirus emerged in 2019 and rapidly spread around the world, attracting significant public attention along with negative bias. The use of stigmatizing language for the purpose of assigning blame sparked a debate.
Objective: This study aims to identify social stigma and negative sentiment toward blameworthy agents in online communities.
Methods: We used a tailored text-mining platform to identify data in their natural settings by retrieving and filtering online sources, and we constructed vocabularies and learned word representations with natural language processing for deductive analysis in line with the research theme. The data sources comprised ten news websites, eleven discussion forums, one social network, and two principal media-sharing networks in Taiwan. A synthesis of news and social networking analytics covering December 30, 2019, to March 31, 2020, is presented.
Results: We collated over 1.07 million Chinese texts. Almost two-thirds of the texts on COVID-19 came from news services (n=683,887, 63.68%), followed by Facebook (n=297,823, 27.73%), discussion forums (n=62,119, 5.78%), and Instagram and YouTube (n=30,154, 2.81%). Our data show that online news served as a hotbed for negativity and for driving emotional social posts. Online information about COVID-19 associated it with China, and with a specific city within China through references to the “Wuhan pneumonia”, potentially encouraging xenophobia. This problematic moniker was used at high frequency despite the World Health Organization guideline to avoid terms that invite biased perceptions and ethnic discrimination. Social stigma surfaces in negatively valenced responses, which are associated with the most-blamed targets.
Conclusions: Our sample is sufficiently representative of a community because it covers a broad range of mainstream online media. Stigmatizing language linked to the COVID-19 pandemic reflects a lack of civic responsibility and encourages bias, hostility, and discrimination. Frequently used stigmatizing terms were deemed offensive and may have contributed to recent backlashes against China by directing blame and encouraging xenophobia. We emphasize implications ranging from health risk communication to stigma mitigation and xenophobia concerns amid the COVID-19 outbreak. Understanding the nomenclature and biased terms employed in relation to the outbreak is paramount. We propose solidarity with communication professionals in combating the COVID-19 outbreak and the accompanying infodemic. Finding solutions to curb the spread of bias, stigma, and discrimination is imperative.
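The abstract does not detail how the word representations were learned; as a generic sketch of that step (a hypothetical toy corpus, with gensim's Word2Vec standing in for whatever embedding method the authors actually used):

```python
# Generic word-representation learning sketch (pip install gensim).
# The two toy sentences are invented stand-ins for the study's 1.07 million
# collected texts; real input would be tokenized Chinese posts and articles.
from gensim.models import Word2Vec

corpus = [
    ["wuhan", "pneumonia", "outbreak", "blame"],
    ["covid", "outbreak", "response", "community"],
]
model = Word2Vec(sentences=corpus, vector_size=50, window=5,
                 min_count=1, seed=1)
# Nearest neighbors in the learned space hint at a word's associations.
print(model.wv.most_similar("outbreak", topn=2))
```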


2021 ◽  
Author(s):  
Igor Grossmann ◽  
Oliver Twardus ◽  
Michael E. W. Varnum ◽  
Eranda Jayawickreme ◽  
John McLevey

How will the world change as a result of the COVID-19 pandemic? What can people do to best adapt to the societal changes ahead? To answer these questions, over the summer and fall of 2020 we launched the World After COVID Project, interviewing more than 50 of the world’s leading scholars in the behavioral and social sciences, including fellows of national academies and presidents of major scientific societies. Experts independently shared their thoughts on what effects the COVID-19 pandemic will have on our societies and offered advice for responding successfully to new challenges and opportunities. Using mixed-method and natural language processing analyses, we distilled and analyzed these predictions and suggestions, observing a diversity of scenarios. Results also show that half of the experts approached their post-COVID predictions dialectically, highlighting both positive and negative features of the same prediction. Moreover, prosocial goals and meta-cognition, two chief tenets of the Common Wisdom model, were evident in their recommendations for how to cope with possible changes. The project provides a time capsule of experts’ predictions at a moment of major societal change. We discuss implications for strengthening the focus on prediction (rather than mere explanation) in psychological science, as well as the value of uncertainty and dialecticism in forecasting.

