Content Analysis of Textbooks via Natural Language Processing: Findings on Gender, Race, and Ethnicity in Texas U.S. History Textbooks

AERA Open ◽  
2020 ◽  
Vol 6 (3) ◽  
Article 233285842094031
Author(s):  
Li Lucy ◽  
Dorottya Demszky ◽  
Patricia Bromley ◽  
Dan Jurafsky

Cutting-edge data science techniques can shed new light on fundamental questions in educational research. We apply techniques from natural language processing (lexicons, word embeddings, topic models) to 15 U.S. history textbooks widely used in Texas between 2015 and 2017, studying their depiction of historically marginalized groups. We find that Latinx people are rarely discussed, and the most common famous figures are nearly all White men. Lexicon-based approaches show that Black people are described as performing actions associated with low agency and power. Word embeddings reveal that women tend to be discussed in the contexts of work and the home. Topic modeling highlights the higher prominence of political topics compared with social ones. We also find that more conservative counties tend to purchase textbooks with less representation of women and Black people. Building on a rich tradition of textbook analysis, we release our computational toolkit to support new research directions.
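The lexicon-based step described in this abstract can be sketched as a simple count of agency-coded verbs in sentences that mention a given group. The tiny lexicons, group terms, and sentences below are invented for illustration and are not the study's actual resources or data:

```python
# Sketch of a lexicon-based agency analysis: count how often verbs from
# hand-labeled high-/low-agency lexicons co-occur with mentions of a group.
# Lexicons and sentences are toy examples, not the paper's data.

HIGH_AGENCY = {"led", "founded", "commanded", "invented"}
LOW_AGENCY = {"received", "suffered", "followed", "endured"}

def agency_profile(sentences, group_terms):
    """Return (high, low) agency-verb counts for sentences mentioning the group."""
    high = low = 0
    for sent in sentences:
        tokens = sent.lower().split()
        if any(term in tokens for term in group_terms):
            high += sum(t in HIGH_AGENCY for t in tokens)
            low += sum(t in LOW_AGENCY for t in tokens)
    return high, low

corpus = [
    "The general led the army north",
    "Enslaved people suffered and endured hardship",
    "She founded the first school in the county",
]
print(agency_profile(corpus, {"general", "she"}))  # (high, low) counts
```

A real analysis would use a published agency/power lexicon and coreference-resolved mentions rather than raw token matching, but the counting logic is the same.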

2018 ◽  
Vol 2 (3) ◽  
pp. 22 ◽  
Author(s):  
Jeffrey Ray ◽  
Olayinka Johnny ◽  
Marcello Trovati ◽  
Stelios Sotiriadis ◽  
Nik Bessis

The continuous creation of data has posed new research challenges due to its complexity, diversity and volume. Consequently, Big Data has increasingly become a fully recognised scientific field. This article provides an overview of the current research efforts in Big Data science, with particular emphasis on its applications, as well as theoretical foundation.


2020 ◽  
Author(s):  
Abeed Sarker ◽  
Mohammed Ali Al-Garadi ◽  
Yuan-Chi Yang ◽  
Jinho Choi ◽  
Arshed A Quyyumi ◽  
...  

The capabilities of natural language processing (NLP) methods have expanded significantly in recent years, particularly driven by advances in data science and machine learning. However, the utilization of NLP for patient-oriented clinical research and care (POCRC) is still limited. A primary reason behind this is perhaps the fact that clinical NLP methods are developed, optimized, and evaluated on narrow-focus datasets and tasks (e.g., for the detection of specific symptoms from free texts). Such research and development (R&D) approaches may be described as problem-oriented, and the developed systems only perform well for a given specialized task. As standalone systems, they are also typically not suitable for addressing the needs of POCRC, leaving a gap between the capabilities of clinical NLP methods and the needs of patient-facing medical experts. We believe that to make clinical NLP systems more valuable, future R&D efforts need to follow a new research paradigm, one that explicitly incorporates characteristics that are crucial for POCRC. We present our viewpoint about four interrelated characteristics, three representing NLP system properties and one associated with the R&D process: (i) generalizability (capability to characterize patients, not clinical problems), (ii) interpretability (ability to explain system decisions), (iii) customizability (flexibility for adaptation to distinct settings, problems, and cohorts), and (iv) cross-evaluation (validated performance on heterogeneous datasets). Using the NLP task of clinical concept detection as an example, we detail these characteristics and discuss how they may lead to increased uptake of NLP systems for POCRC.
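As a minimal illustration of the clinical concept detection task this viewpoint uses as its running example, here is a gazetteer-based matcher that maps text spans to concept labels. The concept dictionary is invented for illustration; production systems map to standard terminologies such as UMLS:

```python
# Minimal gazetteer-based clinical concept detector: at each token
# position, try the longest dictionary phrase first.
# The concept dictionary below is invented for illustration only.

GAZETTEER = {
    ("chest", "pain"): "SYMPTOM",
    ("shortness", "of", "breath"): "SYMPTOM",
    ("aspirin",): "DRUG",
}
MAX_LEN = max(len(k) for k in GAZETTEER)

def detect_concepts(text):
    """Return (matched phrase, concept label) pairs, longest match first."""
    tokens = text.lower().split()
    found, i = [], 0
    while i < len(tokens):
        for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            span = tuple(tokens[i:i + n])
            if span in GAZETTEER:
                found.append((" ".join(span), GAZETTEER[span]))
                i += n
                break
        else:
            i += 1
    return found

print(detect_concepts("Patient reports chest pain and was given aspirin"))
```

Such a dictionary lookup is exactly the kind of narrow, problem-oriented component the authors argue must be made generalizable, interpretable, customizable, and cross-evaluated before it serves POCRC well.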


2020 ◽  
Author(s):  
Masashi Sugiyama

Word embeddings have recently been applied successfully to many natural language processing problems, and how to train a robust and accurate word embedding system efficiently is an active research area. Since many, if not all, words have more than one sense, it is necessary to learn a separate vector for each sense of a word. In this project, we therefore explore two multi-sense word embedding models: the Multi-Sense Skip-gram (MSSG) model and the Non-Parametric Multi-Sense Skip-gram (NP-MSSG) model. Furthermore, we propose an extension of the Multi-Sense Skip-gram model, the Incremental Multi-Sense Skip-gram (IMSSG) model, which can learn the vectors of all senses of a word incrementally. We evaluate all the systems on a word similarity task and show that IMSSG outperforms the other models.
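The sense-assignment step these models share can be sketched as follows: each word keeps several sense vectors, the current context is compared against each, and (in the non-parametric variant) a new sense is spawned when no existing sense is similar enough. The vectors and threshold below are toy values, not trained parameters from any of these models:

```python
import numpy as np

# Sketch of sense assignment in (non-parametric) multi-sense skip-gram
# models: pick the sense vector most similar to the context vector, or
# create a new sense when similarity falls below a threshold.
# Vectors and the threshold are toy values for illustration.

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign_sense(senses, context_vec, new_sense_threshold=0.3):
    """Return the index of the best-matching sense, creating one if needed."""
    sims = [cosine(s, context_vec) for s in senses]
    best = int(np.argmax(sims))
    if sims[best] < new_sense_threshold:   # the non-parametric step
        senses.append(context_vec.copy())
        return len(senses) - 1
    return best

# Two existing senses of "bank" (financial vs. river) in a toy 2-D space.
senses = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(assign_sense(senses, np.array([0.9, 0.1])))    # matches sense 0
print(assign_sense(senses, np.array([-1.0, -1.0])))  # dissimilar: new sense
```

In training, the chosen sense vector (rather than a single word vector) is then updated by the usual skip-gram gradient step; the incremental variant proposed here would additionally allow new senses to be added as new data arrives.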


Online business has opened up several avenues for researchers and computer scientists to develop new research models. The business activities that customers carry out produce abundant data, and analysis of that data can yield useful inferences: these may help a system improve its quality of service and reveal current market requirements, business trends, future needs of society, and so on. In this context, the present paper proposes a feature extraction technique named the Business Sentiment Quotient (BSQ). BSQ builds on the word2vec [1] word embedding technique from natural language processing. Business-related tweets are collected from Twitter and processed in Python to estimate BSQ, which may then be used in further machine learning activities.
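The abstract does not give a formula for BSQ, but one plausible reading of a word2vec-based business sentiment score can be sketched like this. The toy embeddings, the seed direction, and the averaging itself are all assumptions for illustration, not the authors' definition:

```python
import numpy as np

# Hypothetical sketch of a word2vec-style business sentiment score:
# average similarity of a tweet's tokens to a "positive business"
# direction in embedding space. The toy vectors, seed direction, and
# formula are assumptions; the paper does not define BSQ this way.

EMB = {  # toy 2-D stand-ins for word2vec vectors
    "profit": np.array([1.0, 0.2]),
    "growth": np.array([0.9, 0.1]),
    "loss":   np.array([-1.0, 0.1]),
    "delay":  np.array([-0.8, 0.3]),
}
POS_SEED = np.array([1.0, 0.0])  # assumed positive-business direction

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def bsq(tweet):
    """Mean cosine similarity of known tokens to the positive seed direction."""
    vecs = [EMB[t] for t in tweet.lower().split() if t in EMB]
    if not vecs:
        return 0.0
    return sum(cosine(v, POS_SEED) for v in vecs) / len(vecs)

print(bsq("strong profit and growth"))  # positive score
print(bsq("another loss and delay"))    # negative score
```

With real word2vec vectors the seed direction would be estimated from labeled seed words rather than fixed by hand.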


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Ivano Lauriola ◽  
Fabio Aiolli ◽  
Alberto Lavelli ◽  
Fabio Rinaldi

Abstract Background: Named Entity Recognition is a common task in Natural Language Processing applications, whose purpose is to recognize named entities in textual documents. Several systems exist to solve this task in the biomedical domain, based on Natural Language Processing techniques and Machine Learning algorithms. A crucial step of these applications is the choice of the representation which describes the data. Several representations have been proposed in the literature, some of which are based on strong knowledge of the domain and consist of features manually defined by domain experts. Usually, these representations describe the problem well, but they require substantial human effort and annotated data. On the other hand, general-purpose representations like word embeddings do not require human domain knowledge, but they may be too general for a specific task. Results: This paper investigates methods to learn the best representation from data directly, by combining several knowledge-based representations and word embeddings. Two mechanisms have been considered to perform the combination: neural networks and Multiple Kernel Learning. To this end, we use a hybrid architecture for biomedical entity recognition which integrates dictionary look-up (also known as gazetteers) with machine learning techniques. Results on the CRAFT corpus clearly show the benefits of the proposed algorithm in terms of F1 score. Conclusions: Our experiments show that the principled combination of general, domain-specific, word-, and character-level representations improves the performance of entity recognition. We also discuss the contribution of each representation in the final solution.
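The Multiple Kernel Learning combination described here can be sketched as a weighted sum of base kernels, one per representation. The toy feature matrices and fixed weights below are illustrative; in MKL proper the weights are learned jointly with the classifier:

```python
import numpy as np

# Sketch of combining several representations via a weighted sum of
# base kernels, one per representation (word-level, character-level,
# gazetteer features). In MKL the weights are learned jointly with the
# classifier; here they are fixed toy values for illustration.

def linear_kernel(X):
    return X @ X.T

# Three toy representations of the same 3 tokens.
word_feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
char_feats = np.array([[0.5], [0.2], [0.7]])
gaz_feats  = np.array([[1.0], [0.0], [1.0]])  # token in the dictionary or not

weights = [0.5, 0.2, 0.3]  # assumed fixed weights; MKL would learn these
kernels = [linear_kernel(F) for F in (word_feats, char_feats, gaz_feats)]
K = sum(w * Kb for w, Kb in zip(weights, kernels))

print(K.shape)  # combined 3x3 kernel matrix, fed to a kernel classifier
```

Because each base kernel is positive semi-definite and the weights are non-negative, the combined K is a valid kernel and can be passed directly to a kernel machine such as an SVM.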


2017 ◽  
Vol 26 (01) ◽  
pp. 214-227 ◽  
Author(s):  
G. Gonzalez-Hernandez ◽  
A. Sarker ◽  
K. O’Connor ◽  
G. Savova

Summary Background: Natural Language Processing (NLP) methods are increasingly being utilized to mine knowledge from unstructured health-related texts. Recent advances in noisy text processing techniques are enabling researchers and medical domain experts to go beyond the information encapsulated in published texts (e.g., clinical trials and systematic reviews) and structured questionnaires, and obtain perspectives from other unstructured sources such as Electronic Health Records (EHRs) and social media posts. Objectives: To review the recently published literature discussing the application of NLP techniques for mining health-related information from EHRs and social media posts. Methods: The literature review covered research published over the last five years, based on searches of PubMed, conference proceedings, and the ACM Digital Library, as well as relevant publications referenced in papers. We particularly focused on the techniques employed on EHRs and social media data. Results: A set of 62 studies involving EHRs and 87 studies involving social media matched our criteria and were included in this paper. We present the purposes of these studies, outline the key NLP contributions, and discuss the general trends observed in the field, the current state of research, and important outstanding problems. Conclusions: Over recent years, there has been a continuing transition from lexical and rule-based systems to learning-based approaches, because of the growth of annotated data sets and advances in data science. For EHRs, publicly available annotated data is still scarce, and this acts as an obstacle to research progress. In contrast, research on social media mining has seen rapid growth, particularly because the large amount of unlabeled data available via this resource compensates for the uncertainty inherent to the data. Effective mechanisms to filter out noise and to map social media expressions to standard medical concepts are crucial open research problems. Shared tasks and other competitive challenges have been driving factors behind the implementation of open systems, and they are likely to play an important role in the development of future systems.

