Prediction of suicidal ideation in young people from the analysis of texts in social networks written in Mexican Spanish: a review of the state of the art

2020
pp. 29-33
Author(s):
Gabriel AGUILERA-GONZÁLEZ
Christian PADILLA-NAVARRO
Carlos ZARATE-TREJO
Georges KHALAF

Suicide prevention is one of the great challenges of the current era. Institutions such as the World Health Organization continue to search for every possible approach to early detection and timely prevention. Suicide rates have risen worldwide, and Mexico, although not the country with the most suicides, is among the countries with the highest growth in recent years. At the same time, the use of social networks has profoundly changed the way we communicate: expressing oneself through a social network is becoming more common than expressing oneself to other people face to face. Several studies, presented later in this review, show that content posted on social networks can be used to detect cases of depression, suicide risk, and other mental health problems. Technological tools such as Natural Language Processing have proven to be effective allies for the early detection of risks such as abuse, bullying, and emotional problems. The present research carries out an in-depth analysis of the state of the art in applying Natural Language Processing to detect suicide risk from the analysis of Mexican Spanish texts on social networks.
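To make the kind of analysis surveyed here concrete, the following is a minimal, hypothetical sketch of the text-classification baseline such studies typically build on: TF-IDF features over short Spanish posts fed to a logistic-regression classifier. The example posts, labels, and model choices are illustrative assumptions, not material from any of the reviewed works.

```python
# Minimal sketch: classify short Spanish-language posts as at-risk or not.
# Texts and labels below are invented placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "ya no quiero seguir con esto",        # hypothetical at-risk post
    "hoy fue un gran día con mis amigos",  # hypothetical neutral post
    "me siento solo y sin salida",
    "estoy feliz por el nuevo trabajo",
]
labels = [1, 0, 1, 0]  # 1 = potential risk signal, 0 = no signal

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)
print(model.predict(["no le encuentro sentido a nada"]))
```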

Author(s):  
Olaide Nathaniel Oyelade
Absalom E. Ezugwu

Coronavirus disease, also known as COVID-19, has been declared a pandemic by the World Health Organization (WHO). At the time of conducting this study, over 1.6 million cases had been recorded and more than 105,000 people had died of the disease, with these figures rising daily across the globe. The burden of this highly contagious respiratory disease is that it presents both symptomatically and asymptomatically in those already infected, leading to an exponential rise in the number of infections and fatalities. It is therefore crucial to expedite early detection and diagnosis of the disease across the world. The case-based reasoning (CBR) model is an effective paradigm that allows the specific knowledge of previously experienced, concrete problem situations (here, specific patient cases) to be reused for solving new cases. This study therefore aims to leverage the rich database of COVID-19 cases to interpret and solve new cases, from their early stage to the advanced stage. The approach adopted in this study employs a natural language processing (NLP) technique to parse case records and then formalize each case, which is represented as a mini-ontology file. The formalized case is then passed to a CBR model, which classifies it as positive or negative for COVID-19. Feature extraction for each case is performed by classifying the tokens extracted by the NLP approach into spatial, temporal, and thematic classes before encoding them with an ontology modeling method. The CBR model then uses the formalized features to compute the similarity of the new case to similar cases retrieved from the model's archive. The proposed framework was populated with 68 cases obtained from the Italian Society of Medical and Interventional Radiology (SIRM) repository. The results revealed that the proposed approach exploits the locations (spatial) and times (temporal) of contagion to detect cases at an early stage, from two days onward and well before the fourteen-day incubation period has elapsed. The proposed framework achieved an accuracy of 97.10%, a sensitivity of 0.98, and a specificity of 0.066. The study found that the proposed model can assist physicians in diagnosing and isolating cases, thereby minimizing the rate of contagion and reducing the false diagnoses observed in some parts of the globe.
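As a rough illustration of the retrieval step in such a CBR pipeline (not the authors' implementation), the sketch below scores a new case against archived cases with a weighted similarity over spatial, temporal, and thematic features; the feature names, weights, and cases are hypothetical.

```python
# Illustrative CBR retrieval: weighted similarity over hypothetical features.
from datetime import date

def feature_sim(a, b):
    # exact-match similarity for categorical features (location, symptom)
    return 1.0 if a == b else 0.0

def temporal_sim(d1, d2, window=14):
    # onset dates closer together (within an assumed 14-day window) score higher
    return max(0.0, 1.0 - abs((d1 - d2).days) / window)

WEIGHTS = {"location": 0.4, "onset": 0.3, "symptom": 0.3}  # assumed weights

def case_similarity(new, old):
    return (WEIGHTS["location"] * feature_sim(new["location"], old["location"])
            + WEIGHTS["onset"] * temporal_sim(new["onset"], old["onset"])
            + WEIGHTS["symptom"] * feature_sim(new["symptom"], old["symptom"]))

# Hypothetical case archive and new case.
archive = [
    {"location": "Lombardy", "onset": date(2020, 3, 2), "symptom": "dry cough", "label": "positive"},
    {"location": "Lazio", "onset": date(2020, 3, 20), "symptom": "headache", "label": "negative"},
]
new_case = {"location": "Lombardy", "onset": date(2020, 3, 4), "symptom": "dry cough"}

best = max(archive, key=lambda c: case_similarity(new_case, c))
print(best["label"], round(case_similarity(new_case, best), 2))
```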


2021
Vol 3
Author(s):
Marieke van Erp
Christian Reynolds
Diana Maynard
Alain Starke
Rebeca Ibáñez Martín
...  

In this paper, we discuss the use of natural language processing and artificial intelligence to analyze the nutritional and sustainability aspects of recipes and food. We present the state of the art and some use cases, followed by a discussion of challenges. Our perspective is that, while these challenges are typically technical in nature, addressing them nevertheless requires an interdisciplinary approach that combines natural language processing and artificial intelligence with expert domain knowledge to create practical tools and comprehensive analyses for the food domain.


2021
Author(s):  
AISDL

The meteoric rise of social media news during the ongoing COVID-19 pandemic is worthy of advanced research. Freedom of speech in many parts of the world, especially in developed countries, together with the liberty to socialize online, has led to noteworthy information sharing during the pandemic panic. Although social media has served as a remarkable communication intervention during past crises, the volume of tweets generated on Twitter during the ongoing COVID-19 pandemic is incomparable with earlier records. This study examines social media news trends and compares COVID-19 tweets as a corpus drawn from Twitter. By applying Natural Language Processing (NLP) methods to the tweets, we were able to extract and quantify the similarities between tweets over time, showing that some people say the same thing about the pandemic while other Twitter users view it differently. The tools we used are spaCy, NetworkX, WordCloud, and Python's re module. This study contributes to the social media literature by characterizing the similarity and divergence between COVID-19 tweets from the public and from health agencies such as the World Health Organization (WHO). The study also sheds light on sparse and dense COVID-19 text networks and their implications for policymakers. Finally, the study discusses its limitations and proposes future studies.
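A minimal sketch of how pairwise tweet similarity can be quantified with spaCy, in the spirit of the NLP methods mentioned above, is shown below. It is not the authors' actual pipeline, the example tweets are invented, and it assumes the en_core_web_md model (which ships with word vectors) is installed.

```python
# Minimal sketch: pairwise similarity between short texts using spaCy vectors.
import spacy

nlp = spacy.load("en_core_web_md")  # assumed: a model with word vectors

tweets = [
    "Wash your hands and stay home to stop the spread of COVID-19.",
    "Staying home and washing hands helps slow down the coronavirus.",
    "Looking forward to the football season starting again.",
]
docs = [nlp(t) for t in tweets]

# Print cosine-style similarity for every pair of tweets.
for i in range(len(docs)):
    for j in range(i + 1, len(docs)):
        print(i, j, round(docs[i].similarity(docs[j]), 3))
```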


2021
pp. 1-13
Author(s):  
Deguang Chen
Ziping Ma
Lin Wei
Yanbin Zhu
Jinlin Ma
...  

Text-based reading comprehension models have great research significance and market value and are one of the main directions of natural language processing. Reading comprehension models that produce single-span answers have recently attracted considerable attention and achieved significant results. In contrast, models that produce multi-span answers have been less investigated, and their performance needs improvement. To address this issue, in this paper we propose a text-based multi-span network for reading comprehension, ALBERT_SBoundary, and build a multi-span answer corpus, MultiSpan_NMU. We conduct extensive experiments on the public multi-span corpus MultiSpan_DROP and on our multi-span answer corpus MultiSpan_NMU, and compare the proposed method with the state of the art. The experimental results show that our method achieves F1 scores of 84.10 and 92.88 on the MultiSpan_DROP and MultiSpan_NMU datasets, respectively, while also having fewer parameters and a shorter training time.
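For readers unfamiliar with the task, the sketch below illustrates multi-span answer extraction in its simplest form: contiguous runs of tokens whose (hypothetical) answer probabilities exceed a threshold are returned as separate spans. This is only a simplified illustration of the problem setting, not the ALBERT_SBoundary model itself.

```python
# Simplified multi-span extraction from per-token answer probabilities.
# The tokens and scores below are made up for illustration.
def extract_spans(tokens, probs, threshold=0.5):
    spans, current = [], []
    for tok, p in zip(tokens, probs):
        if p >= threshold:
            current.append(tok)          # token belongs to an answer span
        elif current:
            spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

tokens = ["The", "award", "went", "to", "Alice", "Smith", "and", "Bob", "Jones", "."]
probs  = [0.05, 0.10, 0.10, 0.10, 0.90, 0.85, 0.20, 0.92, 0.88, 0.01]
print(extract_spans(tokens, probs))  # ['Alice Smith', 'Bob Jones']
```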


Author(s):  
Zixuan Ke
Vincent Ng

Despite being investigated for over 50 years, the task of automated essay scoring is far from being solved. Nevertheless, it continues to draw a lot of attention in the natural language processing community in part because of its commercial and educational values as well as the associated research challenges. This paper presents an overview of the major milestones made in automated essay scoring research since its inception.


Author(s):  
Amal Zouaq

This chapter gives an overview of the state of the art in natural language processing for ontology learning. It presents two main NLP techniques for knowledge extraction from text, namely shallow techniques and deep techniques, and explains their usefulness at each step of the ontology learning process. The chapter also argues for the value of deeper semantic analysis methods for ontology learning; in fact, there have been very few attempts to create ontologies using deep NLP. After a brief introduction to the main semantic analysis approaches, the chapter focuses on lexico-syntactic patterns based on dependency grammars and explains how these patterns can be considered a step towards deeper semantic analysis. Finally, the chapter addresses the “ontologization” task, that is, the ability to filter important concepts and relationships from the mass of extracted knowledge.
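As an illustration of a lexico-syntactic pattern of the kind discussed here (and not the chapter's own system), the sketch below harvests candidate hypernym/hyponym pairs with a Hearst-style "X such as Y" pattern; it assumes spaCy and its small English model are available.

```python
# Illustrative Hearst-style pattern: NOUN "such as" NOUN -> (hypernym, hyponym).
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
matcher.add("SUCH_AS", [[
    {"POS": {"IN": ["NOUN", "PROPN"]}},
    {"LOWER": "such"},
    {"LOWER": "as"},
    {"POS": {"IN": ["NOUN", "PROPN"]}},
]])

def candidate_relations(text):
    doc = nlp(text)
    for _, start, end in matcher(doc):
        span = doc[start:end]
        # only the first hyponym after "such as" is captured in this sketch
        yield span[0].lemma_, span[-1].lemma_

print(list(candidate_relations("Ontologies describe concepts such as persons and events.")))
```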


2012
Vol 20 (1)
pp. 69-97
Author(s):
GENNADI LEMBERSKY
DANNY SHACHAM
SHULY WINTNER

Morphological analysis and disambiguation are crucial stages in a variety of natural language processing applications, especially when languages with complex morphology are concerned. We present a system which disambiguates the output of a morphological analyzer for Hebrew. It consists of several simple classifiers and a module that combines them under the constraints imposed by the analyzer. We explore several approaches to classifier combination, as well as a back-off mechanism that relies on a large unannotated corpus. Our best result, around 83 percent accuracy, compares favorably with the state of the art on this task.
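The sketch below illustrates the general idea of classifier combination under analyzer constraints; it is not the paper's system. Several simple scorers vote, but the final choice is restricted to the analyses licensed by the morphological analyzer. The candidate analyses, classifiers, and weights are hypothetical.

```python
# Illustrative combination of simple classifiers constrained to analyzer output.
def combine(candidates, classifiers, weights):
    """Pick the analyzer-licensed analysis with the highest weighted score."""
    def score(analysis):
        return sum(w * clf(analysis) for clf, w in zip(classifiers, weights))
    return max(candidates, key=score)

# Hypothetical candidate analyses for an ambiguous token.
candidates = ["NOUN+def", "VERB.past.3ms", "NOUN+poss.1s"]

# Toy "classifiers": each returns a confidence score for a given analysis.
pos_clf   = lambda a: 0.7 if a.startswith("NOUN") else 0.3
morph_clf = lambda a: 0.8 if "def" in a else 0.4

print(combine(candidates, [pos_clf, morph_clf], weights=[0.6, 0.4]))
```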


2019
Vol 53 (2)
pp. 3-10
Author(s):  
Muthu Kumar Chandrasekaran
Philipp Mayr

The 4th joint BIRNDL workshop was held at the 42nd ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2019) in Paris, France. BIRNDL 2019 intended to stimulate IR researchers and digital library professionals to elaborate on new approaches in natural language processing, information retrieval, scientometrics, and recommendation techniques that can advance the state-of-the-art in scholarly document understanding, analysis, and retrieval at scale. The workshop incorporated different paper sessions and the 5th edition of the CL-SciSumm Shared Task.

