SPUCL (Scientific Publication Classifier): A Human-Readable Labelling System for Scientific Publications

2021 ◽  
Vol 11 (19) ◽  
pp. 9154
Author(s):  
Noemi Scarpato ◽  
Alessandra Pieroni ◽  
Michela Montorsi

Critically assessing the scientific literature is a very challenging task; in general, it requires analysing many documents to define the state of the art of a research field and to classify them. Document classifier systems have addressed this problem with different techniques, such as probabilistic, machine learning, and neural network models. One of the most popular document classification approaches is Latent Dirichlet Allocation (LDA), a probabilistic topic model. A main issue with the LDA approach is that the retrieved topics are collections of terms with associated probabilities and thus lack a human-readable form. This paper defines an approach that makes LDA topics comprehensible for humans by exploiting the Word2Vec approach.
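The general idea of labelling an LDA topic's term list via word embeddings can be illustrated with a minimal sketch. This is not the authors' SPUCL implementation: the tiny 2-D "embeddings" and the candidate labels below are invented stand-ins for trained Word2Vec vectors, and a real system would pick labels from a much larger vocabulary.

```python
# Toy sketch: label an LDA topic by the candidate word whose embedding
# lies closest (by cosine similarity) to the centroid of the topic's
# top terms. The 2-D vectors are made up for illustration only.
import math

embeddings = {
    "neuron": (0.9, 0.1), "synapse": (0.8, 0.2), "cortex": (0.85, 0.15),
    "market": (0.1, 0.9), "price": (0.2, 0.8),
    "neuroscience": (0.87, 0.12), "economics": (0.15, 0.85),
}

def centroid(words):
    vectors = [embeddings[w] for w in words]
    return tuple(sum(component) / len(vectors) for component in zip(*vectors))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def label_topic(top_terms, candidates):
    # Choose the candidate label most similar to the topic's term centroid.
    c = centroid(top_terms)
    return max(candidates, key=lambda w: cosine(embeddings[w], c))

print(label_topic(["neuron", "synapse", "cortex"],
                  ["neuroscience", "economics"]))  # -> neuroscience
```

The topic's top terms here would come from the per-topic term probabilities that LDA produces; the sketch shows only how an embedding space can turn that term list into a single human-readable label.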

2021 ◽  
Author(s):  
Nicholas Buhagiar ◽  
Bahram Zahir ◽  
Abdolreza Abhari

The probabilistic topic model Latent Dirichlet Allocation (LDA) was deployed to model the themes of discourse in discussion threads on the social media aggregation website Reddit. Discussion threads were abstracted as vectors of topic weights, and these vectors were fed into several neural network architectures, each with a different number of hidden layers, to train machine learning models that could identify which discussions would be of interest for a given user to contribute to. Using accuracy as the evaluation metric to select the best-performing model framework on a given user’s validation set, the selected models achieved an average accuracy of 66.1% on the test data for a sample set of 30 users. Using the probabilities of interest predicted by these neural networks, recommender systems were further built and analyzed for each user.
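The shape of this pipeline (threads abstracted as topic-weight vectors, then a per-user interest classifier) can be sketched minimally. The hard-coded topic weights below stand in for an actual fitted LDA model, and a simple nearest-centroid rule stands in for the paper's neural networks.

```python
# Sketch: classify whether a thread interests a user by comparing its
# LDA topic-weight vector to the centroids of threads the user did and
# did not engage with. All vectors are invented stand-ins.

def centroid(vectors):
    return [sum(column) / len(vectors) for column in zip(*vectors)]

def dist2(a, b):
    # Squared Euclidean distance between two topic-weight vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Topic-weight vectors for threads one user did / did not contribute to.
liked     = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]]
not_liked = [[0.1, 0.2, 0.7], [0.2, 0.1, 0.7]]

def predict_interest(thread_vec):
    return dist2(thread_vec, centroid(liked)) < dist2(thread_vec, centroid(not_liked))

print(predict_interest([0.65, 0.25, 0.10]))  # -> True
```

The point of the abstraction is that a thread of arbitrary length is reduced to a fixed-size vector (one weight per topic), which any standard classifier can then consume.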


2017 ◽  
Vol 5 ◽  
pp. 191-204 ◽  
Author(s):  
Jooyeon Kim ◽  
Dongwoo Kim ◽  
Alice Oh

Much of scientific progress stems from previously published findings, but searching through the vast sea of scientific publications is difficult. We often rely on metrics of scholarly authority to find prominent authors, but these authority indices do not differentiate authority based on research topics. We present Latent Topical-Authority Indexing (LTAI) for jointly modeling the topics, citations, and topical authority in a corpus of academic papers. Compared to previous models, LTAI differs in two main aspects. First, it explicitly models the generative process of the citations, rather than treating the citations as given. Second, it models each author’s influence on citations of a paper based on the topics of the cited papers, as well as the citing papers. We fit LTAI to four academic corpora: CORA, Arxiv Physics, PNAS, and Citeseer. We compare the performance of LTAI against various baselines, starting with latent Dirichlet allocation and moving to more advanced models, including the author-link topic model and the dynamic author citation topic model. The results show that LTAI achieves improved accuracy over other similar models when predicting the words, citations, and authors of publications.


2020 ◽  
Vol 12 (12) ◽  
pp. 4830 ◽  
Author(s):  
Cecilia Elizabeth Bayas Aldaz ◽  
Jesus Rodriguez-Pomeda ◽  
Leyla Angélica Sandoval Hamón ◽  
Fernando Casani

This article provides universities with a procedure for understanding the social perception of their sustainability activities through the analysis of news published in the printed media. It identifies the Spanish news sources that have covered this issue the most and the topics that appear in that news coverage. Using a probabilistic topic model, Latent Dirichlet Allocation, the study identifies the nine dominant topics within a corpus of more than seventeen thousand published news items (totaling approximately five and a quarter million words) drawn from a database of almost thirteen hundred national press sources between 2014 and 2017. It also finds that the amount of news on sustainability and universities declined during the covered period. The nine identified topics point towards the relevance of higher education institutions’ activities as drivers of sustainability, and the social perception encapsulated within the topics signals the public’s interest in these activities. Therefore, we find some interesting relationships between sustainable development, higher education institutions’ missions and behaviors, governmental policies, university funding and governance, social and economic innovation, and green campuses in terms of the overall goal of sustainability.


2021 ◽  
Vol 13 (2) ◽  
pp. 763
Author(s):  
Simona Fiandrino ◽  
Alberto Tonelli

The recent Review of the Non-Financial Reporting Directive (NFRD) aims to enhance adequate non-financial information (NFI) disclosure and improve accountability for stakeholders. This study focuses on this regulatory intervention and has a twofold objective: first, it aims to understand the main underlying issues at stake; second, it suggests areas of possible amendment considering the current debates on sustainability accounting and accounting for stakeholders. In keeping with these aims, the research analyzes the documents annexed to the contribution on the Review of the NFRD by conducting a text-mining analysis with a latent Dirichlet allocation (LDA) probabilistic topic model (PTM). Our findings highlight four main topics at the core of the current debate: quality of NFI, standardization, materiality, and assurance. The research suggests ways of improving managerial policies to achieve more comparable, relevant, and reliable information by bringing value creation for stakeholders into accounting. It further addresses an integrated logic of accounting for stakeholders that contributes to sustainable development.


2017 ◽  
Author(s):  
Redhouane Abdellaoui ◽  
Pierre Foulquié ◽  
Nathalie Texier ◽  
Carole Faviez ◽  
Anita Burgun ◽  
...  

BACKGROUND Medication nonadherence is a major impediment to the management of many health conditions. A better understanding of the factors underlying noncompliance with treatment may help health professionals to address it. Patients use peer-to-peer virtual communities and social media to share their experiences regarding their treatments and diseases. Topic models make it possible to model the themes present in a collection of posts and thus to identify cases of noncompliance. OBJECTIVE The aim of this study was to detect messages describing patients’ noncompliant behaviors associated with a drug of interest, that is, to cluster posts featuring a homogeneous vocabulary related to nonadherent attitudes. METHODS We focused on escitalopram and aripiprazole, used to treat depression and psychotic conditions, respectively. We implemented a probabilistic topic model to identify the topics occurring in a corpus of messages mentioning these drugs, posted from 2004 to 2013 on three of the most popular French forums. Data were collected using a Web crawler designed by Kappa Santé as part of the Detec’t project to analyze social media for drug safety. Several topics were related to noncompliance with treatment. RESULTS Starting from a corpus of 3650 posts related to an antidepressant drug (escitalopram) and 2164 posts related to an antipsychotic drug (aripiprazole), the use of latent Dirichlet allocation allowed us to model several themes, including interruptions of treatment and changes in dosage. The topic model approach detected cases of noncompliant behavior with a recall of 98.5% (272/276) and a precision of 32.6% (272/844). CONCLUSIONS Topic models enabled us to explore patients’ discussions on community websites and to identify posts related to noncompliant behaviors. After a manual review of the messages in the noncompliance topics, we found that noncompliance with treatment was present in 6.17% (276/4469) of the posts.
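The detection step described above can be sketched in miniature: a post is flagged when its dominant LDA topic is one of the topics judged (after manual review) to concern noncompliance. The per-post topic distributions and the set of noncompliance topics below are invented stand-ins for an actual fitted model, not the Detec’t pipeline itself.

```python
# Sketch: flag posts whose dominant topic belongs to the set of
# manually identified noncompliance topics.

NONCOMPLIANCE_TOPICS = {2}  # e.g. a hypothetical "interruption of treatment" topic

posts = {
    "p1": [0.1, 0.2, 0.7],  # mostly topic 2 -> flagged
    "p2": [0.6, 0.3, 0.1],  # mostly topic 0 -> not flagged
}

def flagged(topic_dist):
    # Index of the highest-weight topic in this post's distribution.
    dominant = max(range(len(topic_dist)), key=topic_dist.__getitem__)
    return dominant in NONCOMPLIANCE_TOPICS

print([post_id for post_id, dist in posts.items() if flagged(dist)])  # -> ['p1']
```

The high recall and modest precision reported above are typical of such a rule: it catches nearly all noncompliance posts but also flags many posts whose dominant topic merely resembles the noncompliance vocabulary, hence the manual review step.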


2019 ◽  
Vol 26 (7-8) ◽  
pp. 414-432 ◽  
Author(s):  
George Drosatos ◽  
Eleni Kaldoudi

Introduction eHealth emerged as an interdisciplinary research area about 70 years ago. This study employs probabilistic techniques to semantically analyse the scientific literature related to the field of eHealth in order to identify topics and trends and discuss their comparative evolution. Methods The authors collected titles and abstracts of published literature on eHealth as indexed in PubMed. Basic statistical and bibliometric techniques were applied to describe the collected corpus overall; Latent Dirichlet Allocation was employed for unsupervised topic identification; topic trends analysis was performed, and correlation graphs were plotted where relevant. Results A total of 30,425 records on eHealth were retrieved from PubMed (all records till 31 December 2017, search on 8 May 2018), and 23,988 of these were included in the study corpus. The eHealth domain shows growth higher than that of the entire PubMed corpus, with a mean increase in the eHealth corpus proportion of about 7% per year for the last 20 years. Probabilistic topic modelling identified 100 meaningful topics, which were organised by the authors into nine categories: general; service model; disease; medical specialty; behaviour and lifestyle; education; technology; evaluation; and regulatory issues. Discussion Trends analysis shows a continuous shift in focus. Early emphasis on medical image transmission and system integration has been replaced by increased focus on standards, wearables and sensor devices, now giving way to mobile applications, social media and data analytics. Attention to disease is also shifting, from the initial popularity of surgery, trauma and acute heart disease, to the emergence of chronic disease support, and the recent attention to cancer, infectious disease, mental disorders, paediatrics and perinatal care; most interestingly, the current swift increase is in research related to lifestyle and behaviour change.
The steady growth of all topics related to assessment and various systematic evaluation techniques indicates a maturing research field that moves towards real world application.


Author(s):  
Min Tang ◽  
Jian Jin ◽  
Ying Liu ◽  
Chunping Li ◽  
Weiwen Zhang

Analyzing online product reviews has drawn much interest in the academic field. In this research, a new probabilistic topic model, called the tag sentiment aspect model (TSA), is proposed on the basis of latent Dirichlet allocation (LDA); it aims to reveal latent aspects and the corresponding sentiment in a review simultaneously. Unlike other topic models, which consider only the words in online reviews, syntax tags are taken as additional observed information; in this research, part-of-speech (POS) tags, a widely used kind of syntax information, are considered first. Specifically, POS tags are integrated into three versions of the implementation, in consideration of the fact that words with different POS tags might be used to express consumers' opinions. The proposed TSA is an unsupervised approach, and only a small number of positive and negative words are required to define different priors for training. Finally, two large datasets concerning digital SLR cameras and laptops are used to evaluate the performance of the proposed model in terms of sentiment classification and aspect extraction. Comparative experiments show that the new model not only achieves promising results on sentiment classification but also improves performance on aspect extraction.


2020 ◽  
Vol 36 (18) ◽  
pp. 4757-4764
Author(s):  
Liran Juan ◽  
Yongtian Wang ◽  
Jingyi Jiang ◽  
Qi Yang ◽  
Guohua Wang ◽  
...  

Abstract Motivation Evaluating genome similarity among individuals is an essential step in data analysis. Advanced sequencing technology detects more, and rarer, variants across massive numbers of individual genomes, thus enabling individual-level genome similarity evaluation. However, current methodologies, such as principal component analysis (PCA), lack the capability to fully leverage rare variants and are also difficult to interpret in terms of population genetics. Results Here, we introduce a probabilistic topic model, latent Dirichlet allocation, to evaluate individual genome similarity. A total of 2535 individuals from the 1000 Genomes Project (KGP) were used to demonstrate our method. Various aspects of variant choice and model parameter selection were studied. We found that relatively rare (0.001 < allele frequency < 0.175) and sparse (average interval > 20 000 bp) variants are more efficient for genome similarity evaluation; at least 100 000 such variants are necessary. In our results, the populations show a significantly less mixed and more cohesive visualization than the PCA results. The global similarities among the KGP genomes are consistent with known geographical, historical and cultural factors. Availability and implementation The source code and data access are available at: https://github.com/lrjuan/LDA_genome. Supplementary information Supplementary data are available at Bioinformatics online.
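The variant pre-selection described above (relatively rare and sparse variants) can be sketched as a simple filter. Note the simplification: the abstract specifies an *average* interval above 20 000 bp, whereas this sketch enforces a minimum gap between consecutive kept variants; the toy variant list is invented for illustration.

```python
# Sketch: keep variants whose allele frequency lies in an intermediate
# band and that are spaced well apart along the chromosome.

def filter_variants(variants, af_lo=0.001, af_hi=0.175, min_gap=20_000):
    kept, last_pos = [], None
    for pos, allele_freq in sorted(variants):
        if not (af_lo < allele_freq < af_hi):
            continue  # too common or too rare
        if last_pos is not None and pos - last_pos <= min_gap:
            continue  # too close to the previously kept variant
        kept.append(pos)
        last_pos = pos
    return kept

variants = [(1_000, 0.05), (5_000, 0.10), (30_000, 0.02),
            (40_000, 0.50), (60_000, 0.0005), (80_000, 0.15)]
print(filter_variants(variants))  # -> [1000, 30000, 80000]
```

Each individual's genome, restricted to the kept variants, then plays the role of a "document" whose "words" are variant alleles, which is what lets LDA assign interpretable population components.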


1996 ◽  
Vol 33 (4-5) ◽  
pp. 63-72
Author(s):  
Federico Preti

Monitoring and modelling are two complementary instruments necessary for the analysis of pollution phenomena, such as groundwater contamination and lake eutrophication, often generated by diffuse (nonpoint) sources (NPS). A review of the scientific literature was conducted to obtain the information necessary to develop a correct methodology for environmental field monitoring and modelling of agricultural nonpoint pollution. A questionnaire was handed out to several researchers involved in this research field in order to learn of other pertinent activities being undertaken and to facilitate the exchange of information. Testing and verification of a methodology for the analysis of contamination caused by the use of agrochemicals, based on field monitoring studies and the application of a distributed nonpoint pollution model, were conducted in Italy. Based on the research developed and on practical experience, we propose some of the main guidelines for conducting studies of pollution processes caused by agriculture, together with a summary of the theoretical and practical aspects encountered in the design of field- and basin-scale model validation studies and in the use of published experimental results to test models.

