Latent Dirichlet Allocation and t-Distributed Stochastic Neighbor Embedding Enhance Scientific Reading Comprehension of Articles Related to Enterprise Architecture

AI ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 179-194
Author(s):  
Nils Horn ◽  
Fabian Gampfer ◽  
Rüdiger Buchkremer

As the amount of scientific information grows steadily, it is crucial to improve fast-reading comprehension. To grasp many scientific articles in a short period, artificial intelligence becomes essential. This paper applies artificial intelligence methodologies to examine broad topics, such as enterprise architecture, in scientific articles. Analyzing abstracts with latent Dirichlet allocation or inverse document frequency appears to be more beneficial than exploring full texts. Furthermore, we demonstrate that t-distributed stochastic neighbor embedding is well suited to exploring the degree of connectivity to neighboring topics, such as complexity theory. Artificial intelligence produces results similar to those obtained by manual reading. Our full-text study confirms enterprise architecture trends such as sustainability and modeling languages.
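The pipeline this abstract describes can be approximated in a few lines. Below is a minimal, hypothetical sketch (not the authors' code) that fits LDA on a placeholder corpus of abstracts and then projects the document-topic mixtures with t-SNE to see how close neighboring topics lie; the corpus, topic count, and perplexity are illustrative assumptions.

```python
# Hypothetical sketch: LDA on abstracts, then t-SNE over the
# document-topic mixtures. Corpus and parameters are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.manifold import TSNE

abstracts = [
    "enterprise architecture complexity management",
    "modeling languages for enterprise architecture",
    "sustainability trends in large scale IT landscapes",
]  # stand-in corpus; the study analyzed scientific abstracts

counts = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # per-document topic mixtures

# t-SNE embeds the mixtures in 2-D; nearby points suggest documents
# whose topics border each other (e.g., complexity theory).
embedding = TSNE(n_components=2, perplexity=2.0,
                 random_state=0).fit_transform(doc_topics)
print(embedding)
```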

10.2196/15511 ◽  
2019 ◽  
Vol 21 (11) ◽  
pp. e15511 ◽  
Author(s):  
Bach Xuan Tran ◽  
Son Nghiem ◽  
Oz Sahin ◽  
Tuan Manh Vu ◽  
Giang Hai Ha ◽  
...  

Background Artificial intelligence (AI)–based technologies are developing rapidly and have myriad applications in medicine and health care. However, there is a lack of comprehensive reporting on the productivity, workflow, topics, and research landscape of AI in this field. Objective This study aimed to evaluate the global development of scientific publications and to construct interdisciplinary research topics on the theory and practice of AI in medicine from 1977 to 2018. Methods We obtained bibliographic data and abstract contents of publications published between 1977 and 2018 from the Web of Science database. A total of 27,451 eligible articles were analyzed. Research topics were classified by latent Dirichlet allocation, and principal component analysis was used to identify the construct of the research landscape. Results The applications of AI have mainly impacted clinical settings (enhanced prognosis and diagnosis, robot-assisted surgery, and rehabilitation), data science and precision medicine (collecting individual data for precision medicine), and policy making (raising ethical and legal issues, especially regarding the privacy and confidentiality of data). However, AI applications have not been commonly used in resource-poor settings owing to limited infrastructure and human resources. Conclusions The application of AI in medicine has grown rapidly and focuses on three leading platforms: clinical practice, clinical material, and policies. AI might be one way to narrow the inequality in health care and medicine between developing and developed countries. Technology transfer and support from developed countries are essential measures for the advancement of AI applications in health care in developing countries.
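The Methods section pairs LDA classification with principal component analysis over the resulting topic structure. A hedged sketch of that pattern, with a tiny toy corpus standing in for the 27,451 Web of Science records, might look as follows.

```python
# Hypothetical sketch: LDA topic mixtures, then PCA over them to
# outline the research landscape. Corpus and counts are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation, PCA

abstracts = [
    "deep learning improves diagnosis and prognosis in oncology",
    "robot assisted surgery outcomes and rehabilitation",
    "privacy and confidentiality of patient data in AI policy",
    "precision medicine from individual level health data",
]

X = CountVectorizer(stop_words="english").fit_transform(abstracts)
theta = LatentDirichletAllocation(n_components=3,
                                  random_state=0).fit_transform(X)

# PCA over the document-topic mixtures sketches the 'construct' of the
# landscape: components group topics that co-occur across papers.
pca = PCA(n_components=2)
coords = pca.fit_transform(theta)
print(pca.explained_variance_ratio_)
```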


2021 ◽  
Vol 13 (19) ◽  
pp. 10856
Author(s):  
I-Cheng Chang ◽  
Tai-Kuei Yu ◽  
Yu-Jie Chang ◽  
Tai-Yi Yu

Facing the big data wave, this study applied artificial intelligence to knowledge citation and sought a feasible process that can play a crucial role in supplying innovative value in environmental education. Intelligent agents and natural language processing (NLP) are two key areas leading the trend in artificial intelligence; this research adopted NLP to analyze the research topics of environmental education research journals in the Web of Science (WoS) database during 2011–2020 and to interpret the categories and characteristics of abstracts of environmental education papers. The corpus data were selected from the abstracts and keywords of research journal papers and were analyzed with text mining, cluster analysis, latent Dirichlet allocation (LDA), and co-word analysis methods. The classification of feature words was determined and reviewed by domain experts, and the associated TF-IDF weights were calculated for the subsequent cluster analysis, which combined hierarchical clustering and K-means analysis. Hierarchical clustering and LDA set the number of required categories at seven, and K-means cluster analysis classified the overall documents into seven categories. This study utilized co-word analysis to check the suitability of the K-means classification, analyzed the terms with high TF-IDF weights for distinct K-means groups, and examined the terms for different topics with the LDA technique. A comparison of the results demonstrated that most categories recognized by the K-means and LDA methods were the same and shared similar words; however, two categories differed slightly. The involvement of field experts supported the consistency and correctness of the classified topics and documents.
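A compressed, hypothetical sketch of the clustering chain described here (TF-IDF weighting, hierarchical clustering to suggest a category count, then K-means assignment) follows; the documents and the three-cluster setting in the toy are invented, while the seven-category result applies only to the paper's real corpus.

```python
# Hypothetical sketch of the TF-IDF -> hierarchical -> K-means chain.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering, KMeans

docs = [
    "environmental literacy of primary school students",
    "climate change education and teacher training",
    "outdoor learning and nature connectedness",
    "sustainability behavior change interventions",
    "citizen science for water quality monitoring",
    "curriculum design for ecological awareness",
    "place based environmental education programs",
]  # stand-in corpus; the study grouped WoS abstracts into seven categories

X = TfidfVectorizer(stop_words="english").fit_transform(docs)

# Hierarchical (Ward) clustering to inspect candidate category counts.
hier = AgglomerativeClustering(n_clusters=3).fit(X.toarray())

# K-means then produces the final assignment (seven in the paper).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(hier.labels_, labels)
```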


10.2196/14401 ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. e14401 ◽  
Author(s):  
Bach Xuan Tran ◽  
Carl A Latkin ◽  
Noha Sharafeldin ◽  
Katherina Nguyen ◽  
Giang Thu Vu ◽  
...  

Background Artificial intelligence (AI)–based therapeutics, devices, and systems are vital innovations in cancer control; in particular, they allow for diagnosis, screening, precise estimation of survival, therapy selection, and timely scaling up of treatment services. Objective The aim of this study was to analyze the global trends, patterns, and development of interdisciplinary landscapes in AI and cancer research. Methods An exploratory factor analysis was conducted to identify research domains emerging from abstract contents. The Jaccard similarity index was utilized to identify the most frequently co-occurring terms. Latent Dirichlet allocation was used to classify papers into corresponding topics. Results From 1991 to 2018, the number of studies examining the application of AI in cancer care grew to 3555 papers covering therapeutics, capacities, and factors associated with outcomes. Topics with the highest volume of publications include (1) machine learning, (2) comparative effectiveness evaluation of AI-assisted medical therapies, and (3) AI-based prediction. Notably, this classification revealed topics examining the incremental effectiveness of AI applications and the quality of life and functioning of patients receiving these innovations. The growing research productivity and expansion of multidisciplinary approaches are largely driven by machine learning, artificial neural networks, and AI in various clinical practices. Conclusions The research landscape shows that the development of AI in cancer care is focused not only on improving prediction in cancer screening and AI-assisted therapeutics but also on other corresponding areas, such as precision and personalized medicine and patient-reported outcomes.
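Of the three methods named, the Jaccard index is the simplest to make concrete: for two terms, it is the size of the intersection of the paper sets mentioning each term divided by the size of their union. A small sketch with invented term-paper sets:

```python
# Jaccard similarity over which papers mention which terms.
# The term-paper sets below are hypothetical, for illustration only.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# Papers (by index) in which each term appears.
papers_with = {
    "machine learning": {1, 2, 3, 5, 8},
    "neural network":   {2, 3, 5, 9},
    "screening":        {1, 4, 7},
}

pairs = [("machine learning", "neural network"),
         ("machine learning", "screening")]
for t1, t2 in pairs:
    sim = jaccard(papers_with[t1], papers_with[t2])
    print(f"{t1} ~ {t2}: {sim:.3f}")
```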


Author(s):  
Bach Xuan Tran ◽  
Roger S. McIntyre ◽  
Carl A. Latkin ◽  
Hai Thanh Phan ◽  
Giang Thu Vu ◽  
...  

Artificial intelligence (AI)-based techniques have been widely applied in depression research and treatment. Nonetheless, there is currently no systematic review or bibliometric analysis in the medical literature of the applications of AI in depression. We performed a bibliometric analysis of the current research landscape, which objectively evaluated the productivity of global researchers and institutions in this field, along with exploratory factor analysis (EFA) and latent Dirichlet allocation (LDA). From 2010 onwards, the total number of papers and citations on using AI to manage depressive disorder has risen considerably. In terms of the global AI research network, researchers from the United States were the major contributors to this field. Exploratory factor analysis showed that the most well-studied application of AI was the use of machine learning to identify clinical characteristics of depression, which accounted for more than 60% of all publications. Latent Dirichlet allocation identified specific research themes, including diagnostic accuracy, structural imaging techniques, gene testing, drug development, pattern recognition, and electroencephalography (EEG)-based diagnosis. Although the rapid development and widespread use of AI provide various benefits for both health providers and patients, interventions to address privacy and confidentiality issues remain limited and require further research.
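The exploratory factor analysis step can be sketched with scikit-learn's FactorAnalysis (varimax rotation): factor loadings over term frequencies group terms into research domains. The data below are random stand-ins for a papers-by-terms matrix, not the study's corpus.

```python
# Hedged EFA sketch: factors over term frequencies ~ research domains.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(100, 12)).astype(float)  # 100 papers x 12 terms

efa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
efa.fit(X)

# Loadings: each column is a factor ('domain'); the terms that load
# highly on a factor define its theme.
print(np.round(efa.components_.T, 2))
```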


2019 ◽  
Author(s):  
Bach Xuan Tran ◽  
Carl A. Latkin ◽  
Noha Sharafeldin ◽  
Katherina Nguyen ◽  
Giang Thu Vu ◽  
...  

BACKGROUND Artificial Intelligence (AI)-based therapeutics, devices, and systems are vital innovations in cancer control. OBJECTIVE This study analyzes the global trends, patterns, and development of interdisciplinary landscapes in AI and cancer research. METHODS Exploratory factor analysis was applied to identify research domains emerging from the contents of the abstracts. Jaccard's similarity index was utilized to identify the terms that most frequently co-occurred with each other. Latent Dirichlet Allocation was used to classify papers into corresponding topics. RESULTS The number of studies applying AI to cancer during 1991-2018 grew to 3,555 papers covering therapeutics, capacities, and factors associated with outcomes. Topics with the highest volumes of publications include 1) machine learning, 2) comparative effectiveness evaluation of AI-assisted medical therapies, and 3) AI-based prediction. Noticeably, this classification revealed topics examining the incremental effectiveness of AI applications and the quality of life and functioning of patients receiving these innovations. The growing research productivity and expansion of multidisciplinary approaches are largely driven by machine learning, artificial neural networks, and artificial intelligence in various clinical practices. CONCLUSIONS The research landscapes show that the development of AI in cancer is focused not only on improving prediction in cancer screening and AI-assisted therapeutics, but also on other corresponding areas such as precision and personalized medicine and patient-reported outcomes.
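Both versions of this abstract use LDA to classify papers into topics. A minimal sketch of that assignment step, with an invented three-paper corpus, fits each paper with its highest-probability topic:

```python
# Hypothetical sketch: assign each paper its dominant LDA topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

papers = [
    "convolutional networks for tumor image classification",
    "comparative effectiveness of AI assisted radiotherapy planning",
    "predicting cancer survival with gradient boosting models",
]  # stand-in corpus; the study classified 3,555 papers

X = CountVectorizer(stop_words="english").fit_transform(papers)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

dominant_topic = lda.transform(X).argmax(axis=1)  # one label per paper
print(dominant_topic)
```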


Mathematics ◽  
2021 ◽  
Vol 9 (18) ◽  
pp. 2281
Author(s):  
Karime Montes Escobar ◽  
José Luis Vicente-Villardon ◽  
Javier de la Hoz-M ◽  
Lelly María Useche-Castro ◽  
Daniel Fabricio Alarcón Cano ◽  
...  

Background: Neuroendocrine tumors (NETs) are severe and relatively rare and may affect any organ of the human body. The prevalence of NETs has increased in recent years; however, the available data concentrate on particular types, and despite the efforts of different guidelines, there is no consensus on how to identify the different types of NETs. In this review, we investigated which countries published the most articles about NETs, the organs most frequently affected, and the most common related topics. Methods: This work used the Latent Dirichlet Allocation (LDA) method to identify and interpret scientific information in relation to the categories in a set of documents. The HJ-Biplot method was also used to determine the relationship between the analyzed topics, taking into consideration the years under study. Results: A literature review was conducted, from which a total of 7658 abstracts of scientific articles published between 1981 and 2020 were extracted. The United States, Germany, the United Kingdom, France, and Italy published the majority of studies on NETs, of which pancreatic tumors were the most studied. The five most frequent topics were t_21 (clinical benefit), t_11 (pancreatic neuroendocrine tumors), t_13 (patients one year after treatment), t_17 (prognosis of survival before and after resection), and t_3 (markers for carcinomas). Finally, the results were put through a two-way multivariate analysis (HJ-Biplot), which generated a new interpretation: we grouped topics by year and discovered which NETs were most relevant in which years. Conclusions: The most frequent topics found in our review highlight the severity of NETs: patients have a poor prognosis of survival and a high probability of tumor recurrence.
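The HJ-Biplot (Galindo, 1986) represents the rows and columns of a matrix with equal quality by scaling both sets of SVD markers by the singular values, so topics and years can be read on the same plane. A hedged sketch over a random stand-in for the study's topics-by-years table:

```python
# Hedged HJ-Biplot sketch: both row and column markers scaled by the
# singular values. The matrix is random stand-in data, not the study's.
import numpy as np

rng = np.random.default_rng(1)
X = rng.poisson(5.0, size=(22, 8)).astype(float)  # 22 topics x 8 year bins
Xc = X - X.mean(axis=0)                           # column-center

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
rows = U[:, :2] * s[:2]     # topic markers on the first two axes
cols = Vt.T[:, :2] * s[:2]  # year markers, same scaling (the HJ property)

print(rows[:3])
print(cols[:3])
```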


2021 ◽  
Vol 6 (1) ◽  
pp. 17
Author(s):  
Kartika Rizqi Nastiti ◽  
Ahmad Fathan Hidayatullah ◽  
Ahmad Rafie Pratama

Before conducting a research project, researchers must identify the trends and state of the art in their research field. However, that is not necessarily an easy job, partly due to the lack of specific tools to filter the required information by time range. This study aims to provide a solution to that problem by applying a topic modeling approach to data scraped from Google Scholar between 2010 and 2019. We utilized Latent Dirichlet Allocation (LDA) combined with Term Frequency–Inverse Document Frequency (TF-IDF) to build topic models and employed the coherence score method to determine how many distinct topics exist in each year's data. We also provided a visualization of the topic interpretation and the word distribution for each topic, as well as its relevance, using word clouds and pyLDAvis. In the future, we expect to add features that show the relevance of and interconnections between topics, making it even easier for researchers to use this tool in their research projects.
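A toy sketch of the coherence-based model selection this abstract describes, using gensim: fit LDA for several topic counts and keep the count with the best coherence. The abstract does not name a coherence measure, so u_mass is an assumption here, and the corpus is invented.

```python
# Hypothetical sketch: pick the number of topics by coherence score.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

texts = [
    ["topic", "modeling", "scholar", "abstracts"],
    ["latent", "dirichlet", "allocation", "topics"],
    ["coherence", "score", "selects", "topic", "count"],
    ["word", "cloud", "visualization", "of", "topics"],
]  # stand-in tokenized corpus
dct = Dictionary(texts)
corpus = [dct.doc2bow(t) for t in texts]

def coherence(k: int) -> float:
    lda = LdaModel(corpus, num_topics=k, id2word=dct, random_state=0)
    return CoherenceModel(model=lda, corpus=corpus, dictionary=dct,
                          coherence="u_mass").get_coherence()

best = max(range(2, 5), key=coherence)
print("best number of topics:", best)
```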


Author(s):  
Priyanka R. Patil ◽  
Shital A. Patil

Similarity View is an application for visually comparing and exploring multiple models of text and collections of documents. Friendbook infers users' lifestyles from user-centric sensor data, measures the similarity of lifestyles among users, and recommends friends to users if their lifestyles show high similarity. Motivated by this, a user's daily life is modeled as life documents, from which lifestyles are extracted using the latent Dirichlet allocation algorithm. Manual techniques cannot be relied on for checking research papers, as the assigned reviewer may have insufficient knowledge of the research disciplines involved, and subjective views can cause misinterpretations. There is thus an urgent need for an effective and feasible approach to check submitted research papers with the support of automated software. Text mining methods can solve the problem of automatically checking research papers semantically. The proposed method finds the similarity of texts in a collection of documents using the latent Dirichlet allocation (LDA) algorithm and latent semantic analysis (LSA) with a synonym algorithm, which finds synonyms of indexed terms using the English WordNet dictionary; a second variant, LSA without synonyms, finds the similarity of texts based on the index alone. The accuracy of LSA with synonyms is greater when synonyms are considered for matching.
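A hedged sketch of the LSA-with-synonyms idea: expand each document's tokens with WordNet synonyms before building the term space, then compare documents in a reduced latent space. The corpus and the expansion scheme are illustrative assumptions, and NLTK's WordNet data must be downloaded once.

```python
# Illustrative sketch: WordNet synonym expansion + LSA similarity.
# Run nltk.download("wordnet") once before using the corpus reader.
from nltk.corpus import wordnet
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def expand(text: str) -> str:
    """Append WordNet synonyms of every token to the document."""
    words = text.split()
    extra = [lemma.name() for w in words
             for syn in wordnet.synsets(w)
             for lemma in syn.lemmas()]
    return " ".join(words + extra)

docs = [
    "automatic checking of research papers",
    "automated verification of scientific manuscripts",
    "friend recommendation from lifestyle similarity",
]

X = TfidfVectorizer().fit_transform(expand(d) for d in docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
print(cosine_similarity(lsa))  # pairwise similarity with synonyms
```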

