Modeling Research Topics for Artificial Intelligence Applications in Medicine: Latent Dirichlet Allocation Application Study

10.2196/15511 ◽  
2019 ◽  
Vol 21 (11) ◽  
pp. e15511 ◽  
Author(s):  
Bach Xuan Tran ◽  
Son Nghiem ◽  
Oz Sahin ◽  
Tuan Manh Vu ◽  
Giang Hai Ha ◽  
...  

Background Artificial intelligence (AI)–based technologies are developing rapidly and have myriad applications in medicine and health care. However, comprehensive reporting on the productivity, workflow, topics, and research landscape of AI in this field is lacking. Objective This study aimed to evaluate the global development of scientific publications and to construct interdisciplinary research topics on the theory and practice of AI in medicine from 1977 to 2018. Methods We obtained bibliographic data and abstract contents of publications published between 1977 and 2018 from the Web of Science database. A total of 27,451 eligible articles were analyzed. Research topics were classified by latent Dirichlet allocation, and principal component analysis was used to identify the construct of the research landscape. Results The applications of AI have mainly affected clinical settings (enhanced prognosis and diagnosis, robot-assisted surgery, and rehabilitation), data science and precision medicine (collecting individual data for precision medicine), and policy making (raising ethical and legal issues, especially regarding privacy and confidentiality of data). However, AI applications have not been commonly used in resource-poor settings owing to limited infrastructure and human resources. Conclusions The application of AI in medicine has grown rapidly and focuses on three leading platforms: clinical practices, clinical material, and policies. AI might be one way to narrow the inequality in health care and medicine between developing and developed countries. Technology transfer and support from developed countries are essential measures for the advancement of AI applications in health care in developing countries.
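The topic-classification step described in the Methods can be sketched in a few lines. The toy abstracts, topic count, and library choice (scikit-learn) below are illustrative assumptions, not the study's actual data or pipeline.

```python
# Minimal sketch: classify abstracts into topics with latent Dirichlet
# allocation (LDA). Abstracts and n_components are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "robot assisted surgery improves surgical outcomes",
    "deep learning aids diagnosis and prognosis of disease",
    "privacy and confidentiality of patient data raise legal issues",
    "precision medicine uses individual patient data",
]

# Bag-of-words counts are the standard input representation for LDA.
counts = CountVectorizer(stop_words="english").fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # each row is a per-document topic mix

# Assign each abstract to its dominant topic.
labels = doc_topics.argmax(axis=1)
```

In practice the per-document topic distributions (rather than the hard labels) would feed the downstream principal component analysis the study describes.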

10.2196/14401 ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. e14401 ◽  
Author(s):  
Bach Xuan Tran ◽  
Carl A Latkin ◽  
Noha Sharafeldin ◽  
Katherina Nguyen ◽  
Giang Thu Vu ◽  
...  

Background Artificial intelligence (AI)–based therapeutics, devices, and systems are vital innovations in cancer control; particularly, they allow for diagnosis, screening, precise estimation of survival, informing therapy selection, and scaling up treatment services in a timely manner. Objective The aim of this study was to analyze the global trends, patterns, and development of interdisciplinary landscapes in AI and cancer research. Methods An exploratory factor analysis was conducted to identify research domains emerging from abstract contents. The Jaccard similarity index was utilized to identify the most frequently co-occurring terms. Latent Dirichlet Allocation was used for classifying papers into corresponding topics. Results From 1991 to 2018, the number of studies examining the application of AI in cancer care has grown to 3555 papers covering therapeutics, capacities, and factors associated with outcomes. Topics with the highest volume of publications include (1) machine learning, (2) comparative effectiveness evaluation of AI-assisted medical therapies, and (3) AI-based prediction. Noticeably, this classification has revealed topics examining the incremental effectiveness of AI applications, the quality of life, and functioning of patients receiving these innovations. The growing research productivity and expansion of multidisciplinary approaches are largely driven by machine learning, artificial neural networks, and AI in various clinical practices. Conclusions The research landscapes show that the development of AI in cancer care is focused on not only improving prediction in cancer screening and AI-assisted therapeutics but also on improving other corresponding areas such as precision and personalized medicine and patient-reported outcomes.
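The Jaccard similarity step in the Methods reduces to a simple set computation over the documents each term appears in. The term names and document IDs below are hypothetical stand-ins.

```python
# Jaccard index between two terms, computed from the sets of documents
# containing each term: |A ∩ B| / |A ∪ B|.
def jaccard(docs_a, docs_b):
    a, b = set(docs_a), set(docs_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Document IDs containing each term (toy data).
term_docs = {
    "machine learning": {1, 2, 3, 5},
    "cancer screening": {2, 3, 4},
}

sim = jaccard(term_docs["machine learning"], term_docs["cancer screening"])
# intersection {2, 3} has 2 members; union {1, 2, 3, 4, 5} has 5 → 0.4
```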


Author(s):  
Giang Thu Vu ◽  
Bach Xuan Tran ◽  
Roger S. McIntyre ◽  
Hai Quang Pham ◽  
Hai Thanh Phan ◽  
...  

The rising prevalence and global burden of diabetes reinforce the need for more comprehensive and effective management to prevent, monitor, and treat diabetes and its complications. Applying artificial intelligence to complement the diagnosis, management, and prediction of the diabetes trajectory has become increasingly common over the years. This study aims to illustrate an inclusive landscape of the application of artificial intelligence in diabetes through a bibliometric analysis and to offer future directions for research. Bibliometric analysis was combined with exploratory factor analysis and latent Dirichlet allocation to uncover emergent research domains and topics related to artificial intelligence and diabetes. Data were extracted from the Web of Science Core Collection database. The results showed a rising trend in the number of papers and citations concerning AI applications in diabetes, especially since 2010. The nucleus driving the research and development of AI in diabetes is centered in developed countries, mainly the United States, which contributed 44.1% of the publications. Our analyses uncovered the top five emerging research domains: (i) use of artificial intelligence in the diagnosis of diabetes, (ii) risk assessment of diabetes and its complications, (iii) the role of artificial intelligence in novel treatments and monitoring in diabetes, (iv) application of telehealth and wearable technology in the daily management of diabetes, and (v) robotic surgical outcomes with diabetes as a comorbidity. Despite the benefits of artificial intelligence, challenges with system accuracy, validity, and breaches of confidentiality will need to be tackled before it can be widely applied for patients' benefit.
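The per-year trend counting that underlies a bibliometric analysis like this one can be sketched as follows; the record list is a toy stand-in for exported Web of Science metadata.

```python
# Count papers per publication year from a list of exported records.
from collections import Counter

records = [
    {"title": "AI for diabetic retinopathy", "year": 2016},
    {"title": "Glucose prediction with machine learning", "year": 2018},
    {"title": "Wearables in diabetes care", "year": 2018},
]

papers_per_year = Counter(r["year"] for r in records)
# → Counter({2018: 2, 2016: 1})
```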


2021 ◽  
Vol 13 (19) ◽  
pp. 10856
Author(s):  
I-Cheng Chang ◽  
Tai-Kuei Yu ◽  
Yu-Jie Chang ◽  
Tai-Yi Yu

Facing the big data wave, this study applied artificial intelligence to extract knowledge and establish a feasible process for supplying innovative value in environmental education. Intelligent agents and natural language processing (NLP) are two key areas leading the trend in artificial intelligence; this research adopted NLP to analyze the research topics of environmental education research journals in the Web of Science (WoS) database during 2011–2020 and to interpret the categories and characteristics of abstracts of environmental education papers. The corpus data were selected from the abstracts and keywords of research journal papers, which were analyzed with text mining, cluster analysis, latent Dirichlet allocation (LDA), and co-word analysis methods. Decisions regarding the classification of feature words were determined and reviewed by domain experts, and the associated TF-IDF weights were calculated for the subsequent cluster analysis, which combined hierarchical clustering and K-means analysis. Hierarchical clustering and LDA set the number of required categories at seven, and K-means cluster analysis classified the overall documents into seven categories. This study utilized co-word analysis to check the suitability of the K-means classification, analyzed the terms with high TF-IDF weights for distinct K-means groups, and examined the terms for different topics with the LDA technique. A comparison of the results demonstrated that most categories recognized with the K-means and LDA methods were the same and shared similar words; however, two categories had slight differences. The involvement of field experts supported the consistency and correctness of the classified topics and documents.
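The TF-IDF weighting followed by K-means clustering described above can be sketched as below; the corpus and cluster count are toy stand-ins (the study used seven categories on a much larger corpus).

```python
# TF-IDF features followed by K-means clustering of abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "climate change curriculum for schools",
    "teaching climate literacy in classrooms",
    "water quality monitoring in rivers",
    "river pollution and water sampling",
]

# TF-IDF weights down-weight terms common to the whole corpus.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)

# K-means groups documents whose feature words carry similar weights.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(tfidf)
labels = km.labels_
```

With real data, the number of clusters would be chosen first (here, via hierarchical clustering and LDA, as the study describes) rather than fixed by hand.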


2020 ◽  
Author(s):  
Enrico Santus ◽  
Nicola Marino ◽  
Davide Cirillo ◽  
Emmanuele Chersoni ◽  
Arnau Montagud ◽  
...  

Artificial intelligence (AI) technologies can play a key role in preventing, detecting, and monitoring epidemics. In this paper, we provide an overview of the recently published literature on the COVID-19 pandemic in four strategic areas: (1) triage, diagnosis, and risk prediction; (2) drug repurposing and development; (3) pharmacogenomics and vaccines; and (4) mining of the medical literature. We highlight how AI-powered health care can enable public health systems to efficiently handle future outbreaks and improve patient outcomes.


Author(s):  
Bach Xuan Tran ◽  
Roger S. McIntyre ◽  
Carl A. Latkin ◽  
Hai Thanh Phan ◽  
Giang Thu Vu ◽  
...  

Artificial intelligence (AI)-based techniques have been widely applied in depression research and treatment. Nonetheless, there is currently no systematic review or bibliometric analysis in the medical literature about the applications of AI in depression. We performed a bibliometric analysis of the current research landscape, which objectively evaluates the productivity of global researchers and institutions in this field, along with exploratory factor analysis (EFA) and latent Dirichlet allocation (LDA). From 2010 onwards, the total number of papers and citations on using AI to manage depressive disorder has risen considerably. In terms of the global AI research network, researchers from the United States were the major contributors to this field. Exploratory factor analysis showed that the most well-studied application of AI was the utilization of machine learning to identify clinical characteristics in depression, which accounted for more than 60% of all publications. Latent Dirichlet allocation identified specific research themes, including diagnostic accuracy, structural imaging techniques, gene testing, drug development, pattern recognition, and electroencephalography (EEG)-based diagnosis. Although the rapid development and widespread use of AI provide various benefits for both health providers and patients, interventions to enhance privacy and confidentiality are still limited and require further research.
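The exploratory-factor-analysis step used in studies like this one decomposes a paper-by-term frequency matrix into latent factors whose loadings group co-varying terms. The random matrix and factor count below are placeholders, not the study's data.

```python
# EFA sketch: factor loadings over a toy papers-by-terms matrix.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(100, 12)).astype(float)  # 100 papers x 12 terms

fa = FactorAnalysis(n_components=3, random_state=0).fit(X)
loadings = fa.components_  # shape (3 factors, 12 terms)
```

Terms with large loadings on the same factor would then be read as one research domain.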


AI ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 179-194
Author(s):  
Nils Horn ◽  
Fabian Gampfer ◽  
Rüdiger Buchkremer

As the amount of scientific information increases steadily, it is crucial to improve the speed of reading comprehension. To grasp many scientific articles in a short period, artificial intelligence becomes essential. This paper aims to apply artificial intelligence methodologies to examine broad topics, such as enterprise architecture, in scientific articles. Analyzing abstracts with latent Dirichlet allocation or inverse document frequency appears to be more beneficial than exploring full texts. Furthermore, we demonstrate that t-distributed stochastic neighbor embedding is well suited to exploring the degree of connectivity to neighboring topics, such as complexity theory. Artificial intelligence produces results similar to those obtained by manual reading. Our full-text study confirms enterprise architecture trends such as sustainability and modeling languages.
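The t-distributed stochastic neighbor embedding step mentioned above projects high-dimensional document-topic vectors into two dimensions so that neighboring topics can be inspected. The random Dirichlet vectors below stand in for real LDA output.

```python
# Project toy document-topic vectors to 2-D with t-SNE.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
doc_topic = rng.dirichlet(np.ones(10), size=50)  # 50 docs x 10 topics

# perplexity must stay below the number of samples.
emb = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(doc_topic)
```

Documents with similar topic mixes land near each other in the 2-D embedding, which is what makes neighborhood structure (e.g. a topic's proximity to complexity theory) visible.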


10.2196/13043 ◽  
2019 ◽  
Vol 21 (4) ◽  
pp. e13043 ◽  
Author(s):  
Jacob McPadden ◽  
Thomas JS Durant ◽  
Dustin R Bunch ◽  
Andreas Coppi ◽  
Nathaniel Price ◽  
...  

BACKGROUND Health care data are increasing in volume and complexity. Storing and analyzing these data to implement precision medicine initiatives and data-driven research has exceeded the capabilities of traditional computer systems. Modern big data platforms must be adapted to the specific demands of health care and designed for scalability and growth. OBJECTIVE The objectives of our study were to (1) demonstrate the implementation of a data science platform built on open source technology within a large, academic health care system and (2) describe 2 computational health care applications built on such a platform. METHODS We deployed a data science platform based on several open source technologies to support real-time, big data workloads. We developed data-acquisition workflows for Apache Storm and NiFi in Java and Python to capture patient monitoring and laboratory data for downstream analytics. RESULTS Emerging data management approaches, along with open source technologies such as Hadoop, can be used to create integrated data lakes to store large, real-time datasets. This infrastructure also provides a robust analytics platform where health care and biomedical research data can be analyzed in near real time for precision medicine and computational health care use cases. CONCLUSIONS The implementation and use of integrated data science platforms offer organizations the opportunity to combine traditional datasets, including data from the electronic health record, with emerging big data sources, such as continuous patient monitoring and real-time laboratory results. These platforms can enable cost-effective and scalable analytics for the information that will be key to the delivery of precision medicine initiatives. Organizations that can take advantage of the technical advances found in data science platforms will have the opportunity to provide comprehensive access to health care data for computational health care and precision medicine research.


2018 ◽  
Vol 36 (3) ◽  
pp. 400-410 ◽  
Author(s):  
Debin Fang ◽  
Haixia Yang ◽  
Baojun Gao ◽  
Xiaojun Li

Purpose Discovering research topics and trends from a large quantity of library electronic references is essential for scientific research. Current research of this kind mainly depends on human judgment. The purpose of this paper is to demonstrate how to identify research topics and trend evolution from library electronic references efficiently and effectively by employing automatic text analysis algorithms. Design/methodology/approach The authors used latent Dirichlet allocation (LDA), a probabilistic generative topic model, to extract the latent topics from a large quantity of research abstracts. Then, the authors conducted a regression analysis on the document-topic distributions generated by LDA to identify hot and cold topics. Findings First, this paper discovers 32 significant research topics from the abstracts of 3,737 articles published in the six top accounting journals during the period 1992-2014. Second, based on the document-topic distributions generated by LDA, the authors identified seven hot topics and six cold topics among the 32 topics. Originality/value The topics discovered by LDA are highly consistent with those identified by human experts, indicating the validity and effectiveness of the methodology. Therefore, this paper provides novel knowledge to the accounting literature and demonstrates a methodology and process for topic discovery with lower cost and higher efficiency than current methods.
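The hot/cold-topic test described in the approach amounts to regressing each topic's yearly mean share (from LDA's document-topic matrix) on publication year: a positive slope marks a hot topic, a negative one a cold topic. The values below are illustrative, not the paper's estimates.

```python
# Fit a linear trend to one topic's yearly mean share.
import numpy as np

years = np.arange(1992, 2015)
# Toy yearly mean shares for one topic: rising over time.
share = 0.02 + 0.001 * (years - 1992)

slope, intercept = np.polyfit(years, share, 1)
is_hot = slope > 0  # "hot" if the trend is upward
```

In the paper's setting, a significance test on the slope would separate genuine hot/cold topics from noise.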

