Application of Artificial Intelligence Methods to Pharmacy Data for Cancer Surveillance and Epidemiology Research: A Systematic Review

2020 ◽  
pp. 1051-1058
Author(s):  
Andrew E. Grothen ◽  
Bethany Tennant ◽  
Catherine Wang ◽  
Andrea Torres ◽  
Bonny Bloodgood Sheppard ◽  
...  

PURPOSE The implementation and utilization of electronic health records is generating a large volume and variety of data, which are difficult to process using traditional techniques. However, these data could help answer important questions in cancer surveillance and epidemiology research. Artificial intelligence (AI) data processing methods are capable of evaluating large volumes of data, yet the current literature on their use in the context of pharmacy informatics is not well characterized. METHODS A systematic literature review was conducted to evaluate relevant publications within four domains (cancer, pharmacy, AI methods, population science) across PubMed, EMBASE, Scopus, and the Cochrane Library, and included all publications indexed between July 17, 2008, and December 31, 2018. The search returned 3,271 publications, which were evaluated for inclusion. RESULTS Thirty-six studies met the criteria for full-text abstraction. Of those, only 45% specifically identified the pharmacy data source, and 55% specified drug agents or drug classes. Multiple AI methods were used: 25% used machine learning (ML), 67% used natural language processing (NLP), and 8% combined ML and NLP. CONCLUSION This review demonstrates that the application of AI methods to pharmacy data for cancer epidemiology research is expanding. However, the data sources and representations are often missing, which challenges study replicability. In addition, there is no consistent format for reporting results, and one of the preferred metrics, the F-score, is often missing. There is therefore a need for greater transparency about original data sources and the performance of AI methods with pharmacy data to improve the translation of these results into meaningful outcomes.
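For reference, the F-score that the review singles out combines precision and recall into a single number. The following minimal Python definition is the standard textbook formula, not code from any of the reviewed studies:

```python
def f_score(tp: int, fp: int, fn: int, beta: float = 1.0) -> float:
    """Return the F-beta score from raw confusion counts (beta=1 gives F1)."""
    precision = tp / (tp + fp)   # fraction of flagged items that are correct
    recall = tp / (tp + fn)      # fraction of true items that were found
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

print(f_score(tp=80, fp=20, fn=10))  # ~0.84
```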

AI Magazine ◽  
2019 ◽  
Vol 40 (3) ◽  
pp. 67-78
Author(s):  
Guy Barash ◽  
Mauricio Castillo-Effen ◽  
Niyati Chhaya ◽  
Peter Clark ◽  
Huáscar Espinoza ◽  
...  

The workshop program of the Association for the Advancement of Artificial Intelligence’s 33rd Conference on Artificial Intelligence (AAAI-19) was held in Honolulu, Hawaii, on Sunday and Monday, January 27–28, 2019. There were sixteen workshops in the program: Affective Content Analysis: Modeling Affect-in-Action, Agile Robotics for Industrial Automation Competition, Artificial Intelligence for Cyber Security, Artificial Intelligence Safety, Dialog System Technology Challenge, Engineering Dependable and Secure Machine Learning Systems, Games and Simulations for Artificial Intelligence, Health Intelligence, Knowledge Extraction from Games, Network Interpretability for Deep Learning, Plan, Activity, and Intent Recognition, Reasoning and Learning for Human-Machine Dialogues, Reasoning for Complex Question Answering, Recommender Systems Meet Natural Language Processing, Reinforcement Learning in Games, and Reproducible AI. This report contains brief summaries of all the workshops that were held.


2021 ◽  
pp. 1-13
Author(s):  
Lamiae Benhayoun ◽  
Daniel Lang

BACKGROUND: The renewed advent of Artificial Intelligence (AI) is inducing profound changes in the classic categories of technology professions and creating the need for new, specific skills. OBJECTIVE: To identify the skills gaps between academic training on AI in French engineering and business schools and the requirements of the labour market. METHOD: AI training contents were extracted from the schools’ websites, and a job advertisements website was scraped; the resulting texts were analysed with a text-mining approach implemented in Python for Natural Language Processing. RESULTS: Occupations related to AI were categorised, and three classes of skills for the AI market were characterised: technical, soft, and interdisciplinary. The skills gaps concern certain professional certifications, the mastery of specific tools, research abilities, and awareness of the ethical and regulatory dimensions of AI. CONCLUSIONS: This analysis, based on Natural Language Processing algorithms, provides a better understanding of the components of AI capability at the individual and organisational levels, and can help shape educational programmes that respond to the requirements of the AI market.
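As an illustration of the METHOD above, the following sketch shows the kind of text-mining step one could run over scraped job advertisements to count skill mentions. The postings and the skill vocabulary are invented placeholders; the authors' actual scraping targets, vocabulary, and code are not given in the abstract.

```python
# Hypothetical skill-frequency count over scraped job advertisements,
# assuming the ads have already been collected as plain text.
from sklearn.feature_extraction.text import CountVectorizer

job_ads = [  # placeholder postings, not real data
    "Data scientist with Python, NLP and deep learning experience",
    "Machine learning engineer: Python, TensorFlow, model deployment",
    "AI consultant with strong communication skills and AWS certification",
]

# Illustrative skill vocabulary; a real study would derive this from
# curricula and labour-market taxonomies.
skills = ["python", "nlp", "deep learning", "tensorflow",
          "communication", "aws", "model deployment"]

vectorizer = CountVectorizer(vocabulary=skills, ngram_range=(1, 2))
counts = vectorizer.fit_transform(job_ads).sum(axis=0)

for skill, count in zip(vectorizer.get_feature_names_out(), counts.tolist()[0]):
    print(f"{skill}: {count}")
```

Comparing such frequency profiles between course descriptions and job postings is one simple way to surface the gaps the study reports.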


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4496
Author(s):  
Vlad Pandelea ◽  
Edoardo Ragusa ◽  
Tommaso Apicella ◽  
Paolo Gastaldo ◽  
Erik Cambria

Emotion recognition, among other natural language processing tasks, has greatly benefited from the use of large transformer models. Deploying these models on resource-constrained devices, however, is a major challenge due to their computational cost. In this paper, we show that the combination of large transformers, used as high-quality feature extractors, and simple hardware-friendly classifiers based on linear separators can achieve competitive performance while allowing real-time inference and fast training. Various solutions, including batch and online sequential learning, are analyzed. Additionally, our experiments show that latency and performance can be further improved via dimensionality reduction and pre-training, respectively. The resulting system is implemented on two types of edge devices, namely an edge accelerator and two smartphones.
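The following sketch illustrates the architecture the abstract describes: a frozen pretrained transformer used purely as a feature extractor, feeding a simple linear classifier that supports online updates. The model name, toy labels, and pooling choice are assumptions for illustration; the paper's exact models and classifiers may differ.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import SGDClassifier

# Any pretrained encoder works here; distilbert is just a small example.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")
encoder.eval()  # frozen feature extractor: the transformer is never fine-tuned

def extract_features(texts):
    """Masked mean-pooling of the last hidden states -> one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state        # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (batch, seq, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

texts = ["I am thrilled about this!", "This is so frustrating."]
labels = [1, 0]  # toy emotion labels: 1 = positive, 0 = negative

# A hardware-friendly linear separator; partial_fit enables the kind of
# online/sequential training regime the paper analyzes.
clf = SGDClassifier(loss="log_loss")  # use loss="log" on older scikit-learn
clf.partial_fit(extract_features(texts), labels, classes=[0, 1])
print(clf.predict(extract_features(["What a wonderful day"])))
```

Because only the small linear head is trained, updates are cheap enough for on-device learning, while the heavy transformer pass can be offloaded or optimized separately.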


2021 ◽  
Vol 11 (3) ◽  
pp. 359
Author(s):  
Katharina Hogrefe ◽  
Georg Goldenberg ◽  
Ralf Glindemann ◽  
Madleen Klonowski ◽  
Wolfram Ziegler

Assessment of semantic processing capacities often relies on verbal tasks which are, however, sensitive to impairments at several language processing levels. Especially for persons with aphasia there is a strong need for a tool that measures semantic processing skills independent of verbal abilities. Furthermore, in order to assess a patient’s potential for using alternative means of communication in cases of severe aphasia, semantic processing should be assessed in different nonverbal conditions. The Nonverbal Semantics Test (NVST) is a tool that captures semantic processing capacities through three tasks—Semantic Sorting, Drawing, and Pantomime. The main aim of the current study was to investigate the relationship between the NVST and measures of standard neurolinguistic assessment. Fifty-one persons with aphasia caused by left hemisphere brain damage were administered the NVST as well as the Aachen Aphasia Test (AAT). A principal component analysis (PCA) was conducted across all AAT and NVST subtests. The analysis resulted in a two-factor model that captured 69% of the variance of the original data, with all linguistic tasks loading high on one factor and the NVST subtests loading high on the other. These findings suggest that nonverbal tasks assessing semantic processing capacities should be administered alongside standard neurolinguistic aphasia tests.
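As a methodological illustration, a PCA like the one reported above can be reproduced in a few lines; the score matrix below is random placeholder data standing in for the 51 patients' AAT and NVST subtest scores.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Rows = patients, columns = subtests (AAT subtests plus the NVST tasks
# Semantic Sorting, Drawing, Pantomime); shapes are illustrative only.
scores = rng.normal(size=(51, 9))

pca = PCA(n_components=2)  # the study retained a two-factor solution
pca.fit(scores)

print("variance explained:", pca.explained_variance_ratio_.sum())
print("loadings (2 components x 9 subtests):")
print(pca.components_)
```

With real data, inspecting which subtests load on which component is what separates the verbal factor from the nonverbal-semantic factor in the study's two-factor model.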


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, whose conceptual roots are sometimes traced to Kurt Gödel's unprovable computational statements of 1931, is now often called deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum: augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


2021 ◽  
pp. 016555152098549
Author(s):  
Donghee Shin

The recent proliferation of artificial intelligence (AI) gives rise to questions about how users interact with AI services and how algorithms embody the values of users. Despite the surging popularity of AI, how users evaluate algorithms, how people perceive algorithmic decisions, and how they relate to algorithmic functions remain largely unexplored. Invoking the idea of embodied cognition, we characterize core constructs of algorithms that drive the value of embodiment and conceptualize these factors in reference to trust by examining how they influence the user experience of personalized recommendation algorithms. The findings elucidate the embodied cognitive processes involved in reasoning about algorithmic characteristics (fairness, accountability, transparency, and explainability) with regard to their fundamental linkages with trust and ensuing behaviors. Users follow a dual-process model, whereby a sense of trust is built on a combination of normative values and performance-related qualities of algorithms. Embodied algorithmic characteristics are significantly linked to trust and performance expectancy. Heuristic and systematic processes through embodied cognition provide a concise guide to the conceptualization of AI experiences and interaction. The identified user cognitive processes provide information on a user’s cognitive functioning and patterns of behavior, as well as a basis for subsequent metacognitive processes.


2021 ◽  
pp. 002203452110138
Author(s):  
C.M. Mörch ◽  
S. Atsu ◽  
W. Cai ◽  
X. Li ◽  
S.A. Madathil ◽  
...  

Dentistry increasingly integrates artificial intelligence (AI) to help improve the current state of clinical dental practice. However, this revolutionary technological field raises various complex ethical challenges. The objective of this systematic scoping review is to document the current uses of AI in dentistry and the ethical concerns or challenges they imply. Three health care databases (MEDLINE [PubMed], SciVerse Scopus, and Cochrane Library) and 2 computer science databases (ArXiv, IEEE Xplore) were searched. After identifying 1,553 records, the documents were filtered, and a full-text screening was performed. In total, 178 studies were retained and analyzed by 8 researchers specialized in dentistry, AI, and ethics. The team used Covidence for data extraction and Dedoose for the identification of ethics-related information. PRISMA guidelines were followed. Among the included studies, 130 (73.0%) were published after 2016, and 93 (52.2%) were published in journals specialized in computer sciences. The technologies used were neural learning techniques in 75 studies (42.1%), traditional learning techniques in 76 (42.7%), or a combination of several technologies in 20 (11.2%). Overall, 7 countries contributed 109 (61.2%) studies. A total of 53 different applications of AI in dentistry were identified, involving most dental specialties. The use of initial data sets for internal validation was reported in 152 (85.4%) studies. Forty-five ethical issues related to the use of AI in dentistry were reported in 22 (12.4%) studies, around 6 principles: prudence (10 times), equity (8), privacy (8), responsibility (6), democratic participation (4), and solidarity (4). The proportion of studies mentioning AI-related ethical issues has remained similar in recent years, suggesting that interest in this topic within dentistry is not increasing. This study confirms the growing presence of AI in dentistry and highlights a current lack of information on the ethical challenges surrounding its use. In addition, the scarcity of studies sharing their code could prevent future replications. The authors formulate recommendations to contribute to a more responsible use of AI technologies in dentistry.


2021 ◽  
pp. 002073142110174
Author(s):  
Md Mijanur Rahman ◽  
Fatema Khatun ◽  
Ashik Uzzaman ◽  
Sadia Islam Sami ◽  
Md Al-Amin Bhuiyan ◽  
...  

The novel coronavirus disease (COVID-19) has spread across 219 countries as a pandemic, with alarming impacts on health care, socioeconomic environments, and international relations. The principal objective of this study is to survey the current technological aspects of artificial intelligence (AI) and other relevant technologies and their implications for confronting COVID-19 and preventing the pandemic’s dreadful effects. This article presents AI approaches that have made significant contributions in health care, then highlights and categorizes their applications in confronting COVID-19, such as detection and diagnosis, data analysis and treatment procedures, research and drug development, social control and services, and the prediction of outbreaks. The study addresses the link between the technologies and the epidemics, as well as the potential impacts of technology on health care, with the introduction of machine learning and natural language processing tools. It is expected that this comprehensive study will support researchers in modeling health care systems and drive further studies in advanced technologies. Finally, we propose future directions for research and conclude that persuasive AI strategies, probabilistic models, and supervised learning are required to tackle future pandemic challenges.
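As a toy illustration of the supervised-learning direction the conclusion points to, the sketch below forecasts next-day case counts from a sliding window of recent history. The data are synthetic, and real outbreak-prediction pipelines are far richer than this.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
daily_cases = np.cumsum(rng.poisson(5, size=60))  # synthetic rising case curve

window = 7  # use the past week of counts to predict the next day
X = np.array([daily_cases[i:i + window] for i in range(len(daily_cases) - window)])
y = daily_cases[window:]

model = LinearRegression().fit(X, y)
print("next-day forecast:", model.predict(daily_cases[-window:].reshape(1, -1))[0])
```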

