Technical, Hard and Explainable Question Answering (THE-QA)

Author(s):  
Shailaja Sampat

The ability of an agent to rationally answer questions about a given task is a key measure of its intelligence. While phenomenal performance has been achieved on various language and vision tasks separately, 'Technical, Hard and Explainable Question Answering' (THE-QA) is a new challenging corpus that addresses them jointly. THE-QA is a question answering task involving diagram understanding and reading comprehension. We plan to establish benchmarks over this new corpus using deep learning models guided by knowledge representation methods. The proposed approach envisages detailed semantic parsing of technical figures and text that is robust to diverse formats. It will be aided by a knowledge acquisition and reasoning module that categorizes different knowledge types, identifies sources from which to acquire that knowledge, and performs reasoning to answer the questions correctly. THE-QA will present a strong challenge to the community for future research and will help bridge the gap between state-of-the-art Artificial Intelligence (AI) and 'human-level' AI.

Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 122
Author(s):  
Giuseppe Futia ◽  
Antonio Vetrò

Deep learning models have contributed to unprecedented results in prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights on how a specific result was achieved. In contexts where the impact of AI on human life is relevant (e.g., recruitment tools, medical diagnoses, etc.), explainability is not only a desirable property but also, in some cases already and in others soon, a legal requirement. Most of the available approaches to implement eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts able to manipulate the recursive mathematical functions in deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols are elements of a lingua franca between humans and deep learning. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI: while less flexible and robust to noise than deep learning models, KGs are natively developed to be explainable. In this paper, we review the main XAI approaches in the literature, underlining their strengths and limitations, and we propose neural-symbolic integration as a cornerstone for designing AI that is closer to non-insiders' comprehension. Within this general direction, we identify three specific challenges for future research: knowledge matching, cross-disciplinary explanations, and interactive explanations.
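As a toy illustration of the neural-symbolic direction sketched above, the following Python fragment shows how a Knowledge Graph stored as (subject, predicate, object) triples can supply human-readable facts for a deep model's predicted label; the triples and the label are invented placeholders, not content from the paper.

```python
# Hypothetical KG: (subject, predicate, object) triples readable by non-experts.
KG = [
    ("pneumonia", "is_a", "lung_disease"),
    ("pneumonia", "has_symptom", "cough"),
    ("pneumonia", "detected_by", "chest_x_ray"),
]

def explain(label, kg):
    """Collect KG facts about the predicted label as a natural-language gloss."""
    return [f"{s} {p.replace('_', ' ')} {o}" for s, p, o in kg if s == label]

predicted_label = "pneumonia"  # e.g. the output of some image classifier (assumed)
for fact in explain(predicted_label, KG):
    print(fact)
```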


Healthcare ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 522
Author(s):  
Yassir Edrees Almalki ◽  
Abdul Qayyum ◽  
Muhammad Irfan ◽  
Noman Haider ◽  
Adam Glowacz ◽  
...  

Coronavirus disease 2019 (COVID-19) is an infectious disease spreading rapidly and uncontrollably throughout the world. The critical challenge is the rapid detection of infected people. The techniques currently in use are body-temperature measurement and anterior nasal swab analysis. However, taking nasal swabs and lab testing are complex, intrusive, and resource-intensive, and the shortage of test kits relative to the number of cases is a further major limitation. The current challenge is therefore to develop technology that non-intrusively detects suspected Coronavirus patients through Artificial Intelligence (AI) techniques such as deep learning (DL). Another challenge in this area is the difficulty of obtaining data, since only a limited number of patients consent to participate in research studies. Given the efficacy of AI in healthcare systems, a great challenge for researchers is to develop an AI algorithm that helps health professionals and government officials automatically identify and isolate people with Coronavirus symptoms. Hence, this paper proposes a novel method, CoVIRNet (COVID Inception-ResNet model), which uses chest X-rays to diagnose COVID-19 patients automatically. The proposed architecture contains inception residual blocks that capture information through feature maps of different depths at different scales across the various layers. The features are concatenated at each proposed classification block using the average-pooling layer, and the concatenated features are passed to the fully connected layer. The proposed deep-learning blocks use different regularization techniques to minimize overfitting on the small COVID-19 dataset. The multiscale features extracted at different levels of the proposed deep-learning model are then fed into various machine-learning models to validate the combination of deep learning and machine learning. The proposed CoVIRNet model achieved 95.7% accuracy, and the CoVIRNet feature extractor with a random-forest classifier produced 97.29% accuracy, the highest compared with existing state-of-the-art deep-learning methods. The proposed model would be an automatic solution for the assessment and classification of COVID-19. We predict that the proposed method will demonstrate outstanding performance compared with the techniques currently in use.
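The following PyTorch sketch illustrates the kind of inception-residual block and deep-feature-plus-classical-classifier pipeline the abstract describes; the class name, branch widths, and tensor shapes are illustrative assumptions, not the authors' exact CoVIRNet architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InceptionResidualBlock(nn.Module):
    """Parallel branches at different receptive fields, fused and added residually.
    A hypothetical stand-in for the paper's inception residual blocks."""
    def __init__(self, channels):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.branch3 = nn.Conv2d(channels, channels // 4, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels // 4, kernel_size=5, padding=2)

    def forward(self, x):
        # Multiscale feature maps concatenated along the channel axis.
        multi_scale = torch.cat(
            [self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)
        return F.relu(x + multi_scale)  # residual connection

block = InceptionResidualBlock(32)
x = torch.randn(8, 32, 64, 64)  # placeholder batch of chest X-ray feature maps
# Average-pool each block's output and flatten into a feature vector per image.
feats = F.adaptive_avg_pool2d(block(x), 1).flatten(1)  # shape (8, 32)
# These pooled features could then be fed to a classical model, e.g.
# sklearn.ensemble.RandomForestClassifier().fit(feats.detach().numpy(), labels)
```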


Author(s):  
Nag Nami ◽  
Melody Moh

Intelligent systems are capable of performing tasks on their own with minimal or no human intervention. With the advent of big data and IoT, these intelligent systems have made their way into most industries and homes. With its recent advancements, deep learning has created a niche in the technology space and is being actively used in big data and IoT systems globally. With this wider adoption, deep learning models have unfortunately become susceptible to attacks: research has shown that many state-of-the-art, highly accurate models are vulnerable to well-crafted adversarial examples. This chapter aims to provide a concise, in-depth understanding of attacks on, and defenses of, deep learning models. The chapter first presents the key architectures and application domains of deep learning and their vulnerabilities. Next, it illustrates prominent adversarial examples, including the algorithms and techniques used to generate these attacks. Finally, it describes challenges and mechanisms to counter these attacks and suggests future research directions.
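As a concrete example of the attack algorithms such chapters survey, here is a minimal sketch of the well-known Fast Gradient Sign Method (FGSM); the toy model, data, and epsilon value are placeholder assumptions.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x one step in the direction of the loss gradient's sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Signed-gradient step, then clamp back to the valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Usage sketch with a placeholder linear classifier:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)        # batch of images in [0, 1]
y = torch.randint(0, 10, (4,))      # ground-truth labels
x_adv = fgsm_attack(model, x, y)    # adversarial counterparts of x
```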


Information ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 241
Author(s):  
Tedo Vrbanec ◽  
Ana Meštrović

Paraphrase detection is important for a number of applications, including plagiarism detection, authorship attribution, question answering, text summarization, and text mining in general. In this paper, we give a performance overview of various types of corpus-based models, especially deep learning (DL) models, on the task of paraphrase detection. We report the results of eight models (LSI, TF-IDF, Word2Vec, Doc2Vec, GloVe, FastText, ELMo, and USE) evaluated on three publicly available corpora: the Microsoft Research Paraphrase Corpus, the Clough and Stevenson corpus, and the Webis Crowd Paraphrase Corpus 2011. Through a large number of experiments, we determined the most appropriate choices for text pre-processing, hyper-parameters, sub-model selection where applicable (e.g., Skip-gram vs. CBOW), distance measures, and the semantic similarity/paraphrase detection threshold. Our findings, and those of other researchers who have used deep learning models, show that DL models are very competitive with traditional state-of-the-art approaches and have potential that should be developed further.
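The decision rule underlying such corpus-based models is typically a tuned similarity threshold over text embeddings. The sketch below shows the general pattern; the random placeholder vectors and the 0.7 threshold are assumptions, not values from the paper.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def is_paraphrase(vec_a, vec_b, threshold=0.7):
    # The threshold is one of the hyper-parameters tuned per model and corpus.
    return cosine_similarity(vec_a, vec_b) >= threshold

rng = np.random.default_rng(0)
doc_a = rng.normal(size=300)  # e.g. an averaged word-vector representation
doc_b = rng.normal(size=300)
print(is_paraphrase(doc_a, doc_b))
```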


2021 ◽  
Vol 54 (3) ◽  
pp. 1-40
Author(s):  
Shervin Minaee ◽  
Nal Kalchbrenner ◽  
Erik Cambria ◽  
Narjes Nikzad ◽  
Meysam Chenaghlu ◽  
...  

Deep learning-based models have surpassed classical machine learning-based approaches in various text classification tasks, including sentiment analysis, news categorization, question answering, and natural language inference. In this article, we provide a comprehensive review of more than 150 deep learning-based models for text classification developed in recent years, and we discuss their technical contributions, similarities, and strengths. We also provide a summary of more than 40 popular datasets widely used for text classification. Finally, we provide a quantitative analysis of the performance of different deep learning models on popular benchmarks, and we discuss future research directions.


2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call “Levenshtein augmentation”, which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation demonstrated an increase in performance over non-augmented data and over conventionally SMILES-randomization-augmented data when used for training baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as 'attentional gain': an enhancement in the pattern-recognition capabilities of the underlying network with respect to molecular motifs.
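The sketch below illustrates the general idea of edit-distance-aware training-pair selection; the scoring rule and the toy molecules are illustrative assumptions, not the authors' exact procedure.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def pick_training_pair(reactant_variants, product_smiles):
    # Assumed rule: prefer the randomized reactant SMILES whose string is
    # closest to the product SMILES, so pairs share local sub-sequences.
    return min(reactant_variants, key=lambda s: levenshtein(s, product_smiles))

variants = ["CCO", "OCC", "C(C)O"]           # toy randomized SMILES of ethanol
print(pick_training_pair(variants, "CC=O"))  # hypothetical product SMILES
```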


2020 ◽  
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Ferdinand Filip ◽  
...  

This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis covers novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning models, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancement of sophisticated hybrid deep learning models.


2021 ◽  
Vol 22 (15) ◽  
pp. 7911
Author(s):  
Eugene Lin ◽  
Chieh-Hsin Lin ◽  
Hsien-Yuan Lane

A growing body of evidence suggests that deep learning approaches can serve as an essential cornerstone for the diagnosis and prediction of Alzheimer’s disease (AD). In light of the latest advancements in neuroimaging and genomics, numerous deep learning models are being exploited in recent research to distinguish AD from normal controls and/or from mild cognitive impairment. In this review, we focus on the latest developments in AD prediction using deep learning techniques in cooperation with the principles of neuroimaging and genomics. First, we survey investigations that use deep learning algorithms for AD prediction from genomics or neuroimaging data. In particular, we describe integrative neuroimaging-genomics investigations that leverage deep learning methods to forecast AD by incorporating both neuroimaging and genomics data. Moreover, we outline the limitations of recent AD investigations involving deep learning with neuroimaging and genomics. Finally, we discuss challenges and directions for future research. The main novelty of this work is that we summarize the major points of these investigations and scrutinize their similarities and differences.


Author(s):  
Wenjia Cai ◽  
Jie Xu ◽  
Ke Wang ◽  
Xiaohong Liu ◽  
Wenqin Xu ◽  
...  

Anterior segment eye diseases account for a significant proportion of presentations to eye clinics worldwide, including diseases associated with corneal pathologies, anterior chamber abnormalities (e.g., blood or inflammation), and lens diseases. An automatic tool for the segmentation of anterior segment eye lesions would greatly improve the efficiency of clinical care. With research on artificial intelligence progressing in recent years, deep learning models have shown their superiority in image classification and segmentation. The training and evaluation of deep learning models should be based on a large amount of expertly annotated data; however, such data are relatively scarce in the medical domain. Herein, the authors developed a new medical image annotation system, called EyeHealer. It is a large-scale anterior eye segment dataset with both eye structures and lesions annotated at the pixel level. Comprehensive experiments were conducted to verify its performance in disease classification and eye lesion segmentation. The results showed that semantic segmentation models outperformed medical segmentation models. This paper describes the establishment of the system for automated classification and segmentation tasks. The dataset will be made publicly available to encourage future research in this area.
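Pixel-level segmentation results of this kind are commonly scored with overlap metrics such as the Dice coefficient; the following sketch shows the computation, with random placeholder masks, and Dice chosen as a standard metric rather than necessarily the paper's exact one.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

rng = np.random.default_rng(1)
pred_mask = rng.random((256, 256)) > 0.5    # placeholder model lesion mask
target_mask = rng.random((256, 256)) > 0.5  # placeholder expert annotation
print(f"Dice = {dice_score(pred_mask, target_mask):.3f}")
```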


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigate different real-world applications that have exhibited biases in various ways, and we list different sources of bias that can affect AI applications. We then create a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems. In addition, we examine different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. Many future directions and solutions remain for mitigating the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.
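One widely used fairness definition from such taxonomies is demographic (statistical) parity, which compares positive-decision rates across groups; the sketch below uses toy data, and the 0.1 tolerance is an assumed illustration, not a value from the survey.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]  # toy model decisions
group  = [0, 0, 0, 0, 1, 1, 1, 1]  # toy sensitive attribute (two populations)
gap = demographic_parity_gap(y_pred, group)
print(f"parity gap = {gap:.2f}", "fair" if gap <= 0.1 else "potentially biased")
```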

