Reports on the 2013 AAAI Fall Symposium Series

AI Magazine ◽  
2014 ◽  
Vol 35 (2) ◽  
pp. 69-74
Author(s):  
Gully Burns ◽  
Yolanda Gil ◽  
Yan Liu ◽  
Natalia Villanueva-Rosales ◽  
Sebastian Risi ◽  
...  

The Association for the Advancement of Artificial Intelligence was pleased to present the 2013 Fall Symposium Series, held Friday through Sunday, November 15–17, at the Westin Arlington Gateway in Arlington, Virginia, near Washington, DC, USA. The titles of the five symposia were as follows: Discovery Informatics: AI Takes a Science-Centered View on Big Data (FS-13-01); How Should Intelligence be Abstracted in AI Research: MDPs, Symbolic Representations, Artificial Neural Networks, or — ? (FS-13-02); Integrated Cognition (FS-13-03); Semantics for Big Data (FS-13-04); and Social Networks and Social Contagion: Web Analytics and Computational Social Science (FS-13-05). The highlights of each symposium are presented in this report.

AI Magazine ◽  
2013 ◽  
Vol 34 (1) ◽  
pp. 93 ◽  
Author(s):  
Rezarta Islamaj Dogan ◽  
Yolanda Gil ◽  
Haym Hirsh ◽  
Narayanan C. Krishnan ◽  
Michael Lewis ◽  
...  

The Association for the Advancement of Artificial Intelligence was pleased to present the 2012 Fall Symposium Series, held Friday through Sunday, November 2–4, at the Westin Arlington Gateway in Arlington, Virginia. The titles of the eight symposia were as follows: AI for Gerontechnology (FS-12-01), Artificial Intelligence of Humor (FS-12-02), Discovery Informatics: The Role of AI Research in Innovating Scientific Processes (FS-12-03), Human Control of Bio-Inspired Swarms (FS-12-04), Information Retrieval and Knowledge Discovery in Biomedical Text (FS-12-05), Machine Aggregation of Human Judgment (FS-12-06), Robots Learning Interactively from Human Teachers (FS-12-07), and Social Networks and Social Contagion (FS-12-08). The highlights of each symposium are presented in this report.


AI Magazine ◽  
2015 ◽  
Vol 36 (3) ◽  
pp. 107-112
Author(s):  
Adam B. Cohen ◽  
Sonia Chernova ◽  
James Giordano ◽  
Frank Guerin ◽  
Kris Hauser ◽  
...  

The AAAI 2014 Fall Symposium Series was held Thursday through Saturday, November 13–15, at the Westin Arlington Gateway in Arlington, Virginia, adjacent to Washington, DC. The titles of the seven symposia were Artificial Intelligence for Human-Robot Interaction, Energy Market Prediction, Expanding the Boundaries of Health Informatics Using AI, Knowledge, Skill, and Behavior Transfer in Autonomous Robots, Modeling Changing Perspectives: Reconceptualizing Sensorimotor Experiences, Natural Language Access to Big Data, and The Nature of Humans and Machines: A Multidisciplinary Discourse. The highlights of each symposium are presented in this report.


Author(s):  
Vishal Babu Siramshetty ◽  
Dac-Trung Nguyen ◽  
Natalia J. Martinez ◽  
Anton Simeonov ◽  
Noel T. Southall ◽  
...  

The rise of novel artificial intelligence methods necessitates a comparison of this wave of new approaches with classical machine learning for a typical drug discovery project. Inhibition of the potassium ion channel whose alpha subunit is encoded by the human Ether-à-go-go-Related Gene (hERG) leads to a prolonged QT interval of the cardiac action potential and is a significant safety pharmacology target for the development of new medicines. Several computational approaches have been employed to develop prediction models for assessing the hERG liabilities of small molecules, including recent work using deep learning methods. Here we perform a comprehensive comparison of prediction models based on classical (random forests and gradient boosting) and modern (deep neural networks and recurrent neural networks) artificial intelligence methods. The training set (~9000 compounds) was compiled by integrating hERG bioactivity data from the ChEMBL database with experimental data generated from an in-house, high-throughput thallium flux assay. We utilized different molecular descriptors, including latent descriptors, which are real-valued continuous vectors derived from chemical autoencoders trained on a large chemical space (>1.5 million compounds). The models were prospectively validated on ~840 in-house compounds screened in the same thallium flux assay. The deep neural networks performed significantly better than the classical methods with the latent descriptors. The recurrent neural networks that operate on SMILES provided the highest model sensitivity. The best models were merged into a consensus model that offered superior performance compared to reference models from academic and commercial domains. Further, we shed light on the potential of artificial intelligence methods to exploit chemistry big data and generate novel chemical representations useful in predictive modeling and in tailoring new chemical space.
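The consensus step described in the abstract can be illustrated with a minimal sketch: per-model blocker probabilities are combined by (optionally weighted) averaging and thresholded. The model names, probability values, and the 0.5 cutoff below are illustrative assumptions, not values from the study.

```python
# Hypothetical sketch of a consensus hERG-liability score.
# Model names and probabilities are invented for illustration.

def consensus_score(prob_by_model, weights=None):
    """Weighted average of predicted blocker probabilities across models."""
    models = list(prob_by_model)
    if weights is None:
        weights = {m: 1.0 for m in models}
    total = sum(weights[m] for m in models)
    return sum(prob_by_model[m] * weights[m] for m in models) / total

# One compound scored by the four model families from the comparison.
probs = {"random_forest": 0.62, "gradient_boosting": 0.58,
         "dnn_latent": 0.71, "rnn_smiles": 0.66}

score = consensus_score(probs)        # simple unweighted mean
is_blocker = score >= 0.5             # flag as a potential hERG liability
```

In practice the weights could be set from each model's validation performance; the unweighted mean is the simplest defensible default.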


2020 ◽  
pp. practneurol-2020-002688
Author(s):  
Stephen D Auger ◽  
Benjamin M Jacobs ◽  
Ruth Dobson ◽  
Charles R Marshall ◽  
Alastair J Noyce

Modern clinical practice requires the integration and interpretation of ever-expanding volumes of clinical data. There is, therefore, an imperative to develop efficient ways to process and understand these large amounts of data. Neurologists work to understand the function of biological neural networks, but artificial neural networks and other machine learning algorithms are likely to be increasingly encountered in clinical practice. As their use increases, clinicians will need to understand the basic principles and common types of algorithms. We aim to provide a coherent introduction to this jargon-heavy subject and equip neurologists with the tools to understand, critically appraise and apply insights from this burgeoning field.


Author(s):  
Mahyuddin K. M. Nasution et al.

In the era of information technology, two fields developing side by side are data science and artificial intelligence. On the data science side, one task is the extraction of social networks from information sources that have the character of big data. On the artificial intelligence side, the presence of contradictory methods has an impact on knowledge. This article describes an unsupervised approach as a stream of methods for extracting social networks from information sources. There is a variety of possible approaches and strategies, with superficial methods as a starting concept. Each method has its advantages, but in general they complement one another, namely by simplifying, enriching, and emphasizing the results.
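As a rough illustration of what a superficial (surface-level) extraction method can look like, the sketch below links two actors whenever their names co-occur in the same document, weighting each edge by its co-occurrence count. The documents, actor names, and co-occurrence rule are invented for illustration and are not taken from the article.

```python
from itertools import combinations
from collections import Counter

def extract_network(documents, actors):
    """Superficial co-occurrence method: connect two actors whenever
    they appear together in the same document; edge weight = count."""
    edges = Counter()
    for doc in documents:
        present = [a for a in actors if a in doc]
        for pair in combinations(sorted(present), 2):
            edges[pair] += 1
    return edges

docs = ["Alice met Bob at the workshop.",
        "Bob and Carol co-authored a paper.",
        "Alice, Bob and Carol attended the panel."]
network = extract_network(docs, ["Alice", "Bob", "Carol"])
# network[("Alice", "Bob")] == 2; network[("Alice", "Carol")] == 1
```

Richer variants in this stream of methods replace the substring check with named-entity recognition and normalize the counts into association strengths, but the co-occurrence core stays the same.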


With the evolution of artificial intelligence toward deep learning, the age of perceptive machines that can even mimic humans has begun. A conversational software agent, commonly known as a chatbot and actuated by natural language processing, is one of the best examples of such intuitive machines. This paper lists some existing popular chatbots along with their details, technical specifications, and functionalities. Research shows that most customers have experienced poor service from them. Moreover, generating meaningful and instructive feedback remains a demanding task, because most chatbots are built largely on templates and hand-written rules. Current chatbot models fail to generate the required responses and thus compromise conversation quality. Introducing deep learning into these models can fill this gap with deep neural networks. The deep neural networks used for this purpose so far include stacked auto-encoders, sparse auto-encoders, and predictive sparse and denoising auto-encoders. These DNNs, however, are unable to handle big data involving large amounts of heterogeneous data, while the tensor auto-encoder, which overcomes this drawback, is time-consuming. This paper proposes a chatbot that handles big data in a manageable time.
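To make the auto-encoder vocabulary concrete, here is a minimal denoising auto-encoder in NumPy: the input is randomly masked, and tied encoder/decoder weights are trained to reconstruct the clean vector. The toy data, layer sizes, and hyperparameters are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8-dimensional binary feature vectors (illustrative only).
X = rng.integers(0, 2, size=(32, 8)).astype(float)

n_in, n_hid = 8, 4
W = rng.normal(0, 0.1, (n_in, n_hid))   # tied encoder/decoder weights
b_h = np.zeros(n_hid)
b_o = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x_noisy):
    h = sigmoid(x_noisy @ W + b_h)       # encode the corrupted input
    x_hat = sigmoid(h @ W.T + b_o)       # decode back to input space
    return h, x_hat

lr = 0.5
losses = []
for epoch in range(200):
    mask = rng.random(X.shape) < 0.2     # corrupt 20% of the entries
    X_noisy = np.where(mask, 0.0, X)
    h, X_hat = forward(X_noisy)
    losses.append(np.mean((X_hat - X) ** 2))  # reconstruct the CLEAN input
    # Backprop of squared error through the sigmoid layers (tied weights).
    d_out = (X_hat - X) * X_hat * (1 - X_hat) / len(X)
    d_hid = (d_out @ W) * h * (1 - h)
    W -= lr * (X_noisy.T @ d_hid + (h.T @ d_out).T)
    b_o -= lr * d_out.sum(axis=0)
    b_h -= lr * d_hid.sum(axis=0)
```

The denoising objective, reconstructing the uncorrupted input from a masked copy, is what distinguishes this from a plain auto-encoder; sparse variants instead add a penalty on the hidden activations.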


2020 ◽  
Vol 4 (2) ◽  
pp. 23-33 ◽  
Author(s):  
I. Bykov

The aim of this work is to study the state of current research in the field of politics and AI. Our research question concerns the possibility of using artificial intelligence to make political judgments. The main problem in researching artificial intelligence is the value-based bias of judgments about the present and the future of these technologies. The article uses the meta-analysis method, which in recent years has become quite widespread in the specialized literature, and provides an overview of the most-cited publications in the Scopus database with the keywords “Artificial Intelligence” and “Politics”. In total, the study included 76 articles and reports indexed by the database over the past 20 years. It is concluded that in recent years there has been a trend towards an increase in the number of publications on the problems of artificial intelligence and politics. However, most of them are only indirectly related to the central problems of political science. The study of artificial intelligence most closely adjoins the study of big data and political communication in social networks.


2022 ◽  
pp. 30-57
Author(s):  
Richard S. Segall

The purpose of this chapter is to illustrate how artificial intelligence (AI) technologies have been used for COVID-19 detection and analysis. Specifically, the use of neural networks (NN) and machine learning (ML) are described along with which countries are creating these techniques and how these are being used for COVID-19 diagnosis and detection. Illustrations of multi-layer convolutional neural networks (CNN), recurrent neural networks (RNN), and deep neural networks (DNN) are provided to show how these are used for COVID-19 detection and prediction. A summary of big data analytics for COVID-19 and some available COVID-19 open-source data sets and repositories and their characteristics for research and analysis are also provided. An example is also shown for artificial intelligence (AI) and neural network (NN) applications using real-time COVID-19 data.
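The convolution at the heart of the CNNs mentioned above can be sketched in a few lines: a small kernel slides over an image patch and produces a feature map. The toy patch and edge-detecting kernel below are illustrative and are not taken from any COVID-19 model.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (technically cross-correlation),
    the core building block of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge kernel applied to a toy 4x4 "scan" patch.
patch = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])
edge_kernel = np.array([[-1., 1.],
                        [-1., 1.]])
feature_map = conv2d(patch, edge_kernel)  # strong response at the edge only
```

A real multi-layer CNN for image-based diagnosis stacks many such learned kernels with nonlinearities and pooling; this single hand-set kernel only shows the mechanics.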


Author(s):  
В. Б. Бетелин ◽  
В. А. Галкин ◽  
А. О. Дубовик

Artificial neural networks (ANNs) are currently a field of intensive research. They are a proven tool for pattern, audio, and text recognition, and their use is planned in medicine, autonomous vehicles, and aircraft. Still, very few works discuss whether artificial intelligence (AI) capable of effectively solving this range of problems can actually be built. There is no guarantee that AI will operate properly in any real-life, rather than specially constructed, situation. In this work, an attempt is made to substantiate the unreliability of modern artificial neural networks. It is shown that the task of constructing interpolation polynomials is a prototype of the problems that arise when creating ANNs. There are the examples of C. D. T. Runge and S. N. Bernstein and the general theorem of Faber, stating that for any predetermined natural number corresponding to the number of nodes in the interpolation table, there exist a point in the interpolation region and a continuous function such that the interpolation polynomial does not converge to the value of the function at that point as the number of nodes grows without bound. It follows that efficient AI operation cannot be ensured merely by an unlimited increase in the number of neurons and in the volume of data (big data) used as training samples.
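The Runge example referenced above is easy to reproduce numerically: interpolating f(x) = 1/(1 + 25x²) on [-1, 1] through equispaced nodes, the maximum interpolation error grows as nodes are added rather than shrinking. The node counts and evaluation grid below are illustrative choices.

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

def runge(x):
    return 1.0 / (1.0 + 25.0 * x ** 2)   # Runge's classic counterexample

def max_error(n_nodes):
    xs = np.linspace(-1.0, 1.0, n_nodes)  # equispaced interpolation nodes
    ys = runge(xs)
    grid = np.linspace(-1.0, 1.0, 401)    # dense evaluation grid
    return max(abs(lagrange_eval(xs, ys, x) - runge(x)) for x in grid)

# More equispaced nodes make the fit WORSE near the interval ends:
# max_error(5) < max_error(11) < max_error(17)
```

This is exactly the divergence the paragraph above appeals to: scaling up the number of nodes (analogously, neurons and training data) does not by itself guarantee convergence.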


2019 ◽  
Vol 1 (1) ◽  
pp. 26-37 ◽  
Author(s):  
Petar Jandrić

This article situates contemporary critical media literacy into a postdigital context. It examines recent advances in data literacy, with an accent to Big Data literacy and data bias, and expands them with insights from critical algorithm studies and the critical posthumanist perspective to education. The article briefly outlines differences between older software technologies and artificial intelligence (AI), and introduces associated concepts such as machine learning, neural networks, deep learning, and AI bias. Finally, it explores the complex interplay between Big Data and AI and teases out three urgent challenges for postdigital critical media literacy. (1) Critical media literacy needs to reinvent existing theories and practices for the postdigital context. (2) Reinvented theories and practices need to find a new balance between the technological aspects of data and AI literacy with the political aspects of data and AI literacy, and learn how to deal with non-predictability. (3) Critical media literacy needs to embrace the posthumanist challenge; we also need to start thinking what makes AIs literate and develop ways of raising literate thinking machines. In our postdigital age, critical media literacy has a crucial role in conceptualisation, development, and understanding of new forms of intelligence we would like to live with in the future.

