Reports of the Workshops Held at the 2019 AAAI Conference on Artificial Intelligence

AI Magazine ◽  
2019 ◽  
Vol 40 (3) ◽  
pp. 67-78
Author(s):  
Guy Barash ◽  
Mauricio Castillo-Effen ◽  
Niyati Chhaya ◽  
Peter Clark ◽  
Huáscar Espinoza ◽  
...  

The workshop program of the Association for the Advancement of Artificial Intelligence’s 33rd Conference on Artificial Intelligence (AAAI-19) was held in Honolulu, Hawaii, on Sunday and Monday, January 27–28, 2019. There were sixteen workshops in the program: Affective Content Analysis: Modeling Affect-in-Action; Agile Robotics for Industrial Automation Competition; Artificial Intelligence for Cyber Security; Artificial Intelligence Safety; Dialog System Technology Challenge; Engineering Dependable and Secure Machine Learning Systems; Games and Simulations for Artificial Intelligence; Health Intelligence; Knowledge Extraction from Games; Network Interpretability for Deep Learning; Plan, Activity, and Intent Recognition; Reasoning and Learning for Human-Machine Dialogues; Reasoning for Complex Question Answering; Recommender Systems Meet Natural Language Processing; Reinforcement Learning in Games; and Reproducible AI. This report contains brief summaries of all the workshops that were held.

AI Magazine ◽  
2018 ◽  
Vol 39 (4) ◽  
pp. 45-56
Author(s):  
Bruno Bouchard ◽  
Kevin Bouchard ◽  
Noam Brown ◽  
Niyati Chhaya ◽  
Eitan Farchi ◽  
...  

The AAAI-18 workshop program included 15 workshops covering a wide range of topics in AI. Workshops were held Friday and Saturday, February 2–3, 2018, at the Hilton New Orleans Riverside in New Orleans, Louisiana, USA. This report contains summaries of the Affective Content Analysis workshop; the Artificial Intelligence Applied to Assistive Technologies and Smart Environments workshop; the AI and Marketing Science workshop; the Artificial Intelligence for Cyber Security workshop; the AI for Imperfect-Information Games workshop; the Declarative Learning Based Programming workshop; the Engineering Dependable and Secure Machine Learning Systems workshop; the Health Intelligence workshop; the Knowledge Extraction from Games workshop; the Plan, Activity, and Intent Recognition workshop; the Planning and Inference workshop; the Preference Handling workshop; the Reasoning and Learning for Human-Machine Dialogues workshop; and the AI Enhanced Internet of Things Data Processing for Intelligent Applications workshop.


2019 ◽  
Vol 8 (4) ◽  
pp. 8643-8645

Artificial Intelligence (AI) is a buzzword in the cyber world. It is still a developing science on multiple fronts, shaped by the challenges of the 21st century. The use of AI has become inseparable from human life: in this day and age one cannot imagine a world without AI, so significant is its impact. The main objective of AI is to develop technology-based activities that represent human knowledge in order to solve problems. Simply put, AI is the study of how an individual thinks, works, learns, and decides in any scenario of life, whether that involves problem solving, learning new things, thinking rationally, or arriving at a solution. AI is present in every area of human life: gaming, language processing, speech recognition, expert systems, vision systems, handwriting recognition, intelligent robots, and financial transactions, to name a few; nearly every human activity has become a subset of AI. In spite of its numerous uses, AI can also be used to harm human life, which is why human oversight is required to monitor AI activities. Cyber crime has become quite common, a daily news item, and it is not a problem faced by one country alone but by the whole world. Without strong security measures, AI is meaningless, as it can easily be accessed and misused by others. Online attacks by hackers have become a major threat to governments, banks, and multinational companies, and large amounts of individual and organizational data are exploited by hackers, posing a serious threat to the cyber world. In this connection, research in the area of AI and cyber security has gained importance in recent times, and it will remain relevant, as it is a dynamic and sensitive issue linked to human life.


Author(s):  
Yorick Wilks

This chapter argues, briefly, that much work in formal Computational Semantics (alias CompSem) is not computational at all, and does not attempt to be; there is some misdescription going on here on a large and long-term scale. The aim of this chapter is to show that such work is not just misdescribed, but loses value because of the scientific importance of implementation and validation in this, as in all parts of Artificial Intelligence: it is the raison d’être of the subject. Moreover, the examples used to support formal CompSem’s value for representing the meaning of language strings often have no place in normal English usage, nor in corpora. This fact, if true, should be better understood, as should how this paradoxical situation has arisen and is tolerated. Recent large-scale developments in Natural Language Processing (NLP), such as machine translation or question answering, which are quite successful and undeniably both semantic and computational, have made no use of formal CompSem techniques. Most importantly, the Semantic Web (and Information Extraction techniques generally) now offers the possibility of using language data at large scale to achieve concrete results by methods usually deemed impossible by formal semanticists, such as annotation methods, which are fundamentally forms of Lewis’ (1970) “markerese,” the term he coined to dismiss methods that involve symbolic “mark up” of texts rather than formal logic to represent meaning.


2021 ◽  
Author(s):  
Olga iCognito group ◽  
Andrey Zakharov

BACKGROUND In recent years there has been a growth of psychological chatbots performing important functions, from checking symptoms to providing psychoeducation and guiding self-help exercises. Technologically, these chatbots are based on traditional decision-tree algorithms with limited keyword recognition. A key challenge for the development of conversational artificial intelligence is intent recognition, that is, understanding the goal the user wants to accomplish. User queries on psychological topics are often emotional, highly contextual, and non-goal-oriented, and may therefore contain vague, mixed, or multiple intents. OBJECTIVE In this study we attempt to identify and categorize user intents in relation to psychological topics. METHODS We collected a dataset of 43,000 logs from the iCognito Anti-depression chatbot, consisting of user answers to the chatbot’s questions about the reasons for their emotional distress. The data were labeled manually. The BERT model was used for classification. RESULTS We identified 24 classes of user intents that can be grouped into larger categories, such as: a) intents to improve emotional state; b) intents to improve interpersonal relations; c) intents to improve physical condition; d) intents to solve practical problems; e) intents to make a decision; f) intents to harm oneself or commit suicide; g) intents to blame or criticize oneself. CONCLUSIONS This classification may be used for the development of conversational artificial intelligence in the field of psychotherapy.
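For illustration, here is a minimal sketch of the kind of BERT-based intent classifier the abstract describes, using the Hugging Face transformers library. The checkpoint name, the number of labels, and the example utterance are assumptions made for the sketch, not details taken from the study; in the actual work the model would first be fine-tuned on the 43,000 manually labeled logs.

# Minimal sketch of BERT intent classification, assuming a fine-tuned
# checkpoint with 24 intent classes as described in the abstract.
# The multilingual base model and the example text are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=24  # 24 intent classes
)

def classify(texts):
    # Tokenize a batch of user answers and return predicted intent ids.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return logits.argmax(dim=-1).tolist()

print(classify(["I keep fighting with my partner and feel hopeless."]))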


Author(s):  
Miss. Aliya Anam Shoukat Ali

Natural Language Processing (NLP) is a branch of Artificial Intelligence (AI) that enables machines to understand human language. Its goal is to build systems that can make sense of text and automatically perform tasks like translation, spell checking, or topic classification. NLP has recently gained much attention for representing and analysing human language computationally. Its applications span various fields such as computational linguistics, email spam detection, information extraction, summarization, medicine, and question answering. The goal of Natural Language Processing is to design and build software systems that can analyze, understand, and generate the languages humans use naturally, so that you can address your computer as if you were addressing another person. As one of the oldest areas of research in machine learning, it is employed in major fields such as speech recognition and text processing. Natural language processing has brought major breakthroughs in the fields of computation and AI.
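As a toy illustration of one such task, topic classification, the following sketch trains a simple classifier with scikit-learn; the training sentences and labels are invented for the example and are not from any dataset discussed here.

# A toy topic classifier: TF-IDF features plus a linear model.
# All sentences and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Win a free prize, click this link now",   # spam
    "Meeting moved to 3 pm tomorrow",          # work
    "Cheap pills, limited offer",              # spam
    "Please review the attached report",       # work
]
labels = ["spam", "work", "spam", "work"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Exclusive offer, click here"]))  # expected: ['spam']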


Author(s):  
Juveriya Afreen

Abstract-- With the increasing complexity of data and of security threats, it is difficult for individuals to prevent offences. Automation or software built only on large sets of fixed algorithms cannot overcome this; something more robust and adaptable is needed. Hence AI plays a pivotal role in defending against such violations. In this paper we examine how human reasoning, together with AI, can be applied to strengthen cyber security.


2021 ◽  
pp. 1-13
Author(s):  
Lamiae Benhayoun ◽  
Daniel Lang

BACKGROUND: The renewed advent of Artificial Intelligence (AI) is inducing profound changes in the classic categories of technology professions and is creating the need for new specific skills. OBJECTIVE: Identify the gaps in skills between academic training on AI in French engineering and business schools and the requirements of the labour market. METHOD: Extraction of AI training contents from the schools’ websites and scraping of a job advertisement website, followed by analysis based on a text-mining approach using Python code for Natural Language Processing. RESULTS: Categorization of occupations related to AI, and characterization of three classes of skills for the AI market: technical, soft, and interdisciplinary. Skill gaps concern some professional certifications, the mastery of specific tools, research abilities, and awareness of the ethical and regulatory dimensions of AI. CONCLUSIONS: A deep analysis using Natural Language Processing algorithms, whose results provide a better understanding of the components of AI capability at the individual and organizational levels, and a study that can help shape educational programs to meet AI market requirements.
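A minimal sketch of the kind of text-mining step the METHOD describes: tallying skill mentions across scraped job-advertisement texts and comparing them with skills covered in training programs. The skill lexicon, the example ads, and the curriculum set are invented placeholders, not the authors’ actual code or taxonomy.

# Sketch: count skill mentions in job-ad texts, then surface skills
# demanded by the market but absent from training curricula.
# Ads, skill lexicon, and curriculum set are all invented examples.
from collections import Counter
import re

SKILLS = ["python", "nlp", "deep learning", "communication", "ethics"]

ads = [
    "Seeking ML engineer: Python, NLP, deep learning required.",
    "AI consultant: communication skills, awareness of AI ethics.",
]
curriculum = {"python", "deep learning"}  # skills covered by the programs

demand = Counter()
for ad in ads:
    text = ad.lower()
    for skill in SKILLS:
        if re.search(r"\b" + re.escape(skill) + r"\b", text):
            demand[skill] += 1

gaps = [s for s in demand if s not in curriculum]
print("market demand:", dict(demand))
print("skills missing from training:", gaps)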


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, believed by some to have originated with Kurt Gödel’s unprovable computational statements of 1931, is now often called deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in their models and equations. AI is also capable of making choices like humans do, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories, and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum: augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.

