Natural Language Processing
Latest Publications


TOTAL DOCUMENTS

78
(FIVE YEARS 78)

H-INDEX

1
(FIVE YEARS 1)

Published By IGI Global

9781799809517, 9781799809524

2020 ◽  
pp. 1686-1704
Author(s):  
Emna Hkiri ◽  
Souheyl Mallat ◽  
Mounir Zrigui

The event extraction task consists of identifying and classifying events within open-domain text. The task is still very new for Arabic, whereas it has reached maturity for languages such as English and French. Event extraction has also been shown to improve the performance of other Natural Language Processing tasks such as information retrieval, question answering, text mining, and machine translation. In this article, we present an ongoing effort to build a system for event extraction from Arabic texts using the GATE platform and other tools.
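A core step in GATE-style pipelines is gazetteer lookup: matching tokens against curated lists of event triggers. The sketch below illustrates that lookup idea in Python; the trigger words and event categories are invented for illustration and are not the authors' actual resources.

```python
# Toy sketch of gazetteer-based event trigger detection, the kind of
# lookup step a GATE pipeline performs with its gazetteer lists.
# Triggers and categories here are illustrative only.

EVENT_GAZETTEER = {
    "explosion": "Conflict",
    "election": "Politics",
    "earthquake": "Disaster",
    "merger": "Business",
}

def extract_events(text):
    """Return (token, event_type) pairs for gazetteer hits."""
    events = []
    for token in text.lower().split():
        word = token.strip(".,;:!?")
        if word in EVENT_GAZETTEER:
            events.append((word, EVENT_GAZETTEER[word]))
    return events

print(extract_events("An earthquake struck hours before the election."))
# → [('earthquake', 'Disaster'), ('election', 'Politics')]
```

A real system would follow this lookup with morphological analysis and classification rules, which matter especially for Arabic's rich inflection.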


2020 ◽  
pp. 1652-1666
Author(s):  
Paolo Sernani ◽  
Andrea Claudi ◽  
Aldo Franco Dragoni

World population is shifting towards older ages: according to recent estimates, there will be 1.5 billion people over 65 years old in 2050. Local governments, international institutions, care organizations, and industry are urging the research community to find solutions to the unprecedented challenges raised by population ageing. A combination of Artificial Intelligence and NetMedicine is well suited to these challenges: together they provide the means to develop an intelligent system and to distribute it over a network, allowing communication over the internet where needed. Hence, the authors present a Multi-Agent Architecture for Ambient Assisted Living (AAL): a model for a system that manages a distributed sensor network composed of ambient and biometric sensors. The system analyses the data and proactively decides to trigger alarms when anomalies are detected. The authors tested the architecture by implementing a prototypical Multi-Agent System (MAS) based on the Belief-Desire-Intention (BDI) paradigm: the Virtual Carer.
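The alarm-on-anomaly behaviour can be sketched with a minimal range check of the kind such an agent might apply. The sensor names and thresholds below are hypothetical, not taken from the chapter, and a deployed BDI agent would of course reason over richer context.

```python
# Minimal sketch of an anomaly rule an AAL agent might apply:
# flag an alarm when a reading leaves its safe range.
# Sensor names and thresholds are invented for illustration.

SAFE_RANGES = {
    "heart_rate": (50, 110),    # beats per minute
    "room_temp": (16.0, 28.0),  # degrees Celsius
}

def check_reading(sensor, value):
    """Return an alarm message if the reading is anomalous, else None."""
    low, high = SAFE_RANGES[sensor]
    if not (low <= value <= high):
        return f"ALARM: {sensor}={value} outside [{low}, {high}]"
    return None

print(check_reading("heart_rate", 45))
# → ALARM: heart_rate=45 outside [50, 110]
```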


2020 ◽  
pp. 1564-1619
Author(s):  
Jeremy Horne

In the last half century, we have gone from storing data on 5¼-inch floppy diskettes to the cloud, and now to fog computing. But one should ask why so much data is being collected. Part of the answer is straightforward for scientific projects, but why is there so much data on us? We then ask about its “interface” through fog computing. Such questions prompt this article on the philosophy of big data and fog computing. After some background on definitions, origins, and contemporary applications, the main discussion begins by considering modern data collection, management, and applications from a complexity standpoint. Big data is turned into knowledge, but knowledge is extrapolated from the past and used to manage the future. Yet it is questionable whether humans have the capacity to manage contemporary technological and social complexity, as evidenced by a world in crisis and possibly on the brink of extinction. This calls for a new way of studying societies from a scientific point of view. We are at the center of the observation from which big data emerge and are manipulated, the overall human project being not only to create an artificial brain with an attendant mind, but a society that might be able to survive what “natural” humans cannot.


2020 ◽  
pp. 1533-1563
Author(s):  
Eduardo C. Contreras ◽  
Gustavo J. Puente

A large part of the population in developing countries does not know what rheumatic diseases are, and general practitioners often lack the information needed to identify them and the treatments to control them successfully. One way to help general practitioners detect whether an articular condition belongs to a rheumatic disease is to present them with the clinical semiology that should lead them to refer the case to a specialist in the subject, a rheumatologist. The clinical semiology is presented by an automated algorithm inside a goal-based software agent containing all the information necessary to identify the seven most common inflammatory rheumatic diseases and fourteen of the non-inflammatory ones. The purpose of this tool is to give the general practitioner the right information to refer the patient to a rheumatologist, so that he or she can receive the appropriate medication and keep the disease under control.
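The matching step of such a goal-based agent can be sketched as a rule table mapping clinical signs to candidate diseases. The sign lists below are invented stand-ins for illustration; the chapter's agent covers 7 inflammatory and 14 non-inflammatory diseases with real semiology.

```python
# Illustrative goal-based rule: if a patient's articular signs match a
# known rheumatic-disease pattern, recommend referral to a rheumatologist.
# The sign sets here are hypothetical, not clinical criteria.

DISEASE_SIGNS = {
    "rheumatoid_arthritis": {"morning_stiffness", "symmetric_swelling"},
    "osteoarthritis": {"crepitus", "pain_on_use"},
}

def recommend_referral(observed_signs):
    """Return diseases whose required signs are all observed."""
    observed = set(observed_signs)
    return [d for d, signs in DISEASE_SIGNS.items() if signs <= observed]

print(recommend_referral({"morning_stiffness", "symmetric_swelling", "fatigue"}))
# → ['rheumatoid_arthritis']
```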


2020 ◽  
pp. 1489-1505
Author(s):  
Robert Wahlstedt

As they age, many people face declining muscular dexterity in their facial muscles. This makes certain sounds difficult to produce, and sometimes the problem is so severe that speech becomes unintelligible. Those who could benefit from the methods in this chapter include people who are hard of hearing and lack readily accessible feedback, as well as people with ALS. This chapter describes a method that uses a machine learning algorithm to predict what people are about to say based on earlier content and to learn the natural sound of their voice. The chapter illustrates speech trajectory and voice shaping. Clear Audio is a biologically inspired framework for studying natural language. Like the story behind Jurassic Park, Clear Audio attempts to make predictions about data from existing data, inspired by biological processes. Its main goal is to give feedback for speech pathology purposes.
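The "predict what people are about to say from earlier content" idea can be illustrated with a toy bigram model; Clear Audio's actual model is not described in the abstract, so this is purely a sketch of the prediction-from-history principle.

```python
# Toy bigram predictor: guess the next word from the previous one,
# trained on the speaker's earlier utterances. Illustrative only;
# not the Clear Audio algorithm.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("i want water i want water i want rest")
print(predict_next(model, "want"))
# → water
```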


2020 ◽  
pp. 1459-1488
Author(s):  
Wendy A. Powell ◽  
Natalie Corbett ◽  
Vaughan Powell

Virtual Humans are here to stay. From the voice in your satNav to Apple's “Siri”, we are accustomed to engaging in some level of conversation with our technology, and it is rapidly becoming apparent that natural language interfaces have potential in a wide range of applications. Whilst audio-only communication has its place, most natural conversations take place face to face, and believable embodiment of virtual humans is the necessary next step for them to be fully integrated into our lives. Much progress has been made in the creation of relatable characters for film, but real-time facial animation presents a unique set of design challenges. This chapter examines the role of the virtual human, its history, and approaches to design and creation. It looks at ways in which they can be brought to life, interacting, assisting and learning. It concludes with a view into popular culture and perceptions of the future, where fact and fiction meet.


2020 ◽  
pp. 1436-1458
Author(s):  
Yuncheng Jiang ◽  
Mingxuan Yang

Traditional web search is essentially based on a combination of textual keyword search with an importance ranking of documents derived from the link structure of the web. One dimension that has not been captured to its full extent, however, is semantics. Combining search and semantics gives rise to the idea of semantic search. The purpose of this article is to present new methods for semantic search that address some shortcomings of existing approaches. Concretely, the authors propose two novel methods for semantic search that combine formal concept analysis, rough set theory, and similarity reasoning. In particular, the authors use Wikipedia to compute the similarity of concepts (i.e., keywords). The experimental results show that the authors' proposals perform better than some of the most representative similarity search methods and agree with intuitions about human judgements.
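One common way to derive concept similarity from Wikipedia is to compare the sets of articles each concept's page links to. The sketch below uses Jaccard overlap on invented link sets; the authors' actual measure combines formal concept analysis and rough sets and may differ substantially.

```python
# Hedged sketch of link-overlap similarity between two concepts,
# as might be computed from Wikipedia. Link sets are invented
# stand-ins, not real Wikipedia data.

def jaccard_similarity(links_a, links_b):
    """|A ∩ B| / |A ∪ B| over two sets of linked articles."""
    a, b = set(links_a), set(links_b)
    return len(a & b) / len(a | b) if a | b else 0.0

car = {"Vehicle", "Engine", "Road", "Driver"}
truck = {"Vehicle", "Engine", "Road", "Cargo"}
banana = {"Fruit", "Plant", "Food"}

print(jaccard_similarity(car, truck))   # 3 shared / 5 total = 0.6
print(jaccard_similarity(car, banana))  # no overlap = 0.0
```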


2020 ◽  
pp. 1298-1313
Author(s):  
Robert Niewiadomski ◽  
Dennis Anderson

Our inventions have defined the work we engaged in for centuries, creating new industries and employment opportunities around them. They have, however, often had unforeseen consequences that affected the way we lived and interacted with each other, and redefined our societal rules. The established narrative portrays the impact of major technological leaps on employment as a temporary disruption: many find themselves without the employment taken from them by efficient, labor-saving inventions, but in the long run, through gradual adaptation, improved education, and higher qualifications, everyone benefits. In this chapter, the authors explore the impact of the rapid expansion of artificial intelligence (AI) on the labor market. The authors argue that this rather optimistic, even naïve scenario collapses when confronted with the exponential growth of AI, in particular with the potential arrival of syneoids: robotic forms of “strong AI” possessing, or even exceeding, the full range of human cognitive abilities.


2020 ◽  
pp. 1232-1271
Author(s):  
Stuart Armstrong ◽  
Roman V. Yampolskiy

Superintelligent systems are likely to present serious safety issues, since such entities would have great power to control the future according to possibly misaligned goals or motivation systems. Oracle AIs (OAIs), confined AIs that can only answer questions and do not act in the world, represent one particular solution to this problem. However, even Oracles are not particularly safe: humans remain vulnerable to traps, social engineering, or simply becoming dependent on the OAI. Still, OAIs are strictly safer than general AIs, and many extra layers of precaution can be added on top of them. This paper begins with the definition of the OAI confinement problem. After an analysis of existing solutions and their shortcomings, a protocol is proposed for a more secure confinement environment, one that might delay negative effects from a potentially unfriendly superintelligence while allowing future research and development of superintelligent systems.


2020 ◽  
pp. 1199-1212
Author(s):  
Syeda Erfana Zohora ◽  
A. M. Khan ◽  
Arvind K. Srivastava ◽  
Nhu Gia Nguyen ◽  
Nilanjan Dey

In the last few decades there has been a tremendous amount of research on synthetic emotional intelligence related to affective computing, which has advanced significantly from a technological point of view: from academic studies, systematic learning, and the development of knowledge and affective technology to a wide range of real-time systems and their applications. The objective of this paper is to give a general overview of emotional intelligence in affective computing. The overview of the state of the art comprises basic definitions and terminology and a study of the current technological scenario. The paper also proposes research activities, with a detailed study of ethical issues and challenges, with an emphasis on affective computing. Lastly, we present a broad area of applications such as interactive emotional learning systems and the modeling of emotional agents, with the intention of employing these agents in human-computer interaction as well as in education.
