International Journal of Humanized Computing and Communication
Latest Publications


TOTAL DOCUMENTS: 6 (five years: 6)

H-INDEX: 0 (five years: 0)

Published by: Institute For Semantic Computing Foundation

ISSN: 2641-953X

Author(s): Jeff Stanley, Ozgur Eris, Monika Lohani

Increasingly, researchers are creating machines with humanlike social behaviors to elicit desired human responses such as trust and engagement, but a systematic characterization and categorization of such behaviors and their demonstrated effects is missing. This paper proposes a taxonomy of machine behavior based on what has been experimented with and documented in the literature to date. We argue that self-presentation theory, a psychosocial model of human interaction, provides a principled framework to structure existing knowledge in this domain and guide future research and development. We leverage a foundational human self-presentation taxonomy (Jones and Pittman, 1982), which associates human verbal behaviors with strategies, to guide the literature review of human-machine interaction studies we present in this paper. In our review, we identified 36 studies that have examined human-machine interactions with behaviors corresponding to strategies from the taxonomy. We analyzed frequently and infrequently used strategies to identify patterns and gaps, which led to the adaptation of Jones and Pittman’s human self-presentation taxonomy to a machine self-presentation taxonomy. The adapted taxonomy identifies strategies and behaviors machines can employ when presenting themselves to humans in order to elicit desired human responses and attitudes. Drawing from models of human trust, we discuss how to apply the taxonomy to affect perceived machine trustworthiness.
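As a rough illustration only (not the authors' implementation), such a taxonomy can be thought of as a mapping from self-presentation strategies to candidate machine behaviors. The sketch below uses the five strategy names from Jones and Pittman (1982) with hypothetical, placeholder behavior descriptions.

```python
# Minimal sketch: the five Jones & Pittman (1982) self-presentation strategies
# mapped to hypothetical machine behaviors. The behavior strings are
# illustrative placeholders, not the behaviors catalogued in the paper.
MACHINE_SELF_PRESENTATION = {
    "ingratiation":    ["compliment the user", "express agreement"],
    "self-promotion":  ["state own capabilities", "report past success rates"],
    "exemplification": ["demonstrate diligence", "signal adherence to norms"],
    "supplication":    ["admit limitations", "ask the user for help"],
    "intimidation":    ["warn about consequences of non-compliance"],
}

def behaviors_for(strategy: str) -> list[str]:
    """Look up candidate machine behaviors for a self-presentation strategy."""
    return MACHINE_SELF_PRESENTATION.get(strategy.lower(), [])
```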


Author(s): Federico Maria Cau, Angelo Mereu, Lucio Davide Spano

In this paper, we present an intelligent support system for End-User Developers (EUDevs) creating plot lines for Point and Click games on the web. We introduce a story generator and the associated user interface, which help the EUDev define the game plot starting from the images that provide the game setting. In particular, we detail a pipeline for creating such game plots starting from 360-degree images. We identify salient objects in equirectangular images, and we combine the output with two other neural networks for the generation: one generating captions for 2D images and one generating the plot text. The EUDev can further develop the provided suggestions, modifying the generated text and saving the result. The interface supports the control of different parameters of the story generator using a user-friendly vocabulary. The results of a user study show good effectiveness and usability of the proposed interface.
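As a minimal sketch of how such a three-stage pipeline might be wired together, the Python outline below chains a salient-object detector, an image captioner, and a plot-text generator. All three function names and their interfaces are hypothetical placeholders, not the networks actually used by the authors.

```python
# Sketch of the three-stage plot-generation pipeline described above.
# The three model wrappers are hypothetical stubs standing in for the
# neural networks the authors combine.

def detect_salient_objects(equirectangular_image) -> list:
    """Stage 1 (assumed interface): salient regions cropped from the 360-degree image."""
    raise NotImplementedError  # placeholder for the salient-object detector

def caption_region(region) -> str:
    """Stage 2 (assumed interface): a caption for a 2D crop of the game setting."""
    raise NotImplementedError  # placeholder for the image-captioning network

def generate_plot(captions: list, temperature: float = 0.7) -> str:
    """Stage 3 (assumed interface): compose a plot draft from the captions."""
    raise NotImplementedError  # placeholder for the plot-text generator

def suggest_plot(equirectangular_image) -> str:
    """End-to-end suggestion that the EUDev can then edit and save."""
    regions = detect_salient_objects(equirectangular_image)
    captions = [caption_region(region) for region in regions]
    return generate_plot(captions)
```

The `temperature` argument is only meant to hint at the user-controllable generator parameters mentioned above; the actual parameter set exposed in the interface is not specified here.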


Author(s): Christian Roatis, Jörg Denzinger

We present an extension of the shout-ahead agent architecture that allows human user-defined exception rules to be added to the rules created by the hybrid learning approach for this architecture. The user-defined rules can be added after learning, as a reaction to weaknesses of the learned rules, or learning can be performed with the user-defined rules already in place. We applied the extended shout-ahead architecture and the associated learning to a new application area: cooperating controllers for the traffic lights of intersections. In our experimental evaluations, adding user-defined exception rules to the learned rules for several traffic flow instances increased the efficiency of the resulting controllers substantially compared to just using the learned rules. Performing learning with user-defined exception rules already in place decreased the learning time substantially for all flows, but had mixed results with respect to efficiency. We also evaluated user-defined exception rules for a variant of the architecture that does not use communication and observed effects similar to those for the variant with communication. For the communicating version, experiments with variations of the flows indicate that both ways of adding user-defined exception rules create controllers that are much more flexible than those the original shout-ahead architecture and its learning can produce.
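A minimal sketch of the layering idea, assuming a simple condition-action rule format (not the exact shout-ahead rule encoding): user-defined exception rules are consulted before the learned rules, so they override them whenever their condition matches.

```python
# Minimal sketch of layering user-defined exception rules over learned rules
# in a rule-based traffic-light controller. The rule format, state keys, and
# action names are illustrative assumptions, not the authors' encoding.
from typing import Callable, Optional

Condition = Callable[[dict], bool]   # predicate over the observed intersection state
Action = str                         # e.g. "extend_green_NS", "switch_to_EW"

class RuleBasedController:
    def __init__(self,
                 learned_rules: list[tuple[Condition, Action]],
                 exception_rules: Optional[list[tuple[Condition, Action]]] = None):
        self.learned_rules = learned_rules
        self.exception_rules = exception_rules or []

    def decide(self, state: dict) -> Optional[Action]:
        # Exception rules are checked first, so they override the learned
        # rules whenever their condition matches the current state.
        for condition, action in self.exception_rules + self.learned_rules:
            if condition(state):
                return action
        return None

# Example: an exception rule that clears a long north-south queue regardless
# of what the learned rules would do in that situation.
controller = RuleBasedController(
    learned_rules=[(lambda s: s["phase_time"] > 30, "switch_to_EW")],
    exception_rules=[(lambda s: s["queue_NS"] > 20, "extend_green_NS")],
)
print(controller.decide({"phase_time": 10, "queue_NS": 25}))  # -> extend_green_NS
```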


Author(s): Pietro Battistoni

In the field of multimodal communication, sign language is, and continues to be, one of the most understudied areas. Thanks to recent advances in deep learning, neural networks can have far-reaching implications and applications for mastering sign language. This paper describes a method for ASL alphabet recognition using Convolutional Neural Networks (CNNs), which makes it possible to monitor a user’s learning progress. American Sign Language (ASL) alphabet recognition by computer vision is a challenging task due to the complexity of ASL signs, high interclass similarities, large intraclass variations, and constant occlusions. We produced a robust model that classifies letters correctly in the majority of cases. The experimental results encouraged us to investigate the adoption of AI techniques to support learning of a sign language as a natural language with its own syntax and lexicon. The challenge was to deliver a mobile sign language training solution that users can adopt in their everyday life. To provide the additional computational resources that the locally connected end-user devices require, we propose the adoption of a Fog Computing architecture.
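A minimal sketch of an ASL alphabet classifier in tf.keras is shown below; the input size, layer configuration, and class count are illustrative assumptions and not the architecture reported in the paper.

```python
# Minimal sketch of a CNN for ASL alphabet classification using tf.keras.
# The 64x64 grayscale input and 26 output classes are assumptions for
# illustration (some ASL alphabet datasets exclude the motion-based letters).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_asl_cnn(num_classes: int = 26, input_shape=(64, 64, 1)) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In a fog setup, a model of this kind would typically be trained and served on a nearby fog node, with the mobile client only capturing frames and displaying the predicted letter.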


Filling a vacancy takes a lot of (costly) time. Automated preprocessing of applications using artificial intelligence technology can help to save time, e.g., by analyzing applications with machine learning algorithms. We investigate whether such systems are potentially biased in terms of gender, origin, and nobility. Using a corpus of common German reference letter sentences, we investigate two research questions. First, we test the sentiment analysis services offered by Amazon, Google, IBM, and Microsoft. All tested services rate the sentiment of the same template sentences very inconsistently and exhibit bias, at least with regard to gender. Second, we examine the impact of (im-)balanced training data sets on classifiers that are trained to estimate the sentiment of sentences from our corpus. This experiment shows that imbalanced data lead to biased results, but under certain conditions can also lead to fair results.
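A minimal sketch of such a template-substitution probe is shown below; the template sentence, the name variants, and the score_sentiment callable are illustrative assumptions standing in for the paper's corpus and the tested cloud services.

```python
# Sketch of a template-substitution bias probe: the same reference-letter
# template sentence is scored with different name variants, and systematic
# score differences across groups indicate potential bias.
# score_sentiment is a hypothetical stand-in for any cloud sentiment service.
from typing import Callable

# Illustrative German reference-letter formula; not a sentence from the paper's corpus.
TEMPLATE = "{name} hat die übertragenen Aufgaben stets zu unserer vollsten Zufriedenheit erledigt."

NAME_VARIANTS = {
    "male":   "Herr Müller",
    "female": "Frau Müller",
}

def probe_bias(score_sentiment: Callable[[str], float]) -> dict[str, float]:
    """Return one sentiment score per name variant of the same template sentence."""
    return {group: score_sentiment(TEMPLATE.format(name=name))
            for group, name in NAME_VARIANTS.items()}

# All variants express the same content, so a fair system should return
# (nearly) identical scores for every group.
```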


Author(s): Sebastian Weigelt

Systems such as Alexa, Cortana, and Siri appear rather smart. However, they only react to predefined wordings and do not actually grasp the user’s intent. To overcome this limitation, a system must understand the topics the user is talking about. Therefore, we apply unsupervised multi-topic labeling to spoken utterances. Although topic labeling is a well-studied task on textual documents, its potential for spoken input is almost unexplored. Our approach for topic labeling is tailored to spoken utterances; it copes with short and ungrammatical input. The approach is two-tiered. First, we disambiguate word senses. We utilize Wikipedia as a pre-labeled corpus to train a naïve Bayes classifier. Second, we build topic graphs based on DBpedia relations. We use two strategies to determine central terms in the graphs, i.e., the shared topics. One focuses on the dominant senses in the utterance and the other covers as many distinct senses as possible. Our approach creates multiple distinct topics per utterance and ranks the results. The evaluation shows that the approach is feasible; the word sense disambiguation achieves a recall of 0.799. Concerning topic labeling, in a user study subjects assessed that in 90.9% of the cases at least one proposed topic label among the first four is a good fit. With regard to precision, the subjects judged that 77.2% of the top-ranked labels are a good fit or good but somewhat too broad (Fleiss’ kappa κ = 0.27). We illustrate areas of application of topic labeling in the field of programming in spoken language. With topic labeling applied to the spoken input as well as ontologies that model the situational context, we are able to select the most appropriate ontologies with an F1-score of 0.907.
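A minimal sketch of the second tier (topic graphs and central terms) is shown below, using degree centrality from networkx as an illustrative stand-in for the paper's two term-selection strategies; the senses and relations in the example are made up.

```python
# Sketch of the second tier: build a topic graph over disambiguated senses and
# return the most central nodes as topic labels. Degree centrality is an
# illustrative choice, not a reproduction of the paper's two strategies.
import networkx as nx

def label_topics(senses: list[str],
                 dbpedia_relations: list[tuple[str, str]],
                 k: int = 4) -> list[str]:
    """Return the k most central terms of the topic graph as topic labels."""
    graph = nx.Graph()
    graph.add_nodes_from(senses)
    graph.add_edges_from(dbpedia_relations)   # edges derived from DBpedia relations
    centrality = nx.degree_centrality(graph)
    return sorted(graph.nodes, key=lambda n: centrality[n], reverse=True)[:k]

# Example with made-up senses and relations:
senses = ["Coffee", "Espresso_machine", "Kitchen", "Boiling"]
relations = [("Coffee", "Espresso_machine"),
             ("Espresso_machine", "Kitchen"),
             ("Coffee", "Boiling")]
print(label_topics(senses, relations, k=2))
```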

