Intelligent Interaction
Recently Published Documents


TOTAL DOCUMENTS: 115 (five years: 33)

H-INDEX: 8 (five years: 2)

2021 · Vol 11 (3-4) · pp. 1-34
Author(s): Yu Zhang, Bob Coecke, Min Chen

In many applications, while machine learning (ML) can be used to derive algorithmic models to aid decision processes, it is often difficult to learn a precise model when the number of similar data points is limited. One example of such applications is data reconstruction from historical visualizations, many of which encode precious data whose numerical records have been lost. On the one hand, there is not enough similar data for training an ML model. On the other hand, manual reconstruction of the data is both tedious and arduous. Hence, a desirable approach is to train an ML model dynamically using interactive classification, so that, after some training, the model can complete the data reconstruction tasks with minimal human intervention. For this approach to be effective, the number of annotated data objects used for training the ML model should be as small as possible, while the number of data objects to be reconstructed automatically should be as large as possible. In this article, we present a novel technique for the machine to initiate intelligent interactions that reduce the user's interaction cost in interactive classification tasks. The technique of machine-initiated intelligent interaction (MI3) builds on a generic framework featuring active sampling and default labeling. To demonstrate the MI3 approach, we use the well-known cholera map visualization by John Snow as an example, as it features three instances of MI3 pipelines. The experiment confirmed the merits of the MI3 approach.
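The abstract does not detail the pipeline, but its two named ingredients can be illustrated. Below is a minimal, hypothetical sketch of an interactive classification loop combining active sampling (query the user on the object the model is least certain about) and default labeling (accept the model's own prediction once its confidence exceeds a threshold); the classifier, threshold, and `ask_user` callback are placeholders, not the authors' implementation.

```python
# Hypothetical sketch of an active-sampling + default-labeling loop.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def interactive_classification(X, ask_user, conf_threshold=0.9, max_queries=50):
    """X: feature vectors of the data objects; ask_user(i) returns the
    user's label for object i (this is where the interaction cost lies)."""
    labels = np.full(len(X), -1)                  # -1 = not labeled yet
    model = RandomForestClassifier(n_estimators=100, random_state=0)

    # Bootstrap with a handful of user-provided labels.
    for i in np.random.choice(len(X), size=5, replace=False):
        labels[i] = ask_user(i)

    for _ in range(max_queries):
        known = labels != -1
        unknown = np.where(~known)[0]
        if unknown.size == 0:
            break
        model.fit(X[known], labels[known])
        confidence = model.predict_proba(X[unknown]).max(axis=1)

        # Default labeling: keep the model's answer where it is confident.
        sure = confidence >= conf_threshold
        if sure.any():
            labels[unknown[sure]] = model.predict(X[unknown[sure]])
        if sure.all():
            continue

        # Active sampling: ask the user about the least confident object.
        query = unknown[np.argmin(confidence)]
        labels[query] = ask_user(query)
    return labels
```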


Sensors · 2021 · Vol 22 (1) · pp. 20
Author(s): Boštjan Šumak, Saša Brdnik, Maja Pušnik

To equip computers with human communication skills and to enable natural interaction between the computer and a human, intelligent solutions are required based on artificial intelligence (AI) methods, algorithms, and sensor technology. This study aimed at identifying and analyzing the state-of-the-art AI methods, algorithms, and sensor technology in existing human–computer intelligent interaction (HCII) research to explore trends in HCII research, categorize existing evidence, and identify potential directions for future research. We conducted a systematic mapping study of the HCII body of research. Four hundred fifty-four studies published in various journals and conferences between 2010 and 2021 were identified and analyzed. Studies in the HCII and intelligent user interface (IUI) fields have primarily focused on intelligent recognition of emotions, gestures, and facial expressions using sensor technology such as cameras, EEG, Kinect, wearable sensors, eye trackers, gyroscopes, and others. Researchers most often apply deep-learning and instance-based AI methods and algorithms. The support vector machine (SVM) is the most widely used algorithm for various kinds of recognition, primarily emotion, facial expression, and gesture recognition. The convolutional neural network (CNN) is the most frequently used deep-learning algorithm for emotion recognition, facial recognition, and gesture recognition solutions.
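As a concrete illustration of the kind of pipeline the mapping study reports on, the following minimal, hypothetical example trains an SVM classifier on pre-extracted sensor features (e.g., EEG band power or facial-landmark distances) for emotion recognition; the data are random placeholders and the feature extraction is assumed to have been done elsewhere.

```python
# Illustrative only: SVM-based emotion recognition on placeholder features.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # 200 samples, 64 sensor features
y = rng.integers(0, 4, size=200)        # 4 hypothetical emotion classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```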


Author(s): Ambika Patidar, Rishab Koul, Tanishq Varshney, Kaushiv Agarwal, Rutika Patil

Communicating with employees through forums and emails has become an increasingly popular way for many multinational companies to provide human resource services in real time. Today, employee chat service agents are often replaced by conversational software agents, or chatbots: systems designed to communicate with human users through natural language, generally based on artificial intelligence (AI), and to convincingly simulate the way humans behave as dialogue partners. Time- and cost-saving opportunities have led to the widespread deployment of AI-based chatbots. Chatbots are one of the most basic and popular examples of human–computer intelligent interaction (HCII). We propose a chatbot that can dynamically respond to employees' human resource queries. The proposed HR system is based on a Microsoft Cognitive Services chatbot. This Microsoft Teams-based platform provides a broad foundation of intelligence and is trained on various data sets provided by the organization's HR department.
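The abstract does not describe the matching logic, so the sketch below shows one common, generic approach to an FAQ-style HR chatbot: a TF-IDF retrieval step that maps an employee query to the closest known question. It is purely illustrative and does not reproduce the Microsoft Cognitive Services API; the FAQ entries and threshold are invented for the example.

```python
# Illustrative only: TF-IDF retrieval over hypothetical HR FAQ pairs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder FAQ entries; a real deployment would load these from HR data.
faq = {
    "How many leave days do I have left?":
        "Leave balances are shown in the HR portal under 'My Leave'.",
    "When is payday?":
        "Salaries are credited on the last working day of each month.",
    "How do I claim travel expenses?":
        "Submit receipts through the expense module within 30 days.",
}

questions = list(faq.keys())
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(query: str, threshold: float = 0.2) -> str:
    """Return the stored answer whose question best matches the query."""
    sims = cosine_similarity(vectorizer.transform([query]), question_vectors)[0]
    best = int(sims.argmax())
    if sims[best] < threshold:
        return "Sorry, I will forward this question to an HR agent."
    return faq[questions[best]]

print(answer("when do we get paid"))
```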


2021 · Vol 13 (17) · pp. 9923
Author(s): Shaofeng Wang, Gaojun Shi, Mingjie Lu, Ruyi Lin, Junfeng Yang

A smart learning environment, featuring personalization, real-time feedback, and intelligent interaction, provides the primary conditions for actively participating in online education. Identifying the factors that influence active online learning in a smart learning environment is critical for proposing targeted improvement strategies and enhancing the effectiveness of active online learning. This study constructs a research framework of active online learning from theories of learning satisfaction, the Technology Acceptance Model (TAM), and the smart learning environment. We hypothesize that the following factors influence active online learning: typical characteristics of a smart learning environment, perceived usefulness and ease of use, social isolation, learning expectations, and learning complaints. A total of 528 valid questionnaires were collected through online platforms. A partial least squares structural equation modeling (PLS-SEM) analysis using SmartPLS 3 found that: (1) the personalization, intelligent interaction, and real-time feedback of the smart learning environment all have a positive impact on active online learning; (2) the perceived ease of use and perceived usefulness in the TAM positively affect active online learning; and (3) the study identified new variables that affect active online learning: learning expectations positively impact active online learning, while learning complaints and social isolation negatively affect it. Based on these results, this study proposes an online smart teaching model and discusses how to promote active online learning in a smart environment.
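For readers who want to see how such a path model can be specified in code, the following sketch estimates the hypothesized direct paths with the Python SEM package semopy on randomly generated placeholder scores. It is only an illustration of the model structure: the study itself uses PLS-SEM in SmartPLS 3, and the construct names and data here are stand-ins, not the authors' questionnaire items.

```python
# Illustration only: specifying the hypothesized paths in semopy
# (covariance-based SEM, used here as a stand-in for PLS-SEM).
import numpy as np
import pandas as pd
from semopy import Model

constructs = ["Personalization", "Interaction", "Feedback", "PEOU", "PU",
              "Expectations", "Complaints", "Isolation", "ActiveLearning"]
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(528, len(constructs))), columns=constructs)

# One structural equation: every hypothesized factor as a predictor of
# active online learning (placeholder composite scores, not survey items).
desc = ("ActiveLearning ~ Personalization + Interaction + Feedback"
        " + PEOU + PU + Expectations + Complaints + Isolation")

model = Model(desc)
model.fit(data)
print(model.inspect())   # path estimates, standard errors, p-values
```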


Author(s): Vijay A. Kotkar, et al.

This paper aims at developing an intelligent interactive robot with multiple functions that provides entertainment and companionship. To obtain information accurately, we use speech recognition to drive the operations. The speech recognition results are applied to the robot's behavior, planning, interactions with the voice assistant, and interactions with the user. The robot has a simple design and uses a microphone for speech recognition. In addition, the system supports room automation and notice display through voice messages as examples of intelligent interaction between humans and robots.
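The abstract does not name the speech-recognition stack, so the sketch below assumes the Python SpeechRecognition package with the Google Web Speech API purely for illustration; the command phrases and actions (room automation, notice display) are hypothetical stand-ins for the robot's actual handlers.

```python
# Illustrative voice-command loop; library choice and commands are assumptions.
import speech_recognition as sr

COMMANDS = {
    "light on":    lambda: print("Room automation: turning the light on"),
    "light off":   lambda: print("Room automation: turning the light off"),
    "show notice": lambda: print("Displaying the latest notice"),
}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Listening for a command...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio).lower()
    print("Heard:", text)
    for phrase, action in COMMANDS.items():
        if phrase in text:
            action()
            break
    else:
        print("Command not recognized; passing to the voice assistant.")
except sr.UnknownValueError:
    print("Could not understand the audio.")
```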

