Towards Hybrid Model for Human-Computer Interaction in Latvian

Author(s):  
Inguna Skadiņa ◽  
Didzis Goško

Human-computer interaction, especially in the form of dialogue systems and chatbots, has become extremely popular during the last decade. The dominant approach in the recent development of practical virtual assistants is the application of deep learning techniques. However, for a less-resourced language (or domain), applying deep learning can be very complicated due to the lack of necessary training data. In this paper, we discuss the possibility of applying a hybrid approach to dialogue modelling that combines a data-driven approach with a knowledge-based approach. Our hypothesis is that by combining different agents (a general-domain chatbot, a frequently-asked-questions module and a goal-oriented virtual assistant) into a single virtual assistant, we can improve the adequacy and fluency of the conversation. We investigate the suitability of several widely used techniques in less-resourced settings. We demonstrate the feasibility of our approach for Latvian, a morphologically rich, less-resourced language, through an initial virtual assistant prototype for the student service of the University of Latvia.
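The agent-combination idea can be sketched as a confidence-based dispatcher over three agents. The agent names, the tiny FAQ entries, and the confidence scores below are illustrative assumptions, not details taken from the paper's prototype:

```python
def faq_agent(utterance):
    """FAQ module: return (answer, confidence) from a tiny hand-written FAQ."""
    faq = {
        "opening hours": "The student service is open 9:00-17:00.",
        "contact": "Please write to the student service e-mail address.",
    }
    for key, answer in faq.items():
        if key in utterance.lower():
            return answer, 0.9
    return None, 0.0

def goal_agent(utterance):
    """Goal-oriented assistant: rule-based handling of one sample goal."""
    if "transcript" in utterance.lower():
        return "Please state your student ID to order a transcript.", 0.8
    return None, 0.0

def chitchat_agent(utterance):
    """General-domain chatbot fallback; always answers, with low confidence."""
    return "I am not sure, but I am happy to chat!", 0.1

def hybrid_assistant(utterance):
    """Route the utterance to the agent with the highest confidence."""
    candidates = [faq_agent(utterance), goal_agent(utterance),
                  chitchat_agent(utterance)]
    answer, _ = max(candidates, key=lambda c: c[1])
    return answer
```

Because the chitchat agent always answers with low confidence, the dispatcher degrades gracefully whenever the specialised agents have nothing to say.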

2018 ◽  
Author(s):  
Uri Shaham

Abstract: Biological measurements often contain systematic errors, also known as "batch effects", which may invalidate downstream analysis when not handled correctly. The problem of removing batch effects is of major importance in the biological community. Despite recent advances in this direction via deep learning techniques, most current methods may not fully preserve the true biological patterns the data contains. In this work we propose a deep learning approach for batch effect removal. The crux of our approach is learning a batch-free encoding of the data that represents its intrinsic biological properties but not the batch effects. In addition, we encode the systematic factors through a decoding mechanism and require accurate reconstruction of the data. Altogether, this allows us to fully preserve the true biological patterns represented in the data. Experimental results are reported on data obtained from two high-throughput technologies, mass cytometry and single-cell RNA-seq. Beyond good performance on training data, we also observe that our system performs well on test data obtained from new patients that was not available at training time. Our method is easy to use, and publicly available code can be found at https://github.com/ushaham/BatchEffectRemoval2018.
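The decomposition at the heart of this approach, separating a batch-free part from a batch-dependent part while keeping reconstruction exact, can be illustrated with a deliberately simplified linear analogue. Here the "batch code" is just the per-batch mean, standing in for the paper's learned encoder/decoder; this is a sketch of the idea, not the authors' method:

```python
import numpy as np

def batch_free_encode(data, batch_ids):
    """Split each sample into a batch-free part and a batch code.

    The batch code here is simply the per-batch mean (a linear stand-in
    for a learned encoding of the systematic factors), so that the two
    parts added back together reconstruct the data exactly.
    """
    batch_ids = np.asarray(batch_ids)
    codes = {b: data[batch_ids == b].mean(axis=0)
             for b in np.unique(batch_ids)}
    batch_part = np.stack([codes[b] for b in batch_ids])
    return data - batch_part, batch_part

# Two batches measuring the same biology, shifted by a batch effect.
data = np.array([[1.0, 1.0], [3.0, 3.0], [11.0, 11.0], [13.0, 13.0]])
encoding, batch_part = batch_free_encode(data, [0, 0, 1, 1])
```

After encoding, both batches share the same batch-free representation, while adding the batch part back recovers the original measurements, mirroring the paper's reconstruction requirement.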


interactions ◽  
2013 ◽  
Vol 20 (5) ◽  
pp. 50-57 ◽  
Author(s):  
Ben Shneiderman ◽  
Kent Norman ◽  
Catherine Plaisant ◽  
Benjamin B. Bederson ◽  
Allison Druin ◽  
...  

Author(s):  
Vu Tuan Hai ◽  
Dang Thanh Vu ◽  
Huynh Ho Thi Mong Trinh ◽  
Pham The Bao

Recent advances in deep learning models have shown promising potential for object removal, the task of replacing undesired objects with appropriate pixel values using the known context. Object removal with deep learning is commonly solved by modelling it as Img2Img (image-to-image) translation or inpainting. Instead of dealing with a large context, this paper targets a specific application of object removal: erasing the traces of braces from an image of teeth with braces (the braces2teeth problem). We solve the problem with three methods corresponding to different datasets. First, we use the CycleGAN model to cope with the lack of paired training data. In the second case, we create pseudo-paired data to train the Pix2Pix model. In the last case, we combine GraphCut with a generative inpainting model to build a user-interactive tool that can improve the result when the user is not satisfied with previous results. To the best of our knowledge, this study is one of the first attempts to address the braces2teeth problem with deep learning techniques, and it can be applied in various fields, from health care to entertainment.
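The inpainting formulation, filling masked pixels from their surrounding context, can be sketched with a classical iterative neighbour-averaging scheme. This is a toy diffusion-style baseline, not the generative inpainting model the paper uses:

```python
import numpy as np

def inpaint_mean(image, mask, iters=50):
    """Fill masked pixels by repeatedly averaging their 4-neighbours.

    A crude classical stand-in for learned inpainting: masked values
    diffuse in from the known context until they settle.
    """
    out = image.astype(float).copy()
    out[mask] = out[~mask].mean()            # crude initial guess
    for _ in range(iters):
        padded = np.pad(out, 1, mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = neigh[mask]              # only masked pixels change
    return out

# A flat image with a hole: the hole should be filled seamlessly.
image = np.full((8, 8), 5.0)
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True
restored = inpaint_mean(image, mask)
```

In the paper's pipeline, GraphCut would supply the mask (the braces region selected interactively) and a generative model would supply far more plausible texture than this smooth fill.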


2018 ◽  
Vol 7 (3.27) ◽  
pp. 258 ◽  
Author(s):  
Yecheng Yao ◽  
Jungho Yi ◽  
Shengjun Zhai ◽  
Yuwen Lin ◽  
Taekseung Kim ◽  
...  

The decentralization of cryptocurrencies has greatly reduced the level of central control over them, impacting international relations and trade. Furthermore, wide fluctuations in cryptocurrency prices indicate an urgent need for an accurate way to forecast them. This paper proposes a novel method to predict cryptocurrency price by considering factors such as market cap, volume, circulating supply, and maximum supply, based on deep learning techniques such as the recurrent neural network (RNN) and the long short-term memory (LSTM) network, which are effective models for sequential training data, with the LSTM being better at recognizing longer-term associations. The proposed approach is implemented in Python and validated on benchmark datasets. The results verify the applicability of the proposed approach for the accurate prediction of cryptocurrency price.
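Before an RNN or LSTM can be trained on a price series, the series is typically reframed as supervised windows: each input is a run of past values and the target is the next value. The window length below is an illustrative choice, not a parameter from the paper:

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a 1-D price series into supervised (X, y) pairs.

    Each row of X holds the previous `lookback` values and the matching
    entry of y is the value that follows -- the standard framing fed to
    an RNN/LSTM for one-step-ahead forecasting.
    """
    X = np.stack([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

prices = np.arange(10.0)          # stand-in for a daily closing-price series
X, y = make_windows(prices, lookback=3)
```

Additional factors such as volume or circulating supply would be stacked as extra feature columns per time step in the same windowed layout.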


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2972
Author(s):  
Qinghua Gao ◽  
Shuo Jiang ◽  
Peter B. Shull

Hand gesture classification and finger angle estimation are both critical for intuitive human–computer interaction. However, most approaches study them in isolation. We thus propose a dual-output deep learning model to enable simultaneous hand gesture classification and finger angle estimation. Data augmentation and deep learning were used to detect spatial-temporal features via a wristband with ten modified barometric sensors. Ten subjects performed experimental testing by flexing/extending each finger at the metacarpophalangeal joint while the proposed model was used to classify each hand gesture and estimate continuous finger angles simultaneously. A data glove was worn to record ground-truth finger angles. Overall hand gesture classification accuracy was 97.5% and finger angle estimation R² was 0.922, both significantly higher than existing shallow learning approaches used in isolation. The proposed method could be used in human–computer interaction applications and in control environments with both discrete and continuous variables.
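A dual-output model of this kind shares one feature trunk and branches into a classification head and a regression head. The forward pass below is a minimal numpy sketch under assumed layer sizes and random weights; the paper's actual architecture and dimensions are not specified here:

```python
import numpy as np

def dual_output_forward(x, W_shared, W_cls, W_reg):
    """One forward pass of a shared trunk with two heads.

    The trunk maps the sensor window to shared features; one head emits
    a softmax over gesture classes, the other a continuous finger-angle
    estimate per finger.
    """
    h = np.tanh(x @ W_shared)            # shared spatial-temporal features
    logits = h @ W_cls                   # classification head
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                 # softmax over gesture classes
    angles = h @ W_reg                   # regression head: one angle per finger
    return probs, angles

rng = np.random.default_rng(0)
x = rng.normal(size=10)                  # one window from 10 barometric sensors
W_shared = rng.normal(size=(10, 8))
W_cls = rng.normal(size=(8, 4))          # 4 example gesture classes
W_reg = rng.normal(size=(8, 5))          # 5 finger angles
probs, angles = dual_output_forward(x, W_shared, W_cls, W_reg)
```

Training would combine a cross-entropy loss on `probs` with a regression loss on `angles`, backpropagating both through the shared trunk.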


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2308 ◽  
Author(s):  
Dilana Hazer-Rau ◽  
Sascha Meudt ◽  
Andreas Daucher ◽  
Jennifer Spohrs ◽  
Holger Hoffmann ◽  
...  

In this paper, we present a multimodal dataset for affective computing research acquired in a human-computer interaction (HCI) setting. An experimental mobile and interactive scenario was designed and implemented based on a gamified generic paradigm for the induction of dialog-based HCI-relevant emotional and cognitive load states. It consists of six experimental sequences, inducing Interest, Overload, Normal, Easy, Underload, and Frustration. Each sequence is followed by subjective feedback to validate the induction, a respiration baseline to level off the physiological reactions, and a summary of results. Further, prior to the experiment, three questionnaires related to emotion regulation (ERQ), emotional control (TEIQue-SF), and personality traits (TIPI) were collected from each subject to evaluate the stability of the induction paradigm. Based on this HCI scenario, the University of Ulm Multimodal Affective Corpus (uulmMAC), consisting of two homogeneous samples of 60 participants and 100 recording sessions, was generated. We recorded 16 sensor modalities including 4 × video, 3 × audio, and 7 × biophysiological, depth, and pose streams. Further, additional labels and annotations were also collected. After recording, all data were post-processed and checked for technical and signal quality, resulting in the final uulmMAC dataset of 57 subjects and 95 recording sessions. The evaluation of the reported subjective feedback shows significant differences between the sequences, consistent with the induced states, and the analysis of the questionnaires shows stable results. In summary, our uulmMAC database is a valuable contribution to the field of affective computing and multimodal data analysis: acquired in a mobile interactive scenario close to real HCI, it consists of a large number of subjects and allows transtemporal investigations. Validated via subjective feedback and checked for quality issues, it can be used for affective computing and machine learning applications.


2020 ◽  
Author(s):  
Ghazi Abdalla ◽  
Fatih Özyurt

Abstract: In the modern era, Internet usage has become a basic necessity in people's lives. Nowadays, people can shop online and check customers' views about products purchased online. Social networking services enable users to post opinions on public platforms. Analyzing people's opinions helps corporations improve product quality and provide better customer service. However, analyzing this content manually is a daunting task; therefore, we implemented sentiment analysis to automate the process. The entire process includes data collection, pre-processing, word embedding, and sentiment detection and classification using deep learning techniques. Twitter was chosen as the data source, and tweets were collected automatically using Tweepy. In this paper, three deep learning techniques were implemented: CNN, Bi-LSTM and CNN-Bi-LSTM. Each model was trained on three datasets consisting of 50K, 100K and 200K tweets. The experimental results revealed that the models' performance improved as the training data size increased, especially that of the Bi-LSTM model. When trained on the 200K dataset, it achieved about 3% higher accuracy than on the 100K dataset and about 7% higher accuracy than on the 50K dataset. Finally, the Bi-LSTM model scored the highest on all metrics, achieving an accuracy of 95.35%.
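The pre-processing stage of such a pipeline typically normalises raw tweets before word embedding. The specific cleaning rules below (URL, mention, and hashtag handling) are common choices, shown as an assumed sketch rather than the paper's exact procedure:

```python
import re

def preprocess_tweet(text):
    """Typical cleaning before word embedding: lower-case the text,
    strip URLs and user mentions, drop the '#' of hashtags while
    keeping the word itself, and collapse whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", "", text)   # remove URLs
    text = re.sub(r"@\w+", "", text)           # remove user mentions
    text = text.replace("#", "")               # keep hashtag word, drop '#'
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace
```

The cleaned tokens would then be mapped to embedding vectors and fed to the CNN or Bi-LSTM classifier.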


2016 ◽  
Vol 42 (6) ◽  
pp. 782-797 ◽  
Author(s):  
Haifa K. Aldayel ◽  
Aqil M. Azmi

The fact that people freely express their opinions and ideas in no more than 140 characters makes Twitter one of the most prevalent social networking websites in the world. As Twitter is popular in Saudi Arabia, we believe that tweets are a good source for capturing the public's sentiment, especially since the country is in a fractious region. After reviewing the challenges and difficulties that Arabic tweets present, using Saudi Arabia as a basis, we propose our solution. A typical problem is the practice of tweeting in dialectal Arabic. Based on our observations, we recommend a hybrid approach that combines semantic orientation and machine learning techniques. In this approach, a lexicon-based classifier labels the training data, a time-consuming task often performed manually. The output of the lexical classifier is then used as training data for an SVM machine learning classifier. The experiments show that our hybrid approach improved the F-measure of the lexical classifier by 5.76% while the accuracy jumped by 16.41%, achieving an overall F-measure and accuracy of 84.00% and 84.01%, respectively.
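The two-stage idea, a lexicon labels the data, and a learned classifier is trained on those labels, can be sketched in miniature. The seed lexicon, the toy texts, and the per-word vote model (standing in for the SVM) are all illustrative assumptions:

```python
from collections import Counter

POS = {"good", "great", "love"}
NEG = {"bad", "awful", "hate"}

def lexical_label(text):
    """Stage 1: semantic-orientation labelling from a tiny seed lexicon."""
    words = text.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return "pos" if score >= 0 else "neg"

def train_word_weights(texts):
    """Stage 2: treat the lexicon's labels as (noisy) training data.

    A simple per-word vote count stands in for the SVM: words that
    co-occur with lexicon-positive texts accumulate positive weight.
    """
    weights = Counter()
    for t in texts:
        sign = 1 if lexical_label(t) == "pos" else -1
        for w in t.lower().split():
            weights[w] += sign
    return weights

def predict(weights, text):
    """Classify with the learned weights, generalising past the lexicon."""
    score = sum(weights[w] for w in text.lower().split())
    return "pos" if score >= 0 else "neg"

corpus = ["great phone love it",
          "awful battery hate it",
          "bad screen hate the glare"]
weights = train_word_weights(corpus)
```

The payoff of the hybrid scheme is visible even here: `predict` can classify texts containing no lexicon words at all, because stage 2 has learned weights for context words like "battery".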


2018 ◽  
Vol 68 (2) ◽  
pp. 183 ◽  
Author(s):  
M. Justin Sagayaraj ◽  
Jithesh V. ◽  
J.B. Singh ◽  
Dange Roshani ◽  
K.G. Srinivasa

In many engineering domains, cognition is emerging to play a vital role, and it will be crucial in radar engineering for the development of next-generation radars. In this paper, a cognitive architecture for radars is introduced, based on hybrid cognitive architectures. The paper proposes deep learning applications for integrated target classification based on high-resolution radar range profile measurements and target revisit time calculation as case studies. The proposed architecture is based on artificial cognitive systems concepts and provides a basis for addressing cognition in radars, which remains inadequately explored for radar systems. Initial experimental studies on the applicability of deep learning techniques under this approach yielded promising results.

