Scene Understanding Technology of Intelligent Customer Service Robot Based on Deep Learning

2021 ◽  
Vol 2066 (1) ◽  
pp. 012049
Author(s):  
Jianfeng Zhong

Abstract As a value-added service that improves the efficiency of online customer service, customer service robots have been well received by sellers in recent years, because they free customer service staff from heavy consulting workloads, thereby reducing sellers' operating costs and improving the quality of online services. The purpose of this article is to study intelligent customer service robot scene understanding technology based on deep learning. It introduces commonly used deep learning models and training methods, as well as the application fields of deep learning. The problems of the traditional Encoder-Decoder framework are analyzed, and based on these problems the chat model designed in this paper is introduced: an intelligent chat robot model (T-DLLModel) obtained by combining a neural network topic model with a deep learning language model. An independent question understanding experiment based on question retelling and a question understanding experiment combined with contextual information were conducted on dialogues between online shopping customer service agents and customers. The experimental results show that the method achieves better results when the similarity threshold is 0.4, reaching an F value of 0.5. The semantic similarity calculation method proposed in this paper outperforms the traditional method based on keywords and semantic information; in particular, as the similarity threshold increases, the recall of this method is significantly better than that of the traditional method. The method in this article also sorts answers on real customer service dialogue data slightly better than an LDA-based method.
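The abstract gives no implementation details, but the threshold-based question matching it evaluates can be illustrated with a minimal sketch. Only the 0.4 threshold comes from the abstract; the cosine-similarity matching over precomputed sentence vectors and the F-measure helper below are assumptions for illustration, not the paper's T-DLLModel code.

```python
# Minimal sketch (not the paper's code): threshold-based question matching
# over precomputed sentence vectors, plus the F-measure used for evaluation.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two sentence vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_questions(query_vec, kb_vecs, threshold=0.4):
    """Return indices of knowledge-base questions whose similarity to the
    query exceeds the threshold (0.4 reportedly worked best in the paper)."""
    sims = [cosine_similarity(query_vec, v) for v in kb_vecs]
    return [i for i, s in enumerate(sims) if s >= threshold]

def f_score(precision: float, recall: float, beta: float = 1.0) -> float:
    """Standard F-measure used to score question understanding."""
    if precision + recall == 0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
```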

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Yuezhong Wu ◽  
Xuehao Shen ◽  
Qiang Liu ◽  
Falong Xiao ◽  
Changyun Li

Garbage classification is a social issue related to people's livelihood and sustainable development, so enabling service robots to perform intelligent garbage classification autonomously has important research significance. To address the data transmission delays and untimely responses between data sources and the cloud service center in complex systems, and to realize the perception, storage, and analysis of massive multisource heterogeneous data, a garbage detection and classification method based on visual scene understanding is proposed. This method uses knowledge graphs to store and model items in the scene in multimodal forms such as images, videos, and texts. An ESA attention mechanism is added to the backbone of the YOLOv5 network to improve its feature extraction ability; combined with the constructed multimodal knowledge graph, this forms the YOLOv5-Attention-KG model, which is deployed on the service robot to perceive items in the scene in real time. Finally, the model is trained collaboratively on the cloud server side and deployed to edge devices to reason about and analyze the data in real time. The test results show that, compared with the original YOLOv5 model, the proposed model achieves higher detection and classification accuracy, and its real-time performance meets practical requirements. The model proposed in this paper can make intelligent garbage-classification decisions on big data in the scene within a complex system and is well positioned for practical deployment.
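For readers unfamiliar with adding attention to a YOLOv5 backbone, the sketch below shows the general pattern in PyTorch. The ESABlock here is a hypothetical channel-attention stand-in, not the authors' exact ESA module, and the layer sizes are illustrative only.

```python
# Illustrative sketch: a lightweight attention block of the kind that could be
# inserted after a YOLOv5 backbone stage. Not the authors' ESA implementation.
import torch
import torch.nn as nn

class ESABlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: global average pooling + bottleneck MLP + sigmoid gate.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.SiLU(),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))  # re-weight feature channels

# Example: wrap one (dummy) backbone stage with the attention block.
stage = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.SiLU())
attn_stage = nn.Sequential(stage, ESABlock(128))
features = attn_stage(torch.randn(1, 64, 160, 160))
print(features.shape)  # torch.Size([1, 128, 80, 80])
```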


Author(s):  
Amsal Pardamean ◽  
Hilman F. Pardede

Online media are currently the dominant source of information because they are not limited by time and place and offer fast, wide distribution. However, inaccurate news, often referred to as fake news, is a major problem in news dissemination for online media. Inaccurate news is information that is not true, engineered to obscure the real information, and has no factual basis. Usually, inaccurate news is crafted to have mass appeal and is presented in the guise of genuine, legitimate news to deceive readers or change their opinions. Distinguishing inaccurate news from real news can be done with natural language processing (NLP) technologies. In this paper, we propose bidirectional encoder representations from transformers (BERT) for inaccurate news identification. BERT is a language model based on deep learning technologies, and it has been found effective for many NLP tasks. In this study, we use transfer learning and fine-tuning to adapt BERT to inaccurate news identification. The experiments show that our method achieves an accuracy of 99.23%, recall of 99.46%, precision of 98.86%, and F-score of 99.15%, substantially better than traditional methods for the same task.
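A minimal fine-tuning sketch with the Hugging Face Transformers library is shown below; the BERT variant, label convention, learning rate, and data are assumptions, since the abstract does not specify them.

```python
# Sketch of BERT fine-tuning for binary (genuine vs. inaccurate) news
# classification. Hyperparameters and model variant are placeholder assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = genuine, 1 = inaccurate (assumed)
)

texts = ["Example headline to classify."]
labels = torch.tensor([1])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# One fine-tuning step (transfer learning: pretrained BERT weights stay trainable).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
```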


2021 ◽  
Vol 49 (1) ◽  
pp. 030006052098284
Author(s):  
Tingting Qiao ◽  
Simin Liu ◽  
Zhijun Cui ◽  
Xiaqing Yu ◽  
Haidong Cai ◽  
...  

Objective To construct deep learning (DL) models to improve the accuracy and efficiency of thyroid disease diagnosis by thyroid scintigraphy. Methods We constructed DL models with AlexNet, VGGNet, and ResNet. The models were trained separately with transfer learning. We measured each model’s performance with six indicators: recall, precision, negative predictive value (NPV), specificity, accuracy, and F1-score. We also compared the diagnostic performances of first- and third-year nuclear medicine (NM) residents with assistance from the best-performing DL-based model. The Kappa coefficient and average classification time of each model were compared with those of two NM residents. Results The recall, precision, NPV, specificity, accuracy, and F1-score of the three models ranged from 73.33% to 97.00%. The Kappa coefficient of all three models was >0.710. All models performed better than the first-year NM resident but not as well as the third-year NM resident in terms of diagnostic ability. However, the ResNet model provided “diagnostic assistance” to the NM residents. The models provided results at speeds 400 to 600 times faster than the NM residents. Conclusion DL-based models perform well in diagnostic assessment by thyroid scintigraphy. These models may serve as tools for NM residents in the diagnosis of Graves’ disease and subacute thyroiditis.
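As a rough illustration of the transfer-learning setup described, the sketch below builds one of the three architectures (a ResNet) with a replaced classification head in PyTorch/torchvision; the number of classes, the ResNet depth, and the decision to freeze the backbone are assumptions, not details taken from the paper.

```python
# Sketch of ImageNet transfer learning for thyroid scintigraphy classification.
# Class count, ResNet depth, and frozen layers are assumptions for illustration.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # e.g. normal / Graves' disease / subacute thyroiditis (assumed)

def build_resnet(num_classes: int = NUM_CLASSES) -> nn.Module:
    """ResNet pretrained on ImageNet, with the classifier head replaced."""
    model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    for p in model.parameters():          # freeze the convolutional backbone
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # trainable head
    return model

def npv_specificity(tn: int, fp: int, fn: int) -> tuple[float, float]:
    """Two of the six reported indicators, from confusion-matrix counts."""
    npv = tn / (tn + fn) if (tn + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return npv, specificity
```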


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christian Crouzet ◽  
Gwangjin Jeong ◽  
Rachel H. Chae ◽  
Krystal T. LoPresti ◽  
Cody E. Dunn ◽  
...  

Abstract Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5–40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, which is prone to inter-individual variability and can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and RGB color camera. To determine the ground truth, four users independently annotated Prussian blue-labeled CMHs. The deep learning and ratiometric approaches performed better than the phasor analysis approach compared to the ground truth. The deep learning approach had the most precision of the three methods. The ratiometric approach had the most versatility and maintained accuracy, albeit with less precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase the processing speed while maintaining precision and accuracy.
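The ratiometric approach can be illustrated with a short sketch; the blue-to-red ratio criterion and the threshold value below are assumptions chosen for demonstration, not the authors' exact pipeline or calibration.

```python
# Illustrative ratiometric sketch: flag pixels whose blue-to-red ratio exceeds
# a threshold as candidate Prussian blue staining, then report stained area.
import numpy as np

def ratiometric_mask(rgb: np.ndarray, ratio_threshold: float = 1.5) -> np.ndarray:
    """rgb: H x W x 3 uint8 image. Returns a boolean mask of candidate CMH pixels.
    The channel order (RGB) and the threshold are assumptions."""
    img = rgb.astype(np.float32) + 1e-6          # avoid division by zero
    blue_red_ratio = img[..., 2] / img[..., 0]   # blue channel over red channel
    return blue_red_ratio > ratio_threshold

def stained_area_fraction(rgb: np.ndarray) -> float:
    """Fraction of the section area flagged as Prussian blue-positive."""
    return float(ratiometric_mask(rgb).mean())
```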


2021 ◽  
Vol 11 (9) ◽  
pp. 3952
Author(s):  
Shimin Tang ◽  
Zhiqiang Chen

With the ubiquitous use of mobile imaging devices, the collection of perishable disaster-scene data has become unprecedentedly easy. However, computational methods still struggle to understand such images, which carry significant complexity and uncertainty. In this paper, the authors investigate the problem of disaster-scene understanding through a deep-learning approach. Two image attributes are considered: hazard type and damage level. Three deep-learning models are trained and their performance is assessed. The best model for hazard-type prediction achieves an overall accuracy (OA) of 90.1%, and the best damage-level classification model achieves an explainable OA of 62.6%; both models adopt the Faster R-CNN architecture with a ResNet50 network as the feature extractor. It is concluded that hazard types are more identifiable than damage levels in disaster-scene images. Further insights are revealed, including that damage-level recognition suffers more from inter- and intra-class variation, and that hazard-agnostic damage leveling further contributes to the underlying uncertainty.
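A sketch of adapting torchvision's Faster R-CNN with a ResNet50 backbone to a custom label set, as the abstract describes, is given below; the hazard class list and the image size are placeholders, not the authors' dataset definitions.

```python
# Sketch: Faster R-CNN (ResNet50 + FPN) from torchvision with a replaced
# prediction head for a custom disaster-scene label set (labels are assumed).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

HAZARD_CLASSES = ["background", "earthquake", "flood", "wind"]  # placeholder labels

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(HAZARD_CLASSES))

model.eval()
with torch.no_grad():
    predictions = model([torch.rand(3, 480, 640)])  # one dummy RGB image
print(predictions[0].keys())  # dict with 'boxes', 'labels', 'scores'
```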


2021 ◽  
Vol 18 (3) ◽  
pp. 172988142110121
Author(s):  
David Portugal ◽  
André G Araújo ◽  
Micael S Couceiro

To move out of the lab, service robots must demonstrate proven robustness so they can be deployed in operational environments. This means they should function steadily for long periods of time in real-world settings under uncertainty, without any human intervention, and exhibit a mature technology readiness level. In this work, we describe an incremental methodology for the implementation of an innovative service robot, developed entirely from the outset, to monitor large indoor areas shared with humans and other obstacles. Focusing especially on the long-term reliability of the robot's fundamental localization system, we discuss all the incremental software and hardware features, design choices, and adjustments made, and show their impact on the performance of the robot in the real world across three distinct 24-hour trials, with the ultimate goal of validating the proposed mobile robot solution for indoor monitoring.


Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 495
Author(s):  
Imayanmosha Wahlang ◽  
Arnab Kumar Maji ◽  
Goutam Saha ◽  
Prasun Chakrabarti ◽  
Michal Jasinski ◽  
...  

This article experiments with deep-learning methodologies for echocardiogram (echo) analysis, a promising and vigorously researched technique in this field. The paper involves two different kinds of classification in echo. Firstly, classification into normal (absence of abnormalities) or abnormal (presence of abnormalities) is performed using 2D echo images, 3D Doppler images, and videographic images. Secondly, different types of regurgitation, namely Mitral Regurgitation (MR), Aortic Regurgitation (AR), Tricuspid Regurgitation (TR), and a combination of the three, are classified using videographic echo images. Two deep-learning methodologies are used for these purposes: a Recurrent Neural Network (RNN) based methodology (Long Short-Term Memory (LSTM)) and an autoencoder-based methodology (Variational AutoEncoder (VAE)). The use of videographic images distinguishes this work from existing work using Support Vector Machines (SVMs), and the application of deep-learning methodologies is among the first in this particular field. It was found that the deep-learning methodologies perform better than the SVM methodology in normal-versus-abnormal classification. Overall, the VAE performs better on 2D and 3D Doppler images (static images), while the LSTM performs better on videographic images.
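As a rough sketch of the RNN branch, the snippet below shows an LSTM classifier over per-frame feature vectors in PyTorch; the feature dimension, clip length, and the four-way regurgitation label set are assumptions, not the authors' configuration.

```python
# Minimal sketch (assumptions, not the authors' code): an LSTM classifier over
# per-frame feature vectors extracted from videographic echo clips.
import torch
import torch.nn as nn

class EchoLSTMClassifier(nn.Module):
    def __init__(self, feature_dim: int = 256, hidden_dim: int = 128, num_classes: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)  # MR / AR / TR / combined (assumed)

    def forward(self, frame_features: torch.Tensor) -> torch.Tensor:
        # frame_features: (batch, num_frames, feature_dim)
        _, (h_n, _) = self.lstm(frame_features)
        return self.head(h_n[-1])            # classify from the final hidden state

logits = EchoLSTMClassifier()(torch.randn(2, 30, 256))  # 2 clips of 30 frames
print(logits.shape)  # torch.Size([2, 4])
```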


2021 ◽  
Vol 35 (9) ◽  
pp. 15-27
Author(s):  
Magnus Söderlund

Purpose This study aims to examine humans' reactions to service robots' display of warmth in robot-to-robot interactions – a setting in which humans' impressions of a service robot are based not only on what this robot does in relation to humans, but also on what it does to other robots. Design/methodology/approach Service robot display of warmth was manipulated in an experimental setting in such a way that a service robot A expressed low versus high levels of warmth in relation to another service robot B. Findings The results indicate that a high level of warmth expressed by robot A vis-à-vis robot B boosted humans' overall evaluations of A, and that this influence was mediated by the perceived humanness and the perceived happiness of A. Originality/value Numerous studies have examined humans' reactions when they interact with a service robot or other synthetic agents that provide service. Future service encounters, however, will also comprise multi-robot systems, which means that there will be many opportunities for humans to be exposed to robot-to-robot interactions. Yet, this setting has hitherto rarely been examined in the service literature.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Bhavin Shah

Purpose The assorted piece-wise retail orders in a cosmetics warehouse are fulfilled through a separate fast-picking area called the Forward Buffer (FB). This study determines the "just-right" size of the FB to ensure the desired Customer Service Level (CSL) with the least storage wastage. It also investigates the impact of FB capacity and demand variations on FB leanness. Design/methodology/approach A Value Stream Mapping (VSM) tool is applied to analyse the warehouse activities, and a mathematical model is implemented in MATLAB to quantify leanness at the desired CSL. A comprehensive framework is developed to determine the lean FB size for a Retail Distribution Centre (RDC) of a cosmetics company. Findings The CSL increases monotonically; however, the returns on effort spent towards CSL improvement diminish as demand variance rises. The desired CSL can be achieved with the least FB capacity and less Storage Waste (SW) as the system shifts towards a leaner regime. It is not possible to improve Value Added (VA) time beyond certain constraints; therefore, it is recommended to reduce Non-Value Added (NVA) order processing activities to improve leanness. Research limitations/implications This study determines the "just-right" capacity and investigates the impact of buffer and demand variations on leanness. It helps managers analyse warehouse processes and design customized distribution policies in food, beverage, and retail grocery warehouses. Practical implications The proposed buffering model offers customized strategies beyond a pre-set CSL by varying it dynamically to reduce wastage. The mathematical model deriving lean sizing and the mitigation guidelines are constructive developments for managers. Originality/value This research provides an inventive approach combining a VSM model and a mathematical algorithm endorsing lean thinking to design effective buffering policies in a forward warehouse.
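The abstract does not reproduce the MATLAB model, but the core trade-off it describes (buffer capacity versus CSL under demand variance) can be illustrated with a standard safety-stock style calculation; the normal-demand assumption and the formulas below are illustrative, not the paper's exact model.

```python
# Illustrative buffer-sizing sketch: size the forward buffer as mean demand plus
# a safety margin set by the target CSL, assuming normally distributed demand
# during the replenishment cycle. Not the paper's MATLAB formulation.
from scipy.stats import norm

def forward_buffer_size(mean_demand: float, demand_std: float, csl: float) -> float:
    """Return FB capacity for a target cycle Customer Service Level."""
    z = norm.ppf(csl)                      # safety factor for the target CSL
    return mean_demand + z * demand_std

def storage_waste(capacity: float, mean_demand: float) -> float:
    """Idle capacity held beyond average demand (a simple leanness proxy)."""
    return max(0.0, capacity - mean_demand)

# Example: higher demand variance forces a larger buffer for the same CSL.
for sigma in (20, 40, 80):
    print(sigma, round(forward_buffer_size(500, sigma, 0.95), 1))
```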

