Chatbots are intelligent conversational computer systems designed to mimic human conversation and enable automated online guidance and support. The growing benefits of chatbots have led to their wide adoption by many industries in order to provide virtual assistance to customers. Chatbots utilise methods and algorithms from two Artificial Intelligence domains: Natural Language Processing and Machine Learning. However, there are many challenges and limitations in their application. In this survey we review recent advances in chatbots in which Artificial Intelligence and Natural Language Processing are used. We highlight the main challenges and limitations of current work and make recommendations for future research.
The COVID-19 pandemic has caused the deaths of millions of people around the world. The scientific community faces a tough struggle to reduce the effects of this pandemic. Several investigations dealing with different perspectives have been carried out. However, it is not easy to find studies focused on COVID-19 contagion chains. A deep analysis of contagion chains may contribute new findings that can be used to reduce the effects of COVID-19. For example, some interesting chains with specific behaviors could be identified and more in-depth analyses could be performed to investigate the reasons for such behaviors. To represent, validate and analyze the information of contagion chains, we adopted an ontological approach. Ontologies are artificial intelligence techniques that have become widely accepted solutions for the representation of knowledge and corresponding analyses. The semantic representation of information by means of ontologies enables the consistency of the information to be checked, as well as automatic reasoning to infer new knowledge. The ontology was implemented in the Web Ontology Language (OWL), a formal language based on description logics. This approach could have a special impact on smart cities, which are characterized as using information to enhance the quality of basic services for citizens. In particular, health services could take advantage of this approach to reduce the effects of COVID-19.
The analysis of social networks has attracted a lot of attention during the last two decades. These networks are dynamic: new links appear and disappear. Link prediction is the problem of inferring links that will appear in the future from the current state of the network. We use information from nodes and edges and calculate the similarity between users. The more similar two users are, the higher the probability that they will connect in the future. Similarity metrics play an important role in the link prediction field. Due to their simplicity and flexibility, many authors have proposed metrics such as Jaccard, Adamic–Adar (AA), and Katz and evaluated them using the area under the curve (AUC). In this paper, we propose a new parameterized method to enhance the AUC value of the link prediction metrics by combining them with the mean received resources (MRRs). Experiments show that the proposed method improves the performance of the state-of-the-art metrics. Moreover, we used machine learning algorithms to classify links and confirm the efficiency of the proposed combination.
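To make the neighbourhood-based similarity metrics concrete, here is a minimal sketch of the Jaccard and Adamic–Adar (AA) scores on a toy graph; the graph and node names are illustrative, not from the paper:

```python
# Sketch: neighbourhood-based similarity scores used in link prediction.
# The adjacency sets below form a small toy graph for illustration only.
import math

graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}

def jaccard(u, v):
    """|N(u) ∩ N(v)| / |N(u) ∪ N(v)| — overlap of the two neighbourhoods."""
    nu, nv = graph[u], graph[v]
    return len(nu & nv) / len(nu | nv)

def adamic_adar(u, v):
    """AA index: sum of 1/log(deg(z)) over common neighbours z,
    so rare common neighbours contribute more than highly connected ones."""
    return sum(1 / math.log(len(graph[z])) for z in graph[u] & graph[v])
```

A higher score for a non-adjacent pair (u, v) is taken as a higher probability that the link (u, v) will appear in the future.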
We consider a power-down system with two base states—“on” and “off”—and a continuous set of intermediate power-saving states. The system has to respond to requests for service in the “on” state and, after service, the system can power off or switch to any of the intermediate power-saving states. The choice of state determines the cost to power on for subsequent requests. The protocol for requests is “online”, which means that the decision as to which intermediate state (or the off-state) the system will switch to has to be made without knowledge of future requests. We model a linear and a non-linear system, and we consider different online strategies, namely piece-wise linear, logarithmic and exponential. We provide results under online competitive analysis, which have relevance for the integration of renewable energy sources into the smart grid. Our analysis shows that while piece-wise linear strategies are not tailored to any particular type of system, logarithmic strategies work well for slack systems, whereas exponential strategies are better suited for busy systems.
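For context, the classic two-state instance of the power-down problem behaves like ski rental: staying on costs one energy unit per time unit, powering back on costs beta, and the standard online strategy that powers off after idling for beta time units is 2-competitive. A minimal sketch under these assumptions (the continuous intermediate states studied in the paper are deliberately not modelled here):

```python
# Sketch: two-state power-down as an online (ski-rental-style) problem.
# Idle period length `idle` is unknown to the online algorithm in advance;
# `beta` is the energy cost of powering back on. Units are illustrative.

def online_cost(idle, beta):
    """Stay on for beta time units, then power off.
    Short idle period: pay only the idle time.
    Long idle period: pay beta (waiting) + beta (power-up) = 2*beta."""
    return idle if idle <= beta else beta + beta

def offline_cost(idle, beta):
    """Offline optimum knows `idle`: either stay on the whole time
    or power off immediately and pay the power-up cost."""
    return min(idle, beta)
```

For every idle length, `online_cost` is at most twice `offline_cost`, which is the classical competitive ratio of 2 for this two-state case.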
This paper presents a novel method for vessel matching based on a curve descriptor and projection geometry constraints. First, a Levenberg–Marquardt (LM) algorithm is proposed to optimize the geometric transformation matrix. Combined with parameter adjustment and the trust-region method, the error between the projection of the reconstructed 3D vessel and the actual vessel can be minimized. Then, a curvature and brightness order curve descriptor (CBOCD) is proposed to indicate the degree of self-occlusion of blood vessels during angiography. Next, an error matrix constructed from the epipolar matching error is used for point-pair matching of the vasculature via dynamic programming. Finally, the recorded vessel radii are used to construct elliptical cross-sections; sampling on these yields a point set around the centerline, which is converted to a mesh for reconstructing the vessel surface. The validity and applicability of the proposed methods have been verified through experiments that show a significant improvement in 3D reconstruction accuracy in terms of average back-projection errors. Simultaneously, due to precise point-pair matching, the smoothness of the reconstructed 3D coronary artery is guaranteed.
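The point-pair matching step can be illustrated with a minimal dynamic-programming sketch in the style of dynamic time warping; the error matrix here contains toy values standing in for the epipolar-matching errors and is not the paper's actual formulation:

```python
# Sketch: order-preserving point-pair matching along two vessel curves
# via dynamic programming over a precomputed pairwise error matrix.

def dp_match_cost(err):
    """Minimum accumulated matching cost with monotone moves
    (advance on curve 1, curve 2, or both), as in dynamic time warping."""
    n, m = len(err), len(err[0])
    INF = float("inf")
    acc = [[INF] * m for _ in range(n)]
    acc[0][0] = err[0][0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = min(
                acc[i - 1][j] if i else INF,          # advance on curve 1
                acc[i][j - 1] if j else INF,          # advance on curve 2
                acc[i - 1][j - 1] if i and j else INF # advance on both
            )
            acc[i][j] = err[i][j] + best
    return acc[-1][-1]
```

Backtracking through the accumulated-cost table (omitted here) would recover the actual point pairs rather than just the total cost.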
In this work, we propose both an improvement and extensions of a reverse Jensen inequality due to Wunder et al. (2021). The new inequalities are fairly tight and reasonably easy to use in a wide variety of situations, as demonstrated in several application examples relevant to information theory. Moreover, the main ideas behind the derivations turn out to be applicable for deriving bounds on expectations of multivariate convex/concave functions, as well as of functions that are not necessarily convex or concave.
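For context, the classical inequality being reversed: for a convex function, Jensen's inequality lower-bounds the expectation of the function, and a reverse Jensen inequality complements it with an upper bound; the gap term below is schematic, not the paper's actual bound:

```latex
% Jensen's inequality for convex f and a random variable X:
\mathbb{E}[f(X)] \;\ge\; f\!\big(\mathbb{E}[X]\big).
% A reverse Jensen inequality provides a complementary upper bound of the form
\mathbb{E}[f(X)] \;\le\; f\!\big(\mathbb{E}[X]\big) + \Delta,
% where the gap term \Delta depends on properties of f
% and of the distribution of X.
```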
This work reviews existing research on the attributes that individuals assess to evaluate the trustworthiness of (i) software applications, (ii) organizations (e.g., service providers), and (iii) other individuals. As these parties are part of social media services, previous research has identified the need for users to assess their trustworthiness. Based on the trustworthiness assessment, users decide whether they want to interact with them and whether such interactions appear safe. The literature review encompasses 264 works, in 100 of which so-called trustworthiness facets could be identified. In addition to an overview of trustworthiness facets, this work introduces a guideline for software engineers on how to select appropriate trustworthiness facets when analysing the problem space for the development of specific social media applications. The guideline is exemplified by the problem of “catfishing” in online dating.
The reasonable pricing of options can effectively help investors avoid risks and obtain benefits, and thus plays a very important role in the stability of the financial market. Traditional single option pricing models often fail to meet ideal expectations due to their limiting assumptions. Combining an economic model with a deep learning model into a hybrid model provides a new way to improve the prediction accuracy of the pricing model. Aiming at the prediction problem of CSI 300 ETF option pricing, and guided by random forest feature importance, we combine a Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) deep learning model with two typical parametric models: the stochastic-volatility Heston model and the stochastic-interest-rate CIR model. A dual hybrid pricing model for the call and put options of the CSI 300 ETF is established, and the dual-hybrid model and the reference model are further integrated with ridge regression to improve the forecasting effect. The experimental analysis uses real historical data on about 10,000 sets of CSI 300 ETF options from January to December 2020. The results show that the proposed dual-hybrid pricing model has high accuracy: its prediction accuracy is tens to hundreds of times higher than that of the reference model, and its MSE can be as low as 0.0003. The article provides an alternative method for the pricing of financial derivatives.
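For reference, the two parametric components mentioned above are standard models; the notation below is the textbook form, not necessarily the exact parameterization used in the paper:

```latex
% Heston model: asset price S_t with stochastic variance v_t
dS_t = r_t\, S_t\, dt + \sqrt{v_t}\, S_t\, dW_t^{S}, \\
dv_t = \kappa\,(\theta - v_t)\, dt + \sigma_v \sqrt{v_t}\, dW_t^{v},
\qquad d\langle W^{S}, W^{v} \rangle_t = \rho\, dt, \\
% CIR model: mean-reverting short interest rate r_t
dr_t = a\,(b - r_t)\, dt + \sigma_r \sqrt{r_t}\, dW_t^{r}.
```

Here $\kappa$ and $a$ are mean-reversion speeds, $\theta$ and $b$ the long-run levels, and $\sigma_v$, $\sigma_r$ the volatility-of-volatility and rate volatility, respectively.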
Digital Twins (DTs) are a core enabler of Industry 4.0 in manufacturing. Cognitive Digital Twins (CDTs), as an evolution, utilize services and tools to enable human-like cognitive capabilities in DTs. This paper proposes a conceptual framework for implementing CDTs to support resilience in production, i.e., to enable manufacturing systems to identify and handle anomalies and disruptive events in production processes and to support decisions that alleviate their consequences. Through analyzing five real-life production cases in different industries, similarities and differences in their corresponding needs are identified. Moreover, a connection between resilience and cognition is established. Further, a conceptual architecture is proposed that maps the tools materializing cognition onto the DT core, together with a cognitive process that enables resilience in production by utilizing CDTs.
Edge detection is one of the fundamental computer vision tasks. Recent methods for edge detection based on a convolutional neural network (CNN) typically employ the weighted cross-entropy loss. Their predicted results are thick and need post-processing before the optimal dataset scale (ODS) F-measure can be calculated for evaluation. To achieve end-to-end training, we propose a non-maximum suppression (NMS) layer that obtains sharp boundaries without the need for post-processing. The ODS F-measure can then be calculated on these sharp boundaries, so an ODS F-measure loss function is proposed to train the network. In addition, we propose an adaptive multi-level feature pyramid network (AFPN) to better fuse different levels of features. Furthermore, to enrich the multi-scale features learned by AFPN, we introduce a pyramid context module (PCM) that uses dilated convolution to extract multi-scale features. Experimental results indicate that the proposed AFPN achieves state-of-the-art performance on the BSDS500 dataset (ODS F-score of 0.837) and the NYUDv2 dataset (ODS F-score of 0.780).
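The class-balanced weighted cross-entropy commonly used by CNN edge detectors (HED-style) can be sketched as follows; this is a generic illustration in plain Python, not the paper's exact loss:

```python
# Sketch: class-balanced weighted cross-entropy for edge detection.
# Edge pixels are rare, so positives are up-weighted by the fraction of
# negatives and vice versa. `probs` and `labels` are flat lists of
# per-pixel edge probabilities and 0/1 ground-truth labels.
import math

def weighted_bce(probs, labels):
    n = len(labels)
    pos = sum(labels)
    beta = (n - pos) / n  # weight for the (rare) edge pixels
    loss = 0.0
    for p, y in zip(probs, labels):
        if y == 1:
            loss -= beta * math.log(p)          # penalize missed edges
        else:
            loss -= (1 - beta) * math.log(1 - p)  # penalize false edges
    return loss
```

Because this per-pixel loss is agnostic to boundary thickness, predictions trained with it tend to be thick, which is the motivation for the NMS layer and the ODS F-measure loss described above.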