Reference representation techniques for large models

Author(s):  
Markus Scheidgen
2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Giovanni Pilato ◽  
Agnese Augello ◽  
Salvatore Gaglio

The paper illustrates a system that implements a framework oriented to the development of a modular knowledge base for a conversational agent. This solution improves the flexibility of intelligent conversational agents in managing conversations. The modularity of the system allows the concurrent and synergic use of different knowledge representation techniques, so the most suitable methodology can be chosen for managing a conversation in a specific domain, taking into account particular features of the dialogue or of the user behavior. We illustrate the implementation of a proof-of-concept prototype: a set of modules exploiting different knowledge representation methodologies and capable of managing different conversation features has been developed. Each module is automatically triggered through a component, named corpus callosum, that selects in real time the most adequate chatbot knowledge module to activate.
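A minimal Python sketch of the kind of dispatch the abstract describes: several knowledge modules, each wrapping one representation technique, and a selector (here called CorpusCallosum) that activates the most relevant one per turn. All class names and the scoring heuristic are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a modular chatbot knowledge base with a selector
# component that activates, at each turn, the module whose self-reported
# relevance score is highest. Names and heuristics are hypothetical.

from abc import ABC, abstractmethod


class KnowledgeModule(ABC):
    """One knowledge representation technique packaged as a module."""

    @abstractmethod
    def relevance(self, utterance: str) -> float:
        """How well this module can handle the current utterance (0..1)."""

    @abstractmethod
    def answer(self, utterance: str) -> str:
        """Produce a reply using this module's knowledge representation."""


class KeywordModule(KnowledgeModule):
    def __init__(self, domain: str, keywords: set):
        self.domain, self.keywords = domain, keywords

    def relevance(self, utterance: str) -> float:
        words = set(utterance.lower().split())
        return len(words & self.keywords) / max(len(self.keywords), 1)

    def answer(self, utterance: str) -> str:
        return f"[{self.domain}] rule-based reply to: {utterance}"


class CorpusCallosum:
    """Selects, in real time, the most adequate module for each turn."""

    def __init__(self, modules: list):
        self.modules = modules

    def reply(self, utterance: str) -> str:
        best = max(self.modules, key=lambda m: m.relevance(utterance))
        return best.answer(utterance)


if __name__ == "__main__":
    agent = CorpusCallosum([
        KeywordModule("weather", {"rain", "sunny", "forecast"}),
        KeywordModule("cinema", {"movie", "ticket", "showtime"}),
    ])
    print(agent.reply("is the forecast sunny tomorrow?"))
```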


2016 ◽  
Vol 68 (4) ◽  
pp. 448-477 ◽  
Author(s):  
Dong Zhou ◽  
Séamus Lawless ◽  
Xuan Wu ◽  
Wenyu Zhao ◽  
Jianxun Liu

Purpose – With the increase in the amount of multilingual content on the World Wide Web, users often strive to access information provided in a language of which they are not native speakers. The purpose of this paper is to present a comprehensive study of user profile representation techniques and to investigate their use in personalized cross-language information retrieval (CLIR) systems by means of personalized query expansion.

Design/methodology/approach – The user profiles consist of weighted terms computed using frequency-based methods such as tf-idf and BM25, as well as various latent semantic models trained on monolingual documents and cross-lingual comparable documents. The paper also proposes an automatic evaluation method for comparing various user profile generation techniques and query expansion methods.

Findings – Experimental results suggest that latent semantic weighted user profile representations are superior to frequency-based methods and are particularly suitable for users with a sufficient amount of historical data. The study also confirmed that user profiles represented by latent semantic models trained at the cross-lingual level performed better than models trained at the monolingual level.

Originality/value – Previous studies on personalized information retrieval systems have primarily investigated user profiles and personalization strategies at the monolingual level; the effect of using such monolingual profiles for personalized CLIR remained unclear. The current study fills this gap with a comprehensive study of user profile representation for personalized CLIR and a novel personalized CLIR evaluation methodology that ensures repeatable and controlled experiments can be conducted.
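A hedged sketch of the frequency-based end of the design: a user profile built from tf-idf-weighted terms of the user's history, then used for query expansion. The actual system also employs BM25 and latent semantic models trained on monolingual and cross-lingual comparable corpora; the function names and toy data below are assumptions for illustration only.

```python
# Sketch of a tf-idf weighted user profile and personalized query
# expansion. Toy data and function names are illustrative, not the
# paper's pipeline.

import math
from collections import Counter


def build_profile(history, top_k=10):
    """Weight each term in the user's historical documents by tf-idf."""
    docs = [doc.lower().split() for doc in history]
    df = Counter(term for doc in docs for term in set(doc))
    n_docs = len(docs)
    weights = Counter()
    for doc in docs:
        tf = Counter(doc)
        for term, freq in tf.items():
            idf = math.log((n_docs + 1) / (df[term] + 1)) + 1.0
            weights[term] += (freq / len(doc)) * idf
    return dict(weights.most_common(top_k))


def expand_query(query, profile, n_terms=3):
    """Append the top profile terms not already present in the query."""
    present = set(query.lower().split())
    extra = [t for t in sorted(profile, key=profile.get, reverse=True)
             if t not in present][:n_terms]
    return " ".join(query.split() + extra)


history = ["semantic web ontology alignment",
           "cross language ontology matching survey"]
profile = build_profile(history)
print(expand_query("ontology tools", profile))
```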


2021 ◽  
Author(s):  
Mehrnaz Shokrollahi

It is estimated that 50 to 70 million Americans suffer from a chronic sleep disorder, which hinders their daily life, affects their health, and imposes a significant economic burden on society. Untreated Periodic Leg Movement (PLM) or Rapid Eye Movement Behaviour Disorder (RBD) could lead to a three- to four-fold increased risk of stroke and Parkinson’s disease, respectively. These risks bring about the need for less costly and more widely available diagnostic tools with great potential for detection and prevention. The goal of this study is to investigate the potentially clinically relevant but under-explored relationship of the sleep-related movement disorders PLM and RBD with cerebrovascular diseases. Our objective is to introduce a unique and efficient way of performing non-stationary signal analysis using sparse representation techniques. To fulfill this objective, we first develop a novel algorithm for Electromyogram (EMG) signals in sleep based on sparse representation, and we use a generalized Leave-One-Out (LOO) method to perform classification for small datasets. For the second objective, because these EMG signals are long, feature extraction algorithms that can localize events of interest are needed. To this end, we propose to use the Non-Negative Matrix Factorization (NMF) algorithm together with sparsity and dictionary learning, which allows us to represent a variety of EMG phenomena efficiently using a very compact set of spectral bases. Yet EMG signals pose severe challenges for the analysis and extraction of discriminant features. To achieve a balance between robustness and classification performance, we exploit deep learning and study the discriminant features of the EMG signals by means of dictionary learning, kernels, and sparse representation for classification. The classification performances achieved for the detection of RBD and PLM using these techniques were 90% and 97%, respectively. The theoretical properties of the proposed approaches pertaining to pattern recognition and detection are examined in this dissertation. The multi-layer feature extraction provides strong and successful characterization and classification of the non-stationary EMG signals, and the proposed sparse representation techniques facilitate EMG signal quantification, automating the identification process.
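A minimal sketch of the NMF step the abstract describes: factorizing a spectrogram-like matrix into a compact set of spectral bases and per-frame activations that can serve as classification features. The data here are synthetic, the parameters are illustrative, and sparsity/dictionary-learning penalties used in the dissertation are omitted for brevity.

```python
# Hedged sketch of representing EMG spectra with a compact set of bases
# via Non-Negative Matrix Factorization (NMF); synthetic data stands in
# for real sleep-EMG spectrograms, and all parameters are illustrative.

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Fake magnitude spectrogram: 128 frequency bins x 500 time frames.
V = np.abs(rng.normal(size=(128, 500)))

# Factorize V ~= W @ H: W holds spectral bases (the learned dictionary),
# H holds per-frame activations usable as features for classification.
model = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)      # (128, 8) spectral bases
H = model.components_           # (8, 500) activations over time

# Per-epoch features: mean activation of each basis over a fixed window.
epoch_len = 50
features = np.stack([H[:, i:i + epoch_len].mean(axis=1)
                     for i in range(0, H.shape[1], epoch_len)])
print(features.shape)  # (10, 8) -> 10 epochs, 8-dimensional features
```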


2012 ◽  
Vol 22 (4-5) ◽  
pp. 614-704 ◽  
Author(s):  
NICOLAS POUILLARD ◽  
FRANÇOIS POTTIER

Atoms and de Bruijn indices are two well-known representation techniques for data structures that involve names and binders. However, using either technique, it is all too easy to make a programming error that causes one name to be used where another was intended. We propose an abstract interface to names and binders that rules out many of these errors. This interface is implemented as a library in Agda. It allows defining and manipulating term representations in nominal style and in de Bruijn style. The programmer is not forced to choose between these styles: on the contrary, the library allows using both styles in the same program, if desired. Whereas indexing the types of names and terms with a natural number is a well-known technique to better control the use of de Bruijn indices, we index types with worlds. Worlds are at the same time more precise and more abstract than natural numbers. Via logical relations and parametricity, we are able to demonstrate in what sense our library is safe, and to obtain theorems for free about world-polymorphic functions. For instance, we prove that a world-polymorphic term transformation function must commute with any renaming of the free variables. The proof is entirely carried out in Agda.
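A tiny illustration, in Python rather than the paper's Agda, of the de Bruijn-indexed style: free variables are numbers counting enclosing binders, and terms must be "shifted" when moved under additional binders. Getting such index arithmetic subtly wrong is exactly the class of silent error that the paper's abstract, world-indexed interface rules out; the datatypes below are assumptions for exposition only.

```python
# Illustrative de Bruijn-indexed lambda terms (not the paper's library).

from dataclasses import dataclass


@dataclass
class Var:          # de Bruijn index: 0 = innermost binder
    index: int


@dataclass
class Lam:          # binds one variable in its body
    body: "object"


@dataclass
class App:
    fun: "object"
    arg: "object"


def shift(t, by, cutoff=0):
    """Renumber free variables when a term is moved under extra binders."""
    match t:
        case Var(i):
            return Var(i + by) if i >= cutoff else Var(i)
        case Lam(body):
            return Lam(shift(body, by, cutoff + 1))
        case App(f, a):
            return App(shift(f, by, cutoff), shift(a, by, cutoff))


# \x. \y. x  is  Lam(Lam(Var(1))); shifting under one more binder must
# leave bound variables alone but bump free ones.
print(shift(Lam(Lam(Var(1))), 1))   # Lam(body=Lam(body=Var(index=1)))
print(shift(Var(0), 1))             # Var(index=1) -- a free variable moves
```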


Author(s):  
Saurabh Sen ◽  
Ruchi L. Sen

NPA is a “termite” for the banking sector. It affects the liquidity and profitability of banks to a great extent and, in addition, poses a threat to asset quality and the survival of banks. The post-reform era has changed the whole structure of the banking sector of India, and the economy is no longer confined to the domestic boundary of the country. The core intention of economic reforms in India was to attract foreign investments and create a sound banking system. This chapter provides an empirical approach to the analysis of profitability indicators with a focal point on Non-Performing Assets (NPAs) of commercial banks in the Indian context. The chapter discusses NPAs, the factors contributing to them, their magnitude, and their consequences. Taking an analytical perspective, the chapter observes that NPAs have significantly affected the performance of banks in the present scenario, while factors such as a better credit culture, risk management, and favourable business conditions have led to lower NPAs. The empirical findings, based on the observation method and statistical tools such as correlation, regression, and data representation techniques, identify a negative relationship between the profitability measure and NPAs.
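A brief sketch of the kind of correlation/regression check the chapter describes, relating a profitability measure to NPA levels. The numbers below are synthetic placeholders, not the chapter's data.

```python
# Hedged sketch, with synthetic numbers (not the chapter's data), of a
# correlation/regression check between NPAs and profitability.

import numpy as np

gross_npa_ratio = np.array([2.4, 3.1, 3.8, 4.6, 5.2])        # % of advances
return_on_assets = np.array([1.10, 0.95, 0.82, 0.70, 0.61])  # % ROA

r = np.corrcoef(gross_npa_ratio, return_on_assets)[0, 1]
slope, intercept = np.polyfit(gross_npa_ratio, return_on_assets, 1)

print(f"correlation r = {r:.2f}")           # strongly negative
print(f"ROA ~= {slope:.3f} * NPA + {intercept:.3f}")
```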


Author(s):  
Adel Alti ◽  
Sébastian Laborie ◽  
Philippe Roose

This paper presents an approach to enhance users' experience through the use of recommendations and social networks for on-the-fly (runtime) adaptation of multimedia documents. The paper also presents CSSAP, a dynamic service selection and assembly tool based on new user profiles and community profiles defined as sets of semantic metadata, including context, quality-of-service, and quality-of-experience parameters. The tool is based on community-aware semantic services and offers an architecture with three layers (semantic query, community management, and semantic services). The most innovative characteristic of the tool is that it profits from the potential of semantic representation techniques to express context constraints and community interests, which can be used to generate and manage a complex dynamic adaptation process. The tool improves the assembly of relevant adaptation services for communities by inferring social influence from Facebook used as a virtual P2P environment. The proposed approach has been validated through a prototype for mobile users exchanging multimedia content. The goal is to improve the assembly of potential adaptation services and to demonstrate the efficiency and effectiveness of the authors' approach.
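An illustrative sketch of the service selection and assembly idea: adaptation services carry semantic metadata (capability, context, quality score) and are matched against a community profile's constraints before being chained into a pipeline. All names and fields are hypothetical, not CSSAP's actual API.

```python
# Hypothetical sketch of selecting and assembling adaptation services by
# matching their metadata against a community profile (not CSSAP's API).

from dataclasses import dataclass, field


@dataclass
class AdaptationService:
    name: str
    provides: str                                  # e.g. "transcode", "resize"
    context: dict = field(default_factory=dict)    # e.g. {"device": "mobile"}
    qos: float = 0.0                               # higher is better


@dataclass
class CommunityProfile:
    required: list                                 # ordered capabilities
    context: dict = field(default_factory=dict)
    min_qos: float = 0.5


def select_and_assemble(services, profile):
    """Pick, for each required capability, the best matching service."""
    pipeline = []
    for capability in profile.required:
        candidates = [
            s for s in services
            if s.provides == capability
            and s.qos >= profile.min_qos
            and all(s.context.get(k) == v for k, v in profile.context.items())
        ]
        if not candidates:
            raise LookupError(f"no service satisfies '{capability}'")
        pipeline.append(max(candidates, key=lambda s: s.qos))
    return pipeline


services = [
    AdaptationService("img-resize-lo", "resize", {"device": "mobile"}, 0.6),
    AdaptationService("img-resize-hi", "resize", {"device": "mobile"}, 0.9),
    AdaptationService("vid-transcode", "transcode", {"device": "mobile"}, 0.8),
]
profile = CommunityProfile(["transcode", "resize"], {"device": "mobile"})
print([s.name for s in select_and_assemble(services, profile)])
```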

