Articulate

Author(s):  
Yiwen Sun ◽  
Jason Leigh ◽  
Andrew Johnson ◽  
Barbara Di Eugenio

This chapter presents an approach that enables non-visualization experts to craft advanced visualizations using natural language as the primary interface. The main challenge in this research is determining how to translate imprecise verbal queries into effective visualizations. To demonstrate the viability of the concept, the authors developed and evaluated a prototype, Articulate, which allows users to simply ask the computer questions about their data and have it automatically generate visualizations that answer those questions. The authors found that by relieving users of the burden of learning a complex interface, they enable them to focus on articulating better scientific questions and to waste less time producing unintended visualizations.

2017 ◽  
Vol 44 (4) ◽  
pp. 526-551 ◽  
Author(s):  
Abdulgabbar Saif ◽  
Nazlia Omar ◽  
Mohd Juzaiddin Ab Aziz ◽  
Ummi Zakiah Zainodin ◽  
Naomie Salim

Wikipedia has become a high-coverage knowledge source used in many research areas such as natural language processing, text mining and information retrieval. Several methods have been introduced for extracting explicit or implicit relations from Wikipedia to represent the semantics of concepts/words. However, the main challenge in semantic representation is how to incorporate different types of semantic relations to capture more semantic evidence of the associations between concepts. In this article, we propose a semantic concept model that incorporates different types of semantic features extracted from Wikipedia. For each concept that corresponds to an article, four semantic features are introduced: template links, categories, salient concepts and topics. The proposed model is based on probability distributions defined over these semantic features of a Wikipedia concept. The template links and categories are document-level features, extracted directly from the structured information included in the article. The salient concepts and topics, on the other hand, are corpus-level features, extracted to capture implicit relations among concepts. For the salient-concepts feature, a distributional method is applied to the hypertext corpus to extract this feature for each Wikipedia concept; the probability product kernel is then used to improve the weight of each concept in this feature. For the topic feature, Labelled Latent Dirichlet Allocation is adapted to the supervised multi-label structure of Wikipedia to train the probabilistic model of this feature. Finally, linear interpolation is used to incorporate these semantic features into the probabilistic model and estimate the semantic-relation probability of a specific concept over Wikipedia articles.
The proposed model is evaluated on 12 benchmark datasets across three natural language processing tasks: measuring the semantic relatedness of concepts/words in general and in the biomedical domain, measuring semantic textual relatedness, and measuring the semantic compositionality of noun compounds. The model is also compared with five methods that depend on separate semantic features in Wikipedia. Experimental results show that the proposed model achieves promising results in all three tasks and outperforms the baseline methods on most of the evaluation datasets. This implies that incorporating explicit and implicit semantic features is useful for representing the semantics of concepts in Wikipedia.
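The linear interpolation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature names, toy distributions, and uniform weights are assumptions for demonstration; in the paper each feature yields its own probability distribution over concepts and the interpolation weights are part of the model.

```python
def interpolate(feature_dists, weights):
    """Combine per-feature distributions P_f(c' | c) into one P(c' | c)
    via linear interpolation: P = sum_f lambda_f * P_f, with weights
    lambda_f summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    combined = {}
    for feature, dist in feature_dists.items():
        lam = weights[feature]
        for concept, p in dist.items():
            combined[concept] = combined.get(concept, 0.0) + lam * p
    return combined

# Toy distributions for one source concept over two candidate concepts
# (values are illustrative only).
dists = {
    "template_links": {"Physics": 0.7, "Chemistry": 0.3},
    "categories":     {"Physics": 0.6, "Chemistry": 0.4},
    "salient":        {"Physics": 0.5, "Chemistry": 0.5},
    "topics":         {"Physics": 0.8, "Chemistry": 0.2},
}
weights = {"template_links": 0.25, "categories": 0.25,
           "salient": 0.25, "topics": 0.25}

print(interpolate(dists, weights))  # → {'Physics': 0.65, 'Chemistry': 0.35}
```

Because each per-feature distribution sums to 1 and the weights sum to 1, the interpolated result is itself a valid probability distribution over candidate concepts.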


2021 ◽  
Vol 26 (2) ◽  
pp. 143-149
Author(s):  
Abdelghani Bouziane ◽  
Djelloul Bouchiha ◽  
Redha Rebhi ◽  
Giulio Lorenzini ◽  
Noureddine Doumi ◽  
...  

The evolution of the traditional Web into the Semantic Web makes the machine a first-class citizen on the Web and increases the discoverability and accessibility of unstructured Web-based data. This development makes it possible to use Linked Data technology as the background knowledge base for unstructured data, especially texts, now available in massive quantities on the Web. Given any text, the main challenge is determining DBpedia's most relevant information with minimal effort and time. However, DBpedia annotation tools, such as DBpedia Spotlight, have mainly targeted the English and Latin-script DBpedia versions. The situation of the Arabic language is less bright: Arabic Web content does not reflect the importance of this language. We have therefore developed an approach to annotate Arabic texts with Linked Open Data, particularly DBpedia. This approach uses natural language processing and machine learning techniques for interlinking Arabic text with Linked Open Data. Despite the high complexity of a domain-independent knowledge base and the limited resources in Arabic natural language processing, the evaluation results of our approach were encouraging.


2007 ◽  
Vol 6, April 2007, joint... ◽
Author(s):  
T. Botha ◽  
D.G. Kourie ◽  
B.W. Watson

This article reports on the approach taken, experience gathered, and results found in building a tool to support the derivation of solutions to a particular kind of word game. This required deriving techniques for simple yet acceptably quick access to a dictionary of natural language words (in the present case, Afrikaans). The main challenge was to access a large corpus of natural language words via a partial match retrieval technique. Other challenges included discovering how to represent such a dictionary in a "semi-compressed" format, thus arriving at a balance that favours search speed while nevertheless saving on storage requirements. In addition, a query language had to be developed that would effectively exploit this access method. The system is designed to support a more intelligent query capability in the future. Acceptable response times were achieved even though an interpretive scripting language, ObjectREXX, was used.
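The partial match retrieval idea above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the article's actual query language or dictionary representation: here a "?" in the query stands for any single letter, and the dictionary is a plain in-memory word list rather than a semi-compressed structure.

```python
import re

def partial_match(pattern, dictionary):
    """Return all dictionary words matching a pattern in which '?'
    stands for exactly one arbitrary letter (e.g. 'ka?' matches 'kat')."""
    # Translate the toy query syntax into an anchored regular expression.
    rx = re.compile("^" + pattern.replace("?", ".") + "$")
    return [w for w in dictionary if rx.match(w)]

# Toy Afrikaans-like word list; a real system would load a full dictionary.
words = ["kat", "kas", "kap", "mat", "water"]
print(partial_match("ka?", words))  # → ['kat', 'kas', 'kap']
print(partial_match("?at", words))  # → ['kat', 'mat']
```

A production system, as the article suggests, would trade this linear scan for an index over the compressed dictionary so that partial match queries avoid touching every word.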


VASA ◽  
2015 ◽  
Vol 44 (2) ◽  
pp. 85-91
Author(s):  
Erich Minar

The generally accepted first-line treatment for patients with intermittent claudication is risk factor modification, medical treatment and exercise training. In an era of reduced resources, the benefit of any further invasive intervention must be weighed against best conservative therapy for patients with claudication. According to some recent trials, an integrative therapeutic concept combining best conservative treatment - including (supervised) exercise therapy - with endovascular therapy gives the best midterm results in terms of walking distance and health-related quality of life. The improved mid- and long-term patency rates achieved with modern technology further support this concept. The conservative and interventional treatment strategies are more complementary than competitive. The current main challenge is to overcome the economic barriers to the availability of exercise programmes.


1987 ◽  
Vol 32 (1) ◽  
pp. 33-34
Author(s):  
Greg N. Carlson
Keyword(s):  

2012 ◽  
Author(s):  
Loes Stukken ◽  
Wouter Voorspoels ◽  
Gert Storms ◽  
Wolf Vanpaemel
Keyword(s):  

2004 ◽  
Author(s):  
Harry E. Blanchard ◽  
Osamuyimen T. Stewart
Keyword(s):  
