Knowledge Generation through Human-Centered Information Visualization

2009 ◽  
Vol 8 (3) ◽  
pp. 180-196 ◽  
Author(s):  
Katja Einsfeld ◽  
Achim Ebert ◽  
Andreas Kerren ◽  
Matthias Deller

One important aim of human-centered information visualization is to represent huge amounts of abstract data in a visual form that allows even users from foreign application domains to interact with the visualization, to understand the underlying data, and, finally, to gain new application-related knowledge. The visualization helps experts as well as non-experts to link previously isolated knowledge items in their mental map with new insights. Our approach explicitly supports this process of linking knowledge items through three concepts. First, the representation of data items in an ontology categorizes and relates them. Second, the use of various visualization techniques visually correlates isolated items through graph structures, layout, attachment, integration, or hyperlink techniques. Third, the intensive use of visual metaphors relates a known source domain to a less well-known target domain. To realize these concepts in a concrete scenario, we developed a visual interface that enables non-experts to maintain complex wastewater treatment plants. This domain-specific application gives our concepts a meaningful background.
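As an illustration of the first concept, here is a minimal sketch (hypothetical; not the authors' system) of how an ontology can categorize data items and relate otherwise isolated ones. The class name, the example items, and the relation labels are assumptions made for illustration:

```python
# Toy sketch of ontology-based linking: items become related either through
# an explicit relation or through membership in the same category.

class Ontology:
    def __init__(self):
        self.categories = {}   # item -> category
        self.relations = []    # (item_a, relation_label, item_b)

    def add_item(self, item, category):
        self.categories[item] = category

    def relate(self, a, relation, b):
        self.relations.append((a, relation, b))

    def related_to(self, item):
        """Items linked to `item` directly or via a shared category."""
        direct = {b for a, _, b in self.relations if a == item}
        direct |= {a for a, _, b in self.relations if b == item}
        same_cat = {i for i, c in self.categories.items()
                    if c == self.categories.get(item) and i != item}
        return direct | same_cat

# Hypothetical wastewater-plant items, in the spirit of the application domain:
onto = Ontology()
onto.add_item("clarifier", "treatment stage")
onto.add_item("aeration tank", "treatment stage")
onto.add_item("sludge pump", "equipment")
onto.relate("sludge pump", "feeds", "clarifier")

print(sorted(onto.related_to("clarifier")))
# the explicitly related pump and the same-category aeration tank are both linked
```

A graph-based visualization of these same relations would then realize the second concept: edges for explicit relations, grouping or layout for shared categories.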

2012 ◽  
Vol 26 (3) ◽  
pp. 318-334 ◽  
Author(s):  
Eli Tsukayama ◽  
Angela Lee Duckworth ◽  
Betty Kim

We propose a model of impulsivity that predicts both domain-general and domain-specific variance in behaviours that produce short-term gratification at the expense of long-term goals and standards. Specifically, we posit that domain-general impulsivity is explained by domain-general self-control strategies and resources, whereas domain-specific impulsivity is explained by how tempting individuals find various impulsive behaviours and, to a lesser extent, by their perceptions of the behaviours' long-term harm. Using a novel self-report measure, factor analyses produced six (non-exhaustive) domains of impulsive behaviour (Studies 1-2): work, interpersonal relationships, drugs, food, exercise and finances. Domain-general self-control explained 40% of the variance in domain-general impulsive behaviour between individuals, r_effect = .71. Domain-specific temptation (r_effect = .83) and perceived harm (r_effect = -.26) explained 40% and 2% of the unique within-individual variance in impulsive behaviour, respectively (59% together). In Study 3, we recruited individuals in special interest groups (e.g. procrastinators) to confirm that individuals who are especially tempted by behaviours in their target domain are not likely to be more tempted in non-target domains. Copyright © 2011 John Wiley & Sons, Ltd.


Author(s):  
Xin Liu ◽  
Kai Liu ◽  
Xiang Li ◽  
Jinsong Su ◽  
Yubin Ge ◽  
...  

The lack of sufficient training data in many domains poses a major challenge to the construction of domain-specific machine reading comprehension (MRC) models with satisfying performance. In this paper, we propose a novel iterative multi-source mutual knowledge transfer framework for MRC. As an extension of conventional knowledge transfer with one-to-one correspondence, our framework focuses on many-to-many mutual transfer, which involves synchronous executions of multiple many-to-one transfers in an iterative manner. Specifically, to update a target-domain MRC model, we first treat the other domain-specific MRC models as individual teachers and employ knowledge distillation to train a multi-domain MRC model, which is differentially required to fit the training data and match the outputs of these individual models according to their domain-level similarities to the target domain. After being initialized by the multi-domain MRC model, the target-domain MRC model is fine-tuned to match both its training data and the output of its previous best model simultaneously via knowledge distillation. Compared with previous approaches, our framework continuously enhances all domain-specific MRC models by enabling each model to iteratively and differentially absorb the domain-shared knowledge of the others. Experimental results and in-depth analyses on several benchmark datasets demonstrate the effectiveness of our framework.
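The similarity-weighted many-to-one distillation step can be sketched as follows. This is a simplified, hypothetical loss: the function names, the normalization of the similarity weights, and the temperature and alpha values are illustrative assumptions, not the paper's exact formulation:

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(target_probs, pred_probs):
    return -sum(t * math.log(p) for t, p in zip(target_probs, pred_probs))

def multi_teacher_distillation_loss(student_logits, teacher_logits_list,
                                    domain_similarities, hard_labels,
                                    alpha=0.5, temperature=2.0):
    """Weighted many-to-one distillation: the student fits its training
    labels and, in proportion to each teacher's domain-level similarity
    to the target domain, matches that teacher's softened outputs."""
    # Hard-label term: fit the training data.
    hard_loss = cross_entropy(hard_labels, softmax(student_logits))
    # Soft-label terms: match each teacher, weighted by domain similarity.
    student_soft = softmax(student_logits, temperature)
    total_sim = sum(domain_similarities)
    soft_loss = sum(
        (sim / total_sim) * cross_entropy(softmax(t, temperature), student_soft)
        for t, sim in zip(teacher_logits_list, domain_similarities)
    )
    return alpha * hard_loss + (1 - alpha) * soft_loss

# Two hypothetical teachers; the first is from a more similar domain (0.9 vs 0.2),
# so its output distribution dominates the soft-label term.
loss = multi_teacher_distillation_loss(
    student_logits=[2.0, 0.5],
    teacher_logits_list=[[1.5, 0.2], [0.3, 1.8]],
    domain_similarities=[0.9, 0.2],
    hard_labels=[1.0, 0.0],
)
print(loss)
```

Iterating this update across all domains, with each model taking its turn as the student, gives the mutual-transfer loop described above.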


Author(s):  
Jorge Ferreira Franco ◽  
Irene Karaguilla Ficheman ◽  
Marcelo Knörich Zuffo ◽  
Valkiria Venâncio

This chapter describes an ongoing strategy for developing and sharing knowledge related to digital/Web-based technology and multimedia tools, information visualization, computer graphics, and desktop virtual reality techniques in combination with art/education. It draws on a large body of research on advanced and contemporary technologies and their use for stimulating individuals' education. These interactive processes of researching, developing and sharing knowledge have been carried out through interdisciplinary and collaborative learning and teaching experiences in the context of K-12 education in a primary public school and its surrounding community. The learning and direct manipulation of advanced and contemporary technologies have improved individuals' technical skills, stimulated cooperative and collaborative work, fostered innovation in how the school's curriculum content is developed, and supported independent learning. Furthermore, there have been changes in individuals' mental models, behaviour and culture with respect to reflecting on the diverse possibilities of using information and communication technology within collaborative formal and informal sustainable lifelong learning and teaching actions.


2020 ◽  
Vol 34 (05) ◽  
pp. 7780-7788
Author(s):  
Siddhant Garg ◽  
Thuy Vu ◽  
Alessandro Moschitti

We propose TandA, an effective technique for fine-tuning pre-trained Transformer models for natural language tasks. Specifically, we first transfer a pre-trained model into a model for a general task by fine-tuning it on a large, high-quality dataset. We then perform a second fine-tuning step to adapt the transferred model to the target domain. We demonstrate the benefits of our approach for answer sentence selection, a well-known inference task in Question Answering. We built a large-scale dataset to enable the transfer step, exploiting the Natural Questions dataset. Our approach establishes the state of the art on two well-known benchmarks, WikiQA and TREC-QA, achieving impressive MAP scores of 92% and 94.3%, respectively, which largely outperform the previous highest scores of 83.4% and 87.5%. We empirically show that TandA generates more stable and robust models, reducing the effort required to select optimal hyper-parameters. Additionally, we show that the transfer step of TandA makes the adaptation step more robust to noise, enabling a more effective use of noisy datasets for fine-tuning. Finally, we also confirm the positive impact of TandA in an industrial setting, using domain-specific datasets subject to different types of noise.
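The two-step transfer-then-adapt recipe can be sketched with a toy model. This is purely illustrative: TandA fine-tunes a Transformer, not a one-parameter regressor, and the datasets and hyper-parameters below are invented for the sketch:

```python
# Toy sketch of transfer-then-adapt: step 1 fine-tunes on a large general
# dataset; step 2 continues fine-tuning the *resulting* weights on the small
# target-domain dataset, rather than starting from the pre-trained weights.

def finetune(weights, data, lr=0.1, epochs=50):
    """Plain gradient descent on squared error for a 1-D linear model y = w*x."""
    w = weights
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

general_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # stands in for the large general task
target_data = [(1.0, 2.5), (2.0, 5.1)]               # stands in for the small target domain

w_pretrained = 0.0                                   # "pre-trained" starting point
w_general = finetune(w_pretrained, general_data)     # step 1: transfer
w_target = finetune(w_general, target_data)          # step 2: adapt

print(w_general, w_target)
```

The point of the ordering is that step 2 starts from a model already shaped by the general task, which (per the abstract) makes the adaptation step more stable and more robust to noisy target data than fine-tuning from scratch.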


2009 ◽  
Vol 8 (3) ◽  
pp. 153-166 ◽  
Author(s):  
A. Johannes Pretorius ◽  
Jarke J. Van Wijk

Information visualization is a user-centered design discipline. In this article we argue, however, that designing information visualization techniques often requires more than designing for user requirements. Additionally, the data that are to be visualized must also be carefully considered. An approach based on both the user and their data is encapsulated by two questions, which we argue information visualization designers should continually ask themselves: ‘What does the user want to see?’ and ‘What do the data want to be?’ As we show by presenting cases, these two points of departure are mutually reinforcing. By focusing on the data, new insight is gained into the requirements of the user, and vice versa, resulting in more effective visualization techniques.


2019 ◽  
Vol 37 (3) ◽  
pp. 591-603 ◽  
Author(s):  
Hsuanwei Michelle Chen

Purpose: The purpose of this paper is to investigate how scholars in the digital humanities employ information visualization techniques in their research, and how academic librarians should prepare themselves to support this emerging trend.
Design/methodology/approach: This study adopts a content analysis methodology, which further draws techniques from data mining, natural language processing and information visualization to analyze three peer-reviewed journals published within the last five years and ten online university library research guides in this field.
Findings: To successfully support and effectively contribute to the digital humanities, academic librarians should be knowledgeable in more than just visualization concepts and tools. The content analysis results for the digital humanities journals reflect the importance of recognizing the wide variety of applications and purposes of information visualization in digital humanities research.
Practical implications: This study provides useful and actionable insights into how academic librarians can prepare for this emerging technology to support future endeavors in the digital humanities.
Originality/value: Although information visualization has been widely adopted in digital humanities research, it remains unclear how librarians, especially academic librarians who support digital humanities research, should prepare for this emerging technology. This research is the first study to address this research gap through the lens of actual applications of information visualization techniques in digital humanities research, which is compared against university LibGuides for digital humanities research.

