AN ACTIVE VISUAL SEARCH INTERFACE FOR MEDLINE

Author(s):  
Weijian Xuan ◽  
Manhong Dai ◽  
Barbara Mirel ◽  
Justin Wilson ◽  
Brian Athey ◽  
...  
2015 ◽  
Vol 33 (4) ◽  
pp. 610-624


Author(s):  
Po-Yao Chao ◽  
Chia-Ching Lin

Purpose – The purpose of this paper is to explore how young children interact with a visualized search interface to search for storybooks by assembling the provided visual search items, and to explore differences in the visual search behaviours and strategies exhibited by pre-schoolers and second-graders.

Design/methodology/approach – The visualized search interface helped young children search for storybooks by dragging-and-dropping story characters, scene objects and colour icons to form search queries. Twenty pre-schoolers and 20 second-graders were asked to complete a search task through the visualized search interface. Their activities and successes in performing visual searches were logged for later analysis. In addition, in-depth interviews were conducted to examine the cognitive strategies they exhibited while formulating visual search queries.

Findings – Young children of different grades adopted different cognitive strategies to perform visual searching. In contrast to the pre-schoolers, who performed visual searching according to personal preference, the second-graders could exercise visual searching accompanied by relatively high-order thinking. Young children may also place different foci on storybook structure when handling conditional storybook queries: the pre-schoolers tended to focus on the characters in the story, whereas the second-graders paid more attention to scene and colour.

Originality/value – This paper describes a new visual search approach that allows young children to search for storybooks by describing an intended storybook in terms of its characters, scenes or background colours, providing valuable indicators of how pre-schoolers and second-graders formulate concepts to search for storybooks.
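The drag-and-drop query model described above can be sketched in a few lines: a query is the set of visual items (characters, scene objects, colour icons) the child has dropped onto the search canvas, and a storybook matches when it contains every queried item. This is a minimal illustrative sketch, not the paper's implementation; all names and data below are hypothetical.

```python
def match_storybooks(query_items, storybooks):
    """Return titles of storybooks that contain all queried visual items."""
    wanted = set(query_items)
    return [book["title"] for book in storybooks
            if wanted <= set(book["items"])]  # subset test = conjunctive query

# Hypothetical storybook index: each book is tagged with its characters,
# scene objects and dominant background colours.
storybooks = [
    {"title": "The Fox and the Moon", "items": {"fox", "forest", "blue"}},
    {"title": "A Red Balloon",        "items": {"girl", "city", "red"}},
    {"title": "Fox at the Seaside",   "items": {"fox", "beach", "blue"}},
]

# Dragging the "fox" character and the "blue" colour icon onto the canvas:
print(match_storybooks({"fox", "blue"}, storybooks))
# → ['The Fox and the Moon', 'Fox at the Seaside']
```

Treating the dropped icons as a conjunctive filter mirrors the "conditional storybook queries" the study asked children to formulate.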


2018 ◽  
Author(s):  
Maurice Schleußinger ◽  
Maria Henkel

Information visualizations are well established as a means of representing high-density information in an intuitive and interactive way. There are, however, no popular general-purpose retrieval systems that use the power of information visualizations for search result representation. This paper describes Knowde, a search interface with purely visual result representation. It employs a powerful information retrieval system and works in a common web browser in real time. This working prototype, with three variations of network graphs, will assist us in exploring current issues in visualization research, such as the challenge of system evaluation.

The final authenticated version is available online at https://doi.org/10.1007/978-3-319-92270-6_26.
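One common way to turn ranked retrieval results into a network graph for purely visual presentation is to make documents nodes and link two documents when they share index terms. The sketch below illustrates that general idea under stated assumptions; it is not Knowde's actual code, and the document IDs and terms are invented.

```python
from itertools import combinations

def result_graph(results, min_shared=1):
    """Build a simple result network.

    results: list of (doc_id, set_of_terms) pairs from a retrieval system.
    Returns (nodes, edges), where each edge links two documents that
    share at least `min_shared` terms and is labelled by those terms.
    """
    nodes = [doc_id for doc_id, _ in results]
    edges = []
    for (a, terms_a), (b, terms_b) in combinations(results, 2):
        shared = terms_a & terms_b
        if len(shared) >= min_shared:
            edges.append((a, b, sorted(shared)))
    return nodes, edges

# Hypothetical top-3 results for a query:
results = [
    ("d1", {"retrieval", "visualization"}),
    ("d2", {"visualization", "graph"}),
    ("d3", {"evaluation"}),
]
nodes, edges = result_graph(results)
print(edges)
# → [('d1', 'd2', ['visualization'])]
```

The (nodes, edges) pair could then be handed to any browser-side graph renderer; varying how edges are derived is one way to produce different network-graph views of the same result set.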


2015 ◽  
Vol 74 (1) ◽  
pp. 55-60 ◽  
Author(s):  
Alexandre Coutté ◽  
Gérard Olivier ◽  
Sylvane Faure

Computer use generally requires manual interaction with human-computer interfaces. In this experiment, we studied the influence of manual response preparation on co-occurring shifts of attention to information on a computer screen. Participants carried out a visual search task on a computer screen while simultaneously preparing to reach for either a proximal or distal switch on a horizontal device, with either their right or left hand. The response properties were not predictive of the target's spatial position. The results mainly showed that the preparation of a manual response influenced visual search: (1) the visual target whose location was congruent with the goal of the prepared response was found faster; (2) the visual target whose location was congruent with the laterality of the response hand was found faster; (3) these effects had a cumulative influence on visual search performance; (4) the magnitude of the influence of the response goal on visual search was marginally negatively correlated with the speed of response execution. These results are discussed in the general framework of structural coupling between perception and motor planning.


2008 ◽  
Vol 67 (2) ◽  
pp. 71-83 ◽  
Author(s):  
Yolanda A. Métrailler ◽  
Ester Reijnen ◽  
Cornelia Kneser ◽  
Klaus Opwis

This study compared individuals with pairs in a scientific problem-solving task. Participants interacted with a virtual psychological laboratory called Virtue to reason about a visual search theory. To this end, they created hypotheses, designed experiments, and analyzed and interpreted the results of their experiments in order to discover which of five possible factors affected the visual search process. Before and after their interaction with Virtue, participants took a test measuring theoretical and methodological knowledge. In addition, process data reflecting participants’ experimental activities and verbal data were collected. The results showed a significant but equal increase in knowledge for both groups. We found differences between individuals and pairs in the evaluation of hypotheses in the process data, and in descriptive and explanatory statements in the verbal data. Interacting with Virtue helped all students improve their domain-specific and domain-general psychological knowledge.


Author(s):  
Angela A. Manginelli ◽  
Franziska Geringswald ◽  
Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing does not (or only minimally) rely on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial, but not by the color working memory load. A control experiment ruled out that unspecific attentional factors of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.

