Making Life Easier for the Visually Impaired Web Searcher: It Is Now Clearer How This Should and Can Be Done, but Implementation Lags

2013 · Vol 8 (1) · pp. 90
Author(s): R. Laval Hunsucker

A Review of: Sahib, N. G., Tombros, A., & Stockman, T. (2012). A comparative analysis of the information-seeking behavior of visually impaired and sighted searchers. Journal of the American Society for Information Science and Technology, 63(2), 377–391. doi: 10.1002/asi.21696

Objective – To determine how the behaviour of visually impaired persons differs significantly from that of sighted persons when carrying out complex search tasks on the Internet.

Design – A comparative observational user study, plus semi-structured interviews.

Setting – Not specified.

Subjects – 15 sighted and 15 visually impaired persons, all of them experienced and frequent Internet search engine users, of both sexes and ranging in age from their early twenties to their mid-fifties.

Methods – The subjects carried out self-selected complex search tasks on their own equipment and in their own familiar environments. The investigators observed this activity to some extent directly, but for the most part via video camera, through use of a screen-sharing facility, or with screen-capture software. They distinguished four stages of search task activity: query formulation, search results exploration, query reformulation, and search results management. The visually impaired participants, of whom 13 were totally blind and two had only marginal vision, were all working with text-to-speech screen readers and depended exclusively for all their observed activity on those applications' auditory output. For data analysis, the investigators devised a grounded-theory-based coding scheme. They employed a search log format for deriving further quantitative data, which they then tested for statistical significance (two-tailed unpaired t-test; p < 0.05). The interviews allowed them to document, in particular, how the visually impaired subjects themselves subsequently accounted for, interpreted, and justified various observed aspects of their searching behaviour.

Main Results – The investigators found significant differences between the sighted participants' search behaviour and that of the visually impaired searchers. The latter displayed a clearly less "orienteering" (O'Day & Jeffries, 1993) disposition and style, more often starting out with already relatively long and comprehensive combinations of relatively precise search terms; "their queries were more expressive" (p. 386). They submitted fewer follow-up queries and were considerably less inclined to attempt query reformulation, aiming instead to achieve a satisfactory search outcome in a single step. Nevertheless, they rarely employed advanced operators, and made far less use (in only 4 instances) of their search engine's query-support features than did the sighted searchers (37 instances). Fewer of them (13%) ventured beyond the first page of the results returned for their query than was the case among the sighted searchers (43%). They viewed fewer retrieved pages (a mean of 4.27, as opposed to 13.40), and they visited fewer external links (6 visits by 4 visually impaired searchers, compared with 34 visits by 11 sighted searchers). The visually impaired participants engaged in note taking more frequently than did the sighted participants. The visually impaired searchers were in some cases, the investigators discovered, unaware of search engine facilities or searching tactics which might have improved their search outcomes. Yet even when they were aware of these, they very often chose not to employ them, because doing so via their screen readers would have cost them more time and effort than they were willing to expend. In general, they were more diffident and less resourceful than the sighted searchers, and had more trust in the innate capacity and reliability of their search engine to return the best available results efficiently.

Conclusion – Despite certain inherent limitations of the present study (the relatively small sample sizes, the non-randomness of the purposive sighted-searcher sample, the possible presence of extraneous variables, and the impossibility of entirely ruling out familiarity bias), its findings strongly support the conclusion that working with today's search engine user interfaces through the intermediation of currently available assistive technologies necessarily imposes severe limits on the degree to which visually impaired persons can efficiently search the web for information relevant to their needs. The findings furthermore suggest various measures that could be taken to alleviate the situation, in the form of further improvements to retrieval systems, to search interfaces, and to text-to-speech screen readers. Such improvements would include:
• more accessible system hints to support better, and less cognitively intensive, query formulation;
• web page layouts which are more suitable to screen-reader intermediation;
• a results presentation which more readily facilitates browsing and exploratory behaviour, preferably including auditory previews and overviews;
• presentation formats which allow for quicker and more accurate relevance judgments;
• mechanisms for (better) monitoring of search progress.
In any event, further information behaviour studies ought now to be conducted, with the specific aim of more closely informing the development of user interfaces that offer the kind of support visually impaired Internet searchers most need. Success in this undertaking will ultimately contribute to the further empowerment of visually disabled persons and thereby facilitate efforts to combat social exclusion.
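The significance testing the reviewers describe can be reproduced in outline with a standard two-tailed unpaired t-test. In the sketch below, the per-participant counts are invented for illustration; only their group means for pages viewed (4.27 vs. 13.40) match the figures reported in the review.

```python
# Sketch of the reviewers' significance test: a two-tailed unpaired t-test
# at alpha = 0.05 on pages viewed per participant. The individual counts
# are invented; only the group means (4.27 vs. 13.40) match the review.
from scipy import stats

pages_vi      = [3, 5, 4, 6, 2, 5, 4, 3, 6, 5, 4, 3, 5, 4, 5]
pages_sighted = [12, 15, 9, 14, 16, 11, 13, 18, 10, 14, 15, 12, 17, 13, 12]

t_stat, p_value = stats.ttest_ind(pages_vi, pages_sighted)  # unpaired, two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.2g}, significant at 0.05: {p_value < 0.05}")
```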

Author(s): Aboubakr Aqle · Dena Al-Thani · Ali Jaoua

Abstract: Few studies address the challenges that visually impaired (VI) users face when viewing search results on a search engine interface with a screen reader. This study investigates the effect of providing VI users with an overview of search results. We present a novel interactive search engine interface, InteractSE, that supports VI users during the results exploration stage in order to improve their interactive experience and web search efficiency. An overview of the search results is generated using an unsupervised machine learning approach that presents the discovered concepts via formal concept analysis, which is domain-independent. These concepts are arranged in a multi-level tree in hierarchical order, covering all retrieved documents that share maximal features. The InteractSE interface was evaluated by 16 legally blind users and compared with the Google search engine interface on complex search tasks. The evaluation results were obtained from both quantitative measures (e.g., task completion time) and qualitative ones (participants' feedback). These results are promising and indicate that InteractSE enhances search efficiency and consequently improves the user experience. Our observations and analysis of the user interactions and feedback yielded design suggestions for supporting VI users when exploring and interacting with search results.
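By way of illustration, here is a minimal, naive formal-concept-analysis sketch (not the InteractSE implementation): retrieved documents are the objects, extracted index terms are the attributes, and each formal concept pairs a maximal set of documents with the full set of terms they share, which is the kind of structure a multi-level concept tree is built from. All document names and terms below are invented.

```python
from itertools import combinations

context = {                      # hypothetical document-term incidence
    "doc1": {"screen reader", "web search"},
    "doc2": {"screen reader", "accessibility"},
    "doc3": {"screen reader", "web search", "accessibility"},
    "doc4": {"braille", "accessibility"},
}

def extent(attrs):
    """All documents that have every attribute in attrs."""
    return frozenset(d for d, a in context.items() if attrs <= a)

def intent(docs):
    """All attributes shared by every document in docs."""
    if not docs:
        return frozenset(a for attrs in context.values() for a in attrs)
    return frozenset.intersection(*(frozenset(context[d]) for d in docs))

# Naive enumeration: closing every subset of documents yields every
# formal concept (exponential, but fine for a small demonstration).
concepts = set()
for r in range(len(context) + 1):
    for docs in combinations(context, r):
        i = intent(frozenset(docs))
        concepts.add((extent(i), i))

# Larger extents first: the upper levels of the concept hierarchy.
for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(e), "share", sorted(i))
```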


2015 · Vol 2015 · pp. 1-14
Author(s): JianGuo Wang · Joshua Zhexue Huang · Dingming Wu

Query recommendation is an essential part of modern search engines; it aims to help users find useful information. Existing query recommendation methods all focus on recommending queries similar to the user's. However, the main problem with these similarity-based approaches is that even some very similar queries may return few or no useful search results, while other, less similar queries may return more useful results, especially when the initial query does not correctly reflect the user's search intent. Therefore, we propose recommending high-utility queries, that is, useful queries with more relevant documents, rather than merely similar ones. In this paper, we first construct a query-reformulation graph that consists of query nodes, satisfactory-document nodes, and an interruption node. Then, we apply an absorbing random walk to the query-reformulation graph and model document utility as the probability of reaching the satisfactory document from the initial query. Finally, we propagate the document utilities back to queries and rank candidate queries by their utilities for recommendation. Extensive experiments were conducted on real query logs, and the results show that our method significantly outperforms state-of-the-art methods in recommending high-utility queries.
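The core computation can be illustrated with standard absorbing-Markov-chain algebra: with query-to-query transitions collected in a matrix Q and transitions into absorbing states (satisfactory documents plus the interruption node) in R, the matrix B = (I - Q)^-1 R gives, for each starting query, the probability of absorption at each document. The sketch below uses invented transition probabilities and is not the authors' code.

```python
import numpy as np

# Transient (query) states: transitions among queries themselves.
Q = np.array([[0.0, 0.5],     # q0 -> q0, q0 -> q1
              [0.2, 0.0]])    # q1 -> q0, q1 -> q1

# Transitions from queries into the absorbing states
# [satisfactory doc1, satisfactory doc2, interruption node].
R = np.array([[0.3, 0.0, 0.2],   # from q0
              [0.1, 0.5, 0.2]])  # from q1

N = np.linalg.inv(np.eye(Q.shape[0]) - Q)   # fundamental matrix (I - Q)^-1
B = N @ R                                   # absorption probabilities

print(B)              # B[q, d]: P(walk from query q is absorbed at state d)
print(B.sum(axis=1))  # rows sum to 1: the walk is eventually absorbed
```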


1990 · Vol 84 (10) · pp. 493-496
Author(s): J.M. Dixon · J.B. Mandelbaum

This overview traces reading methods for blind and visually impaired persons from paper braille, recordings, and radio reading services, through computerized telephone services, to personal computers that provide access to on-line services, books on disk, CD-ROM, and scanning systems. It concludes with a review of trends, such as graphical user interfaces, fax machines, and touchscreens, that may have a negative effect on reading via computers.


2006 · Vol 14 (1) · pp. 71-81
Author(s): Ion Juvina · Herre van Oostendorp

This paper proposes a research-based tool to assist visually impaired persons (VIPs) in using the Internet via screen readers. The proposed tool is inspired by research on modeling web use and model-based highlighting, and it assists VIPs in selecting goal-relevant information on web pages. A computational cognitive model simulates the VIPs' Internet use. An intelligent agent capable of dynamic highlighting and selective reading, based on efficient machine learning algorithms, runs alongside the (simulated) user. The agent learns from interacting with the cognitive model and the information space. This agent is implemented in an adaptive interface that takes, expands, and updates a user goal, finds goal-relevant information, and suggests it to the (simulated) user in an appropriate way. The proposed tool could be applied in situations that require handling information overload with limited perceptual and cognitive capabilities.
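As a rough illustration of the highlighting idea (not the authors' cognitive model or learning algorithms), the sketch below expands a user goal with a stub synonym map, scores each link on a page by term overlap with the expanded goal, and flags the top candidates for the screen reader to announce first. Every name and data value in it is hypothetical.

```python
# Stub synonym map standing in for the learned goal-expansion step.
GOAL_SYNONYMS = {
    "flight": {"flight", "flights", "airfare"},
    "cheap": {"cheap", "budget", "discount"},
}

def expand_goal(terms):
    expanded = set()
    for t in terms:
        expanded |= GOAL_SYNONYMS.get(t, {t})
    return expanded

def score(link_text, goal):
    """Fraction of a link's words that match the expanded goal."""
    words = set(link_text.lower().split())
    return len(words & goal) / max(len(words), 1)

def highlight(links, goal_terms, top_k=2):
    """Return the top_k links to announce (highlight) first."""
    goal = expand_goal(goal_terms)
    return sorted(links, key=lambda l: score(l, goal), reverse=True)[:top_k]

page_links = ["Budget airfare deals", "Hotel loyalty programme", "Discount flights to Rome"]
print(highlight(page_links, ["cheap", "flight"]))   # the two travel links rank first
```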


2014 · pp. 131-137
Author(s): Peter J. A. Reusch · Bastian Stoll · Daniel Studnik · Joerg Swade

VoiceXML is a W3C language for creating voice user interfaces, particularly for the telephone. It uses speech recognition and touchtone (DTMF keypad) input, and pre-recorded audio and text-to-speech synthesis (TTS) for output. The text-to-speech feature of advanced VoiceXML tools such as WebSphere opens new perspectives for e-commerce and e-learning. We are no longer restricted to pre-recorded audio but can bring any text to the ear of the user: a user who may be visually impaired and need a voice channel to communicate, or a user who can read but prefers to listen. The authors have implemented VoiceXML applications to support e-commerce (selection of commodities from catalogues) and user guides for hardware (mobile phones, etc.) and software systems (MS Project, etc.). New contributions to e-learning are offered.
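A minimal VoiceXML 2.0 sketch of the interaction model just described: a menu that accepts either speech or a DTMF keypress, with all prompts rendered by TTS. The document structure and prompt text are invented for illustration rather than taken from the authors' applications.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical catalogue dialogue: DTMF-or-speech menu, TTS prompts. -->
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <menu>
    <prompt>
      Welcome to the catalogue. For phone user guides, press 1 or say phones.
      For software guides, press 2 or say software.
    </prompt>
    <choice dtmf="1" next="#phones">phones</choice>
    <choice dtmf="2" next="#software">software</choice>
  </menu>
  <form id="phones">
    <block><prompt>Any catalogue text can be synthesised and read aloud here.</prompt></block>
  </form>
  <form id="software">
    <block><prompt>The software user guide would be read aloud here.</prompt></block>
  </form>
</vxml>
```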

