Consistency of Visual Information in Web Design - Focusing on Responsiveness of a University Website

Author(s):  
Toshiki Matsuo ◽  
Wonseok Yang ◽  
Naoya Shibata

2021 ◽
Vol 14 (2) ◽  
pp. 1-32
Author(s):  
Claire Kearney-Volpe ◽  
Amy Hurst

There are a growing number of jobs related to web development, yet there is little formal literature about the accessibility of web development with a screen reader. This article describes research to explore (1) web development accessibility issues and their impact on blind learners and programmers; (2) tools and strategies used to address these issues; and (3) opportunities for creating inclusive web development curricula and supportive tools. We conducted a Comprehensive Literature Review (CLR) to formulate accessibility issue categories, then interviewed 12 blind programmers to validate these categories and to expand on issues in both education and practice. The CLR yielded five issue categories: (1) visual information without an accessible equivalent, (2) orienting, (3) navigating, (4) lack of support, and (5) knowledge and use of supportive technologies. Our interview findings validated the CLR-derived categories and revealed nuances specific to learning and practicing web development. Blind web developers grapple with inaccessible demonstrations and explanations of web design concepts, inaccessible wireframing software, difficulty independently verifying computed Cascading Style Sheets (CSS), and difficulty navigating browser-based developer tool interfaces. Their tools and strategies include seeking out alternative education materials to learn independently, using CSS frameworks, collaborating with sighted colleagues, and avoiding design and front-end development work. This work contributes to our understanding of accessibility issues specific to web development and of the strategies that blind web developers employ in both educational and applied contexts. We identify areas in which greater awareness and application of accessibility best practices are required in web education, a need to disseminate existing screen reader strategies and accessible tools, and a need to develop new tools that support web design and the validation of CSS. Finally, this research signals future directions for the development of accessible web curricula and supportive tools, including solutions that leverage artificial intelligence, tactile graphics, and supportive online communities of practice.
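The difficulty of independently verifying computed CSS can be made concrete with a short sketch. The following TypeScript snippet is illustrative only and is not from the article; the "#site-nav" selector and the property list are hypothetical. It reads an element's computed styles in the browser and logs them as plain "name: value" text, output a screen reader can announce from the console rather than requiring visual inspection of the styles panel:

  // Hypothetical selector; any element of interest could be queried instead.
  const element = document.querySelector<HTMLElement>("#site-nav");

  if (element) {
    const computed = window.getComputedStyle(element);
    // Log each computed property as plain text instead of relying on
    // visually inspecting the styles panel in browser developer tools.
    for (const prop of ["display", "position", "width", "color"]) {
      console.log(`${prop}: ${computed.getPropertyValue(prop)}`);
    }
  }

A console-based approach along these lines trades the visual styles panel for textual output, which is in the spirit of the screen reader strategies the article describes, though the article itself does not prescribe this technique.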


2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication is well established, yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment we first explored the processing of unimodally presented facial expressions. Auditory (prosodic and/or lexical-semantic) information was then presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, “meaning-related” processing. A direct relationship between P200 and N300 amplitude and the number of information channels present was found: the multimodal condition elicited the smallest P200 and N300 amplitudes, followed by larger amplitudes in each component for the bimodal condition, with the largest amplitudes observed for the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception, as reflected in the P200 and N300 components, may thus reflect one of the mechanisms that allows for fast and accurate information processing in human communication.


Author(s):  
Weiyu Zhang ◽  
Se-Hoon Jeong ◽  
Martin Fishbein†

This study investigates how multitasking interacts with the level of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. non-multitasking) × 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance but also decreased recognition of TV content. An inverted-U relationship between the degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize audio information more than with their ability to recognize visual information.

