visual constraints
Recently Published Documents


TOTAL DOCUMENTS

36
(FIVE YEARS 7)

H-INDEX

10
(FIVE YEARS 1)

Author(s):  
Alejandro Rubio Barañano ◽  
Muhammad Faisal ◽  
Brendan T. Barrett ◽  
John G. Buckley

Abstract
Viewing one’s smartphone whilst walking commonly leads to a slowing of walking. Slowing walking speed may occur because of the visual constraints related to reading the hand-held phone whilst in motion. We determine how walking-induced phone motion affects the ability to read on-screen information. Phone-reading performance (PRP) was assessed whilst participants walked on a treadmill at various speeds (Slow, Customary, Fast). The fastest speed was repeated, wearing an elbow brace (Braced) or with the phone mounted stationary (Fixed). An audible cue (‘text-alert’) indicated participants had 2 s to lift/view the phone and read aloud a series of digits. PRP was the number of digits read correctly. Each condition was repeated 5 times. 3D-motion analyses determined phone motion relative to the head, from which the variability in acceleration was assessed for viewing distance and for the point of gaze in space in the up-down and right-left directions. A main effect of condition indicated PRP decreased with walking speed; particularly so for the Braced and Fixed conditions (p = 0.022). Walking condition also affected the phone’s relative motion (p < 0.001); post-hoc analysis indicated that acceleration variability for the Fast, Fixed and Braced conditions was increased compared to that for Slow and Customary speed walking (p ≤ 0.05). There was an inverse association between phone acceleration variability and PRP (p = 0.02). These findings may explain why walking speed slows when viewing a hand-held phone: at slower speeds, head motion is smoother/more regular, enabling the motion of the phone to be coupled with head motion, thus making fewer demands on the oculomotor system. Good coupling ensures that the retinal image is stable enough to allow legibility of the information presented on the screen.
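The acceleration-variability measure described above can be illustrated with a short sketch: differentiate a sampled position signal twice and take the standard deviation. The function name, sampling rate, and example signals below are hypothetical placeholders, not the study's actual analysis pipeline.

```python
# Illustrative sketch of an acceleration-variability measure: the SD of
# acceleration estimated from a 1-D position signal sampled at a uniform
# rate (e.g., phone position relative to the head). All names and the
# sampling rate are assumptions for illustration only.

def acceleration_variability(positions, fs):
    """Standard deviation of acceleration for a position series sampled at fs Hz."""
    dt = 1.0 / fs
    # Central second difference approximates acceleration at interior samples.
    acc = [(positions[i - 1] - 2.0 * positions[i] + positions[i + 1]) / dt ** 2
           for i in range(1, len(positions) - 1)]
    mean = sum(acc) / len(acc)
    var = sum((a - mean) ** 2 for a in acc) / (len(acc) - 1)
    return var ** 0.5

# A perfectly smooth (constant-acceleration) trajectory has ~zero variability:
smooth = [0.5 * 9.81 * (t / 100.0) ** 2 for t in range(100)]
print(acceleration_variability(smooth, 100))  # near 0, up to floating-point noise
```

A jerkier signal (e.g., the same trajectory with alternating jitter added) yields a strictly larger value, which is the direction of the effect the abstract reports for faster walking.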


2021 ◽  
Vol 23 (3) ◽  
pp. 3-10
Author(s):  
Ming-Yuan Tang ◽  
Chih-Mei Yang ◽  
Hank Jun-Ling Jwo

OBJECTIVES: The perceptual ability to detect movement is essential for expert table tennis players. A spatiotemporal occlusion paradigm was employed to examine the critical information that facilitates athletes’ perception.
METHODS: Thirty-one expert table tennis players (29 participants and 2 demonstrators) volunteered to participate in the study. Four temporal conditions and five types of spatial occlusion were displayed in experimental videos of two opponents playing a table tennis forehand stroke. Periods t1–t4 represented the four temporal conditions, with 250, 500, 750, and 1000 ms of action being occluded, respectively. The five types of spatial occlusion involved showing the kinematics of only the ball, paddle, arm, trunk, or head. The participants were instructed to judge the landing direction of the ball on the basis of the information in the footage.
RESULTS: Prediction was most accurate when the footage depicted the longest period of play. Furthermore, in separate trials in which spatial information (for the ball, torso, or head) was missing because of occlusion, the absence of such critical spatiotemporal information impaired the ability of players to make an accurate prediction.
CONCLUSION: Players obtained crucial spatiotemporal information if the timeframe of the video was relatively complete and spatial information on the opponent’s torso and head was available. For peak performance, expert table tennis players perceive and detect the optical flow of the ball’s flight and consider invariant information concerning their opponent’s torso and head.


2021 ◽  
Vol 12 ◽  
Author(s):  
Candace C. Croney ◽  
Sarah T. Boysen

The ability of two Panepinto micro pigs and two Yorkshire pigs (Sus scrofa) to acquire a joystick-operated video-game task was investigated. Subjects were trained to manipulate a joystick that controlled movement of a cursor displayed on a computer monitor. The pigs were required to move the cursor to make contact with three-, two-, or one-walled targets randomly allocated for position on the monitor, and a reward was provided if the cursor collided with a target. The video-task acquisition required conceptual understanding of the task, as well as skilled motor performance. Terminal performance revealed that all pigs were significantly above chance on first attempts to contact one-walled targets (p < 0.05). These results indicate that despite dexterity and visual constraints, pigs have the capacity to acquire a joystick-operated video-game task. Limitations in the joystick methodology suggest that future studies of the cognitive capacities of pigs and other domestic species may benefit from the use of touchscreens or other advanced computer-interfaced technology.
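The "above chance" comparison reported here is the kind of question an exact one-sided binomial test answers. The sketch below uses made-up counts and a hypothetical 50% chance level; the study does not report these values.

```python
# Hedged sketch of an exact one-sided binomial test, the standard way to ask
# whether a hit rate exceeds a known chance level. The chance level p and the
# counts used below are illustrative placeholders, not data from the study.
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): one-sided 'above chance' p-value."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# e.g., 34 hits in 50 first attempts against a hypothetical 50% chance level:
p_value = binom_sf(34, 50, 0.5)
print(p_value < 0.05)  # True: such performance is unlikely under chance alone
```

The exact test is preferable to a normal approximation at the small trial counts typical of animal-cognition work.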


2020 ◽  
Author(s):  
Reto Stauffer ◽  
Achim Zeileis

Color is an integral element in many visualizations in (geo-)sciences, specifically in maps but also bar plots, scatter plots, or time series displays. Well-chosen colors can make graphics more appealing and, more importantly, help to clearly communicate the underlying information. Conversely, poorly-chosen colors can obscure information or confuse the readers. One example of the latter gained prominence in the controversy over Hurricane Dorian: using an official weather forecast map, U.S. President Donald Trump repeatedly claimed that early forecasts showed a high probability of Alabama being hit. We demonstrate that a potentially confusing rainbow color map may have contributed to an overestimation of the risk (among other factors that stirred the discussion).

To avoid such problems, we introduce general strategies for selecting robust color maps that are intuitive for many audiences, including readers with color vision deficiencies. The construction of sequential, diverging, or qualitative palettes is based on appropriate light-dark "luminance" contrasts while suitably controlling the "hue" and the colorfulness ("chroma"). The strategies are also easy to put into practice using computations based on the so-called Hue-Chroma-Luminance (HCL) color model, e.g., as provided in our "colorspace" software package (http://hclwizard.org), available for both the R and Python programming languages. In addition to the HCL-based color maps, the package provides interactive apps for exploring and modifying palettes along with further tools for manipulation and customization, demonstration plots, and emulation of visual constraints.
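The core HCL idea (hold hue fixed, vary luminance monotonically to build a sequential palette) can be sketched from scratch. The abstract's "colorspace" package provides ready-made, carefully tuned versions of this; the self-contained conversion below (HCL as polar CIELUV, D65 white) and its default hue/chroma/luminance values are illustrative choices, not the package's implementation.

```python
# Minimal sketch of the HCL idea behind sequential palettes: fix the hue and
# step luminance from light to dark. Conversion path: HCL -> CIELUV -> XYZ
# -> sRGB (D65 white point). Parameter defaults are arbitrary for the demo.
import math

def hcl_to_hex(h, c, l):
    """Convert an HCL (polar CIELUV) color to a gamma-encoded sRGB hex string."""
    u = c * math.cos(math.radians(h))          # HCL -> CIELUV
    v = c * math.sin(math.radians(h))
    xn, yn, zn = 95.047, 100.0, 108.883        # D65 reference white
    un = 4 * xn / (xn + 15 * yn + 3 * zn)
    vn = 9 * yn / (xn + 15 * yn + 3 * zn)
    y = yn * ((l + 16) / 116) ** 3 if l > 8 else yn * l / 903.3
    if l == 0:
        x = z = 0.0
    else:
        up = u / (13 * l) + un
        vp = v / (13 * l) + vn
        x = y * 9 * up / (4 * vp)
        z = y * (12 - 3 * up - 20 * vp) / (4 * vp)
    x, y, z = x / 100, y / 100, z / 100        # XYZ -> linear sRGB
    rgb = (3.2406 * x - 1.5372 * y - 0.4986 * z,
           -0.9689 * x + 1.8758 * y + 0.0415 * z,
           0.0557 * x - 0.2040 * y + 1.0570 * z)
    def encode(ch):
        ch = min(max(ch, 0.0), 1.0)            # clip out-of-gamut channels
        ch = 12.92 * ch if ch <= 0.0031308 else 1.055 * ch ** (1 / 2.4) - 0.055
        return round(ch * 255)
    return "#{:02X}{:02X}{:02X}".format(*(encode(ch) for ch in rgb))

def sequential_palette(h=260, c=60, n=5):
    """n colors at a fixed hue, with luminance stepping from light to dark."""
    return [hcl_to_hex(h, c, 90 - i * (60 / (n - 1))) for i in range(n)]

print(sequential_palette())  # 5 hex colors of one hue, lightest first
```

Because only luminance varies monotonically, the palette still orders values correctly when printed in grayscale or viewed with a color vision deficiency, which is the robustness property the abstract argues for.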


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Heidi Solberg Økland ◽  
Ana Todorović ◽  
Claudia S. Lüttke ◽  
James M. McQueen ◽  
Floris P. de Lange

Oikos ◽  
2019 ◽  
Vol 128 (6) ◽  
pp. 798-810 ◽  
Author(s):  
Cameron L. Rutt ◽  
Stephen R. Midway ◽  
Vitek Jirinec ◽  
Jared D. Wolfe ◽  
Philip C Stouffer

2018 ◽  
Author(s):  
Heidi Solberg Økland ◽  
Ana Todorović ◽  
Claudia S. Lüttke ◽  
James M. McQueen ◽  
Floris P. de Lange

Abstract
In language comprehension, a variety of contextual cues act in unison to render upcoming words more or less predictable. As a sentence unfolds, we use prior context (sentential constraints) to predict what the next words might be. Additionally, in a conversation, we can predict upcoming sounds through observing the mouth movements of a speaker (visual constraints). In electrophysiological studies, effects of visual salience have typically been observed early in language processing, while effects of sentential constraints have typically been observed later. We hypothesized that the visual and the sentential constraints might feed into the same predictive process such that effects of sentential constraints might also be detectable early in language processing through modulations of the early effects of visual salience. We presented participants with audiovisual speech while recording their brain activity with magnetoencephalography. Participants saw videos of a person saying sentences where the last word was either sententially constrained or not, and began with a salient or non-salient mouth movement. We found that sentential constraints indeed exerted an early (N1) influence on language processing. Sentential modulations of the N1 visual predictability effect were visible in brain areas associated with semantic processing, and were differently expressed in the two hemispheres. In the left hemisphere, visual and sentential constraints jointly suppressed the auditory evoked field, while the right hemisphere was sensitive to visual constraints only in the absence of strong sentential constraints. These results suggest that sentential and visual constraints can jointly influence even very early stages of audiovisual speech comprehension.

