Cruxes for visual domain sonification in digital arts

2021 ◽  
pp. 1-14
Author(s):  
Denis Trček

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Julia Friedrich ◽  
Henriette Spaleck ◽  
Ronja Schappert ◽  
Maximilian Kleimaker ◽  
Julius Verrel ◽  
...  

It is a common phenomenon that somatosensory sensations can trigger actions to alleviate experienced tension. Such “urges” are particularly relevant in patients with Gilles de la Tourette syndrome (GTS), since they often precede tics, the cardinal feature of this common neurodevelopmental disorder. Altered sensorimotor integration processes in GTS, as well as evidence for increased binding of stimulus- and response-related features (“hyper-binding”) in the visual domain, suggest enhanced perception–action binding in the somatosensory modality as well. In the current study, the Theory of Event Coding (TEC) was used as an overarching cognitive framework to examine somatosensory-motor binding. For this purpose, a somatosensory-motor version of a task measuring stimulus–response binding (S-R task) was tested using electro-tactile stimuli. Contrary to the main hypothesis, there were no group differences in binding effects between GTS patients and healthy controls in the somatosensory-motor paradigm; the behavioral data did not indicate differences in binding between the examined groups. These data can be interpreted as showing that a compensatory “downregulation” of increased somatosensory stimulus saliency, e.g., due to the occurrence of somatosensory urges and hypersensitivity to external stimuli, reduces binding with the associated motor output and brings it to a “normal” level. “Hyper-binding” in GTS therefore seems to be modality-specific.


2021 ◽  
pp. 174702182199003
Author(s):  
Andy J Kim ◽  
David S Lee ◽  
Brian A Anderson

Previously reward-associated stimuli have consistently been shown to involuntarily capture attention in the visual domain. Although previously reward-associated but currently task-irrelevant sounds have also been shown to interfere with visual processing, it remains unclear whether such stimuli can interfere with the processing of task-relevant auditory information. To address this question, we modified a dichotic listening task to measure interference from task-irrelevant but previously reward-associated sounds. In a training phase, participants were simultaneously presented with a spoken letter and number in different auditory streams and learned to associate the correct identification of each of three letters with high, low, and no monetary reward, respectively. In a subsequent test phase, participants were again presented with the same auditory stimuli but were instead instructed to report the number while ignoring spoken letters. In both the training and test phases, response time measures demonstrated that attention was biased in favour of the auditory stimulus associated with high value. Our findings demonstrate that attention can be biased towards learned reward cues in the auditory domain, interfering with goal-directed auditory processing.


2021 ◽  
pp. 174702182199545
Author(s):  
Emily M Crowe ◽  
Sander A Los ◽  
Louise Schindler ◽  
Christopher Kent

How quickly participants respond to a “go” after a “warning” signal is partly determined by the time between the two signals (the foreperiod) and the distribution of foreperiods. According to Multiple Trace Theory of Temporal Preparation (MTP), participants use memory traces of previous foreperiods to prepare for the upcoming go signal. If the processes underlying temporal preparation reflect general encoding and memory principles, transfer effects (the carryover effect of a previous block’s distribution of foreperiods to the current block) should be observed regardless of the sensory modality in which signals are presented. Despite convincing evidence for transfer effects in the visual domain, only weak evidence for transfer effects has been documented in the auditory domain. Three experiments were conducted to examine whether such differences in results are due to the modality of the stimulus or other procedural factors. In each experiment, two groups of participants were exposed to different foreperiod distributions in the acquisition phase and to the same foreperiod distribution in the transfer phase. Experiment 1 used a choice-reaction time (RT) task, and the warning signal remained on until the go signal, but there was no evidence for transfer effects. Experiments 2 and 3 used a simple- and choice-RT task, respectively, and there was silence between the warning and go signals. Both experiments revealed evidence for transfer effects, which suggests that transfer effects are most evident when there is no auditory stimulation between the warning and go signals.
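The memory-trace mechanism that MTP proposes can be made concrete with a toy simulation. The sketch below is not from the paper; the foreperiod values (0.4 s and 1.6 s), the group proportions, the similarity window, and the function name mtp_preparation are all illustrative assumptions. It shows why two groups given different acquisition distributions should differ at the same transfer-phase foreperiod: their stored traces differ.

import random

def mtp_preparation(trace_memory, probe_fp, width=0.2):
    """Preparation ~ proportion of stored foreperiod traces near the probe."""
    hits = sum(1.0 for fp in trace_memory if abs(fp - probe_fp) <= width)
    return hits / max(len(trace_memory), 1)

# Acquisition phase: group A experiences mostly short foreperiods,
# group B mostly long ones (proportions are illustrative).
group_a = [random.choice([0.4, 0.4, 0.4, 1.6]) for _ in range(200)]
group_b = [random.choice([0.4, 1.6, 1.6, 1.6]) for _ in range(200)]

# Transfer phase: both groups now meet the same 1.6 s foreperiod, but
# their accumulated traces differ, so predicted preparation differs —
# the carryover that the experiments test for.
print("group A preparation at 1.6 s:", mtp_preparation(group_a, 1.6))
print("group B preparation at 1.6 s:", mtp_preparation(group_b, 1.6))

On this account, the absence of a transfer effect in Experiment 1 would mean the stored traces were not expressed at test, not that they were never formed.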


2021 ◽  
Vol 17 (3) ◽  
pp. 1-19
Author(s):  
Xin Li ◽  
Dawei Li

Forecasting human poses from a sequence of historical pose frames has several important applications, especially in the domain of smart home safety. Recently, computer vision-based human pose forecasting has made a breakthrough using deep learning technology. However, to deploy a practical system in an IoT edge environment, two issues must still be addressed. First, existing pose-forecasting methods fail to model the coherent structural information of connected human joints and thus cannot achieve satisfactory prediction accuracy, especially for long-term predictions. Second, a general, static pre-trained prediction model may not perform well in the deployment environment due to the visual domain shift problem. In this article, we propose a hybrid cloud-edge system called GPFS to solve those issues. Specifically, we first introduce a novel graph convolutional neural network (GCN)-based sequence-to-sequence learning method, which enhances the sequence encoder by using a graph to represent both the spatial and temporal connections of the human joints in the input frames. The GCN improves forecasting accuracy by capturing the motion pattern of each joint as well as the correlations among different joints over time. Second, to address the domain shift issue and protect data privacy, we extend the system to perform online learning on the IoT edge, adapting the cloud-trained general model with on-site domain data collected online. Extensive evaluation on the Human3.6M and Penn Action datasets demonstrates the superiority of the proposed system.
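The paper's exact architecture is not reproduced here; the following is a minimal PyTorch sketch of the general idea: a graph-convolutional layer over the joint skeleton feeding a recurrent sequence-to-sequence forecaster. The class names (GCNLayer, PoseForecaster), the 17-joint skeleton, the layer sizes, the learnable adjacency matrix, and the residual decoding step are illustrative assumptions, not the authors' GPFS implementation.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: mixes joint features over skeleton edges."""
    def __init__(self, in_dim, out_dim, num_joints):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Learnable adjacency initialised to self-loops; a real system would
        # start from the kinematic skeleton connectivity.
        self.adj = nn.Parameter(torch.eye(num_joints))

    def forward(self, x):                   # x: (batch, joints, features)
        x = torch.matmul(self.adj, x)       # aggregate over connected joints
        return torch.relu(self.linear(x))

class PoseForecaster(nn.Module):
    """Encode observed frames with GCN + GRU, then decode future frames."""
    def __init__(self, num_joints=17, coord_dim=3, hidden=128):
        super().__init__()
        self.gcn = GCNLayer(coord_dim, hidden, num_joints)
        self.encoder = nn.GRU(num_joints * hidden, hidden, batch_first=True)
        self.decoder = nn.GRUCell(num_joints * coord_dim, hidden)
        self.readout = nn.Linear(hidden, num_joints * coord_dim)

    def forward(self, frames, horizon=10):
        # frames: (batch, time, joints, coord_dim)
        b, t, j, c = frames.shape
        feats = torch.stack([self.gcn(frames[:, i]) for i in range(t)], dim=1)
        _, h = self.encoder(feats.reshape(b, t, -1))
        h, last = h[-1], frames[:, -1].reshape(b, -1)
        outputs = []
        for _ in range(horizon):             # autoregressive decoding
            h = self.decoder(last, h)
            last = last + self.readout(h)    # residual: predict joint offsets
            outputs.append(last.reshape(b, j, c))
        return torch.stack(outputs, dim=1)   # (batch, horizon, joints, coords)

In the cloud-edge setting the paper describes, a model like this would be pre-trained in the cloud and then fine-tuned on the edge device with newly collected on-site frames, which is the online adaptation step addressing domain shift.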


Perception ◽  
2021 ◽  
Vol 50 (1) ◽  
pp. 97-100
Author(s):  
Magdalena Szubielska ◽  
Marcin Wojtasiński

This study tested whether the drawn size of familiar objects of different physical sizes varies in haptic drawings produced by blindfolded sighted participants. Participants made convex drawings on foil sheets of two sizes, drawing one object per foil. The results showed that drawing size increased linearly with the rank of real-world object size. Although larger drawings were produced on larger foils than on smaller ones, the ratio of drawn object size to foil sheet size did not differ across foil sizes. Hence canonical size, a phenomenon known so far from studies in the visual domain, also emerged in a task performed in the haptic domain.


Leonardo ◽  
1999 ◽  
Vol 32 (4) ◽  
pp. 261-268 ◽  
Author(s):  
Matthew Kirschenbaum

This paper documents an interactive graphics installation entitled Lucid Mapping and Codex Transformissions in the Z-Buffer. Lucid Mapping uses the Virtual Reality Modeling Language (VRML) to explore textual and narrative possibilities within three-dimensional (3D) electronic environments. The author describes the creative rationale and technical design of the work and places it within the context of other applications of 3D text and typography in the digital arts and scientific visualization communities. The author also considers the implications of 3D textual environments for visual language and communication, and discriminates among a range of different visual/rhetorical strategies that such environments can sustain.


2017 ◽  
Vol 39 (3) ◽  
pp. 307-308
Author(s):  
Maria Chatzichristodoulou
