AI-driven imaging biomarkers for sensory cue integration during melanoma screening (Conference Presentation)
Author(s): Daniel S. Gareau, Charles Vrattos, James Browning, Samantha R. Lish, Benjamin Firester, et al.

Neuroscience, 2019, Vol 408, pp. 378-387
Author(s): Qadeer Arshad, Marta Casanovas Ortega, Usman Goga, Rhannon Lobo, Shuaib Siddiqui, et al.

2013, Vol 14 (6), pp. 429-442
Author(s): Christopher R. Fetsch, Gregory C. DeAngelis, Dora E. Angelaki

Perception, 2013, Vol 42 (4), pp. 477-479
Author(s): Paul Barry Hibbard

2016, Vol 26 (7), pp. 615-618
Author(s): Daniel S. Gareau, Joel Correa da Rosa, Sarah Yagerman, John A. Carucci, Nicholas Gulati, et al.

2014, Vol 369 (1635), pp. 20120512
Author(s): Rebecca Knight, Caitlin E. Piette, Hector Page, Daniel Walters, Elizabeth Marozzi, et al.

How the brain combines information from different sensory modalities and of differing reliability is an important and still-unanswered question. Using the head direction (HD) system as a model, we explored the resolution of conflicts between landmarks and background cues. Sensory cue integration models predict averaging of the two cues, whereas attractor models predict capture of the signal by the dominant cue. We found that a visual landmark mostly captured the HD signal at low conflicts; however, there was an increasing propensity for the cells to integrate the cues at larger conflicts. A large conflict presented to naive rats resulted in greater visual cue capture (less integration) than in experienced rats, revealing an effect of experience. We propose that weighted cue integration in HD cells arises from dynamic plasticity of the feed-forward inputs to the network, causing within-trial spatial redistribution of the visual inputs onto the ring. This suggests that an attractor network can implement decision processes about cue reliability using simple architecture and learning rules, thus providing a potential neural substrate for weighted cue integration.
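The averaging prediction of sensory cue integration models referred to in this abstract is standardly formalized as inverse-variance (maximum-likelihood) weighting: each cue is weighted by its reliability, and the combined estimate has lower variance than either cue alone. A minimal sketch, with hypothetical cue values and variances chosen purely for illustration:

```python
def integrate_cues(x_vis, var_vis, x_bg, var_bg):
    """Reliability-weighted average of two direction cues.

    Each cue's weight is proportional to its inverse variance,
    so the more reliable cue dominates the combined estimate.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_bg)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_bg
    # The integrated estimate is more reliable than either cue alone:
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_bg)
    return x_hat, var_hat

# Visual landmark says 30 degrees (variance 4); background cues say
# 0 degrees (variance 16). The landmark is 4x more reliable, so it
# gets weight 0.8 and pulls the estimate most of the way toward it.
x_hat, var_hat = integrate_cues(x_vis=30.0, var_vis=4.0, x_bg=0.0, var_bg=16.0)
# x_hat is approximately 24.0; var_hat is approximately 3.2 (below both cue variances)
```

Cue capture, by contrast, corresponds to the limit where one cue takes all the weight regardless of the relative variances.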


2018, Vol 31 (7), pp. 645-674
Author(s): Maria Gallagher, Elisa Raffaella Ferrè

In the past decade, there has been a rapid advance in Virtual Reality (VR) technology. Key to the user’s VR experience are multimodal interactions involving all senses. The human brain must integrate real-time vision, hearing, vestibular and proprioceptive inputs to produce the compelling and captivating feeling of immersion in a VR environment. A serious problem with VR is that users may develop symptoms similar to motion sickness, a malady called cybersickness. At present, the underlying cause of cybersickness is not fully understood. Cybersickness may be due to a discrepancy between the sensory signals which provide information about the body’s orientation and motion: in many VR applications, optic flow elicits an illusory sensation of motion which tells users that they are moving in a certain direction with a certain acceleration. However, since users are not actually moving, their proprioceptive and vestibular organs provide no cues of self-motion. These conflicting signals may lead to sensory discrepancies and eventually cybersickness. Here we review the current literature to develop a conceptual scheme for understanding the neural mechanisms of cybersickness. We discuss an approach to cybersickness based on sensory cue integration, focusing on the dynamic re-weighting of visual and vestibular signals for self-motion.
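The dynamic re-weighting this review discusses can be caricatured as progressively down-weighting the visual cue when it persistently conflicts with vestibular input, as when a stationary VR user sees optic flow. A toy sketch; the linear update rule, the starting weight, and the learning rate are illustrative assumptions, not taken from the paper:

```python
def self_motion_estimate(v_visual, v_vestibular, w_visual):
    """Weighted combination of visual (optic flow) and vestibular motion cues."""
    return w_visual * v_visual + (1.0 - w_visual) * v_vestibular

def reweight(w_visual, conflict, rate=0.2):
    """Shift weight toward the vestibular cue in proportion to cue conflict."""
    return max(0.0, w_visual - rate * conflict)

# Stationary VR user: optic flow signals 1.0 m/s, vestibular organs signal 0 m/s.
w = 0.8  # vision starts dominant
for _ in range(3):
    est = self_motion_estimate(1.0, 0.0, w)
    conflict = abs(1.0 - 0.0)  # persistent visual-vestibular discrepancy
    w = reweight(w, conflict)  # vision is progressively down-weighted
# After three updates w has fallen from 0.8 to about 0.2, and the illusory
# self-motion estimate has shrunk accordingly.
```

In this caricature, cybersickness would correspond to the period before re-weighting resolves the conflict, while the conflicting estimate is still large.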


2016, Vol 29 (1-3), pp. 7-28
Author(s): Cesare V. Parise

Crossmodal correspondences refer to the systematic associations often found across seemingly unrelated sensory features from different sensory modalities. Such phenomena constitute a universal trait of multisensory perception even in non-human species, and seem to result, at least in part, from the adaptation of sensory systems to natural scene statistics. Despite recent developments in the study of crossmodal correspondences, there are still a number of outstanding questions about their definition, their origins, their plasticity, and their underlying computational mechanisms. In this paper, I will review such questions in the light of current research on sensory cue integration, where crossmodal correspondences can be conceptualized in terms of natural mappings across different sensory cues that are present in the environment and learnt by the sensory systems. Finally, I will provide some practical guidelines for the design of experiments that might shed new light on crossmodal correspondences.

