sensory modality
Recently Published Documents


TOTAL DOCUMENTS

601
(FIVE YEARS 207)

H-INDEX

51
(FIVE YEARS 6)

Author(s):  
Katie H Long ◽  
Kristine R McLellan ◽  
Maria Boyarinova ◽  
Sliman J Bensmaia

Hand proprioception - the sense of the posture and movements of the wrist and digits - is critical to dexterous manual behavior and to stereognosis, the ability to sense the three-dimensional structure of objects held in the hand. To better understand this sensory modality and its role in hand function, we sought to characterize the acuity with which the postures and movements of finger joints are sensed. To this end, we measured the ability of human subjects to discriminate changes in posture and speed around the three joints of the index finger. In these experiments, we isolated the sensory component by imposing the postures on an otherwise still hand, complementing other studies in which subjects made judgments on actively achieved postures. We found that subjects could reliably sense 12-16% changes in joint angle and 18-32% changes in joint speed. Furthermore, the acuity for posture and speed was comparable across the three joints of the finger. Finally, task performance was unaffected by the presence of a vibratory stimulus, calling into question the role of cutaneous cues in hand proprioception.
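Thresholds like the 12-16% figures above are conventionally estimated by fitting a psychometric function to the discrimination judgments. A minimal Python sketch of that kind of analysis, assuming a cumulative-Gaussian model and hypothetical response proportions (not the authors' data or exact method):

```python
# Sketch: estimate a joint-angle discrimination threshold by fitting a
# cumulative-Gaussian psychometric function to hypothetical 2AFC data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: % change in joint angle vs. proportion correct
deltas = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 30.0])
p_correct = np.array([0.52, 0.58, 0.70, 0.81, 0.90, 0.97])

def psychometric(x, mu, sigma):
    # Cumulative Gaussian scaled to the 0.5-1.0 range of a 2AFC task
    return 0.5 + 0.5 * norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, deltas, p_correct, p0=[10.0, 5.0])

# Threshold: the % change detected at 75% correct (x = mu by construction)
print(f"Discrimination threshold ~{mu:.1f}% change in joint angle")
```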


2022 ◽  
Author(s):  
Qi Zhang ◽  
Yumeng Wu ◽  
Weiyuan Li ◽  
Jia Wang ◽  
Huiting Zhou ◽  
...  

Abstract Vision is the dominant sensory modality in fish and is critical to the survival of fish larvae, enabling them to detect predators and capture prey. The visual capacity of fish larvae is determined by the structure of the retina and the opsins expressed in the retinal and non-retinal photoreceptors. In this study, the retinal structure and opsin expression patterns during the early development of Takifugu rubripes larvae were investigated. At around two days after hatching (dah), the yolk sac of T. rubripes disappeared, the mouth was clearly visible, and the larvae started swimming and feeding on rotifers. Histological examination showed that at 1 dah, six layers were observed in the retina of T. rubripes larvae: the retinal pigment epithelium (RPE), photoreceptor layer (PRos/is), outer nuclear layer (ONL), inner nuclear layer (INL), inner plexiform layer (IPL) and ganglion cell layer (GCL). At 2 dah, all eight layers were visible: the RPE, PRos/is, ONL, outer plexiform layer (OPL), INL, IPL, GCL and optic fiber layer (OFL). Measurements of layer thickness revealed opposing developmental trends between the ONL, OPL and INL on the one hand and the IPL, GCL and OFL on the other. The nuclear densities of the ONL, INL and GCL and the ONL/INL, ONL/GCL and INL/GCL ratios were also measured. The ONL/GCL ratio ranged from 1.9 at 2 dah to 3.4 at 8 dah, with no significant difference between developmental stages (p > 0.05). Likewise, the INL/GCL ratio, which ranged from 1.2 at 2 dah to 2.0 at 18 dah, did not differ significantly between stages (p > 0.05). Quantitative real-time PCR showed that the expression of rhodopsin, LWS, SWS2, green opsin, rod opsin, opsin3 and opsin5 could be detected from 1 dah. These results suggest that the eye of T. rubripes matures during the transition from endogenous to mixed feeding, consistent with the need for vision-based survival skills during the early life stages after hatching and with the species' overall ecology and fitness.
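The abstract does not state how the qRT-PCR data were quantified; relative expression is commonly computed with the 2^(-ΔΔCt) method. A minimal sketch under that assumption, with hypothetical Ct values and an assumed housekeeping reference gene:

```python
# Sketch: relative opsin expression via the 2^(-ddCt) method, a common
# qRT-PCR quantification. All Ct values below are hypothetical; the study's
# actual reference gene and calculations are not given in the abstract.
ct_target_sample     = 24.1  # e.g., rhodopsin at 8 dah
ct_reference_sample  = 18.3  # housekeeping gene at 8 dah
ct_target_control    = 26.7  # rhodopsin at 1 dah (calibrator)
ct_reference_control = 18.5  # housekeeping gene at 1 dah

d_ct_sample  = ct_target_sample - ct_reference_sample    # normalize sample
d_ct_control = ct_target_control - ct_reference_control  # normalize calibrator
dd_ct = d_ct_sample - d_ct_control

fold_change = 2 ** (-dd_ct)  # ~5.3-fold increase with these numbers
print(f"Relative expression (fold change): {fold_change:.1f}")
```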


As a foundational approach in inferential statistics, hypothesis testing (HT) is considered one of the most challenging topics to teach and learn. A promising approach is to consider students' learning modalities, as demonstrated in numerous applications; however, the use of learning modalities in education remains contentious in recent debates. This controversy stems from a lack of robust empirical evidence on the efficacy of learning modalities in education. This work therefore attempts to contribute to the debate by investigating whether sensory modality influences learning. It develops an approach for teaching HT to college students via learning modality. Results show that learning modalities have a positive impact on students' performance on competencies in learning HT. Furthermore, some learning modalities were found to work together on specific competencies. Lastly, the task-dependency of learning modalities was observed in the experimental results.
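For readers unfamiliar with the workflow being taught, a minimal sketch of a basic hypothesis test (a two-sample t-test on simulated scores; illustrative only, not the study's data or design):

```python
# Sketch: the basic hypothesis-testing workflow (two-sample t-test) on
# simulated exam scores. Group labels and parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=75, scale=10, size=30)  # e.g., modality-matched teaching
group_b = rng.normal(loc=70, scale=10, size=30)  # e.g., conventional teaching

# H0: equal population means; H1: means differ (two-sided)
t_stat, p_value = stats.ttest_ind(group_a, group_b)
alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```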


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Bianca Maria Serena Inguscio ◽  
Giulia Cartocci ◽  
Nicolina Sciaraffa ◽  
Claudia Nasta ◽  
Andrea Giorgi ◽  
...  

Exploration of specific brain areas involved in verbal working memory (VWM) is a powerful but not widely used tool for the study of different sensory modalities, especially in children. In this study, for the first time, we used electroencephalography (EEG) to investigate neurophysiological similarities and differences in response to the same verbal stimuli, presented in the auditory and visual modality during an n-back task with varying memory load in children. Since VWM plays an important role in learning ability, we wanted to investigate whether children process verbal input from auditory and visual stimuli through the same neural patterns and whether performance varies depending on the sensory modality. Performance in terms of reaction times was better in the visual than in the auditory modality (p = 0.008) and worsened as memory load increased, regardless of modality (p < 0.001). EEG activation was proportionally influenced by task level and was evident in the theta band over the prefrontal cortex (p = 0.021), along the midline (p = 0.003), and over the left hemisphere (p = 0.003). Differences between the two modalities were seen only in the gamma band in the parietal cortices (p = 0.009). The values of a brainwave-based engagement index, innovatively used here to test children in a dual-modality VWM paradigm, varied with n-back task level (p = 0.001) and correlated negatively (p = 0.002) with performance, suggesting its computational effectiveness in detecting changes in mental state during memory tasks involving children. Overall, our findings suggest that auditory and visual VWM involve the same cortical areas (frontal, parietal, occipital, and midline) and that the significant differences in theta-band cortical activation were related more to memory load than to sensory modality, suggesting that VWM in the child's brain involves a cross-modal processing pattern.
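The abstract does not define the engagement index; one common formulation in the EEG literature is beta power divided by the sum of alpha and theta power. A minimal sketch under that assumption, using Welch band-power estimates on a placeholder signal:

```python
# Sketch: band power and a common engagement index, beta / (alpha + theta).
# The paper's exact index definition is not given in the abstract, so this
# formulation, the sampling rate, and the signal are all assumptions.
import numpy as np
from scipy.signal import welch

fs = 256                        # sampling rate (Hz), assumed
eeg = np.random.randn(fs * 60)  # placeholder: 60 s of single-channel EEG

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(lo, hi):
    # Integrate the power spectral density over a frequency band
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

theta = band_power(4, 8)
alpha = band_power(8, 13)
beta  = band_power(13, 30)

engagement = beta / (alpha + theta)
print(f"theta={theta:.3f}, alpha={alpha:.3f}, beta={beta:.3f}, "
      f"engagement={engagement:.3f}")
```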


2021 ◽  
Vol 15 ◽  
Author(s):  
Liliana da Conceição Teixeira ◽  
Danielle Blacker ◽  
Carlos Campos ◽  
Carolina Garrett ◽  
Sophie Duport ◽  
...  

Purpose: The recommended way to assess consciousness in prolonged disorders of consciousness is to observe the patient's responses to sensory stimulation. Multiple assessment sessions have to be completed in order to reach a correct diagnosis. There is, however, a lack of data on how many sessions are sufficient for validity and reliability. The aim of this study was to identify the number of Sensory Modality Assessment and Rehabilitation Technique (SMART) assessment sessions needed to reach a reliable diagnosis. A secondary objective was to identify which sensory stimulation modalities are most useful for reaching a diagnosis.
Materials and Methods: A retrospective analysis of all adult patients who received a SMART assessment on admission to a specialist brain injury unit over the course of 4 years was conducted (n = 35). An independent rater analyzed the SMART levels for each modality and session and provided a suggested diagnosis based on the highest SMART level per session.
Results: For the vast majority of patients, five to six sessions were sufficient to reach the final clinical diagnosis. The visual, auditory, tactile, and motor function modalities were more strongly associated with the final diagnosis than the olfactory and gustatory modalities.
Conclusion: These findings provide, for the first time, a rationale for optimizing the time spent assessing patients using SMART.
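A minimal sketch of the rater's per-session rule described above, where the highest SMART level across modalities drives the session diagnosis. The level thresholds and diagnostic labels below are illustrative assumptions, not the published SMART criteria:

```python
# Sketch: per-session diagnosis from the highest SMART level across
# modalities. Levels and thresholds here are illustrative assumptions.
sessions = [  # SMART level per modality, one dict per session
    {"visual": 3, "auditory": 3, "tactile": 2, "motor": 3,
     "olfactory": 1, "gustatory": 1},
    {"visual": 4, "auditory": 3, "tactile": 3, "motor": 4,
     "olfactory": 1, "gustatory": 2},
]

def session_diagnosis(levels):
    top = max(levels.values())
    if top == 5:
        return "discriminating responses (awareness suggested)"
    if top >= 3:
        return "minimally conscious state (illustrative threshold)"
    return "vegetative state / unresponsive wakefulness"

for i, s in enumerate(sessions, 1):
    print(f"Session {i}: highest level {max(s.values())} -> "
          f"{session_diagnosis(s)}")
```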


2021 ◽  
Vol 15 ◽  
Author(s):  
Laurien Nagels-Coune ◽  
Lars Riecke ◽  
Amaia Benitez-Andonegui ◽  
Simona Klinkhammer ◽  
Rainer Goebel ◽  
...  

Severely motor-disabled patients, such as those suffering from the so-called "locked-in" syndrome, cannot communicate naturally. They may benefit from brain-computer interfaces (BCIs) that exploit brain signals for communication, thereby circumventing the muscular system. One BCI technique that has gained attention recently is functional near-infrared spectroscopy (fNIRS). Typically, fNIRS-based BCIs allow brain-based communication via voluntary modulation of brain activity through mental task performance, guided by visual or auditory instructions. While the development of fNIRS-BCIs has made great progress, their reliability across time and environments has rarely been assessed. In the present fNIRS-BCI study, we tested six healthy participants across three consecutive days using a straightforward four-choice fNIRS-BCI communication paradigm that allows answer encoding based on instructions in various sensory modalities. To encode an answer, participants performed a motor imagery task (mental drawing) in one of four time periods. Answer encoding was guided by either the visual, auditory, or tactile sensory modality. Two participants were tested outside the laboratory, in a cafeteria. Answers were decoded from the time course of the most informative fNIRS channel-by-chromophore combination. Across the three testing days, we obtained mean single- and multi-trial (joint analysis of four consecutive trials) accuracies of 62.5% and 85.19%, respectively. Multi-trial accuracies were 86.11% for visual, 80.56% for auditory, and 88.89% for tactile sensory encoding. The two participants who used the fNIRS-BCI in a cafeteria obtained the best single-trial (72.22% and 77.78%) and multi-trial accuracies (100% and 94.44%). Communication was reliable over the three recording sessions, with multi-trial accuracies of 86.11% on day 1, 86.11% on day 2, and 83.33% on day 3. To gauge the trade-off between the number of optodes and decoding accuracy, averaging across two and three promising fNIRS channels was compared to the one-channel approach. Multi-trial accuracy increased from 85.19% (one-channel approach) to 91.67% (two-/three-channel approach). In sum, the presented fNIRS-BCI yielded robust decoding results using three alternative sensory encoding modalities. Further, fNIRS-BCI communication was stable over the course of three consecutive days, even in a natural (social) environment. The developed fNIRS-BCI thereby demonstrated high flexibility, reliability and robustness, crucial requirements for future clinical applicability.
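A minimal sketch of the decoding rule described above: the chosen answer is the encoding window with the strongest response in the selected channel-by-chromophore trace. The sampling rate and window timings below are assumptions, not the study's parameters:

```python
# Sketch: four-choice decoding by picking the answer-encoding window with
# the largest mean signal in the best channel's HbO trace (simplified;
# sampling rate and window timings are assumptions).
import numpy as np

fs = 10                           # fNIRS sampling rate (Hz), assumed
trial = np.random.randn(fs * 60)  # placeholder: 60 s HbO trace, best channel

# Four candidate answer-encoding windows (start s, end s), assumed layout
windows = [(5, 15), (20, 30), (35, 45), (50, 60)]

means = [trial[int(a * fs):int(b * fs)].mean() for a, b in windows]
decoded_answer = int(np.argmax(means))  # 0..3: strongest-response window
print(f"Decoded answer: option {decoded_answer + 1}")
```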


2021 ◽  
Author(s):  
Benjamin Mathieu ◽  
Antonin Abillama ◽  
Malvina Martinez ◽  
Laurence Mouchnino ◽  
Jean Blouin

Previous studies have shown that the sensory modality used to identify the position of proprioceptive targets hidden from sight, but frequently viewed, influences the type of body representation employed for reaching them with the finger. The question then arises as to whether this observation also applies to proprioceptive targets that are hidden from sight and rarely, if ever, viewed. We used an established technique for pinpointing the type of body representation used for the spatial encoding of targets, which consists of assessing the effect of peripheral gaze fixation on pointing accuracy. More precisely, an exteroceptive, visually dependent body representation is thought to be used if gaze deviation induces a deviation of the pointing movement. Three light-emitting diodes (LEDs) were positioned at the participants' eye level at -25 deg, 0 deg and +25 deg with respect to the cyclopean eye. Without moving the head, the participant fixated the lit LED before the experimenter indicated one of three target head positions: the topmost point of the head (vertex) and two other points at the front and back of the head. These targets were cued either verbally or by touch. The goal of the subjects (n=27) was to reach the target with their index finger. We analysed the accuracy of movements directed to the topmost point of the head, a well-defined yet out-of-view anatomical point. Based on the brain's capacity to create visual representations of body areas that remain out of view, we hypothesized that the position of the vertex is encoded using an exteroceptive body representation, whether cued verbally or by touch. Results revealed that the pointing errors were biased in the direction opposite to gaze fixation for both verbally cued and touch-cued targets, suggesting the use of a vision-dependent exteroceptive body representation. The enhancement of visual body representations by sensorimotor processes was suggested by the greater pointing accuracy when the vertex was identified by tactile stimulation rather than verbal instruction. Moreover, we found in a control condition that participants were more accurate in indicating the position of their own vertex than the vertex of other people. This result supports the idea that sensorimotor experiences increase the spatial resolution of the exteroceptive body representation. Together, our results suggest that the position of rarely viewed body parts is spatially encoded by an exteroceptive body representation and that non-visual sensorimotor processes are involved in constructing this representation.
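A minimal sketch of the key comparison: testing whether signed pointing errors shift in the direction opposite to gaze fixation. The error values are simulated, not the study's data:

```python
# Sketch: paired comparison of signed lateral pointing errors between the
# two eccentric gaze conditions (simulated data; not the study's analysis).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Signed lateral pointing error (cm), one value per participant (n=27)
err_gaze_left  = rng.normal(+0.8, 1.0, 27)  # gaze -25 deg: bias rightward
err_gaze_right = rng.normal(-0.8, 1.0, 27)  # gaze +25 deg: bias leftward

# Paired test: does error direction differ between the gaze conditions?
t_stat, p_value = stats.ttest_rel(err_gaze_left, err_gaze_right)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A significant shift opposite to gaze would implicate a vision-dependent
# (exteroceptive) body representation.
```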


Insects ◽  
2021 ◽  
Vol 12 (11) ◽  
pp. 1043
Author(s):  
Juliette Ravaux ◽  
Julia Machon ◽  
Bruce Shillito ◽  
Dominique Barthélémy ◽  
Louis Amand ◽  
...  

Deep-sea species endemic to hydrothermal vents face the critical challenge of detecting active sites in a vast environment devoid of sunlight. This certainly requires specific sensory abilities, among which olfaction could be a relevant sensory modality, since chemical compounds in hydrothermal fluids or food odors could potentially serve as orientation cues. The temperature of the vent fluid might also be used to locate vent sites. The objective of this study was to observe two key olfaction-related behaviors in hydrothermal shrimp, which could provide insight into their olfactory capacities: (1) grooming behavior and (2) attraction to environmental cues (food odors and fluid markers). We designed experiments at both deep-sea and atmospheric pressure to assess the behavior of the vent shrimp Rimicaris exoculata and Mirocaris fortunata, as well as that of the coastal species Palaemon elegans and Palaemon serratus for comparison. Here, we show that hydrothermal shrimp groom their sensory appendages much as other crustaceans do, but this grooming does not remove the dense bacterial biofilm that covers their olfactory structures. These shrimp have previously been shown to possess functional sensory structures and to detect the environmental olfactory signals tested, yet we did not observe significant attraction behavior here. Only temperature, as a signature of vent fluids, clearly attracted the vent shrimp and is thus confirmed as a relevant signal for orientation in their environment.


2021 ◽  
Vol 12 ◽  
Author(s):  
LomaJohn T. Pendergraft ◽  
John M. Marzluff ◽  
Donna J. Cross ◽  
Toru Shimizu ◽  
Christopher N. Templeton

Social interaction among animals can occur in many contexts, such as during foraging. Our knowledge of the regions within the avian brain associated with social interaction is limited to regions activated by a single context or sensory modality. We used 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) to examine American crow (Corvus brachyrhynchos) brain activity in response to conditions associated with communal feeding. Using a paired approach, we exposed crows to a visual stimulus (the sight of food), an audio stimulus (the sound of conspecifics vocalizing while foraging), or both audio and visual stimuli presented simultaneously, and compared their brain activity with activity in response to a control stimulus (an empty stage). We found two regions, the nucleus taenia of the amygdala (TnA) and a medial portion of the caudal nidopallium, that showed increased activity in response to the multimodal combination of stimuli but not in response to either stimulus presented unimodally. We also found significantly increased activity in the lateral septum and medially within the nidopallium in response to both the audio-only and the combined audio/visual stimuli. We did not find any differences in activation in response to the visual stimulus by itself. We discuss how these regions may be involved in the processing of multimodal stimuli in the context of social interaction.

