visual input
Recently Published Documents


TOTAL DOCUMENTS: 589 (five years: 127)

H-INDEX: 51 (five years: 6)

Cognition ◽  
2022 ◽  
Vol 222 ◽  
pp. 104994
Author(s):  
Sarah Chabal ◽  
Sayuri Hayakawa ◽  
Viorica Marian
Keyword(s):  

2022 ◽  
Vol 13 (1) ◽  
Author(s):  
Mengwei Liu ◽  
Yujia Zhang ◽  
Jiachuang Wang ◽  
Nan Qin ◽  
Heng Yang ◽  
...  

Abstract Object recognition is among the basic survival skills of humans and other animals. To date, artificial intelligence (AI) assisted high-performance object recognition has been primarily visual, empowered by the rapid development of sensing and computational capabilities. Here, we report a tactile-olfactory sensing array, inspired by the natural sense-fusion system of the star-nosed mole, that permits real-time acquisition of the local topography, stiffness, and odor of a variety of objects without visual input. The tactile-olfactory information is processed by a bioinspired olfactory-tactile associated machine-learning algorithm, essentially mimicking the biological fusion procedures in the neural system of the star-nosed mole. Aiming at human identification during rescue missions in challenging environments such as dark or buried scenarios, our tactile-olfactory intelligent sensing system classified 11 typical objects with an accuracy of 96.9% in a simulated rescue scenario at a fire department test site. The tactile-olfactory bionic sensing system required no visual input and showed superior tolerance to environmental interference, highlighting its great potential for robust object recognition in difficult environments where other methods fall short.
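The abstract does not publish the details of the bioinspired fusion algorithm, but the general idea of combining two sensing modalities for classification can be illustrated with a minimal sketch. The feature dimensions, class names, and nearest-centroid classifier below are all hypothetical stand-ins, not the authors' method; the sketch shows only early (feature-level) fusion by concatenation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_samples(centre_tactile, centre_olfactory, n=20):
    """Simulate n objects of one class: tactile readings (e.g., topography,
    stiffness) and olfactory readings (gas-sensor responses) with noise."""
    tactile = centre_tactile + 0.1 * rng.standard_normal((n, 4))
    olfactory = centre_olfactory + 0.1 * rng.standard_normal((n, 3))
    # Early fusion: concatenate the two modalities into one feature vector.
    return np.hstack([tactile, olfactory])

# Hypothetical classes with distinct tactile and olfactory signatures.
classes = {
    "human": make_samples(np.array([1.0, 0.0, 0.0, 1.0]), np.array([0.8, 0.2, 0.1])),
    "rubber": make_samples(np.array([0.0, 1.0, 1.0, 0.0]), np.array([0.1, 0.9, 0.3])),
}
centroids = {label: samples.mean(axis=0) for label, samples in classes.items()}

def classify(sample):
    """Assign the fused feature vector to the nearest class centroid."""
    return min(centroids, key=lambda label: np.linalg.norm(sample - centroids[label]))
```

A late-fusion design would instead classify each modality separately and combine the decisions; the concatenation above is simply the shortest way to show both channels contributing to one prediction.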


2022 ◽  
pp. 1-30
Author(s):  
Maribel Montero Perez

Abstract This article discusses research into the role of audio-visual input for second language (L2) or foreign language learning. It also addresses questions related to the effectiveness of audio-visual input with different types of on-screen text such as subtitles (i.e., in learners’ first language) and captions (i.e., subtitles in the same language as the L2 audio) for L2 learning. The review discusses the following themes: (a) the characteristics of audio-visual input such as the multimodal nature of the input and vocabulary demands of video; (b) L2 learners’ comprehension of audio-visual input and the role of different types of on-screen text; (c) the effectiveness of audio-visual input and on-screen text for aspects of L2 learning including vocabulary, grammar, and listening; and (d) research into L2 learners’ use and perceptions of audio-visual input and on-screen text. The review ends with a consideration of implications for teaching practice and a conclusion that discusses the generalizability of current research in relation to suggestions for future research.


2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Gabriel Moisan ◽  
Ludovic Miramand ◽  
Hananeh Younesian ◽  
Katia Turcot

2021 ◽  
Author(s):  
Elvenna Majuddin

<p>This research project aims to extend the line of inquiry on pedagogical interventions intended to help second language (L2) learners make better progress in their mastery of multiword expressions (MWEs). Existing studies of these interventions reveal a propensity towards exclusivity in terms of input modality, item type, and learning condition. Firstly, there are far more MWE studies in the context of unimodal input (e.g., written input). It is only recently that the potential of audio-visual input (i.e., L2 viewing) has been explored for MWE learning. Secondly, previous studies have by and large focused on certain types of MWEs, such as collocations. While there is merit in focusing on a certain type of item, such studies do not represent the materials that L2 learners are often exposed to. Further, authentic videos contain diverse MWE types, providing a stronger reason to include more than one type of target item. Thirdly, many MWE interventions are investigated exclusively under one of the learning conditions, i.e., intentional or incidental learning conditions. Hulstijn’s (2001) criterion is adopted to distinguish the two learning conditions: the presence of a test announcement characterises the intentional learning condition. Due to this tendency towards a dichotomy of learning conditions, many factors known to facilitate MWE learning have been investigated under only one of the learning conditions. Two such factors are repetition and typographic enhancement. While repetition is well established as beneficial for MWE acquisition, evidence for this is mainly furnished by studies on incidental learning through written input. Therefore, the aim of this research project is to assess how repetition, operationalised as repeated viewing, influences MWE acquisition under both learning conditions.
Similarly, although typographic enhancement has been shown to draw learners’ attention and promote MWE uptake, this positive evidence is mostly observed in incidental learning studies. As such, whether typographically-enhanced MWEs are indeed learned better than unenhanced MWEs under intentional learning conditions is still under-researched. Importantly, whether typographic enhancement in captioned viewing leads to superior learning compared to normal captions is unknown. This is one of the aims of the research project, in which different caption conditions are created to explore their effectiveness in facilitating MWE learning. Of further interest is whether MWE learning under different caption conditions would modulate the effect of repetition. This is motivated by the assumption that typographic enhancement might eliminate the need for repetition.  To answer the research questions, two studies differentiated by the presence of test announcement were carried out. For both studies, ESL learners watched a video containing target MWEs under one of six conditions, which differed in terms of caption condition (no captions, normal captions or enhanced captions) and the number of viewing times (once or twice). MWE learning was assessed through tests that tap into form and meaning knowledge at the level of recall and recognition. Though not part of the research questions, the effects of caption condition and repetition on content comprehension were also assessed. The findings of both studies revealed trends that are consistent with literature on MWE learning and vocabulary learning in general. Firstly, both types of captions promoted better form recall knowledge compared to uncaptioned viewing. This was found to be true under both incidental and intentional learning conditions. Secondly, typographically enhanced captions led to better form recall compared to normal captions, but only under the intentional learning conditions. 
Under the incidental learning conditions, the effects of L2 viewing with typographically enhanced captions on form recall appeared to be similar to viewing with normal captions. The findings also suggest that the presence of typographically enhanced captions reduced the number of viewings needed to make incidental gains in form recall knowledge. In addition, while repeated viewing under all caption conditions led to better knowledge of form under the incidental learning conditions, the effect of repetition was not found under the intentional learning conditions. This aligns well with the supposition that fewer repetitions are needed for intentional learning. Thirdly, neither repetition nor caption condition had an effect on the acquisition of MWE meanings under either learning condition. Finally, vocabulary knowledge played a significant role in the amount of MWE learning that took place, especially when learners were not forewarned of MWE tests. Taken as a whole, the findings of this research project support the use of captions for L2 viewing as a way to foster MWE acquisition, at least at the level of form acquisition. The use of typographically enhanced captions, however, may have adverse effects on content comprehension. As such, the findings of this research project have meaningful implications concerning when typographically enhanced captions and repeated viewing should be used to optimise MWE learning through L2 viewing.</p>



Neuroreport ◽  
2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Yi Ran Wang ◽  
Benoit-Antoine Bacon ◽  
Maxime Maheu ◽  
François Champoux

2021 ◽  
Author(s):  
Andrey Chetverikov ◽  
Árni Kristjánsson

Prominent theories of perception suggest that the brain builds probabilistic models of the world, assessing the statistics of the visual input to inform this construction. However, the evidence for this idea is often based on simple impoverished stimuli, and the results have often been discarded as an illusion reflecting simple "summary statistics" of visual inputs. Here we show that the visual system represents probabilistic distributions of complex heterogeneous stimuli. Importantly, we show how these statistical representations are integrated with representations of other features and bound to locations, and can therefore serve as building blocks for object and scene processing. We uncover the organization of these representations at different spatial scales by showing how expectations for incoming features are biased by neighboring locations. We also show that there is not only a bias, but also a skew in the representations, arguing against accounts positing that probabilistic representations are discarded in favor of simplified summary statistics (e.g., mean and variance). In sum, our results reveal detailed probabilistic encoding of stimulus distributions, representations that are bound with other features and to particular locations.
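The abstract's contrast between full probabilistic representations and reduced "summary statistics" can be made concrete with a small numerical sketch. The skewed feature distribution below is hypothetical (not the authors' stimuli): the point is that a mean-plus-variance summary discards the skew, whereas a representation of the whole distribution retains it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stimulus feature (e.g., orientations in degrees) drawn from a
# skewed, non-Gaussian distribution.
orientations = rng.gamma(shape=2.0, scale=10.0, size=10_000)

# Summary-statistics account: keep only the mean and variance.
mean, var = orientations.mean(), orientations.var()

# Probabilistic-representation account: keep the full distribution
# (approximated here by a normalized histogram).
hist, edges = np.histogram(orientations, bins=30, density=True)

# Skewness is information the summary discards: a Gaussian with the same
# mean and variance would have skewness 0, but this distribution does not.
skew = ((orientations - mean) ** 3).mean() / var ** 1.5
```

Finding a reliable skew in observers' expectations, as the abstract reports, is evidence that more than the first two moments of the input distribution is encoded.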


2021 ◽  
Vol 10 (22) ◽  
pp. 5376
Author(s):  
Grzegorz Zieliński ◽  
Anna Matysik-Woźniak ◽  
Maria Rapa ◽  
Michał Baszczowski ◽  
Michał Ginszt ◽  
...  

This study aimed to analyze the effect of changes in visual input on the electromyographic patterns of masticatory and cervical spine muscles in subjects with myopia. After the inclusion criteria were applied, 50 subjects (18 males and 32 females) with myopia ranging from −0.5 to −5.75 diopters (D) were included in the study. Four muscle pairs were analyzed during resting and functional activity: the anterior part of the temporalis muscle (TA), the superficial part of the masseter muscle (MM), the anterior belly of the digastric muscle (DA), and the middle part of the sternocleidomastoid muscle belly (SCM). Statistical analysis showed a significant decrease in the functional indices (FCI) for the sternocleidomastoid muscle (FCI SCM R, FCI SCM L, FCI SCM total) during clenching in the intercuspal position with eyes closed compared to eyes open. During maximum mouth opening, a statistically significant increase in the functional opening index for the left temporalis muscle (FOI TA L) was observed. Within the activity index (AcI), there was a statistically significant decrease during clenching on dental cotton rollers with eyes closed compared to eyes open.

