language modality
Recently Published Documents


TOTAL DOCUMENTS: 55 (FIVE YEARS: 20)
H-INDEX: 10 (FIVE YEARS: 2)

2021
Author(s): Ilaria Berteletti, Sarah E. Kimbley, SaraBeth Sullivan, Lorna C. Quandt, Makoto Miyakoshi

In this study, we investigate the impact of experience with a signed language on the neurocognitive processes recruited by adults solving single-digit arithmetic problems. We use event-related potentials (ERPs) to identify the components that are modulated by problem size and operation type in Deaf American Sign Language (ASL) native signers as well as in hearing English-speaking participants. Participants were presented with subtraction and multiplication problems in a delayed verification task. Problem size was manipulated (small vs. large), with an additional extra-large subtraction condition to equate overall magnitude with the large multiplication problems. Results show comparable behavioral performance across groups and similar ERP dissociations between operation types. First, an early operation-type effect is observed between 180 ms and 210 ms post problem onset, suggesting that both groups differentiate attentionally between subtraction and multiplication problems in a similar way. Second, on the posterior-occipital component between 240 ms and 300 ms, only subtraction problems show a problem-size modulation in both groups, suggesting that only this operation recruits quantity-related processes. Control analyses rule out purely perceptual or magnitude-related explanations for this effect. These results are the first evidence that the two operations rely on distinct cognitive processes in the ASL native-signing population, and that this distinction is equivalent to the one observed in the English-speaking population.
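To make the time-window measures above concrete, here is a small sketch of how mean ERP amplitude in the reported windows (180–210 ms and 240–300 ms post problem onset) could be extracted. The epoch array, the time vector, and the usage lines are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def window_mean_amplitude(epochs, times, t_start, t_stop):
    """Mean amplitude in a post-onset time window (illustrative only).

    epochs: (n_trials, n_channels, n_samples) baseline-corrected ERP epochs
    times:  (n_samples,) sample times in seconds relative to problem onset
    Returns per-trial, per-channel mean amplitude within [t_start, t_stop).
    """
    mask = (times >= t_start) & (times < t_stop)
    return epochs[:, :, mask].mean(axis=-1)

# Hypothetical usage for the two windows named in the abstract: the early
# operation-type window (180-210 ms) and the posterior-occipital window
# (240-300 ms); `epochs` and `times` are assumed inputs, not the authors' data.
# early = window_mean_amplitude(epochs, times, 0.180, 0.210)
# late  = window_mean_amplitude(epochs, times, 0.240, 0.300)
```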


Author(s): Yangyang Guo, Liqiang Nie, Zhiyong Cheng, Feng Ji, Ji Zhang, ...

A number of studies point out that current Visual Question Answering (VQA) models are severely affected by the language prior problem, which refers to blindly making predictions based on language shortcuts. Some efforts have been devoted to overcoming this issue with carefully designed models. However, no prior work addresses it from the perspective of answer feature-space learning, despite the fact that existing VQA methods all cast VQA as a classification task. Motivated by this, we attempt to tackle the language prior problem from the viewpoint of feature-space learning. An adapted margin cosine loss is designed to properly discriminate between the frequent and the sparse answer feature spaces under each question type. In this way, the influence of the limited patterns within the language modality is largely reduced, mitigating the language priors. We apply this loss function to several baseline models and evaluate its effectiveness on two VQA-CP benchmarks. Experimental results demonstrate that our proposed adapted margin cosine loss enhances the baseline models with an absolute performance gain of 15% on average, strongly verifying the potential of tackling the language prior problem in VQA from the angle of answer feature-space learning.
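As a rough illustration of the idea above, the sketch below implements a margin cosine loss in plain NumPy. The CosFace-style formulation and the frequency-based margin adaptation (`answer_freq`, `base_margin`, `scale`) are assumptions chosen for illustration; they do not reproduce the exact per-question-type adaptation scheme used in the paper.

```python
import numpy as np

def adapted_margin_cosine_loss(features, class_weights, labels, answer_freq,
                               scale=16.0, base_margin=0.2):
    """Illustrative margin cosine loss with a frequency-adapted margin.

    features:      (N, D) fused question-image features
    class_weights: (C, D) one weight vector per candidate answer
    labels:        (N,)   indices of the gold answers
    answer_freq:   (C,)   answer frequencies under the question type
                   (hypothetical input used here to adapt the margin)
    """
    # L2-normalize features and class weights so logits become cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    cosine = f @ w.T                                           # (N, C)

    # Assumed adaptation: frequent answers receive a larger margin so that
    # sparse answers are not crowded out of the feature space.
    margins = base_margin * answer_freq / answer_freq.max()    # (C,)

    logits = scale * cosine
    rows = np.arange(len(labels))
    logits[rows, labels] -= scale * margins[labels]

    # Softmax cross-entropy over the margin-adjusted cosine logits.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()
```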


2021 ◽ Vol 72 (1) ◽ pp. 21–36
Author(s): Juraj Dolník

Abstract The author asks how the vertical and horizontal orientation of man manifests itself in his linguistic world. He follows the interpretations of the philosopher P. Sloterdijk and accepts the thesis that man's activities are governed by his vertical and horizontal needs. He emphasizes that one experiences a vertical need both as a need for access to objective truth and as a paradigmatic need, that is, the need to discover a pattern for one's own behavior and action. In contrast, horizontal needs motivate a person to concentrate on his own potential for self-realization. He then develops the idea that a person has both horizontal and vertical needs in the language modality as well, and demonstrates how these needs manifest themselves in language communication and in naming units, with special regard to proper names. The interpretation of language communication and naming units from the perspective of these needs leads the author to conclude that the fundamental governing factor of language use is truth.


Author(s): Kristen Secora, David Smith

Purpose: Language modality choices for deaf children continue to be an area of debate, but we argue that the dichotomy of "either/or" for language modality is outdated in a world that increasingly values bilingualism. Evidence is provided that a bilingual approach to language for deaf children is not contraindicated and that deaf children can learn both spoken and signed language given an adequate amount of exposure to each language. Conclusions: We note that exposure to signed language during the early phases of auditory evaluation and rehabilitation can reduce missed opportunities for language acquisition. We further suggest that professionals who work with these children and their families need to consider their own biases in how language modality choices are presented in order to provide the best possible support services.


Author(s): Haiyan Li, Dezhi Han

Visual Question Answering (VQA) is a multimodal research area related to Computer Vision (CV) and Natural Language Processing (NLP). How to better extract useful information from images and questions and give an accurate answer lies at the core of the VQA task. This paper presents a VQA model based on multimodal encoders and decoders with gate attention (MEDGA). Each encoder and decoder block in MEDGA applies not only self-attention and cross-modal attention but also gate attention, so that the model can simultaneously focus on inter-modal and intra-modal interactions within the visual and language modalities. In addition, MEDGA uses gate attention to filter out noise irrelevant to the answer and outputs attention results that are closely related to both visual and language features, which makes answer prediction more accurate. Experimental evaluations on the VQA 2.0 dataset and ablation experiments under different conditions demonstrate the effectiveness of MEDGA. MEDGA reaches an accuracy of 70.11% on the test-std split, exceeding many existing methods.
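To make the gating idea concrete, here is a minimal sketch of a gated cross-modal attention step in plain NumPy. The sigmoid-gate formulation and the parameters `w_gate` and `b_gate` are assumptions introduced for illustration; they do not reproduce the exact MEDGA block.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_cross_attention(queries, keys, values, w_gate, b_gate):
    """Cross-modal attention followed by a sigmoid gate (illustrative only).

    queries:      (Lq, D) e.g. question-token features
    keys, values: (Lk, D) e.g. image-region features
    w_gate (2*D, D) and b_gate (D,) are hypothetical gate parameters that a
    real model would learn; they are placeholders here.
    """
    d = queries.shape[-1]
    # Scaled dot-product attention of one modality over the other.
    scores = queries @ keys.T / np.sqrt(d)            # (Lq, Lk)
    attended = softmax(scores, axis=-1) @ values      # (Lq, D)

    # Gate attention (assumed form): a sigmoid gate computed from the query
    # and the attended result decides how much attended information passes,
    # suppressing attention output that is irrelevant to the answer.
    gate = sigmoid(np.concatenate([queries, attended], axis=-1) @ w_gate + b_gate)
    return gate * attended + (1.0 - gate) * queries   # (Lq, D)
```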


Author(s): Polina P. Dambueva

Introduction. Modality is present in any language at the level of words, phrases, and sentences. It is therefore not unexpected that the means of its expression can be observed at very different levels of language: modality is expressed at the syntactic, morphological, lexical, and phonetic levels, and morphological, syntactic, and other means are very often combined. Nevertheless, despite the obvious prevalence and universality of this phenomenon, the problem of modality has not yet received a full description, and the literature on modality, and on functional-modal words in particular, is very limited. Goals. The article raises the question of lexical units (nouns, adjectives, adverbs) that under certain conditions perform the functions of modal words, owing to their tendency to develop secondary uses, in particular the function of modal words that reinforce and emphasize a certain segment of a statement. Results. Functional-modal words, despite their differing formal-morphological features and primary lexical meanings, can, owing to the emotional-expressive connotation inherent in their semantics, express in a message, alongside modal words, the general grammatical meaning of modality: the speaker's (multi-aspect) attitude to the content of the statement or part of it. The functional-modal words considered do not differ from their counterparts within the same lexical-grammatical category, but the episodic loss of lexical meaning when they perform the function of a modal word draws attention to them as a phenomenon that may mark the beginning of the obscuring of lexical meaning and of subsequent derivational processes: lexicalization, transposition, and transition from one part of speech to another. The paper also touches upon the fundamental theoretical issue of whether emotional-expressive meanings belong to the category of modality; many researchers exclude them from this category on the grounds that they do not express a logical-rational qualification of the content of the utterance. The linguistic material, in the author's view, resists the division of statements into logical-rational and emotional-expressive, since in many cases, even within a single word, these two aspects appear in inseparable unity.


PLoS ONE ◽ 2020 ◽ Vol 15 (11) ◽ pp. e0236729
Author(s): Caroline Bogliotti, Hatice Aksen, Frédéric Isel

In psycholinguistics and clinical linguistics, the Sentence Repetition Task (SRT) is known to be a valuable tool to screen general language abilities in both spoken and signed languages. This task enables users to reliably and quickly assess linguistic abilities at different levels of linguistic analysis such as phonology, morphology, lexicon, and syntax. To evaluate sign language proficiency in deaf children using French Sign Language (LSF), we designed a new SRT comprising 20 LSF sentences. The task was administered to a cohort of 62 children, 34 native signers (6;09–12 years) and 28 non-native signers (6;08–12;08 years), in order to study their general linguistic development as a function of age of sign language acquisition (AOA) and chronological age (CA). Previously, a group of 10 adult native signers was also evaluated with this task. As expected, our results showed a significant effect of AOA, indicating that the native signers repeated more signs and were more accurate than non-native signers. A similar pattern of results was found for CA. Furthermore, native signers made fewer phonological errors (i.e., handshape, movement, and location) than non-native signers. Finally, as shown in previous sign language studies, handshape and movement proved to be the most difficult parameters to master regardless of AOA and CA. Taken together, our findings support the assumption that AOA is a crucial factor in the development of phonological skills regardless of language modality (spoken vs. signed). This study thus constitutes a first step toward a theoretical description of the developmental trajectory in LSF, a hitherto understudied language.


2020 ◽ Vol 41 (4) ◽ pp. 817–845
Author(s): Rama Novogrodsky, Natalia Meir

Abstract The current study described the development of the MacArthur–Bates Communicative Development Inventory (CDI) for Israeli Sign Language (ISL) and investigated the effects of age, sign iconicity, and sign frequency on the lexical acquisition of bimodal-bilingual toddlers acquiring ISL. Previous findings provide inconclusive evidence on the role of sign iconicity (the relationship between form and meaning) and sign frequency (how often a word/sign is used in the language) in the acquisition of signs. The ISL-CDI consisted of 563 video clips. Iconicity ratings from 41 sign-naïve Hebrew-speaking adults (Study 1A) and sign frequency ratings from 19 native ISL adult signers (Study 1B) were collected. ISL vocabulary was evaluated in 34 toddlers, all native signers (Study 2). Results indicated significant effects of age, strong correlations between parental ISL-CDI ratings and ISL vocabulary size even when age was controlled for, and strong correlations between naturalistic data and ISL-CDI scores, supporting the validity of the ISL-CDI. Moreover, the results revealed effects of iconicity, frequency, and interactions between age and the iconicity and frequency factors, suggesting that both iconicity and frequency are modulated by age. The findings contribute to the field of sign language acquisition and to our understanding of the potential factors affecting human language acquisition beyond language modality.

