shallow processing
Recently Published Documents

TOTAL DOCUMENTS: 35 (FIVE YEARS: 6)
H-INDEX: 7 (FIVE YEARS: 0)
2021
Author(s): Pei Q Liu, Louise Connell, Dermot Lynott

What shapes the conceptual representation during metaphor processing? In this paper, we investigate this question by studying the roles of both embodied simulation and linguistic distributional patterns. Researchers have proposed that the linguistic component is shallow and speedy, ideal as a shortcut for constructing crude representations and conserving valuable cognitive resources. Thus, during metaphor processing, people should rely on the linguistic component more when the goal of processing is shallow and the time available is limited. Here, we present two pre-registered experiments that aimed to evaluate this hypothesis. The results supported the role of simulation in metaphor processing, but not the linguistic shortcut hypothesis: the effect of linguistic distributional frequency increased as people had more time to process the metaphors, and as they engaged in deep processing. Furthermore, during shallow processing, processing was easier when the embodied and linguistic components supported each other. These findings indicate a complex interaction between the embodied and linguistic components during metaphor processing.
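As a rough illustration of what a linguistic distributional predictor for metaphor items might look like, the sketch below scores topic–vehicle pairs by the similarity of pre-trained word vectors. The gensim model and the example items are assumptions for illustration, not the materials or tooling used in these experiments.

```python
# Illustrative sketch only: one way to derive a linguistic distributional
# predictor for metaphor items from pre-trained word vectors.
# The model name and example items are assumptions, not taken from the paper.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pre-trained distributional vectors

metaphor_items = [("argument", "war"), ("time", "money"), ("idea", "food")]

for topic, vehicle in metaphor_items:
    # Cosine similarity as a crude proxy for how strongly the two words
    # pattern together in linguistic experience.
    strength = vectors.similarity(topic, vehicle)
    print(f"{topic} IS {vehicle}: distributional strength = {strength:.3f}")
```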


2021
Author(s): Abigail Marie Dester Mundorf, Mitchell Uitvlugt, Karl Healey

Memory tends to be better when items are processed for their meaning (deep processing) rather than their perceptual features (shallow processing). This levels of processing (LOP) effect is well replicated and has been applied in many settings, but the mechanisms involved are still not well understood. The temporal contiguity effect (TCE), the finding that recalling one event often triggers recall of another event experienced nearby in time, also predicts memory performance. This effect has given rise to several competing theories with specific contiguity-generating mechanisms related to how items are processed. Therefore, studying how LOP and the TCE interact may shed light on the mechanisms underlying both effects. However, it is unknown how LOP and the TCE interact, and the various theories make differing predictions. In this preregistered study, we tested the predictions of three theoretical explanations: accounts which assume temporal information is automatically encoded, accounts based on a trade-off between item and order information, and accounts which emphasize the importance of strategic control processes. Participants completed an immediate free recall task in which they engaged in deep processing, shallow processing, or no additional task while studying each word. Recall and the TCE were highest for no-task lists and greater for deep than shallow processing. Our results support theories which assume temporal associations are automatically encoded and those which emphasize strategic control processes; both perspectives should be considered in theory development. These findings also suggest that temporal information may contribute to better recall under deeper processing, with implications for determining which situations benefit from deep processing.
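For readers unfamiliar with how the temporal contiguity effect is usually quantified, the sketch below computes a bare-bones lag-CRP (conditional response probability as a function of study-position lag) from recall sequences. The toy data and this minimal implementation are illustrative assumptions, not the paper's analysis code.

```python
# Minimal lag-CRP: probability of recalling an item at each study-position lag,
# conditional on that transition being available (not yet recalled).
from collections import defaultdict

def lag_crp(recall_sequences, list_length):
    actual = defaultdict(int)    # transitions actually made at each lag
    possible = defaultdict(int)  # transitions that could have been made
    for recalls in recall_sequences:
        recalled = set()
        for i in range(len(recalls) - 1):
            prev, nxt = recalls[i], recalls[i + 1]
            recalled.add(prev)
            # every not-yet-recalled study position was an available transition
            for pos in range(1, list_length + 1):
                if pos not in recalled:
                    possible[pos - prev] += 1
            actual[nxt - prev] += 1
    return {lag: actual[lag] / possible[lag]
            for lag in sorted(possible) if possible[lag] > 0}

# Toy recall sequences (serial positions, in output order) for two lists
print(lag_crp([[3, 4, 5, 1], [7, 8, 6, 2]], list_length=8))
```

A strong peak at lags of ±1 in the resulting curve is what the abstract refers to as the TCE.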


2021, Vol 6
Author(s): Edoardo Lombardi Vallauri

The paper shows that implicit strategies for questionable contents are frequent in persuasive texts, as compared to texts with other purposes. It proposes that the persuasive and manipulative effectiveness of introducing questionable contents implicitly can be explained through established cognitive patterns, namely that what is felt by addressees as information coming (also) from them and not (only) from the source of the message is less likely to be challenged. These assumptions are verified by showing examples of “implicitness of evidential responsibility” (essentially, presuppositions and topics) as triggers of lesser attention in advertising and propaganda. A possible evolutionary path is sketched for three different pragmatic functions of presuppositions, leading to their availability for manipulation. The distraction effect of presuppositions and topics is also explained in relation to recent developments of Relevance Theory. Behavioral evidence that presuppositions and topics induce low epistemic vigilance and shallow processing is compared to recent neurophysiological evidence which does not confirm this assumption, showing greater processing costs for presuppositions and topics as compared to assertions and foci. A proposal is put forward to reconcile these apparently contrasting data and to explain why they may not be in contrast after all. In part because of the rapid processing constraints of natural language (a “Now-or-Never” processing bottleneck), effort devoted to the accommodation of presupposed or topicalized new contents may drain resources from concurrent epistemic vigilance and critical evaluation, resulting in shallower processing.


2020, Vol 9 (1), pp. 95-123
Author(s): Edoardo Lombardi Vallauri, Laura Baranzini, Doriana Cimmino, Federica Cominetti, Claudia Coppola, ...

The paper provides evidence that linguistic strategies based on the implicit encoding of information are effective means of deceptive argumentation and manipulation, as they can ease the acceptance of doubtful arguments by distracting addressees’ attention and by encouraging shallow processing of doubtful contents. The persuasive and manipulative functions of these rhetorical strategies are observed in commercial and political propaganda. Linguistic implicit strategies are divided into two main categories: the implicit encoding of content, mainly represented by implicatures and vague expressions, and the implicit encoding of responsibility, mainly represented by presuppositions and topics. The paper also suggests that the amount of persuasive implicitness contained in texts can be measured. For this purpose, a measuring model is proposed and applied to some Italian political speeches. The possible social usefulness of this approach is shown by sketching the operation of a website in which the measuring model is used to monitor contemporary political speeches.
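As a toy illustration of what a per-text implicitness measure could look like, the sketch below counts annotated implicit strategies, split into the two categories the abstract names, and normalizes by text length. The annotation format, weighting, and formula are assumptions for illustration only; they are not the measuring model proposed by the authors.

```python
# Toy implicitness score for an annotated speech: every detail here (weights,
# normalization, annotation format) is an illustrative assumption, not the
# authors' measuring model.
from dataclasses import dataclass

@dataclass
class Annotation:
    kind: str  # "implicature", "vagueness", "presupposition", "topic"

# Implicit content vs. implicit responsibility, following the abstract's split.
CONTENT = {"implicature", "vagueness"}
RESPONSIBILITY = {"presupposition", "topic"}

def implicitness_score(annotations, n_words):
    content = sum(a.kind in CONTENT for a in annotations)
    responsibility = sum(a.kind in RESPONSIBILITY for a in annotations)
    # density of implicit strategies per 100 words of the speech
    return 100 * (content + responsibility) / n_words

speech = [Annotation("presupposition"), Annotation("topic"), Annotation("vagueness")]
print(implicitness_score(speech, n_words=250))
```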


2019
Author(s): David Soto, Usman Ayub Sheikh, Ning Mei, Roberto Santana

How the brain representation of conceptual knowledge varies as a function of processing goals, strategies and task factors remains a key unresolved question in cognitive neuroscience. Here we asked how the brain representation of semantic categories is shaped by the depth of processing during mental simulation. Participants were presented with visual words during functional magnetic resonance imaging (fMRI). During shallow processing, participants had to read the items. During deep processing, they had to mentally simulate the features associated with the words. Multivariate classification, informational connectivity and encoding models were used to reveal how the depth of processing determines the brain representation of word meaning. Decoding accuracy in putative substrates of the semantic network was enhanced when the depth of processing was high, and the brain representations were more generalizable in semantic space relative to shallow processing contexts. This pattern was observed even in association areas in inferior frontal and parietal cortex. Deep information processing during mental simulation also increased the informational connectivity within key substrates of the semantic network. To further examine the properties of the words encoded in brain activity, we compared computer vision models, associated with the image referents of the words, with word embedding models. Computer vision models explained more variance of the brain responses across multiple areas of the semantic network. These results indicate that the brain representation of word meaning is highly malleable by the depth of processing imposed by the task, relies on access to visual representations, and is highly distributed, including prefrontal areas previously implicated in semantic control.
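A minimal sketch of the kind of cross-validated multivariate decoding described above, run separately on deep- and shallow-processing trials. The data shapes, estimator, and cross-validation scheme are illustrative assumptions rather than the study's actual pipeline.

```python
# Cross-validated decoding of semantic category from trial-wise ROI patterns,
# computed separately per processing condition (sketch with simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)

def decode(patterns, categories):
    """Mean cross-validated decoding accuracy for one ROI / condition."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(clf, patterns, categories, cv=cv).mean()

# Simulated stand-ins for trial-wise ROI patterns (trials x voxels)
deep_X = rng.normal(size=(80, 200))
shallow_X = rng.normal(size=(80, 200))
labels = np.repeat([0, 1], 40)  # two semantic categories

print("deep:", decode(deep_X, labels), "shallow:", decode(shallow_X, labels))
```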


2019
Author(s): Usman Ayub Sheikh, Manuel Carreiras, David Soto

The neurocognitive mechanisms that support the generalization of semantic representations across different languages remain to be determined. Current psycholinguistic models propose that semantic representations are likely to overlap across languages, although there is also evidence to the contrary. Neuroimaging studies have observed that brain activity patterns associated with the meaning of words may be similar across languages. However, the factors that mediate cross-language generalization of semantic representations are not known. Here we identify a key factor: the depth of processing. Human participants were asked to process visual words as they underwent functional MRI. We found that, during shallow processing, multivariate pattern classifiers could decode the word semantic category within each language in putative substrates of the semantic network, but there was no evidence of cross-language generalization in the shallow processing context. By contrast, when the depth of processing was higher, significant cross-language generalization was observed in several regions, including inferior parietal, ventromedial, lateral temporal, and inferior frontal cortex. These results support the distributed-only view of semantic processing and favour models based on multiple semantic hubs. The results also have ramifications for psycholinguistic models of word processing such as the BIA+, which by default assumes non-selective access to both native and second languages.
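The cross-language generalization test can be illustrated with a small sketch: train a semantic-category classifier on trials from one language and test it on the other. The simulated data and estimator are assumptions, not the study's pipeline.

```python
# Cross-language generalization sketch: fit on language 1, score on language 2,
# and vice versa, using simulated stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Simulated ROI patterns (trials x voxels) and category labels per language
X_l1, X_l2 = rng.normal(size=(60, 150)), rng.normal(size=(60, 150))
y_l1 = y_l2 = np.repeat([0, 1], 30)  # e.g., two semantic categories of words

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# At-chance cross-language accuracy suggests language-specific representations;
# above-chance accuracy suggests representations shared across languages.
acc_12 = clf.fit(X_l1, y_l1).score(X_l2, y_l2)
acc_21 = clf.fit(X_l2, y_l2).score(X_l1, y_l1)
print(f"L1->L2: {acc_12:.2f}, L2->L1: {acc_21:.2f}")
```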


2018, Vol 71 (1), pp. 113-121
Author(s): Anne E Cook, Erinn K Walsh, Margaret A A Bills, John C Kircher, Edward J O’Brien

Several theorists have argued that readers fail to detect semantic anomalies during reading, and that these effects are indicative of “shallow processing” behaviours. Previous studies of semantic anomalies such as the Moses illusion have focused primarily on explicit detection tasks. In the present study, we examined participants’ eye movements as they read true/false statements that were non-anomalous, or contained a semantic anomaly that was either high- or low-related to the correct information. Analyses of reading behaviours revealed that only low-related detected anomalies resulted in initial processing difficulty, but both detected and undetected anomalies, regardless of whether they were high- or low-related, resulted in delayed processing difficulty. The results extend previous findings on semantic anomalies and are discussed in terms of the RI-Val model of text processing.
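As background on how initial versus delayed processing difficulty is typically separated in reading-time data, the sketch below computes two standard region-based eye-movement measures, first-pass time and go-past (regression-path) time, from a chronological fixation list. Whether the study used exactly these measures, and the toy data, are assumptions for illustration.

```python
# First-pass time: fixations on the target region before the eyes first leave it.
# Go-past time: all fixations from first entering the region, including
# regressions to earlier text, until the eyes move past it to the right.
def first_pass_and_go_past(fixations, region_start, region_end):
    """fixations: chronological list of (word_index, duration_ms)."""
    first_pass = go_past = 0
    entered = left_region = passed_right = False
    for word, dur in fixations:
        in_region = region_start <= word <= region_end
        if not entered:
            if in_region:
                entered = True
            else:
                continue
        if passed_right:
            break
        if word > region_end:
            passed_right = True          # ends the go-past period
            continue
        if in_region and not left_region:
            first_pass += dur            # consecutive first-pass fixations
        if not in_region:
            left_region = True           # first pass ends once the eyes leave
        go_past += dur                   # region fixations + leftward regressions
    return first_pass, go_past

# Toy fixation record with one regression before moving past the region
fixes = [(2, 200), (5, 250), (6, 180), (3, 150), (5, 220), (8, 300)]
print(first_pass_and_go_past(fixes, region_start=5, region_end=6))  # (430, 800)
```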


2017, Vol 13 (1), pp. 129-142
Author(s): Sau Hou Chang

The purpose of the present study was to investigate the effects of test trial and processing level on immediate and delayed retention. A 2 × 2 × 2 mixed ANOVA was used, with two between-subject factors of test trial (single test, repeated test) and processing level (shallow, deep), and one within-subject factor of final recall (immediate, delayed). Seventy-six college students were randomly assigned first to the single test trial (studying the stimulus words three times and taking one free-recall test) or the repeated test trial (studying the stimulus words once and taking three consecutive free-recall tests), and then to the shallow processing level (judging whether each stimulus word was presented in capital or small letters) or the deep processing level (judging whether each stimulus word belonged to a particular category), to study forty stimulus words. The immediate test was administered five minutes after the trials, whereas the delayed test was administered one week later. Results showed that the single test trial group recalled more words than the repeated test trial group on the immediate final free-recall test, and participants in deep processing performed better than those in shallow processing in both immediate and delayed retention. However, the dominance of the single test trial and deep processing did not hold in delayed retention. Additional study trials did not further enhance the delayed retention of words encoded in deep processing, but did enhance the delayed retention of words encoded in shallow processing.
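The reported design can be approximated in code: the sketch below builds a long-format dataset with the two between-subject factors and the within-subject final-recall factor, then fits a linear mixed model as an analogue of the 2 × 2 × 2 mixed ANOVA. The column names, simulated data, and the choice of a mixed model (rather than the exact ANOVA reported) are assumptions for illustration.

```python
# Linear mixed-model analogue of the 2 x 2 x 2 mixed design, with a random
# intercept per participant capturing the repeated (within-subject) factor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated long-format data: one row per participant per final test.
rows = []
for subj in range(76):
    trial = "single" if subj % 2 == 0 else "repeated"
    level = "shallow" if (subj // 2) % 2 == 0 else "deep"
    for delay in ("immediate", "delayed"):
        rows.append(dict(subject=subj, trial=trial, level=level,
                         delay=delay, recall=int(rng.integers(5, 30))))
data = pd.DataFrame(rows)

model = smf.mixedlm("recall ~ trial * level * delay", data,
                    groups=data["subject"])
print(model.fit().summary())
```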


AI Magazine, 2016, Vol 37 (1), pp. 97-101
Author(s): Douglas B. Lenat

Turing’s Imitation Game was a brilliant early proposed test of machine intelligence, one that is still compelling today, even though, with the hindsight of all that we’ve learned in the intervening 65 years, we can see the flaws in his original test. And our field needs a good “Is it AI yet?” test more than ever today, with so many of us spending our research time looking under the “shallow processing of big data” lamppost. If Turing were alive today, what sort of test might he propose?

