Semantics for a Module

Author(s):  
Roberto G. de Almeida
Ernie Lepore

Fodor’s The Modularity of Mind (1983) and subsequent work propose a principled distinction between perceptual computations and background knowledge. The chapter argues that language input analyzers produce a minimally—and highly constrained—context-sensitive propositional representation of the sentence, built up from sentence constituents. Compatible with the original Modularity story, it thus takes the output of sentence perception to be a “shallow” representation—though a semantic one. The empirical data discussed bear on alleged cases of sentence indeterminacy: how such cases might be assigned (shallow) semantic representations, how they interact with context in highly regulated ways, and whether and how they can be enriched. The chapter proposes a semantic level of representation that serves as the output of the module and as input to other systems of interpretation, arguing for a form of modularity or encapsulation that is minimally context-sensitive provided that the information from context is itself determined by linguistic principles.

2007
Vol 19 (12)
pp. 2005-2018
Author(s):  
Barry Giesbrecht
Jocelyn L. Sy
James C. Elliott

When two masked targets are presented in rapid succession, correct identification of the first target (T1) leads to a dramatic impairment in identification of the second target (T2). Several studies of this so-called attentional blink (AB) phenomenon have provided behavioral and physiological evidence that T2 is processed to the semantic level, despite the profound impairment in T2 report. These findings have been interpreted as an example of perception without awareness and have been explained by models that assume that T2 is processed extensively even though it does not gain access into consciousness. The present study reports two experiments that test this assumption. In Experiment 1, the perceptual load of the T1 task was manipulated and T2 was a word that was either related or unrelated to a context word presented at the beginning of each trial. The event-related potential (ERP) technique was used to isolate the context-sensitive N400 component evoked by the T2 word. The ERP data revealed that there was a complete suppression of the N400 during the AB when the perceptual load was high, but not when perceptual load was low. Experiment 2 replicated the high-load condition of Experiment 1 while ruling out two alternative explanations for the reduction of the N400 during the AB. The results of both experiments demonstrate that word meanings are not always accessed during the AB and are consistent with studies that suggest that attention can act to select information at multiple stages of processing depending on concurrent task demands.
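The load-dependent suppression of the N400 reported here lends itself to a simple illustration of how the context-sensitive N400 effect is typically quantified: average the epochs time-locked to T2 separately for related and unrelated trials and compare mean amplitudes in a window around 400 ms. The sketch below simulates hypothetical single-trial data rather than the authors' EEG recordings; the sampling rate, time window, and amplitude values are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                 # assumed sampling rate in Hz
times = np.arange(-0.2, 0.8, 1 / fs)     # epoch from -200 ms to 800 ms around T2 onset

def simulate_epochs(n_trials, n400_amp):
    """Simulate single-trial epochs with a negative deflection peaking near 400 ms."""
    noise = rng.normal(0, 2.0, size=(n_trials, times.size))
    n400 = n400_amp * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    return noise - n400                  # larger amplitude means a more negative voltage

# Hypothetical data: unrelated T2 words evoke a larger N400 than related ones.
related = simulate_epochs(40, n400_amp=1.0)
unrelated = simulate_epochs(40, n400_amp=3.0)

# Average across trials and compare mean amplitude in the 300-500 ms window.
window = (times >= 0.3) & (times <= 0.5)
n400_effect = unrelated[:, window].mean() - related[:, window].mean()
print(f"N400 effect (unrelated - related): {n400_effect:.2f} microvolts")
```

On this logic, a "complete suppression of the N400 during the AB" corresponds to the related and unrelated averages no longer differing in that window when T1 perceptual load is high.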


PARADIGMI
2009
pp. 83-100
Author(s):  
Alessandro Lenci

The aim of this paper is to analyse the analogy of the lexicon with a space defined by words, which is common to a number of computational models of meaning in cognitive science. This can be regarded as a case of constitutive scientific metaphor in the sense of Boyd (1979) and is grounded in the so-called Distributional Hypothesis, stating that the semantic similarity between two words is a function of the similarity of the linguistic contexts in which they typically co-occur. The meaning of words is represented in terms of their topological relations in a high-dimensional space, defined by their combinatorial behaviour in texts. A key consequence of adopting the metaphor of word spaces is that semantic representations are modelled as highly context-sensitive entities. Moreover, word space models promise to open interesting perspectives for the study of metaphorical uses in language, as well as of lexical dynamics in general.
Keywords: Cognitive sciences, Computational linguistics, Distributional models of the lexicon, Metaphor, Semantics, Word spaces.
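The Distributional Hypothesis admits a compact illustration: represent each word by its co-occurrence counts with other words and measure semantic similarity as the cosine between those vectors. The sketch below is a minimal toy word space, not any specific system discussed in the paper; the corpus, the sentence-sized context window, and the raw-count weighting are all simplifying assumptions.

```python
from collections import Counter, defaultdict
from itertools import combinations
import math

# Toy corpus: each sentence serves as one "linguistic context".
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

# Count co-occurrences of word pairs within the same sentence.
cooc = defaultdict(Counter)
for sentence in corpus:
    for w1, w2 in combinations(set(sentence.split()), 2):
        cooc[w1][w2] += 1
        cooc[w2][w1] += 1

vocab = sorted(cooc)

def vector(word):
    # A word is represented by its row of co-occurrence counts over the vocabulary.
    return [cooc[word][other] for other in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Words that occur in similar contexts end up close in the space.
print(cosine(vector("cat"), vector("dog")))      # relatively high
print(cosine(vector("cat"), vector("cheese")))   # relatively low
```

In this toy space, "cat" and "dog" come out more similar than "cat" and "cheese" because their contexts overlap more, which is exactly the topological notion of meaning that the word-space metaphor relies on.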


2016
Vol 4
pp. 155-168
Author(s):  
Kyle Richardson
Jonas Kuhn

We introduce a new approach to training a semantic parser that uses textual entailment judgements as supervision. These judgements are based on high-level inferences about whether the meaning of one sentence follows from another. When applied to an existing semantic parsing task, they prove to be a useful tool for revealing semantic distinctions and background knowledge not captured in the target representations. This information is used to improve the quality of the semantic representations being learned and to acquire generic knowledge for reasoning. Experiments are done on the benchmark Sportscaster corpus (Chen and Mooney, 2008), and a novel RTE-inspired inference dataset is introduced. On this new dataset, our method substantially outperforms several strong baselines. Separately, we obtain state-of-the-art results on the original Sportscaster semantic parsing task.
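One way to picture how entailment judgements can supervise a parser is as a filter over candidate meaning representations: a pair of logical forms is acceptable only if it reproduces the annotated entailment relation between the two sentences. The sketch below is a deliberately simplified illustration of that idea, not the authors' system; the sentences, predicates, and toy entailment check are all invented for the example.

```python
from itertools import product

# Candidate meaning representations (logical forms) for two sentences, as an
# ambiguous parser might propose them early in training. All names are invented.
candidates = {
    "purple7 kicks to pink3": [
        {("pass", "purple7", "pink3")},
        {("kick", "purple7")},
    ],
    "purple7 passes the ball": [
        {("pass", "purple7")},
        {("turnover", "purple7")},
    ],
}

def entails(premise, hypothesis):
    # Toy entailment check: every fact in the hypothesis must follow from the
    # premise; here pass(x, y) is taken to entail pass(x).
    derived = set(premise) | {(pred, arg1) for (pred, arg1, *rest) in premise}
    return hypothesis <= derived

# Supervision signal: annotators judged that the first sentence entails the second.
premise_sent, hypothesis_sent, label = (
    "purple7 kicks to pink3", "purple7 passes the ball", True)

# Keep only candidate pairs whose logical forms reproduce the judged relation.
consistent = [
    (p, h)
    for p, h in product(candidates[premise_sent], candidates[hypothesis_sent])
    if entails(p, h) == label
]
print(consistent)
```

Only the candidate pair in which "kicks to" is analysed as a two-argument pass survives the judgement, which is the kind of semantic distinction and background knowledge that this form of supervision is meant to reveal.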


2021
pp. 437-456
Author(s):  
Karen Frost-Arnold

In this chapter, Karen Frost-Arnold provides a close analysis of the epistemological challenges posed by context collapse in online environments and argues that virtue epistemology provides a helpful normative framework for addressing some of these problems. “Context collapse” is the blurring or merging of multiple contexts or audiences into one. Frost-Arnold identifies at least three epistemic challenges posed by context collapse. First, context collapse facilitates online harassment, which causes epistemic harm by decreasing the diversity of epistemic communities. Second, context collapse threatens the integrity of marginalized epistemic communities in which some types of true beliefs flourish. Third, context collapse promotes misunderstanding, as understanding relies on background knowledge which, in turn, is often context sensitive. Frost-Arnold then argues that we can cultivate and promote the epistemic virtues of trustworthiness and discretion in order to address some of these problems.


1988
Vol 63 (3)
pp. 811-818
Author(s):  
James L. De Boy

Few empirical data exist on the metacognitive skills of at-risk college students. This study was undertaken to identify differences between 45 at-risk and 30 nonrisk college freshmen who read a passage from the Test of Literal and Inferential Reading and answered six comprehension questions (three literal and three inferential). All 75 students then answered a series of metacognitive questions. Nonrisk students outperformed their at-risk peers in accuracy of prediction of test performance and in justification of accuracy for their answers to inferential questions. Nonrisk students cited background knowledge significantly more often than at-risk students when asked to justify their answers to inferential questions answered correctly.


2021
Vol 3 (2)
pp. 181-214
Author(s):  
Robert Frank
Tim Hunter

Aravind Joshi famously hypothesized that natural language syntax was characterized (in part) by mildly context-sensitive generative power. Subsequent work in mathematical linguistics over the past three decades has revealed surprising convergences among a wide variety of grammatical formalisms, all of which can be said to be mildly context-sensitive. But this convergence is not absolute. Not all mildly context-sensitive formalisms can generate exactly the same stringsets (i.e. they are not all weakly equivalent), and even when two formalisms can both generate a certain stringset, there might be differences in the structural descriptions they use to do so. It has generally been difficult to find cases where such differences in structural descriptions can be pinpointed in a way that allows linguistic considerations to be brought to bear on choices between formalisms, but in this paper we present one such case. The empirical pattern of interest involves wh-movement dependencies in languages that do not enforce the wh-island constraint. This pattern draws attention to two related dimensions of variation among formalisms: whether structures grow monotonically from one end to another, and whether structure-building operations are conditioned by only a finite amount of derivational state. From this perspective, we show that one class of formalisms generates the crucial empirical pattern using structures that align with mainstream syntactic analysis, and another class can only generate that same string pattern in a linguistically unnatural way. This is particularly interesting given that (i) the structurally-inadequate formalisms are strictly more powerful than the structurally-adequate ones from the perspective of weak generative capacity, and (ii) the formalisms based on derivational operations that appear on the surface to align most closely with the mechanisms adopted in contemporary work in syntactic theory (merge and move) are the ones that fail to align with the analyses proposed in that work when the phenomenon is considered in full generality.
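For readers less familiar with the terminology, "mildly context-sensitive" refers to generative power just beyond context-free: enough to capture patterns such as crossing dependencies or the string set a^n b^n c^n, while retaining polynomial parsing and constant growth. The snippet below is a textbook-style illustration of such a pattern, not anything from the paper itself; it simply recognizes strings of the form a^n b^n c^n, which no context-free grammar generates but which mildly context-sensitive formalisms such as TAG handle.

```python
import re

def in_anbncn(s: str) -> bool:
    """Recognize {a^n b^n c^n : n >= 0}: all a's, then b's, then c's, in equal numbers."""
    m = re.fullmatch(r"(a*)(b*)(c*)", s)
    return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))

assert in_anbncn("") and in_anbncn("abc") and in_anbncn("aabbcc")
assert not in_anbncn("aabbc") and not in_anbncn("abab")
```

Weak equivalence concerns only which such string sets a formalism can generate; the paper's point is that formalisms agreeing at this level can still differ sharply in the structural descriptions they assign.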


Author(s):  
Debi A. LaPlante
Heather M. Gray
Pat M. Williams
Sarah E. Nelson

Aims: To discuss and review the latest research related to gambling expansion. Method: We completed a literature review and empirical comparison of peer-reviewed findings related to gambling expansion and subsequent gambling-related changes among the population. Results: Although gambling expansion is associated with changes in gambling and gambling-related problems, empirical studies suggest that these effects are mixed and the available literature is limited. For example, the peer-reviewed literature suggests that most post-expansion gambling outcomes (i.e., 22 of 34 possible expansion outcomes; 64.7%) indicate no observable change or a decrease in gambling outcomes, and a minority (i.e., 12 of 34 possible expansion outcomes; 35.3%) indicate an increase in gambling outcomes. Conclusions: Empirical data related to gambling expansion suggest that its effects are more complex than frequently considered; however, evidence-based intervention might help prepare jurisdictions to deal with potential consequences. Jurisdictions can develop and evaluate responsible gambling programs to try to mitigate the impacts of expanded gambling.


2014
Vol 25 (4)
pp. 233-238
Author(s):  
Martin Peper
Simone N. Loeffler

Current ambulatory technologies are highly relevant for neuropsychological assessment and treatment as they provide a gateway to real-life data. Ambulatory assessment of cognitive complaints, skills and emotional states in natural contexts provides information with greater ecological validity than traditional assessment approaches. This issue presents an overview of current technological and methodological innovations, opportunities, problems and limitations of these methods designed for the context-sensitive measurement of cognitive, emotional and behavioral function. The usefulness of selected ambulatory approaches is demonstrated and their relevance for an ecologically valid neuropsychology is highlighted.


Author(s):  
Virginie Crollen
Julie Castronovo
Xavier Seron

Over the last 30 years, numerical estimation has been studied extensively. Recently, Castronovo and Seron (2007) proposed the bi-directional mapping hypothesis in order to account for the finding that, depending on the type of estimation task (perception vs. production of numerosities), reverse patterns of performance are found (i.e., under- and over-estimation, respectively). Here, we further investigated this hypothesis by submitting adult participants to three types of numerical estimation task: (1) a perception task, in which participants had to estimate the numerosity of a non-symbolic collection; (2) a production task, in which participants had to approximately produce the numerosity of a symbolic numerical input; and (3) a reproduction task, in which participants had to reproduce the numerosity of a non-symbolic numerical input. Our results gave further support to the finding that different patterns of performance are found according to the type of estimation task: (1) under-estimation in the perception task; (2) over-estimation in the production task; and (3) accurate estimation in the reproduction task. Moreover, correlation analyses revealed that the more a participant under-estimated in the perception task, the more he/she over-estimated in the production task. We discussed these empirical data by showing how they can be accounted for by the bi-directional mapping hypothesis (Castronovo & Seron, 2007).
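The reported correlation between under-estimation in perception and over-estimation in production is what a single shared mapping bias would produce. The sketch below simulates hypothetical participants under that assumption (none of the numbers come from the study) simply to make the predicted positive correlation concrete.

```python
import numpy as np

rng = np.random.default_rng(1)
n_participants = 30

# Hypothetical data: each participant has one mapping bias. Under a bi-directional
# mapping account, the same bias pulls perceptual estimates down (under-estimation)
# and pushes produced collections up (over-estimation).
bias = rng.normal(0.15, 0.05, n_participants)
underestimation = bias + rng.normal(0, 0.02, n_participants)  # perception task
overestimation = bias + rng.normal(0, 0.02, n_participants)   # production task

r = np.corrcoef(underestimation, overestimation)[0, 1]
print(f"simulated correlation between under- and over-estimation: r = {r:.2f}")
```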

