Usage-based linguistics and the magic number four

2017 ◽  
Vol 28 (2) ◽  
pp. 209-237 ◽  
Author(s):  
Clarence Green

Miller’s (1956, The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review 63(2). 81–97) working memory (WM) capacity of around seven items, plus or minus two, was never found by usage-based linguists to be a recurrent pattern in language. Thus, it has not figured prominently in cognitive models of grammar. Upon reflection, this is somewhat unusual: WM has long been considered a fundamental cognitive domain for information processing in psychology, so one might reasonably have expected properties such as capacity constraints to be reflected in language use and in structures derived from use. This paper proposes that Miller’s (1956) number has not been particularly productive in usage-based linguistics because it turns out to have been an overestimate. A revised WM capacity has now superseded it within cognitive science, a “magic number four plus or minus one” (Cowan 2001, The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences 24(1). 87–185). Drawing on evidence from spoken language corpora across multiple languages, this paper suggests that, unlike Miller’s (1956) estimate, a range of linguistic structures and patterns align with this revised capacity, including phrasal verbs, idioms, n-grams, the lengths of intonation units, and some abstract grammatical properties of phrasal categories and clause structure.
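As a toy illustration of the kind of corpus check this abstract describes, the sketch below counts the word lengths of multiword units against Cowan's 4 ± 1 window; the unit list is invented for demonstration, not drawn from the paper's corpora.

```python
# Illustrative only: how many multiword units fall inside the
# 4 +/- 1 window?  The unit list is a toy stand-in for the
# spoken-corpus data the paper actually draws on.
units = [
    "give up", "look forward to", "kick the bucket",
    "spill the beans", "at the end of the day", "you know what I mean",
]

lengths = [len(u.split()) for u in units]
in_window = [n for n in lengths if 3 <= n <= 5]  # 4 +/- 1 words

print("lengths:", lengths)
print(f"{len(in_window)}/{len(lengths)} units fall within 4 +/- 1 words")
```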

Author(s):  
Stephen Grossberg

A historical overview is given of interdisciplinary work in physics and psychology by some of the greatest nineteenth-century scientists, and of why the fields split, leading to a century of ferment before the current scientific revolution in the mind-brain sciences began to understand how we autonomously adapt to a changing world. New nonlinear, nonlocal, and nonstationary intuitions and laws are needed to understand how brains make minds. Helmholtz’s work on vision illustrates why he left psychology. His concept of unconscious inference presaged modern ideas about learning, expectation, and matching that this book scientifically explains. The fact that brains are designed to control behavioral success has profound implications for the methods and models that can unify mind and brain. Backward learning in time, and serial learning, illustrate why neural networks are a natural language for explaining brain dynamics, including the correct functional stimuli and laws for short-term memory (STM), medium-term memory (MTM), and long-term memory (LTM) traces. In particular, brains process spatial patterns of STM and LTM, not just individual traces. A thought experiment leads to universal laws for how neurons, and more generally all cellular tissues, process distributed STM patterns in cooperative-competitive networks without experiencing contamination by noise or pattern saturation. The chapter illustrates how thinking this way leads to unified and principled explanations of huge databases. It closes with a brief history of the binary, linear, and continuous-nonlinear sources of neural models, their advantages and disadvantages, and how models like Deep Learning, as well as the author’s contributions, fit into this history.
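The “universal laws” alluded to here are, in Grossberg’s work, shunting cooperative-competitive equations. One standard instance is sketched below in generic notation (A is a passive decay rate, B a saturation bound, I_i the excitatory input to cell i); it is offered for orientation, not as the chapter’s specific derivation.

```latex
% Shunting on-center off-surround network (standard form):
\[
  \dot{x}_i \;=\; -A\,x_i \;+\; (B - x_i)\,I_i \;-\; x_i \sum_{k \neq i} I_k
\]
% At equilibrium, with total input I = \sum_k I_k,
\[
  x_i \;=\; \frac{B\,I_i}{A + I}\,,
\]
% so each activity tracks the relative pattern weight I_i / I and stays
% below B: the distributed STM pattern neither saturates nor amplifies
% uniform background noise.
```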


2020 ◽  
Vol 12 (10) ◽  
pp. 4107
Author(s):  
Wafa Shafqat ◽  
Yung-Cheol Byun

The significance of contextual data has been recognized by researchers and practitioners in numerous disciplines, such as personalization, information retrieval, ubiquitous and mobile computing, data mining, and management. While substantial research has already been performed in the area of recommender systems, the vast majority of existing approaches focus on recommending the most relevant items to users and usually neglect additional contextual information, for example time, location, weather, or the popularity of different locations. We therefore propose a deep long short-term memory (LSTM) based context-enriched hierarchical model. The proposed model has two levels of hierarchy, and each level comprises a deep LSTM network with a distinct task. At the first level, the LSTM learns from the user’s travel history and predicts next-location probabilities. A contextual learning unit operates between the two levels. This unit extracts the maximum possible contexts related to a location, the user, and the environment, such as weather, climate, and risks, and also estimates other effective parameters such as the popularity of a location. To avoid feature congestion, XGBoost is used to rank feature importance, and features with no importance are discarded. At the second level, another LSTM learns these contextual features, embedded with the location probabilities, and outputs the top-ranked places. The proposed approach achieved the highest accuracy, 97.2%, followed by a gated recurrent unit (GRU) model (96.4%) and a bidirectional LSTM (94.2%). We also performed experiments to find the optimal size of travel history for effective recommendations.
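A minimal sketch of this two-level pipeline follows, assuming tf.keras and xgboost; the layer sizes, feature counts, and synthetic data are illustrative assumptions, not the authors' published configuration.

```python
# Illustrative sketch of the two-level context-enriched LSTM pipeline
# described above.  All sizes and the synthetic data are assumptions
# for demonstration, not the authors' published configuration.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from xgboost import XGBClassifier

NUM_LOCATIONS = 50   # location vocabulary size (assumed)
HISTORY_LEN = 10     # check-ins per user history (assumed)
NUM_CONTEXTS = 8     # raw contextual features, e.g. weather, popularity

rng = np.random.default_rng(0)
X_hist = rng.integers(0, NUM_LOCATIONS, size=(1000, HISTORY_LEN))
X_ctx = rng.random((1000, NUM_CONTEXTS))
y_next = rng.integers(0, NUM_LOCATIONS, size=1000)

# Level 1: LSTM over travel histories -> next-location probabilities.
level1 = Sequential([
    Embedding(NUM_LOCATIONS, 32),
    LSTM(64),
    Dense(NUM_LOCATIONS, activation="softmax"),
])
level1.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
level1.fit(X_hist, y_next, epochs=1, verbose=0)
loc_probs = level1.predict(X_hist, verbose=0)

# Contextual learning unit: rank features with XGBoost and drop those
# with zero importance (the "feature congestion" filter).
ranker = XGBClassifier(n_estimators=50).fit(X_ctx, y_next)
X_ctx_kept = X_ctx[:, ranker.feature_importances_ > 0]

# Level 2: LSTM over contextual features embedded with the location
# probabilities from level 1; its softmax yields the ranked places.
X_level2 = np.concatenate([loc_probs, X_ctx_kept], axis=1)[:, None, :]
level2 = Sequential([
    LSTM(64),
    Dense(NUM_LOCATIONS, activation="softmax"),
])
level2.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
level2.fit(X_level2, y_next, epochs=1, verbose=0)

# Top-5 recommended locations for the first user.
top5 = np.argsort(level2.predict(X_level2[:1], verbose=0)[0])[::-1][:5]
print("Top-5 recommended locations:", top5)
```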


2001 ◽  
Vol 24 (1) ◽  
pp. 126-127 ◽  
Author(s):  
Jerwen Jou

Cowan's concept of a pure short-term memory (STM) capacity limit is equivalent to that of memory subitizing. However, a robust phenomenon well known from the Sternberg paradigm, namely the linear increase of reaction time (RT) as a function of memory set size, is not consistent with this concept. Cowan's STM capacity theory will remain incomplete until it can account for this phenomenon.
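For reference, the set-size effect at issue is standardly summarized as a linear RT function; the slope below is the classic ballpark figure from Sternberg's item-recognition experiments, not a value reported in this commentary.

```latex
% Linear scanning function over memory set size s:
\[
  \mathrm{RT}(s) \;=\; a + b\,s, \qquad b \approx 38\ \text{ms per item}
\]
% A pure capacity limit with parallel access would predict a flat RT
% within capacity, which is why the nonzero slope is taken as a challenge.
```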


2001 ◽  
Vol 24 (1) ◽  
pp. 120-121 ◽  
Author(s):  
K. Anders Ericsson ◽  
Elizabeth P. Kirk

Cowan's experimental techniques cannot constrain subjects' recall of presented information to distinct, independent chunks in short-term memory (STM). The encoding of associations in long-term memory contaminates estimates of pure STM capacity. Even in task environments where the functional independence of chunks is convincingly demonstrated, individuals can increase their storage of independent chunks with deliberate practice, well above the magical number four.


2003 ◽  
Vol 20 (3) ◽  
pp. 135-145 ◽  
Author(s):  
Edilaine Lins Gouveia ◽  
Antonio Roazzi ◽  
David P. O'Brien ◽  
Karina Moutinho ◽  
Maria da Graça B. B. Dias

In recent years there has been much debate over whether or not a mental logic exists. The idea has come under numerous attacks, both from researchers who hold that all reasoning proceeds through mental models (e.g., Johnson-Laird & Byrne, 1993) and from those who argue that human reasoning is content dependent (Holyoak & Cheng, 1995). The controversy has spread through international journals such as Psychological Review and Behavioral and Brain Sciences. Nevertheless, proponents of Mental Logic Theory (Teoria da Lógica Mental, TLM) believe that few cognitive scientists really understand the theory (O'Brien, 1998a). Against this background, the present article sets out to bring the discussion to the national scene. Some theories of deductive reasoning are first summarized; next come the main criticisms of the existence of a mental logic, and the "defense" offered by those who proclaim that such a logic exists. Finally, the TLM is discussed in greater detail.

