Are Words the Quanta of Human Language? Extending the Domain of Quantum Cognition

Entropy ◽  
2021 ◽  
Vol 24 (1) ◽  
pp. 6
Author(s):  
Diederik Aerts ◽  
Lester Beltran

In previous research, we showed that ‘texts that tell a story’ exhibit a statistical structure that is not Maxwell–Boltzmann but Bose–Einstein. Our explanation is that this is due to the presence of ‘indistinguishability’ in human language, as a result of the same words in different parts of the story being indistinguishable from one another, in much the same way that ‘indistinguishability’ occurs in quantum mechanics, likewise leading there to Bose–Einstein rather than Maxwell–Boltzmann statistics. In the current article, we set out to provide an explanation for these Bose–Einstein statistics in human language. We show that it is the presence of ‘meaning’ in ‘texts that tell a story’ that gives rise to the lack of independence characteristic of Bose–Einstein statistics, and this provides conclusive evidence that ‘words can be considered the quanta of human language’, structurally similar to how ‘photons are the quanta of electromagnetic radiation’. Using several studies on entanglement from our Brussels research group, and by introducing the von Neumann entropy for human language, we also show that it is the presence of ‘meaning’ in texts that makes the entropy of a total text smaller than the entropy of the words composing it. We explain how the new insights in this article fit within the research domain called ‘quantum cognition’, where quantum probability models and quantum vector spaces are used to model human cognition; how they are also relevant to the use of quantum structures in information retrieval and natural language processing; and how they introduce ‘quantization’ and ‘Bose–Einstein statistics’ as relevant quantum effects there. Inspired by the conceptuality interpretation of quantum mechanics, and relying on the new insights, we put forward hypotheses about the nature of physical reality. In doing so, we note how this new type of decrease in entropy, and its explanation, may be important for the development of quantum thermodynamics. We also note how it can give rise to an original explanatory picture of the nature of physical reality on the surface of planet Earth, in which human culture emerges as a reinforcing continuation of life.
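The claim that ‘meaning’ makes the entropy of a total text smaller than the entropy of the words composing it has a textbook quantum analogue: for an entangled pure state, the von Neumann entropy of the whole is zero while each part taken alone is maximally mixed. A minimal numerical sketch (our own illustration, not the authors' model; the concept labels are hypothetical):

```python
import math

def vn_entropy(eigenvalues):
    # von Neumann entropy S = -sum_i p_i ln(p_i) over the nonzero
    # eigenvalues of a density matrix
    return 0.0 - sum(p * math.log(p) for p in eigenvalues if p > 1e-12)

# Entangled "two-concept" state, analogous to a Bell state:
#   |psi> = (|cat, animal> + |dog, animal>) / sqrt(2)
# The joint state is pure, so its density matrix has eigenvalues (1, 0, 0, 0).
S_total = vn_entropy([1.0, 0.0, 0.0, 0.0])

# Tracing out the second concept leaves the maximally mixed state I/2,
# with eigenvalues (1/2, 1/2).
S_word = vn_entropy([0.5, 0.5])

print(f"S(total text)  = {S_total:.3f}")   # zero: the whole carries no entropy
print(f"S(single word) = {S_word:.3f}")    # ln 2: each part alone is mixed
```

The point of the sketch is only the inequality: the entropy of the whole is strictly smaller than that of its parts, which is impossible classically.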

2019 ◽  
Vol 25 (3) ◽  
pp. 755-802
Author(s):  
Diederik Aerts ◽  
Lester Beltran

We model a piece of text of human language telling a story by means of the quantum structure describing a Bose gas in a state close to a Bose–Einstein condensate near absolute zero temperature. For this we introduce energy levels for the words (concepts) used in the story, and we also introduce the new notion of ‘cogniton’ as the quantum of human thought. Words (concepts) are then cognitons in different energy states, as is the case for photons in different energy states, or states of different radiative frequency, when the considered boson gas is that of the quanta of the electromagnetic field. We show that Bose–Einstein statistics delivers a very good model for these pieces of text telling stories, both for short stories and for long stories of the size of novels. We analyze an unexpected connection with Zipf’s law in human language, the Zipf ranking relating to the energy levels of the words, and the Bose–Einstein graph coinciding with the Zipf graph. We investigate the issue of ‘identity and indistinguishability’ from this new perspective and conjecture that the way one can easily understand how two of ‘the same concepts’ are ‘absolutely identical and indistinguishable’ in human language is also the way in which quantum particles are absolutely identical and indistinguishable in physical reality, providing in this way new evidence for our conceptuality interpretation of quantum theory.
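The coincidence of the Bose–Einstein graph with the Zipf graph can be seen in a small numerical sketch (ours, not the authors' analysis; the rank-based energy assignment and all parameters are hand-picked for illustration): with energy levels proportional to word rank, a Bose–Einstein occupation curve tracks Zipf-like word counts far more closely than a Maxwell–Boltzmann exponential does.

```python
import math

def bose_einstein(E, mu, T):
    # Bose–Einstein occupation number: N(E) = 1 / (exp((E - mu)/T) - 1)
    return 1.0 / (math.exp((E - mu) / T) - 1.0)

def maxwell_boltzmann(E, A, T):
    # Maxwell–Boltzmann occupation number: N(E) = A * exp(-E / T)
    return A * math.exp(-E / T)

# Toy Zipf-like word counts for a "story": the rank-i word occurs ~ C / i times
C = 120.0
counts = [C / i for i in range(1, 21)]

# Rank-based energy levels E_i = i - 1 (our simplification)
energies = list(range(20))

# Hand-tuned parameters, chosen for illustration only
be = [bose_einstein(E, mu=-1.0, T=120.0) for E in energies]
mb = [maxwell_boltzmann(E, A=120.0, T=6.3) for E in energies]

def log_sse(model, data):
    # Squared error on a log scale, where Zipf structure is visible
    return sum((math.log(m) - math.log(d)) ** 2 for m, d in zip(model, data))

print(f"Bose–Einstein log-SSE:     {log_sse(be, counts):.3f}")
print(f"Maxwell–Boltzmann log-SSE: {log_sse(mb, counts):.3f}")
```

The design choice behind the good fit: for small (E − mu)/T the Bose–Einstein occupation behaves as T/(E − mu), which with mu = −1 is exactly the Zipf shape C/(rank), whereas a pure exponential cannot reproduce a power law.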


1985 ◽  
Vol 30 (7) ◽  
pp. 529-531
Author(s):  
Patrick Carroll

2013 ◽  
Vol 22 (12) ◽  
pp. 1342030 ◽  
Author(s):  
KYRIAKOS PAPADODIMAS ◽  
SUVRAT RAJU

We point out that nonperturbative effects in quantum gravity are sufficient to reconcile the process of black hole evaporation with quantum mechanics. In ordinary processes, these corrections are unimportant because they are suppressed by e^(-S). However, they gain relevance in information-theoretic considerations because their small size is offset by the corresponding largeness of the Hilbert space. In particular, we show how such corrections can cause the von Neumann entropy of the emitted Hawking quanta to decrease after the Page time, without modifying the thermal nature of each emitted quantum. Second, we show that exponentially suppressed commutators between operators inside and outside the black hole are sufficient to resolve paradoxes associated with the strong subadditivity of entropy without any dramatic modifications of the geometry near the horizon.
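The turnover of the radiation's von Neumann entropy at the Page time is already visible in Page's average-entropy estimate for random pure states, S(m, n) ≈ ln m − m/(2n) for subsystem dimension m ≤ n. A minimal sketch (our illustration; the qubit count is a toy scale, not anything from the paper):

```python
import math

def page_entropy(m, n):
    # Page's average entanglement entropy of a random pure state,
    # subsystem dimension m, complement dimension n: S ~ ln(m) - m / (2n)
    if m > n:
        m, n = n, m  # a pure-state bipartition has symmetric entropies
    return math.log(m) - m / (2 * n)

N = 20                          # total qubits in (black hole + radiation)
curve = []
for k in range(1, N):           # k qubits emitted as radiation so far
    m = 2 ** k                  # radiation Hilbert-space dimension
    n = 2 ** (N - k)            # remaining black-hole dimension
    curve.append(page_entropy(m, n))

peak = curve.index(max(curve)) + 1
print(f"radiation entropy peaks after {peak} of {N} qubits (Page time ~ N/2)")
```

The entropy rises while the black hole dominates the Hilbert space and falls afterwards; the nonperturbative corrections discussed in the abstract are what allow a unitary description to follow this curve rather than Hawking's monotonic growth.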


Nature ◽  
1935 ◽  
Vol 136 (3428) ◽  
pp. 65-65 ◽  
Author(s):  
N. BOHR

Webology ◽  
2021 ◽  
Vol 18 (Special Issue 01) ◽  
pp. 196-210
Author(s):  
Dr.P. Golda Jeyasheeli ◽  
N. Indumathi

Nowadays, interaction between deaf and mute people and hearing people is difficult, because hearing people struggle to understand the meaning of gestures, while deaf and mute people find sentence formation and grammatical correctness challenging. To alleviate the issues faced by these people, an automatic sign language sentence generation approach is proposed. In this project, Natural Language Processing (NLP) based methods are used. NLP is a powerful tool for translation into human language and is responsible for the formation of meaningful sentences from sign language symbols that a hearing person can also understand. In this system, both conventional NLP methods and deep learning NLP methods are used for sentence generation, and the efficiency of the two methods is compared. The generated sentence is displayed in an Android application as the output. This system aims to bridge the gap in interaction between deaf and mute people and hearing people.


2020 ◽  
Author(s):  
Joshua Conrad Jackson ◽  
Joseph Watts ◽  
Johann-Mattis List ◽  
Ryan Drabble ◽  
Kristen Lindquist

Humans have been using language for thousands of years, but psychologists seldom consider what natural language can tell us about the mind. Here we propose that language offers a unique window into human cognition. After briefly summarizing the legacy of language analyses in psychological science, we show how methodological advances have made these analyses more feasible and insightful than ever before. In particular, we describe how two forms of language analysis—comparative linguistics and natural language processing—are already contributing to how we understand emotion, creativity, and religion, and overcoming methodological obstacles related to statistical power and culturally diverse samples. We summarize resources for learning both of these methods, and highlight the best way to combine language analysis techniques with behavioral paradigms. Applying language analysis to large-scale and cross-cultural datasets promises to provide major breakthroughs in psychological science.


2019 ◽  
Author(s):  
Joseph L. Austerweil ◽  
Shi Xian Liew ◽  
Nolan Bradley Conaway ◽  
Kenneth J. Kurtz

The ability to generate new concepts and ideas is among the most fascinating aspects of human cognition, but we do not have a strong understanding of the cognitive processes and representations underlying concept generation. In this paper, we study the generation of new categories using the computational and behavioral toolkit of traditional artificial category learning. Previous work in this domain has focused on how the statistical structure of known categories generalizes to generated categories, overlooking whether (and if so, how) contrast between the known and generated categories is a factor. We report three experiments demonstrating that contrast between what is known and what is created is of fundamental importance for categorization. We propose two novel approaches to modeling category contrast: one focused on exemplar dissimilarity and another on the representativeness heuristic. Our experiments and computational analyses demonstrate that both models capture different aspects of contrast’s role in categorization.
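The exemplar-dissimilarity idea can be sketched minimally (our toy illustration, not the paper's actual model or parameters): score a candidate item for a new category by its mean distance to the known category's exemplars, so that generation favors contrast with what is already known.

```python
import math

def contrast_score(candidate, known_exemplars):
    # Exemplar-dissimilarity contrast: a candidate for a *new* category
    # scores higher the farther it sits from the known category's exemplars
    # (mean Euclidean distance here; the published models may differ).
    return sum(math.dist(candidate, e) for e in known_exemplars) / len(known_exemplars)

known = [(0.2, 0.3), (0.3, 0.2), (0.25, 0.25)]   # learned category (toy 2-D features)
near  = (0.3, 0.3)                                # similar to the known category
far   = (0.9, 0.8)                                # contrasts with it

print(contrast_score(near, known) < contrast_score(far, known))  # True
```

Under this scoring, a generator that samples candidates and keeps high-contrast ones will systematically place the new category away from the learned one, which is the qualitative behavior the experiments report.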

