SOME WORD ORDER BIASES FROM LIMITED BRAIN RESOURCES: A MATHEMATICAL APPROACH

2008 ◽  
Vol 11 (03) ◽  
pp. 393-414 ◽  
Author(s):  
RAMON FERRER-I-CANCHO

In this paper, we propose a mathematical framework for studying word order optimization. The framework relies on the well-known positive correlation between cognitive cost and the Euclidean distance between the elements (e.g. words) involved in a syntactic link. We study the conditions under which a certain word order is more economical than an alternative. We apply our methodology to two different cases: (a) the ordering of subject (S), verb (V) and object (O), and (b) the covering of a root word by a syntactic link. For the former, we find that SVO and its symmetric, OVS, are more economical than OSV, SOV, VOS and VSO at least 2/3 of the time. For the latter, we find that not covering the root word is more economical than covering it at least 1/2 of the time. With the help of our framework, one can explain some Greenbergian universals. Our findings provide further theoretical support for the hypothesis that the limited resources of the brain introduce biases toward certain word orders. Our theoretical findings could inspire or illuminate future psycholinguistic or corpus-based studies.
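The intuition behind the S/V/O result can be sketched numerically (an illustration of the distance-minimization idea, not code from the paper): assuming two syntactic links, S–V and V–O, and a cost that grows with the linear distance between linked words, the verb-medial orders minimize the total link distance.

```python
# Illustration: total linear dependency distance for the six S/V/O orders,
# assuming links S-V and V-O and cost proportional to linear distance.
from itertools import permutations

def total_link_distance(order, links=(("S", "V"), ("V", "O"))):
    pos = {word: i for i, word in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in links)

costs = {"".join(p): total_link_distance(p) for p in permutations("SVO")}
# Orders placing the verb between its two dependents minimize the cost.
cheapest = [o for o, c in costs.items() if c == min(costs.values())]
```

Under these assumptions the cheapest orders are exactly SVO and its symmetric OVS (total distance 2), while the other four orders cost 3.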

2008 ◽  
Vol 11 (03) ◽  
pp. 415-420 ◽  
Author(s):  
MICHAEL CYSOUW

This is a reply to Ramon Ferrer-I-Cancho's paper in this issue "Some Word Order Biases from Limited Brain Resources: A Mathematical Approach." In this reply, I challenge the Euclidean distance model proposed in that paper by proposing a simple alternative model based on linear ordering.


2021 ◽  
Vol 2090 (1) ◽  
pp. 012119
Author(s):  
Benjamin Ambrosio

Abstract This article focuses on a mathematical description of the emotional phenomenon. The key idea is to treat emotions as a form of energy and to rely on an analogy with electromagnetic waves. Our aim is to provide a mathematical approach for characterizing the emergence of emotional fluxes in the human psyche, going beyond classical psychological approaches. In this setting, specific emotions correspond to specific frequencies, and our psychic state results from the summation of different characteristic frequencies. Our general model of the psychic state is a dynamical system whose evolution results from interactions between external inputs and internal reactions. The model provides both qualitative (frequency) and quantitative (intensity) components. It is intended to be applied to real-life situations (in particular in work environments), and we provide a typical example that naturally leads to a problem of control.
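The superposition idea in the abstract can be sketched as follows; the emotion names, frequencies and intensities below are purely illustrative placeholders, not values from the article.

```python
# Minimal sketch: the psychic state as a superposition of emotion-specific
# oscillations, each with its own characteristic frequency and intensity.
import math

# hypothetical (frequency, intensity) pairs for three illustrative emotions
emotions = {"joy": (2.0, 1.0), "fear": (5.0, 0.5), "calm": (0.5, 2.0)}

def psychic_state(t, components=emotions):
    """Sum the characteristic oscillations at time t."""
    return sum(a * math.sin(2 * math.pi * f * t) for f, a in components.values())

# The state at any instant is the summed contribution of all components.
state_at_origin = psychic_state(0.0)
```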


Open Physics ◽  
2019 ◽  
Vol 17 (1) ◽  
pp. 468-479
Author(s):  
Mária Ždímalová ◽  
Ján Major ◽  
Martin Kopáni

Abstract In this paper we introduce a concept of segmentation based on a mathematical, graph-theoretic approach using the family of augmenting-path algorithms. We present a new program, its implementation, the underlying algorithms, and the results obtained for the segmentation of biomedical data. Our program handles segmentation and computes a measure of the presence of minerals in the biomedical data. As a consequence, we demonstrate the presence of minerals in data obtained from rabbit brains.
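The augmenting-path machinery behind such graph-based segmentation can be sketched on a toy graph (this is a generic max-flow/min-cut illustration, not the authors' biomedical pipeline): pixels become nodes, source and sink seeds model the two labels (e.g. "mineral" vs. "background"), and the minimum cut separates them.

```python
# Minimal Edmonds-Karp sketch: repeatedly push flow along shortest
# augmenting paths until none remain; the resulting min cut yields the
# two-label segmentation. Graph and capacities below are illustrative.
from collections import deque

def max_flow(capacity, source, sink):
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return total, flow  # no augmenting path left: flow is maximal
        # find the bottleneck along the path, then augment
        bottleneck, v = float("inf"), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# 4-node toy graph: 0 = source seed, 3 = sink seed
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
value, _ = max_flow(cap, 0, 3)  # min-cut value separating the two labels
```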


2017 ◽  
Vol 2 (1) ◽  
pp. 6
Author(s):  
Rania Ahmed Kadry Abdel Gawad Birry

Abstract—Alzheimer’s disease (AD) is a brain disease that causes a slow decline in memory, thinking and reasoning skills. It represents a major public health problem. Magnetic Resonance Imaging (MRI) has shown that the brains of people with AD shrink significantly as the disease progresses. This shrinkage appears in specific brain regions such as the hippocampus, a small, curved formation that plays an important role in the limbic system and is involved in the formation of new memories, learning and emotions. Medical information in brain MRI is used to detect abnormalities in physiological structures. Structural MRI measurements can detect and follow the evolution of brain atrophy, a marker of disease progression; they therefore allow diagnosis and prediction of AD. The research’s main target is the automatic early recognition of Alzheimer’s disease, thereby avoiding deterioration of the case to the stage of complete brain damage. Alzheimer’s disease yields visible changes in brain structure. The aim is to recognize, at an early stage, whether a patient belongs to the Alzheimer’s disease category or is a normal healthy person. Image pre-processing was first applied to the MRI images, comprising noise reduction, gray-scale conversion and binary-scale conversion. Feature extraction followed, including cropping and data reduction to low spatial frequency components with the Discrete Cosine Transform (DCT). Traditional classification techniques, namely Euclidean Distance, Chebyshev Distance, Cosine Distance, City Block Distance, Correlation Distance, and a black-pixel counter, were then applied to the resulting vectors. This paper aims to recognize and detect an Alzheimer’s-infected brain automatically from MRI, without the need for a clinical expert.
This early recognition would help postpone disease progression and maintain the patient at an almost steady stage. After collecting a dataset of 50 MRI scans, 25 normal and 25 AD, it was concluded that the Chebyshev Distance classifier yielded the highest success rate in the recognition of Alzheimer’s disease, with 94% accuracy, compared to the other classification techniques used: Euclidean Distance 91.6%, Cosine Distance 86.8%, City Block Distance 89.6%, Correlation Distance 86.4% and black-pixel counter 90%.
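The classification step such distance-based methods use can be sketched as nearest-template assignment (a hedged illustration, not the paper's code; the feature vectors below are toy stand-ins for DCT coefficients):

```python
# Assign a test feature vector to the label of its nearest training vector
# under a chosen distance, here Chebyshev vs. Euclidean for comparison.
import math

def chebyshev(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_label(features, training, dist):
    """training: list of (feature_vector, label); return the closest label."""
    return min(training, key=lambda item: dist(features, item[0]))[1]

# toy feature vectors standing in for DCT-reduced MRI features
train = [([0.9, 0.1, 0.0], "normal"), ([0.2, 0.8, 0.7], "AD")]
label = nearest_label([0.25, 0.7, 0.6], train, chebyshev)
```

Swapping the `dist` argument is all it takes to compare the metrics the abstract lists, which is presumably how the per-classifier accuracies were obtained.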


2020 ◽  
Author(s):  
Andrea I. Luppi ◽  
Pedro A.M. Mediano ◽  
Fernando E. Rosas ◽  
Judith Allanson ◽  
John D. Pickard ◽  
...  

Abstract A central goal of neuroscience is to understand how the brain synthesises information from multiple inputs to give rise to a unified conscious experience. This process is widely believed to require integration of information. Here, we combine information theory and network science to address two fundamental questions: how is the human information-processing architecture functionally organised? And how does this organisation support human consciousness? To address these questions, we leverage the mathematical framework of Integrated Information Decomposition to delineate a cognitive architecture wherein specialised modules interact with a “synergistic global workspace,” comprising functionally distinct gateways and broadcasters. Gateway regions gather information from the specialised modules for processing in the synergistic workspace, whose contents are then further integrated to later be made widely available by broadcasters. Through data-driven analysis of resting-state functional MRI, we reveal that gateway regions correspond to the brain’s well-known default mode network, whereas broadcasters of information coincide with the executive control network. Demonstrating that this synergistic workspace supports human consciousness, we further apply Integrated Information Decomposition to BOLD signals to compute integrated information across the brain. By comparing changes due to propofol anaesthesia and severe brain injury, we demonstrate that most changes in integrated information happen within the synergistic workspace. Furthermore, loss of consciousness corresponds to reduced integrated information between gateway, but not broadcaster, regions of the synergistic workspace. Thus, loss of consciousness may coincide with breakdown of information integration by this synergistic workspace of the human brain.
Together, these findings demonstrate that refining our understanding of information-processing in the human brain through Integrated Information Decomposition can provide powerful insights into the human neurocognitive architecture, and its role in supporting consciousness.


2017 ◽  
Vol 3 (2) ◽  
pp. 259-272
Author(s):  
Arif Humaini

Indonesian and Arabic are different languages; each has its own system at the level of phonemes, morphemes, phrases, clauses, and sentences. Arabic is a flectional (inflected) language, while Indonesian is not. Flection is defined as the process or result of adding affixes to a base or root word to delimit its grammatical meaning. The presence of markers in Arabic is therefore deemed paramount, while Indonesian relies on word order. A marker is a device, such as an affix, that expresses a grammatical feature or function. In Arabic the corresponding markers are called al-'alamat. There are many kinds of markers in Arabic, such as gender markers (mu'annats and mudzakkar), mufrod (singular) markers, plural markers, and so on. In this paper, we focus only on the plural marker. The plural in Indonesian is mostly expressed through reduplication of nouns, verbs, and adjectives, or through numerals combined with classifiers (‘penyukat’), which can indicate the plurality of a word. In Arabic, the plural marker is formed in three ways: (1) replacing letters or ‘harakat’, (2) eliminating one of the letters, or (3) adding affixes at the beginning, in the middle, or at the end of the word.


Author(s):  
Daniel D. Hutto ◽  
Erik Myin

Evolving Enactivism argues that cognitive phenomena—perceiving, imagining, remembering—can be best explained in terms of an interface between contentless and content-involving forms of cognition. Building on their earlier book Radicalizing Enactivism, which proposes that there can be forms of cognition without content, Daniel Hutto and Erik Myin demonstrate the unique explanatory advantages of recognizing that only some forms of cognition have content while others—the most elementary ones—do not. They offer an account of the mind in duplex terms, proposing a complex vision of mentality in which these basic contentless forms of cognition interact with content-involving ones. Hutto and Myin argue that the most basic forms of cognition do not, contrary to a currently popular account of cognition, involve picking up and processing information that is then used, reused, stored, and represented in the brain. Rather, basic cognition is contentless—fundamentally interactive, dynamic, and relational. In advancing the case for a radically enactive account of cognition, Hutto and Myin propose crucial adjustments to our concept of cognition and offer theoretical support for their revolutionary rethinking, emphasizing its capacity to explain basic minds in naturalistic terms. They demonstrate the explanatory power of the duplex vision of cognition, showing how it offers powerful means for understanding quintessential cognitive phenomena without introducing scientifically intractable mysteries into the mix.


2016 ◽  
Vol 25 (1) ◽  
pp. 84-92 ◽  
Author(s):  
DOMINIC WILKINSON

Abstract: Severe congenital hydrocephalus manifests as accumulation of a large amount of excess fluid in the brain. It is a paradigmatic example of a condition in which diagnosis is relatively straightforward and long-term survival is usually associated with severe disability. It might be thought that, should parents agree, palliative care and limitation of treatment would be clearly permissible on the basis of the best interests of the infant. However, severe congenital hydrocephalus illustrates some of the neuroethical challenges in pediatrics. The permissibility of withholding or withdrawing treatment is limited by uncertainty in prognosis and the possibility of “palliative harm.” Conversely, although there are some situations in which treatment is contrary to the interests of the child, or unreasonable on the grounds of limited resources, acute surgical treatment of hydrocephalus rarely falls into that category.


2016 ◽  
Author(s):  
Jörn Diedrichsen ◽  
Nikolaus Kriegeskorte

Abstract Representational models specify how activity patterns in populations of neurons (or, more generally, in multivariate brain-activity measurements) relate to sensory stimuli, motor responses, or cognitive processes. In an experimental context, representational models can be defined as hypotheses about the distribution of activity profiles across experimental conditions. Currently, three different methods are being used to test such hypotheses: encoding analysis, pattern component modeling (PCM), and representational similarity analysis (RSA). Here we develop a common mathematical framework for understanding the relationship of these three methods, which share one core commonality: all three evaluate the second moment of the distribution of activity profiles, which determines the representational geometry, and thus how well any feature can be decoded from population activity with any readout mechanism capable of a linear transform. Using simulated data for three different experimental designs, we compare the power of the methods to adjudicate between competing representational models. PCM implements a likelihood-ratio test and therefore provides the most powerful test if its assumptions hold. However, the other two approaches – when conducted appropriately – can perform similarly. In encoding analysis, the linear model needs to be appropriately regularized, which effectively imposes a prior on the activity profiles. With such a prior, an encoding model specifies a well-defined distribution of activity profiles. In RSA, the unequal variances and statistical dependencies of the dissimilarity estimates need to be taken into account to reach near-optimal power in inference. The three methods render different aspects of the information explicit (e.g. single-response tuning in encoding analysis and population-response representational dissimilarity in RSA) and have specific advantages in terms of computational demands, ease of use, and extensibility.
The three methods are properly construed as complementary components of a single data-analytical toolkit for understanding neural representations on the basis of multivariate brain-activity data.

Author Summary: Modern neuroscience can measure activity of many neurons or the local blood oxygenation of many brain locations simultaneously. As the number of simultaneous measurements grows, we can better investigate how the brain represents and transforms information, to enable perception, cognition, and behavior. Recent studies go beyond showing that a brain region is involved in some function. They use representational models that specify how different perceptions, cognitions, and actions are encoded in brain-activity patterns. In this paper, we provide a general mathematical framework for such representational models, which clarifies the relationships between three different methods that are currently used in the neuroscience community. All three methods evaluate the same core feature of the data, but each has distinct advantages and disadvantages. Pattern component modeling (PCM) implements the most powerful test between models, and is analytically tractable and expandable. Representational similarity analysis (RSA) provides a highly useful summary statistic (the dissimilarity) and enables model comparison with weaker distributional assumptions. Finally, encoding models characterize individual responses and enable the study of their layout across cortex. We argue that these methods should be considered components of a larger toolkit for testing hypotheses about the way the brain represents information.
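The shared core quantity, the second moment of the activity profiles, is easy to make concrete (a sketch with toy data, assuming an activity matrix U of conditions by measurement channels):

```python
# Second moment matrix G = U U^T / P for a (conditions x channels) activity
# matrix U with P channels; values here are simulated toy data.
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((4, 50))   # 4 conditions, 50 channels (voxels/units)
G = U @ U.T / U.shape[1]           # 4 x 4 second moment matrix

# The squared Euclidean distances used in RSA follow directly from G:
# d^2(i, j) = G[i, i] + G[j, j] - 2 * G[i, j]
d2_01 = G[0, 0] + G[1, 1] - 2 * G[0, 1]
```

This identity is why the dissimilarities evaluated in RSA, the model fits in encoding analysis, and the likelihoods in PCM all depend on the data only through the same representational geometry.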


2020 ◽  
Vol 23 (2) ◽  
pp. 166-178
Author(s):  
Zaid H. Berjis ◽  
Ahmed K. Al-sulaifanie

Spike sorting is the process of separating an extracellular recording of brain signals into single-unit activities. A number of algorithms have been proposed for this purpose, but there is still no generally accepted solution. In this paper a spike-sorting method is proposed based on the Euclidean distance between the most effective features of spikes, represented by the principal components (PCs) of the detected and aligned spikes. Assessments of the method at different signal-to-noise ratios (SNRs), representing background noise, showed that it performs spike sorting with a high level of accuracy.
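The pipeline the abstract describes can be sketched as follows (an illustrative reconstruction, not the authors' implementation; the waveforms are synthetic):

```python
# Project aligned spike waveforms onto their leading principal components,
# then group spikes by Euclidean distance in PC space.
import numpy as np

def pca_features(spikes, n_components=2):
    """spikes: (n_spikes, n_samples) aligned waveforms -> PC scores."""
    centered = spikes - spikes.mean(axis=0)
    # eigenvectors of the covariance give the principal directions
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_components]
    return centered @ eigvecs[:, order]

def assign_to_nearest(scores, templates):
    """Label each spike by its Euclidean-nearest cluster template."""
    dists = np.linalg.norm(scores[:, None, :] - templates[None, :, :], axis=2)
    return dists.argmin(axis=1)

# two synthetic "units" with distinct waveform shapes plus background noise
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 30)
unit_a, unit_b = np.sin(2 * np.pi * 3 * t), -np.sin(2 * np.pi * 3 * t)
spikes = np.vstack([unit_a + 0.1 * rng.standard_normal((20, 30)),
                    unit_b + 0.1 * rng.standard_normal((20, 30))])
scores = pca_features(spikes)
templates = np.vstack([scores[:20].mean(axis=0), scores[20:].mean(axis=0)])
labels = assign_to_nearest(scores, templates)
```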

