The Message Shapes Phonology

2019 ◽  
Author(s):  
Andrew B Wedel ◽  
Kathleen Hall ◽  
T. Florian Jaeger ◽  
Elizabeth Hume

Based on a diverse and complementary set of theoretical and empirical findings, we describe an approach to phonology in which sound patterns are shaped by the trade-off between biases supporting message transmission accuracy and resource cost. We refer to this approach as Message-Oriented Phonology. The evidence suggests that these biases influence the form of messages, defined with reference to a language's morphemes, words or higher levels of meaning, rather than influencing phonological categories directly. Integrating concepts from information theory and Bayesian inference with the existing body of phonological research, we propose a testable model of phonology that makes quantitative predictions. Moreover, we show that approaching language as a system of message transfer provides greater explanatory coverage of a diverse range of sound patterns.

Author(s):  
Elizabeth Hume ◽  
Kathleen Currie Hall ◽  
Andrew Wedel

Perceptual factors have been drawn on to provide insight into sound patterns and commonly serve as a diagnostic for markedness. However, a puzzling situation has emerged: patterns associated with strong perceptual distinctiveness and those with weak distinctiveness are both described as unmarked. We propose that insight into the unmarked nature of these patterns can be gained when we take seriously the view of language as a system of information transmission. In particular, we suggest that perceptually weak and strong unmarked patterns are those that effectively balance two competing properties of effective communication: (a) the contribution of the phonological unit in context to accurate message transmission, and (b) the resource cost of the phonological unit.
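As a rough illustration of property (a), the contribution of a unit in context can be operationalized as its surprisal, -log2 P(unit | context): a unit that is highly predictable in its context carries little information, while an unpredictable one carries more. The sketch below uses invented toy counts and hypothetical context labels; it only illustrates the measure, not the authors' model.

```python
import math
from collections import Counter

# Hypothetical toy data: (context, segment) pairs standing in for
# phonological units in context. All counts are invented for illustration.
observations = [
    ("V_V", "t"), ("V_V", "t"), ("V_V", "d"), ("V_V", "d"), ("V_V", "d"),
    ("_#", "t"), ("_#", "t"), ("_#", "t"), ("_#", "d"),
]

pair_counts = Counter(observations)
context_counts = Counter(ctx for ctx, _ in observations)

def surprisal(segment, context):
    """Informativity of a segment in context: -log2 P(segment | context)."""
    p = pair_counts[(context, segment)] / context_counts[context]
    return -math.log2(p)

# A segment that is predictable in its context contributes little to message
# transmission (low surprisal), so reducing it costs the message little;
# an unpredictable segment contributes more.
for ctx, seg in [("V_V", "d"), ("_#", "d")]:
    print(f"surprisal of /{seg}/ in context {ctx}: {surprisal(seg, ctx):.2f} bits")
```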


2012 ◽  
Vol 59 (5) ◽  
pp. 1-35 ◽  
Author(s):  
Ashwinkumar Badanidiyuru ◽  
Arpita Patra ◽  
Ashish Choudhury ◽  
Kannan Srinathan ◽  
C. Pandu Rangan

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Asgar Ali ◽  
K.N. Badhani ◽  
Ashish Kumar

Purpose: This study aims to investigate the risk-return trade-off in the Indian equity market, both at the aggregate equity market level and in the cross-section of stock returns, using alternative risk measures.

Design/methodology/approach: The study uses weekly and monthly data on 3,085 Bombay Stock Exchange-listed stocks spanning 20 years, from January 2000 to December 2019. The risk-return trade-off is evaluated at the aggregate equity market level using value-weighted and equal-weighted broad portfolios. Eight different risk proxies belonging to the conventional, downside and extreme risk categories are considered to analyse the cross-sectional risk-return relationship.

Findings: The results show a positive equity premium on the value-weighted portfolio; however, the equal-weighted portfolio of the same stocks earns an average return lower than the return on 91-day Treasury Bills. This anomaly is mainly caused by an inverted size premium in the Indian equity market, as small stocks earn lower returns than big stocks. The study finds a strong negative risk-return relationship across the different risk proxies. In a subsample of more liquid stocks, however, the low-risk anomaly becomes moderate for most risk proxies, with the exception of the beta anomaly. This anomalous relationship appears to be driven by small, less liquid stocks with low institutional ownership and higher short-selling constraints.

Practical implications: The findings have important implications for investors, managers and practitioners. Investors can incorporate the highlighted anomalies into their investment strategies to earn higher returns. Managers can use these findings in capital budgeting decisions, resource allocation and a diverse range of other direct and indirect decisions, particularly in emerging markets such as India. The findings also provide insights for practitioners when valuing firms.

Originality/value: The study is among the earlier attempts to examine the risk-return trade-off in an emerging equity market, both at the aggregate market level and in the cross-section of stock returns, using alternative measures of risk and expected return.
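The aggregate-level comparison described under Findings comes down to comparing the average excess returns of equal-weighted and value-weighted portfolios against the T-bill rate. The sketch below illustrates that calculation on invented numbers; the returns, market capitalizations and risk-free rate are placeholders, not data from the study.

```python
import numpy as np

# Hypothetical toy data: periodic returns (rows = periods, cols = stocks) and
# market capitalizations at the start of each period (value weights).
# All numbers are invented for illustration.
returns = np.array([
    [0.02, -0.01, 0.05, 0.03],
    [0.01,  0.04, -0.02, 0.00],
    [-0.03, 0.02, 0.06, 0.01],
])
market_caps = np.array([
    [500.0, 50.0, 5.0, 1.0],
    [510.0, 52.0, 5.2, 1.0],
    [505.0, 54.0, 5.5, 1.0],
])
rf = 0.004  # stand-in for the 91-day T-bill rate per period

# Equal-weighted portfolio: simple average across stocks each period.
ew_returns = returns.mean(axis=1)

# Value-weighted portfolio: weights proportional to market capitalization.
weights = market_caps / market_caps.sum(axis=1, keepdims=True)
vw_returns = (weights * returns).sum(axis=1)

# Aggregate-level comparison: average excess return under each weighting.
print("equal-weighted premium:", ew_returns.mean() - rf)
print("value-weighted premium:", vw_returns.mean() - rf)
```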


2006 ◽  
Vol 04 (03) ◽  
pp. 383-393 ◽  
Author(s):  
GERARDO ADESSO ◽  
FABRIZIO ILLUMINATI

It is a central trait of quantum information theory that there exist limitations to the free sharing of quantum correlations among multiple parties. Such monogamy constraints were introduced in a landmark paper by Coffman, Kundu and Wootters, who derived a quantitative inequality expressing a trade-off between the pairwise and the genuine tripartite entanglement of three-qubit states. Since then, considerable effort has been devoted to the investigation of distributed entanglement in multipartite quantum systems. In this paper we report, in a unifying framework, a bird's-eye view of the most relevant results established so far on entanglement sharing in quantum systems. We take off from the domain of N qubits, graze qudits, and finally land in the almost unexplored territory of multimode Gaussian states of continuous-variable systems.
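For reference, the trade-off derived by Coffman, Kundu and Wootters is usually written in terms of the tangle τ (the squared concurrence); for a three-qubit state it reads:

```latex
% CKW monogamy inequality: the entanglement qubit A shares pairwise with B
% and with C cannot exceed the entanglement A shares with the pair BC;
% the gap defines the residual (genuine tripartite) tangle \tau_{ABC}.
\tau_{A|B} + \tau_{A|C} \le \tau_{A|BC},
\qquad
\tau_{ABC} \equiv \tau_{A|BC} - \tau_{A|B} - \tau_{A|C} \ge 0 .
```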


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 603 ◽  
Author(s):  
Arthur Prat-Carrabin ◽  
Florent Meyniel ◽  
Misha Tsodyks ◽  
Rava Azeredo da Silveira

When humans infer underlying probabilities from stochastic observations, they exhibit biases and variability that cannot be explained on the basis of sound, Bayesian manipulations of probability. This is especially salient when beliefs are updated as a function of sequential observations. We introduce a theoretical framework in which biases and variability emerge from a trade-off between Bayesian inference and the cognitive cost of carrying out probabilistic computations. We consider two forms of the cost: a precision cost and an unpredictability cost; these penalize beliefs that are less entropic and less deterministic, respectively. We apply our framework to the case of a Bernoulli variable: the bias of a coin is inferred from a sequence of coin flips. Theoretical predictions are qualitatively different depending on the form of the cost. A precision cost induces overestimation of small probabilities, on average, and a limited memory of past observations, and, consequently, a fluctuating bias. An unpredictability cost induces underestimation of small probabilities and a fixed bias that remains appreciable even for nearly unbiased observations. The case of a fair (equiprobable) coin, however, is singular, with non-trivial and slow fluctuations in the inferred bias. The proposed framework of costly Bayesian inference illustrates the richness of a 'resource-rational' (or 'bounded-rational') picture of seemingly irrational human cognition.
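A minimal way to see how a precision cost can distort inference of a Bernoulli parameter is to temper the posterior after each exact Bayesian update, which keeps beliefs from becoming too sharply peaked, i.e. penalizes low-entropy beliefs. The sketch below is a simplified stand-in for the framework described above, not the authors' model; the grid, cost weight and true bias are arbitrary choices.

```python
import numpy as np

# Grid approximation of the posterior over the coin bias p in (0, 1).
grid = np.linspace(0.01, 0.99, 99)
posterior = np.ones_like(grid) / grid.size  # uniform prior

def update(posterior, flip, cost_weight=0.3):
    """One costly-inference step on a Bernoulli observation (1 = heads).

    Exact Bayesian update, followed by a crude stand-in for the precision
    cost: tempering the posterior (raising it to a power below 1) keeps it
    from becoming too concentrated, i.e. penalizes low-entropy beliefs.
    This illustrates the idea, not the paper's exact solution.
    """
    likelihood = grid if flip == 1 else (1.0 - grid)
    posterior = posterior * likelihood            # Bayes' rule (unnormalized)
    posterior = posterior ** (1.0 - cost_weight)  # entropy-favouring penalty
    return posterior / posterior.sum()

rng = np.random.default_rng(0)
true_bias = 0.2
for flip in rng.binomial(1, true_bias, size=200):
    posterior = update(posterior, flip)

estimate = (grid * posterior).sum()
# With the precision cost, small probabilities tend to be overestimated and
# the belief keeps fluctuating with recent flips, as described above.
print(f"true bias {true_bias}, inferred bias {estimate:.3f}")
```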


2019 ◽  
Author(s):  
Shuji Shinohara ◽  
Nobuhito Manome ◽  
Kouta Suzuki ◽  
Ung-il Chung ◽  
Tatsuji Takahashi ◽  
...  

Bayesian inference is the process of narrowing down the hypotheses (causes) to the one that best explains the observational data (effects). To estimate a cause accurately, a considerable amount of data must be observed for as long as possible. However, the object of inference is not always constant. In that case, a method such as the exponential moving average (EMA) with a discounting rate is used; to improve the ability to respond to a sudden change, the discounting rate must be increased. A trade-off is thus established: increasing the discounting rate improves followability but reduces accuracy. Here, we propose an extended Bayesian inference (EBI) that incorporates human-like causal inference. We show that incorporating the causal inference introduces both learning and forgetting effects into Bayesian inference. We evaluate the estimation performance of the EBI on the learning task of a dynamically changing Gaussian mixture model, comparing it with the EMA and a sequential discounting expectation-maximization algorithm. The EBI was shown to modify the trade-off observed in the EMA.
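The EMA trade-off that the EBI is designed to modify can be seen directly in a small simulation: a larger discounting rate tracks a sudden change faster but averages over fewer effective samples. The sketch below illustrates only the baseline EMA behaviour on an invented non-stationary stream; it does not implement the EBI itself.

```python
import numpy as np

def ema(observations, discount):
    """Exponential moving average with discounting rate `discount` in (0, 1].

    A larger discount reacts faster to a sudden change in the generating
    process (better followability) but averages over fewer effective samples,
    so the estimate is noisier in stationary stretches (worse accuracy).
    """
    estimate = observations[0]
    estimates = []
    for x in observations:
        estimate = (1.0 - discount) * estimate + discount * x
        estimates.append(estimate)
    return np.array(estimates)

# Toy non-stationary stream: the mean jumps from 0 to 3 halfway through.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])

for discount in (0.01, 0.2):
    est = ema(data, discount)
    lag_error = abs(est[520] - 3.0)     # how quickly the jump is followed
    steady_error = abs(est[499] - 0.0)  # how noisy the estimate is before it
    print(f"discount={discount}: error just after jump {lag_error:.2f}, "
          f"error just before jump {steady_error:.2f}")
```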


Graph theory provides a robust tool for modeling a diverse range of subjects. It has been widely applied to computer networks and even to network attacks. However, the incidence function in graph theory is often given only cursory treatment. The current research applies a range of information-theoretic equations to describe the incidence function in a graph of a computer network, improving the modeling of computer network attacks and intrusions. Specifically, attacks that involve substantial changes in network traffic can be modeled more accurately if the incidence function of the graph is expanded.
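One simple way to attach an information-theoretic quantity to the incidence structure, in the spirit of the modeling described above, is the Shannon entropy of the traffic distribution over edges: a flooding attack concentrates traffic on one incidence and shows up as a drop in entropy. The graph, traffic counts and the choice of entropy below are hypothetical illustrations, not the equations used in the research.

```python
import math

# Hypothetical computer network: the incidence function maps each edge to the
# pair of hosts it connects, annotated here with observed traffic volume.
# All values are invented for illustration.
edges = {
    "e1": {"ends": ("fw", "web"), "packets": 9500},
    "e2": {"ends": ("fw", "db"),  "packets": 300},
    "e3": {"ends": ("web", "db"), "packets": 200},
}

def traffic_entropy(edges):
    """Shannon entropy (bits) of the traffic distribution across edges."""
    total = sum(e["packets"] for e in edges.values())
    return -sum((e["packets"] / total) * math.log2(e["packets"] / total)
                for e in edges.values())

baseline = traffic_entropy(edges)

# A flood-style attack concentrates traffic on one incidence (edge), which
# shows up as a drop in entropy relative to the baseline.
edges_under_attack = {k: dict(v) for k, v in edges.items()}
edges_under_attack["e1"]["packets"] = 200000
print(f"baseline entropy {baseline:.2f} bits, "
      f"under attack {traffic_entropy(edges_under_attack):.2f} bits")
```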


2017 ◽  
Vol 14 (130) ◽  
pp. 20170166 ◽  
Author(s):  
Sarah E. Marzen ◽  
Simon DeDeo

In complex environments, there are costs to both ignorance and perception. An organism needs to track fitness-relevant information about its world, but the more information it tracks, the more resources it must devote to perception. As a first step towards a general understanding of this trade-off, we use a tool from information theory, rate–distortion theory, to study large, unstructured environments with fixed, randomly drawn penalties for stimuli confusion (‘distortions’). We identify two distinct regimes for organisms in these environments: a high-fidelity regime where perceptual costs grow linearly with environmental complexity, and a low-fidelity regime where perceptual costs are, remarkably, independent of the number of environmental states. This suggests that in environments of rapidly increasing complexity, well-adapted organisms will find themselves able to make, just barely, the most subtle distinctions in their environment.
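The rate-distortion curve for a discrete source with a fixed distortion matrix can be traced numerically with the standard Blahut-Arimoto iteration, sketched below for randomly drawn confusion penalties in the spirit of the setup described above; the environment size, distortion distribution and trade-off parameter are illustrative choices, not those of the paper.

```python
import numpy as np

def blahut_arimoto(p_x, distortion, beta, iters=200):
    """Rate-distortion point via Blahut-Arimoto for a discrete source.

    p_x: source distribution over environmental states.
    distortion: matrix d[x, x_hat] of penalties for confusing x with x_hat.
    beta: trade-off parameter (large beta favours fidelity over rate).
    Returns (rate in bits, expected distortion).
    """
    n_x, n_xhat = distortion.shape
    q_xhat = np.full(n_xhat, 1.0 / n_xhat)  # output (percept) marginal
    for _ in range(iters):
        # Optimal channel given the current output marginal.
        q_cond = q_xhat[None, :] * np.exp(-beta * distortion)
        q_cond /= q_cond.sum(axis=1, keepdims=True)
        # Output marginal induced by that channel.
        q_xhat = p_x @ q_cond
    rate = np.sum(p_x[:, None] * q_cond * np.log2(q_cond / q_xhat[None, :]))
    expected_distortion = np.sum(p_x[:, None] * q_cond * distortion)
    return rate, expected_distortion

# Large unstructured environment with fixed, randomly drawn confusion
# penalties; sizes and the distortion distribution are arbitrary choices.
rng = np.random.default_rng(0)
n_states = 200
p_x = np.full(n_states, 1.0 / n_states)
d = rng.uniform(size=(n_states, n_states))
np.fill_diagonal(d, 0.0)  # no penalty for perceiving the state correctly

for beta in (1.0, 20.0):  # low- vs high-fidelity regimes
    rate, dist = blahut_arimoto(p_x, d, beta)
    print(f"beta={beta}: rate {rate:.2f} bits, distortion {dist:.3f}")
```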


2019 ◽  
Vol 43 ◽  
Author(s):  
Michael Gilead ◽  
Yaacov Trope ◽  
Nira Liberman

In recent years, scientists have increasingly turned to investigating the predictive nature of cognition. We argue that prediction relies on abstraction, and thus theories of predictive cognition need an explicit theory of abstract representation. We propose such a theory of the abstract representational capacities that allow humans to transcend the "here-and-now." Consistent with the predictive cognition literature, we suggest that the representational substrates of the mind are built as a hierarchy, ranging from the concrete to the abstract; however, we argue that there are qualitative differences between elements along this hierarchy, generating meaningful, often unacknowledged, diversity. Echoing views from philosophy, we suggest that the representational hierarchy can be parsed into: modality-specific representations, instantiated in perceptual similarity; multimodal representations, instantiated primarily in the discovery of spatiotemporal contiguity; and categorical representations, instantiated primarily in social interaction. These elements serve as the building blocks of complex structures discussed in cognitive psychology (e.g., episodes, scripts) and are the inputs for mental representations that behave like functions, typically discussed in linguistics (i.e., predicators). We support our argument for representational diversity by explaining how the elements in our ontology are all required to account for humans' predictive cognition (e.g., in subserving logic-based prediction and in optimizing the trade-off between accurate and detailed predictions) and by examining how the neuroscientific evidence coheres with our account. In doing so, we provide a testable model of the neural bases of conceptual cognition and highlight several important implications for research on self-projection, reinforcement learning, and predictive-processing models of psychopathology.

