Predictability and Information Theory. Part II: Imperfect Forecasts

2005 ◽  
Vol 62 (9) ◽  
pp. 3368-3381 ◽  
Author(s):  
Timothy DelSole

Abstract: This paper presents a framework for quantifying predictability based on the behavior of imperfect forecasts. The critical quantity in this framework is not the forecast distribution, as used in many other predictability studies, but the conditional distribution of the state given the forecasts, called the regression forecast distribution. The average predictability of the regression forecast distribution is given by a quantity called the mutual information. Standard inequalities in information theory show that this quantity is bounded above by the average predictability of the true system and by the average predictability of the forecast system. These bounds clarify the role of potential predictability, about which many incorrect statements can be found in the literature. Mutual information has further attractive properties: it is invariant with respect to nonlinear transformations of the data, cannot be improved by manipulating the forecast, and reduces to familiar measures of correlation skill when the forecast and verification are joint normally distributed. The concept of potential predictable components is shown to define a lower-dimensional space that captures the full predictability of the regression forecast without loss of generality. The predictability of stationary, Gaussian, Markov systems is examined in detail. Some simple numerical examples suggest that imperfect forecasts are not always useful for joint normally distributed systems, since greater predictability can often be obtained directly from observations. Rather, the usefulness of imperfect forecasts appears to lie in the fact that they can identify potential predictable components and capture nonstationary and/or nonlinear behavior, which is difficult to capture by low-dimensional, empirical models estimated from short historical records.
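For joint normally distributed forecast-verification pairs, the reduction of mutual information to a measure of correlation skill has the closed form I = -(1/2) ln(1 - rho^2). A minimal numerical check of that identity (the simulated correlation value and sample size are illustrative, not from the paper):

```python
import numpy as np

def gaussian_mutual_info(rho):
    """Mutual information (in nats) between jointly normal variables
    with correlation coefficient rho: I = -0.5 * ln(1 - rho**2)."""
    return -0.5 * np.log(1.0 - rho ** 2)

rng = np.random.default_rng(0)
n = 200_000
rho = 0.8
# Simulate a jointly normal (verification, forecast) pair with correlation rho.
v = rng.standard_normal(n)
f = rho * v + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)

rho_hat = np.corrcoef(v, f)[0, 1]
print(gaussian_mutual_info(rho_hat))   # approx -0.5 * ln(0.36), about 0.51 nats
```

The estimate from the sample correlation matches the closed form, illustrating why correlation skill and average predictability coincide in the Gaussian case.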

2003 ◽  
Vol 15 (8) ◽  
pp. 1715-1749 ◽  
Author(s):  
Blaise Agüera y Arcas ◽  
Adrienne L. Fairhall ◽  
William Bialek

A spiking neuron “computes” by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the reverse correlation technique with white noise input provide a numerical strategy for extracting the relevant low-dimensional features from experimental data, and information theory can be used to evaluate the quality of the low-dimensional approximation. We apply these methods to analyze the simplest biophysically realistic model neuron, the Hodgkin-Huxley (HH) model, using this system to illustrate the general methodological issues. We focus on the features in the stimulus that trigger a spike, explicitly eliminating the effects of interactions between spikes. One can approximate this triggering “feature space” as a two-dimensional linear subspace in the high-dimensional space of input histories, capturing in this way a substantial fraction of the mutual information between inputs and spike time. We find that an even better approximation, however, is to describe the relevant subspace as two dimensional but curved; in this way, we can capture 90% of the mutual information even at high time resolution. Our analysis provides a new understanding of the computational properties of the HH model. While it is common to approximate neural behavior as “integrate and fire,” the HH model is not an integrator nor is it well described by a single threshold.


Author(s):  
Wen-Ji Zhou ◽  
Yang Yu ◽  
Min-Ling Zhang

In multi-label classification tasks, labels are commonly related to each other. It has been well recognized that utilizing label relationships is essential to multi-label learning. One way to utilize label relationships is to map the labels to a lower-dimensional space of uncorrelated labels, where the relationships can be encoded in the mapping. Previous linear mapping methods commonly result in regression subproblems in the lower-dimensional label space. In this paper, we disclose that mapping to a low-dimensional multi-label regression problem can be worse than mapping to a classification problem, since regression requires a more complex model than classification. We then propose the binary linear compression (BILC) method, which results in a binary label space and thus classification subproblems. Experiments on several multi-label datasets show that employing classification in the embedded space results in much simpler models than regression, leading to smaller structural risk. The proposed method is also shown to be superior to some state-of-the-art approaches.
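A generic sketch of the binary-compression idea (a random projection with thresholding, not the authors' BILC algorithm; all sizes and the latent-pattern generator are made up): compress correlated binary label vectors into a few binary codes, so each embedded coordinate is a classification target rather than a real-valued regression target, and check that the codes roughly preserve label-space geometry.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, d = 60, 16, 8        # samples, original labels, compressed binary bits

# Toy correlated multi-label matrix: rows drawn from a few latent patterns.
patterns = (rng.random((4, L)) < 0.4).astype(int)
Y = patterns[rng.integers(0, 4, n)]
Y = Y ^ (rng.random((n, L)) < 0.05).astype(int)   # small label noise

# Binary compression: random projection + per-coordinate thresholding,
# so each of the d embedded targets is binary -> classification, not
# regression, subproblems in the embedded space.
G = rng.standard_normal((L, d))
P = Y @ G
C = (P > np.median(P, axis=0)).astype(int)        # n x d binary codes

def pairwise_hamming(M):
    return np.array([[np.sum(a != b) for b in M] for a in M], float)

Dy = pairwise_hamming(Y).ravel()
Dc = pairwise_hamming(C).ravel()
print(np.corrcoef(Dy, Dc)[0, 1])                  # clearly positive
```

Rows generated from the same latent pattern receive near-identical codes, so Hamming distances in the code space track distances in the original label space.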


Author(s):  
Alyssa Ney

This chapter considers and critiques some strategies for solving the macro-object problem for wave function realism. This is the problem of how a wave function understood as a field on a high-dimensional space may come to make up or constitute the low-dimensional, macroscopic objects of our experience. It is first noted that simply invoking correspondences between particle configurations and states of the wave function will not suffice to solve the macro-object problem, following issues noted previously by Maudlin and Monton. More sophisticated strategies are considered that appeal to functionalism. It is argued that these functionalist strategies for recovering low-dimensional macroscopic objects from the wave function also do not succeed.


2002 ◽  
Vol 14 (5) ◽  
pp. 1195-1232 ◽  
Author(s):  
Douglas L. T. Rohde

Multidimensional scaling (MDS) is the process of transforming a set of points in a high-dimensional space to a lower-dimensional one while preserving the relative distances between pairs of points. Although effective methods have been developed for solving a variety of MDS problems, they mainly depend on the vectors in the lower-dimensional space having real-valued components. For some applications, the training of neural networks in particular, it is preferable or necessary to obtain vectors in a discrete, binary space. Unfortunately, MDS into a low-dimensional discrete space appears to be a significantly harder problem than MDS into a continuous space. This article introduces and analyzes several methods for performing approximately optimized binary MDS.
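A minimal sketch of the binary MDS setting, with a crude greedy bit-flip search standing in for the article's more careful optimization methods (all sizes are arbitrary): minimize the mismatch between normalized Hamming distances in the binary code space and the target distances from the source space.

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, b = 30, 10, 8             # points, source dimension, code bits

X = rng.standard_normal((n, D))
Dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Dx /= Dx.max()                                  # target distances in [0, 1]

B = rng.integers(0, 2, (n, b))                  # random binary start

def stress(B):
    """Squared mismatch between normalized Hamming and target distances."""
    Db = (B[:, None, :] != B[None, :, :]).sum(-1) / b
    return float(np.sum((Db - Dx) ** 2))

s_init = s_cur = stress(B)
improved = True
while improved:                                 # greedy single-bit flips
    improved = False
    for i in range(n):
        for j in range(b):
            B[i, j] ^= 1
            s_new = stress(B)
            if s_new < s_cur - 1e-12:
                s_cur, improved = s_new, True
            else:
                B[i, j] ^= 1                    # revert a non-improving flip

print(s_init, s_cur)   # stress drops from the random start
```

The discreteness is what makes the problem hard: each coordinate contributes a quantized step to every pairwise distance, so gradient methods from continuous MDS do not apply directly.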


2021 ◽  
Author(s):  
Tal Einav ◽  
Brian Cleary

Summary: Characterizing the antibody response against large panels of viral variants provides unique insight into key processes that shape viral evolution and host antibody repertoires, and has become critical to the development of new vaccine strategies. Given the enormous diversity of circulating virus strains and antibody responses, exhaustive testing of all antibody-virus interactions is infeasible. However, prior studies have demonstrated that, despite the complexity of these interactions, their functional phenotypes can be characterized in a vastly simpler and lower-dimensional space, suggesting that matrix completion of relatively few measurements could accurately predict unmeasured antibody-virus interactions. Here, we combine available data from several of the largest-scale studies for both influenza and HIV-1 and demonstrate how matrix completion can substantially expedite experiments. We explore how prediction accuracy evolves as the number of available measurements changes and approximate the number of additional measurements necessary in several highly incomplete datasets (suggesting ∼250,000 measurements could be saved). In addition, we show how the method can be used to combine disparate datasets, even when the number of available measurements is below the theoretical limit for successful prediction. Our results suggest new approaches to improve ongoing experimental design and could be readily generalized to other viruses or, more broadly, to other low-dimensional biological datasets.
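The core matrix-completion claim can be illustrated on a synthetic low-rank panel (a stand-in for real antibody-virus titers; the rank, sizes, sampling fraction, and the iterative SVD imputation scheme are all illustrative assumptions, not the study's own algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 40, 30, 3             # "antibodies", "viruses", assumed rank

# Synthetic low-rank antibody x virus panel standing in for real titers.
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.5           # only ~half the entries "measured"

# Iterative rank-r SVD imputation, one simple matrix-completion scheme.
X = np.where(mask, M, 0.0)
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :r] * s[:r]) @ Vt[:r]   # best rank-r approximation of X
    X = np.where(mask, M, X_low)          # overwrite only unmeasured entries

rel_err = np.linalg.norm((X_low - M)[~mask]) / np.linalg.norm(M[~mask])
print(rel_err)   # small: held-out entries are predicted from ~50% coverage
```

Because the panel is (approximately) low rank, far fewer measurements than entries suffice, which is the mechanism behind the projected savings in the abstract.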


Author(s):  
Erik Voeten

This chapter investigates how ideological contestation has shaped the institutions that protect foreign investment from expropriation. It explains how a focus on competition in a low-dimensional ideological space helps one make sense of the emergence of the investment regime and adjustments to it. From the U.S. perspective, the investment regime is partially about protecting the specific assets of American investors. Yet this could be achieved through other means. The institutional regime is also about advancing principles favored by the United States over alternative principles advocated by the Soviet Union and other states. This chapter first details ideological conflict during the Cold War. It then uses the framework from Chapter 4 to analyze the role of ideology in determining which countries did and did not sign bilateral investment treaties (BITs) with the United States. Finally, the chapter shows that governments that changed their ideological orientations since originally negotiating BITs are the most likely to renegotiate or end treaties. The rational-functionalist rationales of investment agreements must be understood against the backdrop of fierce ideological competition in a low-dimensional space.


NeuroImage ◽  
2021 ◽  
pp. 118200
Author(s):  
Sayan Ghosal ◽  
Qiang Chen ◽  
Giulio Pergola ◽  
Aaron L. Goldman ◽  
William Ulrich ◽  
...  

2021 ◽  
Vol 2021 (7) ◽  
Author(s):  
Dipankar Barman ◽  
Subhajit Barman ◽  
Bibhas Ranjan Majhi

Abstract: We investigate the effects of the field temperature T(f) on entanglement harvesting between two uniformly accelerated detectors. For their parallel motion, the thermal nature of the fields does not produce any entanglement, and therefore the outcome is the same as in the non-thermal situation. On the contrary, T(f) affects entanglement harvesting when the detectors are in anti-parallel motion, i.e., when detectors A and B are in the right and left Rindler wedges, respectively. While for T(f) = 0 entanglement harvesting is possible for all values of A's acceleration aA, in the presence of temperature it is possible only within a narrow range of aA. In (1 + 1) dimensions, the range starts from a specific value and extends to infinity, and as we increase T(f), the minimum value of aA required for entanglement harvesting increases. Moreover, above a critical value aA = ac, harvesting increases as we increase T(f), the opposite of its behavior below ac. In (1 + 3) dimensions there are several critical values when the detectors have different accelerations. Contrary to the single range in (1 + 1) dimensions, here harvesting is possible within several discrete ranges of aA. Interestingly, for equal accelerations there is a single critical point, with behavior quite similar to the (1 + 1)-dimensional results. We also discuss the dependence of the mutual information between these detectors on aA and T(f).


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 394
Author(s):  
Xin Yan ◽  
Yanxing Qi ◽  
Yinmeng Wang ◽  
Yuanyuan Wang

Plane wave compounding (PWC) is a promising modality for improving imaging quality while maintaining the high frame rate of ultrafast ultrasound imaging. In this paper, a novel beamforming method is proposed to achieve higher resolution and contrast with low complexity. A minimum variance (MV) weight calculated by the partial generalized sidelobe canceler is adopted to beamform the receiving array signals. A dimension-reduction technique is introduced to project the data into a lower-dimensional space, which also permits a large subarray length. The multi-wave receiving covariance matrix is estimated and then used to determine a single weight. Afterwards, a fast second-order reformulation of the delay multiply and sum (DMAS) is developed as nonlinear compounding to composite the beamforming outputs of multiple transmissions. Simulation, phantom, in vivo, and robustness experiments were carried out to evaluate the performance of the proposed method. Compared with the delay and sum (DAS) beamformer, the proposed method achieved an 86.3% narrower main lobe width and a 112% higher contrast ratio in simulations. The robustness of the proposed method to channel noise is effectively enhanced at the same time. Furthermore, it maintains linear computational complexity, meaning it has the potential to be implemented for real-time response.
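The DMAS idea behind the nonlinear compounding step can be sketched in isolation (toy channel count and signal values, not the paper's fast second-order reformulation): each pair of already-delayed channel samples is multiplied, and the product is sign-preserved and square-rooted to keep the output in the signal's original units before summing.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                     # receive channels (toy value)

def dmas(s):
    """Delay-multiply-and-sum over already-delayed samples s: the signed
    square root keeps the pairwise products in the signal's units."""
    out = 0.0
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            p = s[i] * s[j]
            out += np.sign(p) * np.sqrt(abs(p))
    return out

coherent = 0.5 + 0.05 * rng.standard_normal(N)     # echoes in phase
incoherent = 0.5 * rng.standard_normal(N)          # clutter / noise

print(dmas(coherent), dmas(incoherent))
```

The pairwise products reward inter-channel coherence, which is what narrows the main lobe and raises contrast relative to plain delay-and-sum averaging.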


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4454 ◽  
Author(s):  
Marek Piorecky ◽  
Vlastimil Koudelka ◽  
Jan Strobl ◽  
Martin Brunovsky ◽  
Vladimir Krajca

Simultaneous recordings of electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) are at the forefront of technologies of interest to physicians and scientists because they combine the benefits of both modalities: better temporal resolution (hdEEG) and better spatial resolution (fMRI). However, EEG measurements in the scanner are corrupted by electromagnetic fields induced in the leads as a result of gradient switching, slight head movements, and vibrations, and by changes in the measured potential due to the Hall phenomenon. The aim of this study is to design and test a methodology for inspecting hidden EEG structures with respect to artifacts. We propose a top-down strategy to obtain additional information that is not visible in a single recording. A time-domain independent component analysis algorithm was employed to obtain independent components and spatial weights. A nonlinear dimension-reduction technique, t-distributed stochastic neighbor embedding (t-SNE), was used to create a low-dimensional space, which was then partitioned using density-based spatial clustering of applications with noise (DBSCAN). The relationships between the discovered data structure and the criteria used were investigated. As a result, we were able to extract information from the data structure regarding electrooculographic, electrocardiographic, electromyographic and gradient artifacts. This new methodology could facilitate the identification of artifacts and their residues in simultaneous EEG-fMRI recordings.
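The ICA to t-SNE to DBSCAN chain can be sketched with off-the-shelf scikit-learn components (the synthetic multi-channel "EEG", segment length, and every parameter value below are placeholders; the study's pipeline is tuned to real EEG-fMRI recordings):

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Synthetic "EEG": linear mixtures of a few independent sources plus noise.
n_samples, n_channels, n_sources = 2000, 12, 4
S = rng.laplace(size=(n_samples, n_sources))        # super-Gaussian sources
A = rng.standard_normal((n_sources, n_channels))    # mixing matrix
X = S @ A + 0.05 * rng.standard_normal((n_samples, n_channels))

# 1) Temporal ICA: unmix the channels into independent components.
ica = FastICA(n_components=n_sources, random_state=0)
components = ica.fit_transform(X)                   # (n_samples, n_sources)

# 2) t-SNE: nonlinear reduction of short component segments to 2-D.
segments = components.reshape(100, -1)              # 100 flattened segments
emb = TSNE(n_components=2, perplexity=10,
           random_state=0).fit_transform(segments)

# 3) DBSCAN: density-based partition of the low-dimensional space
#    (label -1 marks points DBSCAN treats as noise).
labels = DBSCAN(eps=5.0, min_samples=3).fit_predict(emb)
print(emb.shape, set(labels))
```

In the study, clusters found this way are then matched against artifact criteria (ocular, cardiac, muscular, gradient) rather than inspected component by component.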

