Alternative Approaches for the Use of Uncertain Prior Information to Overcome the Rank-Deficiency of a Linear Model

Author(s):  
Burkhard Schaffrin ◽  
Kyle Snow ◽  
Xing Fang

1978 ◽  
Vol 39 (1-2) ◽  
pp. 113-127 ◽  
Author(s):  
T. Bree


2015 ◽  
Vol 9 (1) ◽  
pp. 698-704
Author(s):  
Shuangrui Chen ◽  
Quansheng Yan

Timely and accurate forecasting of a bridge's safety state is of great significance for maintenance. Bayesian forecasting derives the posterior distribution from the sampling information and the prior information; real-time online forecasting is realized by means of a recursive algorithm together with the stationarity assumption. A Bayesian dynamic linear model is created to forecast the reliability of the bridge on the basis of observed stress information from the bridge structure. According to the observed information, the model is constructed as a superposition of a constant-mean model and a seasonal-effect model. The analysis of a practical example illustrates that Bayesian dynamic linear models can provide an accurate real-time forecast of the reliability of the bridge.
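The recursive updating scheme described above follows the standard dynamic-linear-model (Kalman filter) recursions. Below is a minimal sketch combining a constant-mean (local level) component with a form-free seasonal component; the function names, state dimensions, noise variances, and simulated stress series are assumptions for illustration, not the authors' actual bridge model.

```python
import numpy as np

# Minimal sketch of a dynamic linear model (DLM): constant-mean (local level)
# component superposed with a seasonal-effect component, updated recursively
# with the standard Kalman recursions. All settings below are illustrative.

def build_dlm(period=4):
    """State = [level, seasonal_1, ..., seasonal_{period-1}]."""
    p = period
    F = np.zeros((1, p))              # observation vector: level + current seasonal
    F[0, 0] = 1.0
    F[0, 1] = 1.0
    G = np.eye(p)                     # evolution matrix
    G[1:, 1:] = 0.0
    G[1, 1:] = -1.0                   # new seasonal effect = -(sum of previous ones)
    G[2:, 1:-1] = np.eye(p - 2)       # shift the remaining seasonal effects down
    return F, G

def kalman_step(m, C, y, F, G, V, W):
    """One recursive update: prior -> one-step forecast -> posterior."""
    a = G @ m                         # prior state mean
    R = G @ C @ G.T + W               # prior state covariance
    f = (F @ a).item()                # one-step-ahead forecast mean
    Q = (F @ R @ F.T).item() + V      # one-step-ahead forecast variance
    A = R @ F.T / Q                   # adaptive (Kalman) gain
    m_new = a + (A * (y - f)).ravel() # posterior state mean
    C_new = R - A @ F @ R             # posterior state covariance
    return m_new, C_new, f, Q

# Example: filter a simulated stress series and report the one-step forecast.
F, G = build_dlm(period=4)
V, W = 0.5, 0.01 * np.eye(4)          # assumed observation / evolution variances
m, C = np.zeros(4), np.eye(4)         # vague prior
rng = np.random.default_rng(0)
stress = 10 + np.tile([1.0, 0.0, -1.0, 0.0], 8) + rng.normal(0, 0.7, 32)
for y in stress:
    m, C, f, Q = kalman_step(m, C, y, F, G, V, W)
print(f"one-step forecast: {f:.2f} +/- {np.sqrt(Q):.2f}")
```

In the bridge application, such one-step forecast distributions of the stress would presumably feed the reliability evaluation rather than being printed directly.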





2019 ◽  
Author(s):  
Marijn van Vliet ◽  
Riitta Salmelin

Abstract
Linear machine learning models “learn” a data transformation by being exposed to examples of input with the desired output, forming the basis for a variety of powerful techniques for analyzing neuroimaging data. However, their ability to learn the desired transformation is limited by the quality and size of the example dataset, which in neuroimaging studies is often notoriously noisy and small. In these cases, it is desirable to fine-tune the learned linear model using domain information beyond the example dataset. To this end, we present a framework that decomposes the weight matrix of a fitted linear model into three subcomponents: the data covariance, the identified signal of interest, and a normalizer. Inspecting these subcomponents in isolation provides an intuitive way to examine the inner workings of the model and assess its strengths and weaknesses. Furthermore, the three subcomponents may be altered, which provides a straightforward way to inject prior information and impose additional constraints. We refer to this process as “post-hoc modification” of a model and demonstrate how it can be used to achieve precise control over which aspects of the model are fitted to the data through machine learning and which are determined through domain information. As an example use case, we decode the associative strength between words from electroencephalography (EEG) reading data. Our results show how the decoding accuracy of two example linear models (ridge regression and logistic regression) can be boosted by incorporating information about the spatio-temporal nature of the data, domain information about the N400 evoked potential, and data from other participants.

Highlights:
- We present a framework to decompose any linear model into three subcomponents that are straightforward to interpret.
- By modifying the subcomponents before re-assembling them into a linear model, prior information and further constraints may be injected into the model.
- As an example, we boost the performance of a linear regressor and classifier by injecting knowledge about the spatio-temporal nature of the data, the N400 evoked potential, and data from other participants.
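A minimal sketch of the decompose-modify-reassemble idea described in the abstract, using ridge regression: the weights are split into a data-covariance term, a pattern (the identified signal of interest), and a scalar normalizer; the pattern is then altered with prior information (here an assumed smoothness constraint) before the weights are rebuilt. The variable names, the smoothing step, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative "post-hoc modification" of a linear model: decompose the
# fitted weights, alter the pattern with prior knowledge, and reassemble.

rng = np.random.default_rng(0)
n_samples, n_features = 200, 50
true_pattern = np.exp(-0.5 * ((np.arange(n_features) - 25) / 4.0) ** 2)
X = rng.normal(size=(n_samples, n_features))
y = X @ true_pattern + rng.normal(scale=2.0, size=n_samples)

# 1) Fit an ordinary linear model (ridge regression here).
model = Ridge(alpha=1.0).fit(X, y)
w = model.coef_

# 2) Decompose: pattern = data covariance applied to the weights,
#    normalizer = scaling that maps the filtered data onto the target.
cov_X = np.cov(X, rowvar=False)
pattern = cov_X @ w
normalizer = (X @ w).var()

# 3) Modify: inject prior information, e.g. smooth the pattern because the
#    signal of interest is assumed to vary slowly across features.
kernel = np.ones(5) / 5.0
pattern_mod = np.convolve(pattern, kernel, mode="same")

# 4) Reassemble the weights from the modified subcomponents.
w_mod = np.linalg.pinv(cov_X) @ pattern_mod
w_mod *= normalizer / max((X @ w_mod).var(), 1e-12)   # restore output scale

print("corr(w, true):    ", np.corrcoef(w, true_pattern)[0, 1].round(3))
print("corr(w_mod, true):", np.corrcoef(w_mod, true_pattern)[0, 1].round(3))
```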



1983 ◽  
Vol 146 (2) ◽  
pp. 202
Author(s):  
J. Q. Smith ◽  
H. Toutenberg


Author(s):  
D. E. Johnson

Increased specimen penetration, the principal advantage of high voltage microscopy, is accompanied by an increased need to utilize information on three-dimensional specimen structure available in the form of two-dimensional projections (i.e., micrographs). We are engaged in a program to develop methods which allow the maximum use of the information contained in a through-tilt series of micrographs to determine three-dimensional specimen structure.

In general, we are dealing with structures lacking in symmetry and with projections available from only a limited span of angles (±60°). For these reasons, we must make maximum use of any prior information available about the specimen. To do this in the most efficient manner, we have concentrated on iterative, real-space methods rather than Fourier methods of reconstruction. The particular iterative algorithm we have developed is given in detail in ref. 3. A block diagram of the complete reconstruction system is shown in fig. 1.
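The specific algorithm is given in ref. 3 and is not reproduced here; the sketch below only illustrates the general idea of an iterative, real-space reconstruction from a limited tilt range (±60°) in which prior information (here, non-negativity of the density) is enforced on each pass. The SIRT-style update, the phantom, and all names are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

# Generic iterative real-space reconstruction from a limited tilt series,
# with a simple prior constraint (non-negativity) applied after every pass.

def project(image, angle):
    """1-D projection of a 2-D image at the given tilt angle (degrees)."""
    return rotate(image, angle, reshape=False, order=1).sum(axis=0)

def backproject(values, angle, shape):
    """Smear a 1-D correction back across the image along the same rays."""
    smear = np.tile(values / shape[0], (shape[0], 1))
    return rotate(smear, -angle, reshape=False, order=1)

# Phantom (the unknown specimen) and its tilt series over a limited span.
n = 64
yy, xx = np.mgrid[:n, :n]
phantom = (((xx - 40) ** 2 + (yy - 24) ** 2) < 64).astype(float)
angles = np.linspace(-60, 60, 25)
tilt_series = [project(phantom, a) for a in angles]

# Iterative reconstruction: correct against each projection, then apply priors.
recon = np.zeros((n, n))
for _ in range(20):
    for a, measured in zip(angles, tilt_series):
        residual = measured - project(recon, a)
        recon += 0.5 * backproject(residual, a, recon.shape)
    recon = np.clip(recon, 0.0, None)        # prior: densities are non-negative

print("mean absolute error:", np.abs(recon - phantom).mean().round(3))
```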



Author(s):  
J.M. Cowley

By extrapolation of past experience, it would seem that the future of ultra-high resolution electron microscopy rests with the advances of electron optical engineering that are improving the instrumental stability of high voltage microscopes to achieve the theoretical resolutions of 1Å or better at 1MeV or higher energies. While these high voltage instruments will undoubtedly produce valuable results on chosen specimens, their general applicability has been questioned on the basis of the excessive radiation damage effects which may significantly modify the detailed structures of crystal defects within even the most radiation resistant materials in a period of a few seconds. Other considerations such as those of cost and convenience of use add to the inducement to consider seriously the possibilities for alternative approaches to the achievement of comparable resolutions.



Optimization ◽  
1976 ◽  
Vol 7 (5) ◽  
pp. 679-683
Author(s):  
Michael Nussbaum



