DENSE-GWP: AN IMPROVED PRIMARY VISUAL ENCODING MODEL BASED ON DENSE GABOR FEATURES

Author(s):  
YIBO CUI ◽  
CHI ZHANG ◽  
LINYUAN WANG ◽  
BIN YAN ◽  
LI TONG

Brain visual encoding models based on functional magnetic resonance imaging are growing increasingly popular. The Gabor wavelet pyramid (GWP) model is a classic example, exhibiting good prediction performance for the primary visual cortex (V1, V2, and V3). However, local variations in visual stimuli are highly complex in terms of spatial frequency, orientation, and position, posing a challenge for visual encoding models. Whether the GWP model can thoroughly extract informative and effective features from visual stimuli remains unclear. To this end, this paper proposes a dense GWP visual encoding model that improves the composition of the Gabor wavelet basis in three respects: spatial frequency, orientation, and position. The improved model, named the Dense-GWP model, extracts denser features from the image stimulus. A regularization-based optimization algorithm was used to select informative and effective features, which are crucial for predicting voxel activity in the region of interest. Extensive experimental results showed that the Dense-GWP model exhibits improved prediction performance and can therefore help further our understanding of the human visual perception mechanism.
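
As a rough illustration of this class of model (not the authors' implementation), the sketch below builds a small Gabor wavelet bank varying in spatial frequency and orientation and fits a ridge-regularized linear read-out per voxel. The function names, parameter grid, and the use of plain ridge regression in place of the paper's regularized feature selection are assumptions for illustration only.

```python
import numpy as np

def gabor_kernel(size, sf, theta, phase=0.0, sigma_cycles=0.5):
    """Real-valued Gabor wavelet: a sinusoidal carrier at spatial frequency
    `sf` (cycles per image) and orientation `theta` under a Gaussian envelope."""
    ax = (np.arange(size) - size / 2) / size          # coordinates in [-0.5, 0.5)
    y, x = np.meshgrid(ax, ax, indexing="ij")
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * (sigma_cycles / sf) ** 2))
    return envelope * np.cos(2 * np.pi * sf * xr + phase)

def gabor_features(images, sfs=(1, 2, 4, 8), n_orient=8):
    """Project each (size x size) image onto a bank of Gabor wavelets varying
    in spatial frequency and orientation.  A 'dense' variant would additionally
    tile each wavelet over many positions; that step is omitted here."""
    size = images.shape[-1]
    bank = np.stack([gabor_kernel(size, sf, k * np.pi / n_orient).ravel()
                     for sf in sfs for k in range(n_orient)])
    return images.reshape(len(images), -1) @ bank.T   # (n_images, n_wavelets)

def fit_voxel(features, bold, lam=10.0):
    """Ridge-regularized linear encoding model for one voxel; the L2 penalty
    stands in for the regularized feature-selection step described above."""
    X = np.column_stack([features, np.ones(len(features))])
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ bold)
```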

2017 ◽  
Author(s):  
Christian Keitel ◽  
Christopher SY Benwell ◽  
Gregor Thut ◽  
Joachim Gross

Recent studies have probed the role of the parieto-occipital alpha rhythm (8–12 Hz) in human visual perception through attempts to drive its neural generators. To that end, paradigms have used high-intensity, strictly periodic visual stimulation that created strong predictions about future stimulus occurrences and repeatedly demonstrated perceptual consequences in line with an entrainment of parieto-occipital alpha. Our study, in turn, examined the case of alpha entrainment by non-predictive, low-intensity, quasi-periodic visual stimulation within theta (4–7 Hz), alpha (8–13 Hz), and beta (14–20 Hz) frequency bands, i.e. a class of stimuli that more closely resembles the temporal characteristics of naturally occurring visual input. We have previously reported substantial neural phase-locking in EEG recordings during all three stimulation conditions. Here, we studied to what extent this phase-locking reflected an entrainment of intrinsic alpha rhythms in the same dataset. Specifically, we tested whether quasi-periodic visual stimulation affected several properties of parieto-occipital alpha generators. Speaking against an entrainment of intrinsic alpha rhythms by non-predictive, low-intensity, quasi-periodic visual stimulation, we found none of these properties to differ between stimulation frequency bands. In particular, alpha-band generators did not show increased sensitivity to alpha-band stimulation, and Bayesian inference corroborated evidence against an influence of stimulation frequency. Our results set boundary conditions for when and how to expect effects of entrainment of alpha generators and suggest that the parieto-occipital alpha rhythm may be more inert to external influences than previously thought.
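
The phase-locking referred to above is commonly quantified as inter-trial phase coherence, i.e. the length of the mean unit phase vector across trials. The following is a minimal sketch under that assumption, not the authors' analysis pipeline; the filter settings are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking(trials, fs, band):
    """Inter-trial phase coherence for band-passed single-channel EEG.
    trials: (n_trials, n_samples) array; fs: sampling rate in Hz;
    band: (low, high) edges in Hz, e.g. (8, 13) for the alpha band."""
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    # Length of the mean unit phase vector across trials, per time point:
    # 1 = perfectly aligned phases, ~0 = random phases.
    return np.abs(np.mean(np.exp(1j * phases), axis=0))
```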


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 162-162 ◽  
Author(s):  
T Troscianko ◽  
C A Parraga ◽  
G Brelstaff ◽  
D Carr ◽  
K Nelson

A common assumption in the study of the relationship between human vision and the visual environment is that human vision has developed in order to encode the incident information in an optimal manner. Such arguments have been used to support the 1/f dependence of scene content as a function of spatial frequency. In keeping with this assumption, we ask whether there are any important differences between the luminance and (r/g) chrominance Fourier spectra of natural scenes, the simple expectation being that the chrominance spectrum should be relatively richer in low spatial frequencies than the luminance spectrum, to correspond with the different shape of luminance and chrominance contrast sensitivity functions. We analysed a data set of 29 images of natural scenes (predominantly of vegetation at different distances) which were obtained with a hyper-spectral camera (measuring the scene through a set of 31 wavelength bands in the range 400–700 nm). The images were transformed to the three Smith–Pokorny cone fundamentals, and further transformed into ‘luminance’ (r+g) and ‘chrominance’ (r-g) images, with various assumptions being made about the relative weighting of the r and g components, and the form of the chrominance response. We then analysed the Fourier spectra of these images using logarithmic intervals in spatial frequency space. This allowed a determination of the total energy within each Fourier band for each of the luminance and chrominance representations. The results strongly indicate that, for the set of scenes studied here, there was no evidence of a predominance of low-spatial-frequency chrominance information. Two classes of explanation are possible: (a) that raw Fourier content may not be the main organising principle determining visual encoding of colour, and/or (b) that our scenes were atypical of what may have driven visual evolution. We present arguments in favour of both of these propositions.
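
A minimal sketch of the kind of band-energy analysis described above: total Fourier energy in logarithmically spaced spatial-frequency annuli of one image plane. The band edges and the exact luminance/chrominance weighting are assumptions, not the study's parameters.

```python
import numpy as np

def band_energies(image, n_bands=8):
    """Total Fourier energy in logarithmically spaced spatial-frequency annuli
    for one image plane (e.g. luminance r+g or chrominance r-g)."""
    F = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(F) ** 2
    h, w = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]   # cycles per pixel
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    radius = np.hypot(fy, fx)
    edges = np.logspace(np.log10(1.0 / max(h, w)), np.log10(0.5), n_bands + 1)
    return np.array([power[(radius >= lo) & (radius < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Illustrative usage: with r and g cone-response planes from the
# Smith-Pokorny transform,
#   lum, chrom = r + g, r - g
# compare band_energies(lum) against band_energies(chrom) across bands.
```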


Metals ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 234 ◽  
Author(s):  
Yuxuan Wang ◽  
Xuebang Wu ◽  
Xiangyan Li ◽  
Zhuoming Xie ◽  
Rui Liu ◽  
...  

Predicting the mechanical properties of metals from big data is of great importance to materials engineering. The present work applies artificial neural network (ANN) models to predict the tensile properties, including yield strength (YS) and ultimate tensile strength (UTS), of austenitic stainless steels as a function of chemical composition, heat treatment, and test temperature. The developed models show good prediction performance for YS and UTS, with R values over 0.93. The models were also tested to verify their reliability and accuracy in the context of metallurgical principles and other data published in the literature. In addition, a mean impact value analysis was conducted to quantitatively examine the relative significance of each input variable for the improvement of prediction performance. The trained models can be used as a guideline for the preparation and development of new austenitic stainless steels with the required tensile properties.
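
A minimal scikit-learn sketch of the modelling setup described above: a feed-forward network regressing YS and UTS on composition, heat-treatment, and temperature features. The architecture, feature layout, and placeholder data are illustrative assumptions, not the paper's model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: rows of [composition (wt.% C, Cr, Ni, ...), heat-treatment parameters,
#             test temperature]; y: columns [YS, UTS] in MPa. Placeholder data.
rng = np.random.default_rng(0)
X = rng.random((500, 12))
y = rng.random((500, 2)) * 400 + 200

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)

# Correlation between predicted and measured values (the "R value" above):
pred = model.predict(X_te)
for name, col in zip(["YS", "UTS"], range(2)):
    r = np.corrcoef(pred[:, col], y_te[:, col])[0, 1]
    print(f"{name}: R = {r:.3f}")
```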


Processes ◽  
2021 ◽  
Vol 9 (10) ◽  
pp. 1804
Author(s):  
John Ndisya ◽  
Ayub Gitau ◽  
Duncan Mbuge ◽  
Arman Arefi ◽  
Liliana Bădulescu ◽  
...  

In this study, hyperspectral imaging (HSI) and chemometrics were implemented to develop prediction models for moisture, colour, chemical and structural attributes of purple-speckled cocoyam slices subjected to hot-air drying. Since HSI systems are costly and computationally demanding, the selection of a narrow band of wavelengths can enable the utilisation of simpler multispectral systems. In this study, 19 optimal wavelengths in the spectral range 400–1700 nm were selected using PLS-BETA and PLS-VIP feature selection methods. Prediction models for the studied quality attributes were developed from the 19 wavelengths. Excellent prediction performance (RMSEP < 2.0, r2P > 0.90, RPDP > 3.5) was obtained for MC, RR, VS and aw. Good prediction performance (RMSEP < 8.0, r2P = 0.70–0.90, RPDP > 2.0) was obtained for PC, BI, CIELAB b*, chroma, TFC, TAA and hue angle. Additionally, PPA and WI were also predicted successfully. An assessment of the agreement between predictions from the non-invasive hyperspectral imaging technique and experimental results from the routine laboratory methods established the potential of the HSI technique to replace or be used interchangeably with laboratory measurements. Additionally, a comparison of full-spectrum model results and the reduced models demonstrated the potential replacement of HSI with simpler imaging systems.
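
For the wavelength-selection step, one common formulation of PLS-VIP (variable importance in projection) is sketched below with scikit-learn; the component count, threshold, and variable names are illustrative assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable Importance in Projection scores for a fitted PLSRegression
    model; wavelengths with VIP > 1 are conventionally treated as informative."""
    W = pls.x_weights_              # (n_wavelengths, n_components)
    T = pls.x_scores_               # (n_samples,     n_components)
    Q = pls.y_loadings_             # (n_targets,     n_components)
    p = W.shape[0]
    ss = np.sum(Q ** 2, axis=0) * np.sum(T ** 2, axis=0)  # explained y-variance per component
    w2 = (W / np.linalg.norm(W, axis=0)) ** 2
    return np.sqrt(p * (w2 @ ss) / ss.sum())

# Illustrative usage: X holds spectra (n_samples x n_wavelengths), y a quality
# attribute such as moisture content.
# pls = PLSRegression(n_components=10).fit(X, y)
# selected_bands = np.where(vip_scores(pls) > 1.0)[0]
```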


Author(s):  
Eline R. Kupers ◽  
Noah C. Benson ◽  
Jonathan Winawer

Synchronization of neuronal responses over large distances is hypothesized to be important for many cortical functions. However, no straightforward methods exist to estimate synchrony non-invasively in the living human brain. MEG and EEG measure the whole brain, but the sensors pool over large, overlapping cortical regions, obscuring the underlying neural synchrony. Here, we developed a model from stimulus to cortex to MEG sensors to disentangle neural synchrony from spatial pooling of the instrument. We find that synchrony across cortex has a surprisingly large and systematic effect on predicted MEG spatial topography. We then conducted visual MEG experiments and separated responses into stimulus-locked and broadband components. The stimulus-locked topography was similar to model predictions assuming synchronous neural sources, whereas the broadband topography was similar to model predictions assuming asynchronous sources. We infer that visual stimulation elicits two distinct types of neural responses, one highly synchronous and one largely asynchronous across cortex.
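
A toy sketch of why source synchrony matters at the sensors: many oscillating cortical sources are pooled through a gain (leadfield) matrix, and identical versus random source phases yield very different sensor amplitudes. The random leadfield and all parameters here are stand-ins, not the authors' head model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_sensors, n_times = 500, 100, 1000
f, fs = 12.0, 1000.0                          # 12 Hz oscillation, 1 kHz sampling
t = np.arange(n_times) / fs

# Gain ("leadfield") matrix: each sensor pools over many sources.
# A real model would derive this from an MRI-based head model.
gain = rng.normal(size=(n_sensors, n_sources))

def sensor_amplitude(phases):
    """Project oscillating sources with the given phases to the sensors and
    return the amplitude of the summed signal at each sensor."""
    sources = np.sin(2 * np.pi * f * t[None, :] + phases[:, None])
    sensors = gain @ sources                  # (n_sensors, n_times)
    return sensors.std(axis=1)

sync_topo = sensor_amplitude(np.zeros(n_sources))                    # identical phases
async_topo = sensor_amplitude(rng.uniform(0, 2 * np.pi, n_sources))  # random phases

# Synchronous sources add coherently and produce a markedly larger (and
# differently shaped) topography than asynchronous sources, which partly cancel.
print(sync_topo.mean() / async_topo.mean())
```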


1994 ◽  
Vol 11 (6) ◽  
pp. 1059-1076 ◽  
Author(s):  
Jin-Tang Xue ◽  
Charlene B.Y. Kim ◽  
Rodney J. Moore ◽  
Peter D. Spear

The superior colliculus (SC) projects to all layers of the cat's lateral geniculate nucleus (LGN) and thus is in a position to influence information transmission through the LGN. We investigated the function of the tecto-geniculate pathway by studying the responses of cat LGN neurons before, during, and after inactivating the SC with microinjections of lidocaine. The LGN cells were stimulated with drifting sine-wave gratings that varied in spatial frequency and contrast. Among 71 LGN neurons that were studied, 53 showed a statistically significant change in response during SC inactivation. Control experiments with mock injections indicated that some changes could be attributed to slow waxing and waning of responsiveness over time. However, this could not account for all of the effects of SC inactivation that were observed. Forty cells showed changes that were attributed to the removal of tecto-geniculate influences. About equal numbers of cells showed increases (22 cells) and decreases (18 cells) in some aspect of their response to visual stimuli during SC inactivation. The proportion of cells that showed tecto-geniculate influences was somewhat higher in the C layers (68% of the cells) than in the A layers (44% of the cells). In addition, among cells that showed a significant change in maximal response to visual stimulation, the change was larger for cells in the C layers (64% average change) than in the A layers (26% average change) and it was larger for W cells (61% average change) than for X and Y cells (29% average change). Nearly all of the X cells that showed changes had an increase in response, and nearly all of the Y cells had a decrease in response. In addition, across all cell classes, 80% of the cells with receptive fields < 15 deg from the area centralis had an increase in response, and 80% of the cells with receptive fields > 15 deg from the area centralis had a decrease in response. None of the LGN cells had significant changes in spatial resolution, and only three cells had changes in optimal spatial frequency. Ten cells had a change in contrast threshold, 25 cells had a change in contrast gain, and 29 cells had a change in the maximal response to a high-contrast stimulus. Thus, our results suggest that the tecto-geniculate pathway has little or no effect on spatial processing by LGN neurons. Rather, the major influence is on maximal response levels and the relationship between response and stimulus contrast. Several hypotheses about the role of the tecto-geniculate pathway in visual behavior are considered.


2021 ◽  
Vol 26 (6) ◽  
Author(s):  
Christoph Laaber ◽  
Mikael Basmaci ◽  
Pasquale Salza

Software benchmarks are only as good as the performance measurements they yield. Unstable benchmarks show high variability among repeated measurements, which causes uncertainty about the actual performance and complicates reliable change assessment. However, whether a benchmark is stable or unstable only becomes evident after it has been executed and its results are available. In this paper, we introduce a machine-learning-based approach to predict a benchmark’s stability without having to execute it. Our approach relies on 58 statically computed source code features, extracted for benchmark code and code called by a benchmark, related to (1) meta information, e.g., lines of code (LOC), (2) programming language elements, e.g., conditionals or loops, and (3) potentially performance-impacting standard library calls, e.g., file and network input/output (I/O). To assess our approach’s effectiveness, we perform a large-scale experiment on 4,461 Go benchmarks coming from 230 open-source software (OSS) projects. First, we assess the prediction performance of our machine learning models using 11 binary classification algorithms. We find that Random Forest performs best, with good prediction performance from 0.79 to 0.90 in terms of AUC and from 0.43 to 0.68 in terms of MCC. Second, we perform feature importance analyses for individual features and feature categories. We find that 7 features related to meta-information, slice usage, nested loops, and synchronization application programming interfaces (APIs) are individually important for good predictions; and that the combination of all features of the called source code is paramount for our model, while the combination of features of the benchmark itself is less important. Our results show that although benchmark stability is affected by more than just the source code, we can effectively utilize machine learning models to predict whether a benchmark will be stable or not ahead of execution. This enables spending precious testing time on reliable benchmarks, supporting developers to identify unstable benchmarks during development, allowing unstable benchmarks to be repeated more often, estimating stability in scenarios where repeated benchmark execution is infeasible or impossible, and warning developers if new benchmarks or existing benchmarks executed in new environments will be unstable.
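
A minimal scikit-learn sketch of the best-performing setup described above: a Random Forest classifier over static code features, evaluated with AUC and MCC. The feature matrix here is a placeholder for the 58 static features, and the parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef, roc_auc_score
from sklearn.model_selection import train_test_split

# X: one row per benchmark with statically computed features (LOC, counts of
# loops/conditionals, counts of file/network I/O calls, ...); y: 1 if the
# benchmark's measurements were stable, 0 otherwise. Placeholder data here.
rng = np.random.default_rng(0)
X = rng.random((4461, 58))
y = (rng.random(4461) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("MCC:", matthews_corrcoef(y_te, clf.predict(X_te)))

# Simple impurity-based feature importance (the paper uses a more
# elaborate importance analysis):
top = np.argsort(clf.feature_importances_)[::-1][:7]
print("Most important feature indices:", top)
```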


2021 ◽  
Vol 2021 (29) ◽  
pp. 368-373
Author(s):  
Yuechen Zhu ◽  
Ming Ronnier Luo

The goal of this study was to investigate chromatic adaptation under extreme chromatic lighting conditions using the magnitude estimation method. The locations of the lights on the CIE 1976 u′v′ plane were close to the spectrum locus, so the colour purity was far beyond that of previous studies, and the data could test the limitations of existing models. Two psychophysical experiments were carried out, and 1,470 estimations of corresponding colours were accumulated. The results showed that CAT16 gave good prediction performance for all the chromatic lights except blue, and that the degree of adaptation was relatively high, i.e. D was close to 1. The prediction for blue lighting was then modified; the results showed that the performance of CAM16 could be improved by correcting the matrix instead of the D values.
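
For reference, a sketch of the standard one-step CAT16 corresponding-colour computation is shown below (whites assumed normalized to Y = 100). The matrix is the published CAT16 matrix, but the rest should be read as a sketch of the standard CAM16 formulae, not the corrected model proposed in this study.

```python
import numpy as np

# CAT16 cone-like transformation matrix (Li et al., CAM16/CAT16).
M16 = np.array([[ 0.401288, 0.650173, -0.051461],
                [-0.250268, 1.204414,  0.045854],
                [-0.002079, 0.048952,  0.953127]])

def cat16(xyz, xyz_w, xyz_wr, D):
    """Corresponding colour of `xyz` seen under test white `xyz_w`, as it would
    appear under reference white `xyz_wr` (whites scaled to Y = 100).
    D is the degree of adaptation; D = 1 means complete adaptation."""
    rgb, rgb_w, rgb_wr = (M16 @ np.asarray(v, float) for v in (xyz, xyz_w, xyz_wr))
    scale = D * rgb_wr / rgb_w + 1.0 - D      # von Kries-style gains per channel
    return np.linalg.solve(M16, scale * rgb)  # back to XYZ

def degree_of_adaptation(L_A, F=1.0):
    """CAM16 formula for D from the adapting luminance L_A (cd/m^2); the study
    above found empirical D close to 1 for most of the chromatic lights."""
    return np.clip(F * (1.0 - (1.0 / 3.6) * np.exp((-L_A - 42.0) / 92.0)), 0.0, 1.0)
```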


2020 ◽  
Vol 21 (7) ◽  
pp. 2274 ◽  
Author(s):  
Aijun Deng ◽  
Huan Zhang ◽  
Wenyan Wang ◽  
Jun Zhang ◽  
Dingdong Fan ◽  
...  

The study of protein-protein interactions is of great biological significance, and the prediction of protein-protein interaction sites can promote the understanding of cellular biological activity and will be helpful for drug development. However, an uneven distribution between interaction and non-interaction sites is common because only a small number of protein interactions have been confirmed by experimental techniques, which greatly affects the predictive capability of computational methods. In this work, two imbalanced-data processing strategies based on the XGBoost algorithm were proposed to re-balance the original dataset using the inherent relationship between positive and negative samples for the prediction of protein-protein interaction sites. A feature extraction method was applied to represent the protein interaction sites based on the evolutionary conservation of proteins, and the influence of overlapping regions of positive and negative samples on prediction performance was considered. Our method showed good prediction performance, with a prediction accuracy of 0.807 and an MCC of 0.614, on an original dataset with 10,455 surface residues but only 2,297 interface residues. Experimental results demonstrated the effectiveness of our XGBoost-based method.
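
One common way to handle this kind of class imbalance with XGBoost is to weight the minority class via scale_pos_weight, sketched below with placeholder features; this is a generic re-balancing illustration, not the two strategies proposed in the paper.

```python
import numpy as np
from sklearn.metrics import accuracy_score, matthews_corrcoef
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# X: per-residue features (e.g. evolutionary conservation profiles);
# y: 1 for interface residues, 0 for other surface residues. Placeholder data
# with roughly the 2,297 : 10,455 imbalance described above.
rng = np.random.default_rng(0)
X = rng.random((10455, 20))
y = (rng.random(10455) < 2297 / 10455).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Weight the minority (interface) class by the negative/positive ratio.
clf = XGBClassifier(
    n_estimators=400,
    scale_pos_weight=(y_tr == 0).sum() / (y_tr == 1).sum(),
    eval_metric="logloss",
)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("MCC:", matthews_corrcoef(y_te, pred))
```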


2008 ◽  
Vol 2008 ◽  
pp. 1-12 ◽  
Author(s):  
Jean J. Chen ◽  
Marguerite Wieckowska ◽  
Ernst Meyer ◽  
G. Bruce Pike

An important aspect of functional magnetic resonance imaging (fMRI) is the study of brain hemodynamics, and MR arterial spin labeling (ASL) perfusion imaging has gained wide acceptance as a robust and noninvasive technique. However, the cerebral blood flow (CBF) measurements obtained with ASL fMRI have not been fully validated, particularly during global CBF modulations. We present a comparison of cerebral blood flow changes (ΔCBF) measured using a flow-sensitive alternating inversion recovery (FAIR) ASL perfusion method to those obtained using H₂¹⁵O PET, which is the current gold standard for in vivo imaging of CBF. To study regional and global CBF changes, a group of 10 healthy volunteers were imaged under identical experimental conditions during presentation of 5 levels of visual stimulation and one level of hypercapnia. The CBF changes were compared using 3 types of region-of-interest (ROI) masks. FAIR measurements of CBF changes were found to be slightly lower than those measured with PET (average ΔCBF of 21.5 ± 8.2% for FAIR versus 28.2 ± 12.8% for PET at maximum stimulation intensity). Nonetheless, there was a strong correlation between measurements of the two modalities. Finally, a t-test comparison of the slopes of the linear fits of PET versus ASL ΔCBF for all 3 ROI types indicated no significant difference from unity (P > .05).
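
A minimal sketch of the slope comparison described above: fit ASL ΔCBF against PET ΔCBF and t-test the slope against unity. The data arrays and variable names are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

def slope_vs_unity(pet, asl):
    """Least-squares slope of ASL vs PET CBF changes, with a t-test of the
    null hypothesis slope == 1 (n - 2 degrees of freedom)."""
    pet, asl = np.asarray(pet, float), np.asarray(asl, float)
    res = stats.linregress(pet, asl)
    t = (res.slope - 1.0) / res.stderr
    p = 2 * stats.t.sf(abs(t), df=len(pet) - 2)
    return res.slope, t, p

# Illustrative usage (not the study's numbers):
# slope, t, p = slope_vs_unity(pet_dcbf_percent, asl_dcbf_percent)
# p > 0.05 would indicate no significant deviation of the slope from unity.
```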

