Investigation of layer specific BOLD in the human visual cortex during visual attention

2021
Author(s):
Tim van Mourik
Peter J. Koopmans
Lauren J. Bains
David G. Norris
Janneke F.M. Jehee

Abstract: Directing spatial attention towards a particular stimulus location enhances cortical responses at the corresponding regions of cortex. How attention modulates the laminar response profile within the attended region, however, remains unclear. In this paper, we use high-field (7T) fMRI to investigate the effects of attention on laminar activity profiles in areas V1-V3, both when a stimulus was presented to the observer and in the absence of visual stimulation. Replicating previous findings, we find robust increases in the overall BOLD response for attended regions in cortex, both with and without visual stimulation. When analyzing the BOLD response across the individual layers in visual cortex, however, we observed no evidence for laminar-specific differentiation with attention. We offer several potential explanations for these results, including theoretical, methodological, and technical reasons. Additionally, we provide all data and pipelines openly, in order to promote analytic consistency across layer-specific studies, improve reproducibility, and decrease the false-positive rate that results from analytical flexibility.
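As a rough illustration of the kind of laminar analysis described here, the sketch below bins voxels by cortical depth and averages an attention contrast per bin. It assumes voxel-wise depth estimates are already available (in practice these come from a segmentation/laminar tool); both inputs are simulated and all names are hypothetical, so this is a sketch of the analysis shape, not the authors' pipeline.

```python
# Minimal laminar-profile sketch: average a voxel-wise BOLD contrast
# (attended minus unattended) within cortical-depth bins.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 5000
depth = rng.uniform(0.0, 1.0, n_voxels)           # 0 = white matter, 1 = pial surface
contrast = 0.5 + rng.normal(0.0, 1.0, n_voxels)   # simulated attention effect (a.u.)

n_bins = 6                                        # deep / middle / superficial sub-bins
bins = np.minimum((depth * n_bins).astype(int), n_bins - 1)
for b in range(n_bins):
    vals = contrast[bins == b]
    sem = vals.std(ddof=1) / np.sqrt(vals.size)
    print(f"depth bin {b}: mean contrast {vals.mean():+.3f} +/- {sem:.3f}")
# A flat profile across bins, as simulated here, would correspond to the
# "no laminar-specific differentiation" result reported above.
```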

2005
Vol 12 (4)
pp. 197-201
Author(s):
Nicholas J Wald
Joan K Morris
Simon Rish

Objective: To determine the quantitative effect on overall screening performance (detection rate for a given false-positive rate) of using several moderately strong, independent risk factors in combination as screening markers. Setting: Theoretical statistical analysis. Methods: For the purposes of this analysis, it was assumed that all risk factors were independent, had Gaussian distributions with the same standard deviation in affected and unaffected individuals and had the same screening performance. We determined the overall screening performance associated with using an increasing number of risk factors together, with each risk factor having a detection rate of 10%, 15% or 20% for a 5% false-positive rate. The overall screening performance was estimated as the detection rate for a 5% false-positive rate. Results: Combining the risk factors increased the screening performance, but the gain in detection at a constant false-positive rate was relatively modest and diminished with the addition of each risk factor. Combining three risk factors, each with a 15% detection rate for a 5% false-positive rate, yields a 28% detection rate. Combining five risk factors increases the detection rate to 39%. If the individual risk factors have a detection rate of 10% for a 5% false-positive rate, it would require combining about 15 such risk factors to achieve a comparable overall detection rate (41%). Conclusion: It is intuitively thought that combining moderately strong risk factors can substantially improve screening performance. For example, most cardiovascular risk factors that may be used in screening for ischaemic heart disease events, such as serum cholesterol and blood pressure, have a relatively modest screening performance (about 15% detection rate for a 5% false-positive rate). It would require the combination of about 15 or 20 such risk factors to achieve detection rates of about 80% for a 5% false-positive rate. This is impractical, given the risk factors so far discovered, because there are too few risk factors and their associations with disease are too weak.
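The Gaussian model in the Methods section can be reproduced in a few lines. The sketch below assumes, as the abstract states, independent equal-variance Gaussian risk factors; summing n such factors scales the mean separation by n and the standard deviation by sqrt(n), and the calculation recovers the 28%, 39% and 41% detection rates quoted above.

```python
# Reproduces the abstract's arithmetic under its stated assumptions:
# independent Gaussian risk factors with equal standard deviations in
# affected and unaffected individuals.
from scipy.stats import norm

def combined_detection_rate(dr_single, n, fpr=0.05):
    """Detection rate of n combined risk factors, each with dr_single at fpr."""
    delta = norm.ppf(1 - fpr) - norm.ppf(1 - dr_single)  # per-factor mean shift, SD units
    return 1 - norm.cdf(norm.ppf(1 - fpr) - n ** 0.5 * delta)

print(f"{combined_detection_rate(0.15, 3):.0%}")   # 28%, as reported
print(f"{combined_detection_rate(0.15, 5):.0%}")   # 39%
print(f"{combined_detection_rate(0.10, 15):.0%}")  # 41%
```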


2010
Vol 104 (1)
pp. 76-87
Author(s):
John T. Serences
Sameer Saproo

Voluntary and stimulus-driven shifts of attention can modulate the representation of behaviorally relevant stimuli in early areas of visual cortex. In turn, attended items are processed faster and more accurately, facilitating the selection of appropriate behavioral responses. Information processing is also strongly influenced by past experience, and recent studies indicate that the learned value of a stimulus can influence relatively late stages of decision making, such as the process of selecting a motor response. However, the learned value of a stimulus can also influence the magnitude of cortical responses in early sensory areas such as V1 and S1. These early effects of stimulus value are presumed to improve the quality of sensory representations; however, the nature of these modulations is not clear. They could reflect nonspecific changes in response amplitude associated with changes in general arousal, or they could reflect a bias in population responses so that high-value features are represented more robustly. To examine this issue, subjects performed a two-alternative forced-choice paradigm with a variable-interval payoff schedule to dynamically manipulate the relative value of two stimuli defined by their orientation (one was rotated clockwise from vertical, the other counterclockwise). Activation levels in visual cortex were monitored using functional MRI and feature-selective voxel tuning functions while subjects performed the behavioral task. The results suggest that value not only modulates the relative amplitude of responses in early areas of human visual cortex, but also sharpens the response profile across the populations of feature-selective neurons that encode the critical stimulus feature (orientation). Moreover, changes in space- or feature-based attention cannot easily explain the results, because representations of both the selected and the unselected stimuli underwent a similar feature-selective modulation. This sharpening of the population response profile could theoretically improve the probability of correctly discriminating high-value stimuli from low-value alternatives.
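The core of a voxel tuning function analysis like the one named above is to estimate each voxel's preferred feature value, re-centre every voxel's response profile on that preference, and average across voxels. The sketch below does this on simulated data; it illustrates the alignment-and-average logic only, not the authors' actual estimation procedure, and all names are hypothetical.

```python
# Voxel-tuning-function sketch: align voxel response profiles to each
# voxel's estimated preferred orientation, then average across voxels.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_orients = 200, 8
orientations = np.arange(n_orients)              # 8 orientation channels

prefs = rng.integers(0, n_orients, n_voxels)     # true (unknown) preferences
dist = np.abs(orientations[None, :] - prefs[:, None])
dist = np.minimum(dist, n_orients - dist)        # circular distance
responses = np.exp(-dist**2 / 4.0) + rng.normal(0, 0.3, (n_voxels, n_orients))

est_pref = responses.argmax(axis=1)              # estimated preferred orientation
aligned = np.stack([np.roll(responses[v], n_orients // 2 - est_pref[v])
                    for v in range(n_voxels)])
tuning = aligned.mean(axis=0)                    # population tuning function
print(np.round(tuning, 2))                       # peaks at the centre bin; a
# narrower peak for high-value stimuli would indicate "sharpening".
```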


Author(s):  
Anna Lin
Soon Song
Nancy Wang

Introduction: Stats NZ's Integrated Data Infrastructure (IDI) is a linked longitudinal database combining administrative and survey data. Previously, false-positive linkages (FP) in the IDI were assessed by clerical review of a sample of linked records, which was time-consuming and subject to inconsistency. Objectives and Approach: A modelled approach, 'SoLinks', has been developed to automate the FP estimation process for the IDI. It uses a logistic regression model to calculate the probability that a given link is a true match. The model is based on the agreement types defined for four key linking variables: first name, last name, sex, and date of birth. Exemptions have been given to some specific types of links that we believe to be high-quality true matches. The training data used to estimate the model parameters were based on the outcomes of the clerical review process over several years. Results: We have compared the FP rates estimated through clerical review to those estimated through the SoLinks model. Some SoLinks estimates fall outside the 95% confidence intervals of the clerically reviewed ones. This may be because the pre-defined probabilities for the specific types of links are too high. Conclusion: The automation of FP checking has saved analyst time and resources. The modelled FP estimates have been more stable across time than the previous clerical reviews. Because this model estimates the probability of a true match at the individual link level, we may provide this probability to researchers so that they can calculate linkage quality indicators for their research populations.
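The modelled approach reduces, in essence, to a logistic regression from agreement indicators on the four key linking variables to a true-match probability. The sketch below shows that shape on simulated stand-ins for clerically reviewed links; the coefficients, sample size, and variable encodings are illustrative assumptions, not the SoLinks model itself.

```python
# Logistic-regression sketch: agreement flags on the four key linking
# variables -> probability that a link is a true match.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10_000
# Agreement flags: first name, last name, sex, date of birth (assumed binary).
X = rng.integers(0, 2, (n, 4))
logit = -3.0 + X @ np.array([1.5, 1.8, 0.6, 2.2])     # assumed true coefficients
y = rng.random(n) < 1 / (1 + np.exp(-logit))          # simulated review outcomes

model = LogisticRegression().fit(X, y)
# Probability that a link agreeing on everything except sex is a true match:
print(model.predict_proba([[1, 1, 0, 1]])[0, 1])
```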


Author(s):  
Ciza Thomas
N. Balakrishnan

Intrusion Detection Systems (IDSs) form an important component of network defense. Because of the heterogeneity of attacks, it has not been possible to build a single IDS capable of detecting all types of attacks with acceptable accuracy. In this chapter, the distinct advantage of sensor fusion over individual IDSs is demonstrated. The performance benefit obtained by fixing threshold bounds is quantified in terms of the detection rate and the false-positive rate. Moreover, the more independent and distinct the attack spaces of the individual IDSs are, the better the fusion of Intrusion Detection Systems performs. A simple theoretical model is illustrated first and later supplemented with experimental evaluation. The chapter demonstrates that the proposed fusion technique is more flexible and outperforms other existing fusion techniques such as OR, AND, SVM, and ANN on real-world network traffic embedded with attacks.
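The OR and AND rules that the chapter benchmarks against are easy to reason about when the individual IDS decisions are independent: OR alarms if any sensor alarms (raising both detection and false positives), AND only if all do (the opposite trade-off). The sketch below computes both; the per-IDS rates are illustrative assumptions, not figures from the chapter.

```python
# Back-of-the-envelope OR/AND fusion of independent IDS decisions.
import numpy as np

tpr = np.array([0.70, 0.65, 0.60])   # per-IDS detection rates (assumed)
fpr = np.array([0.05, 0.04, 0.06])   # per-IDS false-positive rates (assumed)

# OR rule: alarm if any IDS alarms -> higher detection, higher false positives.
tpr_or, fpr_or = 1 - np.prod(1 - tpr), 1 - np.prod(1 - fpr)
# AND rule: alarm only if all IDSs alarm -> the opposite trade-off.
tpr_and, fpr_and = np.prod(tpr), np.prod(fpr)

print(f"OR : TPR={tpr_or:.3f} FPR={fpr_or:.3f}")
print(f"AND: TPR={tpr_and:.3f} FPR={fpr_and:.3f}")
```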


Author(s):  
David Anderson

Abstract Screening for prohibited items at airports is an example of a multi-layered screening process. Multiple layers of screening – often comprising different technologies with complementary strengths and weaknesses – are combined to create a single screening process. The detection performance of the overall system depends on multiple factors, including the performance of individual layers, the complementarity of different layers, and the decision rule(s) for determining how outputs from individual layers are combined. The aim of this work is to understand and optimise the overall system performance of a multi-layered screening process. Novel aspects include the use of realistic profiles of alarm distributions based on experimental observations and a focus on the influence of correlation/orthogonality amongst the layers of screening. The results show that a cumulative screening architecture can outperform a cascading one, yielding a significant increase in system-level true positive rate for only a modest increase in false positive rate. A cumulative screening process is also more resilient to weaknesses in the individual layers. The performance of a multi-layered screening process using a cascading approach is maximised when the false positives are orthogonal across the different layers and the true positives are correlated. The system-level performance of a cumulative screening process, on the other hand, is maximised when both false positives and true positives are as orthogonal as possible. The cost of ignoring orthogonality between screening layers is explored with some numerical examples. The underlying software model is provided in a Jupyter Notebook as supplementary material.
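One way to see the cascading-versus-cumulative trade-off, and the role of correlation between layers, is a small Monte Carlo sketch. It reads "cascading" as an all-layers (AND) rule and "cumulative" as an any-layer (OR) rule, which is one plausible reading of the architectures described; the thresholds, layer count, and shared-latent-factor correlation model are illustrative assumptions, not the paper's model (which is provided in the supplementary notebook).

```python
# Monte Carlo sketch: AND ("cascading") vs OR ("cumulative") fusion of
# screening layers whose scores share a common latent factor (rho).
import numpy as np

def simulate(rho, shift, n_layers=3, n=200_000, seed=3):
    rng = np.random.default_rng(seed)
    def layer_scores(mean):
        common = rng.standard_normal((n, 1))
        unique = rng.standard_normal((n, n_layers))
        return mean + np.sqrt(rho) * common + np.sqrt(1 - rho) * unique
    thresh = 1.645                                  # ~5% per-layer FPR
    alarms_neg = layer_scores(0.0) > thresh         # benign items
    alarms_pos = layer_scores(shift) > thresh       # threat items
    return {"AND (cascading)": (alarms_pos.all(1).mean(), alarms_neg.all(1).mean()),
            "OR (cumulative)": (alarms_pos.any(1).mean(), alarms_neg.any(1).mean())}

for rho in (0.0, 0.6):                              # orthogonal vs correlated layers
    for rule, (tpr, fpr) in simulate(rho, shift=2.0).items():
        print(f"rho={rho:.1f} {rule}: TPR={tpr:.3f} FPR={fpr:.4f}")
```

Varying rho separately for the benign and threat populations reproduces the qualitative claims above: the AND rule benefits from orthogonal false positives and correlated true positives, while the OR rule prefers both as orthogonal as possible.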


2021
Author(s):
Ishanu Chattopadhyay
Dmytro Onishchenko
Yi Huang
Peter J. Smith
Michael M. Msall
...  

Abstract: Autism spectrum disorder (ASD) is a developmental disability associated with significant social and behavioral challenges, and there is a need for tools that help identify children with ASD as early as possible. Our currently incomplete understanding of ASD pathogenesis and the lack of reliable biomarkers hamper early detection, intervention, and developmental trajectories. In this study we develop and validate machine-inferred digital biomarkers for autism using individual diagnostic codes already recorded during medical encounters. Our risk estimator identifies children at high risk with an area under the receiver operating characteristic curve (AUC) exceeding 80% from shortly after two years of age, for either sex, and across two independent databases of patient records. Thus, we systematically leverage ASD co-morbidities - with no requirement for additional blood work, tests, or procedures - to compute the Autism Co-morbid Risk Score (ACoR), which predicts elevated risk during the earliest childhood years, when interventions are most effective. By itself, ACoR outperforms common questionnaire-based screenings such as the M-CHAT/F and has the potential to reduce socio-economic, ethnic, and demographic biases. In addition, its independence from questionnaire-based screening allows us to boost performance further by conditioning on individual M-CHAT/F scores: we can either halve the false-positive rate of current screening protocols or boost sensitivity to over 60%, while maintaining specificity above 95%. Adopted in practice, ACoR could significantly reduce the median diagnostic age for ASD and shorten the long post-screen wait times families experience before confirmatory diagnosis and access to evidence-based interventions.
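The "specificity above 95%" operating point is a threshold choice on a continuous risk score. The sketch below shows that selection step only: pick the score threshold at the 95th percentile of non-cases, then read off sensitivity. Scores and sample sizes are simulated placeholders; nothing here reproduces the ACoR model or the M-CHAT/F conditioning itself.

```python
# Operating-point sketch: fix specificity at ~95% by thresholding at the
# 95th percentile of non-case risk scores, then report sensitivity.
import numpy as np

rng = np.random.default_rng(4)
score_pos = rng.normal(1.5, 1.0, 1_000)     # risk scores, cases (simulated)
score_neg = rng.normal(0.0, 1.0, 50_000)    # risk scores, non-cases (simulated)

threshold = np.quantile(score_neg, 0.95)    # fixes specificity at ~95%
sensitivity = (score_pos > threshold).mean()
specificity = (score_neg <= threshold).mean()
print(f"sensitivity={sensitivity:.2f} at specificity={specificity:.2f}")
```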


This paper presents the data analysis and feature extraction of the 1999 KDD Cup dataset, which is used to detect signature-based and anomaly attacks on a system. The process is supported by data extraction and data cleaning of the above-mentioned dataset. The dataset consists of 42 parameters and 58 services; these parameters are further filtered to extract useful attributes. Every record in the dataset is labeled either as "normal" or with one of four attack types: denial-of-service, network probe, remote-to-local, or user-to-root. Using different machine learning algorithms, the work compares the accuracy, true-positive rate, and false-positive rate of every algorithm against every other algorithm. The work focuses on increasing security through the detection of both static and dynamic attacks.
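A comparison of this shape typically maps the raw KDD'99 labels into the four attack categories plus "normal", trains several classifiers, and reports per-model accuracy, TPR, and FPR. The sketch below shows that pipeline with two scikit-learn models; the file path, column names, and the label-to-category excerpt are assumptions to adapt to the actual dataset, not details from the paper.

```python
# KDD'99-style comparison sketch: map labels to attack categories, train
# two classifiers, report accuracy / TPR / FPR for attack-vs-normal.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

CATEGORY = {"smurf": "dos", "neptune": "dos", "ipsweep": "probe",
            "portsweep": "probe", "guess_passwd": "r2l",
            "buffer_overflow": "u2r", "normal": "normal"}  # excerpt only

df = pd.read_csv("kddcup99.csv")                    # hypothetical path
df["category"] = df["label"].map(CATEGORY).fillna("other")
X = pd.get_dummies(df.drop(columns=["label", "category"]))
y = (df["category"] != "normal").astype(int)        # binary attack flag
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (RandomForestClassifier(n_estimators=100), GaussianNB()):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    tpr = ((pred == 1) & (y_te == 1)).sum() / (y_te == 1).sum()
    fpr = ((pred == 1) & (y_te == 0)).sum() / (y_te == 0).sum()
    print(type(model).__name__, f"acc={(pred == y_te).mean():.3f}",
          f"TPR={tpr:.3f} FPR={fpr:.3f}")
```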


2002
Vol 41 (01)
pp. 37-41
Author(s):
S. Shung-Shung
S. Yu-Chien
Y. Mei-Due
W. Hwei-Chung
A. Kao

Summary. Aim: Even with careful observation, the overall false-positive rate of laparotomy remains 10-15% when acute appendicitis is suspected. We therefore assessed the clinical efficacy of the Tc-99m HMPAO-labelled leukocyte (TC-WBC) scan for the diagnosis of acute appendicitis in patients presenting with atypical clinical findings. Patients and Methods: Eighty patients presenting with acute abdominal pain and possible acute appendicitis but atypical findings were included in this study. After intravenous injection of TC-WBC, serial anterior abdominal/pelvic images at 30, 60, 120 and 240 min with 800k counts were obtained with a gamma camera. Any abnormal localization of radioactivity in the right lower quadrant of the abdomen, equal to or greater than bone marrow activity, was considered a positive scan. Results: Thirty-six of 49 patients showing positive TC-WBC scans received appendectomy, and all proved to have positive pathological findings. Five positive TC-WBC scans were unrelated to acute appendicitis and were attributed to other pathological lesions. Eight patients were not operated on, and clinical follow-up after one month revealed no acute abdominal condition. Three of 31 patients with negative TC-WBC scans received appendectomy; they also had positive pathological findings. The remaining 28 patients did not receive operations and showed no evidence of appendicitis after at least one month of follow-up. The overall sensitivity, specificity, accuracy, and positive and negative predictive values of the TC-WBC scan for diagnosing acute appendicitis were 92, 78, 86, 82, and 90%, respectively. Conclusion: The TC-WBC scan provides a rapid and highly accurate method for the diagnosis of acute appendicitis in patients with equivocal clinical findings. It proved useful in reducing the false-positive rate of laparotomy and in shortening the time necessary for clinical observation.
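The reported metrics can be recomputed from the counts in the Results, under one reading of them: the five scans positive for other pathological lesions are excluded from the 2x2 table (true positives 36, false negatives 3, false positives 8, true negatives 28). This reproduces the reported figures to within about a percentage point.

```python
# Diagnostic metrics from the abstract's counts (one reading; the five
# other-pathology positives are excluded from the 2x2 table).
tp, fn = 36, 3        # appendectomy-confirmed appendicitis
fp, tn = 8, 28        # no appendicitis on operation or follow-up

sensitivity = tp / (tp + fn)                 # 0.92
specificity = tn / (tn + fp)                 # 0.78
accuracy = (tp + tn) / (tp + tn + fp + fn)   # ~0.85 (reported: 86%)
ppv = tp / (tp + fp)                         # 0.82
npv = tn / (tn + fn)                         # 0.90
print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"acc={accuracy:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```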


1993
Vol 32 (02)
pp. 175-179
Author(s):
B. Brambati
T. Chard
J. G. Grudzinskas
M. C. M. Macintosh

Abstract: The analysis of the clinical efficiency of a biochemical parameter in the prediction of chromosome anomalies is described, using a database of 475 cases including 30 abnormalities. A comparison was made of two different approaches to the statistical analysis: the use of Gaussian frequency distributions and likelihood ratios, and logistic regression. Both methods showed that, for a 5% false-positive rate, approximately 60% of anomalies are detected on the basis of maternal age and serum PAPP-A. The logistic regression analysis is appropriate where the outcome variable (chromosome anomaly) is binary, and its detection rates refer to the original data only. The likelihood ratio method is used to predict the outcome in the general population. The latter method depends on the data, or some transformation of the data, fitting a known frequency distribution (Gaussian in this case). The precision of the predicted detection rates is limited by the small sample of abnormals (30 cases). Varying the means and standard deviations (to the limits of their 95% confidence intervals) of the fitted log Gaussian distributions resulted in a detection rate varying between 42% and 79% for a 5% false-positive rate. Thus, although the likelihood ratio method is potentially the better method for determining the usefulness of a test in the general population, larger numbers of abnormal cases are required to stabilise the means and standard deviations of the fitted log Gaussian distributions.
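For a single marker, the likelihood-ratio method amounts to fitting Gaussians to the log-transformed marker values in affected and unaffected pregnancies and reading off the detection rate at a fixed false-positive rate. The sketch below shows that calculation; the means and standard deviations are placeholders, not the paper's fitted parameters, and it omits the combination with maternal age used above.

```python
# Single-marker likelihood-ratio sketch: detection rate at a 5% FPR
# from log-Gaussian fits in affected and unaffected groups.
from scipy.stats import norm

mu_unaff, sd_unaff = 0.00, 0.25   # log10 MoM, unaffected (assumed)
mu_aff, sd_aff = -0.25, 0.28      # log10 MoM, affected (assumed)

# Lower PAPP-A indicates higher risk, so screen-positive = below a cutoff.
cutoff = norm.ppf(0.05, mu_unaff, sd_unaff)       # fixes FPR at 5%
detection_rate = norm.cdf(cutoff, mu_aff, sd_aff)
lr_at_cutoff = norm.pdf(cutoff, mu_aff, sd_aff) / norm.pdf(cutoff, mu_unaff, sd_unaff)
print(f"DR at 5% FPR: {detection_rate:.0%}; likelihood ratio at cutoff: {lr_at_cutoff:.1f}")
```

Re-running this with the group means and SDs moved to the limits of their confidence intervals is exactly the sensitivity analysis that produced the 42-79% range quoted above.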


2019
Author(s):
Amanda Kvarven
Eirik Strømland
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analyses for publication bias. We use the Andrews-Kasy estimator to adjust the results of 15 meta-analyses and compare the adjusted results to 15 large-scale multi-lab replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes that do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false-positive rate of between 57% and 100%.

