VEP-based acuity estimation: unaffected by translucency of contralateral occlusion

Author(s):  
Sven P. Heinrich ◽  
Isabell Strübin ◽  
Michael Bach

Abstract

Purpose: Visual evoked potential (VEP) recordings for objective visual acuity estimates are typically obtained monocularly with the contralateral eye occluded. Psychophysical studies suggest that the translucency of the occluder has only a minimal effect on the outcome of an acuity test. However, there is literature evidence that the VEP is susceptible to the type of occlusion. The present study assessed whether this has an impact on VEP-based estimates of visual acuity.

Methods: We obtained VEP-based acuity estimates with opaque, non-translucent occlusion of the contralateral eye, and with translucent occlusion that lets most of the light pass while abolishing the perception of any stimulus structure. The tested eye was measured with normal and artificially degraded vision, resulting in a total of 4 experimental conditions. Two different algorithms, a stepwise heuristic and a machine learning approach, were used to derive acuity from the VEP tuning curve.

Results: With normal vision, translucent occlusion resulted in slightly, yet statistically significantly, better acuity estimates when analyzed with the heuristic algorithm (p = 0.014). The effect was small (mean ΔlogMAR = 0.06), absent in some participants, and without practical relevance. It was also absent with the machine learning approach. With degraded vision, the difference was tiny and not statistically significant.

Conclusion: The type of occlusion of the contralateral eye does not substantially affect the outcome of VEP-based acuity estimation.
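The abstract does not spell out the stepwise heuristic, but a common family of such algorithms extrapolates the falling flank of the VEP tuning curve (amplitude vs. logMAR) down to zero amplitude and reads the acuity estimate off the zero crossing. A minimal sketch under that assumption (function name and flank selection are illustrative, not the published algorithm):

```python
import numpy as np

def estimate_acuity_heuristic(logmar, amplitude):
    """Heuristic acuity estimate from a VEP tuning curve.

    Assumes `logmar` is ordered from coarse to fine check sizes
    (descending logMAR). Extrapolates the falling flank of the
    amplitude curve to zero amplitude; the logMAR value at that
    zero crossing serves as the acuity estimate.
    """
    logmar = np.asarray(logmar, dtype=float)
    amplitude = np.asarray(amplitude, dtype=float)
    # The falling flank runs from the amplitude peak toward finer checks
    peak = int(np.argmax(amplitude))
    x, y = logmar[peak:], amplitude[peak:]
    # Fit a line amplitude = a * logMAR + b to the flank
    a, b = np.polyfit(x, y, 1)
    # Zero crossing of the fitted line: amplitude = 0 at logMAR = -b/a
    return -b / a
```

With a curve that peaks at 0.6 logMAR and falls off linearly toward finer checks, the estimate extrapolates slightly beyond the finest measured check size.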

2021 ◽  
pp. 127302
Author(s):  
Punniyakotti Varadharajan Gopirajan ◽  
Kannappan Panchamoorthy Gopinath ◽  
Govindarajan Sivaranjani ◽  
Jayaseelan Arun

2018 ◽  
Author(s):  
Christopher DiMattina ◽  
Curtis L. Baker

Abstract

Background: Visual pattern detection and discrimination are essential first steps for scene analysis. Numerous human psychophysical studies have modeled visual pattern detection and discrimination by estimating linear templates for classifying noisy stimuli defined by spatial variations in pixel intensities. However, such methods are poorly suited to understanding sensory processing mechanisms for complex visual stimuli such as second-order boundaries defined by spatial differences in contrast or texture.

Methodology / Principal Findings: We introduce a novel machine learning framework for modeling human perception of second-order visual stimuli, using image-computable hierarchical neural network models fit directly to psychophysical trial data. This framework is applied to modeling visual processing of boundaries defined by differences in the contrast of a carrier texture pattern, in two different psychophysical tasks: (1) boundary orientation identification, and (2) fine orientation discrimination. Cross-validation analysis is employed to optimize model hyper-parameters, and demonstrates that these models are able to accurately predict human performance on novel stimulus sets not used for fitting model parameters. We find that, like the ideal observer, human observers take a region-based approach to the orientation identification task, while taking an edge-based approach to the fine orientation discrimination task. How observers integrate contrast modulation across orientation channels is investigated by fitting psychophysical data with two models representing competing hypotheses, revealing a preference for a model which combines multiple orientations at the earliest possible stage. Our results suggest that this machine learning approach has much potential to advance the study of second-order visual processing, and we outline future steps towards generalizing the method to modeling visual segmentation of natural texture boundaries.

Conclusions / Significance: This study demonstrates how machine learning methodology can be fruitfully applied to psychophysical studies of second-order visual processing.

Author Summary: Many naturally occurring visual boundaries are defined by spatial differences in features other than luminance, for example by differences in texture or contrast. Quantitative models of such "second-order" boundary perception cannot be estimated using the standard regression techniques (known as "classification images") commonly applied to "first-order", luminance-defined stimuli. Here we present a novel machine learning approach to modeling second-order boundary perception using hierarchical neural networks. In contrast to previous quantitative studies of second-order boundary perception, we directly estimate network model parameters using psychophysical trial data. We demonstrate that our method can reveal different spatial summation strategies that human observers utilize for different kinds of second-order boundary perception tasks, and can be used to compare competing hypotheses of how contrast modulation is integrated across orientation channels. We outline extensions of the methodology to other kinds of second-order boundaries, including those in natural images.
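The core fitting idea, estimating model parameters directly from trial-by-trial binary responses by maximizing response likelihood, can be illustrated with a much simpler linear-template observer (a hypothetical stand-in for the paper's hierarchical network; function and variable names are assumptions):

```python
import numpy as np

def fit_observer_model(stimuli, responses, lr=0.1, epochs=500):
    """Fit a linear-template observer to psychophysical trial data.

    Minimizes the cross-entropy between the model's response
    probabilities and the observer's recorded binary responses,
    i.e. maximum-likelihood estimation by gradient descent. The
    same principle underlies fitting richer hierarchical models.
    """
    stimuli = np.asarray(stimuli, dtype=float)
    responses = np.asarray(responses, dtype=float)
    n_trials, n_features = stimuli.shape
    w = np.zeros(n_features)  # linear template
    b = 0.0                   # response bias
    for _ in range(epochs):
        z = stimuli @ w + b
        p = 1.0 / (1.0 + np.exp(-z))       # P(respond "1" | stimulus)
        grad_z = (p - responses) / n_trials
        w -= lr * (stimuli.T @ grad_z)
        b -= lr * grad_z.sum()
    return w, b
```

On synthetic trials generated from a known template, the fitted template recovers the generating one up to scale, which is how such fits are typically validated before use on human data.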


2020 ◽  
Author(s):  
Lucas M. Thimoteo ◽  
Marley M. Vellasco ◽  
Jorge M. do Amaral ◽  
Karla Figueiredo ◽  
Cátia Lie Yokoyama ◽  
...  

This work proposes an interpretable machine learning approach to diagnose suspected COVID-19 cases based on clinical variables. The proposed models achieve an F-2 measure above 0.80 and an accuracy above 0.85. Interpreting the linear model's feature importances yielded insights into the most relevant features. Shapley Additive Explanations (SHAP) were used for the non-linear models; they were able to show the difference between positive and negative patients and to provide a global sense of model interpretability.
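The F-2 measure is the F-beta score with beta = 2, which weights recall more heavily than precision; that choice suits screening tasks where missing a positive COVID-19 case is costlier than a false alarm. A minimal sketch of the computation from a confusion matrix (helper name is illustrative):

```python
def f_beta(tp, fp, fn, beta=2.0):
    """F-beta score from confusion-matrix counts.

    beta > 1 weights recall more heavily than precision;
    beta = 2 gives the F-2 measure reported above.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For example, with 80 true positives, 10 false positives, and 20 false negatives, precision is 8/9 and recall is 4/5, giving an F-2 of 40/49, about 0.82.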


2021 ◽  
Author(s):  
Silke van Klaveren ◽  
Ivan Vasconcelos ◽  
Andre Niemeijer

The successful prediction of earthquakes is one of the holy grails in Earth Sciences. Traditional predictions use statistical information on recurrence intervals, but those predictions are not accurate enough. In a recent paper, a machine learning approach was proposed and applied to data of laboratory earthquakes. The machine learning algorithm utilizes continuous measurements of radiated energy through acoustic emissions, and the authors were able to successfully predict the timing of laboratory earthquakes. Here, we reproduced their model, which was applied to a gouge layer of glass beads, and applied it to a data set obtained using a gouge layer of salt. In this salt experiment, different load-point velocities were set, leading to variable recurrence times. The machine learning technique we use is called random forest and uses the acoustic emissions during the interseismic period. The random forest model succeeds in making a relatively reliable prediction for both materials, even long before the earthquake. Apparently there is information in the data on the timing of the next earthquake throughout the experiment. For glass beads, energy is gradually and increasingly released, whereas for salt, energy is only released during precursor activity; therefore, the important features used in the prediction are different. We interpret the difference in results to be due to the different micromechanics of slip. The research shows that a machine learning approach can reveal the presence of information in the data on the timing of unstable slip events (earthquakes). Further research is needed to identify the responsible micromechanical processes, which might then be used to extrapolate to natural conditions.
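The pipeline described, summary statistics of acoustic-emission windows fed to a random forest that regresses time-to-failure, can be sketched as follows on synthetic data (the feature set, window length, and signal model are illustrative assumptions, not the authors' exact configuration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def windowed_features(signal, win):
    """Split a continuous acoustic-emission signal into fixed-length
    windows and compute per-window summary statistics."""
    n = len(signal) // win
    wins = signal[:n * win].reshape(n, win)
    return np.column_stack([
        wins.var(axis=1),                                  # energy proxy
        np.abs(wins).max(axis=1),                          # peak amplitude
        ((wins[:, :-1] * wins[:, 1:]) < 0).mean(axis=1),   # zero-crossing rate
    ])

# Synthetic stand-in for a lab experiment: noise whose variance grows
# toward each slip event, with time-to-failure (ttf) as the target.
rng = np.random.default_rng(1)
ttf = np.tile(np.linspace(1.0, 0.0, 50), 4)  # sawtooth time-to-failure, 4 cycles
signal = np.concatenate([rng.normal(0.0, 1.0 + 2.0 * (1.0 - t), 100) for t in ttf])

X = windowed_features(signal, win=100)       # one feature row per target value
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, ttf)
pred = model.predict(X)
```

In practice, the held-out evaluation would use separate train/test segments of the experiment; the in-sample fit above only demonstrates that the windowed statistics carry timing information, mirroring the paper's observation that such information is present throughout the interseismic period.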


Diabetes ◽  
2020 ◽  
Vol 69 (Supplement 1) ◽  
pp. 1552-P
Author(s):  
Kazuya Fujihara ◽  
Mayuko H. Yamada ◽  
Yasuhiro Matsubayashi ◽  
Masahiko Yamamoto ◽  
Toshihiro Iizuka ◽  
...  
