Gravitational effects of scene information in object localization

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Anna Kosovicheva ◽  
Peter J. Bex

Abstract. We effortlessly interact with objects in our environment, but how do we know where something is? An object’s apparent position does not simply correspond to its retinotopic location but is influenced by its surrounding context. In the natural environment, this context is highly complex, and little is known about how visual information in a scene influences the apparent location of the objects within it. We measured the influence of local image statistics (luminance, edges, object boundaries, and saliency) on the reported location of a brief target superimposed on images of natural scenes. For each image statistic, we calculated the difference between the image value at the physical center of the target and the value at its reported center, using observers’ cursor responses, and averaged the resulting values across all trials. To isolate image-specific effects, difference scores were compared to a randomly-permuted null distribution that accounted for any response biases. The observed difference scores indicated that responses were significantly biased toward darker regions, luminance edges, object boundaries, and areas of high saliency, with relatively low shared variance among these measures. In addition, we show that the same image statistics were associated with observers’ saccade errors, despite large differences in response time, and that some effects persisted when high-level scene processing was disrupted by 180° rotations and color negatives of the originals. Together, these results provide evidence for landmark effects within natural images, in which feature location reports are pulled toward low- and high-level informative content in the scene.
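
The permutation-based difference-score analysis described above can be illustrated with a short sketch. The following Python snippet uses hypothetical function and variable names, and the stimulus handling and null construction are simplified assumptions rather than the authors' code: for one image statistic, it computes the mean difference between the statistic's value at the reported and physical target centers, then compares that to a null distribution in which response errors are re-paired with random trials.

```python
import numpy as np

def localization_bias(stat_maps, true_xy, reported_xy, n_perm=1000, seed=0):
    """Sketch of a difference-score analysis (hypothetical names): for each
    trial, compare an image statistic (luminance, edge energy, saliency, ...)
    at the target's physical center with its value at the reported center,
    then test the mean difference against a permutation null in which response
    errors are re-paired with random trials, preserving overall response bias
    but breaking any image-specific link."""
    rng = np.random.default_rng(seed)
    true_xy = np.asarray(true_xy)          # (n_trials, 2) pixel coordinates (x, y)
    reported_xy = np.asarray(reported_xy)
    errors = reported_xy - true_xy         # per-trial response error vectors

    def value_at(m, xy):
        h, w = m.shape
        x = int(np.clip(xy[0], 0, w - 1))
        y = int(np.clip(xy[1], 0, h - 1))
        return m[y, x]

    true_vals = np.array([value_at(m, p) for m, p in zip(stat_maps, true_xy)])
    rep_vals = np.array([value_at(m, p) for m, p in zip(stat_maps, reported_xy)])
    observed = np.mean(rep_vals - true_vals)   # > 0: reports pulled toward higher values

    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(len(stat_maps))
        shifted = true_xy + errors[perm]       # another trial's error on this trial's target
        null[i] = np.mean([value_at(m, p) for m, p in zip(stat_maps, shifted)]) - true_vals.mean()
    p_value = np.mean(np.abs(null) >= np.abs(observed))
    return observed, p_value
```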

2010 ◽  
Vol 10 (04) ◽  
pp. 513-529
Author(s):  
BARTHÉLÉMY DURETTE ◽  
JEANNY HÉRAULT ◽  
DAVID ALLEYSSON

To extract high-level information from natural scenes, the visual system has to cope with a wide variety of ambient lights, reflection properties of objects, spatio-temporal contexts, and geometrical complexity. By pre-processing the visual information, the retina plays a key role in the functioning of the whole visual system. Reproducing this pre-processing is crucial for artificial devices that aim to replace or substitute a damaged visual system. In this paper, we present a biologically plausible model of the retina at the cell level and its implementation as real-time retinal simulation software. It features the non-uniform sampling of visual information by the photoreceptor cells, the non-separable spatio-temporal properties of the retina, the subsequent generation of the Parvocellular and Magnocellular pathways, and the non-linear equalization of luminance and contrast at the local level. Each of these aspects of the model is described and illustrated, and their respective relevance to the replacement or substitution of vision is discussed.
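
As one concrete illustration of the kind of local non-linear equalization such retina models include, the following Python sketch applies a Naka-Rushton-style compression with a spatially local adaptation level. The Gaussian estimate of the adaptation level and the parameter choices are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_luminance_equalization(img, sigma=8.0, v_max=1.0):
    """Naka-Rushton-style local adaptation, a common retina-model component:
    each pixel is compressed relative to an adaptation level estimated from
    its local neighborhood, equalizing luminance across the image.
    (Illustrative sketch only; not the authors' exact model.)"""
    img = img.astype(float)
    local_mean = gaussian_filter(img, sigma)        # local adaptation level X0(x, y)
    return v_max * img / (img + local_mean + 1e-8)  # V = Vmax * X / (X + X0)
```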


2008 ◽  
Vol 20 (10) ◽  
pp. 2464-2490 ◽  
Author(s):  
Eric K. C. Tsang ◽  
Bertram E. Shi

Binocular fusion takes place over a limited region smaller than one degree of visual angle (Panum's fusional area), which is on the order of the range of preferred disparities measured in populations of disparity-tuned neurons in the visual cortex. However, the actual range of binocular disparities encountered in natural scenes extends over tens of degrees. This discrepancy suggests that there must be a mechanism for detecting whether the stimulus disparity is inside or outside the range of the preferred disparities in the population. Here, we compare the efficacy of several features derived from the population responses of phase-tuned disparity energy neurons in differentiating between in-range and out-of-range disparities. Interestingly, some features that might be appealing at first glance, such as the average activation across the population and the difference between the peak and average responses, actually perform poorly. On the other hand, normalizing the difference between the peak and average responses results in a reliable indicator. Using a probabilistic model of the population responses, we improve classification accuracy by combining multiple features. A decision rule that combines the normalized peak to average difference and the peak location significantly improves performance over decision rules based on either measure in isolation. In addition, classifiers using normalized difference are also robust to mismatch between the image statistics assumed by the model and the actual image statistics.
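
A minimal sketch of the normalized peak-to-average feature is given below. The exact normalization, the way the peak location is combined with it, and the threshold values are illustrative assumptions rather than the decision rule fitted in the paper.

```python
import numpy as np

def in_range_indicator(responses):
    """Given the responses of a population of phase-tuned disparity neurons to
    one image patch, compute a normalized peak-to-average difference
    (normalization choice is an assumption) and the index of the peak.
    Large normalized differences suggest the stimulus disparity lies inside
    the population's preferred-disparity range; small ones suggest it is
    out of range."""
    responses = np.asarray(responses, dtype=float)
    peak = responses.max()
    avg = responses.mean()
    normalized_diff = (peak - avg) / (avg + 1e-12)
    return normalized_diff, int(responses.argmax())

def classify_in_range(responses, diff_threshold=0.5, edge_margin=1):
    """Toy decision rule combining the two features (thresholds are illustrative):
    require a large normalized difference and a peak away from the edges of the
    preferred-disparity range."""
    nd, peak_idx = in_range_indicator(responses)
    interior = edge_margin <= peak_idx < len(responses) - edge_margin
    return nd > diff_threshold and interior
```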


1998 ◽  
Vol 79 (06) ◽  
pp. 1184-1190 ◽  
Author(s):  
Yoshiaki Tomiyama ◽  
Shigenori Honda ◽  
Kayoko Senzaki ◽  
Akito Tanaka ◽  
Mitsuru Okubo ◽  
...  

Summary. This study investigated the difference in [Ca2+]i responses in platelets stimulated with thrombin versus TRAP, and the involvement of αIIbβ3 in this signaling. Stimulation of platelets with thrombin at 0.03 U/ml caused platelet aggregation and a two-peak increase in [Ca2+]i. The second peak of [Ca2+]i, but not the first, was abolished by inhibition of platelet aggregation with αIIbβ3 antagonists or by scavenging endogenous ADP with apyrase. A cyclooxygenase inhibitor, aspirin, and a TXA2 receptor antagonist, BM13505, also abolished the second peak of [Ca2+]i but not the first, although these reagents did not inhibit aggregation. Under the same assay conditions, measurement of TXB2 demonstrated that αIIbβ3 antagonists and aspirin almost completely inhibited the production of TXB2. In contrast to thrombin stimulation, TRAP caused only a single peak of [Ca2+]i even in the presence of platelet aggregation, and a high level of [Ca2+]i increase was needed for the induction of platelet aggregation. Inhibition of aggregation with αIIbβ3 antagonists had no effect on the [Ca2+]i change or TXB2 production induced by TRAP. Inhibition studies using anti-GPIb antibodies suggested that GPIb may be involved in the thrombin response, but not in the TRAP response. Our findings suggest that low-dose thrombin elicits a [Ca2+]i response and a TXA2-producing signal different from those elicited by TRAP. Endogenous ADP release and fibrinogen binding to αIIbβ3 are responsible for the synthesis of TXA2, which results in the induction of the second peak of [Ca2+]i in platelets stimulated with low-dose thrombin but not with TRAP.


2018 ◽  
Vol 1 (1) ◽  
pp. 6-21 ◽  
Author(s):  
I. K. Razumova ◽  
N. N. Litvinova ◽  
M. E. Shvartsman ◽  
A. Yu. Kuznetsov

Introduction. The paper presents survey results on awareness of and practice in Open Access scholarly publishing among Russian academics. Materials and Methods. We employed methods of statistical analysis of the survey results. Materials comprise the processed data of a Russian survey conducted in 2018 and the published results of the latest international surveys. The survey comprised 1383 respondents from 182 organizations. We performed comparative studies of the responses from academics and research institutions as well as from different research areas, and compared the results obtained in Russia with the recently published results of surveys conducted in the United Kingdom and Europe. Results. Our findings show that 95% of Russian respondents support open access, 94% agree to post their publications in open repositories, and 75% have experience in open access publishing. We did not find any difference in awareness of and attitude towards open access among the seven reference groups. Our analysis revealed a difference in the structure of open access publications between authors from universities and those from research institutes. Discussion and Conclusions. The results reveal a high level of awareness of, support for, and successful practice in open access publishing in the Russian scholarly community, and demonstrate close similarity with the results for UK academics. Governmental open access policies and programs would foster the practical realization of open access in Russia.


Author(s):  
O. M. Reva ◽  
V. V. Kamyshin ◽  
S. P. Borsuk ◽  
V. A. Shulhin ◽  
A. V. Nevynitsyn

The negative and persistent impact of the human factor on the statistics of aviation accidents and serious incidents makes proactive studies of the attitude of “front line” aviation operators (air traffic controllers, flight crewmembers) to dangerous actions or professional conditions a key component of the current ICAO safety paradigm. This “attitude” is determined through indicators of the influence of the human factor on decision-making, which include the preference systems of air traffic controllers over indicators and characteristics of professional activity, illustrating both the individual perception of potential risks and dangers and the peculiarities of generalized group thinking that have developed in a particular community. A preference system is an ordered (ranked) series of n = 21 errors, from the most dangerous to the least dangerous, and characterizes only the preference of one error over another in terms of danger. The degree of this preference is determined only by the difference in the ranks of the errors and does not answer the question of how many times more dangerous one error is than another. A differential method for identifying the comparative danger of errors, as well as a multistep technology for identifying and filtering out marginal opinions, were applied. From the initial sample of m = 37 professional air traffic controllers, two subgroups of mB = 20 and mG = 7 people were identified whose within-group consistency of opinions was statistically significant at a high significance level (α = 1%). Nonparametric optimization of the corresponding group preference systems resulted in Kemeny medians in which tied (middle) ranks were absent. Based on these medians, weighted error hazard coefficients were determined by the mathematical prioritization method. It is substantiated that, with the accepted accuracy of calculations, the results obtained at the second iteration of this method are more acceptable. The values of the error hazard coefficients, together with their ranks established in the preference systems, allow a more complete quantitative and qualitative analysis of the attitude of both individual air traffic controllers and their professional groups to hazardous actions or conditions.
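
For readers unfamiliar with Kemeny medians, the following toy Python sketch computes one by exhaustive search over a handful of items. Brute force is only feasible for very small rankings; the study's n = 21 errors require the nonparametric optimization mentioned above. The example rankings are invented for illustration.

```python
from itertools import permutations

def kendall_distance(r1, r2):
    """Number of discordant item pairs between two rankings, each given as a
    list of item ids ordered from most to least dangerous."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    items = list(pos1)
    d = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0:
                d += 1
    return d

def kemeny_median(rankings):
    """Exhaustive Kemeny median: the ranking minimizing the total Kendall
    distance to all expert rankings (tractable only for a few items)."""
    items = rankings[0]
    best, best_cost = None, float("inf")
    for cand in permutations(items):
        cost = sum(kendall_distance(list(cand), r) for r in rankings)
        if cost < best_cost:
            best, best_cost = list(cand), cost
    return best, best_cost

# Toy example: 4 error types ranked by 3 controllers (most to least dangerous).
experts = [["E1", "E3", "E2", "E4"], ["E1", "E2", "E3", "E4"], ["E3", "E1", "E2", "E4"]]
print(kemeny_median(experts))
```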


Author(s):  
Richard Stone ◽  
Minglu Wang ◽  
Thomas Schnieders ◽  
Esraa Abdelall

Human-robot interaction systems are increasingly being integrated into industrial, commercial, and emergency service agencies. It is critical that human operators understand and trust automation when these systems support and even make important decisions. The following study focused on a human-in-the-loop telerobotic system performing a reconnaissance operation. Twenty-four subjects were divided into groups based on level of automation (Low-Level Automation (LLA) and High-Level Automation (HLA)). Results indicated a significant difference in hit rate between the low and high levels of automation when a permanent error occurred. In the LLA group, the type of error had a significant effect on hit rate. In general, the high level of automation performed better than the low level, especially when it was more reliable, suggesting that subjects in the HLA group could rely on the automation to perform the task more effectively and more accurately.


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 627
Author(s):  
David Marquez-Viloria ◽  
Luis Castano-Londono ◽  
Neil Guerrero-Gonzalez

A methodology for scalable and concurrent real-time implementation of highly recurrent algorithms is presented and experimentally validated on an AWS FPGA. This paper presents a parallel implementation of a KNN algorithm for m-QAM demodulators, using high-level synthesis for fast prototyping, parameterization, and scalability of the design. The proposed design demonstrates successful implementation of the KNN algorithm for interchannel interference mitigation in a 3 × 16 Gbaud 16-QAM Nyquist WDM system. Additionally, we present a modified version of the KNN algorithm in which comparisons among data symbols are reduced by identifying the closest neighbor using the 8-connected-cluster rule used in image processing. Real-time implementation of the modified KNN on a Xilinx Virtex UltraScale+ VU9P AWS-FPGA board was compared with results obtained in previous work using the same data from the same experimental setup but offline DSP in Matlab. The results show that the difference is negligible below the FEC limit. Additionally, the modified KNN reduces the number of operations by 43% to 75%, depending on the symbol’s position in the constellation, achieving a 47.25% reduction in total computational time for 100 K input symbols processed on 20 parallel cores compared to the original KNN algorithm.
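
The comparison-reduction idea, restricting the nearest-neighbor search to the 8-connected neighborhood of a coarse grid decision, can be sketched in Python as follows. The grid snapping and ideal 16-QAM centroids are simplifying assumptions for illustration and do not represent the implemented HLS design.

```python
import numpy as np

def qam16_grid():
    """Ideal 16-QAM constellation on the grid {-3, -1, 1, 3} x {-3, -1, 1, 3}."""
    levels = np.array([-3, -1, 1, 3])
    return np.array([complex(i, q) for q in levels for i in levels])

def nearest_symbol_8connected(rx, centroids):
    """Snap the received sample to the nearest grid cell, then search only that
    cell and its 8-connected neighbors (at most 9 candidates) instead of all 16
    centroids. `centroids` may be measured cluster centers laid out on the same
    4 x 4 grid (illustrative sketch only)."""
    levels = np.array([-3, -1, 1, 3])
    gi = int(np.argmin(np.abs(levels - rx.real)))   # coarse in-phase index
    gq = int(np.argmin(np.abs(levels - rx.imag)))   # coarse quadrature index
    best, best_d = None, np.inf
    for di in (-1, 0, 1):
        for dq in (-1, 0, 1):
            i, q = gi + di, gq + dq
            if 0 <= i < 4 and 0 <= q < 4:
                idx = q * 4 + i
                d = abs(rx - centroids[idx])
                if d < best_d:
                    best, best_d = idx, d
    return best

centroids = qam16_grid()                 # replace with measured cluster centers
print(nearest_symbol_8connected(0.8 - 2.4j, centroids))
```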


Author(s):  
Peng Lu ◽  
Xiao Cong ◽  
Dongdai Zhou

Nowadays, E-learning systems are widely applied in practical teaching and are favored for their customizable course arrangement and flexible learning schedules. However, such systems have problems in practice; for example, the functions of a single piece of software are not diversified enough to fully satisfy teaching requirements. To support more applications in the teaching process, it is necessary to integrate functions from different systems, but differences in development techniques and inflexibility in design make this difficult to implement. The major reason for these problems is the lack of a well-designed software architecture. In this article, we build a domain model and a component model of the E-learning system, together with a WebService-based component integration method. We also propose an abstract E-learning framework that expresses the semantic relationships among components and achieves a high level of reuse on the basis of an informationized teaching mode. On this foundation, we form an E-learning-oriented layered software architecture containing a component library layer, an application framework layer, and an application layer. Moreover, the system supports division and reuse across layers and is not tied to particular development languages or tools. With the help of this software architecture, customized E-learning systems can be built flexibly, like building blocks, through framework selection, component assembly, and replacement. In addition, we show by example how to build a concrete E-learning system on the basis of this architecture.
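
As a loose illustration of the layering idea (component library layer, application framework layer, application layer), the following toy Python sketch wires interchangeable components through a small framework registry. The component names and interfaces are invented for illustration and do not represent the authors' WebService-based framework.

```python
from abc import ABC, abstractmethod

# Component library layer: reusable components behind a common interface.
class Component(ABC):
    @abstractmethod
    def handle(self, request: dict) -> dict: ...

class QuizComponent(Component):
    def handle(self, request):
        return {"component": "quiz", "questions": request.get("count", 5)}

class VideoLectureComponent(Component):
    def handle(self, request):
        return {"component": "video", "lecture_id": request.get("lecture_id")}

# Application framework layer: assembles components by name, so they can be
# selected or replaced without changing application code.
class Framework:
    def __init__(self):
        self._registry = {}

    def register(self, name, component: Component):
        self._registry[name] = component

    def dispatch(self, name, request):
        return self._registry[name].handle(request)

# Application layer: a concrete E-learning system assembled "like building blocks".
framework = Framework()
framework.register("quiz", QuizComponent())
framework.register("video", VideoLectureComponent())
print(framework.dispatch("quiz", {"count": 10}))
```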


2021 ◽  
Vol 14 (3) ◽  
pp. 103
Author(s):  
Shaojie Lai ◽  
Qing Wang ◽  
Jiangze Du ◽  
Shuwen Pi

This article examines the propensity to pay dividends in the U.S. banking sector during 1973–2014. Although the propensity to pay dividends has been declining over the 42 years of our sample period, banks are consistently more likely to pay dividends than non-financial firms. Using the coefficients from logit models estimated early in the sample period to forecast the percentage of dividend payers in each subsequent year, we conclude that there has been a decline in the likelihood of paying dividends in the banking sector. However, the decline started from a very high level compared to that of the non-banking sectors. In addition, the variables taken from the literature on non-financial firms do not explain the difference between the actual and expected percentage of dividend payers in the banking sector. We also conduct exploratory analyses with bank-specific variables; although the newly included variables are significantly related to the likelihood of paying dividends, they do not explain the declining propensity to pay dividends in the banking sector.
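
The forecasting exercise can be sketched as follows. The column names, predictors, and data layout are hypothetical; the snippet only illustrates the logic of fitting a logit model on an early subperiod and comparing the actual with the model-implied share of dividend payers in a later year.

```python
import statsmodels.api as sm

def forecast_payer_share(train_df, test_df, predictors):
    """Sketch of the forecasting exercise (hypothetical column names): fit a
    logit model of the pay/no-pay decision on an early subsample, then use the
    estimated coefficients to predict the expected share of dividend payers in
    a later year. The gap between the actual and predicted shares measures the
    change in the propensity to pay, holding firm characteristics fixed."""
    X_train = sm.add_constant(train_df[predictors])
    res = sm.Logit(train_df["pays_dividend"], X_train).fit(disp=0)

    X_test = sm.add_constant(test_df[predictors], has_constant="add")
    expected_share = res.predict(X_test).mean()     # model-implied % of payers
    actual_share = test_df["pays_dividend"].mean()  # observed % of payers
    return actual_share, expected_share, actual_share - expected_share
```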

