Peaked Encoding of Relative Luminance in Macaque Areas V1 and V2

2005 ◽  
Vol 93 (3) ◽  
pp. 1620-1632 ◽  
Author(s):  
Xinmiao Peng ◽  
David C. Van Essen

It is widely presumed that throughout the primate visual pathway neurons encode the relative luminance of objects (at a given light adaptation level) using two classes of monotonic function, one positively and the other negatively sloped. Based on computational considerations, we hypothesized that early visual cortex also contains neurons preferring intermediate relative luminance values. We tested this hypothesis by recording from single neurons in areas V1 and V2 of alert, fixating macaque monkeys during presentation of a large, spatially uniform patch oscillating slowly in luminance and surrounded by a static texture background. A substantial subset of neurons responsive to such low spatial frequency luminance stimuli in both areas exhibited prominent and statistically reliable response peaks to intermediate rather than minimal or maximal luminance values. When presented with static patches of different luminance but of the same spatial configuration, most neurons tested retained a preference for intermediate relative luminance. Control experiments using luminance modulation at multiple low temporal frequencies or reduced amplitude indicate that in the slow luminance-oscillating paradigm, responses were more strongly modulated by the luminance level than the rate of luminance change. These results strongly support our hypothesis and reveal a striking cortical transformation of luminance-related information that may contribute to the perception of surface brightness and lightness. In addition, we tested many luminance-sensitive neurons with large chromatic patches oscillating slowly in luminance. Many cells, including the gray-preferring neurons, exhibited strong color preferences, suggesting a role of luminance-sensitive cells in encoding information in three-dimensional color space.

Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 1042
Author(s):  
Rafał Krupiński

The paper presents the opportunities to apply computer graphics in an object floodlighting design process and in the analysis of object illumination. The course of object floodlighting design has been defined based on a virtual three-dimensional geometric model. The problems related to carrying out the analysis of lighting, calculating the average illuminance and luminance levels, and determining the illuminated object surface area are also described. These parameters are directly tied to the calculation of the Floodlighting Utilisation Factor and, therefore, to the energy efficiency of the design as well as to the light pollution of the natural environment. The paper shows how strong an impact the geometric model of the object has on the accuracy of photometric calculations. Very often the model contains components that should not be taken into account in the photometric calculations. Research into the influence that the purity of the geometric mesh of the illuminated object has on the obtained results is presented. It shows that the errors can be significant, but that it is possible to optimise the 3D object model appropriately in order to obtain precise results. For the example object presented in this paper, removing the planes that do not constitute its external surface caused a two-fold increase in the average illuminance and average luminance. This is dangerous because a designer who wants to achieve a specific average luminance level in a design, without optimising the model, will obtain luminance values that are actually much higher.
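The model-purity effect described above follows from the usual definitions of average illuminance (luminous flux on the surface divided by its area) and utilisation factor (flux reaching the object divided by installed flux). The sketch below uses made-up flux and area figures, not values from the paper: counting interior planes that are not part of the external surface inflates the area and deflates the computed average illuminance.

```python
def average_illuminance(flux_on_surface_lm, area_m2):
    """Average illuminance E_avg [lx] = luminous flux on the surface / area."""
    return flux_on_surface_lm / area_m2

def utilisation_factor(flux_on_surface_lm, total_lamp_flux_lm):
    """Fraction of the installed luminous flux that lands on the facade."""
    return flux_on_surface_lm / total_lamp_flux_lm

# Hypothetical numbers: if stray interior planes double the counted area,
# the computed average illuminance halves for the same flux.
flux, true_area = 50_000.0, 400.0
print(average_illuminance(flux, true_area))       # 125.0 lx
print(average_illuminance(flux, 2 * true_area))   # 62.5 lx
print(utilisation_factor(flux, 100_000.0))        # 0.5
```

Equivalently, a designer aiming for a target average luminance on the unoptimised (doubled-area) model would install twice the flux actually needed, which is the over-lighting risk the abstract warns about.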


2007 ◽  
Vol 16 (1) ◽  
pp. 119-122 ◽  
Author(s):  
Patrick Ledda

In the natural world, the human eye is confronted with a wide range of colors and luminances. A surface lit by moonlight might have a luminance level of around 10⁻³ cd/m², while surfaces lit during a sunny day can reach values larger than 10⁵ cd/m². A good-quality CRT (cathode ray tube) or LCD (liquid crystal display) monitor is only able to achieve a maximum luminance of around 200 to 300 cd/m² and a contrast ratio of no more than two orders of magnitude. In this context the contrast ratio, or dynamic range, is defined as the ratio of the highest to the lowest luminance. We call high dynamic range (HDR) images those images (or scenes) in which the contrast ratio is larger than what a display can reproduce. In practice, any scene that contains some sort of light source and shadows is HDR. The main problem with HDR images is that they cannot be displayed: although methods to create them do exist (by taking multiple photographs at different exposure times, or using computer graphics 3D software, for example), it is not possible to see both bright and dark areas simultaneously. (See Figure 1.) There is data suggesting that our eyes can see detail at any given adaptation level within a contrast of 10,000:1 between the brightest and darkest regions of a scene; an ideal display should therefore be able to reproduce this range. In this review, we present two high dynamic range displays developed by Brightside Technologies (formerly Sunnybrook Technologies) which are capable, for the first time, of linearly displaying high-contrast images. These displays are of great use both to researchers in the vision/graphics/VR/medical fields and to professionals in the VFX/gaming/architectural industries.
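The contrast-ratio definition above, and the gap between scene range and display range, can be illustrated with a minimal sketch. The L/(1+L) curve used here is a standard global tone-mapping operator, included only for illustration; it is not a technique discussed in this review.

```python
import numpy as np

def dynamic_range(luminance):
    """Ratio of highest to lowest positive luminance (cd/m^2)."""
    lum = luminance[luminance > 0]
    return lum.max() / lum.min()

def reinhard_tonemap(luminance):
    """Simple global operator L / (1 + L), compressing any HDR range
    into [0, 1) so it fits a low-dynamic-range display."""
    return luminance / (1.0 + luminance)

# A scene spanning moonlit shadow (1e-3 cd/m^2) to a sunlit surface (1e5 cd/m^2).
scene = np.array([1e-3, 1.0, 200.0, 1e5])
print(dynamic_range(scene))     # ~1e8, far beyond a 200-300 cd/m^2 monitor
print(reinhard_tonemap(scene))  # all values compressed below 1.0
```

A linear HDR display, by contrast, can show such a scene without this compression step, which is what makes the Brightside displays notable.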


2012 ◽  
Vol 262 ◽  
pp. 36-39 ◽  
Author(s):  
Yun Hui Luo ◽  
Mao Hai Lin

As the color gamut of a digital output device greatly affects image appearance, an accurate and effective gamut description for the output device is intensively required for developing high-quality image reproduction techniques based on gamut mapping. In this paper, we present a novel method to determine the color gamut of an output device by using a specific 3D reconstruction technology and the device's ICC profile. First, we populate the device color space by uniform sampling in the three-dimensional RGB space, and convert these sampling points to the CMYK color space. Then, we work out the CIE LAB values of these points according to the ICC profile of the output device. Finally, in CIE LAB color space the boundary of these points is determined using a gamut boundary descriptor based on the Ball-Pivoting Algorithm (BPA) proposed by Bernardini. Compared with the results generated by ICC3D, our proposed method can compute the device gamut more efficiently and at the same time gives a more accurate gamut description of the output device. This will help to develop effective gamut mapping algorithms for color reproduction.
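The first and third steps of the pipeline (uniform sampling of the RGB cube and conversion to CIE LAB) can be sketched as follows. This sketch substitutes the standard sRGB matrix and D65 white point for the device's ICC profile, and omits the CMYK conversion and BPA boundary extraction entirely:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] to CIE LAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB transfer function.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ, normalised by the D65 white point.
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T / np.array([0.95047, 1.0, 1.08883])
    # XYZ -> LAB via the piecewise cube-root function.
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

# Uniformly sample the RGB cube (here 5 steps per axis -> 125 points).
steps = np.linspace(0.0, 1.0, 5)
grid = np.array([[r, g, b] for r in steps for g in steps for b in steps])
lab_points = srgb_to_lab(grid)
```

The gamut boundary descriptor would then be fitted to `lab_points`; denser sampling (the paper does not state its step count) gives a tighter boundary at higher cost.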


2008 ◽  
Vol 99 (5) ◽  
pp. 2602-2616 ◽  
Author(s):  
Marion R. Van Horn ◽  
Pierre A. Sylvestre ◽  
Kathleen E. Cullen

When we look between objects located at different depths, the horizontal movement of each eye is different from that of the other, yet temporally synchronized. Traditionally, a vergence-specific neuronal subsystem, independent from other oculomotor subsystems, has been thought to generate all eye movements in depth. However, recent studies have challenged this view by unmasking interactions between vergence and saccadic eye movements during disconjugate saccades. Here, we combined experimental and modeling approaches to address whether the premotor command to generate disconjugate saccades originates exclusively in “vergence centers.” We found that the brain stem burst generator, which is commonly assumed to drive only the conjugate component of eye movements, carries substantial vergence-related information during disconjugate saccades. Notably, facilitated vergence velocities during disconjugate saccades were synchronized with the burst onset of excitatory and inhibitory brain stem saccadic burst neurons (SBNs). Furthermore, the time-varying discharge properties of the majority of SBNs (>70%) preferentially encoded the dynamics of an individual eye during disconjugate saccades. When these experimental results were implemented into a computer-based simulation, to further evaluate the contribution of the saccadic burst generator in generating disconjugate saccades, we found that it carries all the vergence drive that is necessary to shape the activity of the abducens motoneurons to which it projects. Taken together, our results provide evidence that the premotor commands from the brain stem saccadic circuitry, to the target motoneurons, are sufficient to ensure accurate control of shifts of gaze in three dimensions.


2019 ◽  
Author(s):  
Reuben Rideaux ◽  
Nuno Goncalves ◽  
Andrew E Welchman

The offset between images projected onto the left and right retinae (binocular disparity) provides a powerful cue to the three-dimensional structure of the environment. It was previously shown that depth judgements are better when images comprise both light and dark features, rather than only dark or only light elements. Since Harris and Parker (1995) discovered the “mixed-polarity benefit”, there has been limited evidence supporting their hypothesis that the benefit is due to separate bright and dark channels. Goncalves and Welchman (2017) observed that single- and mixed-polarity stereograms evoke different levels of positive and negative activity in a deep neural network trained on natural images to make depth judgements, which also showed the mixed-polarity benefit. Motivated by this discovery, here we seek to test the potential for changes in the balance of excitation and inhibition that are produced by viewing these stimuli. In particular, we use magnetic resonance spectroscopy to measure Glx and GABA concentration in the early visual cortex of adult humans while viewing single- and mixed-polarity random-dot stereograms (RDS). We find that observers’ Glx concentration is significantly higher while GABA concentration is significantly lower when viewing mixed-polarity RDS than when viewing single-polarity RDS. These results indicate that excitation and inhibition facilitate processing of single- and mixed-polarity stereograms in the early visual cortex to different extents, consistent with recent theoretical work (Goncalves & Welchman, 2017).


2021 ◽  
pp. 2150352
Author(s):  
Li-Jun Du ◽  
Yan-Song Meng ◽  
Yu-Ling He ◽  
Jun Xie

Herein, a fine-tuning method is proposed for the spatial distributions of a mixed three-dimensional (3D) ion system in dual radio frequency (RF) linear Paul traps to achieve efficient sympathetic cooling. The dual RF field matching, the efficient capture method, and the transient process of the intrinsic micromotion of the mixed ion system are analyzed quantitatively by numerical simulations. The 3D correlation coupling characteristics between the intrinsic micromotion and the secular motion of the ion system are obtained. It is found that a reasonable low-frequency trapping potential can produce an ultra-low-frequency pulling effect on ions with a low mass-to-charge ratio (M/Q), which is beneficial to the dynamic coupling between ions with large M/Q differences. The effects of equivalent stiffness coefficients [Formula: see text] on the relative spatial configuration and the dynamic coupling process of mixed 3D ion crystals with large M/Q differences are discussed. By tuning [Formula: see text], radial distributions of laser-cooled ions (LCIs) and sympathetically cooled ions (SCIs) that do not conform to the rules based on M/Q are realized. The optimum sympathetic-cooling efficiency occurs where [Formula: see text] is approximately equivalent to [Formula: see text]. These results are applicable to studies such as cold ion clocks, quantum logic manipulation, antimatter synthesis, regulation of cold chemical reactions, and precise spectral measurements based on sympathetic cooling.


Author(s):  
Joy V. Hughes

The techniques known as Cellular Automata (CA) can be used to create a variety of visual effects. As the state space for each cell, 24-bit photorealistic color was used. Several new state transition rules were created to produce unusual and beautiful results, which can be used in an interactive program or for special effects in images or videos. This chapter presents a technique for applying CA rules to an image at several different levels of resolution and recombining the results; a “soft” artistic look can result. The concept of “targeted” CAs is introduced: a targeted CA changes the value of a cell only if it approaches a desired value under some distance metric. This technique is used to transform one image into another, to transform an image into a distorted version of itself, and to generate fractals. The author believes that the techniques presented can form the basis for a new artistic medium that is partially directed by the artist and partially emergent. Images and animations from this work are posted on the World Wide Web at (http://www.scruznet.com/~hughes/CA.html). All cellular automata (CA) operate on a space of discrete states. The simplest CAs, such as the Game of Life, use a 1-bit state space. Most modern personal computers represent color as a 24-bit value, allowing for approximately 16 million possible colors. The work presented in this chapter uses a 24-bit color space represented in a 32-bit integer; this color space can be conceptualized as a three-dimensional bounded continuous vector space. Often, it is desirable to work in the HSV (Hue, Saturation, Value) color space. Some of the rules encode the value (luminance) of a cell in the otherwise unused 8 high-order bits of a 32-bit word. The hue and saturation can be estimated “on the fly” with simple, fast algorithms. The hue is represented as an angle on the color wheel. For some rules, it is necessary to know the “distance” between two colors.
Estimating the distance in perceptual space would be a difficult problem, as it would be dependent on the monitor used and the gamma exponent applied for a particular setup.
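A minimal sketch of a targeted CA of the kind described above, using plain Euclidean RGB distance (the simple, monitor-independent metric the text settles for) and a hypothetical 4-neighbour-mean update rule; the chapter's actual transition rules are not reproduced here:

```python
import numpy as np

def rgb_distance(a, b):
    """Euclidean distance in the RGB cube -- a simple stand-in for perceptual
    distance, which would depend on the monitor and its gamma."""
    return np.linalg.norm(np.asarray(a, float) - np.asarray(b, float), axis=-1)

def targeted_ca_step(image, target):
    """One step of a 'targeted' CA: each cell takes the mean of its four
    neighbours, but only where that mean is closer to the target image."""
    mean = (np.roll(image, 1, axis=0) + np.roll(image, -1, axis=0) +
            np.roll(image, 1, axis=1) + np.roll(image, -1, axis=1)) / 4.0
    closer = rgb_distance(mean, target) < rgb_distance(image, target)
    return np.where(closer[..., None], mean, image)

# Drive a random 24-bit colour field toward a flat mid-grey target image.
rng = np.random.default_rng(0)
start = rng.uniform(0.0, 255.0, size=(16, 16, 3))
target = np.full((16, 16, 3), 128.0)
img = start
for _ in range(50):
    img = targeted_ca_step(img, target)
```

Because a cell is only ever replaced when the update moves it closer to the target, the per-cell distance is non-increasing, which is what makes targeted CAs usable for transforming one image into another.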


Author(s):  
Ankit Chaudhary ◽  
Jagdish L. Raheja ◽  
Karen Das ◽  
Shekhar Raheja

In the last few years, gesture recognition and gesture-based human-computer interaction have gained a significant amount of popularity amongst researchers all over the world. They have a number of applications ranging from security to entertainment. Gesture recognition is a form of biometric identification that relies on data acquired from the gesture depicted by an individual. This data, which can be either two-dimensional or three-dimensional, is compared against a database of individuals or against respective thresholds, depending on how the problem is posed. In this paper, a novel method for calculating the angles of bent fingers on both hands is discussed and its application to robotic hand control is presented. For the first time, such a study has been conducted in the area of natural computing for calculating angles without using any wired equipment, colors, markers, or other devices. The system deploys a simple camera and captures images. The pre-processing and segmentation of the region of interest are performed in the HSV color space and in binary format, respectively. The technique presented in this paper requires no training for the user to perform the task.
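The HSV segmentation step might look like the following sketch, which keeps pixels whose hue, saturation, and value fall inside a skin-like window and emits a binary mask. The thresholds are illustrative placeholders, not values from the paper:

```python
import colorsys
import numpy as np

def hsv_mask(image, h_max=0.1, s_min=0.2, v_min=0.2):
    """Binary segmentation in HSV space. Keeps pixels with low (reddish) hue
    and sufficient saturation and value; thresholds are illustrative only."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            h, s, v = colorsys.rgb_to_hsv(*(image[i, j] / 255.0))
            mask[i, j] = h <= h_max and s >= s_min and v >= v_min
    return mask

# A 1x2 test image: a skin-like tone next to a blue background pixel.
img = np.array([[[220, 160, 130], [30, 60, 200]]], dtype=float)
print(hsv_mask(img))   # [[ True False]]
```

The resulting mask is the binary region of interest; finger angles would then be estimated from its contour, by whatever geometric method the authors use.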


2013 ◽  
Vol 33 (5) ◽  
pp. 0515002
Author(s):  
谢煜 Xie Yu ◽  
叶玉堂 Ye Yutang ◽  
张静 Zhang Jing ◽  
刘霖 Liu Lin

2020 ◽  
Vol 10 (18) ◽  
pp. 6205
Author(s):  
Maria Cerreta ◽  
Roberta Mele ◽  
Giuliano Poli

The complexity of the urban spatial configuration, which affects human wellbeing and landscape functioning, necessitates data acquisition and three-dimensional (3D) visualisation to support effective decision-making processes. One of the main challenges in sustainability research is to conceive spatial models that adapt to changes in scale and to recalibrate the related indicators depending on scale and data availability. From this perspective, including the third dimension in Urban Ecosystem Services (UES) identification and assessment can enhance the detail in which urban structure–function relationships can be studied. Moreover, improving the modelling and visualisation of 3D UES indicators can aid decision-makers in localising, analysing, assessing, and managing urban development strategies. The main goal of the proposed framework is to evaluate, plan, and monitor UES within a 3D virtual environment, in order to improve the visualisation of spatial relationships among services and to support site-specific planning choices.

