How Serial is Serial Processing in Vision?

Perception ◽  
1989 ◽  
Vol 18 (2) ◽  
pp. 191-200 ◽  
Author(s):  
Ehud Zohary ◽  
Shaul Hochstein

Visual search for an element defined by the conjunction of its colour and orientation has previously been shown to be a serial processing task since reaction times increase linearly with the number of distractor elements used in the display. Evidence is presented that there are parallel processing constituents to this serial search. Processing time depended on the ratio of the number of the two distractor types used, suggesting that only one type was scanned. Which type was scanned also depended on the distractor ratio, indicating that this decision was made after stimulus presentation and was based on a parallel figure—ground separation of the stimulus elements. Furthermore, in accordance with this serial scanning model, there was an increase in processing speed (elements scanned per second) with increase in number of elements to be scanned. This increased efficiency suggests that clumps of elements were processed synchronously. Under the stimulation conditions used, clumps contained six to sixteen elements and each clump was processed in 50–150 ms.
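As a rough illustration of the clumped serial-scanning account described above, the sketch below predicts reaction time when only one distractor subset is scanned, clump by clump. Only the clump size (six to sixteen elements) and the 50–150 ms per clump come from the abstract; the function name, base latency and example values are illustrative assumptions, not fitted values from the paper.

```python
import math

def predicted_rt(n_scanned, clump_size=10, ms_per_clump=100.0, base_ms=350.0):
    """Predicted reaction time (ms) under a clumped serial-scanning model.

    The scanned distractor subset is processed clump by clump; each clump of
    `clump_size` elements takes `ms_per_clump`. On average, half of the clumps
    are inspected before the target is found (target-present trials).
    `base_ms` lumps together sensory and motor latencies. All parameter
    values are illustrative, not taken from the study.
    """
    n_clumps = math.ceil(n_scanned / clump_size)
    return base_ms + 0.5 * n_clumps * ms_per_clump

# Scanning only the smaller distractor subset, as the abstract suggests,
# makes predicted RT depend on the distractor ratio rather than on total set size.
for n in (8, 16, 32, 64):
    print(n, predicted_rt(n))
```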

Perception ◽  
1987 ◽  
Vol 16 (3) ◽  
pp. 389-398 ◽  
Author(s):  
Scott B Steinman

The nature of the processing of combinations of stimulus dimensions in human vision has recently been investigated. A study is reported in which visual search for suprathreshold positional information—vernier offsets, stereoscopic disparity, lateral separation, and orientation—was examined. The initial results showed that reaction times for visual search for conjunctions of stereoscopic disparity and either vernier offsets or orientation were independent of the number of distracting stimuli displayed, suggesting that disparity was searched in parallel with vernier offsets or orientation. Conversely, reaction times for detection of conjunctions of vernier offsets and orientation, or lateral separation and each of the other positional judgements, were related linearly to the number of distractors, suggesting serial search. However, practice has a significant effect upon the results, indicative of a shift in the mode of search from serial to parallel for all conjunctions tested as well as for single features. This suggests a reinterpretation of these and perhaps other studies that use the Treisman visual search paradigm, in terms of perceptual segregation of the visual field by disparity, motion, color, and pattern features such as colinearity, orientation, lateral separation, or size.
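The serial/parallel distinction in the Treisman search paradigm rests on the slope of reaction time against display size. The sketch below shows that slope criterion on made-up data; the cut-off of about 10 ms per item is a common rule of thumb, not a value reported in the study.

```python
import numpy as np

# Hypothetical mean reaction times (ms) per display size; the numbers are
# invented purely to illustrate the slope criterion, not taken from the paper.
set_sizes = np.array([4, 8, 16, 32])
rt_conjunction = np.array([520, 610, 790, 1150])   # rises with set size
rt_feature = np.array([450, 455, 460, 452])        # roughly flat

def search_slope(set_sizes, rts):
    """Least-squares slope of RT vs. set size, in ms per item."""
    slope, _intercept = np.polyfit(set_sizes, rts, 1)
    return slope

for label, rts in [("conjunction", rt_conjunction), ("feature", rt_feature)]:
    s = search_slope(set_sizes, rts)
    mode = "serial-like" if s > 10 else "parallel-like"   # rule-of-thumb cut-off
    print(f"{label}: {s:.1f} ms/item -> {mode}")
```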


2020 ◽  
Author(s):  
Anne-Sophie Laurin ◽  
Julie Ouerfelli-Éthier ◽  
Laure Pisella ◽  
Aarlenne Zein Khan

Older adults show declines in visual search performance, but the nature of these declines is unclear. We propose that it is related to greater attentional reliance on central vision. To investigate this, we tested how occluding central vision would affect younger and older adults in visual search. Participants (14 younger, M = 21.6 years; 16 older, M = 69.6 years) performed pop-out and serial search tasks in full view and with gaze-contingent artificial central scotomas of different sizes (no scotoma, 3°, 5° or 7° diameter). In pop-out search, older adults showed longer search times for peripheral targets during full viewing. Their reaction times, saccades and fixation durations also increased as a function of scotoma size, contrary to younger adults. These declines may reflect a relative impairment in peripheral visual attention for global processing in aging. In serial search, despite older adults being generally slower, we found no difference between groups in reaction time increases for eccentric targets and for bigger scotomas. These results may stem from the difficulty of serial search, in which both groups used centrally limited attentional windows. We conclude that older adults allocate more attentional resources towards central vision compared with younger adults, impairing their peripheral processing primarily in pop-out visual search.
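A gaze-contingent artificial scotoma simply re-centres an occluding disc on the latest gaze sample at every display refresh. The sketch below shows that masking step on an image array; the pixels-per-degree factor, gray fill level and function name are assumptions for illustration, not the study's display parameters.

```python
import numpy as np

def apply_central_scotoma(frame, gaze_xy, diameter_deg, pix_per_deg=30.0, fill=0.5):
    """Occlude a circular region around the current gaze position.

    frame        : H x W grayscale image array, values in [0, 1]
    gaze_xy      : (x, y) gaze position in pixels, e.g. the latest eye-tracker sample
    diameter_deg : scotoma diameter in degrees of visual angle (3, 5 or 7 in the study)
    pix_per_deg  : display-dependent conversion factor (assumed value here)
    fill         : gray level drawn inside the scotoma
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    radius_pix = 0.5 * diameter_deg * pix_per_deg
    mask = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius_pix ** 2
    out = frame.copy()
    out[mask] = fill
    return out

# On each refresh the mask is recentred on the new gaze sample, so the scotoma
# follows the eye and always covers central vision.
frame = np.random.rand(600, 800)
masked = apply_central_scotoma(frame, gaze_xy=(400, 300), diameter_deg=5)
```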


2007 ◽  
Vol 97 (1) ◽  
pp. 942-947 ◽  
Author(s):  
Neil W. D. Thomas ◽  
Martin Paré

We studied whether the lateral intraparietal (LIP) area—a subdivision of parietal cortex anatomically interposed between visual cortical areas and saccade executive centers—contains neurons with activity patterns sufficient to contribute to the active process of selecting saccade targets in visual search. Visually responsive neurons were recorded while monkeys searched for a color-different target presented concurrently with seven distractors evenly distributed in a circular search array. We found that LIP neurons initially responded indiscriminately to the presentation of a visual stimulus in their response fields, regardless of its feature and identity. Their activation nevertheless evolved to signal the search target before saccade initiation: an ideal observer could reliably discriminate the target from the individual activation of 60% of neurons, on average, 138 ms after stimulus presentation and 26 ms before saccade initiation. Importantly, the timing of LIP neuronal discrimination varied proportionally with reaction times. These findings suggest that LIP activity reflects the selection of both the search target and the targeting saccade during active visual search.
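The "ideal observer" discrimination referred to above is typically an ROC analysis comparing a neuron's firing rates when the target versus a distractor falls in its response field. The sketch below computes that ROC area on made-up spike rates; the rate values, window and discrimination criterion are illustrative assumptions, not the recorded data or the exact procedure of the study.

```python
import numpy as np

def roc_area(target_rates, distractor_rates):
    """Ideal-observer (ROC area) discrimination between two sets of firing rates.

    Equals the probability that a randomly drawn target-in-field rate exceeds
    a randomly drawn distractor-in-field rate (ties count half).
    """
    t = np.asarray(target_rates)[:, None]
    d = np.asarray(distractor_rates)[None, :]
    return (t > d).mean() + 0.5 * (t == d).mean()

# Illustrative (made-up) spike rates in some post-stimulus window:
rng = np.random.default_rng(0)
target_in_rf = rng.normal(45, 8, size=40)      # spikes/s, target in the response field
distractor_in_rf = rng.normal(30, 8, size=40)  # spikes/s, distractor in the field

# A neuron is said to discriminate the target once the ROC area crosses some
# criterion; the criterion and time-stepping used in the study may differ.
print(f"ROC area = {roc_area(target_in_rf, distractor_in_rf):.2f}")
```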


2020 ◽  
Author(s):  
Lluís Hernández-Navarro ◽  
Ainhoa Hermoso-Mendizabal ◽  
Daniel Duque ◽  
Alexandre Hyafil ◽  
Jaime de la Rocha

It is commonly assumed that, during perceptual decisions, the brain integrates stimulus evidence until reaching a decision, and then performs the response. There are conditions, however (e.g. time pressure), in which the initiation of the response must be prepared in anticipation of the stimulus presentation. It is therefore not clear when the timing and the choice of perceptual responses depend exclusively on evidence accumulation, or when preparatory motor signals may interfere with this process. Here, we find that, in a free reaction time auditory discrimination task in rats, the timing of fast responses does not depend on the stimulus, although the choices do, suggesting a decoupling of the mechanisms of action initiation and choice selection. This behavior is captured by a novel model, the Parallel Sensory Integration and Action Model (PSIAM), in which response execution is triggered whenever one of two processes, Action Initiation or Evidence Accumulation, reaches a bound, while choice category is always set by the latter. Based on this separation, the model accurately predicts the distribution of reaction times when the stimulus is omitted, advanced or delayed. Furthermore, we show that changes in Action Initiation mediate both post-error slowing and a gradual slowing of the responses within each session. Overall, these results extend the standard models of perceptual decision-making, and shed new light on the interaction between action preparation and evidence accumulation.
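The race logic described in the abstract can be sketched as follows: an evidence accumulator and a stimulus-independent action-initiation ramp run in parallel, the response is executed when either hits its bound, and the choice is always read out from the accumulator. All parameter values, noise assumptions and the function name below are illustrative; this is not the authors' fitted model.

```python
import numpy as np

def simulate_psiam_trial(drift, rng, dt=1e-3, ea_bound=1.0, ai_slope=2.0,
                         ai_bound=1.0, t_max=2.0, non_decision=0.1):
    """One trial of a PSIAM-style race, following the description in the abstract.

    Two processes run in parallel:
      * Evidence Accumulation (EA): a drift-diffusion process driven by the stimulus.
      * Action Initiation (AI): a stimulus-independent ramp toward its own bound.
    The response is executed when EITHER process reaches its bound; the choice is
    always read out from the sign of EA. All parameter values are illustrative.
    """
    ea, ai, t = 0.0, 0.0, 0.0
    while t < t_max:
        t += dt
        ea += drift * dt + rng.normal(0.0, np.sqrt(dt))
        ai += ai_slope * dt
        if abs(ea) >= ea_bound or ai >= ai_bound:
            choice = 1 if ea > 0 else 0   # choice set by EA even when AI triggers the response
            return t + non_decision, choice
    return t + non_decision, 1 if ea > 0 else 0

rng = np.random.default_rng(1)
trials = [simulate_psiam_trial(drift=0.8, rng=rng) for _ in range(1000)]
rts, choices = zip(*trials)
print(f"mean RT = {np.mean(rts):.3f} s, accuracy = {np.mean(choices):.2f}")
```

Fast responses triggered by the Action Initiation ramp have stimulus-independent timing, yet their choices still reflect whatever evidence has accumulated, reproducing the decoupling reported in the abstract.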


1993 ◽  
Vol 25 (1) ◽  
pp. 176-202 ◽  
Author(s):  
Nicholas Bambos ◽  
Jean Walrand

In this paper we study the following general class of concurrent processing systems. There are several different classes of processors (servers) and many identical processors within each class. There is also a continuous random flow of jobs, arriving for processing at the system. Each job needs to engage concurrently several processors from various classes in order to be processed. After acquiring the needed processors the job begins to be executed. Processing is done non-preemptively, lasts for a random amount of time, and then all the processors are released simultaneously. Each job is specified by its arrival time, its processing time, and the list of processors that it needs to access simultaneously. The random flow (sequence) of jobs has a stationary and ergodic structure. There are several possible policies for scheduling the jobs on the processors for execution; it is up to the system designer to choose the scheduling policy to achieve certain objectives. We focus on the effect that the choice of scheduling policy has on the asymptotic behavior of the system at large times and especially on its stability, under general stationary and ergodic input flows.
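The job model described above (simultaneous acquisition of one processor from each required class, non-preemptive service, simultaneous release) can be made concrete with a small discrete-event simulation. The sketch below implements a first-come-first-served policy as one example of the scheduling policies whose effect on stability the paper studies; the data layout and policy choice are illustrative, not the paper's formalism.

```python
import heapq
from collections import deque

def simulate_fcfs(jobs, capacity):
    """Discrete-event simulation of the concurrent-processing model under a
    first-come-first-served policy (one of many possible scheduling policies).

    jobs     : list of (arrival_time, service_time, needed_classes), where
               needed_classes is the set of processor classes the job must
               hold simultaneously (one processor from each class).
    capacity : dict mapping processor class -> number of identical processors.
    Returns the departure time of each job. Illustrative sketch only.
    """
    free = dict(capacity)
    queue = deque()                      # waiting jobs, in arrival order
    events = [(t, i, "arrival") for i, (t, _, _) in enumerate(jobs)]
    heapq.heapify(events)
    departures = {}
    while events:
        now, i, kind = heapq.heappop(events)
        if kind == "arrival":
            queue.append(i)
        else:                            # job i finishes: release all its processors at once
            departures[i] = now
            for c in jobs[i][2]:
                free[c] += 1
        # start head-of-line jobs whose whole processor set is available (FCFS)
        while queue and all(free[c] >= 1 for c in jobs[queue[0]][2]):
            j = queue.popleft()
            for c in jobs[j][2]:
                free[c] -= 1
            heapq.heappush(events, (now + jobs[j][1], j, "departure"))
    return departures

jobs = [(0.0, 2.0, {"A", "B"}), (0.5, 1.0, {"B"}), (0.6, 1.0, {"A"})]
print(simulate_fcfs(jobs, capacity={"A": 1, "B": 1}))
```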


Author(s):  
Cepi Ramdani ◽  
Indah Soesanti ◽  
Sunu Wibirama

The Fuzzy C-Means (FCM) algorithm is a clustering algorithm with good accuracy for segmentation problems, and it is applied in many disciplines of science and everyday life. However, one of its shortcomings is its long processing time. This research mainly analyses the effect of the segmentation parameters on sequential and parallel processing time; a second goal is to reduce segmentation time using a parallel approach. Parallel processing was implemented on an Nvidia GeForce GT540M GPU using the CUDA v8.0 framework. The experiments were conducted on natural RGB color images of size 256x256 and 512x512. The segmentation parameters were set as follows: weight in the range 2-3, number of iterations 50-150, number of clusters 2-8, and error tolerance (epsilon) 0.1 to 1e-06. The results show that parallel processing is 4.5 times faster than sequential processing, while the segmentations produced by the two processing types are 100% identical. Regarding the influence of the segmentation parameters on processing time: a greater weight shortens sequential processing time but has no effect on parallel processing time; greater numbers of iterations or clusters increase both sequential and parallel processing time; and the epsilon parameter has no clear or predictable effect on either.
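For reference, the sketch below is a plain NumPy version of the FCM updates whose parameters (weight m, iteration count, cluster count, epsilon) the study varies. It is a minimal sequential baseline, not the paper's CUDA implementation; the initialization and stopping rule are common textbook choices and may differ from the authors'.

```python
import numpy as np

def fuzzy_c_means(pixels, n_clusters=4, m=2.0, max_iter=100, eps=1e-3, seed=0):
    """Minimal NumPy Fuzzy C-Means (sequential reference sketch).

    pixels : (N, 3) array of RGB values; m is the fuzziness weight,
    eps the error tolerance on the membership matrix.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(pixels), n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # memberships: rows sum to 1
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = 1.0 / dist ** (2.0 / (m - 1.0))
        new_u = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < eps:          # stop when memberships settle
            u = new_u
            break
        u = new_u
    return u.argmax(axis=1), centers

# Example on random "pixels"; a real run would use the flattened 256x256 image.
labels, centers = fuzzy_c_means(np.random.rand(1000, 3), n_clusters=4)
```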


Author(s):  
Masashi Yoshikawa ◽  
Koji Mineshima ◽  
Hiroshi Noji ◽  
Daisuke Bekki

In logic-based approaches to reasoning tasks such as Recognizing Textual Entailment (RTE), it is important for a system to have a large amount of knowledge data. However, there is a tradeoff between adding more knowledge data for improved RTE performance and maintaining an efficient RTE system, as such a large database is problematic in terms of memory usage and computational complexity. In this work, we show that the processing time of a state-of-the-art logic-based RTE system can be significantly reduced by replacing its search-based axiom injection (abduction) mechanism with one based on Knowledge Base Completion (KBC). We integrate this mechanism into a Coq plugin that provides a proof automation tactic for natural language inference. Additionally, we show empirically that adding new knowledge data contributes to better RTE performance without harming processing speed in this framework.
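The core idea of KBC-based axiom injection is to score candidate lexical axioms with a learned knowledge-base-completion model instead of searching a large database. The sketch below uses a generic DistMult-style scorer on made-up embeddings; the scoring function, relation name and threshold idea are assumptions for illustration, not the specific model or Coq plugin interface used in the paper.

```python
import numpy as np

def distmult_score(head_vec, rel_vec, tail_vec):
    """DistMult-style KBC score for a (head, relation, tail) triple.

    Higher scores mean the completion model considers the relation more
    plausible. Generic KBC scorer, not the paper's trained model.
    """
    return float(np.sum(head_vec * rel_vec * tail_vec))

# Hypothetical embeddings for lexical items and a "hyponym-of" relation.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ("dog", "animal", "car")}
rel_hyponym = rng.normal(size=50)

# Instead of searching an axiom database, candidate axioms such as
# "dog => animal" are scored, and only those above a threshold would be
# injected into the proof as axioms.
for tail in ("animal", "car"):
    print(f"dog -> {tail}: {distmult_score(emb['dog'], rel_hyponym, emb[tail]):.2f}")
```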


2004 ◽  
Vol 120 ◽  
pp. 555-562
Author(s):  
D. Apelian ◽  
S. K. Chaudhury

Heat treatment and other post-casting treatments of cast components have always been important steps in the control of microstructure and the resultant properties. In the past, the solutionizing, quenching and ageing process steps may have “required” over 20 hours of processing time in total. With the advent of fluidized bed (FB) reactors, processing time has been dramatically reduced. For example, instead of 8-10 hours of solutionizing time in a conventional furnace, the time required in an FB is less than an hour. Experiments were performed with Al-Si-Mg alloys (both Sr-modified and unmodified) having different diffusion distances (different DAS), for different reaction times and temperatures. Both the model and the experimental results are presented and discussed.


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 62-62 ◽  
Author(s):  
A Grodon ◽  
M Fahle

Some features of complex visual displays are analysed effortlessly and in parallel by the human visual system, without requiring scrutiny. Examples of such features are changes of luminance, colour, orientation, and movement. We measured thresholds as well as reaction times for the detection of abrupt spatial changes in luminance in the presence of luminance gradients, in order to evaluate the ability of the system to ignore such gradients. Stimuli were presented on a 20 inch monitor under control of a Silicon Graphics workstation. Luminance was calibrated by means of a photometer (Minolta). We presented between 4 and 14 rectangles simultaneously on a homogeneous dark background. Rectangles were arranged on an incomplete, imaginary circle around the fixation point and luminance changed stepwise from one rectangle to the next. Five observers had to indicate whether all luminance steps between the rectangles were subjectively equal or whether one luminance step was larger. Detection thresholds were determined for the larger step as a function of the small steps (‘base step size’) by means of an adaptive staircase procedure. The smallest luminance steps were detected when the base step size was zero and when only a few rectangles were presented. Thresholds increased slightly with the number of rectangles displayed simultaneously, and to a greater extent (by up to a factor of 2) with increasing base step size. The results of all observers improved significantly through practice, by about a factor of 2. We conclude that the visual system is unable to completely eliminate gradients of luminance and to isolate sharp transitions in luminance.
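Adaptive staircase procedures of the kind mentioned above adjust the stimulus increment trial by trial, raising it after errors and lowering it after correct responses, and estimate the threshold from the reversal points. The sketch below is a generic 2-down/1-up staircase with a simulated observer; the step size, reversal count and response rule are assumptions, not the exact procedure used in the study.

```python
import random

def run_staircase(respond, start=0.3, step=0.05, n_reversals=8):
    """Simple 2-down/1-up adaptive staircase (converges near 71% correct).

    `respond(level)` returns True if the observer detected the larger
    luminance step at this increment level. Generic sketch; the study's
    exact staircase rule may differ.
    """
    level, correct_in_row, direction = start, 0, 0
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row == 2:                 # two correct in a row -> harder
                correct_in_row = 0
                if direction == +1:
                    reversal_levels.append(level)
                direction = -1
                level = max(level - step, 0.0)
        else:                                       # one error -> easier
            correct_in_row = 0
            if direction == -1:
                reversal_levels.append(level)
            direction = +1
            level += step
    # threshold estimate: mean of the last few reversal levels
    tail = reversal_levels[-6:]
    return sum(tail) / len(tail)

# Example with a simulated observer whose true threshold is about 0.15:
print(run_staircase(lambda lvl: random.random() < (0.5 if lvl < 0.15 else 0.95)))
```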

