Using AI-Based Classification Techniques to Process EEG Data Collected during the Visual Short-Term Memory Assessment

2020 ◽  
Vol 2020 ◽  
pp. 1-12 ◽  
Author(s):  
Milos Antonijevic ◽  
Miodrag Zivkovic ◽  
Sladjana Arsic ◽  
Aleksandar Jevremovic

Visual short-term memory (VSTM) is defined as the ability to remember a small amount of visual information, such as colors and shapes, over a short period of time. VSTM is a part of short-term memory, which can hold information for up to 30 seconds. In this paper, we present the results of research in which we classified data gathered with an electroencephalogram (EEG) during a VSTM experiment. The experiment was performed with 12 participants who were required to remember as many details as possible from two images, displayed for 1 minute. The first assessment was done in an isolated environment, while the second was done in front of the other participants in order to increase the examinee's stress. The EEG data were classified using four algorithms: Naive Bayes, support vector machine (SVM), k-nearest neighbors (KNN), and random forest. The results show that AI-based classification can be used successfully in the proposed way: the order of the presented images was classified correctly 90.12% of the time and the type of the displayed image 90.51% of the time.
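The abstract names the four classifiers but not the feature pipeline or toolchain. Below is a minimal sketch of how such a comparison could be set up, assuming scikit-learn and pre-extracted EEG feature vectors; the array shapes, labels, and hyperparameters are illustrative assumptions, not the authors' setup.

    # Minimal sketch (not the authors' code): comparing the four classifiers named in
    # the abstract on pre-extracted EEG feature vectors, using cross-validated accuracy.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(240, 64))      # placeholder EEG features (240 trials x 64 features)
    y = rng.integers(0, 2, size=240)    # placeholder binary labels (e.g., first vs. second image)

    classifiers = {
        "Naive Bayes": GaussianNB(),
        "SVM": SVC(kernel="rbf"),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }

    for name, clf in classifiers.items():
        pipe = make_pipeline(StandardScaler(), clf)   # scale features before classification
        scores = cross_val_score(pipe, X, y, cv=5)    # 5-fold cross-validated accuracy
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")

With real data, X would hold per-trial EEG features (e.g., band-power or time-window amplitudes) and y the labels the paper reports classifying: image order or image type.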

2020 ◽  
Vol 117 (51) ◽  
pp. 32329-32339
Author(s):  
Jing Liu ◽  
Hui Zhang ◽  
Tao Yu ◽  
Duanyu Ni ◽  
Liankun Ren ◽  
...  

Visual short-term memory (VSTM) enables humans to form a stable and coherent representation of the external world. However, the nature and temporal dynamics of the neural representations in VSTM that support this stability are barely understood. Here we combined human intracranial electroencephalography (iEEG) recordings with analyses using deep neural networks and semantic models to probe the representational format and temporal dynamics of information in VSTM. We found clear evidence that VSTM maintenance occurred in two distinct representational formats which originated from different encoding periods. The first format derived from an early encoding period (250 to 770 ms) and corresponded to higher-order visual representations. The second format originated from a late encoding period (1,000 to 1,980 ms) and contained abstract semantic representations. These representational formats were overall stable during maintenance, with no consistent transformation across time. Nevertheless, maintenance of both representational formats showed substantial arrhythmic fluctuations, i.e., waxing and waning at irregular intervals. Increases in the maintained representational formats were specific to particular phases of hippocampal low-frequency activity. Our results demonstrate that human VSTM simultaneously maintains representations at different levels of processing, from higher-order visual information to abstract semantic representations, which are stably maintained via coupling to hippocampal low-frequency activity.
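The phase-specificity claim rests on relating a time-resolved measure of representational strength to the phase of hippocampal low-frequency activity. A rough sketch of that analysis step, assuming NumPy/SciPy; the filter band, sampling rate, signals, and variable names below are assumptions for illustration, not the authors' pipeline.

    # Illustrative sketch (not the authors' pipeline): bin a time-resolved
    # "representation strength" signal by the phase of hippocampal low-frequency activity.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000                                    # sampling rate in Hz (assumed)
    t = np.arange(0, 4, 1 / fs)                  # 4 s of maintenance-period data
    hippocampal_lfp = np.random.randn(t.size)    # placeholder iEEG trace
    repr_strength = np.random.rand(t.size)       # placeholder decoding/RSA strength over time

    # Band-pass the LFP in a low-frequency band (e.g., 1-8 Hz) and extract its phase.
    b, a = butter(4, [1, 8], btype="bandpass", fs=fs)
    phase = np.angle(hilbert(filtfilt(b, a, hippocampal_lfp)))

    # Average representation strength within phase bins; phase-specific increases would
    # show up as elevated means in a subset of bins.
    bins = np.linspace(-np.pi, np.pi, 13)
    bin_idx = np.digitize(phase, bins) - 1
    mean_by_phase = [repr_strength[bin_idx == i].mean() for i in range(len(bins) - 1)]
    print(np.round(mean_by_phase, 3))

In practice the strength signal would come from the paper's DNN- or semantic-model-based decoding, and statistical testing across trials would replace this single-trace illustration.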


1968 ◽  
Vol 27 (3_suppl) ◽  
pp. 1155-1158 ◽  
Author(s):  
Daniel N. Robinson

Ss were exposed to discontinuously presented signals in a compensatory tracking task. Signals were “on” for durations of 16.7, 50, 150, 300, or 500 msec. followed by “off” periods of the same durations. From measures of tracking accuracy under the various on-off combinations, the following conclusions emerge: (a) most of the utilizable visual information is present in the first 15 to 50 msec.; (b) the short-term storage capacity, i.e., the temporal range over which the system can “coast” without input, extends to at least 300 msec.; (c) measures taken under stimulating conditions of long duration and time-varying characteristics result in different assessments of visual short-term memory than those obtained under two-flash (transient response) conditions.


2002 ◽  
Vol 55 (3) ◽  
pp. 753-774 ◽  
Author(s):  
Jackie Andrade ◽  
Eva Kemps ◽  
Yves Werniers ◽  
Jon May ◽  
Arnaud Szmalec

Several authors have hypothesized that visuo-spatial working memory is functionally analogous to verbal working memory. Irrelevant background speech impairs verbal short-term memory. We investigated whether irrelevant visual information has an analogous effect on visual short-term memory, using a dynamic visual noise (DVN) technique known to disrupt visual imagery (Quinn & McConnell, 1996b). Experiment 1 replicated the effect of DVN on pegword imagery. Experiments 2 and 3 showed no effect of DVN on recall of static matrix patterns, despite a significant effect of a concurrent spatial tapping task. Experiment 4 showed no effect of DVN on encoding or maintenance of arrays of matrix patterns, despite testing memory by a recognition procedure to encourage visual rather than spatial processing. Serial position curves showed a one-item recency effect typical of visual short-term memory. Experiment 5 showed no effect of DVN on short-term recognition of Chinese characters, despite effects of visual similarity and a concurrent colour memory task that confirmed visual processing of the characters. We conclude that irrelevant visual noise does not impair visual short-term memory. Visual working memory may not be functionally analogous to verbal working memory, and different cognitive processes may underlie visual short-term memory and visual imagery.


Author(s):  
Kevin Dent

In two experiments participants retained a single color or a set of four spatial locations in memory. During a 5 s retention interval participants viewed either flickering dynamic visual noise or a static matrix pattern. In Experiment 1, memory was assessed using a recognition procedure in which participants indicated whether a particular test stimulus matched the memorized stimulus. In Experiment 2, participants attempted either to reproduce the locations or to pick the color from the full range of possibilities. Both experiments revealed effects of dynamic visual noise (DVN) on memory for colors but not for locations. The implications of the results for theories of working memory, and the methodological prospects of DVN as an experimental tool, are discussed.


Author(s):  
Yuhong Jiang

When two dot arrays are briefly presented, separated by a short interval of time, visual short-term memory of the first array is disrupted if the interval between arrays is shorter than 1300-1500 ms (Brockmole, Wang, & Irwin, 2002). Here we investigated whether such a time window was triggered by the necessity to integrate the arrays. Using a probe task we removed the need for integration but retained the requirement to represent the images. We found that a long time window was needed for performance to reach asymptote even when integration across images was not required. Furthermore, this window was lengthened if subjects had to remember the locations of the second array, but not if they only conducted a visual search within it. We suggest that a temporal window is required for consolidation of the first array, which is vulnerable to disruption by subsequent images that also need to be memorized.

