EPOC outside the shield: comparing the performance of a consumer-grade EEG device in shielded and unshielded environments

2020 ◽  
Author(s):  
Jordan Wehrman ◽  
Sidsel Sörensen ◽  
Peter de Lissa ◽  
Nicholas A. Badcock

Abstract
Low-cost, portable electroencephalographic (EEG) headsets have become commercially available in the last 10 years. One such system, Emotiv’s EPOC, has been modified to allow event-related potential (ERP) research. Because of these innovations, EEG research may become more widely available in non-traditional settings. Although the EPOC has previously been shown to provide data comparable to research-grade equipment and has been used in real-world settings, how it performs without the electrical shielding used in research-grade laboratories has yet to be systematically tested. In the current article we address this gap by asking participants to perform a simple EEG experiment in shielded and unshielded contexts. The experiment involved viewing human faces versus wristwatch faces, presented either upright or inverted, a method that elicits the face-sensitive N170 ERP. In both shielded and unshielded contexts, the N170 amplitude was larger when participants viewed human faces and peaked later when a human face was inverted. More importantly, Bayesian analysis showed no difference in the N170 measured in the shielded and unshielded contexts. Further, the signals recorded in both contexts were highly correlated. The EPOC appears to reliably record EEG signals without a purpose-built electrically shielded room or a laboratory-grade preamplifier.
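
A minimal sketch (not the authors' analysis pipeline) of how an N170 comparison of this kind can be computed: average stimulus-locked EEG epochs into an ERP, take the most negative value in an assumed N170 window, and correlate the waveforms from the two recording contexts. The array names, sampling rate, and window are illustrative assumptions.

```python
# Minimal sketch, assuming `epochs_shielded` and `epochs_unshielded` are
# arrays of shape (n_trials, n_samples), time-locked to stimulus onset,
# recorded at the EPOC's 128 Hz sampling rate.
import numpy as np

FS = 128                      # assumed sampling rate (Hz)
N170_WINDOW = (0.13, 0.20)    # assumed N170 search window (s post-stimulus)

def n170_peak(epochs, fs=FS, window=N170_WINDOW):
    """Return the trial-averaged ERP and its most negative value in the window."""
    erp = epochs.mean(axis=0)                    # average across trials
    start, stop = (int(t * fs) for t in window)  # window bounds in samples
    return erp, erp[start:stop].min()            # N170 is a negative deflection

# erp_s, n170_s = n170_peak(epochs_shielded)
# erp_u, n170_u = n170_peak(epochs_unshielded)
# similarity = np.corrcoef(erp_s, erp_u)[0, 1]   # waveform correlation
```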

Author(s):  
Pavan Narayana A ◽  
Janardhan Guptha S ◽  
Deepak S ◽  
Pujith Sai P ◽  
...  

January 27 2020 is a date that will be remembered by the Indian people for decades: a deadly virus entered the life of a young woman, and it has since proved so threatening that it has taken the lives of 3.26 lakh people in India alone. From the start of the outbreak, the government made it mandatory to wear masks in crowded or public areas such as markets, malls, and private gatherings. Because it is difficult for a person at an entrance to check whether everyone entering is wearing a mask, in this paper we design a smart-door face mask detection system that checks who is and is not wearing a mask. The system is built using technologies such as OpenCV, MTCNN, CNN, IFTTT, and ThingSpeak, and is programmed in Python. MTCNN, using the Viola-Jones algorithm, detects the human faces present in the frame: the face is first detected in a grayscale image and its location is then mapped onto the coloured image. The CNN that detects masks on the detected faces is built from sample datasets on top of MobileNetV2, which acts as the object detector; in our case the object is the mask. ThingSpeak, an open-source Internet of Things application, is used to display the information received from the smart door. The deployed application can also detect when people are moving. With this smart-door face mask detection, as one part of stopping the spread of the virus, we can help prevent transmission and regain our normal lives.
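
An illustrative sketch of the detection step described above, combining the `mtcnn` face detector with a MobileNetV2-based mask classifier. The model file name "mask_detector.h5", the 224x224 input size, and the single sigmoid output are assumptions, not details taken from the paper.

```python
# Hedged sketch: detect faces with MTCNN, then classify each face crop
# as mask / no-mask with an assumed pre-trained MobileNetV2 classifier.
import cv2
import numpy as np
from mtcnn import MTCNN
from tensorflow.keras.models import load_model

detector = MTCNN()                            # face detector
classifier = load_model("mask_detector.h5")   # hypothetical MobileNetV2-based model

def check_frame(frame_bgr):
    """Return a list of (bounding_box, has_mask) for every face in the frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    results = []
    for face in detector.detect_faces(rgb):
        x, y, w, h = face["box"]
        x, y = max(0, x), max(0, y)                       # guard against negative boxes
        crop = cv2.resize(rgb[y:y + h, x:x + w], (224, 224)) / 255.0
        prob_mask = classifier.predict(crop[np.newaxis])[0][0]  # assumes sigmoid output
        results.append(((x, y, w, h), prob_mask > 0.5))
    return results
```

In a smart-door deployment, the boolean results from each camera frame would then be pushed to a ThingSpeak channel (and optionally trigger an IFTTT applet) to log or act on non-compliance.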


Author(s):  
Tanjimul Ahad Asif ◽  
Baidya Nath Saha

Instagram is one of the best-known and fastest-growing media-sharing platforms. It allows users to share photos and videos with followers. There are many ways to search for images on Instagram, but one of the most familiar is the hashtag. Hashtag search enables users to find precise results on Instagram. However, there are no rules governing hashtag use, so a hashtag often does not match the uploaded image, and users are then unable to find relevant results. This research aims to filter human-face images in hashtag-based search results on Instagram. Our study extends the work in [2] by implementing image-processing techniques that detect human faces and separate the identified images in hashtag-based search results using face detection.
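
A minimal sketch of the filtering step this describes: given image files returned for a hashtag search, separate those that contain a detectable human face. The paper does not name its face detection technique; the OpenCV Haar-cascade detector below is a stand-in assumption, and `hashtag_result_paths` is a hypothetical input list.

```python
# Hedged sketch: partition hashtag search results by presence of a human face.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def contains_face(path):
    """True if at least one face is detected in the image at `path`."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

# face_results = [p for p in hashtag_result_paths if contains_face(p)]
# other_results = [p for p in hashtag_result_paths if not contains_face(p)]
```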


2021 ◽  
Author(s):  
Srividya Pattisapu ◽  
Supratim Ray

Stimulus-induced narrow-band gamma oscillations (30-70 Hz) in the human electroencephalogram (EEG) have been linked to attentional and memory mechanisms and are abnormal in mental health conditions such as autism, schizophrenia and Alzheimer's disease. This suggests that gamma oscillations could be valuable both as a research tool and as an inexpensive, non-invasive biomarker for disease evaluation. However, since absolute power in the EEG decreases rapidly with increasing frequency following a "1/f" power law, and the gamma band includes the line-noise frequency, these oscillations are highly susceptible to instrument noise. Previous studies that recorded stimulus-induced gamma oscillations used expensive research-grade EEG amplifiers to address this issue. While low-cost EEG amplifiers have become popular in brain-computer interface applications, which mainly rely on low-frequency oscillations (<30 Hz) or steady-state visually evoked potentials, whether they can also be used to measure stimulus-induced gamma oscillations is unknown. We recorded EEG signals using a low-cost, open-source amplifier (OpenBCI) and a traditional, research-grade amplifier (Brain Products GmbH) in male (N = 6) and female (N = 5) subjects (22-29 years) while they viewed full-screen static gratings that are known to induce gamma oscillations. OpenBCI recordings showed a gamma response in almost all the subjects who showed a gamma response in the Brain Products recordings, and the spectral and temporal profiles of these responses in the alpha (8-13 Hz) and gamma bands were highly correlated between the OpenBCI and Brain Products recordings. These results suggest that low-cost amplifiers can potentially be used for stimulus-induced gamma response detection, making such research and its medical applications more accessible.
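
A rough sketch (not the authors' code) of the kind of spectral estimate involved: gamma-band power during the stimulus expressed in dB relative to a pre-stimulus baseline, with the line-noise frequency excluded. The segment variables, sampling rate, and 50 Hz notch are assumptions for illustration.

```python
# Hedged sketch: stimulus-induced gamma power change relative to baseline.
# `baseline` and `stimulus` are assumed 1-D EEG segments from one occipital channel.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band=(30, 70)):
    """Mean PSD within the band, skipping an assumed 50 Hz line-noise bin."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs))     # ~1 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    mask &= ~((freqs > 48) & (freqs < 52))                  # exclude line noise
    return psd[mask].mean()

def gamma_change_db(baseline, stimulus, fs):
    """Gamma power change in dB, stimulus period relative to baseline."""
    return 10 * np.log10(band_power(stimulus, fs) / band_power(baseline, fs))

# change_openbci = gamma_change_db(base_obci, stim_obci, fs=250)
# change_brainprod = gamma_change_db(base_bp, stim_bp, fs=250)
```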


Author(s):  
Meduri Sree Vidya

This work uses the eigenfaces (PCA) approach for face recognition. The face is an important part of who you are and of how people identify you. In face recognition there are two types of comparison: verification and identification. There are about 80 nodal points on a human face; a few of the nodal points measured by software are the distance between the eyes, the width of the nose, the depth of the eye sockets, the cheekbones, the jaw line, and the chin. With this method we can take attendance automatically. Face recognition is performed by projecting a new image onto a low-dimensional linear "face space" defined by the eigenfaces. This method is reliable and low cost, offers faster access, and reduces manpower.
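
A minimal eigenfaces sketch: learn the low-dimensional "face space" with PCA and identify a new face by its nearest neighbour in that space. The variables `train_faces` (flattened, aligned grayscale face images), `labels`, and the choice of 50 components are illustrative assumptions.

```python
# Hedged sketch of eigenface-based identification (e.g. for automatic attendance).
import numpy as np
from sklearn.decomposition import PCA

pca = PCA(n_components=50)                  # 50 eigenfaces (assumed)
train_proj = pca.fit_transform(train_faces) # project training faces into face space

def identify(new_face_flat):
    """Project a new face onto the eigenface space; return the closest identity."""
    proj = pca.transform(new_face_flat.reshape(1, -1))
    distances = np.linalg.norm(train_proj - proj, axis=1)
    return labels[np.argmin(distances)]
```

Verification would instead compare the distance between two projected faces against a threshold, rather than searching the whole gallery.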


2007 ◽  
Vol 19 (11) ◽  
pp. 1815-1826 ◽  
Author(s):  
Roxane J. Itier ◽  
Claude Alain ◽  
Katherine Sedore ◽  
Anthony R. McIntosh

Unlike most other objects that are processed analytically, faces are processed configurally. This configural processing is reflected early in visual processing following face inversion and contrast reversal, as an increase in the N170 amplitude, a scalp-recorded event-related potential. Here, we show that these face-specific effects are mediated by the eye region. That is, they occurred only when the eyes were present, but not when eyes were removed from the face. The N170 recorded to inverted and negative faces likely reflects the processing of the eyes. We propose a neural model of face processing in which face- and eye-selective neurons situated in the superior temporal sulcus region of the human brain respond differently to the face configuration and to the eyes depending on the face context. This dynamic response modulation accounts for the N170 variations reported in the literature. The eyes may be central to what makes faces so special.


2019 ◽  
Author(s):  
Yasmin Allen-Davidian ◽  
Manuela Russo ◽  
Naohide Yamamoto ◽  
Jordy Kaufman ◽  
Alan J. Pegna ◽  
...  

Face Inversion Effects (FIEs) – differences in response to upside down faces compared to upright faces – occur for both behavioural and electrophysiological responses when people view face stimuli. In EEG, the inversion of a face is often reported to evoke an enhanced amplitude and delayed latency of the N170 event-related potential. This response has historically been attributed to the indexing of specialised face processing mechanisms within the brain. However, inspection of the literature revealed that while the N170 is consistently delayed to photographed, schematic, Mooney and line drawn face stimuli, only naturally photographed faces enhance the amplitude upon inversion. This raises the possibility that the increased N170 amplitudes to inverted faces may have other origins than the inversion of the face’s structural components. In line with previous research establishing the N170 as a prediction error signal, we hypothesise that the unique N170 amplitude response to inverted photographed faces stems from multiple expectation violations, over and above structural inversion. For instance, rotating an image of a face upside down not only violates the expectation that faces appear upright, but also lifelong priors that illumination comes from above and gravity pulls from below. To test this hypothesis, we recorded EEG whilst participants viewed face stimuli (upright versus inverted), where the faces were illuminated from above versus below, and where the models were photographed upright versus hanging upside down. The N170 amplitudes were found to be modulated by a complex interaction between orientation, lighting and gravity factors, with the amplitudes largest when faces consistently violated all three expectations and smallest when all these factors concurred with expectations. These results confirm our hypothesis that FIEs on N170 amplitudes are driven by a violation of the viewer’s expectations across several parameters that characterise faces, rather than a disruption in the configurational disposition of its features.


2001 ◽  
Vol 13 (7) ◽  
pp. 937-951 ◽  
Author(s):  
Noam Sagiv ◽  
Shlomo Bentin

The range of specificity and the response properties of the extrastriate face area were investigated by comparing the N170 event-related potential (ERP) component elicited by photographs of natural faces, realistically painted portraits, sketches of faces, schematic faces, and by nonface meaningful and meaningless visual stimuli. Results showed that the N170 distinguished between faces and nonface stimuli when the concept of a face was clearly rendered by the visual stimulus, but it did not distinguish among different face types: Even a schematic face made from simple line fragments triggered the N170. However, in a second experiment, inversion seemed to have a different effect on natural faces in which face components were available and on the pure gestalt-based schematic faces: The N170 amplitude was enhanced when natural faces were presented upside down but reduced when schematic faces were inverted. Inversion delayed the N170 peak latency for both natural and schematic faces. Together, these results suggest that early face processing in the human brain is subserved by a multiple-component neural system in which both whole-face configurations and face parts are processed. The relative involvement of the two perceptual processes is probably determined by whether the physiognomic value of the stimuli depends upon holistic configuration, or whether the individual components can be associated with faces even when presented outside the face context.


2002 ◽  
Vol 14 (2) ◽  
pp. 199-209 ◽  
Author(s):  
Michelle de Haan ◽  
Olivier Pascalis ◽  
Mark H. Johnson

Newborn infants respond preferentially to simple face-like patterns, raising the possibility that the face-specific regions identified in the adult cortex are functioning from birth. We sought to evaluate this hypothesis by characterizing the specificity of infants' electrocortical responses to faces in two ways: (1) comparing responses to faces of humans with those to faces of nonhuman primates; and (2) comparing responses to upright and inverted faces. Adults' face-responsive N170 event-related potential (ERP) component showed specificity to upright human faces that was not observable at any point in the ERPs of infants. A putative “infant N170” did show sensitivity to the species of the face, but the orientation of the face did not influence processing until a later stage. These findings suggest a process of gradual specialization of cortical face processing systems during postnatal development.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Allie R. Geiger ◽  
Benjamin Balas

Abstract
Face recognition is supported by selective neural mechanisms that are sensitive to various aspects of facial appearance. These include event-related potential (ERP) components like the P100 and the N170 which exhibit different patterns of selectivity for various aspects of facial appearance. Examining the boundary between faces and non-faces using these responses is one way to develop a more robust understanding of the representation of faces in extrastriate cortex and determine what critical properties an image must possess to be considered face-like. Robot faces are a particularly interesting stimulus class to examine because they can differ markedly from human faces in terms of shape, surface properties, and the configuration of facial features, but are also interpreted as social agents in a range of settings. In the current study, we thus chose to investigate how ERP responses to robot faces may differ from the response to human faces and non-face objects. In two experiments, we examined how the P100 and N170 responded to human faces, robot faces, and non-face objects (clocks). In Experiment 1, we found that robot faces elicit intermediate responses from face-sensitive components relative to non-face objects (clocks) and both real human faces and artificial human faces (computer-generated faces and dolls). These results suggest that while human-like inanimate faces (CG faces and dolls) are processed much like real faces, robot faces are dissimilar enough to human faces to be processed differently. In Experiment 2 we found that the face inversion effect was only partly evident in robot faces. We conclude that robot faces are an intermediate stimulus class that offers insight into the perceptual and cognitive factors that affect how social agents are identified and categorized.


2020 ◽  
Vol 2020 (11) ◽  
pp. 267-1-267-8
Author(s):  
Mitchell J.P. van Zuijlen ◽  
Sylvia C. Pont ◽  
Maarten W.A. Wijntjes

The human face is a popular motif in art, and depictions of faces can be found throughout history in nearly every culture. Artists mastered the depiction of faces through careful experimentation with the relatively limited means of paints and oils. Many of the results of these experimentations are now available to the scientific domain thanks to the digitization of large art collections. In this paper we study the depiction of the face throughout history. We used an automated facial detection network to detect a set of 11,659 faces in 15,534 predominantly western artworks from 6 international, digitized art galleries. We analyzed the pose and color of these faces and related them to changes over time and to gender differences. We find a number of previously known conventions, such as the convention of depicting the left cheek for females and vice versa for males, as well as previously unknown conventions, such as females being depicted looking slightly down. Our set of faces will be released to the scientific community for further study.
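
A rough sketch of the kind of batch analysis described, using the `mtcnn` detector as a stand-in assumption for the unspecified facial detection network. For each detected face it records a crude left/right turn cue (nose position relative to the eye midpoint) and the mean colour of the face crop; the paper's actual pose and colour measures may differ.

```python
# Hedged sketch: detect faces in a digitized artwork and summarise pose and colour.
import cv2
import numpy as np
from mtcnn import MTCNN

detector = MTCNN()

def analyse_artwork(path):
    """Return per-face records: a simple turn measure and mean RGB of the crop."""
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    records = []
    for face in detector.detect_faces(img):
        x, y, w, h = face["box"]
        x, y = max(0, x), max(0, y)
        kp = face["keypoints"]
        eyes_mid_x = (kp["left_eye"][0] + kp["right_eye"][0]) / 2
        turn = (kp["nose"][0] - eyes_mid_x) / w            # sign indicates which cheek faces the viewer
        mean_rgb = img[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)
        records.append({"turn": turn, "mean_rgb": mean_rgb})
    return records
```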

