Super-Resolution for Improving EEG Spatial Resolution using Deep Convolutional Neural Network—Feasibility Study

Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5317 ◽  
Author(s):  
Moonyoung Kwon ◽  
Sangjun Han ◽  
Kiwoong Kim ◽  
Sung Chan Jun

Electroencephalography (EEG) has relatively poor spatial resolution and may yield incorrect brain dynamics and distorted topography; thus, high-density EEG systems are necessary for better analysis. Conventional methods have been proposed to solve these problems; however, they depend on parameters or brain models that are not simple to address. Therefore, new approaches are necessary to enhance EEG spatial resolution while maintaining its data properties. In this work, we investigated the super-resolution (SR) technique using deep convolutional neural networks (CNNs) on simulated EEG data containing white Gaussian or real brain noise, and on experimental EEG data obtained during an auditory evoked potential task. SR EEG data simulated with white Gaussian or brain noise demonstrated a lower mean squared error and higher correlations with sensor information, and detected sources even more clearly than did low-resolution (LR) EEG. In addition, experimental SR data also demonstrated far smaller errors for the N1 and P2 components, and yielded reasonable localized sources, while LR data did not. We verified our proposed approach's feasibility and efficacy, and conclude that it may be possible to explore various brain dynamics even with a small number of sensors.
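
As an illustration of how such an SR network might be set up, here is a minimal PyTorch sketch of an SRCNN-style model trained with a mean squared error loss on interpolated low-density scalp maps; the layer sizes, grid size, and training settings are placeholders, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class EEGSuperResolutionCNN(nn.Module):
    """Minimal SRCNN-style network: maps a low-density EEG scalp map
    (interpolated to the target grid) to a high-density scalp map."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.net(x)

# Training step sketch: minimise MSE between predicted and true high-density maps.
model = EEGSuperResolutionCNN()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

lr_maps = torch.randn(8, 1, 32, 32)   # placeholder low-resolution scalp maps
hr_maps = torch.randn(8, 1, 32, 32)   # placeholder high-density targets
optimiser.zero_grad()
loss = loss_fn(model(lr_maps), hr_maps)
loss.backward()
optimiser.step()
```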

2019 ◽  
Vol 11 (9) ◽  
pp. 1005
Author(s):  
Jiahui Qu ◽  
Yunsong Li ◽  
Qian Du ◽  
Wenqian Dong ◽  
Bobo Xi

Hyperspectral pansharpening is an effective technique to obtain a high spatial resolution hyperspectral (HS) image. In this paper, a new hyperspectral pansharpening algorithm based on homomorphic filtering and a weighted tensor matrix (HFWT) is proposed. In the proposed HFWT method, an open-closing morphological operation is utilized to remove the noise of the HS image, and homomorphic filtering is introduced to extract the spatial details of each band in the denoised HS image. More importantly, a weighted root mean squared error-based method is proposed to obtain the total spatial information of the HS image, and an optimized weighted tensor matrix-based strategy is presented to integrate the spatial information of the HS image with that of the panchromatic (PAN) image. By injecting the appropriately integrated spatial details through a suitable gain matrix, the fused HS image is generated. Experimental results on both simulated and real datasets demonstrate that the proposed HFWT method effectively generates a fused HS image with high spatial resolution while maintaining the spectral information of the original low spatial resolution HS image.
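
A minimal sketch of the homomorphic-filtering step described above, assuming a log / FFT / high-emphasis filter / exp pipeline applied band by band; the filter shape and its parameters (cutoff, gain_low, gain_high) are illustrative choices, not the paper's settings.

```python
import numpy as np

def homomorphic_highpass(band, cutoff=0.1, gain_high=1.5, gain_low=0.5):
    """Extract spatial detail from one HS band via homomorphic filtering:
    log -> FFT -> Gaussian high-emphasis filter -> inverse FFT -> exp."""
    log_band = np.log1p(band.astype(np.float64))
    spectrum = np.fft.fftshift(np.fft.fft2(log_band))

    rows, cols = band.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d2 = (u[:, None] / rows) ** 2 + (v[None, :] / cols) ** 2
    # High-emphasis transfer function: attenuates low frequencies (illumination),
    # boosts high frequencies (spatial detail).
    h = gain_low + (gain_high - gain_low) * (1.0 - np.exp(-d2 / (2 * cutoff ** 2)))

    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * h)).real
    return np.expm1(filtered)
```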


2019 ◽  
Vol 11 (15) ◽  
pp. 1767 ◽  
Author(s):  
Francesca Pasquetti ◽  
Monica Bini ◽  
Andrea Ciampalini

The aim of this paper is to evaluate the usefulness of the TanDEM-X DEM (digital elevation model) for remote geomorphological analysis in Argentinian Patagonia. The use of a DEM with appropriate resolution and coverage can be very helpful and advantageous in vast and hardly accessible areas. The TanDEM-X DEM could represent an unprecedented opportunity to identify geomorphological features because of its global coverage, ~12 m spatial resolution, and low cost. In this regard, we assessed the vertical accuracy of the TanDEM-X DEM through comparison with Differential Global Positioning System (DGPS) datasets collected in two areas of the Patagonia region during a field survey; we then investigated different types of landforms by creating elevation profiles. The comparison indicates high agreement between the TanDEM-X DEM and the reference values, with a mean absolute vertical error (MAE) of 0.53 m and a root mean squared error (RMSE) of 0.73 m. The landform analysis shows that the spatial resolution is appropriate for detecting features such as beach ridges, which are impossible to delineate with other, lower-resolution DEMs. For these reasons, the TanDEM-X DEM constitutes a useful tool for detailed geomorphological analyses in Argentinian Patagonia.
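
For reference, the two accuracy measures reported above can be computed from co-located DEM and DGPS heights as in this minimal sketch (function and variable names are ours):

```python
import numpy as np

def vertical_accuracy(dem_heights, dgps_heights):
    """Mean absolute error and RMSE of DEM elevations against DGPS reference points."""
    err = np.asarray(dem_heights, float) - np.asarray(dgps_heights, float)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    return mae, rmse
```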


2018 ◽  
Vol 215 ◽  
pp. 01002
Author(s):  
Yuhendra ◽  
Minarni

Image fusion is a useful tool for integrating low spatial resolution multispectral (MS) images with a high spatial resolution panchromatic (PAN) image, thus producing a high-resolution multispectral image for a better understanding of the observed earth surface. The main purpose of the research was to evaluate the effectiveness of different image fusion methods when filtering methods are added for speckle suppression in synthetic aperture radar (SAR) images. The quality of the filtered fused images was assessed with statistical parameters, namely the mean, standard deviation, bias, universal image quality index (UIQI), and root mean squared error (RMSE). To test the robustness of the image quality, speckle noise was intentionally added to the fused image and then suppressed with a Gamma MAP filter. When comparing the test results, the Gram-Schmidt (GS) method showed better colour reproduction than high-pass filtering (HPF). On the other hand, both GS and wavelet intensity-hue-saturation (W-IHS) preserved the colour of the original image well for Landsat TM data.
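
A minimal sketch of two of the quality measures named above, the universal image quality index (UIQI) and RMSE, computed globally over a reference band and a fused band; in practice the UIQI is usually averaged over sliding windows.

```python
import numpy as np

def uiqi(reference, fused):
    """Universal Image Quality Index (Wang & Bovik), computed over whole bands."""
    x = np.asarray(reference, float).ravel()
    y = np.asarray(fused, float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

def rmse(reference, fused):
    """Root mean squared error between a reference band and a fused band."""
    diff = np.asarray(reference, float) - np.asarray(fused, float)
    return np.sqrt(np.mean(diff ** 2))
```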


2018 ◽  
Author(s):  
Ramesh Balasubramaniam ◽  
Daniel Comstock

Tapping in synchrony with an isochronous rhythm involves several key functions of the sensorimotor system, including timing, prediction, and error correction. While auditory sensorimotor synchronization (SMS) has been well studied, much less is known about the mechanisms involved in visual SMS. By comparing error correction in auditory and visual SMS, it can be determined whether the neural mechanisms for detection and correction of synchronization errors are generalized or domain specific. To study this problem, we measured EEG while subjects tapped in synchrony to separate visual and auditory metronomes that both contained small temporal perturbations to induce errors. The metronomes had inter-onset intervals of 600 milliseconds, and the perturbations were of four kinds: +/- 66 milliseconds to induce period corrections, and +/- 16 milliseconds to induce phase corrections. We hypothesized that, given the less precise nature of visual SMS, error correction to perturbed visual flashing rhythms would be more gradual than with the equivalent auditory perturbations. Additionally, we expected this more gradual error correction to be reflected in the visual evoked potentials. Our findings indicate that the visual system is only capable of gradual phase corrections, even to the larger induced errors, in contrast to the swifter period correction of the auditory system to large induced errors. The EEG data showed that the peak N1 auditory evoked potential was modulated by the size and direction of an induced error, in line with previous research, while the P1 visual evoked potential was only affected by the large late-coming perturbations, resulting in reduced peak latency. In the error-response EEG data, an error-related negativity (ERN) and a related error positivity (pE) were found only in the auditory +66 ms condition, while no ERN or pE was found in any of the visual perturbation conditions. In addition to the ERPs, we performed a dipole source localization and clustering analysis indicating that the anterior cingulate was active in detecting errors in the perturbed stimulus for both auditory and visual conditions, in addition to being involved in producing the ERN and pE induced by the auditory +66 ms perturbation. Taken together, these results confirm, through its more gradual error correction, that the visual system is less well suited for synchronization and error correction with flashing rhythms. The reduced latency of the P1 to the visual +66 ms perturbation suggests that the visual system can detect these errors, but that detection does not translate into any meaningful improvement in error correction. This indicates that the visual system is not as tightly coupled to the motor system as the auditory system is for SMS, suggesting that the mechanisms of SMS are not completely domain general.
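
To make the distinction between phase and period correction concrete, here is a minimal simulation sketch of the standard linear error-correction model for tapping to a 600 ms metronome with a step perturbation; the model, parameter names, and perturbation handling are illustrative and are not the analysis used in the study.

```python
import numpy as np

def simulate_error_correction(alpha, beta, n_taps=60, ioi=600.0,
                              perturb_at=30, shift=66.0):
    """Simulate tapping to a metronome (IOI = 600 ms) with a step perturbation,
    using the linear phase (alpha) and period (beta) correction model.
    Returns the tap-onset asynchronies over time."""
    onsets = np.arange(n_taps) * ioi
    onsets[perturb_at:] += shift                 # e.g. a +66 ms shift at tap 30
    taps = np.zeros(n_taps)
    period = ioi
    for n in range(1, n_taps):
        asynchrony = taps[n - 1] - onsets[n - 1]
        period -= beta * asynchrony                            # period correction
        taps[n] = taps[n - 1] + period - alpha * asynchrony    # phase correction
    return taps - onsets

# Smaller alpha/beta gains give the slower, more gradual return to zero
# asynchrony described above for the visual condition.
```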


Author(s):  
Hasnaa Khalifi ◽  
Marc Compere ◽  
Patrick Currier

Battery models can be developed from first principles or from empirical methods. The Simulink Parameter Estimation toolbox was used to identify the battery parameters and validate the battery model against test data. Experimental data were obtained by discharging the battery of a modified 2013 Chevrolet Malibu hybrid electric vehicle. The resulting battery model provided accurate simulation results over the validation data. For the constant-current discharge, the mean squared error between measured and simulated data was 0.26 V for the 298 V terminal voltage, and 6.07E−4 (%) for state of charge. For the extended variable-current discharge, the mean squared error between measured and simulated data was 0.21 V for terminal voltage and 9.25E−4 (%) for state of charge.
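
A minimal sketch of an empirical equivalent-circuit battery model of the kind whose parameters could be identified against such discharge data; the coulomb-counting state-of-charge update, the placeholder open-circuit-voltage curve, and all parameter values are assumptions, not the identified Malibu model.

```python
import numpy as np

def simulate_battery(current, dt, capacity_ah, r_internal, ocv_of_soc, soc0=1.0):
    """Equivalent-circuit sketch: coulomb-counting SOC plus an internal-resistance
    voltage drop. ocv_of_soc maps SOC -> open-circuit voltage."""
    soc = soc0
    terminal_v, socs = [], []
    for i in current:                                  # positive current = discharge (A)
        soc -= i * dt / (capacity_ah * 3600.0)
        soc = min(max(soc, 0.0), 1.0)
        terminal_v.append(ocv_of_soc(soc) - i * r_internal)
        socs.append(soc)
    return np.array(terminal_v), np.array(socs)

# Example: constant 20 A discharge of a nominally 298 V pack for 10 minutes.
ocv = lambda s: 250.0 + 60.0 * s                       # placeholder OCV curve
v, soc = simulate_battery(np.full(600, 20.0), dt=1.0,
                          capacity_ah=15.0, r_internal=0.2, ocv_of_soc=ocv)
```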


Author(s):  
Wael Abdelrahman ◽  
Saeid Nahavandi ◽  
Douglas Creighton ◽  
Matthias Harders

This study represents a preliminary step towards data-driven computation of contact dynamics during manipulation of deformable objects at two points of contact. A modeling approach is proposed that characterizes the individual interaction at each point and the mutual effects of the two interactions on each other via a set of parameters. Both global and local coordinate systems are tested for encoding the contact mechanics. Artificial neural networks are trained on simulated data to capture the object behavior. A comparison of test data with the output of the trained system reveals a mean squared error percentage between 1% and 3% for simple interactions.
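
As a sketch of the kind of network such a data-driven model might use (the input/output encoding and layer sizes are placeholders, not the study's parameter set):

```python
import torch.nn as nn

# Minimal MLP sketch: maps the parameter set describing the two contact
# interactions and their mutual influence to the predicted object response.
# Trained with nn.MSELoss on simulated data, mirroring the MSE-based
# evaluation reported above.
two_contact_model = nn.Sequential(
    nn.Linear(12, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 6),
)
```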


Author(s):  
Nagaraj P ◽  
Muthamilsudar K ◽  
Naga Nehanth S ◽  
Mohammed Shahid R ◽  
Sujith Kumar V

The main objective of Perceptual Image Super-Resolution is to obtain a high-resolution image from a low-resolution image. The task is simply to turn a low-resolution image into a high-resolution one. To perform this task we have various methods, such as the classical approach, in which we minimize the mean squared error, evaluated by the peak signal-to-noise ratio (PSNR). The first method used to perform this operation was SRCNN (Super-Resolution Convolutional Neural Network), and nowadays many use DRCN and VDSR, which are slightly upgraded methods. Another technique used for upscaling a low-resolution image into a high-resolution one follows the state of the art in terms of PSNR. This method is quite simple: a low-resolution image is taken as input, passed through a convolutional neural network (CNN), and a high-resolution image is produced as the output. In this technique the edges are clearly defined, but the whole image is blurred; this method is unable to produce good-looking textures.
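
Since the classical approach is evaluated by PSNR, here is a minimal sketch of that metric computed from the MSE between a reference image and a reconstructed one:

```python
import numpy as np

def psnr(reference, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio used to evaluate MSE-driven super-resolution."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(reconstructed, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)
```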


2017 ◽  
Vol 9 (1) ◽  
pp. 67-78
Author(s):  
M. R. Hasan ◽  
A. R. Baizid

The Bayesian estimation approach is a non-classical estimation technique in statistical inference and is very useful in real-world situations. The aim of this paper is to study the Bayes estimators of the parameter of the exponential distribution under different loss functions and to compare them with each other as well as with the classical maximum likelihood estimator (MLE). Since the exponential distribution is a lifetime distribution, we studied it using a gamma prior. Here the gamma prior is used as the prior distribution of the exponential parameter for finding the Bayes estimator. In our study we also used different symmetric and asymmetric loss functions, such as the squared error loss function, the quadratic loss function, the modified linear exponential (MLINEX) loss function, and the non-linear exponential (NLINEX) loss function. We used data simulated in R to find the mean squared error (MSE) under the different loss functions and found that the non-classical estimators are better than the classical estimator. Finally, the MSEs of the estimators under the different loss functions are presented graphically.
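
As a worked example of the Bayesian setup, assuming the gamma prior is placed on the exponential rate parameter λ, the posterior is again a gamma distribution and the Bayes estimator under squared error loss is its mean:

```latex
% Bayes estimator of the exponential rate \lambda under squared error loss,
% assuming a Gamma(\alpha, \beta) prior (rate parametrisation).
\begin{align}
  x_1,\dots,x_n \mid \lambda &\sim \mathrm{Exp}(\lambda), \qquad
  \lambda \sim \mathrm{Gamma}(\alpha,\beta) \\
  \lambda \mid x_1,\dots,x_n &\sim \mathrm{Gamma}\!\Bigl(\alpha + n,\;
      \beta + \textstyle\sum_{i=1}^{n} x_i\Bigr) \\
  \hat{\lambda}_{\mathrm{SEL}} &= \mathbb{E}[\lambda \mid x]
      = \frac{\alpha + n}{\beta + \sum_{i=1}^{n} x_i},
  \qquad
  \hat{\lambda}_{\mathrm{MLE}} = \frac{n}{\sum_{i=1}^{n} x_i}
\end{align}
```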


2012 ◽  
Vol 61 (2) ◽  
pp. 277-290 ◽  
Author(s):  
Ádám Csorba ◽  
Vince Láng ◽  
László Fenyvesi ◽  
Erika Michéli

Nowadays there is an increasing demand for the development and application of technologies and methods that allow fast, cost-effective, and environmentally friendly soil data collection and evaluation. Reflectance spectroscopy, which is based on reflectance measurements in the visible (VIS) and near-infrared (NIR) range (350–2500 nm) of the electromagnetic spectrum, meets these requirements. Considering that the reflectance spectrum of soils is very rich in information, and that many soil constituents have characteristic spectral "fingerprints" in the investigated range, a large number of key soil parameters can be determined simultaneously from a single curve. In this paper, we present the first steps of a methodological development, based on reflectance spectroscopy, aimed at determining soil composition. In our work, we built and tested predictive models based on multivariate mathematical-statistical methods (partial least squares regression, PLSR) for estimating the organic carbon and CaCO3 content of soils. When testing the models, we found that the procedure gave high R2 values for both soil parameters [R2 (organic carbon) = 0.815; R2 (CaCO3) = 0.907]. The root mean squared error (RMSE) values, which indicate the accuracy of the estimation, were moderate for both parameters [RMSE (organic carbon) = 0.467; RMSE (CaCO3) = 3.508] and can be significantly improved by standardizing the reflectance measurement protocols. Based on our investigations, we concluded that the combined use of reflectance spectroscopy and multivariate chemometric methods can provide a fast and cost-effective method for data collection and evaluation.
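
A minimal sketch of how such a PLSR calibration could be fitted and scored (R2 and RMSE) with scikit-learn; the arrays, the number of latent variables, and the train/test split are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score

# Placeholder data: rows are soil samples, columns are reflectance values over
# 350-2500 nm; y holds the measured organic carbon (or CaCO3) content.
X_train, y_train = np.random.rand(100, 2151), np.random.rand(100)
X_test, y_test = np.random.rand(30, 2151), np.random.rand(30)

pls = PLSRegression(n_components=10)   # number of latent variables is a tuning choice
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

rmse = np.sqrt(mean_squared_error(y_test, y_pred))
r2 = r2_score(y_test, y_pred)
print(f"R2 = {r2:.3f}, RMSE = {rmse:.3f}")
```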

