An Averaging Technique for the P300 Spatial Distribution

2015 ◽  
Vol 54 (03) ◽  
pp. 215-220 ◽  
Author(s):  
M. Matteucci ◽  
L. Mainardi ◽  
A. Tahirovic

Summary Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on “Biosignal Interpretation: Advanced Methods for Neural Signals and Images”. Objectives: The main objectives of the paper are to analyse the amplitude spatial distribution of the P300 evoked potential over the scalp of a particular subject and to find an averaged spatial distribution template for that subject. This template, which may differ between subjects, can help achieve more accurate P300 detection in all BCIs that inherently use spatial filtering to detect the P300 signal. Finally, the proposed averaging technique obtains an averaged spatial distribution template for a particular subject from only a few epochs, which makes it fast and usable without any prior training data, as is required by data enhancement techniques. Methods: The averaging of the spatial distribution of P300 evoked potentials in the proposed framework is based on the statistical properties of independent components (ICs), obtained with independent component analysis (ICA) from different target epochs. Results: This paper presents a novel averaging technique for the spatial distribution of P300 evoked potentials, based on P300 signals obtained from different target epochs using the ICA algorithm. The technique provides a more reliable P300 spatial distribution for a subject of interest, which can be used either for improved spatial selection of ICs or for more accurate P300 detection and extraction. In addition, the experiments demonstrate that the spatial intensity values computed by the proposed technique for the P300 signal converge after only a few target epochs for each electrode allocation. This speed of convergence allows the proposed algorithm to adapt easily to a subject of interest without any additional artificial data preparation prior to algorithm execution, as in the case of data enhancement techniques. Conclusion: The proposed technique averages the P300 spatial distribution for a particular subject over all electrode allocations. First, the technique combines P300-like components obtained by the ICA run within a target epoch in order to obtain an averaged P300 spatial distribution. Second, it averages the spatial distributions of P300 signals obtained from different target epochs in order to get the final averaged template. Such a template can be useful for any BCI technique where spatial selection is used to detect evoked potentials.
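As an illustration of this kind of processing, here is a minimal sketch in Python, assuming scikit-learn's FastICA and a simple peak-latency rule (largest deflection near 300 ms) for picking the P300-like component in each target epoch; the paper's actual selection and combination criteria may differ.

```python
import numpy as np
from sklearn.decomposition import FastICA

def averaged_p300_template(epochs, fs, n_components=8):
    """Average the spatial maps of P300-like ICs across target epochs.

    epochs : list of (n_channels, n_samples) arrays, one per target epoch
    fs     : sampling rate in Hz
    """
    maps = []
    for epoch in epochs:
        ica = FastICA(n_components=n_components, random_state=0)
        sources = ica.fit_transform(epoch.T)        # (n_samples, n_components)
        mixing = ica.mixing_                        # (n_channels, n_components)

        # Illustrative criterion: the IC whose time course peaks closest to 300 ms
        peak_latency = np.abs(sources).argmax(axis=0) / fs
        p300_like = np.argmin(np.abs(peak_latency - 0.3))

        spatial_map = mixing[:, p300_like]
        # Resolve ICA's sign ambiguity and normalise the scale before averaging
        spatial_map = spatial_map * np.sign(spatial_map[np.abs(spatial_map).argmax()])
        maps.append(spatial_map / np.linalg.norm(spatial_map))

    return np.mean(maps, axis=0)                    # averaged spatial template per channel
```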

2018 ◽  
Author(s):  
Simon Ruske ◽  
David O. Topping ◽  
Virginia E. Foot ◽  
Andrew P. Morse ◽  
Martin W. Gallagher

Abstract. Primary biological aerosols, including bacteria, fungal spores and pollen, have important implications for public health and the environment. Such particles may have different concentrations of chemical fluorophores and will respond differently in the presence of ultraviolet light, which could potentially be used to discriminate between different types of biological aerosol. Development of ultraviolet light induced fluorescence (UV-LIF) instruments such as the Wideband Integrated Bioaerosol Sensor (WIBS) has made it possible to collect size, morphology and fluorescence measurements in real time. However, without studying instrument responses in the laboratory, it is unclear to what extent we can discriminate between different types of particles. Collection of laboratory data is vital to validate any approach used to analyse the data and to ensure that the available data are utilised as effectively as possible. In this manuscript we test a variety of methodologies on traditional reference particles and a range of laboratory generated aerosols. Hierarchical Agglomerative Clustering (HAC) has previously been applied to UV-LIF data in a number of studies and is tested alongside other algorithms that could be used to solve the classification problem: Density-Based Spatial Clustering of Applications with Noise (DBSCAN), k-means and gradient boosting. Whilst HAC was able to effectively discriminate between the reference particles, yielding a classification error of only 1.8 %, similar results were not obtained when testing on laboratory generated aerosol, where the classification error was found to be between 11.5 % and 24.2 %. Furthermore, this approach carries a worryingly large uncertainty in terms of the data preparation and the cluster index used, and we were unable to attain consistent results across the different sets of laboratory generated aerosol tested. The best results were obtained using gradient boosting, where the misclassification rate was between 4.38 % and 5.42 %. The largest contribution to this error came from the pollen samples, where 28.5 % of the samples were misclassified as fungal spores. The technique was also robust to changes in data preparation provided a fluorescence threshold was applied to the data. Where laboratory training data are unavailable, DBSCAN was found to be a potential alternative to HAC. In the case of one of the data sets, where 22.9 % of the data was left unclassified, we were able to produce three distinct clusters, obtaining a classification error of only 1.42 % on the classified data. These results could not be replicated, however, for the other data set, where 26.8 % of the data was not classified and a classification error of 13.8 % was obtained. This method, like HAC, also appeared to be heavily dependent on data preparation, requiring different selection of parameters depending on the preparation used. Further analysis will also be required to confirm our selection of parameters when using this method on ambient data. There is a clear need for the collection of additional laboratory generated aerosol to improve interpretation of current databases and to aid in the analysis of data collected from an ambient environment. New instruments with greater resolution are likely to improve current discrimination between pollen, bacteria and fungal spores, and even between their different types; however, the need for extensive laboratory training data sets will grow as a result.
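To make the two main routes concrete, below is a minimal sketch using scikit-learn; the feature/label files and the choice of three clusters are assumptions for illustration, not the authors' actual pipeline or data preparation.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical per-particle features (e.g. size, asymmetry, fluorescence channels)
# and laboratory labels (pollen / bacteria / fungal spores)
X = np.load("wibs_features.npy")
y = np.load("wibs_labels.npy")

# Unsupervised route (no training labels required): hierarchical agglomerative clustering
X_scaled = StandardScaler().fit_transform(X)
hac = AgglomerativeClustering(n_clusters=3, linkage="ward").fit(X_scaled)
cluster_labels = hac.labels_

# Supervised route (requires laboratory training data): gradient boosting
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
gb = GradientBoostingClassifier().fit(X_train, y_train)
print("misclassification rate:", 1.0 - gb.score(X_test, y_test))
```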


2021 ◽  
Vol 7 (3) ◽  
pp. 59
Author(s):  
Yohanna Rodriguez-Ortega ◽  
Dora M. Ballesteros ◽  
Diego Renza

With the exponential growth of high-quality fake images in social networks and media, it is necessary to develop recognition algorithms for this type of content. One of the most common types of image and video editing consists of duplicating areas of the image, known as the copy-move technique. Traditional image processing approaches manually look for patterns related to the duplicated content, which limits their use in mass data classification. In contrast, approaches based on deep learning have shown better performance and promising results, but they present generalization problems, with a high dependence on training data and the need for appropriate selection of hyperparameters. To overcome this, we propose two deep learning approaches: a model with a custom architecture and a model based on transfer learning. In each case, the impact of the depth of the network is analyzed in terms of precision (P), recall (R) and F1 score. Additionally, the problem of generalization is addressed with images from eight different open access datasets. Finally, the models are compared in terms of evaluation metrics, and training and inference times. The transfer learning model based on VGG-16 achieves metrics about 10% higher than the custom-architecture model; however, it requires approximately twice as much inference time.
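The transfer learning route might be sketched as follows with a Keras VGG-16 backbone frozen as a feature extractor and a small binary head (copy-move forged vs. pristine); the input size, head layers and training call are illustrative assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Frozen VGG-16 convolutional base pre-trained on ImageNet
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Small classification head for the binary forged / pristine decision
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])

# model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical tf.data datasets
```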


2010 ◽  
Vol 82 (2) ◽  
pp. 230-237 ◽  
Author(s):  
Toshihiro Okubo ◽  
Pierre M. Picard ◽  
Jacques-François Thisse

Solid Earth ◽  
2011 ◽  
Vol 2 (1) ◽  
pp. 53-63 ◽  
Author(s):  
S. Tavani ◽  
P. Arbues ◽  
M. Snidero ◽  
N. Carrera ◽  
J. A. Muñoz

Abstract. In this work we present the Open Plot Project, open-source software for structural data analysis that includes a 3-D environment. The software provides many classical functionalities of structural data analysis tools, such as stereoplots, contouring, tensorial regression, scatterplots, histograms and transect analysis. In addition, efficient filtering tools allow the selection of data according to their attributes, including spatial distribution and orientation. This first alpha release represents a stand-alone toolkit for structural data analysis. The presence of a 3-D environment with digitalising tools allows the integration of structural data with information extracted from georeferenced images to produce structurally validated dip domains. This, coupled with many import/export facilities, allows easy incorporation of structural analyses into workflows for 3-D geological modelling. Accordingly, the Open Plot Project is also a candidate structural add-on for 3-D geological modelling software. The software (for both Windows and Linux), the User Manual, a set of example movies (complementary to the User Manual), and the source code are provided as a Supplement. We intend the publication of the source code to set the foundation for free, public software that, hopefully, the structural geologists' community will use, modify, and implement. The creation of additional public controls/tools is strongly encouraged.
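The attribute- and orientation-based filtering described above can be illustrated with a short pandas sketch (not the software's own code); the column names, values and thresholds are hypothetical.

```python
import pandas as pd

# Hypothetical field dataset: one row per measured plane
data = pd.DataFrame({
    "type":    ["bedding", "joint", "fault", "bedding"],
    "dip":     [35, 80, 62, 28],
    "dip_dir": [110, 250, 95, 118],
    "x":       [421050, 421300, 421310, 421075],      # projected easting (m)
    "y":       [4679500, 4679650, 4679660, 4679520],  # projected northing (m)
})

# Select bedding planes dipping 20-40 degrees towards the ESE inside a map window
selection = data[
    (data["type"] == "bedding")
    & data["dip"].between(20, 40)
    & data["dip_dir"].between(100, 130)
    & data["x"].between(421000, 421200)
]
print(selection)
```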


2021 ◽  
Vol 69 (4) ◽  
pp. 297-306
Author(s):  
Julius Krause ◽  
Maurice Günder ◽  
Daniel Schulz ◽  
Robin Gruna

Abstract The selection of training data determines the quality of a chemometric calibration model. In order to cover the entire parameter space of known influencing parameters, an experimental design is usually created. Nevertheless, even with a carefully prepared Design of Experiments (DoE), redundant reference analyses are often performed during the analysis of agricultural products. Because the number of possible reference analyses is usually very limited, the presented active learning approaches are intended to provide a tool for better selection of training samples.
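As a minimal sketch of one possible active learning criterion, the following uses uncertainty sampling with a Gaussian process regressor to pick the pool spectra that most deserve a reference analysis; the model choice and query rule are assumptions for illustration, not necessarily the approaches presented in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def select_next_reference_samples(X_labelled, y_labelled, X_pool, n_queries=5):
    """Return pool indices with the largest predictive uncertainty.

    X_labelled / y_labelled : spectra and reference values measured so far
    X_pool                  : candidate spectra without reference analysis yet
    """
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_labelled, y_labelled)
    _, std = gp.predict(X_pool, return_std=True)
    return np.argsort(std)[-n_queries:]   # send these samples to the reference lab next
```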


2021 ◽  
Author(s):  
Octavian Dumitru ◽  
Gottfried Schwarz ◽  
Mihai Datcu ◽  
Dongyang Ao ◽  
Zhongling Huang ◽  
...  

During the last years, much progress has been reached with machine learning algorithms. Among the typical application fields of machine learning are many technical and commercial applications as well as Earth science analyses, where most often indirect and distorted detector data have to be converted to well-calibrated scientific data that are a prerequisite for a correct understanding of the desired physical quantities and their relationships.

However, the provision of sufficient calibrated data is not enough for the testing, training, and routine processing of most machine learning applications. In principle, one also needs a clear strategy for the selection of necessary and useful training data and an easily understandable quality control of the finally desired parameters.

At a first glance, one could guess that this problem could be solved by a careful selection of representative test data covering many typical cases as well as some counterexamples. Then these test data can be used for the training of the internal parameters of a machine learning application. At a second glance, however, many researchers found out that a simple stacking up of plain examples is not the best choice for many scientific applications.

To get improved machine learning results, we concentrated on the analysis of satellite images depicting the Earth’s surface under various conditions such as the selected instrument type, spectral bands, and spatial resolution. In our case, such data are routinely provided by the freely accessible European Sentinel satellite products (e.g., Sentinel-1 and Sentinel-2). Our basic work then included investigations of how some additional processing steps – to be linked with the selected training data – can provide better machine learning results.

To this end, we analysed and compared three different approaches to identify machine learning strategies for the joint selection and processing of training data for our Earth observation images:

- One can optimize the training data selection by adapting the data selection to the specific instrument, target, and application characteristics [1].
- As an alternative, one can dynamically generate new training parameters by Generative Adversarial Networks. This is comparable to the role of a sparring partner in boxing [2].
- One can also use a hybrid semi-supervised approach for Synthetic Aperture Radar images with limited labelled data. The method is split into polarimetric scattering classification, topic modelling for scattering labels, unsupervised constraint learning, and supervised label prediction with constraints [3].

We applied these strategies in the ExtremeEarth sea-ice monitoring project (http://earthanalytics.eu/). As a result, we can demonstrate for which application cases these three strategies will provide a promising alternative to a simple conventional selection of available training data.

[1] C.O. Dumitru et al., "Understanding Satellite Images: A Data Mining Module for Sentinel Images", Big Earth Data, 2020, 4(4), pp. 367-408.
[2] D. Ao et al., "Dialectical GAN for SAR Image Translation: From Sentinel-1 to TerraSAR-X", Remote Sensing, 2018, 10(10), pp. 1-23.
[3] Z. Huang et al., "HDEC-TFA: An Unsupervised Learning Approach for Discovering Physical Scattering Properties of Single-Polarized SAR Images", IEEE Transactions on Geoscience and Remote Sensing, 2020, pp. 1-18.
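As a rough illustration of the first strategy (adapting the training data selection to instrument, target and application characteristics), the sketch below restricts a patch catalogue to the target sensor and resolution and then draws a class-balanced sample; the catalogue file and its columns are assumptions, not part of the project's actual tooling.

```python
import pandas as pd

# Hypothetical catalogue of labelled Sentinel training patches
# (assumed columns: sensor, resolution_m, label, patch_path)
catalogue = pd.read_csv("patch_catalogue.csv")

# Restrict the pool to patches matching the target instrument and resolution
pool = catalogue[(catalogue["sensor"] == "Sentinel-1") & (catalogue["resolution_m"] <= 20)]

# Draw a class-balanced training sample (at most 500 patches per class)
training_set = (pool.groupby("label", group_keys=False)
                    .apply(lambda g: g.sample(n=min(len(g), 500), random_state=0)))
print(training_set["label"].value_counts())
```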


2021 ◽  
Author(s):  
Maximilian Peter Dammann ◽  
Wolfgang Steger ◽  
Ralph Stelzer

Abstract Product visualization in AR/VR applications requires a largely manual process of data preparation. Previous publications focus on error-free triangulation or transformation of product structure data and display attributes for AR/VR applications. This paper focuses on the preparation of the required geometry data. In this context, a significant reduction in effort can be achieved through automation. The steps of geometry preparation are identified and examined with respect to their automation potential. In addition, possible couplings of sub-steps are discussed. Based on these explanations, a structure for the geometry preparation process is proposed. With this structured preparation process it becomes possible to consider the available computing power of the target platform during the geometry preparation. The number of objects to be rendered, the tessellation quality and the level of detail can be controlled by the automated choice of transformation parameters. We present a software tool in which partial steps of the automatic preparation are already implemented. After an analysis of the product structure of a CAD file, the transformation is executed for each component. Functions implemented so far allow, for example, the selection of assemblies and parts based on filter options, the transformation of geometries in batch mode, the removal of certain details and the creation of UV maps. Flexibility, transformation quality and time savings are described and discussed.
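One automatable sub-step, adapting tessellation density to the target platform, could look roughly like the sketch below using Open3D's quadric decimation; the triangle budgets and the library choice are assumptions for illustration, not the tool described in the paper.

```python
import open3d as o3d

# Illustrative per-platform triangle budgets (assumed values)
TRIANGLE_BUDGET = {"mobile_ar": 150_000, "standalone_hmd": 300_000, "desktop_vr": 1_000_000}

def prepare_component(mesh_path, platform="mobile_ar"):
    """Reduce one tessellated component to the platform's triangle budget."""
    mesh = o3d.io.read_triangle_mesh(mesh_path)
    target = TRIANGLE_BUDGET[platform]
    if len(mesh.triangles) > target:
        mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
    mesh.compute_vertex_normals()   # needed for shading in the AR/VR renderer
    return mesh
```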


2014 ◽  
Vol 40 (5) ◽  
pp. 543-551 ◽  
Author(s):  
Marcelino Santos-Neto ◽  
Mellina Yamamura ◽  
Maria Concebida da Cunha Garcia ◽  
Marcela Paschoal Popolin ◽  
Tatiane Ramos dos Santos Silveira ◽  
...  

OBJECTIVE: To characterize deaths from pulmonary tuberculosis, according to sociodemographic and operational variables, in the city of São Luís, Brazil, and to describe their spatial distribution. METHODS: This was an exploratory ecological study based on secondary data from death certificates, obtained from the Brazilian Mortality Database, related to deaths from pulmonary tuberculosis. We included all deaths attributed to pulmonary tuberculosis that occurred in the urban area of São Luís between 2008 and 2012. We performed univariate and bivariate analyses of the sociodemographic and operational variables of the deaths investigated, as well as evaluating the spatial distribution of the events by kernel density estimation. RESULTS: During the study period, there were 193 deaths from pulmonary tuberculosis in São Luís. The median age of the affected individuals was 52 years. Of the 193 individuals who died, 142 (73.60%) were male, 133 (68.91%) were Mulatto, 102 (53.13%) were single, and 64 (33.16%) had completed middle school. There was a significant positive association between not having received medical care prior to death and an autopsy having been performed (p = 0.001). A thematic map by density of points showed that the spatial distribution of those deaths was heterogeneous and that the density was as high as 8.12 deaths/km². CONCLUSIONS: The sociodemographic and operational characteristics of the deaths from pulmonary tuberculosis evaluated in this study, as well as the identification of priority areas for control and surveillance of the disease, could promote public health policies aimed at reducing health inequities, allowing the optimization of resources, as well as informing decisions regarding the selection of strategies and specific interventions targeting the most vulnerable populations.
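The kernel density estimation step can be illustrated with a short scikit-learn sketch, assuming projected point coordinates of the deaths in metres; the Gaussian kernel, 1 km bandwidth and grid resolution are illustrative choices, not necessarily those of the study.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Hypothetical (n, 2) array of projected death locations in metres
deaths_xy = np.load("deaths_xy.npy")

# Fit a Gaussian kernel density with a 1 km bandwidth
kde = KernelDensity(kernel="gaussian", bandwidth=1000.0).fit(deaths_xy)

# Evaluate on a regular grid and convert the normalised density to deaths per km^2
xx, yy = np.meshgrid(
    np.linspace(deaths_xy[:, 0].min(), deaths_xy[:, 0].max(), 200),
    np.linspace(deaths_xy[:, 1].min(), deaths_xy[:, 1].max(), 200),
)
grid = np.column_stack([xx.ravel(), yy.ravel()])
density_per_km2 = np.exp(kde.score_samples(grid)) * len(deaths_xy) * 1e6
```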

