Highly Automated Dipole EStimation (HADES)

2011
Vol 2011
pp. 1-11
Author(s):
C. Campi
A. Pascarella
A. Sorrentino
M. Piana

Automatic estimation of current dipoles from biomagnetic data is still a problematic task. This is due not only to the ill-posedness of the inverse problem but also to two intrinsic difficulties introduced by the dipolar model: the unknown number of sources and the nonlinear relationship between the source locations and the data. Recently, we have developed a new Bayesian approach, particle filtering, based on dynamical tracking of the dipole constellation. In contrast to many dipole-based methods, particle filtering does not assume stationarity of the source configuration: the number of dipoles and their positions are estimated and updated dynamically during the course of the MEG sequence. We have now developed a MATLAB-based graphical user interface, which allows nonexpert users to perform automatic dipole estimation from MEG data with particle filtering. In the present paper, we describe the main features of the software and show the analysis of both a synthetic data set and an experimental data set.
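
As a rough illustration of the dynamical tracking idea (not the actual HADES implementation), the following minimal Python sketch shows one bootstrap particle-filter step: particles representing candidate dipole parameters are propagated by a random walk, reweighted against the measured field, and resampled when the weights degenerate. The forward operator, noise levels and parameter layout are all assumptions made for illustration.

```python
import numpy as np

def particle_filter_step(particles, weights, data, forward, noise_std, drift_std, rng):
    """One bootstrap particle-filter update for a time-varying source model.

    particles : (N, d) array of hypothesised dipole parameters
    forward   : callable mapping one particle to predicted sensor data
    """
    # Prediction: random-walk dynamics allow a non-stationary source configuration
    particles = particles + rng.normal(0.0, drift_std, particles.shape)
    # Update: reweight each particle by a Gaussian likelihood of the measured field
    residuals = np.array([data - forward(p) for p in particles])
    log_lik = -0.5 * np.sum(residuals**2, axis=1) / noise_std**2
    weights = weights * np.exp(log_lik - log_lik.max())
    weights /= weights.sum()
    # Resample when the effective sample size degenerates
    if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Toy usage: a scalar "source" observed through an identity forward model
rng = np.random.default_rng(0)
p, w = rng.normal(0, 1, (200, 1)), np.full(200, 1 / 200)
p, w = particle_filter_step(p, w, np.array([0.5]), lambda q: q, 0.1, 0.05, rng)
```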

2010
Vol 14 (3)
pp. 545-556
Author(s):
J. Rings
J. A. Huisman
H. Vereecken

Abstract. Coupled hydrogeophysical methods infer hydrological and petrophysical parameters directly from geophysical measurements. Widely used methods do not explicitly quantify the uncertainty in parameter estimates. Therefore, we apply a sequential Bayesian framework that provides updates of the state, the parameters and their uncertainty whenever measurements become available. We have coupled a hydrological and an electrical resistivity tomography (ERT) forward code in a particle filtering framework. First, we analyze a synthetic data set of lysimeter infiltration monitored with ERT. In a second step, we apply the approach to field data measured during an infiltration event on a full-scale dike model. For the synthetic data, the water content distribution and the hydraulic conductivity are accurately estimated after a few time steps. For the field data, hydraulic parameters are successfully estimated from water content measurements made with spatial time domain reflectometry and ERT, and the development of their posterior distributions is shown.
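
A minimal Python sketch of the sequential Bayesian idea described here: each particle carries both the state (water content) and an uncertain parameter (hydraulic conductivity), and both are updated jointly whenever a measurement arrives. The toy forward model and all numerical values are placeholders, not the coupled hydrological/ERT codes used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(state, log_K, dt=1.0):
    # Stand-in for the coupled forward model (assumption): water content
    # relaxes toward saturation at a rate set by conductivity K.
    return state + dt * np.exp(log_K) * (0.40 - state)

# Each particle carries a state (water content) AND a parameter (log K)
n = 500
states = rng.uniform(0.05, 0.35, n)
log_K = rng.normal(-2.0, 1.0, n)            # uncertain hydraulic conductivity

for obs in [0.12, 0.18, 0.24]:              # synthetic water-content observations
    states = simulate(states, log_K)
    w = np.exp(-0.5 * ((obs - states) / 0.02) ** 2)
    w /= w.sum()
    idx = rng.choice(n, n, p=w)              # resample state and parameter jointly
    states = states[idx]
    log_K = log_K[idx] + rng.normal(0, 0.05, n)  # jitter keeps parameter diversity

print("posterior mean log K:", log_K.mean())
```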


2021
Vol 4
Author(s):
Till-Hendrik Macher
Arne Beermann
Florian Leese

DNA-based identification methods, such as DNA metabarcoding, are increasingly used as biodiversity assessment tools in research and environmental management. Although powerful analysis software exists to process raw data, the translation of sequence read data into biological information and downstream analyses may be difficult for end users with limited expertise in bioinformatics. Thus, the need for easy-to-use, graphical user interface (GUI) software to analyze and visualize DNA metabarcoding data is growing. Here we present TaxonTableTools (TTT), a new platform-independent GUI that aims to fill this gap by providing simple, reproducible analysis and visualization workflows. The input format of TTT is a so-called "TaXon table". This data format can easily be generated within TTT from two common file formats that can be obtained using various published DNA metabarcoding pipelines: a read table and a taxonomy table. TTT offers a wide range of processing, filtering and analysis modules. The user can analyze and visualize basic statistics, such as read proportion per taxon, as well as more sophisticated visualizations such as interactive Krona charts for taxonomic data exploration, or complex parallel category diagrams to assess species distribution patterns. Venn diagrams can be generated to compare taxon overlap among replicates, samples, or analysis methods. Various ecological analyses can be performed directly, including alpha or beta diversity estimates, rarefaction analyses, and principal coordinate or non-metric multidimensional scaling plots. The taxonomy of a data set can be validated via the Global Biodiversity Information Facility (GBIF) API to check for synonyms and spelling mistakes. Furthermore, geographical distribution data can be automatically downloaded from GBIF. Additionally, TTT offers a conversion tool that translates DNA metabarcoding data into the formats required for traditional, taxonomy-based analyses performed by regulatory bioassessment programs. Beyond that, TTT is able to produce fully interactive HTML-based graphics that can be analyzed in any web browser. The software comes with a manual and tutorial, is free, and is publicly available through GitHub (https://github.com/TillMacher/TaxonTableTools) or the Python package index (https://pypi.org/project/taxontabletools/).
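
For illustration, a hedged Python sketch of two of the steps mentioned above: merging a read table and a taxonomy table into a combined table, and checking a name against GBIF's public species-match API. The file names and column names are assumptions; the actual TaXon table layout is defined by TTT itself.

```python
import pandas as pd
import requests

# Merge read counts and taxonomic assignments on a shared OTU/ASV identifier.
# "ID" as the join column is an assumption for this sketch.
reads = pd.read_excel("read_table.xlsx")          # OTU/ASV reads per sample
taxonomy = pd.read_excel("taxonomy_table.xlsx")   # taxonomic assignment per OTU/ASV
taxon_table = taxonomy.merge(reads, on="ID")

# Validate one name against the GBIF species-match endpoint; the response
# includes fields such as matchType and status (e.g. ACCEPTED or SYNONYM).
resp = requests.get("https://api.gbif.org/v1/species/match",
                    params={"name": "Gammarus pulex"}, timeout=10)
hit = resp.json()
print(hit.get("matchType"), hit.get("status"), hit.get("scientificName"))
```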


2012
Vol 2012
pp. 1-8
Author(s):
J. Zachary Gazak
John A. Johnson
John Tonry
Diana Dragomir
Jason Eastman
...  

We present an IDL graphical user-interface-driven software package designed for the analysis of exoplanet transit light curves. The Transit Analysis Package (TAP) software uses Markov Chain Monte Carlo (MCMC) techniques to fit light curves using the analytic model of Mandel and Agol (2002). The package incorporates a wavelet-based likelihood function developed by Carter and Winn (2009), which allows the MCMC to assess parameter uncertainties more robustly than classic χ² methods by parameterizing uncorrelated "white" and correlated "red" noise. The software is able to simultaneously analyze multiple transits observed in different conditions (instrument, filter, weather, etc.). The graphical interface allows for the simple execution and interpretation of Bayesian MCMC analysis tailored to a user's specific data set and has been thoroughly tested on ground-based and Kepler photometry. This paper describes the software release and provides applications to new and existing data. Reanalyses of ground-based observations of TrES-1b, WASP-4b, and WASP-10b (Winn et al., 2007, 2009; Johnson et al., 2009; resp.) and of space-based Kepler observations of Kepler-4b through Kepler-8b (Kipping and Bakos, 2010) show good agreement between TAP and those publications. We also present new multi-filter light curves of WASP-10b, and we find excellent agreement with the smaller radius value published previously.
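
The sketch below illustrates the underlying MCMC idea in Python with a deliberately simplified setup: a toy box-shaped transit model standing in for the Mandel and Agol (2002) model, and a plain white-noise Gaussian likelihood rather than the Carter and Winn (2009) wavelet likelihood. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def box_transit(t, t0, depth, dur):
    """Toy box-shaped transit; a crude stand-in for the Mandel & Agol model."""
    flux = np.ones_like(t)
    flux[np.abs(t - t0) < 0.5 * dur] -= depth
    return flux

# Synthetic light curve with white noise
t = np.linspace(-0.1, 0.1, 400)
y = box_transit(t, 0.0, 0.01, 0.08) + rng.normal(0, 0.002, t.size)

def log_like(theta):
    t0, depth, dur = theta
    if depth <= 0 or dur <= 0:
        return -np.inf
    r = y - box_transit(t, t0, depth, dur)
    return -0.5 * np.sum((r / 0.002) ** 2)   # white noise only; TAP adds a
                                             # wavelet-based "red noise" term

# Metropolis-Hastings sampling of (mid-transit time, depth, duration)
theta, ll, chain = np.array([0.001, 0.008, 0.07]), -np.inf, []
ll = log_like(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [1e-3, 3e-4, 2e-3])
    ll_prop = log_like(prop)
    if np.log(rng.random()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(theta)
chain = np.array(chain)[5000:]               # discard burn-in
print("depth = %.4f +/- %.4f" % (chain[:, 1].mean(), chain[:, 1].std()))
```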


Entropy
2021
Vol 23 (3)
pp. 321
Author(s):
David Mayor
Deepak Panday
Hari Kala Kandel
Tony Steffert
Duncan Banks

Background: We developed CEPS as an open access MATLAB® GUI (graphical user interface) for the analysis of Complexity and Entropy in Physiological Signals (CEPS), and demonstrate its use with an example data set that shows the effects of paced breathing (PB) on the variability of heart, pulse and respiration rates. CEPS is also sufficiently adaptable to be used for other time series physiological data such as EEG (electroencephalography), postural sway or temperature measurements. Methods: Data were collected from a convenience sample of nine healthy adults in a pilot for a larger study investigating the effects on vagal tone of breathing paced at various rates, part of a development programme for a home training stress reduction system. Results: The current version of CEPS focuses on those complexity and entropy measures that appear most frequently in the literature, together with some recently introduced entropy measures that may have advantages over more established ones. Ten methods of estimating data complexity and some 28 entropy measures are currently included. The GUI also includes a section for data pre-processing and standard ancillary methods to enable parameter estimation of the embedding dimension m and time delay τ (‘tau’) where required. The software is freely available under version 3 of the GNU Lesser General Public License (LGPLv3) for non-commercial users. CEPS can be downloaded at https://bitbucket.org/deepak_panday/ceps/src/pipeline_v2/. In our illustration on PB, most complexity and entropy measures decreased significantly in response to breathing at 7 breaths per minute, differentiating more clearly than conventional linear, time- and frequency-domain measures between breathing states. In contrast, the Higuchi fractal dimension increased during paced breathing. Conclusions: We have developed the CEPS software as a physiological data visualiser able to integrate state-of-the-art techniques. The interface is designed for clinical research and is structured to allow the integration of new tools. The aim is to strengthen collaboration between clinicians and the biomedical community, as demonstrated here by using CEPS to analyse various physiological responses to paced breathing.
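
As one concrete example of the measures mentioned, here is a short Python implementation of the standard Higuchi fractal dimension algorithm (the choice k_max = 10 is a common default, not a CEPS setting); for white noise the estimate is close to 2.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi fractal dimension of a 1-D signal (standard algorithm)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if idx.size < 2:
                continue
            # normalised curve length of the series subsampled at interval k
            lm = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((idx.size - 1) * k * k)
            lk.append(lm)
        lengths.append(np.mean(lk))
    # slope of log L(k) against log(1/k) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lengths), 1)
    return slope

rng = np.random.default_rng(2)
print(higuchi_fd(rng.normal(size=2000)))   # white noise -> close to 2
```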


2020
pp. paper67-1-paper67-10
Author(s):
Ilya Zarubin
Aleksander Filinskikh

The features of applying the regression test selection method to automated graphical user interface testing in the development of multi-module information systems are considered. The need to create additional test environments when developing multi-module information systems that use databases is explained. The three most popular approaches to organizing test environments – copying, scaling, and scaling with synthetic data generation – are examined. Their advantages and drawbacks are discussed in terms of implementation, use, and the resources spent on creating and maintaining the environments, as well as the reliability of the results obtained when testing models built with each approach. The benefits of assessing the quality of complex multi-module information systems through the graphical user interface by various testing methods, and in particular by regression testing, are presented. The benefits of automating regression testing under resource constraints using various software platforms are indicated. Finally, the advantages of dynamic selection of regression tests for automated testing are given, together with recommendations for introducing the selection method into both existing and newly started projects.
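
A minimal sketch of the selection idea in Python, with entirely illustrative data: each GUI test records which modules its execution touches, and only tests whose coverage intersects the changed modules are re-run.

```python
# Hypothetical coverage map: GUI test name -> modules exercised by that test.
test_coverage = {
    "test_login_form":  {"auth", "ui.forms"},
    "test_report_grid": {"reports", "ui.grid"},
    "test_user_search": {"auth", "search"},
}

def select_tests(changed_modules: set[str]) -> list[str]:
    """Return the subset of regression tests affected by the change set."""
    return sorted(t for t, mods in test_coverage.items()
                  if mods & changed_modules)

print(select_tests({"auth"}))   # -> ['test_login_form', 'test_user_search']
```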


2008
Vol 06 (06)
pp. 1193-1211
Author(s):
Mihailo Kaplarevic
Alison E. Murray
Stephen C. Cary
Guang R. Gao

Short-insert shotgun sequencing approaches have been applied in recent years to environmental genomic libraries. In the case of complex multispecies microbial communities, there can be many sequence reads that are not incorporated into assemblies and thus need to be annotated and accessible as single reads. Most existing annotation systems and genome databases accommodate assembled genomes containing contiguous gene-encoding sequences. Thus, a solution is required that can work effectively with environmental genomic annotation information to facilitate data analysis. The Environmental Genome Informational Utility System (EnGenIUS) is a comprehensive environmental genome (metagenome) research toolset that was specifically designed to accommodate the needs of large (> 250 K sequence reads) environmental genome sequencing efforts. The core EnGenIUS modules consist of a set of UNIX scripts and PHP programs used for data preprocessing, an annotation pipeline with accompanying analysis tools, two entity-relational databases, and a graphical user interface. The annotation pipeline has a modular structure and can be customized to best fit the properties of the input data set. The integrated entity-relational databases store raw data and annotation analysis results. Access to the underlying databases and services is facilitated through a web-based graphical user interface. Users have the ability to browse, upload, download, and analyze preprocessed data based on diverse search criteria. The EnGenIUS toolset was successfully tested using the Alvinella pompejana epibiont environmental genome data set, which comprises more than 300 K sequence reads. A fully browsable EnGenIUS portal is available (access code: "guest"). The scope of this paper covers the implementation details and technical aspects of the EnGenIUS toolset.
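
The modular pipeline structure could look roughly like the following Python sketch, in which each annotation stage is an independent callable and a pipeline is assembled from whichever stages fit the input data set. The stage names and the read representation are hypothetical, not the actual EnGenIUS modules.

```python
from typing import Callable

Read = dict   # one unassembled sequence read plus its accumulated annotations

def quality_trim(read: Read) -> Read:
    read["trimmed"] = True                     # placeholder preprocessing step
    return read

def similarity_annotate(read: Read) -> Read:
    read["annotation"] = "hypothetical protein"  # placeholder annotation result
    return read

def build_pipeline(stages: list[Callable[[Read], Read]]):
    """Compose the selected stages into a single per-read pipeline."""
    def run(read: Read) -> Read:
        for stage in stages:
            read = stage(read)
        return read
    return run

# Customize the stage order to the properties of the input data set
pipeline = build_pipeline([quality_trim, similarity_annotate])
print(pipeline({"id": "read_0001", "seq": "ACGT"}))
```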


Geophysics
2019
Vol 84 (3)
pp. E209-E223
Author(s):
Juan Luis Fernández-Martínez
Zulima Fernández-Muñiz
Shan Xu
Ana Cernea
Colette Sirieix
...  

We have evaluated the uncertainty analysis of the 3D electrical tomography inverse problem using model reduction via singular-value decomposition and performed sampling of the nonlinear equivalence region via an explorative member of the particle swarm optimization (PSO) family. The procedure begins with the local inversion of the observed data to find a good resistivity model located in the nonlinear equivalence region. Then, the dimensionality is reduced via the spectral decomposition of the 3D geophysical model. Finally, the exploration of the uncertainty space is performed via an exploratory version of PSO (RR-PSO). This sampling methodology does not prejudge where the initial model comes from, as long as this model has a geologic meaning. The 3D subsurface conductivity distribution is arranged as a 2D matrix by ordering the conductivity values contained in a given earth section as a column array and stacking parallel sections as columns of the matrix. There are three basic modes of ordering: mode 1 and mode 2, which use vertical sections in two perpendicular directions, and mode 3, which uses horizontal sections. The spectral decomposition is then performed using these three 2D modes. Using this approach, it is possible to sample the uncertainty space of the 3D electrical resistivity inverse problem very efficiently. This methodology is intrinsically parallelizable and could be run for different initial models simultaneously. We show an application to a synthetic data set that is well known in the literature on this subject, obtaining a set of surviving geophysical models located in the nonlinear equivalence region that can be used to approximate numerically the posterior distribution of the geophysical model parameters (frequentist approach). Based on these models, it is possible to perform a probabilistic segmentation of the inverse solution while answering geophysical questions with a corresponding uncertainty assessment. This methodology has a general character and could be applied to any other 3D nonlinear inverse problem by implementing the corresponding forward model.
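
A minimal Python sketch of the model-reduction step under mode-3 ordering: horizontal sections of a toy 3D conductivity volume are flattened into the columns of a 2D matrix, and a truncated SVD yields a low-dimensional parameterization on which a sampler such as RR-PSO could then operate (the PSO stage itself is omitted here; all sizes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)
nx, ny, nz = 20, 15, 10
model = rng.random((nx, ny, nz))        # toy 3D conductivity volume

# Mode 3 ordering: each horizontal section is flattened into one column,
# and the nz parallel sections are stacked as the columns of a 2D matrix.
M3 = model.reshape(nx * ny, nz)

# Truncated SVD keeps the r leading spectral components of the model
U, s, Vt = np.linalg.svd(M3, full_matrices=False)
r = 4
reduced = (U[:, :r] * s[:r]) @ Vt[:r]   # rank-r approximation of the volume
coords = s[:r, None] * Vt[:r]           # r x nz reduced coordinates to sample

err = np.linalg.norm(M3 - reduced) / np.linalg.norm(M3)
print(f"rank-{r} relative error: {err:.3f}")
```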


2012
Vol 45 (3)
pp. 568-572
Author(s):
Michael Krug
Manfred S. Weiss
Udo Heinemann
Uwe Mueller

XDSAPP is a Tcl/Tk-based graphical user interface for the easy and convenient processing of diffraction data sets using XDS. It provides easy access to all XDS functionalities, automates the data processing and generates graphical plots of various data set statistics provided by XDS. By incorporating additional software, further information on certain features of the data set, such as radiation decay during data collection or the presence of pseudo-translational symmetry and/or twinning, can be obtained. Intensity files suitable for CCP4, CNS and SHELX are generated.
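
For illustration, a hedged Python sketch of the kind of automation such a wrapper performs: write a minimal XDS.INP, run the externally installed xds binary, and check for the log of the final CORRECT step. The keyword values are placeholders; real processing requires detector- and beamline-specific input, which XDSAPP fills in automatically.

```python
import subprocess
from pathlib import Path

workdir = Path("proc")
workdir.mkdir(exist_ok=True)

# Minimal XDS.INP; frame template and range are placeholders for this sketch
(workdir / "XDS.INP").write_text(
    "JOB= XYCORR INIT COLSPOT IDXREF DEFPIX INTEGRATE CORRECT\n"
    "NAME_TEMPLATE_OF_DATA_FRAMES= ../frames/img_????.cbf\n"
    "DATA_RANGE= 1 360\n"
)

# XDS reads XDS.INP from its working directory ("xds_par" for the parallel build)
subprocess.run(["xds"], cwd=workdir, check=True)
if (workdir / "CORRECT.LP").exists():
    print("processing finished; data set statistics are in CORRECT.LP")
```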

