Computational Methods for Metabolomic Data Analysis of Ion Mobility Spectrometry Data—Reviewing the State of the Art

Metabolites ◽  
2012 ◽  
Vol 2 (4) ◽  
pp. 733-755 ◽  
Author(s):  
Anne-Christin Hauschild ◽  
Till Schneider ◽  
Josch Pauling ◽  
Kathrin Rupp ◽  
Mi Jang ◽  
...  
Author(s):  
Paolo Marcatili ◽  
Anna Tramontano

This chapter provides an overview of current computational methods for protein–protein interaction (PPI) network cleansing. The authors first present the problem of identifying reliable PPIs from noisy and incomplete experimental data. Next, they address several open questions: what results the different experimental studies can be expected to deliver, what can be defined as a true interaction, which kinds of data should be integrated when assigning reliability levels to PPIs, and which gold standard should be used for training and testing PPI filtering methods. Finally, Marcatili and Tramontano describe the state of the art in the field, presenting the different classes of algorithms and comparing their results. The aim of the chapter is to guide the reader in choosing the most suitable methods, experiments and integrative data, and to highlight the most common biases and errors, so as to obtain a portrait of PPI networks that is not only reliable but also able to correctly retrieve the biological information contained in such data.


The Analyst ◽  
2016 ◽  
Vol 141 (20) ◽  
pp. 5689-5708 ◽  
Author(s):  
Ewa Szymańska ◽  
Antony N. Davies ◽  
Lutgarde M. C. Buydens

This is the first comprehensive review on chemometric techniques used in ion mobility spectrometry data analysis.


2013 ◽  
Vol 80 (3) ◽  
Author(s):  
Domenico Villano ◽  
Francesco Galliccia

The purpose of this paper is to verify the applicability of innovative technologies for manufacturing controlled fragmentation warheads, with particular attention paid to guided ammunition. Several studies were conducted by the authors during warhead development for the DART and Vulcano families of munitions. The lethality of guided munitions can be considerably increased with controlled fragmentation warheads, and this increase can compensate for the lower payload of guided munitions. After introducing the concept of a warhead and its natural fragmentation, the paper describes both the elements of fracture mechanics related to fragmentation and the state of the art of controlled fragmentation. A preliminary evaluation of controlled fragmentation technologies is illustrated, along with the numerical models developed for predicting natural and controlled fragmentation. The most promising technologies are presented in detail, and the features of the warheads used for the experiments are defined. A description of the entire experimental phase is provided, including results of arena tests, data analysis and revision of the numerical models. The applicability of some innovative technologies for controlled fragmentation warheads is fully demonstrated. Two technologies in particular, laser microdrilling and the double-casing solution, provide a large increase in the lethality of the reference warhead.


KWALON ◽  
2016 ◽  
Vol 21 (1) ◽  
Author(s):  
Susanne Friese

The aim of this paper is to provide an overview of the state of the art of QDA (qualitative data analysis) or CAQDAS software. The author uses Kahneman's ideas about slow and fast thinking as a framework: slow thinking in the context of CAQDAS corresponds to researcher-driven analysis, and fast thinking to tool- and data-driven analysis. The paper is divided into two parts. In the first part, the author describes trends and new developments; in the second, she offers a critical appraisal.


GigaScience ◽  
2020 ◽  
Vol 9 (11) ◽  
Author(s):  
Milton Silva ◽  
Diogo Pratas ◽  
Armando J Pinho

Background: The increasing production of genomic data has led to an intensified need for models that can cope efficiently with the lossless compression of DNA sequences. Important applications include long-term storage and compression-based data analysis. In the literature, only a few recent articles propose the use of neural networks for DNA sequence compression. However, they fall short when compared with specific DNA compression tools, such as GeCo2. This limitation is due to the absence of models specifically designed for DNA sequences. In this work, we combine the power of neural networks with specific DNA models. For this purpose, we created GeCo3, a new genomic sequence compressor that uses neural networks for mixing multiple context and substitution-tolerant context models.
Findings: We benchmark GeCo3 as a reference-free DNA compressor on 5 datasets, including a balanced and comprehensive dataset of DNA sequences, the Y-chromosome and human mitogenome, 2 compilations of archaeal and virus genomes, 4 whole genomes, and 2 collections of FASTQ data of a human virome and ancient DNA. GeCo3 achieves a solid improvement in compression over the previous version (GeCo2) of 2.4%, 7.1%, 6.1%, 5.8%, and 6.0%, respectively. To test its performance as a reference-based DNA compressor, we benchmark GeCo3 on 4 datasets constituted by the pairwise compression of the chromosomes of the genomes of several primates. GeCo3 improves the compression by 12.4%, 11.7%, 10.8%, and 10.1% over the state of the art. The cost of this compression improvement is some additional computational time (1.7–3 times slower than GeCo2). The RAM use is constant, and the tool scales efficiently, independently of the sequence size. Overall, these values outperform the state of the art.
Conclusions: GeCo3 is a genomic sequence compressor with a neural network mixing approach that provides additional gains over top specific genomic compressors. The proposed mixing method is portable, requiring only the probabilities of the models as inputs, providing easy adaptation to other data compressors or compression-based data analysis tools. GeCo3 is released under GPLv3 and is available for free download at https://github.com/cobilab/geco3.
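The portability claim above rests on the mixing stage taking only the per-model probabilities as input. As a minimal sketch of that idea (GeCo3 itself uses a small feed-forward network; the single softmax-weighted layer, the class name, and the learning rate below are illustrative assumptions, not the paper's implementation):

```python
import math

def softmax(ws):
    m = max(ws)
    exps = [math.exp(w - m) for w in ws]
    s = sum(exps)
    return [e / s for e in exps]

class LogisticMixer:
    """Hypothetical online mixer: combines each model's probability for the
    observed symbol using softmax weights, trained by gradient descent on
    the log-loss of the mixed prediction."""

    def __init__(self, n_models, lr=0.1):
        self.w = [0.0] * n_models  # one learnable logit per model
        self.lr = lr

    def mix(self, probs):
        """probs[i] = model i's probability for the symbol that occurred."""
        g = softmax(self.w)
        mixed = sum(gi * pi for gi, pi in zip(g, probs))
        return mixed, g

    def update(self, probs):
        """One coding step: mix, then shift weight toward better models."""
        mixed, g = self.mix(probs)
        for i in range(len(self.w)):
            # gradient of -log(mixed) with respect to logit i
            grad = -g[i] * (probs[i] - mixed) / mixed
            self.w[i] -= self.lr * grad
        return mixed

# Usage: model 0 consistently predicts the observed symbol better,
# so the mixer learns to trust it more over time.
mixer = LogisticMixer(2)
for _ in range(500):
    mixer.update([0.9, 0.2])
mixed, weights = mixer.mix([0.9, 0.2])
```

Because the mixer never inspects the models themselves, the same loop could sit on top of any set of predictors that emit next-symbol probabilities, which is the sense in which such a stage adapts easily to other compressors.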


2012 ◽  
Vol 23 (5) ◽  
pp. 792-805 ◽  
Author(s):  
Natalia L. Zakharova ◽  
Christina L. Crawford ◽  
Brian C. Hauck ◽  
Jacob K. Quinton ◽  
William F. Seims ◽  
...  

1985 ◽  
Vol 107 (1) ◽  
pp. 6-22 ◽  
Author(s):  
William D. McNally ◽  
Peter M. Sockol

A review is given of current computational methods for analyzing flows in turbomachinery and other related internal propulsion components. The methods are divided primarily into two classes, inviscid and viscous. The inviscid methods deal specifically with turbomachinery applications. The viscous methods, on the other hand, reflecting the current state of the art, deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures.

