ssNake: A cross-platform open-source NMR data processing and fitting application

2019 ◽  
Vol 301 ◽  
pp. 56-66 ◽  
Author(s):  
S.G.J. van Meerten ◽  
W.M.J. Franssen ◽  
A.P.M. Kentgens
2019 ◽  
Author(s):  
Kyle W. East ◽  
Andrew Leith ◽  
Ashok Ragavendran ◽  
Frank Delaglio ◽  
George P. Lisi

ABSTRACT: NMR is a widely employed tool in chemistry, biology, and physics for the study of molecular structure and dynamics. Advances in computation have produced scores of software programs for the processing and analysis of NMR data. However, the production of NMR software has been largely overseen by academic labs, each with their own preferred OS, environment, and dependencies. This lack of broader standardization and the complexity of installing and maintaining NMR-related software creates a barrier to entry into the field. To further complicate matters, as computation evolves, many aging software packages become deprecated. To reduce the barrier for newcomers and to prevent deprecation of aging software, we have created the NMRdock container. NMRdock utilizes containerization to package NMR processing and analysis programs into a single, easy-to-install Docker image that can be run on any modern OS. The current image contains two bedrock NMR data processing programs (NMRPipe and NMRFAM-Sparky). Future development of NMRdock aims to add modules for additional analysis programs to build a library of tools in a standardized and easy-to-implement manner. NMRdock is open source and free to download at https://compbiocore.github.io/nmrdock/.
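Since NMRdock is distributed as a Docker image, running it amounts to pulling the image and mounting a data directory into the container. Below is a minimal sketch of that workflow from Python; the image name (compbiocore/nmrdock) and mount point are illustrative assumptions, not values taken from the NMRdock documentation.

```python
# Minimal sketch: launching a containerized NMR toolchain from Python.
# The image name and mount point are assumptions for illustration only;
# consult the NMRdock project page for the actual invocation.
import subprocess
from pathlib import Path

def run_in_container(data_dir: str, image: str = "compbiocore/nmrdock") -> None:
    """Start the container with a host data directory mounted at /data."""
    host = Path(data_dir).resolve()
    cmd = [
        "docker", "run", "--rm", "-it",
        "-v", f"{host}:/data",   # expose host spectra to the container
        image,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_in_container("./nmr_experiments")
```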


2016 ◽  
Vol 65 (3-4) ◽  
pp. 205-216 ◽  
Author(s):  
Michael Norris ◽  
Bayard Fetler ◽  
Jan Marchant ◽  
Bruce A. Johnson

2020 ◽  
Author(s):  
K. Thirumalesh ◽  
Salgeri Puttaswamy Raju ◽  
Hiriyur Mallaiah Somashekarappa ◽  
Kumaraswamy Swaroop

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Daming Yang ◽  
Yongjian Huang ◽  
Zongyang Chen ◽  
Qinghua Huang ◽  
Yanguang Ren ◽  
...  

Abstract: Fischer plots are widely used in paleoenvironmental research as graphic representations of sea- and lake-level changes, obtained by mapping the linearly corrected variation of cumulative cycle thickness against cycle number or stratum depth. Some kinds of paleoenvironmental proxy data (especially subsurface data, such as natural gamma-ray logging data), which preserve continuous cyclic signals and have been collected in large quantities, are potential materials for constructing Fischer plots. However, it is laborious to count the cycles preserved in these proxy data manually and to map Fischer plots from those cycles. In this paper, we introduce an original open-source Python code, "PyFISCHERPLOT", for constructing Fischer plots in batches from paleoenvironmental proxy data series. The principle of constructing Fischer plots from proxy data, the data processing and usage of the PyFISCHERPLOT code, and application cases of the code are presented. The code is compared with existing methods for constructing Fischer plots.
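The construction described above reduces to a simple calculation: accumulate cycle thicknesses and subtract the linear trend implied by the mean thickness per cycle. A minimal sketch in Python follows, assuming cycle thicknesses have already been extracted from the proxy series (the step PyFISCHERPLOT automates); this is an illustration of the principle, not the PyFISCHERPLOT API.

```python
# Minimal sketch of a Fischer plot: cumulative cycle thickness detrended
# by the mean thickness accumulated per cycle, plotted against cycle number.
import numpy as np
import matplotlib.pyplot as plt

def fischer_plot(cycle_thickness):
    """Return cycle numbers and cumulative departure from mean thickness."""
    t = np.asarray(cycle_thickness, dtype=float)
    departure = np.cumsum(t - t.mean())   # linearly corrected cumulative thickness
    cycles = np.arange(1, t.size + 1)
    return cycles, departure

if __name__ == "__main__":
    thickness = [1.2, 0.9, 1.4, 1.1, 0.7, 1.3, 1.6, 0.8]  # synthetic thicknesses (m)
    n, d = fischer_plot(thickness)
    plt.step(n, d, where="post")
    plt.xlabel("Cycle number")
    plt.ylabel("Cumulative departure from mean thickness (m)")
    plt.show()
```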


2012 ◽  
Vol 51 (05) ◽  
pp. 441-448 ◽  
Author(s):  
P. F. Neher ◽  
I. Reicht ◽  
T. van Bruggen ◽  
C. Goch ◽  
M. Reisert ◽  
...  

Summary. Background: Diffusion MRI provides a unique window on brain anatomy and insights into aspects of tissue structure in living humans that could not be studied previously. There is a major effort in this rapidly evolving field of research to develop the algorithmic tools necessary to cope with the complexity of the datasets. Objectives: This work illustrates our strategy, which encompasses the development of a modularized and open software tool for data processing, visualization and interactive exploration in diffusion imaging research, and aims at reinforcing sustainable evaluation and progress in the field. Methods: In this paper, the usability and capabilities of a new application and toolkit component of the Medical Imaging Interaction Toolkit (MITK, www.mitk.org), MITK-DI, are demonstrated using in-vivo datasets. Results: MITK-DI provides a comprehensive software framework for high-performance data processing, analysis and interactive data exploration, which is designed in a modular, extensible fashion (using CTK) and in adherence to widely accepted coding standards (e.g. ITK, VTK). MITK-DI is available both as an open source software development toolkit and as a ready-to-use installable application. Conclusions: The open source release of the modular MITK-DI tools will increase verifiability and comparability within the research community and will also be an important step towards bringing many of the current techniques closer to clinical application.


2020 ◽  
Vol 13 (2) ◽  
pp. 1-9
Author(s):  
Farid Jatri Abiyyu ◽  
Ibnu Ziad ◽  
Ade Silvia Handayani

A diskless server is a cluster computer network that uses the SSH (Secure Shell) protocol to grant the client access to the host's directory and to modify its contents, so that the client does not need a hard disk (thin client). One way to design a diskless server is by utilizing the "Linux Terminal Server Project" (LTSP), an open source-based script for Linux. However, using Linux has its own drawback: it cannot natively run Windows-based applications, which are commonly used. This drawback can be overcome by using a compatibility layer that converts a Windows-based application's code so it can run on Linux. The data monitored are the results of the compatibility layer implementation, together with throughput, packet loss, delay, and jitter. The measurements of these four parameters were rated "Excellent" for throughput, "Perfect" for packet loss and delay, and "Good" for jitter.
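The four quality-of-service parameters named above have standard definitions, and a minimal sketch of how they can be computed from per-packet send/receive timestamps is shown below. The sample values are synthetic and the rating categories ("Excellent", "Perfect", "Good") come from the paper's own measurement setup, which is not reproduced here.

```python
# Minimal sketch of throughput, packet loss, delay and jitter computed
# from per-packet timestamps; synthetic data for illustration only.
def qos_metrics(sent, received, bytes_received):
    """sent/received: dicts mapping packet id -> timestamp in seconds."""
    delivered = [pid for pid in sent if pid in received]
    delays = [received[pid] - sent[pid] for pid in delivered]
    duration = max(received.values()) - min(sent.values())
    throughput_bps = bytes_received * 8 / duration
    packet_loss_pct = 100.0 * (len(sent) - len(delivered)) / len(sent)
    avg_delay = sum(delays) / len(delays)
    # jitter: mean absolute difference between consecutive one-way delays
    jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / max(len(delays) - 1, 1)
    return throughput_bps, packet_loss_pct, avg_delay, jitter

if __name__ == "__main__":
    sent = {1: 0.000, 2: 0.020, 3: 0.040, 4: 0.060}
    received = {1: 0.012, 2: 0.031, 3: 0.055}          # packet 4 was lost
    print(qos_metrics(sent, received, bytes_received=3 * 1500))
```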


2019 ◽  
Author(s):  
H. Soon Gweon ◽  
Liam P. Shaw ◽  
Jeremy Swann ◽  
Nicola De Maio ◽  
Manal AbuOun ◽  
...  

ABSTRACT. Background: Shotgun metagenomics is increasingly used to characterise microbial communities, particularly for the investigation of antimicrobial resistance (AMR) in different animal and environmental contexts. There are many different approaches for inferring the taxonomic composition and AMR gene content of complex community samples from shotgun metagenomic data, but there has been little work establishing the optimum sequencing depth, data processing and analysis methods for these samples. In this study we used shotgun metagenomics and sequencing of cultured isolates from the same samples to address these issues. We sampled three potential environmental AMR gene reservoirs (pig caeca, river sediment, effluent) and sequenced samples with shotgun metagenomics at high depth (∼200 million reads per sample). Alongside this, we cultured single-colony isolates of Enterobacteriaceae from the same samples and used hybrid sequencing (short- and long-reads) to create high-quality assemblies for comparison to the metagenomic data. To automate data processing, we developed an open-source software pipeline, 'ResPipe'. Results: Taxonomic profiling was much more stable to sequencing depth than AMR gene content. One million reads per sample was sufficient to achieve <1% dissimilarity to the full taxonomic composition. However, at least 80 million reads per sample were required to recover the full richness of different AMR gene families present in the sample, and additional allelic diversity of AMR genes was still being discovered in effluent at 200 million reads per sample. Normalising the number of reads mapping to AMR genes using gene length and an exogenous spike of Thermus thermophilus DNA substantially changed the estimated gene abundance distributions. While the majority of genomic content from cultured isolates from effluent was recoverable using shotgun metagenomics, this was not the case for pig caeca or river sediment. Conclusions: Sequencing depth and profiling method can critically affect the profiling of polymicrobial animal and environmental samples with shotgun metagenomics. Both sequencing of cultured isolates and shotgun metagenomics can recover substantial diversity that is not identified using the other method. Particular consideration is required when inferring AMR gene content or presence by mapping metagenomic reads to a database. ResPipe, the open-source software pipeline we have developed, is freely available (https://gitlab.com/hsgweon/ResPipe).
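The normalisation step mentioned in the Results, dividing reads mapped to each AMR gene by gene length and scaling by the reads mapping to the exogenous Thermus thermophilus spike-in, can be sketched as follows. The counts and lengths are synthetic and the exact formula implemented in ResPipe may differ; this only illustrates the idea.

```python
# Minimal sketch: length- and spike-normalised AMR gene abundances.
# Synthetic counts; not the exact ResPipe implementation.
def normalise_amr_counts(gene_reads, gene_lengths, spike_reads, spike_length):
    """Return abundance per gene, normalised by gene length and spike coverage."""
    spike_factor = spike_reads / spike_length      # spike coverage as the scaling unit
    return {
        gene: (reads / gene_lengths[gene]) / spike_factor
        for gene, reads in gene_reads.items()
    }

if __name__ == "__main__":
    gene_reads = {"blaTEM-1": 1200, "tetW": 300}        # mapped read counts (synthetic)
    gene_lengths = {"blaTEM-1": 861, "tetW": 1920}      # gene lengths in bp (synthetic)
    print(normalise_amr_counts(gene_reads, gene_lengths,
                               spike_reads=5000, spike_length=2_100_000))
```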


2019 ◽  
Vol 11 (1) ◽  
pp. 1-1
Author(s):  
Sabrina Kletz ◽  
Marco Bertini ◽  
Mathias Lux

Having already discussed MatConvNet and Keras, let us continue with an open source framework for deep learning that takes a new and interesting approach. TensorFlow.js not only provides deep learning for JavaScript developers, it also makes deep learning applications available in WebGL-enabled web browsers, more specifically Chrome, Chromium-based browsers, Safari and Firefox. Recently, Node.js support has been added, so TensorFlow.js can be used to control TensorFlow directly without the browser. TensorFlow.js is easy to install: as soon as a browser is installed, one is ready to go. Browser-based, cross-platform applications, e.g. those running with Electron, can also make use of TensorFlow.js without an additional install. The performance, however, depends on the browser the client is running and on the memory and GPU of the client device. More specifically, one cannot expect to analyze 4K videos on a mobile phone in real time. While it is easy to install and easy to develop with TensorFlow.js, there are drawbacks: (i) developers have less control over where the machine learning actually takes place (e.g. on CPU or GPU), and the code runs in the same sandbox as all web pages in the browser, and (ii) the current release still has rough edges and is not considered stable enough for use in production.

