NeuDATool: An open source neutron data analysis tools, supporting GPU hardware acceleration, and across-computer cluster nodes parallel

2020 ◽  
Vol 33 (6) ◽  
pp. 727-732 ◽  
Author(s):  
Chang-li Ma ◽  
He Cheng ◽  
Tai-sen Zuo ◽  
Gui-sheng Jiao ◽  
Ze-hua Han ◽  
...  
Solid Earth ◽  
2011 ◽  
Vol 2 (1) ◽  
pp. 53-63 ◽  
Author(s):  
S. Tavani ◽  
P. Arbues ◽  
M. Snidero ◽  
N. Carrera ◽  
J. A. Muñoz

Abstract. In this work we present the Open Plot Project, an open-source software package for structural data analysis that includes a 3-D environment. The software provides many of the classical functions of structural data analysis tools, such as stereoplots, contouring, tensorial regression, scatterplots, histograms and transect analysis. In addition, efficient filtering tools allow data to be selected according to their attributes, including spatial position and orientation. This first alpha release is a stand-alone toolkit for structural data analysis. The 3-D environment, with its digitising tools, allows structural data to be integrated with information extracted from georeferenced images to produce structurally validated dip domains. Coupled with many import/export facilities, this allows structural analyses to be incorporated easily into workflows for 3-D geological modelling; the Open Plot Project is thus also a candidate structural add-on for 3-D geological modelling software. The software (for both Windows and Linux), the User Manual, a set of example movies complementing the User Manual, and the source code are provided in the Supplement. We publish the source code to lay the foundation for free, public software that, we hope, the structural geology community will use, modify, and extend. The creation of additional public controls/tools is strongly encouraged.
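As a concrete illustration of the attribute filtering described above, the following minimal Python sketch selects structural measurements by type, dip range and map-view window. It is not Open Plot Project code (the software itself is a GUI toolkit); the field names and thresholds are illustrative assumptions.

# Minimal sketch (not Open Plot Project code) of attribute-based
# filtering of structural measurements by type, orientation and
# spatial window. Field names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Measurement:
    x: float            # easting (m)
    y: float            # northing (m)
    dip: float          # dip angle (degrees)
    dip_azimuth: float  # dip direction (degrees from north)
    kind: str           # e.g. "bedding", "fault", "joint"

def select(data, kind=None, dip_range=None, bbox=None):
    """Return measurements matching all of the given attribute filters."""
    out = []
    for m in data:
        if kind is not None and m.kind != kind:
            continue
        if dip_range is not None and not (dip_range[0] <= m.dip <= dip_range[1]):
            continue
        if bbox is not None:
            xmin, ymin, xmax, ymax = bbox
            if not (xmin <= m.x <= xmax and ymin <= m.y <= ymax):
                continue
        out.append(m)
    return out

# Example: steeply dipping bedding planes inside a map window.
data = [Measurement(500.0, 1200.0, 65.0, 110.0, "bedding"),
        Measurement(520.0, 1250.0, 20.0, 95.0, "bedding")]
steep = select(data, kind="bedding", dip_range=(45.0, 90.0),
               bbox=(0.0, 0.0, 1000.0, 2000.0))
print(len(steep))  # -> 1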


Author(s):  
Taiga Abe ◽  
Ian Kinsella ◽  
Shreya Saxena ◽  
Liam Paninski ◽  
John P. Cunningham

Abstract. A major goal of computational neuroscience is to develop powerful analysis tools that operate on large datasets. These methods provide an essential toolset to unlock scientific insights from new experiments. Unfortunately, a major obstacle currently impedes progress: while existing analysis methods are frequently shared as open source software, the infrastructure needed to deploy these methods (at scale, reproducibly, cheaply, and quickly) remains totally inaccessible to all but a minority of expert users. As a result, many users cannot fully exploit these tools, due to constrained computational resources (limited or costly compute hardware) and/or mismatches in expertise (experimentalists vs. large-scale computing experts). In this work we develop Neuroscience Cloud Analysis As a Service (NeuroCAAS): a fully-managed infrastructure platform, based on modern large-scale computing advances, that makes state-of-the-art data analysis tools accessible to the neuroscience community. We offer NeuroCAAS as an open source service with a drag-and-drop interface, entirely removing the burden of infrastructure expertise, purchasing, maintenance, and deployment. NeuroCAAS is enabled by three key contributions. First, NeuroCAAS cleanly separates tool implementation from usage, allowing cutting-edge methods to be served directly to the end user with no need to read or install any analysis software. Second, NeuroCAAS automatically scales as needed, providing reliable, highly elastic computational resources that are more efficient than personal or lab-supported hardware, without management overhead. Finally, we show that many popular data analysis tools offered through NeuroCAAS outperform typical analysis solutions (in terms of speed and cost) while improving ease of use and maintenance, dispelling the myth that cloud compute is prohibitively expensive and technically inaccessible. By removing barriers to fast, efficient cloud computation, NeuroCAAS can dramatically accelerate both the dissemination and the effective use of cutting-edge analysis tools for neuroscientific discovery.
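As a rough illustration of how such a fully-managed service can decouple tool implementation from usage, consider the Python sketch below: the user uploads a dataset plus a small job configuration to a storage bucket, and provisioning, scaling and teardown happen entirely server-side. The bucket name, key layout and configuration fields are hypothetical, not the actual NeuroCAAS interface.

# Illustrative sketch only: a "submit data + config, receive results"
# pattern of the kind the NeuroCAAS abstract describes. Bucket name,
# key layout and config fields below are hypothetical assumptions.
import json
import boto3

BUCKET = "example-analysis-service"  # hypothetical service bucket

def submit_job(dataset_path: str, user: str, tool: str) -> str:
    """Upload a dataset and a job config; the service reacts to the upload."""
    s3 = boto3.client("s3")
    data_key = f"{user}/inputs/{dataset_path.rsplit('/', 1)[-1]}"
    s3.upload_file(dataset_path, BUCKET, data_key)

    config = {"tool": tool, "input": data_key, "instance": "auto"}
    config_key = f"{user}/submit/config.json"
    s3.put_object(Bucket=BUCKET, Key=config_key,
                  Body=json.dumps(config).encode())
    return config_key  # results would later appear under f"{user}/results/"

# Example (hypothetical tool name):
# submit_job("session1.nwb", user="lab-a", tool="behavior-tracking")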


2016 ◽  
Author(s):  
Gaurav Kaushik ◽  
Sinisa Ivkovic ◽  
Janko Simonovic ◽  
Nebojsa Tijanic ◽  
Brandi Davis-Dusenbery ◽  
...  

As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn reliably from large volumes of data. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language (CWL), an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, allowing for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions.
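For readers unfamiliar with CWL, the sketch below builds a toy CWL tool description (line counting with wc -l) as a Python dictionary. Because YAML is a superset of JSON, the JSON dump is itself a valid CWL document; the tool and file names are illustrative, and a CWL executor such as Rabix or cwltool should be able to run the result.

# Minimal sketch of a CWL CommandLineTool description, written as a
# Python dict for illustration. CWL files are YAML; since YAML is a
# superset of JSON, the JSON dump below is a valid CWL document.
import json

wc_tool = {
    "cwlVersion": "v1.0",
    "class": "CommandLineTool",
    "baseCommand": ["wc", "-l"],
    "inputs": {
        "infile": {                       # the file whose lines we count
            "type": "File",
            "inputBinding": {"position": 1},
        }
    },
    "stdout": "line_count.txt",           # capture stdout to a named file
    "outputs": {
        "line_count": {"type": "stdout"}  # expose captured stdout as the output
    },
}

with open("wc-tool.cwl", "w") as fh:
    json.dump(wc_tool, fh, indent=2)

# A CWL executor could then run it, e.g.:
#   cwltool wc-tool.cwl --infile input.txt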


ZOO-Journal ◽  
2019 ◽  
Vol 5 ◽  
pp. 23-26
Author(s):  
Chitra B Baniya

Data analysis tools and software are proliferating in academia by the day. They fall into two categories: closed software, for which a licence must be purchased before it can be installed and used on one's own computer, and open source software, which is available and licensed free of charge. Academics are free to choose either. 'R' and its associated tools, which can be downloaded and used freely, have no better alternative, and the number of 'R' users around the world has been growing rapidly.


2011 ◽  
Vol 3 (2) ◽  
pp. 305-318 ◽  
Author(s):  
Michele Crosetto ◽  
Oriol Monserrat ◽  
María Cuevas ◽  
Bruno Crippa

2021 ◽  
Author(s):  
Scott A. Jarmusch ◽  
Justin J. J. van der Hooft ◽  
Pieter C. Dorrestein ◽  
Alan K. Jarmusch

This review covers the current and potential use of mass spectrometry-based metabolomics data mining in natural products. Public data, metadata, databases and data analysis tools are critical. The value and success of data mining rely on community participation.

