PackIO and EphysViewer: software tools for acquisition and analysis of neuroscience data

2016
Author(s): Brendon O. Watson, Rafael Yuste, Adam M. Packer

We present an open-source synchronization software package, PackIO, that can record and generate voltage signals to enable complex experimental paradigms across multiple devices. This general-purpose package is built on National Instruments data acquisition and generation hardware and has temporal precision up to the limit of the hardware. PackIO acts as a flexibly programmable master clock that can record experimental data (e.g., voltage traces) and timing data (e.g., event times such as imaging frame times) while generating stimuli (e.g., voltage waveforms and voltage triggers to drive other devices). PackIO is particularly useful for recording from and synchronizing multiple devices, for example when simultaneously acquiring electrophysiology while generating and recording imaging timing data. Experimental control is easily enabled by an intuitive graphical user interface. We also release an open-source data visualisation and analysis tool, EphysViewer, written in MATLAB, as well as a module to import data into Python. These flexible and programmable tools allow experimenters to configure and set up customised input and output protocols in a synchronized fashion for controlling, recording, and analysing experiments.
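Because PackIO records device timing signals as ordinary voltage traces, downstream analysis typically recovers event times (e.g., imaging frame starts) by detecting trigger edges in the recorded clock channel. Below is a minimal sketch of that step in Python with NumPy; the function name, threshold, and synthetic trace are our illustrative assumptions, not part of the released Python import module.

```python
import numpy as np

def frame_times_from_clock(clock_trace, sample_rate_hz, threshold=2.5):
    """Return timestamps (s) of rising edges in a recorded frame-clock channel.

    clock_trace: 1-D array of voltages sampled at sample_rate_hz.
    threshold: volts; edges are detected where the trace crosses it upward.
    """
    above = clock_trace > threshold
    # Rising edge: sample is above threshold, previous sample was not.
    edges = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return edges / sample_rate_hz

# Example: a synthetic 30 Hz, 0-5 V frame clock sampled at 10 kHz, standing in
# for a frame-clock channel recorded alongside electrophysiology by PackIO.
fs = 10_000.0
t = np.arange(int(fs)) / fs
frame_clock = (np.sin(2 * np.pi * 30 * t) > 0) * 5.0
print(frame_times_from_clock(frame_clock, fs)[:5])
```

Because both channels share the same acquisition clock, these timestamps align imaging frames directly against the electrophysiology samples with no further synchronization step.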

Big Data is a term used to describe huge volumes of both unstructured and structured data that cannot be processed by traditional data processing techniques. Such data is too large for traditional database systems, grows exponentially, and does not fit their structure. Analyzing Big Data is challenging because it involves processing enormous amounts of data: as an industry and its business grow, the data related to that industry also grows on a larger scale, and capable data analysis tools are required to extract value from it. Hadoop is a sought-after open-source framework that uses MapReduce techniques to store and process huge datasets. However, programs written directly with MapReduce are inflexible and require maintenance; this problem is overcome by using HiveQL. HiveQL queries run on Hive, an open-source data warehousing system built on Hadoop, and are compiled into MapReduce jobs that Hadoop executes. In this paper we analyze the Indian Premier League dataset using HiveQL and compare its execution time with that of traditional SQL queries. We found that HiveQL provided better performance with larger datasets, while SQL performed better with smaller datasets.
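As an illustration of the kind of query timed in such a comparison, the sketch below submits a HiveQL aggregation from Python via the PyHive client and measures wall-clock execution time. It assumes a running HiveServer2; the host, port, and the table and column names are illustrative assumptions, not the paper's actual setup.

```python
import time

from pyhive import hive  # PyHive client for HiveServer2

conn = hive.connect(host="localhost", port=10000)
cursor = conn.cursor()

# Hypothetical IPL ball-by-ball table; column names are our assumption.
query = """
    SELECT batsman, SUM(batsman_runs) AS total_runs
    FROM ipl_deliveries
    GROUP BY batsman
    ORDER BY total_runs DESC
    LIMIT 10
"""

start = time.perf_counter()
cursor.execute(query)  # Hive compiles the query into MapReduce jobs
rows = cursor.fetchall()
print(f"{len(rows)} rows in {time.perf_counter() - start:.2f} s")
```

Timing the same aggregation as a SQL query against a traditional relational database, over datasets of increasing size, reproduces the shape of the comparison described above.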


2021
Author(s): Wouter Knoben, Shervan Gharari, Martyn Clark

Setting up earth system models can be cumbersome and time-consuming. Model-agnostic tasks are typically the same regardless of the model used and include definition and delineation of the modeling domain and preprocessing of forcing data and parameter fields. Model-specific tasks include conversion of preprocessed data into model-specific formats and generation of model inputs and run scripts. We present a workflow that includes both the model-agnostic and model-specific steps needed to set up the Structure for Unifying Multiple Modeling Alternatives (SUMMA) anywhere on the planet, with the goal of providing a baseline SUMMA setup that can easily be adapted for specific study purposes. The workflow therefore uses open-source data with global coverage to derive basin delineations, climatic forcing, and geophysical inputs such as topography, soil, and land use parameters. The use of open-source data, an open-source model, and an open-source workflow that relies on established software packages results in transparent and reproducible scientific outputs, open to verification and adaptation by the community. The workflow substantially reduces model configuration time for new studies and paves the way for more and stronger scientific contributions in the long term, as it lets the modeler focus on science instead of setup.
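The division of labor described here can be pictured as a small pipeline in which model-agnostic products feed model-specific converters. The Python skeleton below is our own illustrative sketch of that structure; the function names, directory layout, and stub bodies are assumptions, not the workflow's actual modules.

```python
# Illustrative skeleton of the model-agnostic / model-specific split.
# Names and layout are our shorthand, not the workflow's actual API.
from pathlib import Path

def delineate_domain(outlet_lat: float, outlet_lon: float, out_dir: Path) -> Path:
    """Model-agnostic: derive the basin delineation for a chosen outlet."""
    out_dir.mkdir(parents=True, exist_ok=True)
    return out_dir / "basin.shp"  # a real step would call a delineation tool here

def preprocess_forcing(domain: Path, out_dir: Path) -> Path:
    """Model-agnostic: subset and regrid global climatic forcing to the domain."""
    out_dir.mkdir(parents=True, exist_ok=True)
    return out_dir / "forcing.nc"

def to_summa_inputs(domain: Path, forcing: Path, out_dir: Path) -> Path:
    """Model-specific: convert preprocessed data into SUMMA's input formats."""
    out_dir.mkdir(parents=True, exist_ok=True)
    return out_dir / "attributes.nc"

def write_run_script(inputs: Path, out_dir: Path) -> Path:
    """Model-specific: generate the script that launches the SUMMA run."""
    out_dir.mkdir(parents=True, exist_ok=True)
    script = out_dir / "run_summa.sh"
    script.write_text("summa.exe -m fileManager.txt\n")
    return script

if __name__ == "__main__":
    work = Path("summa_setup")
    domain = delineate_domain(51.1, -115.6, work / "domain")  # example outlet
    forcing = preprocess_forcing(domain, work / "forcing")
    inputs = to_summa_inputs(domain, forcing, work / "settings")
    print(write_run_script(inputs, work / "run"))
```

The point of the structure is that only the last two steps need to change to set up a different model from the same model-agnostic products.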


2018
Vol 80 (6), pp. 457-461
Author(s): Carlos A. Morales-Ramirez, Pearlyn Y. Pang

Open-source data are information provided free of charge online. Such data are gaining popularity in science research, especially for modeling species distributions. MaxEnt is open-source software that models species distributions using presence-only data and environmental variables. These variables can also be found online, generally for free. Using all of these open-source data and tools makes species distribution modeling (SDM) more accessible. With the rapid changes our planet is undergoing, SDM helps us understand future habitat suitability for species. With increasing interest in biogeographic research, SDM has also grown for marine species, which were previously not commonly the subject of such modeling. Here we provide examples of where to obtain the data and how the modeling can be performed and taught.
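MaxEnt itself ships as a standalone application, but the presence-background structure of this kind of modeling is easy to demonstrate in code. The Python sketch below uses a regularized logistic regression on synthetic data as a rough stand-in for MaxEnt; all variable names and values are our illustrative assumptions, not the authors' method.

```python
# Rough sketch of presence-background species distribution modeling.
# A regularized logistic regression stands in for MaxEnt; data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Environmental covariates at presence points (e.g., SST in C, depth in m).
presence = rng.normal(loc=[18.0, -50.0], scale=[2.0, 20.0], size=(200, 2))
# Background points sampled across the study region.
background = rng.uniform(low=[0.0, -200.0], high=[30.0, 0.0], size=(2000, 2))

X = np.vstack([presence, background])
y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

model = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)

# Relative habitat suitability for two new locations.
grid = np.array([[17.5, -45.0], [5.0, -150.0]])
print(model.predict_proba(grid)[:, 1])
```

In practice the presence records would come from open biodiversity databases and the covariates from free online environmental layers, which is exactly what makes the approach accessible for teaching.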


2018
Vol 231, pp. 1100-1108
Author(s): Alaa Alhamwi, Wided Medjroubi, Thomas Vogt, Carsten Agert

Aerospace
2020
Vol 7 (11), pp. 158
Author(s): Andrew Weinert

As unmanned aerial systems (UASs) increasingly integrate into the US national airspace system, there is a growing need to characterize how commercial and recreational UASs may encounter each other. To inform the development and evaluation of safety-critical technologies, we demonstrate a methodology to analytically calculate all potential relative geometries between different UAS operations performing inspection missions. The method builds on a previously demonstrated technique that leverages open-source geospatial information to generate representative unmanned aircraft trajectories. Using open-source data and parallel processing techniques, we performed trillions of calculations to estimate the relative horizontal distance between geospatial points across sixteen locations.
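The core of such an analysis is an all-pairs horizontal distance computation over large sets of geospatial points. The Python sketch below shows a vectorized haversine version on synthetic coordinates; the formula choice, chunking comment, and all names are our illustrative assumptions rather than the paper's exact implementation.

```python
# Sketch of the core pairwise computation: horizontal great-circle distance
# between two sets of geospatial points, vectorized with NumPy.
import numpy as np

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters; inputs in degrees, broadcastable."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

rng = np.random.default_rng(0)
# Synthetic trajectory points for two operations over one example region.
ownship = rng.uniform([42.0, -71.5], [42.5, -71.0], size=(1_000, 2))
intruder = rng.uniform([42.0, -71.5], [42.5, -71.0], size=(1_000, 2))

# All-pairs distances via broadcasting (a 1,000 x 1,000 block here; trillions
# of pairs would be split into such blocks across parallel workers).
d = haversine_m(ownship[:, 0:1], ownship[:, 1:2],
                intruder[None, :, 0], intruder[None, :, 1])
print(d.shape, d.min(), d.max())
```

Blocking the pair space this way keeps each worker's memory bounded while the blocks themselves are embarrassingly parallel, which is what makes a trillion-pair computation tractable.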

