Information Visualization for CSV Open Data Files Structure Analysis

Author(s):  
Paulo Carvalho ◽  
Patrik Hitzelberger ◽  
Benoît Otjacques ◽  
Fatma Bouali ◽  
Gilles Venturini

2020 ◽  
Author(s):  
Denis Cousineau

Born-Open Data experiments are encouraged for better open science practices. To be adopted, Born-Open Data practices must be easy to implement. Herein, I introduce a package for E-Prime that automatically saves data files to a GitHub repository. The BornOpenData package for E-Prime works seamlessly and performs the upload as soon as the experiment is finished, so there are no additional steps to perform beyond placing a package call within E-Prime. Because E-Prime files are not standard tab-separated files, I also provide an R function that retrieves the data directly from GitHub into a data frame ready to be analyzed. At this time, there are no standards as to what should constitute an adequate open-access data repository, so I propose a few suggestions that any future Born-Open Data system could follow for easier use by the research community.
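The retrieval step the abstract describes (fetching a tab-separated data file from a GitHub repository into a data frame) can be sketched in Python; the raw-file URL layout and column names below are hypothetical illustrations, not the BornOpenData package's actual repository structure, and real E-Prime exports need extra handling (encoding and header quirks) that the author's R function takes care of.

```python
import csv
import io
from urllib.request import urlopen  # would perform the real fetch; skipped in this offline sketch

# Hypothetical raw-file URL; the package's actual repository layout may differ.
RAW_URL = "https://raw.githubusercontent.com/<user>/<repo>/main/data/subject01.txt"

def parse_eprime_tsv(text):
    """Parse a tab-separated export into a list of row dicts (one per trial)."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

# Offline stand-in for: parse_eprime_tsv(urlopen(RAW_URL).read().decode("utf-8"))
sample = "Subject\tRT\n1\t523\n1\t488\n"
rows = parse_eprime_tsv(sample)
```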


2021 ◽  
Author(s):  
Gordon T Luu ◽  
Itzel Lizama-Chamu ◽  
Catherine S McCaughey ◽  
Laura M Sanchez ◽  
Mingxun Wang

Advances in mass spectrometry instrumentation have led to mass spectrometers with ion mobility separation (IMS) capabilities and dual-source instrumentation, but the current software ecosystem lacks interoperability with downstream data analysis using open-source software and pipelines. Here, we present TIMSCONVERT, a workflow that converts timsTOF fleX MS raw data files to size-conscious mzML and imzML formats with minimal preprocessing, allowing compatibility with downstream data analysis tools. We showcase it with several examples using data acquired across different experiments and acquisition modalities on the timsTOF fleX. Availability and implementation: TIMSCONVERT and its documentation can be found at https://github.com/gtluu/timsconvert and is available as a standalone command line interface, a Nextflow workflow, and online in the Global Natural Products Social (GNPS) platform (https://proteomics2.ucsd.edu/ProteoSAFe/index.jsp?params={%22workflow%22%3A%20%22TIMSCONVERT%22}).
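The mzML target format mentioned above stores each spectrum's m/z and intensity arrays as base64-encoded, optionally zlib-compressed, little-endian 64-bit floats. A minimal round-trip sketch of that binary encoding, independent of TIMSCONVERT's own code:

```python
import base64
import struct
import zlib

def encode_mzml_array(values, compress=True):
    """Pack floats as little-endian 64-bit doubles, optionally zlib-compress,
    then base64-encode, as mzML binaryDataArray elements do."""
    raw = struct.pack("<%dd" % len(values), *values)
    if compress:
        raw = zlib.compress(raw)
    return base64.b64encode(raw).decode("ascii")

def decode_mzml_array(b64, compressed=True):
    """Reverse the encoding back into a list of floats."""
    raw = base64.b64decode(b64)
    if compressed:
        raw = zlib.decompress(raw)
    return list(struct.unpack("<%dd" % (len(raw) // 8), raw))

mz = [100.0, 250.5, 499.9]
roundtrip = decode_mzml_array(encode_mzml_array(mz))
```

Converters like TIMSCONVERT emit many such arrays, so whether compression is applied is one of the main levers behind a "size conscious" output file.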


2015 ◽  
Vol 31 (4) ◽  
pp. 298-305 ◽  
Author(s):  
Jaime A. Teixeira da Silva ◽  
Judit Dobránszki

2016 ◽  
Vol 8 (2) ◽  
pp. 1-20
Author(s):  
Teresa Scassa ◽  
Alexandra Diebel

This paper explores how real-time data are made available as “open data” using municipal transit data as a case study. Many transit authorities in North America and elsewhere have installed technology to gather GPS data in real-time from transit vehicles. These data are in high demand in app developer communities because of their use in communicating predicted, rather than scheduled, transit vehicle arrival times. While many municipalities have chosen to treat real-time GPS data as “open data”, the particular nature of real-time GPS data requires a different mode of access for developers than what is needed for static data files. This, in turn, has created a conflict between the “openness” of the underlying data and the sometimes restrictive terms of use which govern access to the real-time data through transit authority Application Program Interfaces (APIs). This paper explores the implications of these terms of use and considers whether real-time data require a separate standard for openness. While the focus is on the transit data context, the lessons from this area will have broader implications, particularly for open real-time data in the emerging ‘smart cities’ environment.
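A developer consuming such a real-time API typically polls an endpoint and decodes per-vehicle predictions. The JSON shape below is a made-up illustration (real feeds are often GTFS-realtime protobuf, and each transit authority's schema and terms of use differ), meant only to show why access is mediated by an API rather than a static data file:

```python
import json

# Hypothetical response body from a polled endpoint; not any authority's real schema.
body = json.dumps({
    "vehicles": [
        {"id": "bus-42", "route": "7", "eta_seconds": 180},
        {"id": "bus-77", "route": "7", "eta_seconds": 660},
    ]
})

def predicted_arrivals(raw, route):
    """Return predicted arrival times in minutes for one route, soonest first."""
    vehicles = json.loads(raw)["vehicles"]
    return sorted(v["eta_seconds"] / 60 for v in vehicles if v["route"] == route)

etas = predicted_arrivals(body, "7")
```

Because the payload changes every few seconds, the API key, rate limits, and terms of use attached to the polling endpoint effectively govern the data, which is the openness tension the paper examines.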


Hypothesis ◽  
2020 ◽  
Vol 32 (1) ◽  
Author(s):  
Marianne D Burke

Background: Journals in the health sciences increasingly require or recommend that authors deposit the data from their research in open repositories. The rationale for publicly available data is well understood, but many researchers lack the time, knowledge, and skills to do it well, if at all. There are few descriptions in the literature of the pragmatic process a researcher-author undertakes to complete an open data deposit. When my manuscript for a mixed-methods study was accepted by a journal that required shared data as a condition of publication, I proceeded to comply despite uncertainty about the process.
Purpose: This work describes the experience of an information science researcher and first-time data depositor completing an open data deposit. The narrative illustrates the questions encountered and choices made in the process.
Process/Methods: To begin the data deposit process, I found guidance in the accepting journal's policy and its rationale for the shared-data requirement. A checklist of pragmatic steps from an open repository provided a framework used to outline and organize the process. Process steps included organizing data files, preparing documentation, determining rights and licensing, and determining sharing and permissions. Choices and decisions included which data versions to share, how much data to share, repository choice, and file naming. Processes and decisions varied between the quantitative and qualitative data prepared.
Results: Two datasets, each with documentation, were deposited in the Figshare open repository, meeting the journal's policy requirement to deposit sufficient data and documentation to replicate the results reported in the article, as well as the deadline to include a Data Availability Statement with the published article.
Conclusion: This experience illustrates some practical data-sharing issues faced by a librarian author seeking to comply with a journal's data sharing policy requirement for publication of an accepted manuscript. Both novice data depositors and data librarians may find this individual experience useful for their own work and the advice they give to others.
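The organizational steps the abstract lists (inventorying data files, preparing documentation, recording licensing) can be captured in a simple deposit manifest. The generator below is a generic sketch under assumed field names; it is not Figshare's metadata schema or any journal's required format:

```python
import hashlib

def build_manifest(files, license_id="CC-BY-4.0"):
    """Build a deposit manifest: one checksummed entry per data file.
    `files` maps file names to their byte contents; the field names here
    (file, sha256, bytes, license) are illustrative, not a repository standard."""
    entries = []
    for name, content in sorted(files.items()):
        entries.append({
            "file": name,
            "sha256": hashlib.sha256(content).hexdigest(),
            "bytes": len(content),
        })
    return {"license": license_id, "files": entries}

manifest = build_manifest({
    "quantitative_data.csv": b"id,score\n1,42\n",
    "README.txt": b"Codebook and variable descriptions.\n",
})
```

Checksums let a later reader verify that the deposited files match what the Data Availability Statement points to, which supports the replication goal the journal policy states.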


Author(s):  
H. O. Colijn

Many labs today wish to transfer data between their EDS systems and their existing PCs and minicomputers. Our lab has implemented SpectraPlot, a low-cost PC-based system to allow offline examination and plotting of spectra. We adopted this system in order to make more efficient use of our microscopes and EDS consoles, to provide hardcopy output for an older EDS system, and to allow students to access their data after leaving the university. As shown in Fig. 1, we have three EDS systems (one of which is located in another building) which can store data on 8-inch RT-11 floppy disks. We transfer data from these systems to a DEC MINC computer using "SneakerNet", which consists of putting on a pair of sneakers and running down the hall. We then use the Kermit file transfer program to download the data files, with error checking, from the MINC to the PC.
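The error checking mentioned above comes from Kermit's per-packet block check. Its simplest form (block check type 1) folds the two high bits of the byte sum back into a 6-bit value; a sketch of that published formula, assuming the standard type-1 check:

```python
def kermit_check1(data: bytes) -> int:
    """Kermit block check type 1: sum the packet bytes, fold bits 6-7 of the
    sum back in, and keep the low 6 bits. In an actual packet the result is
    offset by 32 so it travels as a printable ASCII character."""
    s = sum(data)
    return (s + ((s & 0xC0) >> 6)) & 0x3F

check = kermit_check1(b"A")
printable = chr(check + 32)  # character that would appear in the packet
```

The receiver recomputes the check over the bytes it got and requests retransmission on a mismatch, which is what made floppy-to-PC downloads over a serial line trustworthy.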


Author(s):  
M. Iwatsuki ◽  
Y. Kokubo ◽  
Y. Harada ◽  
J. Lehman

In recent years, the resolution of the electron microscope has been significantly improved, and atomic-level high-resolution images can now be obtained routinely without any special skill. With this improvement, the structure analysis of organic materials has become one of the interesting targets in the biological and polymer crystal fields. Up to now, X-ray structure analysis has mainly been used for such materials. With this method, however, great effort and a long time are required for specimen preparation because of the need for larger crystals. This method can analyze average crystal structure but is insufficient for interpreting it on the atomic or molecular level. The electron microscopic method for organic materials has not only an advantage in specimen preparation but also the capability of providing various information from extremely small specimen regions, using the strong interaction between electrons and the substance. On the other hand, however, this strong interaction carries a major disadvantage: high radiation damage.

