Evolution of the Linux Credits file: Methodological challenges and reference data for Open Source research (originally published in Volume 9, Number 6, June 2004)

First Monday ◽  
2005 ◽  
Author(s):  
Ilkka Tuomi

This paper presents time–series data that can be extracted from the Linux Credits files and discusses methodological challenges of automatic extraction of research data from open source files. The extracted data is used to describe the geographical expansion of the core Linux developer community. The paper also comments on attempts to use the Linux Credits data to derive policy recommendations for open source software.
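
The CREDITS file in the kernel source tree is a plain-text list of records separated by blank lines, with tagged fields such as N: (name), E: (e-mail), D: (description of the contribution) and S: (postal address, whose last line is usually the country). As a hedged illustration of the kind of automatic extraction the paper discusses, the Python sketch below parses such a file and counts developers per country; the "last address line is the country" heuristic and the file path are assumptions for the example, not the author's actual extraction code.

    import re
    from collections import Counter

    def parse_credits(path):
        """Parse a Linux CREDITS file into a list of per-developer records.

        Records are separated by blank lines; each line starts with a
        one-letter tag such as N:, E:, W:, P:, D: or S:.
        """
        developers, current = [], {}
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                line = line.rstrip("\n")
                if not line.strip():
                    if current:
                        developers.append(current)
                        current = {}
                    continue
                match = re.match(r"^([A-Z]):\s?(.*)$", line)
                if match:
                    tag, value = match.groups()
                    current.setdefault(tag, []).append(value.strip())
        if current:
            developers.append(current)
        return developers

    def country_counts(developers):
        """Count developers per country, taking the last S: line as the country.

        This is a heuristic assumption: some entries have no address
        or end with a city rather than a country.
        """
        counts = Counter()
        for dev in developers:
            address = dev.get("S", [])
            if address:
                counts[address[-1]] += 1
        return counts

    if __name__ == "__main__":
        devs = parse_credits("CREDITS")  # path to a kernel CREDITS file
        print(len(devs), "developers listed")
        for country, n in country_counts(devs).most_common(10):
            print(f"{country:30s} {n}")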

2020 ◽  
Vol 52 (3) ◽  
pp. 1244-1253 ◽  
Author(s):  
Diederick C. Niehorster ◽  
Roy S. Hessels ◽  
Jeroen S. Benjamins

Abstract We present GlassesViewer, open-source software for viewing and analyzing eye-tracking data of the Tobii Pro Glasses 2 head-mounted eye tracker, as well as the scene and eye videos and other data streams (pupil size, gyroscope, accelerometer, and TTL input) that this headset can record. The software, written in MATLAB, provides the following functionality: (1) a graphical interface for navigating the study and recording structure produced by the Tobii Glasses 2; (2) functionality to unpack, parse, and synchronize the various data and video streams comprising a Glasses 2 recording; and (3) a graphical interface for viewing the Glasses 2’s gaze direction, pupil size, gyroscope and accelerometer time-series data, along with the recorded scene and eye camera videos. In this latter interface, segments of data can furthermore be labeled through user-provided event classification algorithms or by means of manual annotation. Lastly, the toolbox provides integration with the GazeCode tool by Benjamins et al. (2018), enabling a completely open-source workflow for analyzing Tobii Pro Glasses 2 recordings.
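
GlassesViewer itself is written in MATLAB; purely as an illustration of the synchronization step described above (aligning gaze samples recorded at one rate with scene-video frames recorded at another), here is a small Python sketch based on nearest-timestamp matching. The variable names, sampling rates, and the matching rule are illustrative assumptions, not the toolbox's API.

    import numpy as np

    def match_gaze_to_frames(gaze_ts, frame_ts):
        """For each video frame timestamp, return the index of the gaze sample
        whose timestamp is closest. Both arrays are assumed to be sorted and
        expressed in the same clock and unit (e.g. seconds since recording start)."""
        gaze_ts = np.asarray(gaze_ts)
        frame_ts = np.asarray(frame_ts)
        # Position of each frame timestamp within the sorted gaze timestamps.
        idx = np.searchsorted(gaze_ts, frame_ts)
        idx = np.clip(idx, 1, len(gaze_ts) - 1)
        # Choose the nearer of the two neighbouring gaze samples.
        left, right = gaze_ts[idx - 1], gaze_ts[idx]
        idx -= (frame_ts - left) < (right - frame_ts)
        return idx

    # Example: a 50 Hz gaze stream matched against a 25 fps scene video.
    gaze_ts = np.arange(0.0, 10.0, 0.02)
    frame_ts = np.arange(0.0, 10.0, 0.04)
    print(match_gaze_to_frames(gaze_ts, frame_ts)[:5])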


2016 ◽  
Vol 2016 ◽  
pp. 1-13
Author(s):  
Munish Saini ◽  
Sandeep Mehmi ◽  
Kuljit Kaur Chahal

Source code management systems (such as Concurrent Versions System (CVS), Subversion, and git) record changes to the code repositories of open source software projects. This study explores a fuzzy data mining algorithm for time series data to generate association rules for evaluating the trend and regularity in the evolution of an open source software project. A fuzzy data mining algorithm for time series data was chosen because of the stochastic nature of the open source software development process. The commit activity of an open source project indicates the activeness of its development community, and an active development community is a strong contributor to the success of an open source project. Therefore, commit activity analysis, together with trend and regularity analysis of that activity, acts as an important indicator for project managers and analysts regarding the future evolutionary prospects of the project.
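
The study's exact algorithm is not reproduced here, but the general idea of fuzzy association rules over a commit-count time series can be sketched as follows: fuzzify each period's commit count into overlapping linguistic terms (Low/Medium/High) and score rules such as "High this month => High next month" by fuzzy support and confidence. The membership functions, thresholds, and data in the Python sketch below are illustrative assumptions.

    import numpy as np

    def triangular(x, a, b, c):
        """Triangular membership function peaking at b."""
        return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                                  (c - x) / (c - b + 1e-9)), 0.0, 1.0)

    def fuzzify(commits):
        """Map monthly commit counts to Low/Medium/High membership degrees."""
        lo, hi = commits.min(), commits.max()
        mid = (lo + hi) / 2.0
        return {
            "Low": triangular(commits, lo - 1, lo, mid),
            "Medium": triangular(commits, lo, mid, hi),
            "High": triangular(commits, mid, hi, hi + 1),
        }

    def rule_support_confidence(antecedent, consequent):
        """Fuzzy support/confidence for 'antecedent at t => consequent at t+1'."""
        a, c = antecedent[:-1], consequent[1:]
        joint = np.minimum(a, c).sum()
        support = joint / len(a)
        confidence = joint / (a.sum() + 1e-9)
        return support, confidence

    # Example: monthly commit counts of a hypothetical project.
    commits = np.array([12, 30, 45, 40, 55, 60, 20, 25, 70, 65, 50, 48], float)
    terms = fuzzify(commits)
    for ante in terms:
        for cons in terms:
            s, c = rule_support_confidence(terms[ante], terms[cons])
            if c > 0.5:
                print(f"{ante}(t) => {cons}(t+1): support={s:.2f}, confidence={c:.2f}")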


2017 ◽  
Vol 9 (1) ◽  
pp. 103
Author(s):  
Gumgum Darmawan ◽  
Budhi Handoko ◽  
Zulhanif Zulhanif

Rainfall is time series data with a seasonal pattern, usually of period 12. The seasonal pattern of rainfall itself often changes. In this paper, the rainfall pattern is identified by periodogram analysis, using time series data from a city in West Java province. The analysis shows that the rainfall pattern has changed. For the computation, we use a macro written in the open source software R (OSSR).
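
The paper's computations use a macro in R; as an illustration of the periodogram idea in another language, the Python sketch below estimates the dominant seasonal period of a monthly rainfall series with scipy.signal.periodogram. The synthetic data are an assumption for the example, not the West Java series used in the paper.

    import numpy as np
    from scipy.signal import periodogram

    rng = np.random.default_rng(0)

    # Synthetic monthly rainfall (mm) with an annual (period-12) cycle plus noise.
    months = np.arange(240)                       # 20 years of monthly data
    rain = 150 + 80 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 20, months.size)

    # Periodogram with a sampling frequency of 1 observation per month.
    freqs, power = periodogram(rain, fs=1.0, detrend="linear")

    # Ignore the zero frequency and report the dominant period in months.
    dominant = freqs[1:][np.argmax(power[1:])]
    print(f"Dominant period: {1 / dominant:.1f} months")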


2009 ◽  
Vol 51 (4) ◽  
pp. 626-633
Author(s):  
Alban D’Amours

Abstract CANDIDE-R is a large simultaneous macro-economic model which raises estimation difficulties. We avoid the identification problem by assuming that the great number of variables in the model makes it practically impossible for the necessary condition not to be satisfied, and by assuming that the system converges to a solution, which resolves the identification problem. The core of the paper justifies the procedure we adopted to estimate CANDIDE-R. Because of the presence of regional equations and the limited amount of regional data, we are bound to pool cross-section and time series data. We then justify the use of Zellner's approach instead of error components models within the class of regional models built on national premises.
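
Zellner's approach referred to above is the seemingly unrelated regressions (SUR) estimator. As a hedged illustration only (simulated data and the textbook two-step feasible GLS procedure, not the CANDIDE-R implementation), a small two-equation Python sketch looks like this.

    import numpy as np

    rng = np.random.default_rng(1)
    T = 60                                   # time periods pooled for two regions

    # Simulated regressors and disturbances that are correlated across equations.
    X1 = np.column_stack([np.ones(T), rng.normal(size=T)])
    X2 = np.column_stack([np.ones(T), rng.normal(size=T)])
    errors = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=T)
    y1 = X1 @ np.array([1.0, 2.0]) + errors[:, 0]
    y2 = X2 @ np.array([0.5, -1.0]) + errors[:, 1]

    # Step 1: equation-by-equation OLS to estimate the disturbance covariance.
    b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
    b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
    E = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
    Sigma = E.T @ E / T

    # Step 2: feasible GLS on the stacked system with Omega = Sigma kron I_T.
    X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
    y = np.concatenate([y1, y2])
    Omega_inv = np.kron(np.linalg.inv(Sigma), np.eye(T))
    beta_sur = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
    print("SUR estimates (eq1 then eq2):", np.round(beta_sur, 3))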


2011 ◽  
Vol 12 (1) ◽  
pp. 119 ◽  
Author(s):  
Michael Lindner ◽  
Raul Vicente ◽  
Viola Priesemann ◽  
Michael Wibral

2019 ◽  
Author(s):  
Birgit Möller ◽  
Hongmei Chen ◽  
Tino Schmidt ◽  
Axel Zieschank ◽  
Roman Patzak ◽  
...  

Abstract Background and aims: Minirhizotrons are commonly used to study root turnover, which is essential for understanding ecosystem carbon and nutrient cycling. Yet, extracting data from minirhizotron images requires intensive annotation effort. Existing annotation tools often lack flexibility and provide only a subset of the required functionality. To facilitate efficient root annotation in minirhizotrons, we present the user-friendly open source tool rhizoTrak. Methods and results: rhizoTrak builds on TrakEM2 and is publicly available as a Fiji plugin. It uses treelines to represent branching structures in roots and assigns customizable status labels per root segment. rhizoTrak offers configuration options for visualization and various functions for root annotation, mostly accessible via keyboard shortcuts. rhizoTrak allows time-series data import and particularly supports easy handling and annotation of time series images. This is facilitated via explicit temporal links (connectors) between roots, which are automatically generated when copying annotations from one image to the next. rhizoTrak includes automatic consistency checks and guided procedures for resolving conflicts. It facilitates easy data exchange with other software by supporting open data formats. Conclusions: rhizoTrak covers the full range of functions required for user-friendly and efficient annotation of time-series images. Its flexibility and open source nature will foster efficient data acquisition procedures in root studies using minirhizotrons.
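
rhizoTrak itself is a Java-based Fiji plugin; the Python sketch below only illustrates, with invented class names, the connector idea described above: copying the previous image's annotations to the next time point and recording an explicit temporal link for each copied root. It is not the plugin's actual data model.

    from dataclasses import dataclass, field
    from itertools import count

    _ids = count(1)

    @dataclass
    class RootAnnotation:
        """One annotated root in one minirhizotron image (simplified)."""
        image: str                      # image identifier (time point)
        polyline: list                  # list of (x, y) points along the root
        status: str = "LIVING"          # customizable status label
        root_id: int = field(default_factory=lambda: next(_ids))

    @dataclass
    class Connector:
        """Temporal link between the same physical root at two time points."""
        earlier: int                    # root_id in the earlier image
        later: int                      # root_id in the later image

    def copy_annotations_forward(roots, next_image):
        """Copy all annotations of one image to the next time point and create
        a connector for each copy, mirroring the behaviour described above."""
        copies, connectors = [], []
        for root in roots:
            new_root = RootAnnotation(next_image, list(root.polyline), root.status)
            copies.append(new_root)
            connectors.append(Connector(root.root_id, new_root.root_id))
        return copies, connectors

    # Example: propagate two annotated roots from the first to the second image.
    t0 = [RootAnnotation("tube1_t0.png", [(10, 5), (12, 40)]),
          RootAnnotation("tube1_t0.png", [(30, 8), (28, 55)], status="DEAD")]
    t1, links = copy_annotations_forward(t0, "tube1_t1.png")
    print(len(t1), "copied annotations,", len(links), "connectors")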


2015 ◽  
Author(s):  
Andrew MacDonald

PhilDB is an open-source time series database. It supports storage of time series datasets that are dynamic, that is, recording updates to existing values in a log as they occur. Recent open-source systems, such as InfluxDB and OpenTSDB, have been developed to indefinitely store long-period, high-resolution time series data. Unfortunately, they require a large initial installation investment before use because they are designed to operate over a cluster of servers to achieve high-performance writing of static data in real time. In essence, they have a ‘big data’ approach to storage and access. Other open-source projects for handling time series data that don’t take the ‘big data’ approach are also relatively new and are complex or incomplete. None of these systems gracefully handles revision of existing data while tracking the values that changed. Unlike ‘big data’ solutions, PhilDB has been designed for single machine deployment on commodity hardware, reducing the barrier to deployment. PhilDB eases loading of data for the user by utilising an intelligent data write method. It preserves existing values during updates and abstracts the update complexity required to achieve logging of data value changes. PhilDB improves dataset access in two ways: first, fast reads make it practical to select data for analysis; second, simple read methods minimise the effort required to extract data. PhilDB takes a unique approach to meta-data tracking: optional attribute attachment. This facilitates scaling the complexities of storing a wide variety of data. That is, it allows time series data to be loaded as time series instances with minimal initial meta-data, yet additional attributes can be created and attached to differentiate the time series instances as a wider variety of data is needed. PhilDB was written in Python, leveraging existing libraries. This paper describes the general approach, architecture, and philosophy of the PhilDB software.
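
PhilDB's actual write API is not reproduced here; the Python sketch below merely illustrates the underlying idea of a write method that preserves existing values by logging each overwrite, using invented in-memory structures.

    from datetime import datetime, timezone

    class LoggedSeries:
        """Toy store for one time series: current values plus an update log.

        Illustrates the concept of logged updates only; PhilDB's actual
        storage layout and API differ."""

        def __init__(self):
            self.values = {}        # timestamp -> current value
            self.log = []           # (timestamp, old value, new value, written at)

        def write(self, data):
            """Insert new points and log any change to an existing point."""
            now = datetime.now(timezone.utc)
            for ts, value in data.items():
                old = self.values.get(ts)
                if old is not None and old != value:
                    self.log.append((ts, old, value, now))
                self.values[ts] = value

        def read(self):
            """Return the current state of the series; the log retains
            enough information to reconstruct earlier states."""
            return dict(self.values)

    # Example: an initial load followed by a revision of one value.
    s = LoggedSeries()
    s.write({"2015-01-01": 1.2, "2015-01-02": 3.4})
    s.write({"2015-01-02": 3.6})                 # revision is logged, not lost
    print(s.read(), "-", len(s.log), "logged change(s)")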


2017 ◽  
Author(s):  
Trevor Owens

Developing, deploying and maintaining open source software is increasingly a core part of the operations of cultural heritage organizations. From preservation infrastructure, to tools for acquiring digital and digitized content, to platforms that provide access, enhance content, and enable various modes for users to engage with and make use of content, much of the core work of libraries, archives and museums is entangled with software. As a result, cultural heritage organizations of all sizes are increasingly involved in roles as open source software creators, contributors, maintainers, and adopters. Participants in this workshop shared their respective perspectives on institutional roles in this emerging open source ecosystem. Through discussion, participants created drafts of a checklist for establishing FOSS projects, documentation of project sustainability techniques, a model for conceptualizing the role of open source community building activities throughout projects, and an initial model for key institutional roles for projects at different levels of maturity.

