Version Management
Recently Published Documents

Total documents: 138 (five years: 10)
H-index: 13 (five years: 1)

Author(s):  
Karen L Hanson

Scholars are experimenting with increasingly diverse digital technologies to express their research in new ways. Publishers, in turn, are working to support complex, dynamic, born-digital publications that can no longer be represented in print. New forms of scholarship contain enhancements such as embedded media and viewers, data visualizations, different approaches to version management, complex interdependent networks of supporting materials such as software and data, reader-contributed content (annotations, comments), interactive features, and nonlinear forms of navigation. These features can create challenges for the long-term sustainability of the publication: without planning for longevity, the most innovative scholarship today may lose the characteristics that make it unique or become expensive to maintain. These challenges are magnified for preservation services that aim to ensure the publications will be available for future scholars. It is in this context that NYU Libraries initiated a project to bring together preservation services that focus on scholarly content with publishers concerned about the long-term survival of their most innovative publications. By analyzing examples of dynamic and enhanced open access monographs, the preservation services determined what could be preserved at scale using current tools. From this, the team produced a set of guidelines that those involved in creating and publishing content could use to make these new forms of publications more preservable. The project was also an opportunity to start a conversation between preservation services and publishers about ways to collaborate around the shared goal of perpetuating access to unique and often costly publications.


Author(s):  
Aaron Peikert ◽  
Andreas M. Brandmaier

In this tutorial, we describe a workflow to ensure long-term reproducibility of R-based data analyses. The workflow leverages established tools and practices from software engineering. It combines the benefits of various open-source software tools including R Markdown, Git, Make, and Docker, whose interplay ensures seamless integration of version management, dynamic report generation conforming to various journal styles, and full cross-platform and long-term computational reproducibility. The workflow ensures that three primary goals are met: 1) the reporting of statistical results is consistent with the actual statistical results (dynamic report generation), 2) the analysis can be exactly reproduced at a later point in time even if the computing platform or software has changed (computational reproducibility), and 3) changes at any time (during development and post-publication) are tracked, tagged, and documented while earlier versions of both data and code remain accessible. While the research community increasingly recognizes dynamic document generation and version management as tools to ensure reproducibility, we demonstrate with practical examples that these alone are not sufficient to ensure long-term computational reproducibility. Combining containerization, dependency management, version management, and dynamic document generation, the proposed workflow increases scientific productivity by facilitating later reproducibility and reuse of code and data.
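The interplay of the tools named in the abstract can be sketched as a version-pinned container that runs a Make rule to render the R Markdown manuscript. This is an illustrative sketch only; the image tag, package versions, and file names are assumptions, not taken from the paper:

```dockerfile
# Sketch: pin the R environment via a version-tagged rocker image so the
# analysis can be rebuilt identically on any platform, years later.
FROM rocker/verse:4.2.1

# Pin package versions explicitly; remotes::install_version is one way.
RUN R -e "install.packages('remotes'); remotes::install_version('ggplot2', version = '3.4.0')"

COPY . /home/rstudio/project
WORKDIR /home/rstudio/project

# 'make' would run a rule along these lines:
#   manuscript.pdf: manuscript.Rmd data/raw.csv
#           Rscript -e "rmarkdown::render('manuscript.Rmd')"
CMD ["make"]
```

Git then supplies the third ingredient: tagging the commit that produced a submitted manuscript (e.g., `git tag v1.0-submission`) keeps that exact state of code, data, and container recipe retrievable later.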


2021 ◽  
Vol 10 (2) ◽  
pp. 55
Author(s):  
Helen Eriksson ◽  
Lars Harrie

The use of 3D city models is changing from visualization to complex use cases where they act as 3D base maps. This requires links to registers and continuous updating of the city models. Still, most models never change or are recreated instead of updated. This study identifies obstacles to version management of 3D city models and proposes recommendations to overcome them, with a primary focus on the municipal perspective, foremost in the planning and building processes. As part of this study, we investigate whether national building registers can control the version management of 3D city models. A case study based on investigations of standards, interviews, and a review of tools is presented. The study uses an architectural model divided into four layers: data collection, building theme, city model, and application. All layers require changes when implementing a new versioning method: the data collection layer requires restructuring of technical solutions and work processes, the storage of the national building register requires restructuring, versioning capabilities must be propagated to the city model layer, and tools at the application layer must handle temporal information better. Strong incentives for including versioning in 3D city models are essential, as substantial investment is required to implement versioning in all the layers. Only capabilities required by applications should be implemented, as complexity grows with the number of versioning functionalities. One outcome of the study is a recommendation to link 3D city models more closely to building registers. This enables more complex use in, e.g., building permits and 3D cadastres, and authorities can fetch required (versioning) information directly from the city model layer.
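As a concrete illustration of versioning at the city model layer, a building feature could carry a validity interval per version, loosely in the spirit of CityGML 3.0's versioning concepts. All class and field names below are hypothetical sketches, not taken from the paper:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class BuildingVersion:
    """One version of a building feature, valid over a half-open time interval."""
    version_id: str
    geometry_lod: int                      # level of detail of this version's geometry
    valid_from: datetime
    valid_to: Optional[datetime] = None    # None marks the current version

@dataclass
class Building:
    register_id: str                       # link back to the national building register
    versions: List[BuildingVersion] = field(default_factory=list)

    def current(self) -> Optional[BuildingVersion]:
        """Return the open-ended (current) version, if any."""
        return next((v for v in self.versions if v.valid_to is None), None)

    def as_of(self, t: datetime) -> Optional[BuildingVersion]:
        """Return the version that was valid at time t (e.g., for a permit lookup)."""
        for v in self.versions:
            if v.valid_from <= t and (v.valid_to is None or t < v.valid_to):
                return v
        return None
```

A register-driven update would then close the current version's interval and append a new one, so applications such as building permits can query the model's state at any past date rather than only its latest snapshot.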


2020 ◽  
Vol 53 (2) ◽  
pp. 7827-7832
Author(s):  
K. Land ◽  
B. Vogel-Heuser ◽  
A. Gallasch ◽  
M. Sagerer ◽  
D. Förster ◽  
...  

2019 ◽  
Author(s):  
Aaron Peikert ◽  
Andreas Markus Brandmaier



2019 ◽  
Vol 40 (2) ◽  
pp. 93-95
Author(s):  
V.V. Shymanska

The article investigates the essence of agile management, which aims to improve organizational flexibility (agility) in order to achieve competitive advantages. The author outlines the main advantages and disadvantages of agile approaches to management and traces the evolution of management from "Management 1.0" to "Management 3.0".

