A Community Perspective on Research Software in the Hydrological Sciences 

Author(s):  
Robert Reinecke ◽  
Tim Trautmann ◽  
Thorsten Wagener ◽  
Katja Schüler

Software development has become an integral part of the earth system sciences as models and data processing grow more sophisticated. Paradoxically, it also poses a threat to scientific progress, because the pillar of science, reproducibility, is seldom achieved: software code tends to be poorly written and documented, or not shared at all, and proper software licenses are rarely attached. This is especially worrisome because scientific results can have controversial implications for stakeholders and policymakers and may influence public opinion for a long time.

In recent years, progress towards open science has led more publishers to demand access to data and source code alongside peer-reviewed manuscripts. Still, recent studies find that results in hydrology can rarely be reproduced.

In this talk, we present first results of a poll conducted in spring 2021 among the hydrological science community, in which we investigate the causes of this lack of reproducibility. We take a peek behind the curtain and unveil how the community develops and maintains complex code and what that entails for reproducibility. Our survey covers background knowledge, community opinion, and behavioural practices regarding reproducible software development.

We postulate that this lack of reproducibility may be rooted in insufficient reward within the scientific community, insecurity about proper licensing of software and other parts of the research compendium, and scientists' unawareness of how to make software available in a way that allows proper attribution of their work. We also question putative causes such as unclear guidelines at research institutions, or software that has been developed over decades by cohorts of researchers without a proper software engineering process and transparent licensing.

To this end, we also summarize solutions, such as adopting modern project management methods from the computer engineering community, that would eventually reduce costs while increasing the reproducibility of scientific research.
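One concrete, low-cost step toward the attribution problem discussed above is shipping citation metadata alongside the code. A minimal sketch of a `CITATION.cff` file (all names, versions, and URLs here are hypothetical placeholders, not taken from the talk):

```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "Example Hydrological Model"
authors:
  - family-names: Doe
    given-names: Jane
version: 1.0.0
license: MIT
repository-code: "https://example.org/example-model"
```

A file like this, placed in the repository root, lets code-hosting platforms and citation tools generate a proper reference automatically, addressing both the licensing and the attribution gaps in one step.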

2015 ◽  
Vol 38 (1) ◽  
pp. 107-111
Author(s):  
Ljupche Kochoski ◽  
Zoran Filipov ◽  
Ilcho Joshevski ◽  
Stevche Ilievski ◽  
Filip Davkov

Abstract Science has been searching for a long time for a reliable method for controlling the sex of mammalian offspring. Recently, the application of specific modern cellular methodologies has led to the development of a flow cytometric system capable of differentiating and separating living X- and Y-chromosome-bearing sperm cells in amounts suitable for AI and, therefore, for the commercialization of this sexing technology. The aim of this work was to present the first results of bovine AI with sex-sorted semen in heifers, performed for the first time in Macedonia. Insemination with sex-sorted cryopreserved semen (2×10⁶ spermatozoa per dose) imported from the USA was done at two dairy farms in ZK Pelagonija. In total, 74 heifers (Holstein Friesian) were inseminated. Inseminations were carried out at a fixed time following a modified OvSynch protocol. During the insemination, the sperm was deposited into the uterine horn ipsilateral to the ovary where a follicle larger than 1.6 cm was detected by means of transrectal ultrasound examination. Pregnancy was checked by ultrasound on day 30 after the insemination. Overall, the average pregnancy rate on both farms was 43.24% (40.54% and 45.95% for farm 1 and farm 2, respectively). All pregnant heifers delivered their calves following a normal gestation length (274.3 days on average), and of the 32 born calves, 30 (93.75%) were female. In conclusion, since the first results from inseminations with sex-sorted semen in dairy heifers in Macedonia are very promising, the introduction of this technique may bring much benefit to the local dairy sector. The average pregnancy rate appears similar to results obtained following 'regular' inseminations, notwithstanding the relatively low number of spermatozoa per insemination dose. Due to the latter, however, we recommend that inseminations only be carried out by experienced technicians following a TAI protocol and ultrasound examinations of the ovaries prior to insemination.
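The reported rates are internally consistent, as a quick arithmetic check shows. This sketch assumes the 74 heifers were split evenly, 37 per farm, which is the only split consistent with the two per-farm percentages quoted in the abstract:

```python
# Reported figures from the trial: 74 inseminated heifers across two farms.
farm1, farm2 = 37, 37                # assumption: 37 heifers per farm
pregnant1 = round(farm1 * 0.4054)    # 15 pregnancies on farm 1 (40.54%)
pregnant2 = round(farm2 * 0.4595)    # 17 pregnancies on farm 2 (45.95%)
total_pregnant = pregnant1 + pregnant2

print(total_pregnant)                       # 32 pregnancies (and 32 calves born)
print(round(100 * total_pregnant / 74, 2))  # 43.24 -> overall pregnancy rate in %
print(round(100 * 30 / total_pregnant, 2))  # 93.75 -> share of female calves in %
```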


2021 ◽  
Author(s):  
Gisela Govaart ◽  
Simon M. Hofmann ◽  
Evelyn Medawar

Ever-increasing anthropogenic greenhouse gas emissions narrow the timeframe for humanity to mitigate the climate crisis. Scientific research activities are resource demanding and, consequently, contribute to climate change; at the same time, scientists have a central role in advancing knowledge, including on climate-related topics. In this opinion piece, we discuss (1) how open science, adopted on an individual as well as on a systemic level, can contribute to making research more environmentally friendly, and (2) how open science practices can make research activities more efficient and thereby foster scientific progress and solutions to the climate crisis. While many building blocks are already at hand, systemic changes are necessary in order to create academic environments that support open science practices and encourage scientists from all fields to become more carbon-conscious, ultimately contributing to a sustainable future.


2021 ◽  
Author(s):  
Bernadette Fritzsch ◽  
Daniel Nüst

Open Science has established itself as a movement across all scientific disciplines in recent years. It supports good practices in science and research that lead to more robust, comprehensible, and reusable results. The aim is to improve the transparency and quality of scientific results so that more trust is achieved, both within the sciences themselves and in society. Transparency requires that uncertainties and assumptions are made explicit and disclosed openly.
Currently, the Open Science movement is largely driven by grassroots initiatives and small-scale projects. We discuss some examples that have taken on different facets of the topic:

- The software developed and used in the research process plays an increasingly important role. The Research Software Engineer (RSE) communities have therefore organized themselves in national and international initiatives to increase the quality of research software.
- Evaluating the reproducibility of scientific articles as part of peer review requires proper credit and incentives for both authors and specialised reviewers, who spend extra effort to facilitate workflow execution. The Reproducible AGILE initiative has established a reproducibility review at a major community conference in GIScience.
- Technological advances for more reproducible scholarly communication beyond PDFs, such as containerisation, exist but are often inaccessible to domain experts who are not programmers. Targeting geoscience and geography, the project Opening Reproducible Research (o2r) develops infrastructure to support the publication of research compendia, which capture data, software (including the execution environment), text, and interactive figures and maps.

At the core of scientific work lie replicability and reproducibility. Even if different scientific communities use these terms differently, the recognition that these aspects need more attention is widely shared, and individual communities can learn a lot from each other. Networking is therefore of great importance. The newly founded German Reproducibility Network (GRN) wants to be a platform for such networking and targets all of the above initiatives. The GRN is embedded in a growing network of similar initiatives, e.g. in the UK, Switzerland, and Australia. Its goals include:

- supporting local open science groups,
- connecting local or topic-centered initiatives to exchange experiences,
- winning institutions over to the goals of Open Science, and
- cultivating contacts with funding organizations, publishers, and other actors in the scientific landscape.

In particular, the GRN aims to promote the dissemination of best practices through various formats of further education, in order to sensitize early career researchers in particular to the topic. By providing a platform for networking, local and domain-specific groups should be able to learn from one another, strengthen one another, and shape policies at a local level. We present the GRN in order to reach the existing local initiatives and to win them as members of the GRN or of sibling networks in other countries.
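To make the containerisation point above concrete: a research compendium can pin its execution environment in a few lines. The following Dockerfile is a minimal, hypothetical sketch (base image, file names, and the use of `renv` are illustrative assumptions, not part of the o2r infrastructure):

```dockerfile
# Pin a specific base image so the environment can be reconstructed later.
FROM rocker/r-ver:4.2.2

# Copy the compendium: data, analysis scripts, and manuscript sources.
COPY . /compendium
WORKDIR /compendium

# Restore the exact package versions recorded in the compendium's lockfile.
RUN Rscript -e 'install.packages("renv"); renv::restore()'

# Running the container re-executes the full analysis.
CMD ["Rscript", "analysis.R"]
```

The point of such a sketch is that a reviewer or reader needs only a container runtime, not a manually reconstructed software stack, to re-run the analysis.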


2019 ◽  
Author(s):  
Martine G. de Vos ◽  
Wilco Hazeleger ◽  
Driss Bari ◽  
Jorg Behrens ◽  
Sofiane Bendoukha ◽  
...  

Abstract. The need for open science has been recognized by the meteorology and climate science communities. However, while these domains are mature in their use of digital technologies, they lag behind when it comes to implementing open science methodologies. In a session on Weather and Climate Science in the Digital Era at the 14th IEEE International eScience Conference, domain specialists and data and computer scientists discussed the road towards open weather and climate science. The studies presented in the conference session showed the added value of shared data, software, and platforms through, for instance, the combination of data sets from disparate sources, increased accuracy and skill of simulations and forecasts at local scales, and improved consistency of data products. We observed that sharing data and code is important but not sufficient to achieve open weather and climate science, and that there are important issues to address. At the level of technology, applying the FAIR principles to many datasets used in weather and climate science remains a challenge due to their origin, scalability, or legal barriers. Furthermore, the complexity of current software platforms limits collaboration between researchers and the optimal use of open science tools and methods. The main challenges we observed, however, were non-technical and affect the system of science as a whole. There is a need for new roles and responsibilities at the interface of science and digital technology, e.g. data stewards and research software engineers. This requires the personnel portfolio of academic institutions to become more diverse and, in addition, a broader consideration of the impact of academic work beyond publishing and teaching. Moreover, new policies for open weather and climate science should be developed in an inclusive way to engage all stakeholders, including non-academic parties such as meteorological institutions.
We acknowledge that open weather and climate science requires effort to change, but the benefits are large. As can already be observed from the studies presented at the conference, it leads to much faster progress in understanding the world.


2002 ◽  
Vol 1 (1) ◽  
pp. 81-94
Author(s):  
Manoj Tharian

This paper presents an overview of the Rational Unified Process. The Rational Unified Process is a software engineering process, delivered through a web-enabled, searchable knowledge base. The process enhances team productivity and delivers software best practices via guidelines, templates, and tool mentors for all critical software lifecycle activities. The knowledge base allows development teams to gain the full benefit of the industry-standard Unified Modeling Language (UML). The Rational Unified Process provides a disciplined approach to assigning tasks and responsibilities within a development organization. Its goal is to ensure the production of high-quality software that meets the needs of its end users, within a predictable schedule and budget. [11,13] The Rational Unified Process is a process product, developed and maintained by Rational Software. The development team for the Rational Unified Process works closely with customers and partners to ensure that the process is continuously updated and improved upon to reflect recent experiences and evolving, proven best practices. The Rational Unified Process is also a guide for how to use the Unified Modeling Language (UML) effectively. The UML is an industry-standard language that allows us to clearly communicate requirements, architectures, and designs. It was originally created by Rational Software and is now maintained by the standards organization Object Management Group (OMG). [4] The Rational Unified Process captures many of the best practices in modern software development in a form that is suitable for a wide range of projects and organizations. Deploying these best practices, using the Rational Unified Process as your guide, offers development teams a number of key advantages.
In the next section, we describe the six fundamental best practices of the Rational Unified Process. The Rational Unified Process describes how to effectively deploy commercially proven approaches to software development to software development teams. These are called "best practices" not so much because their value can be precisely quantified, but because they are observed to be in common use in industry by successful organizations.


Author(s):  
Valerio Fernandes del Maschi ◽  
Luciano S. Souza ◽  
Mauro de Mesquita Spínola ◽  
Wilson Vendramel ◽  
Ivanir Costa ◽  
...  

Quality in software projects is related to deliverables that are fit for use and meet their objectives. Brazilian software development organizations, especially small and medium-sized ones, need to demonstrate to prospective customers that they have a sufficient initial understanding of the business problem. This chapter aims to present the methodology, strategy, main phases, and procedures adopted, as well as the results obtained, by a small software development organization in the implementation of a customized software engineering process and of a tool supporting that process in the period from 2004 to 2006, based on the Rational Unified Process (RUP) and the Microsoft Solutions Framework (MSF).


2021 ◽  
Author(s):  
Arkadiusz Liber

This paper presents the results of the author's research on the design of hidden communication algorithms employed in the context of global exchange services. The solutions proposed enable communication between trading participants without the use of such traditional communication routes as email, telephone, instant messaging, discussion forums, etc. The solutions described are based on the modification of entries in the exchange tables of orders and transactions. Through modification of the entries associated with share buy and sell orders, a secret channel can be constructed through which hidden messages can be sent. Such messages could, for example, be used to manipulate stock prices by an organized group of people. The proposed solutions can be classified as steganographic methods in which the message carrier is a stock transaction or stock order table, and a message is embedded by means of algorithmic modification of buy and sell records. Also presented are specific proposals for static, dynamic, and mixed static-dynamic solutions based on the results of the author's research. In the static methods group, an imperceptible communication channel is formed through a series of asynchronous modifications that create a complete, readable message that remains present for a relatively long time. In the dynamic methods, the embedded message is synchronized in time, forming a sequence of events that compose the statements. The third group of methods presented, the mixed methods, uses both static and dynamic techniques to construct hidden messages.
In particular, the method of extreme orders (MEO), mono-table method (MTM), multi-table method (MUTM), method of price-indexed vectors (MPIV), method of quantity-indexed vectors (MQIV), clustered order method (COM), distributed order method (DOM), position-encoded method (PEM), method with quantity coding (MQC), method with error correction (MEC), method limited to buy orders (MLBO), method limited to sell orders (MLSO), and self-synchronizing method (SSM) are presented. The solutions presented in this work can be applied in practically any publicly available stock trading system in which order tables are available. The algorithms presented in the paper were implemented and verified on a real trading service, and the research software used was implemented based on the API provided by the brokerage office.
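The general idea of an order-table covert channel can be illustrated with a deliberately simple parity scheme: each order quantity is nudged by at most one share so that its parity carries one message bit. This is a generic textbook-style sketch, not one of the paper's specific methods (MEO, MTM, etc.), and all order sizes are invented:

```python
# Illustrative sketch: hide a message in the parity of order quantities.
# NOT one of the paper's methods; quantities below are invented examples.

def embed(message: str, quantities: list[int]) -> list[int]:
    """Adjust each order quantity so its parity encodes one message bit."""
    bits = [int(b) for ch in message.encode() for b in f"{ch:08b}"]
    out = list(quantities)
    for i, bit in enumerate(bits):
        if out[i] % 2 != bit:    # nudge the quantity by one share to fix parity
            out[i] += 1
    return out

def extract(quantities: list[int], n_chars: int) -> str:
    """Read the parities back and regroup them into bytes."""
    bits = [q % 2 for q in quantities[: n_chars * 8]]
    data = bytes(
        int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)
    )
    return data.decode()

# A covert two-character message needs 16 orders in the table.
orders = [100, 250, 321, 80, 415, 62, 98, 1000,
          77, 240, 130, 55, 610, 402, 39, 500]
stego = embed("hi", orders)
print(extract(stego, 2))  # hi
```

The key property, shared with the paper's methods, is that the modified table still looks like ordinary trading activity: each quantity changes by at most one share, so the channel is imperceptible to a casual observer of the order book.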


Author(s):  
Tom Güldemann ◽  
Harald Hammarström

Taking up Diamond’s (1999) geographical axis hypothesis regarding the different population histories of continental areas, Güldemann (2008, 2010) proposed that macro-areal aggregations of linguistic features are influenced by geographical factors. This chapter explores this idea by extending it to the whole world in testing whether the way linguistic features assemble over long time spans and large space is influenced by what we call “latitude spread potential” and “longitude spread constraint.” Regarding the former, the authors argue in particular that contact-induced feature distributions as well as genealogically defined language groups with a sufficient geographical extension tend to have a latitudinal orientation. Regarding the latter, the authors provide first results suggesting that linguistic diversity within language families tends to be higher along longitude axes. If replicated by more extensive and diverse testing, the authors’ findings promise to become important ingredients for a comprehensive theory of human history across space and time within linguistics and beyond.


2018 ◽  
Vol 104 (1) ◽  
pp. 59-69
Author(s):  
Anke Weber

The tomb of Ramesses III (KV 11) in the Valley of the Kings is one of the archaeological sites of ancient Egypt that has received very little attention from the scientific community. The tomb was open to the public for a long time and is in danger of quick deterioration. The site was closed from August 2016 until October 2017 for the installation of new walkways, glass panels, and an improved lighting system. The Ramesses III (KV 11) Publication and Conservation Project now aims to record, document, and preserve the entire tomb. This article is a first report on the planned publication and conservation of the tomb of Ramesses III (KV 11) in the Valley of the Kings. Like so many other tombs in the wadi, it presents the astonishing case of a tomb that has been known for a long time but was never thoroughly studied. In the following, we present the research aims of the newly formed Ramesses III (KV 11) Publication and Conservation Project as well as the preparatory work that has been undertaken by our team members. All observations and notes were made over the last six years during short campaigns, partly within the framework of a previous research project, which constitutes the basis of preliminary work in KV 11. The article focuses on the historical background of the tomb, its research history, including former investigators and concessions, supposed causes of destruction, suggestions for preservation, and procedures for research and documentation. We present the main problems we have to deal with at the outset of the project and describe the methods we propose to adopt. Further annual reports on the progress and first results of our work to preserve this important site of Egypt's cultural heritage will follow in due course.


2019 ◽  
Vol 46 (1) ◽  
pp. 41-52 ◽  
Author(s):  
Yimei Zhu

Data sharing can be defined as the release of research data that can be used by others. With the recent open-science movement, there has been a call for free access to data, tools, and methods in academia. In recent years, subject-based and institutional repositories and data centres have emerged alongside online publishing. Many scientific records, including published articles and data, have been made available via new platforms. In the United Kingdom, most major research funders have a data policy and require researchers to include a 'data-sharing plan' when applying for funding. However, there are a number of barriers to the full-scale adoption of data sharing. Those barriers are not only technical, but also psychological and social. A survey was conducted with over 1800 UK-based academics to explore the extent of support for data sharing and the characteristics and factors associated with data-sharing practice. It found that while most academics recognised the importance of sharing research data, most of them had never shared or reused research data. There were differences in the extent of data sharing across gender, academic discipline, age, and seniority. It also found that awareness of the Research Councils UK (RCUK) Open-Access (OA) policy, experience of Gold and Green OA publishing, attitudes towards the importance of data sharing, and experience of using secondary data were associated with the practice of data sharing. A small group of researchers used social media such as Twitter, blogs, and Facebook to promote the research data they had shared online. Our findings contribute to the knowledge and understanding of open science and offer recommendations to academic institutions, journals, and funding agencies.

