Open Data Resources for Clean Energy and Water Sectors in India

2019, Vol 39 (06), pp. 300-307
Author(s): Deep Jyoti Francis, Anup Kumar Das

With the wave of digitalisation, institutions across countries are pushing for the creation of open data and for frameworks to govern them. The FAIR Data Principles have encouraged the publishing of open research data so that key stakeholders and practitioners in low- and middle-income countries can meet their developmental goals by putting such data to practical use in problem-solving. Open Data, as part of the Open Science movement, has transformed the transnational regime structure for the governance of critical issues surrounding water and energy. This paper provides a baseline survey of open data initiatives in the areas of water and clean energy across countries in general and in India in particular. Given the multifaceted challenges around the water-energy nexus in India, it is critical to identify open data initiatives and to study their governance at the country level. Since governance requires the participation of various institutions and multiple stakeholders, the research highlights initiatives such as institutional participation and the application of Creative Commons (CC) licensing terms in open data governance for the clean energy and water sectors in India.

2018
Author(s): Benjamin Wood, Rui Müller, Annette Nicole Brown

Objective: In past years, research audit exercises conducted across several fields of study have found a high prevalence of published empirical research that cannot be reproduced using the original dataset and software code (replication files). The failure to reproduce arises either because the original authors refuse to make replication files available or because third-party researchers are unable to produce the published results using the provided files. Both causes create a credibility challenge for empirical research, as the published findings are not verifiable. In recent years, increasing numbers of journals, funders, and academics have embraced research transparency, which should reduce the prevalence of failures to reproduce. This study reports the results of a research audit exercise, known as the push button replication (PBR) project, which tested a sample of studies published in 2014 that use similar empirical methods but span a variety of academic fields.

Methods: To draw our sample of articles, we used the 3ie Impact Evaluation Repository to identify the ten journals that published the most impact evaluations (experimental and quasi-experimental intervention studies) from low- and middle-income countries from 2010 through 2012. This set includes health, economics, and development journals. We then selected all articles in these journals published in 2014 that meet the same inclusion criteria. We developed and piloted a detailed protocol for conducting push button replications and for determining the comparability of the replication findings to the original. To ensure all materials and processes for the PBR project were transparent, we established a project site on the Open Science Framework. We divided the sample of articles across several researchers, who followed the protocol to request data and conduct the replications.

Results: Of the 109 articles in our sample, only 27 are push button replicable, meaning the provided code run on the provided dataset produces comparable findings for the key results in the published article. The authors of 59 of the articles refused to provide replication files. Thirty of these 59 articles were published in journals that had replication file requirements in 2014, meaning these articles are non-compliant with their journal requirements. For the remaining 23 articles, we confirmed that three had proprietary data, we received incomplete replication files for 15, and we found minor differences in the replication results for five. We found open data for only 14 of the articles in our sample.


Author(s): Daniel Noesgaard

The work required to collect, clean and publish biodiversity datasets is significant, and those who do it deserve recognition for their efforts. Researchers publish studies using open biodiversity data available from GBIF—the Global Biodiversity Information Facility—at a rate of about two papers a day. These studies cover areas such as macroecology, evolution, climate change, and invasive alien species, relying on data sharing by hundreds of publishing institutions and the curatorial work of thousands of individual contributors. With more than 90 per cent of these datasets licensed under Creative Commons licenses that require attribution (CC BY and CC BY-NC), data users are required to credit the dataset providers. For GBIF, it is crucial to link these scientific uses to the underlying data as one means of demonstrating the value and impact of open science, while seeking to ensure attribution of individual, organizational and national contributions to the global pool of open data about biodiversity. Every authenticated download of occurrence records from GBIF.org is issued a unique Digital Object Identifier (DOI). Each DOI resolves to a landing page that contains the search parameters used to generate the download, a quantitative map of the underlying datasets that contributed to the download, and a simple citation to be included in works that rely on the data. When used properly by authors and deposited correctly by journals in the article metadata, the DOI citation establishes a direct link between a scientific paper and the underlying data. Crossref—the main DOI Registration Agency for academic literature—exposes such links in Event Data, which can be consumed programmatically to report direct use of individual datasets. GBIF also records these links, permanently preserving the download archives while exposing a citation count on download landing pages that is also summarized on the landing pages of each contributing dataset and publisher. The citation counts can be expanded to produce lists of all papers unambiguously linked to use of specific datasets. In 2018, just 15 per cent of papers based on GBIF-mediated data used DOIs to cite or acknowledge the datasets used in the studies. To promote crediting of data publishers and digital recognition of data sharing, the GBIF Secretariat has been reaching out systematically to authors and publishers since April 2018 whenever a paper fails to include a proper data citation. While publishing lags may hinder immediate effects, preliminary findings suggest that uptake is improving: the number of papers with DOI data citations during the first part of 2019 is up more than 60 per cent compared to 2018. Focusing on the value of linking scientific publications and data, this presentation will explore the potential for establishing automatic linkage through DOI metadata while demonstrating efforts to improve metrics of data use and attribution of data providers through outreach campaigns to authors and journal publishers.
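As a rough illustration of how these Event Data links can be consumed programmatically, the minimal sketch below queries the public Crossref Event Data API for events whose object is a GBIF download DOI and prints the DOIs of the citing works. The download DOI shown is a placeholder rather than a real download, and the query parameters reflect the publicly documented endpoint.

```python
import requests

# Placeholder GBIF download DOI (not a real download).
DOWNLOAD_DOI = "10.15468/dl.example"

# Ask Crossref Event Data for events whose object is this download DOI,
# i.e. scholarly works (subjects) that reference the download.
resp = requests.get(
    "https://api.eventdata.crossref.org/v1/events",
    params={"obj-id": f"https://doi.org/{DOWNLOAD_DOI}", "rows": 100},
    timeout=30,
)
resp.raise_for_status()

events = resp.json().get("message", {}).get("events", [])
for event in events:
    # subj_id holds the citing work; relation_type_id describes the link
    # (e.g. "references" or "cites").
    print(event.get("subj_id"), event.get("relation_type_id"))
```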


2020
Author(s): Kyle Copas

GBIF—the Global Biodiversity Information Facility—and its network of more than 1,500 institutions maintain the world's largest index of biodiversity data (https://www.gbif.org), containing nearly 1.4 billion species occurrence records. This infrastructure offers a model of best practices, both technological and cultural, that other domains may wish to adapt or emulate to ensure that users have free, FAIR and open access to data.

The availability of community-supported data and metadata standards in the biodiversity informatics community, combined with the adoption (in 2014) of open Creative Commons licensing for data shared with GBIF, established the necessary preconditions for the network's recent growth.

But GBIF's development of a data citation system based on the use of DOIs—Digital Object Identifiers—has established an approach for using unique identifiers to create direct links between scientific research and the underlying data on which it depends. The resulting state-of-the-art system tracks uses and reuses of data in research and credits data citations back to individual datasets and publishers, helping to ensure the transparency of biodiversity-related scientific analyses.

In 2015, GBIF began issuing a unique DOI for every data download. This system resolves each download to a landing page containing 1) the taxonomic, geographic, temporal and other search parameters used to generate the download; 2) a quantitative map of the underlying datasets that contributed to the download; and 3) a simple citation to be included in works that rely on the data.

When authors cite these download DOIs, they in effect assert direct links between scientific papers and underlying data. Crossref registers these links through Event Data, enabling GBIF to track citation counts automatically for each download, dataset and publisher. These counts expand to display a bibliography of all research reuses of the data. The system improves the incentives for institutions to share open data by providing quantifiable measures that demonstrate the value and impact of sharing data for others' research.

GBIF is a mature infrastructure that supports a wide pool of researchers, who publish two peer-reviewed journal articles relying on these data every day. That said, the citation-tracking and -crediting system has room for improvement. At present, 21% of papers using GBIF-mediated data provide DOI citations, which represents a 30% increase over 2018. Through outreach to authors and collaboration with journals, GBIF aims to continue this trend.

In addition, members of the GBIF network are seeking to extend citation credits to individuals through tools like Bloodhound Tracker (https://www.bloodhound-tracker.net), using persistent identifiers from ORCID and Wikidata. This approach provides a compelling model for the scientific and scholarly benefits of treating individual data records from specimens as micro- or nanopublications: first-class research objects that advance both FAIR data and open science.
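As a complementary sketch to the Crossref example above, the metadata behind a download landing page (the DOI, the record count, and the filter used) can be retrieved from GBIF's public API. The download key below is a placeholder, and the response field names are assumptions about the public download object rather than a documented contract.

```python
import requests

# Placeholder download key (not a real download).
DOWNLOAD_KEY = "0000000-000000000000000"

resp = requests.get(
    f"https://api.gbif.org/v1/occurrence/download/{DOWNLOAD_KEY}",
    timeout=30,
)
resp.raise_for_status()
download = resp.json()

# Field names are assumed; inspect the JSON to confirm them for a real download.
print("DOI:", download.get("doi"))
print("Records:", download.get("totalRecords"))
print("Filter:", download.get("request", {}).get("predicate"))
```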


Publications, 2021, Vol 9 (3), pp. 31
Author(s): Manh-Toan Ho, Manh-Tung Ho, Quan-Hoang Vuong

This paper introduces a strategy for science communication: Total SciComm, or all-out science communication. We propose that, to maximize outreach and impact, scientists should use different media to communicate different aspects of science, from core ideas to methods. The paper uses the example of a debate surrounding a now-retracted article in Nature, in which open data, preprints, social media, and blogs were used to hold a meaningful scientific conversation. The case embodies the central idea of Total SciComm: the scientific community employs every medium to communicate scientific ideas and engages all scientists in the process.


2021
Author(s): Samir Das, Rida Abou-Haidar, Henri Rabalais, Sonia Denise Lai Wing Sun, Zaliqa Rosli, ...

In January 2016, the Montreal Neurological Institute-Hospital (The Neuro) declared itself an Open Science organization. This vision extends beyond efforts by individual scientists to release individual datasets or software tools, or to build platforms for the free dissemination of such information. It involves multiple stakeholders and an infrastructure that considers governance, ethics, computational resourcing, physical design, workflows, training, education, and intra-institutional reporting structures. The C-BIG repository was built in response as The Neuro's institutional biospecimen and clinical data repository; it collects biospecimens as well as clinical, imaging, and genetic data from patients with neurological disease and from healthy controls. It is aimed at helping scientific investigators, in both academia and industry, advance our understanding of neurological diseases and accelerate the development of treatments. Because many neurological diseases are quite rare, their small patient populations present several challenges to researchers, and overcoming these challenges requires the aggregation of datasets from various projects and locations. The C-BIG repository achieves this goal and stands as a scalable working model for institutions to collect, track, curate, archive, and disseminate multimodal data from patients. In November 2020, a Registered Access layer was made available to the wider research community at https://cbigr-open.loris.ca, and in May 2021 fully open data will be released to complement the Registered Access data. This article outlines many aspects of The Neuro's transition to Open Science by describing the data to be released, C-BIG's full capabilities, and the design aspects that were implemented for effective data sharing.


Author(s): Andrea Bizzego, Giulio Gabrieli, Marc H. Bornstein, Kirby Deater-Deckard, Jennifer E. Lansford, ...

Child Mortality (CM) is a worldwide concern, annually affecting as many as 6.81% of children in low- and middle-income countries (LMIC). We used data from the Multiple Indicator Cluster Surveys (MICS) (N = 275,160) covering 27 LMIC and a machine-learning approach to rank 37 distal causes of CM and identify the top 10 causes in terms of predictive potency. Based on the top 10 causes, we identified households with improved conditions. We retrospectively validated the results by investigating the association, at country level, between variations in CM and variations in the percentage of households with improved conditions between the 2005–2007 and the 2013–2017 administrations of the MICS. A unique contribution of our approach is to identify lesser-known distal causes that likely account for better-known proximal causes: notably, the identified distal causes are preventable and treatable through social, educational, and physical interventions. We demonstrate how machine learning can be used to obtain operational information from big datasets to guide interventions and policy makers.
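The abstract does not name the specific model used, so the following is only a minimal sketch of the general approach, under assumed file and column names: fit a classifier on candidate distal causes of child mortality and rank them by predictive importance, keeping the top 10. A random forest with permutation importance stands in for whatever method the authors actually applied.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical household-level table: 37 candidate distal causes (columns)
# plus a binary child-mortality indicator. File and column names are placeholders.
df = pd.read_csv("mics_households.csv")
X = df.drop(columns=["child_mortality"])   # 37 candidate predictors
y = df["child_mortality"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Rank predictors by permutation importance on held-out data and keep the top 10.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = pd.Series(result.importances_mean, index=X.columns).sort_values(ascending=False)
print(ranking.head(10))
```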


2020, Vol 36 (3), pp. 263-279
Author(s): Isabel Steinhardt

Openness in science and education is increasing in importance within the digital knowledge society. So far, less attention has been paid to teaching Open Science in bachelor's degree programmes or in qualitative methods courses. The aim of this article is therefore to use a seminar example to explore which Open Science practices can be taught in qualitative research and how digital tools can be involved. The seminar focused on the following practices: Open Data practices; the practice of using the free and open-source tool "Collaborative online Interpretation"; and the practice of participating, cooperating, collaborating and contributing through participatory technologies and social networks. To learn Open Science practices, the students were involved in a qualitative research project on the "use of digital technologies for the study and habitus of students". The study shows that Open Data practices are easy to teach, whereas the use of free and open-source tools and participatory technologies for collaboration, participation, cooperation and contribution is more difficult. In addition, a cultural shift would have to take place within German universities to promote Open Science practices in general.


2016, Vol 11 (2), pp. 61-64
Author(s): Kenneth D. Ward

Treating tobacco dependence is paramount for global tobacco control efforts but is often overshadowed by other policy priorities. As stated by Jha (2009), “cessation by current smokers is the only practical way to avoid a substantial proportion of tobacco deaths worldwide before 2050.” Its importance is codified in Article 14 of the Framework Convention on Tobacco Control (FCTC) and in the WHO's MPOWER package of effective country-level policies. Unfortunately, only 15% of the world's population has access to appropriate cessation support (WHO, 2015). Moreover, parties to the FCTC have implemented only 51% of the indicators within Article 14, on average, which is far lower than for many other articles (WHO, 2014). Further, commenting on the “O” measure (Offer help to quit tobacco use) in the MPOWER acronym, WHO recently concluded, “while there has been improvement in implementing comprehensive tobacco cessation services, this is nonetheless a most under-implemented MPOWER measure in terms of the number of countries that have fully implemented it” (WHO, 2015). To the detriment of global tobacco control efforts, only one in eight countries provides comprehensive cost-covered services, only one in four provides some cost coverage for nicotine replacement therapy, and fewer than one third provide a toll-free quit line (WHO, 2015).


2019, Vol 3, pp. 1442
Author(s): E. Richard Gold, Sarah E. Ali-Khan, Liz Allen, Lluis Ballell, Manoel Barral-Netto, ...

Serious concerns are increasingly being raised about the way research is organized collectively. They include the escalating costs of research, lower research productivity, low public trust in researchers to report the truth, lack of diversity, poor community engagement, ethical concerns over research practices, and irreproducibility. Open science (OS) collaborations comprise a set of practices, including open access publication, open data sharing and the absence of restrictive intellectual property rights, with which institutions, firms, governments and communities are experimenting in order to overcome these concerns. We gathered two groups of international representatives from a large variety of stakeholders to construct a toolkit to guide and facilitate data collection about OS and non-OS collaborations. Ultimately, the toolkit will be used to assess and study the impact of OS collaborations on research and innovation. The toolkit contains the following four elements: 1) an annual report form of quantitative data to be completed by OS partnership administrators; 2) a series of semi-structured interview guides for stakeholders; 3) a survey form for participants in OS collaborations; and 4) a set of other quantitative measures best collected by other organizations, such as research foundations and governmental or intergovernmental agencies. We opened our toolkit to community comment and input. We present the resulting toolkit for use by government and philanthropic grantors, institutions, researchers and community organizations, with the aim of measuring the implementation and impact of OS partnerships across these organizations. We invite these and other stakeholders not only to measure, but also to share the resulting data so that social scientists and policy makers can analyse the data across projects.


Author(s): Angélica Conceição Dias Miranda, Milton Shintaku, Simone Machado Firme

SURVEY OF CRITERIA FOR EVALUATION OF REPOSITORY TOOLS ACCORDING TO OPEN SCIENCE
Abstract: Repositories have become common in universities and research institutes as a way of offering access to scientific production and thereby giving visibility to the institution. In many cases, however, they remain restricted to the concepts of the open archives and open access movements even as the Open Science movement is already under discussion, revealing a certain lag and calling for studies that support the updating of this important tool. This study therefore examines the requirements involved in the open movements in order to support the technical and technological discussion. It is a bibliographic study that turns information about these movements into criteria for evaluating repository-building tools, presenting the implementation of interaction as a new challenge. The concluding remarks aim to contribute to a more applied discussion of Open Science and to the adjustment of repositories to this movement.
Keywords: Repositories. Evaluation criteria. Open archives. Open access. Open data. Open science.

