Future progress in Antarctic science: improving data care, sharing and collaboration

Author(s):  
Alan K. Cooper

ABSTRACT: Data are the foundation of modern observational science. High-quality science relies on high-quality data. In Antarctica, unlike elsewhere, researchers must disperse data and conduct science differently. They must work within the laws enacted under the Antarctic Treaty, which defines Antarctica as a continent for peace and science, where data sharing and international collaboration are requisite keystones. Scientists also work under the oversight guidance of the Scientific Committee on Antarctic Research (SCAR). In the last decade, rapid technological advances and a vast increase in digital data volumes have changed the ways data are acquired, communicated, analysed, displayed and reported. Yet the underlying science culture in which data are funded, utilised and cared for has changed little. Science-culture changes are needed for greater progress in Antarctic science.

We briefly summarise and discuss aspects of Antarctic 'data care', which is a subset of data management. We offer perceptions on how changes to some aspects of the current science culture could inspire greater data sharing and international collaboration, to achieve greater success. The changes would place greater emphasis on data visualisation, higher national priority on data care, implementation of a data-library concept for data sharing, greater individual responsibility for data care, and further integration of the cultural arts into data and science presentations.

Much effort has gone into data management in the international community, and there are many excellent examples of successful collaborative Antarctic science programs within SCAR built on existing data sets. Yet challenges in data care remain, and the specific suggestions we make deserve attention from the science community, to further promote peace and progress in Antarctic science.

2017, Vol 47 (1), pp. 46-55
Author(s):  
S Aqif Mukhtar, Debbie A Smith, Maureen A Phillips, Maire C Kelly, Renate R Zilkens, ...

Background: The Sexual Assault Resource Center (SARC) in Perth, Western Australia provides free 24-hour medical, forensic, and counseling services to persons aged over 13 years following sexual assault. Objective: The aim of this research was to design a data management system that maintains accurate, high-quality information on all sexual assault cases referred to SARC, facilitating audit and peer-reviewed research. Methods: The work to develop the SARC Medical Services Clinical Information System (SARC-MSCIS) took place during 2007–2009 as a collaboration between SARC and Curtin University, Perth, Western Australia. Patient demographics, assault details (including injury documentation), and counseling sessions were identified as core data sections. A user authentication system was set up for data security, and data quality checks were incorporated to ensure high-quality data. Results: The SARC-MSCIS was developed with three core data sections comprising 427 data elements to capture patient data. Development of the SARC-MSCIS has resulted in a comprehensive capacity to support sexual assault research. Four additional projects are underway to explore both the public health and criminal justice considerations in responding to sexual violence. The data showed that 1,933 sexual assault episodes occurred among 1,881 patients between January 1, 2009 and December 31, 2015. Patients knew the assailant as a friend, carer, acquaintance, relative, partner, or ex-partner in 70% of cases; the assailant was a stranger in 16% of cases. Conclusion: This project has resulted in the development of a high-quality data management system that maintains information for the medical and forensic services offered by SARC. The system has also proven to be a reliable resource enabling research in the area of sexual violence.
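The abstract does not detail how the data quality checks were implemented. As a rough illustration, the Python sketch below shows the kind of record-level validation such a system might perform; the field names and rules are hypothetical, not the actual SARC-MSCIS schema.

```python
from datetime import date

# Hypothetical record-level quality checks, loosely modelled on the kind of
# validation the abstract describes; field names are illustrative only.
REQUIRED_FIELDS = {"patient_id", "date_of_birth", "assault_date", "referral_source"}

def validate_episode(record: dict) -> list[str]:
    """Return a list of data-quality problems found in one episode record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    dob = record.get("date_of_birth")
    assault = record.get("assault_date")
    if dob and assault:
        if assault < dob:
            problems.append("assault_date precedes date_of_birth")
        elif (assault - dob).days // 365 < 13:
            problems.append("patient below SARC age threshold (13 years)")
    return problems

record = {
    "patient_id": "P-0001",
    "date_of_birth": date(1990, 5, 1),
    "assault_date": date(2014, 8, 12),
    "referral_source": "police",
}
print(validate_episode(record))  # [] -> record passes all checks
```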


2018, Vol 6 (2), pp. 89-92
Author(s):  
Sarah Whitcher Kansa, Eric C. Kansa

ABSTRACT: This special section stems from discussions that took place in a forum at the Society for American Archaeology's annual conference in 2017. The forum, Beyond Data Management: A Conversation about "Digital Data Realities", addressed challenges in fostering greater reuse of the digital archaeological data now curated in repositories. Forum discussants considered digital archaeology beyond the status quo of "data management" to better situate the sharing and reuse of data in archaeological practice. The five papers in this special section address key themes that emerged from these discussions, including: challenges in broadening data literacy by making instructional uses of data; strategies to make data more visible, better cited, and more integral to peer-review processes; and pathways to create higher-quality data better suited for reuse. These papers highlight how research data management needs to move beyond mere "check-box" compliance with granting requirements. The problems and proposed solutions articulated in these papers help communicate good practices that can jumpstart a virtuous cycle of better data creation leading to higher-impact reuses of data.


Author(s):  
Christian Ohmann, Serena Battaglia, Christine Toneatti, Steve Canham, Jacques Demotes

Enterprise Resource Planning (ERP) and Business Intelligence (BI) systems demand rigorous rules for maintaining valuable information about customers, products, suppliers and vendors, because data captured through different sources may be of poor quality, in many cases due to human error. The problem arises when this information is accessible across multiple systems within the same organization. Bringing this scattered data up to standard is a top agenda item for any organization, as maintaining data is as complicated as keeping it at high quality. Master Data Management (MDM) provides a solution to these problems by maintaining "a single reference of truth": an authoritative source of master data (customers, products, employees, etc.). MDM is a prominent concern nowadays, as valid data is required for the strategic, tactical and operational steering of every organization. The path to MDM begins with data quality, which calls for the discovery, profiling and analysis of master data. Inadequate data may lead to adverse effects such as wrong decisions, lost time, bad results and unnecessary risk. There is therefore a need to deal with master data, and with the quality of this specific data, in a successful and efficient manner. To that end, an approach is proposed in this paper. The research focuses on the development of a model for data profiling to assess the level of quality traits for Master Data Management. Results are shown by executing the defined steps in the Talend tool over a collected dataset. The level of the quality traits thus correlates directly with an organization's ability to make proper decisions and achieve better outcomes.
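The paper demonstrates its profiling steps in Talend; as a language-agnostic illustration of the underlying idea, the following minimal pandas sketch computes three commonly cited quality traits (completeness, uniqueness, validity) over a toy master-data table. The column names and the validity rule are illustrative assumptions, not the authors' model.

```python
import pandas as pd

# Toy master-data table with deliberate defects: a duplicated key,
# a missing email, and a malformed email.
customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, 5],
    "email": ["a@x.com", None, "b@x.com", "not-an-email", "c@x.com"],
})

# Completeness: share of non-null values per column.
completeness = customers.notna().mean()
# Uniqueness: a master-data key should identify each record exactly once.
uniqueness = customers["customer_id"].is_unique
# Validity: share of emails matching a simple (illustrative) pattern.
validity = customers["email"].str.contains(r"^\S+@\S+\.\S+$", na=False).mean()

print(f"completeness:\n{completeness}")
print(f"customer_id unique: {uniqueness}")   # False -> duplicate key found
print(f"valid email share: {validity:.0%}")  # 60%
```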


2021
Author(s):  
Betty Agwang, Yuka Manabe

Background: In resource-limited settings, there is a paucity of high-quality data management systems for clinical research. As a result, data are often managed in high-income countries, disadvantaging researchers at the sites where the data are collected. An institutional data management system that addresses the data collection concerns of collaborators and sponsors is a key element of institutional capacity for high-quality research. Our goal was to build a local data management center to streamline data collection and validation compliant with international regulatory bodies. Methods: Leveraging established collaborations between the Office of Cyber Infrastructure and Computational Biology of the National Institutes of Health (NIH) and the Johns Hopkins University School of Medicine in the United States, the Infectious Diseases Institute (IDI) at Makerere University built a data management coordinating center. This included mentorship from the NIAID International Centers for Excellence in Research and training of key personnel at a functioning data center in South Africa. The number of studies, the number of case report forms processed, and the number of publications emanating from studies using the data management unit since its inception were tabulated. Results: The Infectious Diseases Institute data management core began processing data in 2009 with three personnel, hardware (network-enabled scanners, desktops, and a server held in Bethesda with nightly backup) and software licenses, in addition to on-site support from the NIH. In the last 10 years, 850,869 pages of data have been processed from 60 studies in Uganda and across sub-Saharan Africa, Asia and South America. Real-time data cleaning and data analysis occur routinely and enhance clinical research quality; a total of 212 publications from IDI investigators have appeared over the past 10 years. Apart from the server backup services provided by the NIH, the center is now self-sustaining from fees charged to individual studies. Conclusion: Collaborative partnership among research institutions enabled the IDI to build a core data management and coordination center to support clinical studies, build institutional research capacity, and advance data quality and integrity for investigators and sponsors.


2020

Good data management is essential for ensuring the validity and quality of data in all types of clinical research and is an essential precursor for data sharing. The Data Management Portal has been developed to support researchers in ensuring that high-quality data management is fully considered, and planned for, from the outset and throughout the life of a research project. The steps described in the portal will help identify the areas that should be considered when developing a Data Management Plan, with a particular focus on data management systems and how to organise and structure your data. Other elements include best practices for data capture, entry, processing and monitoring; how to prepare data for analysis, sharing and archiving; and an extensive collection of resources linked to data management that can be searched and filtered by type.
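The portal itself is described only at a high level. Purely as a schematic sketch, the snippet below organises the areas the abstract lists into a simple checklist structure; the section names and the REDCap example value are illustrative assumptions, not the portal's actual data model.

```python
from dataclasses import dataclass

# Schematic checklist of the Data Management Plan areas named in the
# abstract; an illustration only, not the portal's real schema.
@dataclass
class DataManagementPlan:
    project: str
    data_management_system: str = "TBD"
    organisation_and_structure: str = "TBD"
    capture_entry_processing_monitoring: str = "TBD"
    analysis_sharing_archiving: str = "TBD"

    def unresolved_sections(self) -> list[str]:
        """Sections still marked 'TBD' and therefore needing attention."""
        return [name for name, value in vars(self).items() if value == "TBD"]

plan = DataManagementPlan(project="Example trial",
                          data_management_system="REDCap")
print(plan.unresolved_sections())
# ['organisation_and_structure', 'capture_entry_processing_monitoring',
#  'analysis_sharing_archiving']
```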


2015, Vol 7 (1)
Author(s):  
Stacey Hoferka, Marcus Rennick, Erin E. Austin, Anne Burke, Rosa Ergas, ...

This roundtable will provide a forum for the syndromic surveillance Community of Practice (CoP) to learn about the activities of the BioSense 2.0 User Group (BUG) workgroups, which address priority issues in syndromic surveillance. The goals of the workgroups are to coordinate efforts nationwide, to better inform the Governance Group and CDC on the development of BioSense 2.0, and to achieve high-quality outcomes for the practice of syndromic surveillance. Representatives from each workgroup will describe their efforts to date so that participants can discuss key challenges and best practices in the areas of data quality, data sharing, onboarding, and developing syndrome definitions.
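Syndrome definitions of the kind the workgroups develop are often expressed as keyword or code lists applied to chief-complaint text. The sketch below is a minimal illustration of that pattern; the term list is invented for the example and is not an actual BioSense 2.0 or CoP-endorsed definition.

```python
import re

# Illustrative keyword-based syndrome definition for influenza-like
# illness (ILI); terms are examples only, not an endorsed definition.
ILI_TERMS = re.compile(r"\b(fever|influenza|flu|cough|sore throat)\b",
                       re.IGNORECASE)

def matches_ili(chief_complaint: str) -> bool:
    """Flag a visit whose chief complaint matches the ILI term list."""
    return bool(ILI_TERMS.search(chief_complaint))

visits = ["FEVER AND COUGH x3 days", "ankle sprain", "sore throat, headache"]
print([v for v in visits if matches_ili(v)])
# ['FEVER AND COUGH x3 days', 'sore throat, headache']
```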


2020

The United Nations Sustainable Development Goals (SDG) initiative has the potential to set the direction for a future world that works for everyone. Approved by 193 United Nations member countries in September 2015 to help guide global and national development policies in the period to 2030, the 17 goals build on the successes of the Millennium Development Goals, but also include new priority areas such as climate change, economic inequality, innovation, sustainable consumption, and peace and justice. Assessed against commonly agreed targets and indicators, the goals should facilitate inter-governmental cooperation and the development of regional and even global development strategies.

However, each goal presents considerable challenges in terms of collecting and analysing relevant data and producing the statistics needed to measure progress. Most governments in lower-resourced countries simply do not yet have the systems and controls in place to produce high-quality, reliable data and statistics, and it is questionable whether the quality and integrity of the available information is adequate to support meaningful decisions and set direction for the future. The implications are substantial: where progress cannot be measured accurately because of inadequate or flawed statistics, the result can be misguided decisions, doubts about achievement of the goals, and significant wasted resources.

Getting statistics 'right' depends upon the quality and integrity of the data used to produce them and on the quality of the processes for collecting, manipulating and analysing the data. Without documentary records as evidence of how the data were gathered and analysed, or of how the statistics were produced and disseminated, it is not possible to confirm that the statistics are complete, accurate and relevant. Various global organisations do recognise the importance of high-quality data and statistics for measuring the SDG indicators reliably, but there has been little attention to the role of records in providing the evidence needed to trust the data and statistics. There is, moreover, a lack of awareness that digital information simply will not survive without policies and procedures to manage and preserve it through time. As a result, digital data, statistics and records are being lost regularly and on a large scale, particularly in lower-resourced countries, where the structures needed to protect and preserve them are not yet in place.

This book explores, through a series of case studies, the substantial challenges of assembling reliable data and statistics to address pressing development challenges, particularly in Africa. By highlighting the enormous potential value of creating and using high-quality data, statistics and records as an interconnected resource, and by describing how this can be achieved, the book aims to contribute to defining meaningful and realistic global and national development policies in the critical period to 2030.

