Survival-Span Method: How to Qualitatively Estimate Lifespan to Improve the Study of Aging, and not Disease, in Aging Studies

2021 ◽  
Vol 2 ◽  
Author(s):  
Julia Adelöf ◽  
Jaime M. Ross ◽  
Madeleine Zetterberg ◽  
Malin Hernebring

Lifespan analyses are important for advancing our understanding of the aging process. There are two major issues in performing lifespan studies: 1) late-stage lifespan analysis may include animals with non-terminal yet advanced illnesses, which can emphasize indirect consequences of disease rather than the aging process per se, and 2) such studies often involve challenging welfare considerations. Herein, we present an alternative to the traditional way of performing lifespan studies: a novel method that generates high-quality data and allows for the inclusion of otherwise-excluded animals, even animals removed at early signs of disease. This Survival-span method is designed to be feasible with simple means for any researcher and strives to improve the quality of aging studies and increase animal welfare.
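The abstract does not detail the Survival-span calculation itself. As a hedged illustration of the general idea of including animals removed at early signs of disease, the sketch below treats such removals as right-censored observations in a standard Kaplan-Meier estimator; this is a conventional stand-in, not the authors' specific protocol, and all times and event codes are invented.

```python
# Hedged sketch: one standard way to include animals removed early
# (e.g., at first signs of disease) is right-censoring, as in the
# Kaplan-Meier estimator below. Illustrative only, not the authors'
# Survival-span protocol.

def kaplan_meier(times, events):
    """times: observation times; events: 1 = death, 0 = censored/removed.
    Returns a list of (time, survival probability) at each death."""
    survival, s = [], 1.0
    at_risk = len(times)
    for t, event in sorted(zip(times, events)):
        if event:                      # a death observed at time t
            s *= (at_risk - 1) / at_risk
            survival.append((t, s))
        at_risk -= 1                   # censored animals leave the risk set
    return survival

times  = [10, 12, 15, 15, 20]
events = [1,  0,  1,  1,  1]           # animal removed at t=12 is censored
print(kaplan_meier(times, events))
```

Censoring keeps the removed animal's partial information (it survived to day 12) without treating its removal as a death, which is the general principle the abstract invokes.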

Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4486 ◽  
Author(s):  
Mohan Li ◽  
Yanbin Sun ◽  
Yu Jiang ◽  
Zhihong Tian

In sensor-based systems, the data of an object is often provided by multiple sources. Since the data quality of these sources may differ, it is necessary when querying observations to carefully select the sources so that high-quality data is accessed. One solution is to perform a quality evaluation in the cloud and select a set of high-quality, low-cost data sources (i.e., sensors or small sensor networks) that can answer queries. This paper studies the problem of min-cost quality-aware querying, which aims to find high-quality results from multiple sources at minimal cost. A measure for query results is provided, and two methods for answering min-cost quality-aware queries are proposed. How to obtain a reasonable parameter setting is also discussed. Experiments on real-life data verify that the proposed techniques are efficient and effective.
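The paper's two query-answering methods are not described in the abstract; the following is only a minimal greedy sketch of the underlying idea of trading cost against quality when selecting sources. All source names, quality scores, and costs are invented, and aggregate quality is simplistically assumed to be additive.

```python
# Hypothetical sketch: greedily pick cost-effective sources until the
# aggregate quality of the selected set meets a target threshold.
# Not the paper's algorithm; additive quality is an assumption.

def min_cost_quality_select(sources, quality_target):
    """sources: list of (name, quality, cost); returns chosen source names."""
    chosen, total_quality = [], 0.0
    # Consider sources in order of best quality-per-cost ratio.
    for name, quality, cost in sorted(sources, key=lambda s: s[2] / s[1]):
        if total_quality >= quality_target:
            break
        chosen.append(name)
        total_quality += quality
    return chosen

sensors = [("s1", 0.9, 5.0), ("s2", 0.6, 1.0), ("s3", 0.8, 2.0)]
print(min_cost_quality_select(sensors, 1.2))  # → ['s2', 's3']
```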


2021 ◽  
pp. 193896552110254
Author(s):  
Lu Lu ◽  
Nathan Neale ◽  
Nathaniel D. Line ◽  
Mark Bonn

As the use of Amazon’s Mechanical Turk (MTurk) has increased among social science researchers, so, too, has research into the merits and drawbacks of the platform. However, while many endeavors have sought to address issues such as generalizability, the attentiveness of workers, and the quality of the associated data, there has been relatively less effort concentrated on integrating the various strategies that can be used to generate high-quality data using MTurk samples. Accordingly, the purpose of this research is twofold. First, existing studies are integrated into a set of strategies/best practices that can be used to maximize MTurk data quality. Second, focusing on task setup, selected platform-level strategies that have received relatively less attention in previous research are empirically tested to further enhance the contribution of the proposed best practices for MTurk usage.
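The specific platform-level strategies the article tests are not given in the abstract. As a hedged illustration of one commonly discussed data-quality practice with MTurk samples, the sketch below screens out responses that fail an attention-check item or were completed implausibly fast; the field names and threshold are assumptions, not the authors' tested setup.

```python
# Illustrative screening step (assumed fields and threshold, not the
# article's tested strategies): drop responses that fail an attention
# check or finish implausibly fast.

def screen_responses(responses, min_seconds=60):
    kept = []
    for r in responses:
        if r["attention_check"] == "passed" and r["duration_s"] >= min_seconds:
            kept.append(r)
    return kept

batch = [
    {"id": "w1", "attention_check": "passed", "duration_s": 300},
    {"id": "w2", "attention_check": "failed", "duration_s": 280},
    {"id": "w3", "attention_check": "passed", "duration_s": 20},
]
print([r["id"] for r in screen_responses(batch)])  # only w1 survives
```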


1994 ◽  
Vol 8 (4) ◽  
pp. 883-886 ◽  
Author(s):  
Janet L. Andersen

The Environmental Protection Agency (EPA) is required by law to assure that the use of pesticides does not cause unreasonable risks to humans or the environment when risks are compared with benefits. Weed scientists conduct hundreds of comparative efficacy tests each year, but the results are often of little use to the Agency in benefit assessments because the tests are unpublished or otherwise unavailable to the Agency, the tests are conducted in a manner unusable for regulatory purposes, or there are inconsistencies between tests conducted year to year or at different sites. Despite the lack of high quality data, the Agency is compelled to make the best regulatory decision possible with the information at hand, and it may appear to some that decisions are based more on policy than science. EPA is looking for experimental methods that will improve the quality of benefits data available to the Agency.


Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 1978 ◽  
Author(s):  
Argyro Mavrogiorgou ◽  
Athanasios Kiourtis ◽  
Konstantinos Perakis ◽  
Stamatios Pitsios ◽  
Dimosthenis Kyriazis

It is an undeniable fact that Internet of Things (IoT) technologies have become a milestone advancement in the digital healthcare domain: the number of IoT medical devices has grown exponentially, and it is anticipated that by 2020 there will be over 161 million of them connected worldwide. In this era of continuous growth, IoT healthcare faces various challenges, such as the collection, quality estimation, interpretation, and harmonization of the data deriving from huge numbers of heterogeneous IoT medical devices. Even though various approaches have been developed for solving each of these challenges, none proposes a holistic approach for achieving data interoperability between high-quality data deriving from heterogeneous devices. For that reason, this manuscript presents a mechanism that addresses the intersection of these challenges. The mechanism first collects the different devices' datasets and then cleans them. The cleaning results are then used to estimate the overall data quality of each dataset, in combination with measurements of the availability and reliability of the device that produced it. Consequently, only the high-quality data is kept and translated into a common format, ready for further use. The proposed mechanism is evaluated in a specific scenario, producing reliable results and achieving data interoperability with 100% accuracy and data quality with more than 90% accuracy.
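As a rough sketch of the described flow (collect, clean, score quality, keep only high-quality datasets, translate to a common format), the following is a toy pipeline; the cleaning rule, quality measure, threshold, and record format are invented stand-ins, not the authors' mechanism.

```python
# Toy pipeline sketch (not the authors' implementation): clean each
# device's dataset, score its quality, and keep only datasets above a
# threshold, translated into a common tagged-record format.

def clean(dataset):
    # Drop records with missing fields (a stand-in for real cleaning).
    return [r for r in dataset if all(v is not None for v in r.values())]

def quality_score(raw, cleaned):
    # Fraction of records that survived cleaning.
    return len(cleaned) / len(raw) if raw else 0.0

def harmonize(datasets, threshold=0.9):
    common = []
    for device_id, raw in datasets.items():
        cleaned = clean(raw)
        if quality_score(raw, cleaned) >= threshold:
            # Common format: tag each record with its source device.
            common.extend({"device": device_id, **r} for r in cleaned)
    return common

data = {
    "dev1": [{"hr": 72}, {"hr": None}],   # 50% quality: dropped
    "dev2": [{"hr": 80}, {"hr": 76}],     # 100% quality: kept
}
print(harmonize(data))
```

In the paper's mechanism the quality estimate also folds in device availability and reliability; here only the cleaning-survival fraction is used, for brevity.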


2020 ◽  
Author(s):  
◽  

Good data management is essential for ensuring the validity and quality of data in all types of clinical research and is an essential precursor for data sharing. The Data Management Portal has been developed to provide support to researchers to ensure that high-quality data management is fully considered, and planned for, from the outset and throughout the life of a research project. The steps described in the portal will help identify the areas which should be considered when developing a Data Management Plan, with a particular focus on data management systems and how to organise and structure your data. Other elements include best practices for data capture, entry, processing and monitoring, how to prepare data for analysis, sharing, and archiving, and an extensive collection of resources linked to data management which can be searched and filtered depending on their type.


Forests ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 99
Author(s):  
Marieke Sandker ◽  
Oswaldo Carrillo ◽  
Chivin Leng ◽  
Donna Lee ◽  
Rémi d’Annunzio ◽  
...  

This article discusses the importance of quality deforestation area estimates for reliable and credible REDD+ monitoring and reporting. It discusses how countries can make use of global spatial tree cover change assessments, and how considerable additional efforts are required to translate these into national deforestation estimates. The article illustrates the relevance of countries' continued efforts to improve data quality for REDD+ monitoring by looking at Mexico, Cambodia, and Ghana. The experience in these countries shows differences between deforestation areas assessed directly from maps and improved sample-based deforestation area estimates, with significant differences in both the magnitude and the trend of assessed deforestation between the two methods. Forests play an important role in achieving the goals of the Paris Agreement, and therefore the ability of countries to accurately measure greenhouse gases from forests is critical. Continued efforts by countries are needed to produce credible and reliable data. Supporting countries in continually increasing the quality of deforestation area estimates will also support more efficient allocation of finance that rewards REDD+ results-based payments.
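One widely used way to turn map-based areas into sample-based estimates is a stratified area estimator over reference samples, which corrects for map commission and omission errors. The sketch below illustrates that general idea with invented strata and counts; it is not the specific workflow used by the countries discussed.

```python
# Hedged sketch of a sample-based (stratified) area estimator, the kind
# of adjustment that separates map-based deforestation areas from
# sample-based estimates. Strata and sample counts are invented.

def stratified_area_estimate(strata):
    """strata: list of (map_area_ha, n_sampled, n_deforestation_in_sample)."""
    total = 0.0
    for map_area, n, n_defor in strata:
        # Proportion of reference samples labeled deforestation in stratum.
        total += map_area * (n_defor / n)
    return total

# Two map strata: "mapped deforestation" and "mapped stable forest".
strata = [
    (10_000, 100, 85),   # commission errors shrink the mapped class
    (90_000, 100, 2),    # omission errors add area from the stable class
]
print(stratified_area_estimate(strata))  # → 10300.0 ha
```

Here the sample-based estimate (10,300 ha) exceeds the 10,000 ha read directly off the map, the kind of magnitude shift the article reports for the three countries.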


Author(s):  
Mary Kay Gugerty ◽  
Dean Karlan

Without high-quality data, even the best-designed monitoring and evaluation systems will collapse. Chapter 7 introduces some of the basics of collecting high-quality data and discusses how to address challenges that frequently arise. High-quality data must be clearly defined and have an indicator that validly and reliably measures the intended concept. The chapter then explains how to avoid common biases and measurement errors like anchoring, social desirability bias, the experimenter demand effect, unclear wording, long recall periods, and translation context. It then guides organizations on how to find indicators, test data collection instruments, manage surveys, and train staff appropriately for data collection and entry.
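As a small illustration of the kind of automated data-quality check such guidance implies (not the chapter's actual procedures), the sketch below flags missing or out-of-range values in survey records before analysis; the field names and valid ranges are assumptions.

```python
# Illustrative validation sketch (assumed fields and ranges): flag
# common data-entry problems in a survey record before analysis.

def validate_record(record, rules):
    """Return a list of problems found in one survey record.
    rules maps field name -> (low, high) inclusive valid range."""
    problems = []
    for field, (lo, hi) in rules.items():
        value = record.get(field)
        if value is None:
            problems.append(f"{field}: missing")
        elif not (lo <= value <= hi):
            problems.append(f"{field}: {value} outside [{lo}, {hi}]")
    return problems

rules = {"age": (0, 120), "household_size": (1, 30)}
print(validate_record({"age": 250, "household_size": 4}, rules))
```

Running such checks at entry time, rather than after fieldwork ends, is what makes it possible to re-interview or correct records while it is still cheap to do so.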


2019 ◽  
Vol 14 (3) ◽  
pp. 338-366
Author(s):  
Kashif Imran ◽  
Evelyn S. Devadason ◽  
Cheong Kee Cheok

This article analyzes the overall and type-specific developmental impacts of remittances for migrant-sending households (HHs) in districts of Punjab, Pakistan. For this purpose, an HH-based human development index is constructed from the dimensions of education, health, and housing, with a view to enriching insights into the interactions between remittances and HH development. Using high-quality data from an HH micro-survey for Punjab, the study finds that most migrant-sending HHs are better off than HHs without this stream of income. More importantly, migrant HHs have significantly higher development in terms of housing in most districts of Punjab relative to non-migrant HHs. Thus, the government would need policy interventions focused on housing to address inequalities in human development at the district-HH level, and subsequently balance its current focus on the provision of education and health.
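The abstract does not specify how the HH-based index is constructed. The sketch below shows one conventional way to build such an index (min-max normalization per dimension, geometric-mean aggregation as in the UNDP HDI); the bounds, values, and aggregation choice are assumptions, not the authors' specification.

```python
# Hypothetical construction of a household-level index from the three
# dimensions named in the abstract. Bounds and aggregation are assumed,
# not taken from the article.

def dimension_index(value, lo, hi):
    # Min-max normalization onto [0, 1].
    return (value - lo) / (hi - lo)

def hh_development_index(edu, health, housing, bounds):
    dims = [
        dimension_index(edu, *bounds["education"]),
        dimension_index(health, *bounds["health"]),
        dimension_index(housing, *bounds["housing"]),
    ]
    # Geometric mean, as in the UNDP HDI, so no dimension fully
    # substitutes for another.
    product = 1.0
    for d in dims:
        product *= d
    return product ** (1 / 3)

bounds = {"education": (0, 15), "health": (20, 85), "housing": (0, 10)}
print(round(hh_development_index(9, 65, 6, bounds), 3))
```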


2017 ◽  
Vol 47 (1) ◽  
pp. 46-55 ◽  
Author(s):  
S Aqif Mukhtar ◽  
Debbie A Smith ◽  
Maureen A Phillips ◽  
Maire C Kelly ◽  
Renate R Zilkens ◽  
...  

Background: The Sexual Assault Resource Center (SARC) in Perth, Western Australia provides free 24-hour medical, forensic, and counseling services to persons aged over 13 years following sexual assault. Objective: The aim of this research was to design a data management system that maintains accurate quality information on all sexual assault cases referred to SARC, facilitating audit and peer-reviewed research. Methods: The work to develop the SARC Medical Services Clinical Information System (SARC-MSCIS) took place during 2007–2009 as a collaboration between SARC and Curtin University, Perth, Western Australia. Patient demographics, assault details, including injury documentation, and counseling sessions were identified as core data sections. A user authentication system was set up for data security. Data quality checks were incorporated to ensure high-quality data. Results: The SARC-MSCIS was developed with three core data sections comprising 427 data elements to capture patient data. Development of the SARC-MSCIS has resulted in comprehensive capacity to support sexual assault research. Four additional projects are underway to explore both the public health and criminal justice considerations in responding to sexual violence. The data showed that 1,933 sexual assault episodes had occurred among 1,881 patients between January 1, 2009 and December 31, 2015. Sexual assault patients knew the assailant as a friend, carer, acquaintance, relative, partner, or ex-partner in 70% of cases, with 16% of assailants being strangers to the patient. Conclusion: This project has resulted in the development of a high-quality data management system to maintain information for medical and forensic services offered by SARC. This system has also proven to be a reliable resource enabling research in the area of sexual violence.

