Best Practices
Recently Published Documents


TOTAL DOCUMENTS: 18454 (FIVE YEARS: 10725)
H-INDEX: 118 (FIVE YEARS: 53)

2021 ◽  
Author(s):  
Jara Popp ◽  
Till Biskup

Reproducibility is at the heart of science. However, most published results lack the information necessary for them to be reproduced independently. Worse still, most authors will not be able to reproduce their own results from a few years ago, because they lack a gap-less record of every processing and analysis step, including all parameters involved. There is only one way to overcome this problem: developing robust tools for data analysis that, while maintaining maximum flexibility in their application, allow the user to perform advanced processing steps in a scientifically sound way. At the same time, the only viable approach to reproducible and traceable analysis is to relieve the user of the responsibility for logging all processing steps and their parameters. This can only be achieved with a system that takes care of these crucial, though often neglected, tasks. Here, we present a solution to this problem: a framework for the analysis of spectroscopic data (ASpecD), written in the Python programming language, that can be used without any actual programming. This framework is made available open-source and free of charge and focusses on usability, a small footprint and modularity while ensuring reproducibility and good scientific practice. Furthermore, we present a set of best practices and design rules for scientific software development and data analysis. Together, this empowers scientists to focus on their research, minimising the need to implement complex software tools, while ensuring full reproducibility. We anticipate this work to have a major impact on reproducibility and good scientific practice, as we raise awareness of their importance, summarise proven best practices and present a working, user-friendly software solution.
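
The core design principle described above, automatic and complete logging of every processing step together with its parameters, can be illustrated with a short generic sketch in Python. This is not the actual ASpecD API; the Dataset, HistoryRecord and ScalarOffsetCorrection classes below are hypothetical stand-ins for the pattern of a dataset that records its own processing history.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any


@dataclass
class HistoryRecord:
    step: str
    parameters: dict[str, Any]
    timestamp: str


@dataclass
class Dataset:
    data: list[float]
    history: list[HistoryRecord] = field(default_factory=list)

    def process(self, step) -> None:
        # The dataset, not the user, is responsible for record keeping.
        step.apply(self)
        self.history.append(HistoryRecord(
            step=type(step).__name__,
            parameters=dict(step.parameters),
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))


class ScalarOffsetCorrection:
    """Hypothetical processing step: subtract a constant offset."""

    def __init__(self, offset: float = 0.0):
        self.parameters = {"offset": offset}

    def apply(self, dataset: Dataset) -> None:
        dataset.data = [value - self.parameters["offset"] for value in dataset.data]


dataset = Dataset(data=[1.2, 1.5, 1.1])
dataset.process(ScalarOffsetCorrection(offset=1.0))
for record in dataset.history:  # gap-less, machine-readable processing record
    print(record)
```

Once every operation is routed through such a history, the complete provenance of a figure or result can be serialised alongside the data and replayed later, which is precisely the burden this kind of framework is meant to lift from the user.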


Author(s):  
Balázs Sonkodi ◽  
Endre Varga ◽  
László Hangody ◽  
Gyula Poór ◽  
István Berkes

Abstract Background Anterior cruciate ligament injury arises when the fibers of the knee's anterior cruciate ligament are stretched, partially torn, or completely torn. Despite remarkable improvements in surgical techniques and widely available rehabilitation best practices, operated patients either re-injure their reconstructed anterior cruciate ligament or, in the majority of cases, develop early osteoarthritis. New mechanism theories of non-contact anterior cruciate ligament injury and delayed onset muscle soreness could provide a novel perspective on how to respond to this clinical challenge. Main body A tri-phasic injury model is proposed for these non-contact injuries. Mechano-energetic microdamage of the proprioceptive sensory nerve terminals is suggested to be the first-phase injury, followed by harsher tissue damage in the second phase. The longitudinal dimension is the third phase, the equivalent of the repeated bout effect of delayed onset muscle soreness. The current paper puts this longitudinal injury phase into perspective as the phase in which the long-term memory consolidation and reconsolidation of this learning-related neuronal injury evolves and in which the extent of the neuronal regeneration is determined. Reinstating the mitochondrial energy supply and ‘breathing capacity’ of the injured proprioceptive sensory neurons during this period is emphasized, as is avoiding fatigue, overuse, overload and re-injury. Conclusions Extended use, for a minimum of a year or even longer, of a current rehabilitation technique, namely moderate-intensity, low-resistance stationary cycling, is recommended, preferably at the end of the day. This exercise therapeutic strategy should supplement the currently used rehabilitation best practices as a knee anti-aging maintenance effort.


2021 ◽  
Author(s):  
Hannah B. Musgrove ◽  
Megan A. Catterton ◽  
Rebecca R. Pompano

Stereolithographic (SL) 3D printing, especially digital light processing (DLP) printing, is a promising rapid fabrication method for bio-microfluidic applications such as clinical tests, lab-on-a-chip devices, and sensor-integrated devices. The benefits of 3D printing lead many to believe this fabrication method will accelerate the use of bioanalytical microfluidics, but major obstacles must be overcome to fully utilize the technology. For commercially available printing materials, these include the challenge of achieving the print resolution and mechanical stability required for a particular design, the cytotoxic components within many SL resins, and low optical compatibility for imaging experiments. Potential solutions to these problems are scattered throughout the literature and rarely available in head-to-head comparisons. Therefore, we present here principles for navigating 3D printing techniques, along with systematic tests to inform resin selection and the optimization of the design and fabrication of SL 3D printed bio-microfluidic devices.


2021 ◽  
Vol 2 (2) ◽  
pp. 843-861
Author(s):  
Yulia Pustovalova ◽  
Frank Delaglio ◽  
D. Levi Craft ◽  
Haribabu Arthanari ◽  
Ad Bax ◽  
...  

Abstract. Although the concepts of nonuniform sampling (NUS) and non-Fourier spectral reconstruction in multidimensional NMR began to emerge 4 decades ago (Bodenhausen and Ernst, 1981; Barna and Laue, 1987), it is only relatively recently that NUS has become commonplace. Advantages of NUS include the ability to tailor experiments to reduce data collection time and to improve spectral quality, whether through detection of closely spaced peaks (i.e., “resolution”) or peaks of weak intensity (i.e., “sensitivity”). Wider adoption of these methods is the result of improvements in computational performance, a growing abundance and flexibility of software, support from NMR spectrometer vendors, and the increased data sampling demands imposed by higher magnetic fields. However, the identification of best practices remains a significant and unmet challenge. Unlike the discrete Fourier transform, non-Fourier methods used to reconstruct spectra from NUS data are nonlinear, depend on the complexity and nature of the signals, and lack a quantitative or formal theory describing their performance. Seemingly subtle algorithmic differences may lead to significant variability in spectral quality and artifacts. A community-based critical assessment of NUS challenge problems, called the “Nonuniform Sampling Contest” (NUScon), has been initiated with the objective of determining best practices for processing and analyzing NUS experiments. We address this objective by constructing challenges from NMR experiments into which we inject synthetic signals, and we process these challenges using workflows submitted by the community. In the initial rounds of NUScon our aim is to establish objective criteria for evaluating the quality of spectral reconstructions. We present here a software package for performing the quantitative analyses, and we present the results from the first two rounds of NUScon. We discuss the challenges that remain and present a roadmap for continued community-driven development, with the ultimate aim of providing best practices in this rapidly evolving field. The NUScon software package and all data from evaluating the challenge problems are hosted on the NMRbox platform.
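
As a concrete illustration of the reconstruction problem that NUScon evaluates, the sketch below simulates a one-dimensional FID, keeps only a quarter of its points according to a random NUS schedule, and reconstructs the spectrum with iterative soft thresholding (IST), one common non-Fourier method. The signal parameters, sampling density, iteration count and threshold schedule are arbitrary choices for illustration and are unrelated to the actual NUScon challenge data, workflows or scoring.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D FID: two decaying complex sinusoids plus noise (arbitrary values).
N = 512
t = np.arange(N)
fid = (np.exp(2j * np.pi * 0.12 * t) * np.exp(-t / 150)
       + 0.4 * np.exp(2j * np.pi * 0.31 * t) * np.exp(-t / 100)
       + 0.02 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

# Nonuniform sampling schedule: keep 25% of the time-domain points at random.
mask = np.zeros(N, dtype=bool)
mask[rng.choice(N, size=N // 4, replace=False)] = True
measured = fid[mask]

# Iterative soft thresholding (IST): repeatedly extract the strongest spectral
# components of the residual and subtract them on the sampled points only.
spectrum = np.zeros(N, dtype=complex)   # accumulated spectral estimate
residual = np.zeros(N, dtype=complex)   # zero-filled time-domain residual
residual[mask] = measured
for _ in range(200):
    s = np.fft.fft(residual)
    threshold = 0.9 * np.abs(s).max()
    shrink = np.maximum(1.0 - threshold / np.maximum(np.abs(s), 1e-12), 0.0)
    extracted = s * shrink              # soft-thresholded spectral components
    spectrum += extracted
    residual[mask] -= np.fft.ifft(extracted)[mask]

# Compare against the spectrum of the fully sampled FID.
reference = np.fft.fft(fid)
corr = np.corrcoef(np.abs(spectrum), np.abs(reference))[0, 1]
print(f"magnitude correlation with fully sampled spectrum: {corr:.3f}")
```

Because the outcome depends on the sampling schedule, the noise level and the thresholding choices, two superficially similar workflows can produce visibly different artifacts; quantifying exactly this kind of variability is what NUScon sets out to do.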


2021 ◽  
Author(s):  
Susan G. Magliaro ◽  
Jeremy V. Ernst

This inventory of statewide and regional STEM education networks in the United States is a resource for P-12 schools, higher education, business and industry, and other community stakeholders, intended to advance collaboration, engagement, and stakeholder support and to further the understanding of best practices for sustaining these partnerships.


2022 ◽  
Vol 106 (1) ◽  
pp. 61-80
Author(s):  
Alyssa Peterkin ◽  
Jordana Laks ◽  
Zoe M. Weinstein

Target ◽  
2021 ◽  
Author(s):  
Samuel Läubli ◽  
Patrick Simianer ◽  
Joern Wuebker ◽  
Geza Kovacs ◽  
Rico Sennrich ◽  
...  

Abstract Widely used computer-aided translation (CAT) tools divide documents into segments, such as sentences, and arrange them side-by-side in a spreadsheet-like view. We present the first controlled evaluation of these design choices on translator performance, measuring speed and accuracy in three experimental text-processing tasks. We find significant evidence that sentence-by-sentence presentation enables faster text reproduction and within-sentence error identification compared to unsegmented text, and that a top-and-bottom arrangement of source and target sentences enables faster text reproduction compared to a side-by-side arrangement. For revision, on the other hand, we find that presenting unsegmented text results in the highest accuracy and time efficiency. Our findings have direct implications for best practices in designing CAT tools.


2021 ◽  
Author(s):  
Ying Wang ◽  
Shinichi Namba ◽  
Esteban Lopera ◽  
Sini Kerminen ◽  
Kristin Tsuo ◽  
...  

Summary With the increasing availability of biobank-scale datasets that incorporate both genomic data and electronic health records, many associations between genetic variants and phenotypes of interest have been discovered. Polygenic risk scores (PRS), which are being widely explored in precision medicine, use the results of association studies to predict the genetic component of disease risk by accumulating risk alleles weighted by their effect sizes. However, few studies have thoroughly investigated best practices for PRS in global populations across different diseases. In this study, we utilize data from the Global Biobank Meta-analysis Initiative (GBMI), which consists of individuals of diverse ancestries across continents, to explore methodological considerations and PRS prediction performance in 9 different biobanks for 14 disease endpoints. Specifically, we constructed PRS using a heuristic method (pruning and thresholding, P+T) and a Bayesian method (PRS-CS). We found that the genetic architecture, such as SNP-based heritability and polygenicity, varied greatly among endpoints. For both PRS construction methods, using a European-ancestry LD reference panel resulted in comparable or higher prediction accuracy than several non-European-based panels; this is largely attributable to European-descent populations still comprising the majority of GBMI participants. PRS-CS overall outperformed the classic P+T method, especially for endpoints with higher SNP-based heritability. For example, substantial improvements were observed in East Asian ancestry (EAS) samples using PRS-CS compared with P+T for heart failure (HF) and chronic obstructive pulmonary disease (COPD). Notably, prediction accuracy is heterogeneous across endpoints, biobanks, and ancestries, especially for asthma, which has known variation in disease prevalence across global populations. Overall, we provide lessons for PRS construction, evaluation, and interpretation using the GBMI and highlight the importance of best practices for PRS in the biobank-scale genomics era.
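
To make the scoring step concrete, the sketch below computes a minimal pruning-and-thresholding (P+T) style polygenic risk score on simulated data: risk-allele dosages are weighted by GWAS effect sizes and summed over variants passing a p-value cut. All inputs are simulated, LD clumping is assumed to have been performed upstream, and the function name and parameter values are illustrative only; nothing here reflects the GBMI data or the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated inputs: GWAS summary statistics and genotype dosages (0/1/2).
# In practice these come from association studies and imputed genotypes.
n_snps, n_people = 1_000, 200
betas = rng.normal(0, 0.05, n_snps)            # per-allele effect sizes
pvals = rng.uniform(0, 1, n_snps)              # association p-values
dosages = rng.integers(0, 3, (n_people, n_snps)).astype(float)


def prs_p_plus_t(dosages, betas, pvals, p_threshold=0.05):
    """Pruning-and-thresholding style score: sum of risk-allele dosages
    weighted by effect size, restricted to SNPs passing the p-value cut.
    LD pruning/clumping is assumed to have been applied upstream."""
    keep = pvals < p_threshold
    return dosages[:, keep] @ betas[keep]


scores = prs_p_plus_t(dosages, betas, pvals, p_threshold=0.05)
# Standardise so scores are comparable across thresholds and cohorts.
scores = (scores - scores.mean()) / scores.std()
print(scores[:5])
```

In practice the p-value threshold is tuned in an independent validation cohort, and Bayesian approaches such as PRS-CS replace the hard cut with continuous shrinkage applied to all effect sizes.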

