A new model of third generation artificial turf degradation, maintenance interventions and benefits

Author(s):  
Paul Richard Fleming ◽  
Charlie Watts ◽  
Stephanie Forrester

High-performing and safe outdoor third generation (3G) artificial turf (AT) fields demand high-quality initial design and construction coupled with a comprehensive maintenance aftercare regime. However, in many cases maintenance of AT fields suffers from either a low-cost, inexpert approach or a one-size-fits-all generic approach based on general guidance, with little to no evidence of effectiveness. Little previous research has addressed fundamental questions regarding how 3G AT systems degrade or provided evidence of the effectiveness of maintenance interventions. The maintenance techniques currently utilised can be grouped into four categories (grooming, cleaning, decompaction and infill top-ups). The maintenance tools and processes for each category have been developed empirically through experience, mainly in response to qualitative observations, with little quantitative evidence. This paper reports on a unique body of quantitative evidence of the specific effects of maintenance interventions on 3G AT surfaces, collected over the past several years in collaboration with a major UK sports surface maintenance contractor. In addition, the data contribute new and robust evidence of the rate at which 3G surfaces can ‘lose’ quantities of performance infill, corroborated by the rate at which fields were topped up to maintain appropriate infill depths. A new quantified pitch degradation and maintenance benefits model is presented, explaining 3G AT system degradation factors and mechanisms, their links to changes in system performance, and the magnitudes of change effected by specific maintenance techniques. The new model is of direct benefit to both researchers and practitioners, informing future best practice for assessing and maintaining the safety and performance of 3G AT.

Author(s):  
Kristen Hurtado ◽  
Anusree Saseendran ◽  
John Savicky ◽  
Kenneth Sullivan

Construction project managers (PMs) are critical to the execution of successful construction projects, orchestrating the intricacies of dynamic and complex projects. A large state university was challenged with attracting and retaining PMs in its construction department during a period of rapid university growth and departmental re-organization. The university tried current models for selecting construction PM firms, largely based on commodities-based procurement and/or low-bid structures, and was unsuccessful. A new model was developed that considered and analyzed both the capabilities and the qualifications of individual PMs. The abilities of individuals to identify their unique capabilities, be accountable for their performance, and operate in a transparent environment are critical concepts within this model. The new model also created an environment of organizational transparency, requiring PMs to measure their own performance and the performance of their projects. The model resulted in an overwhelming number of high-quality PMs seeking to join the university. The university initially sought a specific skill set for new PMs but, based on the results of this model, revised its criteria and selection processes for hiring future PMs. Analysis of PM capabilities, qualifications, and performance is shared, along with lessons learned to refine the model. The model can also be used to identify high-performing individuals in other positions or disciplines.


2021 ◽  
Author(s):  
Reuben M. Buckley ◽  
Alex C. Harris ◽  
Guo-Dong Wang ◽  
D. Thad Whitaker ◽  
Ya-Ping Zhang ◽  
...  

Although DNA array-based approaches for genome-wide association studies (GWAS) permit the collection of thousands of low-cost genotypes, this often comes at the expense of resolution and completeness, as SNP chip technologies are ultimately limited by the SNPs chosen during array development. An alternative low-cost approach is low-pass whole genome sequencing (WGS) followed by imputation. Rather than relying on high levels of genotype confidence at a set of select loci, low-pass WGS and imputation rely on the combined information from millions of randomly sampled low-confidence genotypes. To investigate low-pass WGS and imputation in the dog, we assessed accuracy and performance by downsampling 97 high-coverage (>15×) WGS datasets from 51 different breeds to approximately 1× coverage, simulating low-pass WGS. Using a reference panel of 676 dogs from 91 breeds, genotypes were imputed from the downsampled data and compared to a truth set of genotypes generated from high-coverage WGS. Using our truth set, we optimized a variant quality filtering strategy that retained approximately 80% of 14 million imputed sites and lowered the imputation error rate from 3.0% to 1.5%. Seven million sites remained with a MAF > 5% and an average imputation quality score of 0.95. Finally, we simulated the impact of imputation errors on outcomes for case–control GWAS, where small effect sizes were most affected and medium-to-large effect sizes were only minimally affected. These analyses provide best-practice guidelines for study design and data post-processing of low-pass WGS-imputed genotypes in dogs.
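As a rough illustration of the truth-set comparison and post-imputation filtering workflow described above, the following Python sketch computes a genotype discordance rate before and after filtering sites on an imputation quality score and minor allele frequency. It uses entirely synthetic data, and the array shapes, score names and thresholds are illustrative assumptions rather than the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): compare imputed genotypes against
# a high-coverage "truth" set and apply a simple per-site filter on an
# imputation quality score and minor allele frequency (MAF).
import numpy as np

def site_error_rate(truth, imputed):
    """Fraction of discordant genotype calls per site (allele dosages 0/1/2)."""
    return (truth != imputed).mean(axis=0)

def apply_filters(truth, imputed, quality, maf, min_quality=0.7, min_maf=0.05):
    """Return the site mask plus mean error rates before and after filtering."""
    keep = (quality >= min_quality) & (maf >= min_maf)
    err_all = site_error_rate(truth, imputed).mean()
    err_kept = site_error_rate(truth[:, keep], imputed[:, keep]).mean()
    return keep, err_all, err_kept

# Toy example: 100 samples x 5,000 sites of synthetic dosages with ~3% errors.
rng = np.random.default_rng(1)
truth = rng.integers(0, 3, size=(100, 5000))
imputed = truth.copy()
flip = rng.random(truth.shape) < 0.03
imputed[flip] = rng.integers(0, 3, size=flip.sum())
quality = rng.uniform(0.5, 1.0, 5000)   # stand-in for a per-site imputation quality score
maf = rng.uniform(0.0, 0.5, 5000)       # stand-in for per-site minor allele frequency

keep, err_all, err_kept = apply_filters(truth, imputed, quality, maf)
print(f"sites retained: {keep.mean():.0%}, error before: {err_all:.3f}, after: {err_kept:.3f}")
```

In a real analysis, unlike in this synthetic example, errors concentrate at sites with low imputation quality scores, which is what makes this kind of filtering effective at reducing the overall error rate.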


Author(s):  
José Capmany ◽  
Daniel Pérez

Programmable Integrated Photonics (PIP) is a new paradigm that aims at designing common integrated optical hardware configurations which, by suitable programming, can implement a variety of functionalities that, in turn, can be exploited as basic operations in many application fields. By means of external control signals, programmability enables both chip reconfiguration for multifunction operation and chip stabilization against non-ideal operation caused by fluctuations in environmental conditions and by fabrication errors. Programming also allows activating parts of the chip that are not essential for the implementation of a given functionality but can help reduce noise levels by diverting undesired reflections. After some years in which the Application Specific Photonic Integrated Circuit (ASPIC) paradigm has completely dominated the field of integrated optics, there is increasing interest in PIP, driven by a surge of emerging applications that call, and will continue to call, for true flexibility and reconfigurability as well as low-cost, compact and low-power-consuming devices. This book aims to provide a comprehensive introduction to this emergent field, covering topics that range from the underlying technologies and basic photonic building blocks to the design alternatives and principles of complex programmable photonic circuits, their limiting factors, techniques for characterization and performance monitoring/control, and their salient applications in both the classical and the quantum information fields. The book focuses mainly on the distinctive features of programmable photonics as compared to more traditional ASPIC approaches.
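Although the book covers these building blocks in depth, a minimal numerical sketch can convey the core idea of programmability: the tunable 2×2 Mach-Zehnder interferometer, a common programmable unit cell, whose power splitting is reconfigured purely by setting phase-shifter values. The phase conventions and port ordering below are assumptions for illustration, not taken from the book.

```python
# Minimal sketch: a balanced Mach-Zehnder interferometer (MZI) built from two
# ideal 50:50 couplers around a pair of phase shifters. Programming the phases
# switches the device between cross, bar and intermediate splitting states.
import numpy as np

C = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])   # ideal 50:50 directional coupler

def mzi(theta_upper, theta_lower=0.0):
    """Transfer matrix of a balanced MZI with internal phase shifters."""
    P = np.diag([np.exp(1j * theta_upper), np.exp(1j * theta_lower)])
    return C @ P @ C

for name, theta in {"cross": 0.0, "bar": np.pi, "50/50": np.pi / 2}.items():
    T = mzi(theta)
    power = np.abs(T[:, 0]) ** 2   # power reaching each output from input port 0
    print(f"{name:>6}: output powers = {power.round(3)}")
```

Meshes of such cells, together with the control and calibration loops that set the phases, are essentially what the PIP paradigm builds on, in contrast to a fixed-function ASPIC.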


1987 ◽  
Vol 14 (3) ◽  
pp. 134-140 ◽  
Author(s):  
K.A. Clarke

Practical classes in neurophysiology reinforce and complement the theoretical background in a number of ways, including demonstration of concepts, practice in planning and performing experiments, and the production and maintenance of viable neural preparations. The balance of teaching objectives will depend upon the particular group of students involved. A technique is described which allows real compound action potentials from one of the most basic introductory neurophysiology experiments, the frog sciatic nerve, to be embedded into interactive programs for student use. These retain all the elements of the “real experiment” in terms of appearance, presentation, experimental management and measurement by the student. Laboratory reports by the students show that the experiments are carefully and enthusiastically performed and the material is well absorbed. Three groups of students derive the most benefit from their use. First, students whose future careers will not involve animal experiments do not spend time developing dissecting skills they will not use, but more time fulfilling the other teaching objectives. Second, relatively inexperienced students, who struggle to produce viable neural material and to master complicated laboratory equipment and are often left with little time or motivation to take accurate readings or ponder neurophysiological concepts. Third, students in institutions where neurophysiology is taught with difficulty because of the high cost of equipment and lack of specific expertise, but who may well have access to a low-cost general-purpose microcomputer system.


2021 ◽  
Vol 11 (6) ◽  
pp. 2535
Author(s):  
Bruno E. Silva ◽  
Ramiro S. Barbosa

In this article, we designed and implemented neural controllers to control a nonlinear and unstable magnetic levitation system composed of an electromagnet and a magnetic disk. The objective was to evaluate the implementation and performance of neural control algorithms on low-cost hardware. In the first phase, we designed two classical controllers to provide the training data for the neural controllers. Next, we identified several neural models of the levitation system using Nonlinear AutoRegressive eXogenous (NARX)-type neural networks, which were used to emulate the forward dynamics of the system. Finally, we designed and implemented three neural control structures for the control of the levitation system: the inverse controller, the internal model controller, and the model reference controller. The neural controllers were tested on a low-cost Arduino control platform through MATLAB/Simulink. The experimental results demonstrated the good performance of the neural controllers.
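As a rough illustration of the NARX-style forward-dynamics identification step mentioned above, the sketch below fits a small feed-forward network to lagged input/output data. The toy plant, lag orders, signal names and network size are assumptions for illustration; this is not the authors' MATLAB/Simulink implementation.

```python
# Minimal sketch: series-parallel NARX identification of a forward model,
# i.e., predict y[k] from past inputs u[k-1..k-nu] and past outputs y[k-1..k-ny].
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_regressors(u, y, nu=2, ny=2):
    """Stack lagged inputs and outputs into a regressor matrix for one-step prediction."""
    lag = max(nu, ny)
    X, target = [], []
    for k in range(lag, len(y)):
        X.append(np.concatenate([u[k - nu:k], y[k - ny:k]]))
        target.append(y[k])
    return np.array(X), np.array(target)

# Synthetic data standing in for coil voltage u and disk position y.
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 2000)
y = np.zeros(2000)
for k in range(2, 2000):                       # toy nonlinear plant, illustrative only
    y[k] = 0.6 * y[k - 1] - 0.1 * y[k - 2] + 0.4 * np.tanh(u[k - 1])

X, target = build_regressors(u, y)
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X, target)
print("one-step-ahead R^2:", round(net.score(X, target), 4))
```

Once a forward model of this kind is identified, it can serve as the plant emulator inside internal-model or model-reference control structures such as those evaluated in the article.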


Author(s):  
Antonia Perju ◽  
Nongnoot Wongkaew

Lateral flow assays (LFAs) are the best-performing and best-known point-of-care tests worldwide. Over the last decade, they have attracted increasing interest from researchers aiming to improve their analytical performance while maintaining their robust assay platform. Commercially, visual and optical detection strategies dominate, but it is research on integrating electrochemical (EC) approaches in particular that has the potential to significantly improve LFA performance, which is needed in order to detect analytes reliably at lower concentrations than is currently possible. In fact, EC-LFAs offer advantages in terms of quantitative determination, low cost, high sensitivity, and even simple, label-free strategies. Here, the various configurations of EC-LFAs published to date are summarized and critically evaluated. In short, most of them rely on conventional transducers, e.g., screen-printed electrodes, to ensure reliability of the assay, and additional advances are afforded by the beneficial features of nanomaterials. It is predicted that these will be further implemented in EC-LFAs as high-performance transducers. Considering the low cost of point-of-care devices, it becomes even more important to also identify strategies that efficiently integrate nanomaterials into EC-LFAs in a high-throughput manner while maintaining their favorable analytical performance.


Nanomaterials ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 28
Author(s):  
Anastasios I. Tsiotsias ◽  
Nikolaos D. Charisiou ◽  
Ioannis V. Yentekakis ◽  
Maria A. Goula

CO2 methanation has recently emerged as a process that targets the reduction of anthropogenic CO2 emissions via the conversion of CO2 captured from point and mobile sources, together with H2 produced from renewables, into CH4. Ni, among the transition metals, as well as Ru and Rh, among the noble metals, are known to be among the most active methanation catalysts, with Ni being favoured due to its low cost and high natural abundance. However, insufficient low-temperature activity, low dispersion and reducibility, as well as nanoparticle sintering, are some of the main drawbacks when using Ni-based catalysts. Such problems can be partly overcome via the introduction of a second transition metal (e.g., Fe, Co) or a noble metal (e.g., Ru, Rh, Pt, Pd and Re) into Ni-based catalysts. Through Ni-M alloy formation, or the intricate synergy between two adjacent metallic phases, new high-performing and low-cost methanation catalysts can be obtained. This review summarizes and critically discusses recent progress in the development of bimetallic Ni-M (M = Fe, Co, Cu, Ru, Rh, Pt, Pd, Re) catalysts for the CO2 methanation reaction.
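For reference, the overall CO2 methanation (Sabatier) reaction discussed in the review is the strongly exothermic hydrogenation below; the standard reaction enthalpy quoted is the commonly cited literature value, not a figure taken from this abstract:

\[
\mathrm{CO_2 + 4\,H_2 \;\rightarrow\; CH_4 + 2\,H_2O}, \qquad \Delta H^{\circ}_{298\,\mathrm{K}} \approx -165\ \mathrm{kJ\,mol^{-1}}
\]

The exothermicity is why low-temperature activity matters: the equilibrium CH4 yield falls as temperature rises, yet the kinetics over Ni are sluggish at low temperature, which motivates the bimetallic promotion strategies summarized above.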


Polymers ◽  
2021 ◽  
Vol 13 (5) ◽  
pp. 785
Author(s):  
Chow Shing Shin ◽  
Yu Chia Chang

Lattice structures are superior to stochastic foams in mechanical properties and are finding increasing applications. Their properties can be tailored over a wide range by adjusting the design and dimensions of the unit cell, changing the constituent materials, and forming hierarchical structures. To achieve more levels of hierarchy, the dimensions of the fundamental lattice must be small enough. Although lattice sizes of several microns can be fabricated using the two-photon polymerization technique, sophisticated and costly equipment is required. To balance cost and performance, a low-cost, high-resolution micro-stereolithography system based on a commercial digital light processing (DLP) projector has been developed in this work. Unit cell lengths as small as 100 μm have been successfully fabricated. Decreasing the unit cell size from 150 to 100 μm increased the compressive stiffness by 26%. Different pretreatments to facilitate the electroless plating of nickel on the lattice structure have been attempted. A pretreatment of dip coating in a graphene suspension was the most successful, increasing the strength and stiffness by 5.3 and 3.6 times, respectively. Even a very light and incomplete nickel plating in the interior increased the structural stiffness and strength by more than twofold.

