sampling density
Recently Published Documents


TOTAL DOCUMENTS: 166 (FIVE YEARS: 34)
H-INDEX: 23 (FIVE YEARS: 1)

2022
Author(s): Sohrab Najafian, Erin Koch, Kai-Lun Teh, Jianzhong Jin, Hamed Rahimi-Nasrabadi, ...

The cerebral cortex receives multiple afferents from the thalamus that segregate by stimulus modality, forming cortical maps for each sense. In vision, the primary visual cortex also maps the multiple dimensions of the stimulus, in patterns that vary across species for reasons unknown. Here we introduce a general theory of cortical map formation, which proposes that map diversity emerges from variations in the sampling density of sensory space across species. In the theory, increasing afferent sampling density enlarges the cortical domains representing the same visual point, allowing afferents and cortical targets to segregate by additional stimulus dimensions. We illustrate the theory with a computational model that accurately replicates the maps of different species through afferent segregation followed by thalamocortical convergence pruned by visual experience. Because thalamocortical pathways use similar mechanisms for axon sorting and pruning, the theory may extend to other sensory areas of the mammalian brain.
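
A toy back-of-envelope sketch of the scaling argument (not the authors' model): more afferents sampling the same visual point enlarge the cortical domain per point, leaving room to segregate inputs by extra stimulus dimensions. All numbers, and the helper dimensions_mapped itself, are illustrative assumptions.

```python
# Toy sketch: cortical territory per visual point grows with afferent
# sampling density; once it exceeds a (hypothetical) per-dimension area,
# another stimulus dimension can be mapped alongside retinotopy.
def dimensions_mapped(afferents_per_point,
                      cortex_per_afferent_mm2=0.002,
                      area_per_extra_dimension_mm2=0.05):
    domain = afferents_per_point * cortex_per_afferent_mm2  # mm^2 per visual point
    return 1 + int(domain // area_per_extra_dimension_mm2)  # retinotopy + extras

for density in (5, 25, 100):  # increasing afferent sampling density
    print(f"{density:3d} afferents/point -> {dimensions_mapped(density)} mapped dimensions")
```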


2021
Author(s): Malte Willmes, Clement Bataille, Hannah James, Ian Moffat, Linda McMorrow, ...

Strontium isotope ratios (87Sr/86Sr) of archaeological samples (teeth and bones) can be used to track mobility and migration across geologically distinct landscapes. However, traditional interpolation algorithms and classification approaches used to generate Sr isoscapes are often limited in predicting multiscale 87Sr/86Sr patterning. Here we investigate the suitability of plant samples and soil leachates from the IRHUM database (www.irhumdatabase.com) to create a bioavailable 87Sr/86Sr map using a novel geostatistical framework. First, we generated an 87Sr/86Sr map by classifying 87Sr/86Sr values into five geologically representative isotope groups using cluster analysis. The isotope groups were then used as a covariate in kriging to integrate prior geological knowledge of Sr cycling with the information contained in the bioavailable dataset and to enhance 87Sr/86Sr predictions. Our approach couples the strengths of classification and geostatistical methods to generate more accurate 87Sr/86Sr predictions (root mean squared error = 0.0029) with an estimate of spatial uncertainty based on lithology and sample density. This bioavailable Sr isoscape is applicable for provenance studies in France, and the method is transferable to other areas with high sampling density. While our method is a step forward in generating accurate 87Sr/86Sr isoscapes, the remaining uncertainty also demonstrates that fine-scale modelling of 87Sr/86Sr variability is challenging and requires more than geological maps to predict 87Sr/86Sr variations accurately across the landscape. Future efforts should focus on increasing sampling density and on developing predictive models to further quantify and predict the processes that lead to 87Sr/86Sr variability.
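
A minimal regression-kriging sketch in the spirit of this classification-plus-kriging workflow, using scikit-learn's KMeans for the isotope groups and a hand-rolled simple kriging of residuals. The synthetic coordinates, Sr values, and variogram parameters are illustrative assumptions, not values from the study or the IRHUM database.

```python
# Cluster 87Sr/86Sr values into isotope groups, take group means as a trend,
# and apply simple kriging to the residuals (a stand-in for the paper's
# cluster-as-covariate kriging).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(200, 2))                  # sample coordinates (km)
sr = 0.708 + 0.002 * np.sin(xy[:, 0] / 15) + rng.normal(0, 2e-4, 200)

# 1. Classify values into five "isotope groups" via cluster analysis.
groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(sr[:, None])
trend = np.array([sr[groups == g].mean() for g in groups])  # per-sample group mean
resid = sr - trend

# 2. Simple kriging of residuals with an exponential covariance model.
sill, corr_range = resid.var(), 20.0                     # illustrative parameters
def cov(d):
    return sill * np.exp(-d / corr_range)

dmat = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
C = cov(dmat) + 1e-10 * np.eye(len(xy))                  # jitter for stability

x0 = np.array([50.0, 50.0])                              # prediction location
c0 = cov(np.linalg.norm(xy - x0, axis=1))
w = np.linalg.solve(C, c0)                               # kriging weights
nearest = np.argmin(np.linalg.norm(xy - x0, axis=1))     # crude trend assignment
pred = trend[nearest] + w @ resid
kvar = sill - w @ c0                                     # kriging variance
print(f"predicted 87Sr/86Sr at {x0}: {pred:.5f} (+/- {np.sqrt(max(kvar, 0)):.5f})")
```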


Agronomy, 2021, Vol 11 (12), pp. 2430
Author(s): Dorijan Radočaj, Irena Jug, Vesna Vukadinović, Mladen Jurišić, Mateo Gašparović

Knowledge of the relationship between soil sampling density, spatial autocorrelation, and interpolation accuracy allows more time- and cost-efficient spatial analysis. Previous studies produced contradictory observations regarding this relationship, and this study aims to determine under which conditions the interpolation accuracy of chemical soil properties is affected. The study area covered 823.4 ha of agricultural land with 160 soil samples containing phosphorus pentoxide (P2O5) and potassium oxide (K2O) values. The original set was split into eight subsets using a geographically stratified random split method, which were then interpolated using the ordinary kriging (OK) and inverse distance weighted (IDW) methods. OK and IDW achieved similar interpolation accuracy regardless of the soil chemical property and sampling density, contrary to the majority of previous studies, which observed the superiority of kriging, a geostatistical method, over deterministic interpolation methods. Interpolation accuracy depended primarily on soil sampling density, with R2 in the range of 56.5–83.4% in the accuracy assessment. While this study enables farmers to perform efficient soil sampling according to the desired level of detail, it could also prove useful to professions dependent on field sampling, such as biology, geology, and mining.
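
The kind of density experiment described can be sketched with plain numpy: leave-one-out R2 for an inverse-distance interpolator at increasing subsample sizes. The synthetic field below stands in for the P2O5/K2O measurements; OK would additionally need a kriging library such as pykrige.

```python
# Leave-one-out accuracy of IDW interpolation as sampling density grows.
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0, 1000, size=(160, 2))                   # field coordinates (m)
z = 20 + 10 * np.sin(xy[:, 0] / 200) + rng.normal(0, 1, 160)  # synthetic soil property

def idw(xy_s, z_s, x0, power=2.0):
    """Inverse-distance-weighted estimate at x0 from samples (xy_s, z_s)."""
    d = np.linalg.norm(xy_s - x0, axis=1)
    if np.any(d < 1e-9):                                   # exact hit on a sample
        return z_s[np.argmin(d)]
    w = d ** -power
    return w @ z_s / w.sum()

for n in (20, 40, 80, 160):                                # increasing density
    idx = rng.choice(160, n, replace=False)
    sx, sz = xy[idx], z[idx]
    pred = np.array([idw(np.delete(sx, i, 0), np.delete(sz, i), sx[i])
                     for i in range(n)])                   # leave-one-out
    ss_res = ((sz - pred) ** 2).sum()
    ss_tot = ((sz - sz.mean()) ** 2).sum()
    print(f"n={n:3d}  LOO R^2 = {1 - ss_res / ss_tot:.3f}")
```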


2021
Author(s): Arong Luo, Chi Zhang, Qing-Song Zhou, Simon Y.W. Ho, Chao-Dong Zhu

Evolutionary timescales can be estimated from a combination of genetic data and fossil evidence using the molecular clock. Bayesian phylogenetic methods such as tip dating and total-evidence dating provide a powerful framework for inferring evolutionary timescales, but the most widely used priors for tree topologies and node times often assume that present-day taxa have been sampled randomly or exhaustively. In practice, taxon sampling is often carried out so as to include representatives of major lineages, such as orders or families. We examined the impacts of these diversified sampling schemes on Bayesian molecular dating under the unresolved fossilized birth-death (FBD) process, in which fossil taxa are topologically constrained but their exact placements are not inferred. We used synthetic data generated by simulating nucleotide sequence evolution, fossil occurrences, and diversified taxon sampling. Our analyses show that increasing sampling density does not substantially improve divergence-time estimates under benign conditions. However, when the tree topologies were fixed to those used for simulation, or when evolutionary rates varied among lineages, the performance of Bayesian tip dating improved with sampling density. By exploring three situations of model mismatch, we found that including all relevant fossils, without pruning off those inappropriate for the FBD process, can lead to underestimation of divergence times. Our reanalysis of a eutherian mammal data set confirms some of the findings from our simulation study and reveals the complexity of diversified taxon sampling in phylogenomic data sets. In highlighting the interplay of taxon-sampling density and other factors, the results of our study have useful implications for Bayesian molecular dating in the era of phylogenomics.
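
A toy contrast between random and diversified taxon sampling, assuming the dendropy library (treesim.birth_death_tree, phylogenetic_distance_matrix, extract_tree_with_taxa). The greedy farthest-point rule on patristic distance is a stand-in for "one representative per major lineage"; rates and tree sizes are illustrative only.

```python
# Simulate a birth-death tree, then subsample its tips two ways.
import random
from dendropy.simulate import treesim

rng = random.Random(1)
tree = treesim.birth_death_tree(birth_rate=1.0, death_rate=0.5,
                                num_extant_tips=64, rng=rng)
taxa = [leaf.taxon for leaf in tree.leaf_node_iter()]
k = 8

# Random scheme: k present-day tips drawn uniformly.
random_taxa = rng.sample(taxa, k)

# Diversified scheme: greedily keep tips maximally spread over the tree
# (farthest-point selection on patristic distance).
pdm = tree.phylogenetic_distance_matrix()
chosen = [taxa[0]]
while len(chosen) < k:
    best = max((t for t in taxa if t not in chosen),
               key=lambda t: min(pdm.patristic_distance(t, c) for c in chosen))
    chosen.append(best)

for label, keep in (("random", random_taxa), ("diversified", chosen)):
    pruned = tree.extract_tree_with_taxa(taxa=keep)
    print(label, pruned.as_string(schema="newick").strip())
```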


2021
Author(s): Ira L. Parsons, Melanie R. Boudreau, Brandi B. Karisch, Amanda E. Stone, Durham Norman, ...

Abstract
Context: Obtaining accurate maps of landscape features often requires intensive spatial sampling and interpolation. The data required to generate reliable interpolated maps vary with spatial scale and landscape heterogeneity. However, there has been no rigorous examination of sampling density relative to landscape characteristics and interpolation methods.
Objectives: Our objective was to characterize the three-way relationship among sampling density, interpolation method, and landscape heterogeneity on interpolation accuracy in simulated and in situ landscapes.
Methods: We simulated landscapes of variable heterogeneity and sampled at increasing densities using both systematic and random strategies. We applied each of three local interpolation methods (Inverse Distance Weighting, Universal Kriging, and Nearest Neighbor) to the sampled data and estimated accuracy (R2) between the interpolated surfaces and the original surface. Finally, we applied these analyses to in situ data, using a normalized difference vegetation index raster collected from pasture at various resolutions.
Results: All interpolation methods and sampling strategies resulted in similar accuracy; however, low heterogeneity yielded the highest R2 values at high sampling densities. In situ results showed that Universal Kriging performed best with systematic sampling, and Inverse Distance Weighting with random sampling. Heterogeneity decreased with coarser resolution, which increased the accuracy of all interpolation methods. Landscape heterogeneity had the greatest effect on accuracy.
Conclusions: Heterogeneity of the original landscape is the most significant factor in determining the accuracy of interpolated maps. There is a need for structured tools to aid in determining the sampling design most appropriate for interpolation methods across landscapes of varying heterogeneity.
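
A compact version of such a simulation loop, using scipy: landscape heterogeneity is set by a Gaussian smoothing length, systematic samples are interpolated with nearest-neighbour and linear schemes (scipy's griddata; Universal Kriging would need e.g. pykrige), and accuracy is scored as R2 against the true surface. All sizes and settings are illustrative.

```python
# Heterogeneity x sampling density x interpolation method, scored by R^2.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
n = 100
yy, xx = np.mgrid[0:n, 0:n]

for sigma in (2, 8):                       # small sigma = high heterogeneity
    truth = gaussian_filter(rng.normal(size=(n, n)), sigma)
    for step in (20, 10, 5):               # finer step = higher sampling density
        pts = np.argwhere((yy % step == 0) & (xx % step == 0))
        vals = truth[pts[:, 0], pts[:, 1]]
        for method in ("nearest", "linear"):
            est = griddata(pts, vals, (yy, xx), method=method)
            mask = ~np.isnan(est)          # 'linear' is NaN outside the hull
            res = truth[mask] - est[mask]
            ss_tot = ((truth[mask] - truth[mask].mean()) ** 2).sum()
            print(f"sigma={sigma} step={step:2d} {method:7s} "
                  f"R^2={1 - (res ** 2).sum() / ss_tot:.2f}")
```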


2021
Author(s): Laurette Mhlanga, Eduard Grebe, Alex Welte

Abstract
Population-based surveys which ascertain HIV status are conducted in heavily affected countries, with the estimation of incidence being a primary goal. Numerous methods exist under the umbrella of 'synthetic cohort analysis', by which we mean estimating incidence from the age/time structure of prevalence (given knowledge of mortality). However, not enough attention has been given to how serostatus data are 'smoothed' into a time/age-dependent prevalence so as to optimise the estimation of incidence.
To support this and other related investigations, we developed a comprehensive simulation environment in which we simulate age/time-structured SI-type epidemics and surveys. Scenarios are flexibly defined by demographic rates (fertility, incidence, and mortality, dependent, as appropriate, on age, time, and time since infection) without any reference to underlying causative processes/parameters. Primarily using 1) a simulated epidemiological scenario inspired by what is seen in the hyper-endemic HIV-affected regions, and 2) pairs of cross-sectional surveys, we explored A) options for extracting the age/time structure of prevalence so as to optimise the use of the formal incidence estimation framework of Mahiane et al., and B) aspects of survey design such as the interaction of epidemic details, sample size/sampling density, and inter-survey interval.
Much as in our companion piece, which crucially investigated the use of 'recent infection' (whereas the present analysis hinges fundamentally on the estimation of the prevalence gradient), we propose a 'one size fits most' process for conducting 'synthetic cohort' analyses of large population survey data sets for HIV incidence estimation: fitting a generalised linear model for prevalence, separately for each age/time point where an incidence estimate is desired, using a 'moving window' data inclusion rule. Overall, even in very high incidence settings, sampling density requirements are onerous.
The general default approach we propose for fitting HIV prevalence to data as a function of age and time appears to be broadly stable over various epidemiological stages. Particular scenarios of interest, and the applicable options for survey design and analysis, can readily be investigated more closely using our approach. We note that it is often unrealistic to expect even large household-based surveys to provide meaningful incidence estimates outside of priority groups like young women, where incidence is often particularly high.
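
A minimal sketch of the proposed recipe, assuming the statsmodels library: fit a local logistic GLM for prevalence on a 'moving window' of serostatus data around each target (age, time) point, then turn the fitted gradient into an incidence proxy. The simplified estimator used here, lambda ~ P * (beta_age + beta_time), drops the mortality terms of the full Mahiane et al. framework; all data below are simulated for illustration.

```python
# Moving-window logistic GLM for prevalence -> crude incidence proxy.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000
age = rng.uniform(15, 50, n)
time = rng.uniform(2010, 2020, n)
true_prev = 1 / (1 + np.exp(-(-6 + 0.12 * age + 0.05 * (time - 2010))))
status = (rng.random(n) < true_prev).astype(float)       # serostatus (0/1)

def incidence_at(a0, t0, half_window=2.5):
    """Local logistic GLM around (a0, t0); fitted gradient -> incidence proxy."""
    win = (np.abs(age - a0) < half_window) & (np.abs(time - t0) < half_window)
    X = sm.add_constant(np.column_stack([age[win] - a0, time[win] - t0]))
    fit = sm.GLM(status[win], X, family=sm.families.Binomial()).fit()
    p = fit.predict(np.array([[1.0, 0.0, 0.0]]))[0]      # prevalence at (a0, t0)
    # Under the logit link dP/dx = P(1-P) * beta, so the mortality-free proxy
    # (P_age + P_time) / (1 - P) reduces to P * (beta_age + beta_time).
    return p * (fit.params[1] + fit.params[2])

print(f"incidence proxy at age 25 in 2015: {incidence_at(25, 2015):.4f} per year")
```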


Geophysics, 2021, pp. 1-40
Author(s): Wenhao Xu, Yang Zhong, Bangyu Wu, Jinghuai Gao, Qing Huo Liu

Solving the Helmholtz equation has important applications in various areas, such as acoustics and electromagnetics. Using an iterative solver together with a proper preconditioner is key to solving large 3D Helmholtz equations. The performance of existing Helmholtz preconditioners usually deteriorates when the minimum spatial sampling density is small (approximately four points per wavelength [PPW]). To improve efficiency at small minimum spatial sampling densities, we adopt a new preconditioner. In our scheme, the preconditioning matrix is constructed from an adaptive complex frequency that varies with the minimum spatial sampling density in terms of the number of PPW. Furthermore, a multigrid V-cycle with a GMRES smoother is adopted to solve the corresponding preconditioning linear system effectively. The adaptive complex frequency together with the GMRES smoother works stably and efficiently at different minimum spatial sampling densities. Numerical results for three typical 3D models show that our scheme is more efficient than the multilevel GMRES method and the shifted Laplacian with a multigrid full V-cycle and a symmetric Gauss-Seidel smoother for preconditioning the 3D Helmholtz linear system, especially when the minimum spatial sampling density is large (approximately 120 PPW) or small (approximately 4 PPW).
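
A 1D toy of the underlying idea, using scipy: precondition a Helmholtz system with a complex-shifted operator whose shift adapts to the sampling density in PPW, then solve with GMRES. The shift rule beta(ppw) and the direct factorization of the preconditioner are illustrative assumptions; the paper instead uses an adaptive complex frequency with a multigrid V-cycle and a GMRES smoother.

```python
# Complex-shifted preconditioner for a 1D Helmholtz system, solved with GMRES.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

ppw = 6                                      # points per wavelength (try 4..120)
wavelength = 1.0
h = wavelength / ppw                         # grid spacing
n = 400
k = 2 * np.pi / wavelength                   # wavenumber

lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = (lap + k**2 * sp.eye(n)).tocsc()         # 1D Helmholtz operator

beta = min(0.5, 2.0 / ppw)                   # toy density-adaptive complex shift
M = (lap + (1 + 1j * beta) * k**2 * sp.eye(n)).tocsc()
M_lu = spla.splu(M)                          # apply M^-1 via direct factorization
prec = spla.LinearOperator(A.shape, matvec=M_lu.solve, dtype=complex)

b = np.zeros(n, dtype=complex)
b[n // 2] = 1.0 / h                          # point source mid-domain

iters = []
x, info = spla.gmres(A, b, M=prec, rtol=1e-6,
                     callback=lambda r: iters.append(r))
print(f"ppw={ppw}, beta={beta:.3f}: info={info}, {len(iters)} GMRES iterations")
```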

