LEAVEN - Lightweight Surface and Volume Mesh Sampling Application for Particle-based Simulations

Author(s):  
Alexander Sommer ◽  
Ulrich Schwanecke

We present an easy-to-use and lightweight surface and volume mesh sampling standalone application tailored to the needs of particle-based simulation. We describe the surface and volume sampling algorithms used in LEAVEN in a beginner-friendly fashion. Furthermore, we describe a novel method of generating random volume samples that satisfy blue noise criteria by modifying a surface sampling algorithm. We aim to lower one entry barrier for starting with particle-based simulations while still posing a benefit to advanced users. The goal is to provide a useful tool to the community and lower the need for heavyweight third-party applications, especially for newcomers.
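The blue noise criterion the abstract mentions can be illustrated with a minimal dart-throwing (Poisson-disk) sampler: candidates are accepted only if they keep a minimum distance to all accepted samples. This is a generic sketch of the idea, not LEAVEN's actual algorithm, and all names in it are illustrative:

```python
import math
import random

def dart_throw_blue_noise(width, height, r, max_attempts=5000, seed=0):
    """Naive dart throwing: accept a random candidate point only if it is
    at least r away from every previously accepted sample, which yields a
    blue-noise-like (Poisson-disk) distribution inside a rectangle."""
    rng = random.Random(seed)
    samples = []
    for _ in range(max_attempts):
        p = (rng.uniform(0, width), rng.uniform(0, height))
        if all(math.dist(p, q) >= r for q in samples):
            samples.append(p)
    return samples
```

Rejection sampling like this gets slow as the domain fills up; practical samplers use spatial grids (e.g., Bridson's algorithm) to keep the distance checks cheap.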

2000 ◽  
Vol 13 ◽  
pp. 155-188 ◽  
Author(s):  
J. Cheng ◽  
M. J. Druzdzel

Stochastic sampling algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have been observed to perform poorly in evidential reasoning with extremely unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm, AIS-BN, that shows promising convergence rates even under extreme conditions and seems to outperform the existing sampling algorithms consistently. Three sources of this performance improvement are (1) two heuristics for initialization of the importance function that are based on the theoretical properties of importance sampling in finite-dimensional integrals and the structural advantages of Bayesian networks, (2) a smooth learning method for the importance function, and (3) a dynamic weighting function for combining samples from different stages of the algorithm. We tested the performance of the AIS-BN algorithm along with two state-of-the-art general-purpose sampling algorithms: likelihood weighting (Fung & Chang, 1989; Shachter & Peot, 1989) and self-importance sampling (Shachter & Peot, 1989). Our tests used three large real Bayesian network models available to the scientific community: the CPCS network (Pradhan et al., 1994), the PathFinder network (Heckerman, Horvitz, & Nathwani, 1990), and the ANDES network (Conati, Gertner, VanLehn, & Druzdzel, 1997), with evidence as unlikely as 10^-41. The AIS-BN algorithm always performed better than the other two algorithms, and in the majority of the test cases it achieved orders-of-magnitude improvements in the precision of the results. The improvement in speed at a given desired precision is even more dramatic, although we are unable to report numerical results here, as the other algorithms almost never achieved the precision reached even by the first few iterations of the AIS-BN algorithm.
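The likelihood-weighting baseline the authors compare against can be sketched on a toy two-node network. This is a generic illustration of importance sampling with evidence, not the AIS-BN algorithm itself; the network and its probabilities are invented for the example:

```python
import random

def likelihood_weighting(n_samples, seed=0):
    """Likelihood weighting on a toy network A -> B with evidence B = True.
    Evidence nodes are never sampled; instead each sample is weighted by the
    likelihood of the evidence given its sampled parents."""
    rng = random.Random(seed)
    p_a = 0.3                                  # P(A = True)
    p_b_given_a = {True: 0.9, False: 0.1}      # P(B = True | A)
    num = den = 0.0
    for _ in range(n_samples):
        a = rng.random() < p_a                 # sample A from its prior
        w = p_b_given_a[a]                     # weight = P(evidence | parents)
        num += w * a
        den += w
    return num / den                           # estimate of P(A = True | B = True)
```

With unlikely evidence, most weights are tiny and the estimate converges slowly; that degenerate behaviour is exactly what an adaptive importance function is meant to repair.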


PeerJ ◽  
2018 ◽  
Vol 6 ◽  
pp. e5722 ◽  
Author(s):  
Wartini Ng ◽  
Budiman Minasny ◽  
Brendan Malone ◽  
Patrick Filippi

Background The use of visible-near infrared (vis-NIR) spectroscopy for rapid soil characterisation has gained a lot of interest in recent times. Soil spectral absorbance from the visible-infrared range can be calibrated using regression models to predict a set of soil properties. The accuracy of these regression models relies heavily on the calibration set, and an optimal sample size and overall sample representativeness could further improve model performance. However, there is no guideline on which sampling method should be used for datasets of different sizes. Methods Here, we show that different sampling algorithms perform differently under different dataset sizes and different regression models (Cubist regression tree and Partial Least Squares Regression (PLSR)). We analysed the effect of three sampling algorithms: Kennard-Stone (KS), conditioned Latin Hypercube Sampling (cLHS) and k-means clustering (KM), against random sampling, on the prediction of up to five different soil properties (sand, clay, carbon content, cation exchange capacity and pH) on three datasets. These datasets have different coverages: a European continental dataset (LUCAS, n = 5,639), a regional dataset from Australia (Geeves, n = 379), and a local dataset from New South Wales, Australia (Hillston, n = 384). Calibration sample sizes ranging from 50 to 3,000 were derived and tested for the continental dataset, and from 50 to 200 samples for the regional and local datasets. Results Overall, PLSR gives better predictions of the various soil properties than the Cubist model, and is less sensitive to the choice of sampling algorithm. The KM algorithm is more representative in the larger dataset up to a certain calibration sample size. The KS algorithm appears to be more efficient than random sampling in small datasets; however, its prediction performance varied a lot between soil properties. The cLHS sampling algorithm is the most robust sampling method for multiple soil properties regardless of the sample size. Discussion Our results suggest that the optimal calibration sample size depends on how much generalization the model has to make. Using a sampling algorithm is more beneficial for larger datasets than for smaller ones, where only small improvements can be made. KM is suitable for large datasets, KS is efficient in small datasets but its results can be variable, while cLHS is least affected by sample size.
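The Kennard-Stone (KS) algorithm compared above is simple enough to sketch: it seeds the calibration set with the two most distant samples, then repeatedly adds the sample farthest from the set (max-min criterion). A minimal version, with illustrative names and plain Euclidean distance, might look like:

```python
import math

def kennard_stone(points, k):
    """Kennard-Stone selection: start from the pair of points farthest apart,
    then repeatedly add the point whose minimum distance to the already
    selected set is largest."""
    n = len(points)
    d = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    # Seed with the two most distant points.
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda ij: d[ij[0]][ij[1]])
    selected = [i0, j0]
    while len(selected) < k:
        rest = [i for i in range(n) if i not in selected]
        nxt = max(rest, key=lambda i: min(d[i][s] for s in selected))
        selected.append(nxt)
    return selected
```

Because it is deterministic and edge-seeking, KS tends to pick boundary samples, which is one plausible reason its performance varies between soil properties.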


2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
Renata Pacheco ◽  
Heraldo L. Vasconcelos

The use of subterranean traps is a relatively novel method of sampling ants, and few studies have evaluated its performance relative to other methods. We collected ants in forests, savannas, and crops in central Brazil using subterranean pitfall traps and conventional pitfall traps placed on the soil surface. Sampling duration, soil depth, and sprinkling vegetal oil around traps all tended to affect the number of species found in subterranean traps. Sixteen percent of the species collected in subterranean traps were unique, and most of these had cryptobiotic morphology (i.e., were truly hypogaeic species). Surprisingly, however, subterranean and conventional traps were similarly efficient at capturing cryptobiotic species. Furthermore, subterranean traps captured far fewer species in total than conventional traps (75 versus 220 species), and this was true in all three habitats sampled. Sampling completeness increased very little when combining conventional and subterranean traps compared with using conventional traps alone.


Kybernetes ◽  
2019 ◽  
Vol 48 (10) ◽  
pp. 2353-2372
Author(s):  
Ruoyu Liang ◽  
Linghao Zhang ◽  
Wei Guo

Purpose Members’ sustained participation positively influences the success of a brand community. Although scholars have confirmed the effects of social capital on continuance intention in third-party hosted communities, little work has been done to explore these relationships in the context of enterprise-sponsored brand communities; in particular, the precursors of active members’ sustained participation in such a context are still unclear. Besides, how to recognize active users with high precision and coverage remains an open question. Therefore, this paper aims to propose a novel method to identify active users effectively and to investigate the antecedents of their continuance intention from the perspective of social capital in an enterprise-sponsored brand community. Design/methodology/approach This work established several social networks based on the posts and feedback of Xiaomi smartphone forum users. Node centrality (out-degree) analysis was adopted to identify the most active users in these networks, and behaviour analysis was then performed to exclude community managers from the group of active users. Finally, a research model was proposed based on the theory of social capital. It was tested by applying the partial least squares technique to data collected from a survey of members (n = 327) of the Xiaomi forum. Findings The empirical results showed that the proposed method can recognize active users effectively. Additionally, social tie, identification, trust and shared vision proved to be significant predictors of active users’ continuance intention in the context of an enterprise-sponsored brand community. Originality/value This paper contributes to the information systems usage literature and provides insight into how social capital influences active users’ sustained participation in enterprise-sponsored brand communities. Besides, this work proposed a novel method to identify active users, which will be useful in helping enterprises improve their community management.
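The out-degree centrality step used to find active members can be sketched generically: build a directed reply network and rank members by how many distinct others they replied to. The function name and input format here are assumptions for illustration, not the paper's implementation:

```python
from collections import Counter

def top_active_users(replies, k):
    """Rank forum members by out-degree (number of distinct members they
    replied to) and return the k highest, as a crude proxy for activity.
    `replies` is an iterable of (replier, replied_to) pairs."""
    out_neighbours = {}
    for src, dst in replies:
        out_neighbours.setdefault(src, set()).add(dst)
    degree = Counter({u: len(vs) for u, vs in out_neighbours.items()})
    return [u for u, _ in degree.most_common(k)]
```

As the abstract notes, a pure degree ranking also surfaces community managers, which is why a separate behaviour-analysis pass is needed to filter them out.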


2016 ◽  
Vol 27 (05) ◽  
pp. 1650052 ◽  
Author(s):  
Zeinab S. Jalali ◽  
Alireza Rezvanian ◽  
Mohammad Reza Meybodi

Due to the large scale of and limited access to most online social networks, it is hard or infeasible to access them directly in a reasonable amount of time for study and analysis. Hence, network sampling has emerged as a suitable technique for studying and analyzing real networks. The main goal of sampling online social networks is to construct a small-scale sampled network which preserves the most important properties of the original network. In this paper, we propose two sampling algorithms for sampling online social networks using spanning trees. The first proposed sampling algorithm finds several spanning trees from randomly chosen starting nodes; the edges in these spanning trees are then ranked according to the number of times each edge has appeared in the set of found spanning trees. The sampled network is then constructed as a sub-graph of the original network which contains the fraction of nodes that are incident on highly ranked edges. In order to avoid traversing the entire network, the second sampling algorithm is similar to the first but uses partial spanning trees. Several experiments are conducted to examine the performance of the proposed sampling algorithms on well-known real networks. The obtained results, in comparison with other popular sampling methods, demonstrate the efficiency of the proposed sampling algorithms in terms of Kolmogorov–Smirnov distance (KSD), skew divergence distance (SDD) and normalized distance (ND).
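The edge-ranking idea behind the first algorithm can be sketched with randomised depth-first spanning trees: grow several trees from random start nodes and count how often each edge appears. This is a simplified illustration of the counting step, not the authors' exact procedure:

```python
import random
from collections import Counter

def rank_edges_by_spanning_trees(adj, n_trees, seed=0):
    """Grow several spanning trees from random start nodes (randomised DFS)
    over an adjacency-list graph and count how often each undirected edge
    appears; frequently appearing edges rank higher."""
    rng = random.Random(seed)
    nodes = list(adj)
    counts = Counter()
    for _ in range(n_trees):
        start = rng.choice(nodes)
        visited = {start}
        stack = [start]
        while stack:
            u = stack.pop()
            nbrs = adj[u][:]
            rng.shuffle(nbrs)
            for v in nbrs:
                if v not in visited:
                    visited.add(v)
                    counts[frozenset((u, v))] += 1  # tree edge found
                    stack.append(v)
    return counts
```

The sampled sub-graph would then keep the nodes incident on the most frequently counted edges; the partial-tree variant simply stops each traversal early.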


2020 ◽  
Author(s):  
He Zhang ◽  
Liang Zhang ◽  
Sizhen Li ◽  
David Mathews ◽  
Liang Huang

Many RNAs fold into multiple structures at equilibrium. The classical stochastic sampling algorithm can sample secondary structures according to their probabilities in the Boltzmann ensemble, and is widely used, e.g., for accessibility prediction. However, the current sampling algorithm, consisting of a bottom-up partition function phase followed by a top-down sampling phase, suffers from three limitations: (a) the formulation and implementation of the sampling phase are unnecessarily complicated; (b) much redundant work is repeatedly performed in the sampling phase; (c) the partition function runtime scales cubically with the sequence length. These issues prevent it from being used for full-length viral genomes such as SARS-CoV-2. To address these problems, we first present a hypergraph framework under which the sampling algorithm can be greatly simplified. We then present three sampling algorithms under this framework of which two eliminate redundant work in the sampling phase. Finally, we present LinearSampling, an end-to-end linear-time sampling algorithm that is orders of magnitude faster than the standard algorithm. For instance, LinearSampling is 111 times faster (48s vs. 1.5h) than Vienna RNAsubopt on the longest sequence in the RNAcentral dataset that RNAsubopt can run (15,780 nt). More importantly, LinearSampling is the first sampling algorithm to scale to the full genome of SARS-CoV-2, taking only 96 seconds on its reference sequence (29,903 nt). It finds 23 regions of 15 nt with high accessibilities, which can be potentially used for COVID-19 diagnostics and drug design.
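The target distribution of stochastic sampling, in which each structure is drawn with Boltzmann probability exp(-E/RT)/Z, can be illustrated on a toy ensemble. This sketch enumerates the ensemble explicitly, which real samplers such as the one described here must avoid; the structures and energies are invented for the example:

```python
import math
import random

def boltzmann_sample(energies, n, rt=0.6, seed=0):
    """Draw n structures from a toy Boltzmann ensemble: each structure s
    (a key of `energies`, in kcal/mol) is sampled with probability
    exp(-E(s)/RT) / Z, where Z is the partition function."""
    rng = random.Random(seed)
    structs = list(energies)
    weights = [math.exp(-energies[s] / rt) for s in structs]
    z = sum(weights)                       # partition function
    probs = [w / z for w in weights]
    return rng.choices(structs, weights=probs, k=n)
```

For real RNAs the ensemble is exponentially large, so Z is computed by dynamic programming and structures are sampled by backtracking through the recursion rather than by enumeration.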


2019 ◽  
Vol 31 (1) ◽  
Author(s):  
Rikus Le Roux ◽  
George Van Schoor ◽  
Pieter Van Vuuren

Despite the many advantages run-time reconfiguration of FPGAs brings to the table, its usage is mostly limited to quasi-static applications. This is due either to the throughput of the reconfiguration process or to the time required to create new hardware. To optimise the former, the literature proposes a block RAM (BRAM)-based architecture in which a new configuration is stored in localised memory and reconfiguration is facilitated by a controller implemented in the FPGA fabric. The limitation of this architecture is that only a subset of configurations can be stored. When new hardware is required, the slow synthesis process (or a part thereof) has to be repeated for each new configuration. Various third-party tools aim to mitigate this overhead, but since the bitstream is shrouded in obscurity, all rely on a layer of abstraction that makes them unusable in real time. To address this issue, this paper presents a novel method to parse and analyse a Xilinx® FPGA bitstream and extract certain characteristics. It is shown how these characteristics could be used to design and implement a bitstream specialiser, capable of taking a bitstream and modifying the configuration bits of lookup tables in real time.
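The patch-in-place idea behind a bitstream specialiser can be illustrated on a mock bitstream. The frame layout assumed here (16 LUT truth-table bits at a known byte offset) is entirely hypothetical; real Xilinx frame layouts are undocumented and device-specific, which is precisely the obstacle the paper addresses:

```python
def patch_lut_bits(bitstream, frame_offset, lut_bits):
    """Overwrite the configuration bits of one LUT inside a raw bitstream,
    assuming (hypothetically) that its 16 truth-table bits start at byte
    `frame_offset`.  Illustrates in-place patching only, not the real format."""
    patched = bytearray(bitstream)
    for i, bit in enumerate(lut_bits):         # 16 truth-table bits
        byte, pos = divmod(frame_offset * 8 + i, 8)
        if bit:
            patched[byte] |= 1 << pos
        else:
            patched[byte] &= ~(1 << pos)
    return bytes(patched)
```

The point of such a specialiser is that rewriting a few known bits takes microseconds, whereas re-running synthesis for a new configuration takes minutes to hours.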


2021 ◽  
Vol 11 (3) ◽  
pp. 1258
Author(s):  
Ángel Madridano ◽  
Abdulla Al-Kaff ◽  
Pablo Flores ◽  
David Martín ◽  
Arturo de la Escalera

Advances in the field of unmanned aerial vehicles (UAVs) have led to an exponential increase in their market, thanks to the development of innovative technological solutions aimed at a wide range of applications and services, such as emergencies and those related to fires. In addition, the expansion of this market has been accompanied by the birth and growth of so-called UAV swarms, whose current expansion is due to their robustness, versatility, and efficiency. Alongside these properties, the autonomous and cooperative navigation of such swarms remains an open field of study. In this paper we present an architecture comprising a set of complementary methods that establish different control layers to enable the autonomous and cooperative navigation of a swarm of UAVs. These layers include a sampling-based global trajectory planner, algorithms for obstacle detection and avoidance, and methods for autonomous decision making based on deep reinforcement learning. The paper shows satisfactory results for a line-of-sight based algorithm for smoothing the global planner's trajectories in 2D and 3D. In addition, a novel method for autonomous navigation of UAVs based on deep reinforcement learning is presented, which has been tested in two different simulation environments with promising results regarding the use of these techniques to achieve autonomous navigation of UAVs.
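The line-of-sight smoothing mentioned for the global planner can be sketched on a 2D grid: from each waypoint, jump straight to the farthest later waypoint that is directly visible, dropping the ones in between. The dense-sampling visibility test and the waypoint format are simplifying assumptions, not the paper's method:

```python
def line_of_sight(a, b, blocked):
    """Approximate visibility check: sample points densely along the
    segment a-b and report False if any falls in a blocked grid cell."""
    steps = 100
    for i in range(steps + 1):
        t = i / steps
        x = a[0] + t * (b[0] - a[0])
        y = a[1] + t * (b[1] - a[1])
        if (int(x), int(y)) in blocked:
            return False
    return True

def smooth_path(path, blocked):
    """Line-of-sight smoothing: greedily replace runs of waypoints with a
    single straight segment whenever the endpoints can see each other."""
    smoothed = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_of_sight(path[i], path[j], blocked):
            j -= 1                  # fall back to a nearer visible waypoint
        smoothed.append(path[j])
        i = j
    return smoothed
```

Production planners would swap the sampling test for an exact grid traversal (e.g., Bresenham) and extend the same logic to 3D.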


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e11927
Author(s):  
Aditya A. Shastri ◽  
Kapil Ahuja ◽  
Milind B. Ratnaparkhe ◽  
Yann Busnel

The phenotypic characteristics of a plant species refer to its physical properties as cataloged by plant biologists at different research centers around the world. Clustering species based upon their phenotypic characteristics is used to obtain diverse sets of parents that are useful in breeding programs. The Hierarchical Clustering (HC) algorithm is the current standard for clustering phenotypic data, but it suffers from low accuracy and high computational complexity. To address the accuracy challenge, we propose the use of the Spectral Clustering (SC) algorithm. To make the algorithm computationally cheap, we propose using sampling, specifically Pivotal Sampling, which is probability based. Since the application of sampling to phenotypic data has not been explored much, for effective comparison another sampling technique called Vector Quantization (VQ), which has recently generated promising results for genotypic data, is adapted for this data as well. The novelty of our SC with Pivotal Sampling algorithm lies in constructing the crucial similarity matrix for the clustering algorithm and defining probabilities for the sampling technique. Although our algorithm can be applied to any plant species, we tested it on phenotypic data from about 2,400 Soybean species. SC with Pivotal Sampling achieves substantially higher accuracy (in terms of Silhouette Values) than all the other proposed competitive clustering-with-sampling algorithms (i.e., SC with VQ, HC with Pivotal Sampling, and HC with VQ). The complexities of our SC with Pivotal Sampling algorithm and these three variants are almost the same because of the sampling involved. In addition, SC with Pivotal Sampling outperforms the standard HC algorithm in both accuracy and computational complexity. We experimentally show that we are up to 45% more accurate than HC in terms of clustering accuracy, and the computational complexity of our algorithm is more than a magnitude less than that of HC.
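Pivotal Sampling, the probability-based technique adopted here, resolves units pairwise so that each unit's first-order inclusion probability is respected while most randomness cancels locally. A minimal sequential version (a generic sketch in the style of Deville and Tillé, not the paper's implementation) is:

```python
import random

def pivotal_sampling(probs, seed=0):
    """Sequential pivotal sampling: repeatedly merge the first two undecided
    units so that one of them is resolved to 0 (dropped) or 1 (selected),
    preserving each unit's inclusion probability.  Returns selected indices."""
    rng = random.Random(seed)
    units = [(i, p) for i, p in enumerate(probs)]
    chosen = []
    while units:
        if len(units) == 1:                    # lone survivor: Bernoulli draw
            i, p = units.pop()
            if rng.random() < p:
                chosen.append(i)
            continue
        (i, p1), (j, p2) = units[0], units[1]
        s = p1 + p2
        if s <= 1:                             # one unit is dropped
            if rng.random() < p2 / s:
                units[:2] = [(j, s)]           # i resolved to 0
            else:
                units[:2] = [(i, s)]           # j resolved to 0
        else:                                  # one unit is selected
            if rng.random() < (1 - p2) / (2 - s):
                chosen.append(i)
                units[:2] = [(j, s - 1)]
            else:
                chosen.append(j)
                units[:2] = [(i, s - 1)]
    return sorted(chosen)
```

When the inclusion probabilities sum to an integer n, the method returns exactly n units, which makes it convenient for drawing fixed-size calibration subsets before clustering.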


2018 ◽  
Author(s):  
Thomas Carpenter ◽  
Ruth Pogacar ◽  
Chris Pullig ◽  
Michal Kouril ◽  
Stephen J Aguilar ◽  
...  

The Implicit Association Test (IAT) is widely used in psychology. Unfortunately, the IAT cannot be run within online surveys, requiring researchers who conduct online surveys to rely on third-party tools. We introduce a novel method for constructing IATs using online survey software (Qualtrics); we then empirically assess its validity. Study 1 (student n = 239) found good psychometric properties, expected IAT effects, and expected correlations with explicit measures for survey-software IATs. Study 2 (MTurk n = 818) found predicted IAT effects across four survey-software IATs (d’s = 0.82 [Black-White IAT] to 2.13 [insect-flower IAT]). Study 3 (MTurk n = 270) compared survey-software IATs and IATs run via Inquisit, yielding nearly identical results and intercorrelations expected for identical IATs. Survey-software IATs appear reliable and valid, offer numerous advantages, and make IATs accessible for researchers who use survey software to conduct online research. We present all materials, links to tutorials, and an open-source tool that rapidly automates survey-software IAT construction and analysis.
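The IAT effect sizes (d's) reported above are conventionally computed as D scores. A simplified version of the Greenwald et al. (2003) scoring rule, omitting the error penalties, trial trimming, and block partitioning of the full algorithm, is:

```python
from statistics import mean, stdev

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT D score: the difference between mean incompatible-block
    and compatible-block response latencies (ms), divided by the pooled
    standard deviation of all latencies."""
    pooled_sd = stdev(compatible_rts + incompatible_rts)
    return (mean(incompatible_rts) - mean(compatible_rts)) / pooled_sd
```

Because D is an individual-level standardized score, it is what makes IAT effects comparable across the survey-software and Inquisit administrations compared in Study 3.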

