Criteria ruling particle agglomeration

2021 ◽  
Vol 12 ◽  
pp. 1093-1100
Author(s):  
Dieter Vollath

Most of the technically important properties of nanomaterials, such as superparamagnetism or luminescence, depend on the particle size. During synthesis and handling of nanoparticles, agglomeration may occur. Agglomeration of nanoparticles may be controlled by different mechanisms. During synthesis, one observes agglomeration controlled by the geometry and electrical charges of the particles. Additionally, one may find agglomeration controlled by thermodynamic interaction of the particles in the direction of a minimum of the free enthalpy. In this context, one may observe mechanisms leading to a reduction of the surface energy or controlled by the van der Waals interaction. Additionally, the ensemble may arrange itself in the direction of a maximum of the entropy. Simulations based on Monte Carlo methods show that, in the case of any energetic interaction of the particles, the influence of the entropy is minor or even negligible. Complementary to the simulations, the extremum of the entropy was determined using the Lagrange method. Both approaches yielded identical results for the particle size distribution of an agglomerated ensemble, namely an exponential function characterized by two parameters. In this context, it is important to realize that one has to account for fluctuations of the entropy.
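A minimal worked sketch of the maximum-entropy step summarized above, under the assumption (introduced here for illustration) that normalization and the mean particle size are the two constraints: maximizing the Lagrangian

\mathcal{L} = -\sum_i p_i \ln p_i \;-\; \lambda_0\Big(\sum_i p_i - 1\Big) \;-\; \lambda_1\Big(\sum_i p_i x_i - \bar{x}\Big)

with respect to each p_i gives -\ln p_i - 1 - \lambda_0 - \lambda_1 x_i = 0, i.e.

p_i = A\, e^{-\lambda_1 x_i},

an exponential size distribution whose two parameters A and \lambda_1 are fixed by the two constraints, consistent with the two-parameter exponential form quoted in the abstract.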

TAPPI Journal ◽  
2015 ◽  
Vol 14 (9) ◽  
pp. 565-576 ◽  
Author(s):  
YUCHENG PENG ◽  
DOUGLAS J. GARDNER

Understanding the surface properties of cellulose materials is important for proper commercial applications. The effect of particle size, particle morphology, and hydroxyl number on the surface energy of three microcrystalline cellulose (MCC) preparations and one nanofibrillated cellulose (NFC) preparation was investigated using inverse gas chromatography at column temperatures ranging from 30°C to 60°C. The mean particle sizes for the three MCC samples and the NFC sample were 120.1, 62.3, 13.9, and 9.3 μm. The corresponding dispersion components of surface energy at 30°C were 55.7 ± 0.1, 59.7 ± 1.3, 71.7 ± 1.0, and 57.4 ± 0.3 mJ/m². MCC samples are agglomerates of small individual cellulose particles. The different particle sizes and morphologies of the three MCC samples resulted in different hydroxyl numbers, which in turn affected their dispersion component of surface energy: cellulose samples exhibiting a higher hydroxyl number have a higher dispersion component of surface energy. The dispersion component of surface energy of all the cellulose samples decreased linearly with increasing temperature, and MCC samples with larger agglomerates had a lower temperature coefficient of the dispersion component of surface energy.
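For context, one common route from IGC retention data to the dispersion component of surface energy is the Dorris–Gray construction (whether the authors used this or the Schultz approach is not stated in the abstract above):

\Delta G_{CH_2} = -RT \ln\!\left(\frac{V_N^{\,n+1}}{V_N^{\,n}}\right),
\qquad
\gamma_S^{d} = \frac{\left(\Delta G_{CH_2}\right)^2}{4\, N_A^{2}\, a_{CH_2}^{2}\, \gamma_{CH_2}},

where V_N^n is the net retention volume of the n-alkane probe with n carbon atoms, a_{CH_2} is the cross-sectional area of a methylene group (commonly taken as about 0.06 nm²) and \gamma_{CH_2} is the surface energy of a pure methylene surface. Repeating the measurement at several column temperatures yields the temperature coefficient reported above.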


Silicon ◽  
2020 ◽  
Author(s):  
Elida Nekovic ◽  
Catherine J. Storey ◽  
Andre Kaplan ◽  
Wolfgang Theis ◽  
Leigh T. Canham

Biodegradable porous silicon (pSi) particles are under development for drug delivery applications. The optimum particle size depends strongly on the medical use, and microparticles can outperform nanoparticles in specific instances. Here we demonstrate the ability of sedimentation to size-select ultrasmall (1–10 μm) nanoporous microparticles in common solvents. Size tunability is quantified for 1–24 h of sedimentation. Experimental values of settling times in ethanol and water are compared to those calculated using Stokes' Law. Differences can arise due to particle agglomeration, internal gas generation and incomplete wetting. Air-dried and supercritically-dried pSi powders are shown to have, for example, their median diameter d(0.5) particle sizes reduced from 13 to 1 μm and from 20 to 3 μm, using sedimentation times of 6 and 2 h respectively. Such filtered microparticles also have much narrower size distributions and are hence suitable for administration in 27 gauge microneedles, commonly used in intravitreal drug delivery.
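A minimal sketch of the Stokes' Law estimate the settling times are compared against; the effective particle density, solvent properties and settling height below are illustrative assumptions, not values taken from the paper:

```python
# Stokes' Law settling-time sketch for porous Si microparticles (illustrative values only).
def stokes_settling_time(d_m, height_m, rho_p, rho_f, mu, g=9.81):
    """Time for a sphere of diameter d_m to settle a given height at its Stokes terminal velocity."""
    v = (rho_p - rho_f) * g * d_m**2 / (18.0 * mu)  # terminal velocity, m/s
    return height_m / v

# Assumed (hypothetical) inputs: ~50% porous Si (skeletal density ~2330 kg/m^3 halved),
# ethanol at room temperature, 5 cm settling height.
rho_particle = 0.5 * 2330.0      # kg/m^3, effective density of the porous particle
rho_ethanol = 789.0              # kg/m^3
mu_ethanol = 1.2e-3              # Pa*s
for d_um in (1, 5, 10, 20):
    t = stokes_settling_time(d_um * 1e-6, 0.05, rho_particle, rho_ethanol, mu_ethanol)
    print(f"d = {d_um:2d} um -> settling time ~ {t/3600:.1f} h")
```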


2012 ◽  
Vol 16 (5) ◽  
pp. 1391-1394 ◽  
Author(s):  
Kun Zhou

A new Monte Carlo method, termed Comb-like frame Monte Carlo, is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.
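The Comb-like frame scheme itself is not described in the abstract, so the sketch below is only a generic stochastic-particle coagulation step of the kind such soot Monte Carlo methods build on (constant kernel, arbitrary units; not the authors' algorithm):

```python
import random

# Generic event-driven Monte Carlo coagulation with a constant kernel.
# This is NOT the Comb-like frame scheme of the paper; it is only a minimal
# illustration of the stochastic-particle approach such methods build on.
def coagulate(volumes, kernel_const, t_end):
    """Evolve a list of particle volumes by random pairwise coagulation events."""
    t = 0.0
    while t < t_end and len(volumes) > 1:
        n = len(volumes)
        rate = kernel_const * n * (n - 1) / 2.0   # total coagulation rate
        t += random.expovariate(rate)             # waiting time to next event
        i, j = random.sample(range(n), 2)         # pick a random distinct pair
        volumes[i] += volumes[j]                  # merge j into i
        volumes.pop(j)
    return volumes

random.seed(0)
final = coagulate([1.0] * 1000, kernel_const=1e-3, t_end=5.0)
print(len(final), "particles remain; mean volume =", sum(final) / len(final))
```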


2005 ◽  
Vol 23 (6) ◽  
pp. 429-461
Author(s):  
Ian Lerche ◽  
Brett S. Mudford

This article derives an estimation procedure to evaluate how many Monte Carlo realisations need to be done in order to achieve prescribed accuracies in the estimated mean value and also in the cumulative probabilities of achieving values greater than, or less than, a particular value as the chosen particular value is allowed to vary. In addition, by inverting the argument and asking what accuracies result for a prescribed number of Monte Carlo realisations, one can assess the computer time that would be involved should one choose to carry out the Monte Carlo realisations. The arguments and numerical illustrations are carried through in detail for the four distributions of lognormal, binomial, Cauchy, and exponential. The procedure is valid for any choice of distribution function. The general method given in Lerche and Mudford (2005) is not merely a coincidence owing to the nature of the Gaussian distribution but is of universal validity. This article provides (in the Appendices) the general procedure for obtaining equivalent results for any distribution and shows quantitatively how the procedure operates for the four specific distributions. The methodology is therefore available for any choice of probability distribution function. Some distributions need more than two parameters to be defined precisely. Estimates of mean value and standard error around the mean only allow determination of two parameters for each distribution. Thus any distribution with more than two parameters has degrees of freedom that either have to be constrained from other information or that are unknown and so can be freely specified. That fluidity in such distributions allows a similar fluidity in the estimates of the number of Monte Carlo realisations needed to achieve prescribed accuracies, as well as in the estimates of achievable accuracy for a prescribed number of Monte Carlo realisations. Without some way to control the free parameters in such distributions one will, presumably, always have such dynamic uncertainties. Even when the free parameters are known precisely, there is still considerable uncertainty in determining the number of Monte Carlo realisations needed to achieve prescribed accuracies, and in the accuracies achievable with a prescribed number of Monte Carlo realisations, because of the different functional forms of probability distribution from which one may choose to draw the Monte Carlo realisations. Without knowledge of the underlying distribution functions that are appropriate to use for a given problem, the choices one makes for numerical implementation of the basic logic procedure will presumably bias the estimates of achievable accuracy and of the number of Monte Carlo realisations one should undertake. The cautionary note, which is the main point of this article and which is exhibited sharply with numerical illustrations, is that one must specify precisely what distributions one is using and precisely what free parameter values one has chosen (and why the choices were made) in assessing the accuracy achievable and the number of Monte Carlo realisations needed with such choices. Without such information it is not a very useful exercise to undertake Monte Carlo realisations, because other investigations, using other distributions and other values of the available free parameters, will arrive at very different conclusions.
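A minimal numerical sketch of the kind of estimate discussed above, for the lognormal case: how many realisations are needed so that the standard error of the estimated mean stays within a prescribed fraction of the mean (the distribution parameters and accuracy target below are illustrative assumptions):

```python
import math
import random

# How many Monte Carlo realisations are needed so that the standard error of the
# sample mean is within a prescribed fraction of the mean? (Lognormal illustration;
# the parameter choices are assumptions for the sketch, not values from the article.)
mu_log, sigma_log = 0.0, 1.0                       # parameters of ln(X)
mean = math.exp(mu_log + sigma_log**2 / 2.0)       # E[X]
var = (math.exp(sigma_log**2) - 1.0) * math.exp(2.0 * mu_log + sigma_log**2)  # Var[X]

rel_accuracy = 0.05                                # want SE(mean) <= 5% of the mean
n_required = math.ceil(var / (rel_accuracy * mean) ** 2)
print("realisations required (analytic):", n_required)

# Empirical check: draw that many realisations and compare the sample mean.
random.seed(1)
samples = [random.lognormvariate(mu_log, sigma_log) for _ in range(n_required)]
sample_mean = sum(samples) / n_required
print(f"sample mean = {sample_mean:.3f}, true mean = {mean:.3f}")
```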


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Xiao Jiang ◽  
Tat Leung Chan

Purpose
The purpose of this paper is to study soot formation and evolution by using a newly developed Lagrangian particle tracking with weighted fraction Monte Carlo (LPT-WFMC) method.

Design/methodology/approach
Weighted soot particles are used in this Monte Carlo framework and are tracked using a Lagrangian approach. A detailed soot model based on the LPT-WFMC method is used to study soot formation and evolution in ethylene laminar premixed flames.

Findings
The LPT-WFMC method is validated against both experimental results and numerical results of the direct simulation Monte Carlo (DSMC) and Multi-Monte Carlo (MMC) methods. Compared with the DSMC and MMC methods, the stochastic error analysis shows that the new LPT-WFMC method can further extend the particle size distributions (PSDs) and improve the accuracy of predicted soot PSDs in the larger particle size regime.

Originality/value
Compared with conventional weighted particle schemes, the weight distributions in the LPT-WFMC method are adjustable by adopting different fraction functions. As a result, the number of numerical soot particles in each size interval is also adjustable, and the stochastic error of the PSDs in the larger particle size regime can be minimized by increasing the number of numerical soot particles at larger size intervals.
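A generic illustration of the weighted-particle idea referred to above (not the LPT-WFMC algorithm itself): numerical particles are placed preferentially at large sizes and carry compensating statistical weights, so the tail of the size distribution is resolved with fewer samples. The exponential "true" distribution and bias parameter below are assumptions for the sketch:

```python
import math
import random

# Weighted numerical particles: sample sizes from a heavier-tailed proposal to favour
# large particles, and attach weight = target density / sampling density so that
# weighted averages remain unbiased.
random.seed(2)
n_numerical = 10_000
lam = 1.0                      # assumed "true" size distribution: exponential with rate lam
bias = 0.3                     # heavier-tailed sampling rate, puts more particles at large sizes

sizes = [random.expovariate(bias) for _ in range(n_numerical)]
weights = [(lam * math.exp(-lam * x)) / (bias * math.exp(-bias * x)) for x in sizes]

# Weighted (self-normalized) estimate of the tail probability P(X > 5) vs the exact value.
tail_est = sum(w for x, w in zip(sizes, weights) if x > 5.0) / sum(weights)
print(f"weighted tail estimate {tail_est:.4f}  vs exact {math.exp(-5.0 * lam):.4f}")
```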


Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

This chapter introduces Markov Chain Monte Carlo (MCMC) with Gibbs sampling, revisiting the "Maple Syrup Problem" of Chapter 12, where the goal was to estimate the two parameters of a normal distribution, μ and σ. Chapter 12 used the normal-normal conjugate to derive the posterior distribution for the unknown parameter μ; the parameter σ was assumed to be known. This chapter uses MCMC with Gibbs sampling to estimate the joint posterior distribution of both μ and σ. Gibbs sampling is a special case of the Metropolis–Hastings algorithm. The chapter describes MCMC with Gibbs sampling step by step, which requires (1) computing the posterior distribution of a given parameter, conditional on the value of the other parameter, and (2) drawing a sample from the posterior distribution. In this chapter, Gibbs sampling makes use of the conjugate solutions to decompose the joint posterior distribution into full conditional distributions for each parameter.
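A minimal sketch of a Gibbs sampler of the kind the chapter describes, for a normal model with conjugate priors; the data and prior hyperparameters below are illustrative assumptions, not the book's maple-syrup numbers:

```python
import random

# Gibbs sampling for the mean (mu) and variance (sigma^2) of a normal model with
# conjugate priors: a normal prior on mu and an inverse-gamma prior on sigma^2.
random.seed(3)
data = [random.gauss(10.0, 2.0) for _ in range(50)]   # illustrative data
n, xbar = len(data), sum(data) / len(data)

mu0, tau2 = 0.0, 100.0        # normal prior on mu: N(mu0, tau2)
a0, b0 = 2.0, 2.0             # inverse-gamma prior on sigma^2: IG(a0, b0)

mu, sigma2 = xbar, 1.0        # starting values
draws = []
for it in range(5000):
    # 1) mu | sigma2, data  ~  Normal (conjugate full conditional)
    prec = n / sigma2 + 1.0 / tau2
    mean = (n * xbar / sigma2 + mu0 / tau2) / prec
    mu = random.gauss(mean, (1.0 / prec) ** 0.5)
    # 2) sigma2 | mu, data  ~  Inverse-gamma (drawn as 1 / Gamma with rate b_n)
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * sum((x - mu) ** 2 for x in data)
    sigma2 = 1.0 / random.gammavariate(a_n, 1.0 / b_n)
    if it >= 1000:            # discard burn-in
        draws.append((mu, sigma2))

print("posterior mean of mu      ~", sum(m for m, _ in draws) / len(draws))
print("posterior mean of sigma^2 ~", sum(s for _, s in draws) / len(draws))
```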


2019 ◽  
Vol 2019 ◽  
pp. 1-8
Author(s):  
Xin Luo ◽  
Jinlin Zhang

This article proposes a new way to price Chinese convertible bonds by the Longstaff-Schwartz Least Squares Monte Carlo simulation. The default intensity and the volatility are two important parameters in pricing convertible bonds, and they are difficult to obtain in an emerging market. By developing the Merton theory, we find a new effective method to obtain the theoretical values of these two parameters. In the pricing method, the default risk is described by the default intensity, and a default on a bond is triggered by the bottom Q(T) (default probability) percentile of the simulated stock prices at the maturity date. In the present simulation, a risk-free interest rate is used to discount the cash flows, so the new pricing model is consistent with the general pricing rule under the martingale measure. The empirical results for the CEB and the XIG convertible bonds obtained by the proposed method are compared with those obtained by the credit spreads method. The theoretical prices calculated by the method proposed in the article are found to fit the market prices well, especially in the long-run tendency.
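A minimal Longstaff-Schwartz least squares Monte Carlo sketch, shown here for a plain American put rather than a convertible bond (the default-intensity and conversion features of the paper are not reproduced; all parameters are illustrative assumptions):

```python
import numpy as np

# Longstaff-Schwartz LSMC for an American put: simulate risk-neutral stock paths, then
# work backwards, regressing continuation values on the stock price to decide exercise.
rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.25, 1.0
n_paths, n_steps = 20_000, 50
dt = T / n_steps

# Geometric Brownian motion paths under the risk-neutral measure.
z = rng.standard_normal((n_paths, n_steps))
increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.cumsum(increments, axis=1))

# Backward induction with a quadratic polynomial regression on in-the-money paths.
cashflow = np.maximum(K - S[:, -1], 0.0)
for t in range(n_steps - 2, -1, -1):
    cashflow *= np.exp(-r * dt)                   # discount one step back
    itm = (K - S[:, t]) > 0.0                     # only in-the-money paths enter the regression
    if itm.sum() > 0:
        coeffs = np.polyfit(S[itm, t], cashflow[itm], 2)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise = K - S[itm, t]
        cashflow[itm] = np.where(exercise > continuation, exercise, cashflow[itm])

price = np.exp(-r * dt) * cashflow.mean()         # discount from the first step to today
print(f"American put (LSMC) ~ {price:.2f}")
```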

