Hindered and compression settling: parameter measurement and modelling

2007 ◽  
Vol 56 (12) ◽  
pp. 101-110 ◽  
Author(s):  
A.-E. Stricker ◽  
I. Takács ◽  
A. Marquot

The Vesilind settling velocity function forms the basis of the flux theory used both in state point analysis (for design and capacity rating) and in one-dimensional dynamic models (for dynamic process modelling). This paper proposes new methods to address known shortcomings of these approaches, based on an extensive set of batch settling tests conducted at different scales. The experimental method to determine the Vesilind parameters from a series of bench-scale settling tests is reviewed. It is confirmed that settling cylinders must be stirred slowly in order to represent the settling performance of full-scale plants over the whole range of solids concentrations. Two new methods to extract the Vesilind parameters from settling test series are proposed and tested against the traditional manual method. Finally, the same data set is used to propose an extension to one-dimensional (1-D) dynamic settler models that accounts for compression settling. Using the modified empirical function, the model is able to describe the batch settling interface independently of the number of layers.
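As background, the Vesilind function referred to here is the standard exponential form v_s(X) = v0·exp(−K·X), and flux theory builds the solids flux curve from it. A minimal sketch of that construction is given below; the values of v0, K and the underflow velocity are placeholders, not the parameters fitted in the paper.

```python
# Minimal sketch of flux theory built on the Vesilind settling velocity
# function. v0 and K below are illustrative placeholders, not the parameters
# fitted in the paper.
import numpy as np

V0 = 6.0    # maximum hindered settling velocity, m/h  -- assumed
K = 0.45    # Vesilind exponent, m3/kg                 -- assumed

def vesilind_velocity(X):
    """Hindered settling velocity v_s(X) = v0 * exp(-K * X), X in kg/m3."""
    return V0 * np.exp(-K * X)

def batch_flux(X):
    """Gravity solids flux G(X) = X * v_s(X), in kg/(m2*h)."""
    return X * vesilind_velocity(X)

# State point analysis works with the total flux curve
# G_t(X) = G(X) + X * u_b, where u_b is the downward underflow velocity;
# its minimum beyond the feed concentration sets the limiting solids flux.
X = np.linspace(2.0, 15.0, 400)     # concentrations above the feed, kg/m3
u_b = 0.4                           # underflow velocity, m/h -- assumed
G_total = batch_flux(X) + X * u_b
print("limiting flux ~", round(float(G_total.min()), 2), "kg/(m2.h)")
```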

2010 ◽  
Vol 143-144 ◽  
pp. 1337-1341
Author(s):  
Wei Feng Yan ◽  
Gen Xiu Wu ◽  
Can Ze Li ◽  
Li Zhou

Because the KNN algorithm has limitations when it relies only on Euclidean distance, many researchers substitute other distance measures to improve classification accuracy. Combining Dempster-Shafer (DS) evidence theory with the family of KNN algorithms discussed in this paper, we found that each algorithm has its own merits, but all of them neglect analysis of the data set itself. A deeper analysis shows that when two attribute values differ greatly, the computed distance is dominated by the larger value. We therefore compress the large-valued numerical attributes. With this compression, the accuracy of the KNN, VSMKNN and KERKNN algorithms improves markedly in experiments; the resulting methods are named CDSKNN, CDSVSMKNN and CDSKERKNN.
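A minimal sketch of the idea as we read it: compress large numerical attribute values before computing Euclidean distances, so that one large-valued attribute no longer dominates the KNN distance. The log1p map and the toy data below are our assumptions, not the authors' exact transformation.

```python
# Sketch: compress large attribute values before KNN so that an attribute with
# a much larger scale does not dominate the Euclidean distance.
# The log1p compression is an illustrative choice, not the paper's exact map.
import numpy as np
from collections import Counter

def compress(X):
    """Shrink large numerical values; keeps ordering, tames the dynamic range."""
    return np.log1p(np.abs(X)) * np.sign(X)

def knn_predict(X_train, y_train, x, k=3, compressed=True):
    A, b = (compress(X_train), compress(x)) if compressed else (X_train, x)
    d = np.linalg.norm(A - b, axis=1)            # Euclidean distances
    nearest = np.argsort(d)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# toy data: second attribute is orders of magnitude larger than the first
X_train = np.array([[1.0, 10_000.0], [1.2, 90_000.0],
                    [5.0, 12_000.0], [5.3, 95_000.0]])
y_train = np.array([0, 1, 0, 1])
print(knn_predict(X_train, y_train, np.array([1.1, 20_000.0]), k=3))
```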


2010 ◽  
Vol 4 (1) ◽  
pp. 35-51 ◽  
Author(s):  
H.-W. Jacobi ◽  
F. Domine ◽  
W. R. Simpson ◽  
T. A. Douglas ◽  
M. Sturm

Abstract. The specific surface area (SSA) of the snow constitutes a powerful parameter to quantify the exchange of matter and energy between the snow and the atmosphere. However, currently no snow physics model can simulate the SSA. Therefore, two different types of empirical parameterizations of the snow SSA are implemented into the existing one-dimensional snow physics model CROCUS. The parameterizations are either based on diagnostic equations relating the SSA to parameters like snow type and density, or on prognostic equations that describe the change of SSA depending on snow age, snowpack temperature, and the temperature gradient within the snowpack. Simulations with the upgraded CROCUS model were performed for a subarctic snowpack, for which an extensive data set including SSA measurements is available at Fairbanks, Alaska, for the winter season 2003/2004. While a reasonable agreement between simulated and observed SSA values is obtained using both parameterizations, the model tends to overestimate the SSA. This overestimation is more pronounced using the diagnostic equations than the prognostic equations. Part of the SSA deviation for both parameterizations can be attributed to differences between simulated and observed snow heights, densities, and temperatures. Therefore, further sensitivity studies regarding the thermal budget of the snowpack were performed. They revealed that reducing the thermal conductivity of the snow or increasing the turbulent fluxes at the snow surface leads to a slight improvement of the simulated thermal budget of the snowpack compared to the observations. However, their impact on further simulated parameters like snow height and SSA remains small. Including additional physical processes in the snow model may have the potential to advance the simulations of the thermal budget of the snowpack and, thus, the SSA simulations.
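Purely for illustration, a toy prognostic update of the kind described above (SSA decreasing with snow age, faster for warmer snow and larger temperature gradients) could look as follows; the functional form and coefficients are placeholders of ours, not the parameterizations actually implemented in CROCUS.

```python
# Toy prognostic SSA update: SSA decays with snow age, faster for warmer snow
# and larger temperature gradients. Form and coefficients are placeholders,
# NOT the parameterizations implemented in the upgraded CROCUS model.
import math

def ssa_step(ssa, dt_days, T_snow_K, grad_T_K_per_m,
             base_rate=0.02, temp_coeff=0.05, grad_coeff=0.001):
    """Advance SSA (m2/kg) over dt_days with a simple exponential decay."""
    rate = base_rate * math.exp(temp_coeff * (T_snow_K - 273.15)) \
           + grad_coeff * abs(grad_T_K_per_m)
    return ssa * math.exp(-rate * dt_days)

ssa = 60.0                      # fresh snow SSA, m2/kg -- illustrative
for day in range(30):
    ssa = ssa_step(ssa, 1.0, T_snow_K=263.0, grad_T_K_per_m=20.0)
print(round(ssa, 1))            # SSA after a month of metamorphism (toy value)
```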


This paper proposes an improved data compression technique relative to the existing Lempel-Ziv-Welch (LZW) algorithm. LZW is a dictionary-update based compression technique that stores substrings of the data as codes and reuses those codes when the strings recur. When the dictionary becomes full, every entry is removed so that the dictionary can accept new entries. The conventional method therefore ignores frequently used strings and discards all entries, which makes it an ineffective compression when the data to be compressed are large and contain many frequently occurring strings. This paper presents two new methods that improve on the existing LZW compression algorithm. In both, when the dictionary becomes full, only the entries that have not been used are removed, rather than clearing the entire dictionary as in the existing LZW algorithm. This is achieved by attaching a flag to every dictionary entry; whenever an entry is used, its flag is set. Thus, when the dictionary becomes full, the entries whose flag is set are kept and the others are discarded. In the first method the unused entries are discarded all at once, whereas in the second method they are removed one at a time, which gives the nascent dictionary entries more time to prove their usefulness. All three techniques give similar results when the data set is small, because they differ only in how they handle a full dictionary; the improvements therefore pay off only on relatively large data. On a data set yielding the best-case scenario, the compression ratio of conventional LZW is lower than that of improved LZW method 1, which in turn is lower than that of improved LZW method 2.
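A minimal sketch of the flag-marking idea, following our reading of method 1 (unused entries dropped all at once when the dictionary fills); a matching decoder must mirror the same pruning and renumbering, which is omitted here for brevity.

```python
# Sketch of LZW with flag-based dictionary pruning ("method 1" as we read it):
# when the dictionary is full, entries never used for output are dropped all at
# once instead of clearing the whole dictionary.
MAX_DICT_SIZE = 4096  # 12-bit code space -- assumed

def _prune(dictionary):
    """Keep single-byte entries and used entries; renumber codes compactly."""
    kept = [k for k, v in dictionary.items() if len(k) == 1 or v[1]]
    kept.sort(key=lambda k: dictionary[k][0])          # preserve code order
    return {k: [i, False] for i, k in enumerate(kept)}

def lzw_compress_flagged(data: bytes, max_size: int = MAX_DICT_SIZE):
    dictionary = {bytes([i]): [i, False] for i in range(256)}  # str -> [code, used]
    next_code, output, w = 256, [], b""
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
            continue
        dictionary[w][1] = True                        # flag the emitted entry
        output.append(dictionary[w][0])
        if next_code >= max_size:                      # dictionary full: prune
            dictionary = _prune(dictionary)
            next_code = len(dictionary)
        if next_code < max_size:
            dictionary[wc] = [next_code, False]
            next_code += 1
        w = bytes([byte])
    if w:
        output.append(dictionary[w][0])
    return output

codes = lzw_compress_flagged(b"TOBEORNOTTOBEORTOBEORNOT" * 100, max_size=300)
print(len(codes), "codes for", 24 * 100, "input bytes")
```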


2013 ◽  
Vol 48 (4) ◽  
pp. 321-332 ◽  
Author(s):  
Thibaud Maruéjouls ◽  
Peter A. Vanrolleghem ◽  
Geneviève Pelletier ◽  
Paul Lessard

Retention tanks (RTs) are commonly used to reduce combined sewer overflows, the management of which is an important way of reducing the impacts of urban development on receiving waters. However, overflow characteristics and the processes affecting them are not yet fully understood. In the context of integrated urban wastewater systems, RTs are managed mainly to satisfy hydraulic constraints, even though the idea behind such structures is to limit the discharge of pollutants to the environment. This study reports new insights into the settling processes and the pollutant behaviour occurring in an off-line RT. The authors first focus on the total suspended solids (TSS) and total chemical oxygen demand (CODt) dynamics at the inlet and the outlet of an RT. Secondly, they focus on the possible relationship between the variation of the particle settling velocity distribution and the TSS concentration dynamics. Finally, analysis of the TSS and CODt concentration evolution during tank emptying gives information on the interaction between wastewater retention time and settling performance.


1995 ◽  
Vol 04 (01) ◽  
pp. 1-11 ◽  
Author(s):  
Y. ZHAO ◽  
D. HUANG ◽  
C. WU ◽  
R. SHEN

The transmission of electromagnetic radiation through nonlinear one-dimensional photonic bandgap structures with different configurations is studied comparatively. It is found that the quarter-wavelength thickness arrangement gives rise to a wide window in the visible wavelength range, whereas the modulated superlattice scheme only produces a number of narrow windows. The scheme using random layer thicknesses is expected to open a very wide window, by making use of film nonlinearity, when the number of layers is sufficiently large. These nonlinear devices can be fabricated using available materials.
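As a point of reference for the quarter-wave arrangement, the linear (low-intensity) transmission of such a stack can be computed with the standard characteristic-matrix method; the sketch below ignores the film nonlinearity studied in the paper, and the refractive indices, design wavelength and layer count are illustrative.

```python
# Linear transfer-matrix sketch of a quarter-wave stack (normal incidence).
# Ignores the nonlinearity studied in the paper; indices, design wavelength
# and layer count are illustrative.
import numpy as np

def stack_transmittance(wavelength, layers, n_in=1.0, n_out=1.5):
    """layers: list of (refractive_index, thickness). Returns power transmittance."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2.0 * np.pi * n * d / wavelength
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])
    return 4.0 * n_in * n_out / abs(n_in * B + C) ** 2

lam0 = 550e-9                                   # design wavelength, m -- assumed
nH, nL = 2.3, 1.45                              # high/low index films -- assumed
layers = [(nH, lam0 / (4 * nH)), (nL, lam0 / (4 * nL))] * 8
for lam in (450e-9, 550e-9, 650e-9):
    print(f"{lam*1e9:.0f} nm  T = {stack_transmittance(lam, layers):.3f}")
```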


2014 ◽  
Vol 30 (2) ◽  
pp. 316-321 ◽  
Author(s):  
Chris Richter ◽  
Noel E. O’Connor ◽  
Brendan Marshall ◽  
Kieran Moran

The aim of this study is to propose a novel data analysis approach, an analysis of characterizing phases (ACP), that detects and examines phases of variance within a sample of curves utilizing the time, magnitude, and magnitude-time domains; and to compare the findings of ACP to discrete point analysis in identifying performance-related factors in vertical jumps. Twenty-five vertical jumps were analyzed. Discrete point analysis identified the initial-to-maximum rate of force development (P = .006) and the time from initial-to-maximum force (P = .047) as performance-related factors. However, due to intersubject variability in the shape of the force curves (i.e., non-, uni-, and bimodal nature), these variables were judged to be functionally erroneous. In contrast, ACP identified the ability to apply forces for longer (P < .038), generate higher forces (P < .027), and produce a greater rate of force development (P < .003) as performance-related factors. Analysis of characterizing phases showed advantages over discrete point analysis in identifying performance-related factors because it (i) analyses only related phases, (ii) analyses the whole data set, (iii) can identify performance-related factors that occur solely as a phase, (iv) identifies the specific phase over which differences occur, and (v) analyses the time, magnitude and combined magnitude-time domains.
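The ACP procedure itself is not spelled out in the abstract; purely to illustrate the "phases of variance" idea, a crude sketch could scan time-normalized force curves for contiguous intervals in which the point-wise correlation with jump performance is strong. The synthetic data, correlation measure and threshold below are our assumptions, not the authors' method.

```python
# Crude illustration of detecting "characterizing phases": contiguous intervals
# of time-normalized curves whose point-wise correlation with performance is
# strong. Data, correlation measure and threshold are assumptions; this is NOT
# the authors' ACP procedure.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_points = 25, 101                   # 25 jumps, 0-100 % of movement
t = np.linspace(0, 100, n_points)
performance = rng.normal(0.30, 0.05, n_subjects)             # jump height, m
# synthetic force curves whose late phase scales with performance
curves = (1.0 + np.outer(performance, np.exp(-((t - 70) / 15) ** 2))
          + rng.normal(0, 0.02, (n_subjects, n_points)))

# point-wise Pearson correlation between curve magnitude and performance
c = curves - curves.mean(axis=0)
p = performance - performance.mean()
r = (c * p[:, None]).sum(axis=0) / (
    np.sqrt((c ** 2).sum(axis=0)) * np.sqrt((p ** 2).sum()))

significant = np.abs(r) > 0.5                    # assumed threshold
edges = np.flatnonzero(np.diff(significant.astype(int)))
print("characterizing phase boundaries at % of movement:", t[edges])
```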


1971 ◽  
Vol 38 (1) ◽  
pp. 253-261 ◽  
Author(s):  
P. Lieberman

Blast shield materials are described by idealized equations-of-state in both one-dimensional loading and one-dimensional unloading. A square-wave applied pressure history is one boundary condition at the impingement surface of the material and, at the rear surface of the material(s), there is a stationary boundary condition. The objective is to select the equations-of-state that yield the greatest reduction in reflected stress at the rigid-boundary condition, for the smallest overall length, for application to a blast shield. One and two layers of material are considered. Idealizations of the equations-of-state, boundary conditions, length of each layer, and number of layers are arranged to yield simple analytical expressions for the relation of reflected stress to the overall length of the blast shield.
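As baseline intuition for the layer-selection problem, linear acoustics already gives the stress transmitted across an impedance mismatch and its doubling at a rigid wall. The sketch below uses those textbook relations with made-up material properties and a single-transit approximation; it does not reproduce the idealized loading/unloading equations of state analysed in the paper.

```python
# Baseline linear-acoustics sketch: stress transmitted through layer interfaces
# and doubled at a rigid rear wall (single-transit approximation). Impedances
# are made up; the paper's idealized hysteretic equations of state are not
# reproduced here.
def transmitted_stress(sigma_incident, Z1, Z2):
    """Stress transmission across an interface between impedances Z1 -> Z2."""
    return sigma_incident * 2.0 * Z2 / (Z1 + Z2)

def stress_at_rigid_wall(sigma_applied, impedances, Z_source):
    """Follow the first transmitted pulse through the layers, then double it
    at the rigid boundary."""
    sigma, Z_prev = sigma_applied, Z_source
    for Z in impedances:
        sigma = transmitted_stress(sigma, Z_prev, Z)
        Z_prev = Z
    return 2.0 * sigma                       # rigid wall doubles the stress

# one soft layer vs. a soft/stiff two-layer shield (impedances in MPa*s/m, assumed)
print(stress_at_rigid_wall(1.0, [0.5], Z_source=1.5))
print(stress_at_rigid_wall(1.0, [0.5, 3.0], Z_source=1.5))
```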


2011 ◽  
Vol 09 (05) ◽  
pp. 631-645 ◽  
Author(s):  
WENLONG TANG ◽  
HONGBAO CAO ◽  
JUNBO DUAN ◽  
YU-PING WANG

With the development of genomic techniques, the demand for new methods that can handle high-throughput genome-wide data effectively is becoming stronger than ever before. Compressed sensing (CS) is an emerging approach in statistics and signal processing. With CS theory, a signal can be uniquely reconstructed or approximated from its sparse representation, which can therefore better distinguish different types of signals. However, the application of the CS approach to genome-wide data analysis has rarely been investigated. We propose a novel CS-based approach for genomic data classification and test its performance in the subtyping of leukemia through gene expression analysis. The detection of subtypes of cancers such as leukemia according to different genetic markers is significant, as it holds promise for the individualization of therapies and the improvement of treatments. In our work, four statistical features were employed to select significant genes for the classification. With the genes selected from the 7,129 available, the proposed CS method achieved a classification accuracy of 97.4% when evaluated with cross validation and 94.3% when evaluated with another independent data set. The robustness of the method to noise was also tested, giving good performance. This work therefore demonstrates that the CS method can effectively detect subtypes of leukemia, implying improved accuracy in the diagnosis of leukemia.
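The abstract does not spell out the classifier; one common CS-style construction, shown purely as a sketch, represents a test expression profile as a sparse combination of training profiles (l1-regularized) and assigns the class with the smallest reconstruction residual. The gene-selection step is omitted and the data below are placeholders.

```python
# Sketch of a CS-style classifier: express a test profile as a sparse
# combination of training profiles (l1-regularized least squares) and pick the
# class with the smallest reconstruction residual. One common construction,
# not necessarily the exact formulation used in the paper.
import numpy as np
from sklearn.linear_model import Lasso

def cs_classify(X_train, y_train, x_test, alpha=0.01):
    A = X_train.T                                   # columns = training samples
    coef = Lasso(alpha=alpha, max_iter=10000).fit(A, x_test).coef_
    residuals = {}
    for label in np.unique(y_train):
        c = np.where(y_train == label, coef, 0.0)   # keep this class's coefficients
        residuals[label] = np.linalg.norm(x_test - A @ c)
    return min(residuals, key=residuals.get)

# toy "expression" data: 2 subtypes, 40 (pre-selected) genes
rng = np.random.default_rng(1)
centers = rng.normal(0, 1, (2, 40))
X_train = np.vstack([centers[i] + rng.normal(0, 0.3, (10, 40)) for i in (0, 1)])
y_train = np.repeat([0, 1], 10)
x_test = centers[1] + rng.normal(0, 0.3, 40)
print("predicted subtype:", cs_classify(X_train, y_train, x_test))
```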


2011 ◽  
Vol 149 (6) ◽  
pp. 701-712 ◽  
Author(s):  
R. CAO ◽  
M. FRANCISCO-FERNÁNDEZ ◽  
A. ANAND ◽  
F. BASTIDA ◽  
J. L. GONZÁLEZ-ANDÚJAR

SUMMARY. Hydrothermal time (HTT) is a valuable environmental synthesis to predict weed emergence. However, weed scientists face practical problems in determining the best soil depth at which to calculate it. Two different types of measures are proposed for this: moment-based indices and probability density-based indices. Owing to the monitoring process, it is not possible to observe the exact emergence time of every seedling; emergence times are therefore observed not individually, seedling by seedling, but in an aggregated way. To address this, new methods to estimate the proposed indices are derived, using grouped-data estimators and kernel density estimators. The proposed methods are exemplified with an emergence data set of Bromus diandrus. The results indicate that hydrothermal timing at 50 mm is more useful than that at 10 mm.
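A minimal sketch of the grouped-data idea: emergence counts observed per monitoring interval (in hydrothermal time) can be turned into a density estimate by spreading each interval's count over a Gaussian kernel at the interval midpoint, from which a moment-based index follows. The intervals, counts and bandwidth below are illustrative, not the estimators derived in the paper.

```python
# Sketch of a kernel density estimate from grouped (interval-censored) emergence
# data: each monitoring interval's seedling count is spread over a Gaussian
# kernel at the interval midpoint. Intervals, counts and bandwidth are
# illustrative, not the estimators derived in the paper.
import numpy as np

# (HTT_start, HTT_end, seedlings emerged in that interval) -- illustrative
grouped = [(0, 50, 2), (50, 100, 15), (100, 150, 30), (150, 200, 10), (200, 300, 3)]

def grouped_kde(x, grouped, bandwidth=25.0):
    total = sum(count for _, _, count in grouped)
    dens = np.zeros_like(x, dtype=float)
    for lo, hi, count in grouped:
        mid = 0.5 * (lo + hi)
        dens += (count / total) * np.exp(-0.5 * ((x - mid) / bandwidth) ** 2) \
                / (bandwidth * np.sqrt(2 * np.pi))
    return dens

x = np.linspace(0, 350, 701)
dens = grouped_kde(x, grouped)
mean_htt = np.trapz(x * dens, x)          # moment-based index from the density
print("estimated mean emergence HTT:", round(mean_htt, 1))
```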

