INDUSTRIAL MIXING OF PARTICULATE SOLIDS: PRESENT PRACTICES AND FUTURE EVOLUTION

Author(s):  
Cendrine Gatumel ◽  
Henri Berthiaux ◽  
Vadim E. Mizonov

Powder mixing is part of our everyday life, but it is also a source of major industrial concerns. Mixing is widely used in many industries, yet the design of mixing technology and mixing equipment still belongs more to engineering art than to scientifically based calculation. Each branch of industry develops its own experience in the field, mostly based on time- and labour-consuming experimental research, and very often the results obtained cannot be used directly in another branch; in other words, the simulation and calculation of mixing are far from universal. It is therefore important to separate the general, cross-sectorial problems of the theory and practice of mixing from particular sectorial ones, and to concentrate the attention of researchers and engineers on their solution, so as to build a general basis for the scientifically based design of mixing technology and equipment. Current problems concern the definition of mixture homogeneity, the ways of measuring it, sampling errors and techniques, the segregability of mixtures during powder handling operations, mixer selection, and mixer design. In this paper, we review these aspects and try to draw some perspectives from a combination of industrial experience and a chemical engineering approach: the development of on-line monitoring techniques to assess homogeneity and further control the process; the improvement of mixer scale-up procedures, as well as the optimisation of mixer design and operation; the development of new, multifunctional, nearly "universal" mixing technologies, with a special emphasis on continuous processes; and the completion of the current standards on powder homogeneity by introducing structural information.
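As a concrete illustration of the homogeneity measures discussed above, blend uniformity is commonly quantified by the coefficient of variation (relative standard deviation) of the key-component concentration over a set of spot samples. The sketch below is a minimal Python example of this index and is not taken from the paper; the sample values are invented.

```python
import numpy as np

def mixing_index(samples):
    """Coefficient of variation (relative standard deviation) of the
    key-component concentration across spot samples.

    `samples` is a 1-D array of measured concentrations, one per sample.
    Lower values indicate a more homogeneous mixture.
    """
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean()
    std = samples.std(ddof=1)   # sample standard deviation
    return std / mean

# Example: ten spot samples of the key component (mass fraction)
samples = [0.101, 0.098, 0.103, 0.097, 0.102, 0.099, 0.100, 0.104, 0.096, 0.100]
print(f"CoV = {mixing_index(samples):.3f}")   # ~0.026, i.e. 2.6 % RSD
```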

2001 ◽  
Vol 9 (3) ◽  
pp. 329-354 ◽  
Author(s):  
Michael Emmerich ◽  
Monika Grötzner ◽  
Martin Schütz

This paper describes the adaptation of evolutionary algorithms (EAs) to the structural optimization of chemical engineering plants, using rigorous process simulation combined with realistic costing procedures to calculate objective function values. To represent chemical engineering plants, a network representation with typed vertices and variable structure is introduced. For this representation, we present a technique for creating problem-specific search operators and applying them in stochastic optimization procedures. The applicability of the approach is demonstrated on a reference example. The design of the algorithms follows the systematic framework of metric-based evolutionary algorithms (MBEAs), a special class of evolutionary algorithms that fulfil certain guidelines for the design of search operators, whose benefits have been proven in theory and practice. MBEAs rely upon a suitable definition of a metric on the search space, and the definition of such a metric for the graph representation is one of the main issues discussed in this paper. Although this article deals with the problem domain of chemical plant optimization, the algorithmic design can easily be transferred to similar network optimization problems. A useful distance measure for search spaces of variable dimensionality is suggested.
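To make the role of such a metric concrete, the sketch below shows one simple way to define a distance between two flowsheet graphs of different size, based on the symmetric difference of their unit and stream sets. It is an illustrative assumption, not the metric proposed in the paper; all names and the data layout are hypothetical.

```python
def flowsheet_distance(plant_a, plant_b):
    """Illustrative distance between two flowsheet graphs of possibly
    different size.

    Each plant is modelled as (units, streams), where `units` is a set of
    (name, unit_type) pairs and `streams` is a set of (source, target) pairs.
    The distance counts units and streams present in one flowsheet but not
    the other (a symmetric-difference metric), so plants of different
    dimensionality remain comparable.
    """
    units_a, streams_a = plant_a
    units_b, streams_b = plant_b
    unit_diff = len(units_a ^ units_b)        # symmetric difference of vertex sets
    stream_diff = len(streams_a ^ streams_b)  # symmetric difference of edge sets
    return unit_diff + stream_diff

# Example: two small flowsheets sharing a reactor but differing downstream
plant_1 = ({("R1", "reactor"), ("C1", "column")},
           {("feed", "R1"), ("R1", "C1")})
plant_2 = ({("R1", "reactor"), ("F1", "flash")},
           {("feed", "R1"), ("R1", "F1")})
print(flowsheet_distance(plant_1, plant_2))  # 4: two units and two streams differ
```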


Author(s):  
W.J. de Ruijter ◽  
Sharma Renu

Established methods for the measurement of lattice spacings and angles of crystalline materials include x-ray diffraction, microdiffraction and HREM imaging. Structural information from HREM images is normally obtained off-line with the traveling table microscope or by the optical diffractogram technique. We present a new method for the precise measurement of lattice vectors from HREM images using an on-line computer connected to the electron microscope. It has already been established that an image of crystalline material can be represented by a finite number of sinusoids. The amplitude and phase of these sinusoids are affected by the microscope transfer characteristics, which are strongly influenced by the settings of defocus, astigmatism and beam alignment. However, the frequency of each sinusoid is solely a function of the overall magnification and of the periodicities present in the specimen. After proper calibration of the overall magnification, lattice vectors can therefore be measured unambiguously from HREM images.

Measurement of lattice vectors is a statistical parameter estimation problem similar to the amplitude, phase and frequency estimation of sinusoids in one-dimensional signals as encountered, for example, in radar, sonar and telecommunications. It is important to properly model the observations, the systematic errors and the non-systematic errors. The observations are modelled as a sum of (two-dimensional) sinusoids; in the present study the components of the frequency vector of the sinusoids are the only parameters of interest. Non-systematic errors in recorded electron images are described as white Gaussian noise, while the most important systematic error is geometric distortion. Lattice vectors are measured using a two-step procedure. First, a coarse search is obtained using a fast Fourier transform on an image section of interest. Prior to Fourier transformation the image section is multiplied by a window which gradually falls off to zero at the edges. The user interactively indicates the periodicities of interest by selecting spots in the digital diffractogram. A fine search for each selected frequency is then implemented using a bilinear interpolation that depends on the window function. It is possible to refine the estimate even further using non-linear least squares estimation; the first two steps provide proper starting values for the numerical minimization (e.g. Gauss-Newton). This third step increases the precision by 30%, to the highest theoretically attainable (the Cramér-Rao lower bound).

In the present studies we use a Gatan 622 TV camera attached to the JEM 4000EX electron microscope. Image analysis is implemented on a MicroVAX II computer equipped with a powerful array processor and real-time image processing hardware. The typical precision, defined as the standard deviation of the distribution of measurement errors, is found to be <0.003 Å measured on single-crystal silicon and <0.02 Å measured on small (10-30 Å) specimen areas. These values are about ten times larger than predicted by theory. Furthermore, the measured precision is observed to be independent of the signal-to-noise ratio (determined by the number of averaged TV frames). Evidently, the precision is limited mainly by the geometric distortion of the TV camera. For this reason, we are replacing the Gatan 622 TV camera with a modern high-grade CCD-based camera system. Such a system not only has negligible geometric distortion, but also a high dynamic range (>10,000) and high resolution (1024x1024 pixels). The geometric distortion of the projector lenses can be measured and corrected through re-sampling of the digitized image.
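The coarse-search step of the two-step procedure can be sketched as follows: window the image section, take the 2-D FFT and pick the strongest non-DC peaks of the power spectrum. This is only an illustrative Python sketch under assumed conventions (Hanning window, simple peak masking); the interactive spot selection, the window-dependent bilinear fine search and the Gauss-Newton refinement are omitted.

```python
import numpy as np

def coarse_lattice_frequencies(image, n_peaks=2):
    """Coarse estimate of lattice frequencies from an HREM image section:
    apply a window that falls off to zero at the edges, take the 2-D FFT,
    and pick the strongest non-DC peaks of the power spectrum.

    Returns frequency vectors in cycles per pixel.
    """
    ny, nx = image.shape
    window = np.outer(np.hanning(ny), np.hanning(nx))   # taper to zero at edges
    spectrum = np.fft.fftshift(np.abs(np.fft.fft2(image * window)) ** 2)

    # suppress the central (DC) region before peak picking
    cy, cx = ny // 2, nx // 2
    spectrum[cy - 2:cy + 3, cx - 2:cx + 3] = 0.0

    peaks = []
    for _ in range(n_peaks):
        iy, ix = np.unravel_index(np.argmax(spectrum), spectrum.shape)
        peaks.append(((iy - cy) / ny, (ix - cx) / nx))   # frequency vector
        spectrum[iy - 2:iy + 3, ix - 2:ix + 3] = 0.0     # mask this peak
    return peaks
```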


2020 ◽  
Vol 23 (8) ◽  
pp. 906-921
Author(s):  
R.A. Alborov ◽  
S.M. Kontsevaya ◽  
S.V. Kozmenkova

Subject. This article deals with theory- and practice-relevant issues of the classification and content of the different types of capital used as sources of financing of operations, and with recommendations for developing their accounting in agricultural organizations. Objectives. The article aims to substantiate the organizational and methodological aspects of developing capital accounting so as to generate information on the stock of value and the creation of new value in the organization's integrated reporting. It also aims to define the classification and content of the types of capital as sources of financing of the organization's activities and to develop recommendations for accounting for the availability, increase, reduction or transformation of the relevant types of capital in the organization's business activities. Methods. For the study, we used the methods of analysis and synthesis, induction and deduction, analogy, and comparison. The scientific works of domestic specialists and regulations, including the International Standard on Integrated Reporting (IR), form the methodological basis of this work. Results. The article defines conceptual provisions and offers practical recommendations on the set-up and development of capital flow accounting in the corporate governance system of an agricultural organization. It clarifies the classification and economic content of capital as a source of funding for the organization's reproduction activities. The article also offers an original method of accounting for the stock (balances) of value and for changes in capital. Conclusions and Relevance. The practical application of the developed recommendations for accounting for value and capital changes will help generate all the information needed in the integrated reporting of an agricultural organization to assess its stock of value, the creation of new value, and the economic, environmental, and social efficiency of its activities. The results of the study can be used to develop the theory, methodology and techniques of accounting for the types of capital as sources of financing of value creation in the course of the agricultural organization's business activities.


2020 ◽  
Vol 3 (3) ◽  
pp. 32-37
Author(s):  
Shavkat Abdullayev

The article discusses the theoretical foundations, current status and ways of improving consumer lending in Uzbekistan. The views of foreign and domestic scientists on the definition of consumer credit are reviewed, the shortcomings of consumer credit are analysed, and ways to address them are proposed.


1989 ◽  
Vol 21 (8-9) ◽  
pp. 1057-1064 ◽  
Author(s):  
Vijay Joshi ◽  
Prasad Modak

Waste load allocation for rivers has been a topic of growing interest. Dynamic programming based algorithms are particularly attractive in this context and are widely reported in the literature. Codes developed for dynamic programming are, however, complex, require substantial computer resources and, importantly, do not allow user interaction. Further, there is always resistance to utilizing mathematical programming based algorithms for practical applications. There has therefore always been a gap between theory and practice in systems analysis for water quality management. This paper presents various heuristic algorithms to bridge this gap, with supporting comparisons against dynamic programming based algorithms. These heuristics make good use of the insight into the system's behaviour gained through experience, a process akin to the one adopted by field personnel, and can therefore readily be understood by a user familiar with the system. They also allow user preferences to enter decision making via on-line interaction. Experience has shown that these heuristics are indeed well founded and compare very favourably with the sophisticated dynamic programming algorithms. Two examples are included which demonstrate the success of the heuristic algorithms.
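By way of illustration, a heuristic of the kind described above might greedily assign additional treatment to the discharger offering the cheapest improvement per unit of water quality until the target is met. The sketch below is a hypothetical toy example, not one of the paper's heuristics; the field names and the cost model are invented.

```python
def greedy_waste_load_allocation(dischargers, quality_deficit, step=0.05):
    """Toy greedy heuristic for waste load allocation (illustration only).

    `dischargers` is a list of dicts with keys:
        'removal'       - current fractional BOD removal (0-1)
        'max_removal'   - upper limit on removal
        'marginal_cost' - cost of one extra step of removal
        'impact'        - water-quality improvement per step of removal
    `quality_deficit` is the remaining improvement needed at the critical
    point.  At each iteration the cheapest effective upgrade is chosen.
    """
    while quality_deficit > 0:
        candidates = [d for d in dischargers
                      if d['removal'] + step <= d['max_removal']]
        if not candidates:
            raise RuntimeError("target cannot be met with available removals")
        # pick the discharger with the lowest cost per unit of improvement
        best = min(candidates, key=lambda d: d['marginal_cost'] / d['impact'])
        best['removal'] += step
        quality_deficit -= best['impact']
    return [d['removal'] for d in dischargers]
```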


1998 ◽  
Vol 38 (2) ◽  
pp. 9-15 ◽  
Author(s):  
J. Guan ◽  
T. D. Waite ◽  
R. Amal ◽  
H. Bustamante ◽  
R. Wukasch

A rapid method of determining the structure of aggregated particles using small-angle laser light scattering is applied here to assemblages of bacteria from wastewater treatment systems. The structural information so obtained is suggestive of fractal behaviour, as found by other methods. Strong dependencies are shown to exist between the fractal structure of the bacterial aggregates and the behaviour of the biosolids in zone settling and in dewatering by both pressure filtration and centrifugation. More rapid settling and significantly higher solids contents are achievable for "looser" flocs characterised by lower fractal dimensions. The rapidity with which the structural information can be determined, and the strong dependence of the effectiveness of a number of wastewater treatment processes on aggregate structure, suggest that this method may be particularly useful as an on-line control tool.
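For a mass-fractal aggregate, the scattered intensity in the fractal regime follows the power law I(q) ~ q^(-D_f), so the fractal dimension can be read off the slope of log I versus log q. The sketch below illustrates this standard estimate on synthetic data; it is not the authors' analysis code.

```python
import numpy as np

def fractal_dimension_from_scattering(q, intensity):
    """Estimate the mass fractal dimension D_f of an aggregate from
    small-angle light-scattering data, assuming a power-law regime
    I(q) ~ q**(-D_f).

    `q` (scattering vector magnitudes) and `intensity` must be restricted
    to the fractal regime (between the inverse aggregate size and the
    inverse primary-particle size) before calling this function.
    The slope of log I versus log q gives -D_f.
    """
    slope, _ = np.polyfit(np.log(q), np.log(intensity), 1)
    return -slope

# Synthetic example: an aggregate with D_f = 1.8 plus a little noise
rng = np.random.default_rng(0)
q = np.logspace(-1, 1, 30)
intensity = q ** -1.8 * (1 + 0.02 * rng.standard_normal(q.size))
print(f"D_f ~ {fractal_dimension_from_scattering(q, intensity):.2f}")
```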


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Anna Concas ◽  
Lothar Reichel ◽  
Giuseppe Rodriguez ◽  
Yunzi Zhang

This paper introduces the notions of chained and semi-chained graphs. The chain of a graph, when it exists, refines the notion of bipartivity and conveys important structural information. The notion of a center vertex $v_c$ is also introduced: it is a vertex whose sum of the p-th powers of distances to all other vertices in the graph is minimal, where the distance between a pair of vertices $\{v_c, v\}$ is measured by the minimal number of edges that have to be traversed to go from $v_c$ to $v$. This concept extends the definition of closeness centrality. Applications in which the center node is important include information transmission and city planning. Algorithms for the identification of approximate central nodes are provided and computed examples are presented.
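A brute-force version of the center-vertex definition above can be written directly from the distance criterion; the sketch below uses breadth-first search on an unweighted graph and is only an illustration, not the approximate algorithms provided in the paper.

```python
from collections import deque

def center_vertex(adj, p=1):
    """Find a center vertex of an unweighted, connected graph: the vertex
    minimizing the sum of p-th powers of shortest-path distances to all
    other vertices (p = 1 corresponds to maximal closeness centrality).
    `adj` maps each vertex to an iterable of neighbours.
    """
    def bfs_distances(source):
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return dist

    best_vertex, best_score = None, float("inf")
    for v in adj:
        score = sum(d ** p for d in bfs_distances(v).values())
        if score < best_score:
            best_vertex, best_score = v, score
    return best_vertex

# Example: a path graph 1-2-3-4-5; the middle vertex 3 is the center
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(center_vertex(path, p=2))  # -> 3
```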


Author(s):  
Peter F. Pelz ◽  
Stefan S. Stonjek

Acceptance tests on large fans to prove the performance (efficiency and total pressure rise) to the customer are expensive and sometimes even impossible to perform. Hence there is a need for the manufacturer to reliably predict the performance of fans from measurements on down-scaled test fans. The commonly used scale-up formulas give satisfactory results only near the design point, where inertia losses are small in comparison with frictional losses. At part load and overload the inertia losses are dominant and the scale-up formulas used so far fail. In 2013 Pelz and Stonjek introduced a new scaling method which fulfils these demands [1, 2]. This method considers the influence of surface roughness and geometric variations on the performance. It basically consists of two steps. First, the efficiency is scaled; efficiency scaling is derived analytically from the definition of the total efficiency, and with the total derivative it can be shown that the change of friction coefficient is inversely proportional to the change of efficiency of a fan. The second step shifts the performance characteristic to a higher value of the flow coefficient. The task of this work is to improve the scaling method previously introduced by Pelz and Stonjek by treating the rotor/impeller and the volute/stator separately. The validation of the improved scale-up method is performed with test data from two axial fans with diameters of 1000 mm and 250 mm and three centrifugal fans with diameters of 2240 mm, 896 mm and 224 mm. The predicted performance characteristics show good agreement with the test data.
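For orientation, a classical Ackeret-type efficiency scale-up of this general family splits the losses into a scale-independent part and a Reynolds-number-dependent part. The sketch below shows that generic relation only; it is not the Pelz and Stonjek method described in the paper, and the split fraction and exponent are assumed values.

```python
def scaled_efficiency(eta_model, re_model, re_fullscale, v_frac=0.5, n=0.2):
    """Classical Ackeret-type efficiency scale-up (generic illustration).

    Only a fraction `v_frac` of the losses is assumed to be Reynolds-number
    dependent and to scale with (Re_model / Re_fullscale)**n; the rest is
    treated as scale-independent.
    """
    loss_model = 1.0 - eta_model
    loss_full = loss_model * ((1.0 - v_frac)
                              + v_frac * (re_model / re_fullscale) ** n)
    return 1.0 - loss_full

# Example: a model fan with 82 % efficiency, scaled up by a factor of 10 in Re
print(f"{scaled_efficiency(0.82, 1.0e6, 1.0e7):.3f}")   # about 0.853
```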


1975 ◽  
Vol 42 (3) ◽  
pp. 552-556 ◽  
Author(s):  
A. J. Padgaonkar ◽  
K. W. Krieger ◽  
A. I. King

The computation of the angular acceleration of a rigid body from measured linear accelerations is a simple procedure based on well-known kinematic principles. It can be shown that, in theory, a minimum of six linear accelerometers is required for a complete definition of the kinematics of a rigid body. However, recent attempts in impact biomechanics to determine the general three-dimensional motion of body segments were unsuccessful when only six accelerometers were used. This paper demonstrates the cause of this inconsistency between theory and practice and specifies the conditions under which the method fails. In addition, an alternative method based on a special nine-accelerometer configuration is proposed. The stability and superiority of this approach are shown by the use of hypothetical as well as experimental data.
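The nine-accelerometer (3-2-2-2) arrangement places a triaxial accelerometer at the origin and two accelerometers on each of three orthogonal arms; the cross-differences of the arm readings cancel the centripetal terms, so the angular accelerations follow without knowledge of the angular velocities. The formulas below are the standard ones quoted in the literature for this configuration; the function layout and variable names are illustrative.

```python
def angular_acceleration_3222(a0, x_arm, y_arm, z_arm, rho_x, rho_y, rho_z):
    """Angular acceleration of a rigid body from a 3-2-2-2 nine-accelerometer
    array (triaxial unit at the origin plus two accelerometers on each of
    three orthogonal arms).

    a0    = (ax0, ay0, az0)  accelerations at the origin
    x_arm = (ay_x, az_x)     y- and z-accelerations at the arm on the x axis
    y_arm = (ax_y, az_y)     x- and z-accelerations at the arm on the y axis
    z_arm = (ax_z, ay_z)     x- and y-accelerations at the arm on the z axis
    rho_* = arm lengths

    The cross-difference of the two arm readings cancels the centripetal
    (omega x (omega x r)) terms, so no angular velocities are needed.
    """
    ax0, ay0, az0 = a0
    ay_x, az_x = x_arm
    ax_y, az_y = y_arm
    ax_z, ay_z = z_arm

    wdot_x = (az_y - az0) / (2 * rho_y) - (ay_z - ay0) / (2 * rho_z)
    wdot_y = (ax_z - ax0) / (2 * rho_z) - (az_x - az0) / (2 * rho_x)
    wdot_z = (ay_x - ay0) / (2 * rho_x) - (ax_y - ax0) / (2 * rho_y)
    return wdot_x, wdot_y, wdot_z
```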

