MATHEMATICAL MODELING OF THE COVERING OPTIMIZATION OF THE ROUND BUILDINGS IN A PLAN WITH A RADIAL BEAM POSITION

Author(s):
A.E. Yanin
S.N. Novikova

The article presents the results of optimizing the angle between the radial beams in the floor of a building that is circular in plan. The beams rest on a central post at one end and on vertical supporting structures arranged along the circle at the other. Steel decking is laid on the beams. The angle between the beams is determined so that the combined mass of the beams and the decking is minimal; this angle is considered optimal. To solve the problem, an objective function expressing the cost of the decking and the radial beams per unit floor area is used. This function depends on the angle between the beams. Using differentiation, the minimum of the objective function and the corresponding value of the optimal angle were found. The thickness of the decking was determined from the requirement of ensuring its stiffness. The welded composite radial beams are assumed to have an I-section with two axes of symmetry, and the beam height corresponds to equal areas of the flanges and the web. The optimal angle between the beams was determined from the condition of beam strength under normal stresses. A triangular distributed load is adopted in the design diagram of the beam. The cross-sectional dimensions of the beam were determined from the equality of the required and actual section moduli and were substituted into the objective cost function. The study took into account that, at the optimal angle, the deflection of a beam can exceed the limiting standard value. By solving the system of strength and stiffness equations, a formula is obtained for the minimum angle between the beams that satisfies the stiffness condition. The mathematical study showed that the stiffness of a beam can also be ensured at the optimal angle between the beams, provided that the slenderness of the beam web exceeds a certain minimum value.
Analysis of the formula for the minimum web slenderness showed that it is proportional to the sixth power of the design resistance of the steel. Therefore, to ensure that the beam deflection does not exceed the limiting value at the optimal angle, low-strength steel must be used. To confirm the practical feasibility of the proposed method, the problem was solved with specific numerical data. The results confirmed that the problem is practically meaningful at relatively low steel strength. In addition, it turned out that the optimal angle between the beams does not depend on the beam span.
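The optimization scheme the abstract describes, an objective function for cost per unit floor area minimized over the angle between beams, can be sketched numerically. The cost terms and coefficients below are invented for illustration; only the structure of the trade-off (deck cost rising with beam spacing, beam cost per unit area falling) mirrors the problem:

```python
import math

def unit_cost(phi):
    """Hypothetical cost per unit floor area as a function of the
    angle phi (radians) between adjacent radial beams.
    Wider spacing needs thicker decking; narrower spacing needs
    more beams. Coefficients and exponents are invented."""
    deck = 12.0 * phi ** 1.5   # assumed decking term
    beams = 9.0 / phi          # assumed beam term
    return deck + beams

def golden_section_min(f, a, b, tol=1e-6):
    """Golden-section search for the minimum of a unimodal f on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

phi_opt = golden_section_min(unit_cost, 0.05, 1.0)
```

For this toy objective the stationary point can be checked by hand: setting the derivative 18·φ^0.5 − 9/φ² to zero gives φ = 0.5^0.4 ≈ 0.758 rad, which the search reproduces.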

2018
Vol 45 (5)
pp. 393-406
Author(s):
Aladdin Alwisy
Ahmed Bouferguene
Mohamed Al-Hussein

Target value design (TVD) principles set the main guidelines for the design-estimate process, allowing efficient exploration of available construction alternatives and thereby helping construction companies reduce cost-to-design and cost-to-build and improve the quality of construction projects. The successful application of TVD requires a clear understanding of the interactions among construction components. The proposed target cost modelling approach introduces an algorithmic, factor-based framework that advances TVD by supporting the design-estimate process through an examination of the relationships among building components and their direct and indirect impact on overall project cost and value. Construction factors control compatibility and performance analysis among available construction alternatives. Costing factors contribute to the development of mathematical costing models capable of automatically calculating the cost of compatible alternatives. Finally, rule-based analysis, developed in an appropriate programming environment, executes alternative value analysis to produce a detailed estimate with an improved overall value for construction projects.
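A minimal sketch of the rule-based flow the abstract outlines: construction factors filter out incompatible alternatives, costing factors compute the cost of the compatible ones, and the alternative with the best value inside the target cost is selected. All components, rules, prices, and the value metric below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    material: str
    unit_cost: float   # $/m^2, invented
    r_value: float     # thermal performance, invented

def compatible(wall, cladding):
    """Construction factor: a hypothetical compatibility rule table."""
    rules = {("wood_frame", "brick_veneer"): False}
    return rules.get((wall.material, cladding.material), True)

walls = [Alternative("W1", "wood_frame", 45.0, 3.0),
         Alternative("W2", "steel_stud", 60.0, 2.5)]
claddings = [Alternative("C1", "vinyl", 20.0, 0.5),
             Alternative("C2", "brick_veneer", 55.0, 0.8)]

area = 100.0          # m^2, assumed
target_cost = 8000.0  # $, assumed

# Costing factor: cost model per compatible pair; value = performance per dollar.
candidates = []
for w in walls:
    for c in claddings:
        if not compatible(w, c):
            continue
        cost = (w.unit_cost + c.unit_cost) * area
        if cost <= target_cost:
            candidates.append((w.name, c.name, cost, (w.r_value + c.r_value) / cost))

best = max(candidates, key=lambda t: t[3])  # best value within the target cost
```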


2021
Vol 296
pp. 06030
Author(s):
G.I. Moskvitin
A.B. Pismennaya
P.N. Abroskin
V.V. Korsakova

The practice of management suggests that the experience and knowledge of managers alone cannot always yield an optimal, or even rational, solution without additional scientifically grounded methods for assessing the effectiveness of possible options for organizing transport processes. All this created a need for scientific methods of making (developing, supporting, justifying) decisions that would produce specific recommendations for the manager in charge of a facility in difficult situations. These methods included the formulation of effectiveness measures as the maximum or minimum of a single indicator, the objective function, with or without constraints, together with various algorithms for finding the values of the arguments that deliver the required value of this function. Subsequently, mathematical methods began to develop for solving multi-criteria problems, which involve different goals of the operation and, accordingly, use a set of alternative effectiveness measures.
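The single-indicator scheme described above, maximizing one target function under constraints, can be illustrated with a toy fleet-assignment problem; all figures are invented. The indicator is delivered tonnage, constrained by fleet size and a fuel budget:

```python
# Hypothetical data: two routes A and B.
TONNES = (8.0, 5.0)      # tonnes delivered per truck on each route
FUEL   = (120.0, 60.0)   # litres of fuel per truck on each route
FLEET = 10               # trucks available (constraint 1)
BUDGET = 900.0           # litres of fuel available (constraint 2)

# Exhaustive search over integer assignments (a, b) of trucks to routes,
# keeping the assignment that maximizes the single target indicator.
best = None
for a in range(FLEET + 1):
    for b in range(FLEET + 1 - a):          # a + b <= FLEET
        if a * FUEL[0] + b * FUEL[1] > BUDGET:
            continue                         # fuel constraint violated
        tonnage = a * TONNES[0] + b * TONNES[1]
        if best is None or tonnage > best[0]:
            best = (tonnage, a, b)
```

With these numbers the optimum assigns five trucks to each route, delivering 65 tonnes while exactly exhausting the fuel budget.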


2007
Vol 40 (2)
pp. 250-259
Author(s):
Boualem Hammouda
David F. R. Mildner

The resolution of small-angle neutron scattering instruments is investigated for the case where refractive optics (lenses or prisms) are used. The appropriate equations are derived to describe the position and the spatial variance of the neutron beam at the detector in the horizontal and vertical directions, and the minimum value of the scattering vector. This is given for the spectrometer without any additional optics, and with the insertion of converging lenses or prisms. The addition of the lenses decreases the sample-aperture contribution to the resolution to enable an increase of neutron current at the sample. They also reduce the size of the penumbra of the beam at the detector, thereby lowering the minimum value of the scattering vector. The prisms correct the effect of gravity on the vertical beam position, and make the beam spot less asymmetric.


2017
Vol 19 (2(64))
pp. 157-163
Author(s):  
A.G. Kukharchyk

In the article, questions of cost optimization in solving the transport problem using mathematical models are considered. A group of criteria that have the greatest influence on the solution of the transport problem is identified. The mathematical model of the transport problem makes it possible to describe a multitude of situations that arise in multimodal transport. Formulating the goal as an optimization problem makes the task more economical on the one hand; on the other, knowledge of economic and mathematical methods allows the problem to be solved more effectively. Justifying the choice of an optimization criterion is a procedure that cannot be fully formalized; it must be performed taking into account the performance indicators of transport and the interrelationships between them. The common approach to choosing and justifying an optimization criterion is usually based on the following circumstance: only a measure that can be quantified is chosen as a criterion. Most often a single indicator (characteristic) of the process is justified as the criterion; less often, a group of criteria, depending on which one speaks of single-criterion or multi-criteria problems. As can be seen from the above, each optimality criterion has advantages and disadvantages, which most often stem from the synthetic nature of the criterion, the difficulty of preparing information in the form of an array of coefficients for the unknowns in the objective function, and the narrower or broader scope of its application. The selection and justification of the optimization criterion are performed with all these circumstances taken into account in each particular case. In conclusion, it should be noted that all of these criteria are meaningful in tasks where the volume of traffic is predetermined.
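The single-criterion transportation problem discussed above can be illustrated on a toy balanced instance; all data are invented. With two suppliers and two consumers, the supply and demand balance equations leave one free variable, so the cost-minimal shipping plan can be found by enumeration:

```python
# Balanced toy instance: total supply = total demand = 50 units.
supply = [20, 30]
demand = [25, 25]
cost = [[4, 6],   # cost[i][j]: cost of shipping one unit from supplier i
        [2, 3]]   # to consumer j (invented tariffs)

# For 2x2 balanced data, fixing t = x[0][0] determines the whole plan.
best_plan, best_cost = None, float("inf")
for t in range(min(supply[0], demand[0]) + 1):
    plan = [[t, supply[0] - t],
            [demand[0] - t, supply[1] - (demand[0] - t)]]
    if any(x < 0 for row in plan for x in row):
        continue  # infeasible allocation
    total = sum(cost[i][j] * plan[i][j] for i in range(2) for j in range(2))
    if total < best_cost:
        best_plan, best_cost = plan, total
```

Here the total cost works out to 185 − t, so the optimum ships as much as possible on the cheap combination: plan [[20, 0], [5, 25]] at a cost of 165. Real multimodal instances are larger and are solved with linear programming rather than enumeration.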


2008
Vol 392-394
pp. 25-29
Author(s):
Yun Xia Wang
Xiao Dong Zhang
Xue Zhi Wu

The mobile robot chassis is taken as the research object and designed. First, the strength design of the chassis is carried out in accordance with the external loads on the chassis. According to the design results, the structural model is constructed in ANSYS, the strength and stiffness are checked, and the structural dynamic characteristics are computed. Then, based on the numerical analysis, the non-sensitive variables of the bodywork are identified and an optimization model of the body structure is established. The numerical optimization showed that the natural frequency characteristics of the structure did not change significantly even though the weight of the body components was reduced. The mobile robot platform was set up in the laboratory, and the experimental results showed that the chassis design is reasonable and meets the requirements of mobile robots.


Author(s):  
W.M. Stobbs

I do not have access to the abstracts of the first meeting of EMSA but at this, the 50th Anniversary meeting of the Electron Microscopy Society of America, I have an excuse to consider the historical origins of the approaches we take to the use of electron microscopy for the characterisation of materials. I have myself been actively involved in the use of TEM for the characterisation of heterogeneities for little more than half of that period. My own view is that it was between the 3rd International Meeting at London, and the 1956 Stockholm meeting, the first of the European series, that the foundations of the approaches we now take to the characterisation of a material using the TEM were laid down. (This was 10 years before I took dynamical theory to be etched in stone.) It was at the 1956 meeting that Menter showed lattice resolution images of sodium faujasite and Hirsch, Horne and Whelan showed images of dislocations in the XIVth session on “metallography and other industrial applications”. I have always incidentally been delighted by the way the latter authors misinterpreted astonishingly clear thickness fringes in a beaten foil of Al as being contrast due to “large strains”, an error which they corrected with admirable rapidity as the theory developed. At the London meeting the research described covered a broad range of approaches, including many that are only now being rediscovered as worth further effort: however such is the power of “the image” to persuade that the above two papers set trends which influence, perhaps too strongly, the approaches we take now. Menter was clear that the way the planes in his image tended to be curved was associated with the imaging conditions rather than with lattice strains, and yet it now seems to be common practice to assume that the dots in an “atomic resolution image” can faithfully represent the variations in atomic spacing at a localised defect.
Even when the more reasonable approach is taken of matching the image details with a computed simulation for an assumed model, the non-uniqueness of the interpreted fit seems to be rather rarely appreciated. Hirsch et al., on the other hand, made a point of using their images to obtain numerical data on characteristics of the specimen they examined, such as its dislocation density, which would not be expected to be influenced by uncertainties in the contrast. Nonetheless the trends were set, with microscope manufacturers producing higher and higher resolution microscopes, while the blind faith of the users in the image produced as being a nearly directly interpretable representation of reality seems to have increased rather than been generally questioned. But if we want to test structural models we need numbers, and it is the analogue-to-digital conversion of the information in the image which is required.


Author(s):  
M. Watanabe
Z. Horita
M. Nemoto

X-ray absorption in quantitative x-ray microanalysis of thin specimens may be corrected without knowledge of thickness when the extrapolation method or the differential x-ray absorption (DXA) method is used. However, each method involves an experimental limitation. In this study, a method is proposed to overcome such limitations. The method is developed by introducing the ζ factor and by combining the extrapolation method and the DXA method. The method using the ζ factor, which is called the ζ-DXA method in this study, is applied to diffusion-couple experiments in the Ni-Al system. For a thin specimen through which the incident electrons are fully transmitted, the characteristic x-ray intensity generated at a beam position, I, may be represented as I = (NρW/A) Q ω a i s t.


Author(s):  
M. G. Burke
M. N. Gungor
M. A. Burke

Intermetallic matrix composites are candidates for ultrahigh-temperature service where light weight and high-temperature strength and stiffness are required. Recent efforts to produce intermetallic matrix composites have focused on the titanium aluminide (TiAl) system with various ceramic reinforcements. In order to optimize the composition and processing of these composites it is necessary to evaluate the range of structures that can be produced in these materials and to identify the characteristics of the optimum structures. Normally, TiAl materials are difficult to process and, thus, examination of a suitable range of structures would not be feasible. However, plasma processing offers a novel method for producing composites from difficult-to-process component materials. By melting one or more of the component materials in a plasma and controlling deposition onto a cooled substrate, a range of structures can be produced, and the method is highly suited to examining experimental composite systems. Moreover, because plasma processing involves rapid melting and very rapid cooling can be induced in the deposited composite, it is expected that this processing method can avoid some of the problems, such as interfacial degradation, that are associated with the relatively long-time, high-temperature exposures induced by conventional processing methods.


Author(s):  
Christine M. Dannels
Christopher Viney

Processing polymers from the liquid crystalline state offers several advantages compared to processing from conventional fluids. These include: better axial strength and stiffness in fibers, better planar orientation in films, lower viscosity during processing, low solidification shrinkage of injection moldings (thermotropic processing), and low thermal expansion coefficients. However, the compressive strength of the solid is disappointing. Previous efforts to improve this property have focussed on synthesizing stiffer molecules. The effect of microstructural scale has been overlooked, even though its relevance to the mechanical and physical properties of more traditional materials is well established. By analogy with the behavior of metals and ceramics, one would expect a fine microstructure (i.e. a high density of orientational defects) to be desirable. Also, because much microstructural detail in liquid crystalline polymers occurs on a scale close to the wavelength of light, light is scattered on passing through these materials.

