Justification of rockfall-and-rockslide protection screen design for open pit mines

2021 ◽  
pp. 119-124
Author(s):  
A. Yu. Barinov ◽  

The article discusses the general operating principle and the design features of the structural elements of a standard rockfall-and-rockslide protection screen. The main problems connected with the operation of such structures, and their causes, are described. Recommendations are given on the aspects to focus on when analyzing a situation and selecting a suitable solution. The list of initial data required for calculating the screen design variables and the approximate cost at the project design stage is presented. By way of illustration, the engineering protection systems manufactured by Geobrugg AG, Switzerland, are analyzed. The main advantages of these products include: strength higher than that of conventional analogs; a design capable of distributing load uniformly; and validation in large-scale trials conducted with independent institutions. Rockfall-and-rockslide protection screens are widely applied, including for the protection of benches and pit walls. The screens can be constructed and installed quickly, need no constant attendance, are inexpensive and, above all, ensure the safety and efficiency of mining. Adherence to the recommendations during the planning of screen construction activities, namely the acquisition of site-specific information and an engineering approach to the screen design and materials, can greatly enhance the reliability of the structure and extend its operating life.

2006 ◽  
Vol 34 (3) ◽  
pp. 170-194 ◽  
Author(s):  
M. Koishi ◽  
Z. Shida

Abstract: Since tires perform many functions, many of which involve tradeoffs, it is important to find the combination of design variables that gives well-balanced performance at the conceptual design stage. Finding a good tire design means solving a multi-objective design problem, i.e., an inverse problem. Due to the lack of suitable solution techniques, however, such problems have traditionally been converted into single-objective optimization problems before being solved, which makes it difficult to find the Pareto solutions of multi-objective tire design problems. Recently, multi-objective evolutionary algorithms have become popular in many fields for finding Pareto solutions. In this paper, we propose a design procedure that solves multi-objective design problems as a comprehensive solver of inverse problems. First, a multi-objective genetic algorithm (MOGA) is employed to find the Pareto solutions of tire performance, which lie in a multi-dimensional space of objective functions. A response surface method is used to evaluate the objective functions in the optimization process, which reduces CPU time dramatically. In addition, a self-organizing map (SOM), proposed by Kohonen, is used to map the Pareto solutions from the high-dimensional objective space onto a two-dimensional space. Using the SOM, design engineers can easily inspect the Pareto solutions of tire performance and find suitable design plans. The SOM can be considered an inverse function that defines the relation between Pareto solutions and design variables. To demonstrate the procedure, a tire tread design is conducted. The objective is to improve uneven wear and wear life for both the front and rear tires of a passenger car. Wear performance is evaluated by finite element analysis (FEA), and the response surface is obtained from the design of experiments and FEA. Using both MOGA and SOM, we obtain a map of Pareto solutions, called a "multi-performance map," on which suitable design plans with well-balanced performance can be found. It helps tire design engineers make decisions at the conceptual design stage.
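The core ranking step of any MOGA is extracting the non-dominated (Pareto) set from a population of candidate designs. A minimal sketch, with purely illustrative objective values standing in for the wear metrics of the paper:

```python
# Non-dominated (Pareto) filtering over candidate designs.
# Each design is a tuple of objective values; all objectives are minimized.
def pareto_front(designs):
    """Return the subset of designs not dominated by any other design."""
    def dominates(a, b):
        # a dominates b if a is no worse in every objective
        # and strictly better in at least one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    return [d for d in designs if not any(dominates(o, d) for o in designs if o != d)]

# Toy objectives: (uneven-wear index, inverse wear life), both minimized.
candidates = [(0.2, 0.9), (0.4, 0.4), (0.9, 0.1), (0.5, 0.5), (0.3, 0.8)]
front = pareto_front(candidates)  # (0.5, 0.5) is dominated by (0.4, 0.4)
```

In the paper's procedure, each surviving tuple would then be placed on the SOM so that neighboring map cells hold designs with similar objective trade-offs.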


Author(s):  
T. V. Galanina ◽  
M. I. Baumgarten ◽  
T. G. Koroleva

Large-scale mining disturbs wide areas of land. The development program for the mining industry, with its expected considerable increase in production output, aggravates the problem, exposing ever vaster territories to adverse anthropogenic impact. Recovery of mining-induced ecosystems in mineral-extracting regions is becoming the top-priority objective. There are many restoration mechanisms; they should be used in an integrated manner and be highly technology-intensive, as the environmental impact is many-sided: pollution of water, generation of large volumes of waste, and soil disturbance, the last being most typical of open pit mining. Large-scale disturbance of land, withdrawal of farmland, and land pollution and littering are critical problems to be solved in the first place. One remedy is high-quality reclamation. This article reviews the effective rules and regulations on reclamation and proposes a mechanism for the legal control of disturbed land reclamation at the regional and federal levels. Highly technology-intensive recovery of mining-induced landscapes will be backed up by the natural environment restoration strategy proposed in the Disturbed Land Reclamation Concept.


2021 ◽  
Vol 1 ◽  
pp. 3229-3238
Author(s):  
Torben Beernaert ◽  
Pascal Etman ◽  
Maarten De Bock ◽  
Ivo Classen ◽  
Marco De Baar

Abstract: The design of ITER, a large-scale nuclear fusion reactor, is intertwined with profound research and development efforts. Tough problems call for novel solutions, but the low maturity of those solutions can lead to unexpected problems. If designers keep solving such emergent problems in iterative design cycles, the complexity of the resulting design is bound to increase. Instead, we want to show designers the sources of emergent design problems, so that these may be dealt with more effectively. We propose to model the interplay between multiple problems and solutions in a problem network. Each problem and solution is then connected to a dynamically changing engineering model, a graph of physical components. By analysing the problem network and the engineering model, we can (1) derive which problem emerged from which solution and (2) compute the contribution of each design effort to the complexity of the evolving engineering model. The method is demonstrated on the sequence of problems and solutions that characterized the early design stage of an optical subsystem of ITER.
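The first analysis step, tracing which problem emerged from which solution, amounts to walking edges in the problem network. A toy sketch under an assumed dict representation; the node names (P1, S1, ...) are hypothetical and not taken from the ITER case study:

```python
# Each solution records the problem it addresses and the problems it spawned.
solutions = {
    "S1": {"addresses": "P1", "spawned": ["P2"]},
    "S2": {"addresses": "P2", "spawned": ["P3", "P4"]},
    "S3": {"addresses": "P3", "spawned": []},
}

def source_of(problem):
    """Return the solution from which a problem emerged, or None for root problems."""
    for name, sol in solutions.items():
        if problem in sol["spawned"]:
            return name
    return None

def problem_chain(problem):
    """Walk back from an emergent problem to the root problem of the design cycle."""
    chain = [problem]
    src = source_of(problem)
    while src is not None:
        chain.append(solutions[src]["addresses"])
        src = source_of(chain[-1])
    return chain

# P4 emerged from S2, which addressed P2, which emerged from S1 addressing root P1.
trace = problem_chain("P4")
```

The complexity accounting described in the abstract would then attribute each component added to the engineering model to the solution node that introduced it.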


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Stephan Fischer ◽  
Marc Dinh ◽  
Vincent Henry ◽  
Philippe Robert ◽  
Anne Goelzer ◽  
...  

Abstract: Detailed whole-cell modeling requires integrating heterogeneous cell processes with different modeling formalisms while keeping whole-cell simulation tractable. Here, we introduce BiPSim, an open-source stochastic simulator of template-based polymerization processes, such as replication, transcription and translation. BiPSim combines an efficient abstract representation of reactions with an implementation of Gillespie's Stochastic Simulation Algorithm (SSA) that is constant-time with respect to the number of reactions, which makes it highly efficient for the stochastic simulation of large-scale polymerization processes. Moreover, multi-level descriptions of polymerization processes can be handled simultaneously, allowing the user to tune the trade-off between simulation speed and model granularity. We evaluated the performance of BiPSim by simulating genome-wide gene expression in bacteria at multiple levels of granularity. Finally, since no cell-type-specific information is hard-coded in the simulator, models can easily be adapted to other species. We expect BiPSim to open new perspectives for the genome-wide simulation of stochastic phenomena in biology.
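For readers unfamiliar with the SSA, here is a minimal direct-method Gillespie simulation of a toy elongation process (a polymerase adding one monomer at a time at a fixed rate). This illustrates only the algorithm itself, not BiPSim's constant-time variant or its abstract reaction representation:

```python
import random

def simulate_elongation(length, k_elong, seed=None):
    """Direct-method SSA for a single-channel elongation reaction.

    Returns times[i] = time at which monomer i was incorporated.
    """
    rng = random.Random(seed)
    t, pos, times = 0.0, 0, [0.0]
    while pos < length:
        a0 = k_elong              # total propensity (one reaction channel)
        t += rng.expovariate(a0)  # exponential waiting time to the next firing
        pos += 1
        times.append(t)
    return times

traj = simulate_elongation(length=100, k_elong=10.0, seed=1)
```

With several reaction channels, the direct method samples which channel fires in time linear in the channel count; the constant-time scheme the abstract refers to removes exactly that dependence.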


2016 ◽  
Vol 2016 ◽  
pp. 1-7 ◽  
Author(s):  
Carolina Lagos ◽  
Guillermo Guerrero ◽  
Enrique Cabrera ◽  
Stefanie Niklander ◽  
Franklin Johnson ◽  
...  

A novel matheuristic approach is presented and tested on a well-known optimisation problem, the capacitated facility location problem (CFLP). The algorithm combines local search and mathematical programming: the local search selects a subset of promising facilities, while mathematical programming strategies solve the resulting subproblem to optimality. The proposed local search is guided by instance-specific information such as installation costs and the distances between customers and facilities. The algorithm is tested on large instances of the CFLP for which neither local search nor mathematical programming alone can find good-quality solutions within acceptable computational times. Our approach proves to be a very competitive alternative for solving large-scale CFLP instances.
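The decomposition can be sketched as follows. In this toy version, a greedy nearest-facility assignment stands in for the exact mathematical-programming subproblem solver used in the paper, and all instance data are illustrative:

```python
def assign(open_facilities, demand, capacity, dist):
    """Greedy capacity-respecting assignment; returns transport cost or None."""
    remaining = {f: capacity[f] for f in open_facilities}
    cost = 0.0
    for c, d in enumerate(demand):
        feasible = [f for f in open_facilities if remaining[f] >= d]
        if not feasible:
            return None  # this open set cannot serve all customers
        best = min(feasible, key=lambda f: dist[c][f])
        remaining[best] -= d
        cost += d * dist[c][best]
    return cost

def total_cost(open_set, fixed, demand, capacity, dist):
    """Fixed installation costs plus transport cost; inf if infeasible."""
    transport = assign(open_set, demand, capacity, dist)
    if transport is None:
        return float("inf")
    return transport + sum(fixed[f] for f in open_set)

# Tiny instance: 2 facilities, 3 customers.
fixed, capacity = [10, 10], [5, 5]
demand = [2, 2, 2]
dist = [[1, 3], [3, 1], [2, 2]]  # dist[customer][facility]
cost_both = total_cost([0, 1], fixed, demand, capacity, dist)
```

A local-search outer loop would flip facilities in and out of `open_set`, keeping the move whenever `total_cost` decreases; in the matheuristic, each such evaluation is instead an exact transportation subproblem.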


Mathematics ◽  
2021 ◽  
Vol 9 (13) ◽  
pp. 1474
Author(s):  
Ruben Tapia-Olvera ◽  
Francisco Beltran-Carbajal ◽  
Antonio Valderrabano-Gonzalez ◽  
Omar Aguilar-Mejia

This proposal aims to overcome the problems that arise when diverse regulation devices and control strategies are involved in the design of electric power system regulation. When new devices are added to a power system after its topology and regulation goals have been defined, a new design stage is generally needed to obtain the desired outputs. Moreover, if the initial design is based on a model linearized around an equilibrium point, the new conditions might degrade the performance of the whole system. Our proposal demonstrates that power system performance can be guaranteed with a single design stage when an adequate adaptive scheme updates certain critical controller gains. For large-scale power systems, this feature is illustrated with time-domain simulations showing the dynamic behavior of the significant variables. The transient response is enhanced in terms of maximum overshoot and settling time, as demonstrated by comparing the behavior of key variables with a StatCom installed, with and without a PSS. A B-spline neural network algorithm is used to find the controller gains that most efficiently attenuate low-frequency oscillations when a short-circuit event occurs. This strategy avoids dependence on the parameters and on the power system model; only a dataset of typical variable measurements is required to achieve the expected behavior. The inclusion of a PSS and a StatCom with positive interaction enhances the dynamic performance of the system while illustrating the ability of the strategy to add different controllers in only one design stage.
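The building block of a B-spline network is the B-spline basis function, evaluated by the Cox-de Boor recursion; the network's output is a weighted sum of such bases over the measured inputs. A generic sketch (the knot vector and degree below are illustrative, not taken from the paper):

```python
def bspline_basis(i, k, t, knots):
    """Value of the i-th B-spline basis function of degree k at parameter t
    (Cox-de Boor recursion)."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# Clamped quadratic basis on [0, 3]; inside the domain the bases sum to 1,
# which keeps the network's gain interpolation locally bounded.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
values = [bspline_basis(i, 2, 1.5, knots) for i in range(5)]
```

In a B-spline network, the weights multiplying these basis values are what the adaptive scheme updates from measurement data, so only the gains, not the plant model, are learned.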


2012 ◽  
Vol 517 ◽  
pp. 13-19 ◽  
Author(s):  
P. Ohayon ◽  
Khosrow Ghavami

The results of many successfully realized Research and Development (R&D) concerned with non-conventional materials and technologies (NOCMAT) in developing countries including Brazil have not been used in large scale in practice. This is due to the lack of selection and evaluation criteria and concepts from planning and designing to implementation programs by governmental agencies and private organizations concerned with the newly developed sustainable materials and technologies. The problems of selecting and evaluating R&D innovation outputs and impacts for construction are complex and need scientific and systematic studies in order to avoid the social and environmental mistakes occurred in industrialized countries after the Second World War. This paper presents a logical framework for the implementation of pertinent indicators to be used as a tool in R&D of NOCMAT projects selection and evaluation concerned with materials, structural elements and technologies of bamboo and composites reinforced with vegetable fibers. Indicators, related to the efficiency, effectiveness, impact, relevance and sustainability of such projects are considered and discussed.


2016 ◽  
Vol 40 (6) ◽  
pp. 500-525 ◽  
Author(s):  
Ben Kelcey ◽  
Zuchao Shen ◽  
Jessaca Spybrook

Objective: Over the past two decades, the lack of reliable empirical evidence concerning the effectiveness of educational interventions has motivated a new wave of research in education in sub-Saharan Africa (and across most of the world) that focuses on impact evaluation through rigorous research designs such as experiments. Often these experiments draw on the random assignment of entire clusters, such as schools, to accommodate the multilevel structure of schooling and the theory of action underlying many school-based interventions. Planning effective and efficient school-randomized studies, however, requires plausible values of the intraclass correlation coefficient (ICC) and of the variance explained by covariates at the design stage. The purpose of this study was to improve the planning of two-level school-randomized studies in sub-Saharan Africa by providing empirical estimates of the ICC and the variance explained by covariates for education outcomes in 15 countries. Method: Our investigation drew on large-scale representative samples of sixth-grade students in 15 countries in sub-Saharan Africa, comprising over 60,000 students across 2,500 schools. We examined two core education outcomes: standardized achievement in reading and mathematics. We estimated a series of two-level hierarchical linear models with students nested within schools to inform the design of two-level school-randomized trials. Results: The analyses suggested that outcomes were substantially clustered within schools but that the magnitude of the clustering varied considerably across countries. Similarly, the results indicated that covariate adjustment generally reduced clustering but that the prognostic value of such adjustment varied across countries.
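For the unconditional two-level model, the ICC is the between-school variance divided by total variance. A back-of-the-envelope estimator via a one-way ANOVA decomposition, assuming a balanced toy design (the paper fits full hierarchical linear models; this is only the simplest analog):

```python
def icc_anova(groups):
    """ICC from a balanced one-way ANOVA.

    groups: list of equal-sized lists of outcome scores, one list per school.
    """
    J, n = len(groups), len(groups[0])
    grand = sum(sum(g) for g in groups) / (J * n)
    means = [sum(g) / n for g in groups]
    # Mean squares between and within schools.
    msb = n * sum((m - grand) ** 2 for m in means) / (J - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (J * (n - 1))
    tau = max(0.0, (msb - msw) / n)  # between-school variance component
    sigma2 = msw                     # within-school variance component
    return tau / (tau + sigma2)
```

In design terms, a larger ICC means each additional student within a school adds less information, which is why plausible ICC values are needed to size school-randomized trials.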


2009 ◽  
Vol 419-420 ◽  
pp. 89-92
Author(s):  
Zhuo Yi Yang ◽  
Yong Jie Pang ◽  
Zai Bai Qin

Ring-stiffened cylindrical shells are commonly used in submersibles, and their structural strength should be verified at the initial design stage, considering the thickness of the shell, the number of rings, the shape of the ring section, and so on. Based on statistical techniques, a strategy for the optimization design of a pressure hull is proposed in this paper. Its central idea is as follows: first, the design variables are chosen by reference to the structural strength criteria, and samples for analysis are created in the design space; second, finite element models corresponding to the samples are built and analyzed; third, approximations of these analyses are constructed from the samples and the responses obtained by the finite element models; finally, the optimization design result is obtained using the response surface model. The results show that this method, which improves efficiency and achieves the optimization goal, provides valuable reference information for engineering applications.
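The response-surface step can be sketched in one design variable: fit a quadratic surrogate through three sampled responses, then read off its stationary point analytically. The sample values below are illustrative, standing in for finite element results:

```python
def quadratic_surrogate(samples):
    """Fit y = a + b*x + c*x**2 through three (x, y) samples.

    Uses Newton's divided differences, then converts to monomial coefficients.
    """
    (x0, y0), (x1, y1), (x2, y2) = samples
    d1 = (y1 - y0) / (x1 - x0)
    d2 = ((y2 - y1) / (x2 - x1) - d1) / (x2 - x0)
    c = d2
    b = d1 - d2 * (x0 + x1)
    a = y0 - b * x0 - c * x0 ** 2
    return a, b, c

def surrogate_minimum(a, b, c):
    """Stationary point of the surrogate (a minimum when c > 0)."""
    return -b / (2 * c)

# Toy responses sampled from y = (x - 2)**2 + 1 at three design points.
a, b, c = quadratic_surrogate([(0, 5), (1, 2), (3, 2)])
x_opt = surrogate_minimum(a, b, c)
```

With several design variables, the same idea uses a least-squares quadratic in all variables; the optimizer then searches the cheap surrogate instead of rerunning the finite element model.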


Author(s):  
Eric Coatanéa ◽  
Tuomas Ritola ◽  
Irem Y. Tumer ◽  
David Jensen

In this paper, a design-stage failure identification framework is proposed, using a modeling and simulation approach based on dimensional analysis and qualitative physics. The proposed framework provides a new approach to modeling behavior in the Functional-Failure Identification and Propagation (FFIP) framework, which estimates potential faults and their propagation paths under critical event scenarios. The initial FFIP framework combines hierarchical system models of functionality and configuration with behavioral simulation and qualitative reasoning. This paper proposes a behavioral model derived from information available at the configuration level. Specifically, the new behavioral model uses design variables that are associated with units and quantities (i.e., mass, length, time, etc.). The proposed framework continues this line of work, allowing the analysis of functional failures and fault propagation at a highly abstract system-concept level, before any potentially high-cost design commitments are made. The main contribution of this paper is the development of component behavioral models based on combinations of the fundamental design variables used to describe components and their units or quantities, describing components' behavior more precisely.
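The dimensional-analysis core of such a model can be sketched by carrying each quantity as an exponent vector over base dimensions (mass, length, time) and checking that composed behaviors are dimensionally admissible. The variable names below are illustrative, not taken from FFIP:

```python
def dim(M=0, L=0, T=0):
    """Exponent vector of a quantity over base dimensions (M, L, T)."""
    return (M, L, T)

def mul(a, b):
    """Dimensions of the product of two quantities: exponents add."""
    return tuple(x + y for x, y in zip(a, b))

FORCE = dim(M=1, L=1, T=-2)
MASS = dim(M=1)
ACCEL = dim(L=1, T=-2)

# A behavioral relation F = m * a is dimensionally admissible:
assert mul(MASS, ACCEL) == FORCE
```

At the concept stage, a candidate component behavior whose left- and right-hand exponent vectors disagree can be flagged before any detailed (and costly) design work is committed.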

