Modeling of Accurate Variable Slope Angles in Open-Pit Mine Design Using Spline Interpolation

2012 ◽  
Vol 57 (4) ◽  
pp. 921-932 ◽  
Author(s):  
Masoud Soleymani Shishvan ◽  
Javad Sattarvand

Abstract In this paper, a new method of modeling variable slope angles based on the spline interpolation method is presented. Slope angle modeling and defining the precedence of blocks are vital parts of almost any open-pit optimization algorithm. Traditionally, heuristic patterns such as 1:5 or 1:9 have been used to generate slope angles. Cone-template-based models were later employed to develop variable slope angles; they normally use a linear interpolation process to determine slope angles between the given directions, which leads to sharp, non-realistic pits. The elliptical alternatives suffer from limitations in defining slope angles in non-geographical directions. The proposed method can consider any number of slope angles in any desired direction and creates quite accurate and realistic pit shapes. Three major types of spline interpolation, cubic, quadratic and cardinal, are tested; the cubic form is preferred due to its more realistic outcomes. The main steps of the method are described through a numerical case study.
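The direction-dependent slope angles the abstract describes can be sketched as follows; this is a minimal illustration, not the paper's algorithm, and the azimuths and angle values are made up. A periodic cubic spline interpolates the slope angle as a function of azimuth, with plain linear interpolation shown for contrast (the variant the abstract says yields sharp pits):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical slope angles (degrees) given in four geographic
# directions; 360 repeats the first value so the spline closes.
azimuths = np.array([0.0, 90.0, 180.0, 270.0, 360.0])
slopes = np.array([40.0, 45.0, 38.0, 42.0, 40.0])

# Periodic cubic spline: a smooth, closed angle profile around the pit.
spline = CubicSpline(azimuths, slopes, bc_type="periodic")

# Linear interpolation for comparison (gives sharp corners between
# the specified directions).
def linear_slope(az):
    return np.interp(az % 360.0, azimuths, slopes)

print(float(spline(45.0)), float(linear_slope(45.0)))
```

Swapping `bc_type` or the spline class would give the quadratic or cardinal variants the abstract compares.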

1997 ◽  
Vol 40 (1) ◽  
Author(s):  
E. Le Meur ◽  
J. Virieux ◽  
P. Podvin

At a local scale, travel-time tomography requires a simultaneous inversion of earthquake positions and velocity structure. We applied a joint iterative inversion scheme in which medium parameters and hypocenter parameters were inverted simultaneously. At each step of the inversion, rays between hypocenters and stations were traced, new partial derivatives of travel-time were estimated, and scaling between parameters was performed. The large sparse linear system modified by the scaling was solved by the LSQR method at each iteration. We compared the performance of two different forward techniques. Our first approach was a fast ray tracing based on a paraxial method to solve the two-point boundary value problem. The rays connect sources and stations in a velocity structure described by a 3D B-spline interpolation over a regular grid. The second approach is the finite-difference solution of the eikonal equation with a 3D linear interpolation over a regular grid. The partial derivatives are estimated differently depending on the interpolation method. The reconstructed images are sensitive to the spatial variation of the partial derivatives, as shown by synthetic examples. We also found that the scaling between velocity and hypocenter parameters involved in the linear system is important in recovering accurate amplitudes of anomalies. This scaling was estimated to be five through synthetic examples with the real configuration of stations and sources. We also found it necessary to scale P and S velocities in order to better recover the amplitudes of the S velocity anomaly. The crustal velocity structure of a 50 × 50 × 20 km domain near Patras in the Gulf of Corinth (Greece) was recovered using microearthquake data. These data were recorded during a field experiment in 1991 in which a dense network of 60 digital stations was deployed. These microearthquakes were widely distributed under the Gulf of Corinth and enabled us to perform a reliable tomography of first-arrival P and S travel-times. The obtained images of this seismically active zone show a south/north asymmetry in agreement with the tectonic context. The transition to high velocity lies between 6 km and 9 km, indicating a very thin crust related to the active extension regime.
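The column-scaled LSQR step described above can be sketched as below, assuming a toy random system in place of the real tomographic matrix; the factor of five on the hypocenter columns mirrors the scaling the abstract reports:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Toy sparse linearized system G dm = dt linking model/hypocenter
# perturbations to travel-time residuals (random stand-in values).
rng = np.random.default_rng(0)
G = csr_matrix(rng.standard_normal((20, 6)))
dt = rng.standard_normal(20)

# Scale the hypocenter columns (here, the last three) by five
# relative to the velocity columns before solving with LSQR.
scale = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])
G_scaled = G.multiply(scale)        # column scaling
y = lsqr(G_scaled, dt)[0]           # solve the scaled least-squares problem
dm = y * scale                      # map back to physical units
```

Since the scaling matrix is invertible, `dm` still minimizes the original residual; the scaling only reconditions the system LSQR iterates on.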


Author(s):  
Ilya V. Sergodeev ◽  

The article deals with the dynamics of the semantic complex of dominant units in poetic text. Units of a poetic text are divided into constant and dominant ones. Constant units realize the function of context formation; they have one clear meaning. Dominant units realize the function of semantization; they are poly-interpretative. The methodology of the work is based on the theory of intertextuality, viewed from the position of structural, interpretative and lingua-cultural approaches. A brief typology and characteristics of intertextual relations are given: auto- (self-quotations, self-allusions), in- (quotations, allusions), para- (structural and compositional units of a text such as a title, an epigraph, etc.) and arch-textuality (genre imitation; referring to well-known artistic images or cultural phenomena). The paper presents a model of analysis of dominant units in poetic text. The analysis is carried out in five steps: fragmentation, contextual analysis, search for and determination of intertextual relations between the analyzed unit and units of address texts, contextual analysis of the address texts, and synthesis of the obtained contextual meanings. The practical material under study is the poem Elegy by the Canadian poet L. Cohen. The unit of analysis is the personal pronoun he in the given poetic text. The paper establishes intertextual relations between Elegy and texts from Greek mythology, the Holy Bible, Christian culture, and other works by L. Cohen. The conducted analysis shows that intertextual relations between the studied units initiate exchange and superimposition of their contextual meanings. As a result, the studied unit can have several meanings (some of which are not present in dictionaries but are unique to the given author) within the same context. In this way, the dynamics of the semantic complex of the studied units and of the poetic text is realized.


2018 ◽  
Vol 7 (3.7) ◽  
pp. 51
Author(s):  
Maria Elena Nor ◽  
Norsoraya Azurin Wahir ◽  
G. P. Khuneswari ◽  
Mohd Saifullah Rusiman

The presence of outliers is an example of aberrant data that can have a strong negative influence on statistical methods built on the assumption of normality, and it distorts estimation. This paper introduces interpolation as an alternative outlier treatment in time series and compares two interpolation methods using performance indicators. Treating an outlier as a missing value in the data allows an interpolation method to fill in that value; the results are then compared using forecast accuracy. Monthly Malaysia Tourist Arrivals data from January 1998 until December 2015 were used to deal with outliers. The results showed that the cubic spline interpolation method gave better results than linear interpolation, and the treated time series yielded better forecasting performance than the original time series under the Box-Jenkins model.
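The outlier-as-missing-value treatment can be sketched as below; the series and the MAD-based outlier flagging are illustrative assumptions, not the paper's Malaysia tourist-arrival data or its detection rule:

```python
import numpy as np
import pandas as pd

# Illustrative monthly series with one obvious outlier (not the
# paper's tourist-arrival data).
s = pd.Series([100.0, 104.0, 108.0, 500.0, 116.0, 120.0])

# Flag points more than 3 median absolute deviations from the median
# and treat them as missing values.
med = s.median()
mad = (s - med).abs().median()
cleaned = s.mask((s - med).abs() > 3 * mad)

# Fill the gap both ways and compare.
linear = cleaned.interpolate(method="linear")
cubic = cleaned.interpolate(method="cubicspline")
print(linear[3], cubic[3])
```

The filled series can then be fed to a Box-Jenkins (ARIMA) fit and the forecast accuracy of the two variants compared, as the paper does.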


2015 ◽  
Vol 5 (5) ◽  
pp. 607-624
Author(s):  
Ebru Ezberci ◽  
Mehmet Altan Kurnaz ◽  
Nezihe Gökçen Bayri

The aim of this study was to reveal secondary school students' ability to make transitions between text, picture, table and graphic representations related to electricity. The research is a case study, one of the qualitative research methods. The study group comprised 100 students in total from the 6th, 7th and 8th grades, studying at a secondary school in the 2012-2013 academic year. To determine the students' ability to make transitions between representations, a measurement tool on the subject of electricity was developed by the researchers. The tool consists of open-ended questions that probe transitions between the representations (text, images, tables and graphics). Document analysis was used in the data analysis: specific criteria were determined, codes were established according to the students' answers, and the questions were evaluated along certain dimensions. Consequently, students were found to be inadequate at presenting transitions between representations of electricity. Based on these results, it is suggested that teaching of the electricity topic include practices drawing attention to transitions between different representations, that measurement and assessment processes include questions reflecting such transitions, and that assessment questions in textbooks be revised accordingly.


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Ning Li ◽  
Xianqing Lv ◽  
Jicai Zhang

A new method for the estimation of initial conditions (ICs) in a PM2.5 transport adjoint model is proposed in this paper. In this method, the field of ICs is constructed by interpolating values at independent points using surface spline interpolation. Compared to the traditionally used linear interpolation, surface spline interpolation has an advantage in reconstructing continuous smooth surfaces. The method is verified in twin experiments, and the results indicate that it produces better inverted ICs and smaller simulation errors. In practical experiments, simulation results show good agreement with ground-level observations during the 22nd Asia-Pacific Economic Cooperation summit period, demonstrating that the new method is effective in practical applications.
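A hedged sketch of surface (thin-plate) spline versus linear interpolation of values given at independent points, using made-up coordinates and values rather than the paper's PM2.5 ICs:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator, LinearNDInterpolator

# Made-up independent points and IC values on a 2-D plane.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, size=(30, 2))
vals = np.sin(pts[:, 0] / 3.0) + np.cos(pts[:, 1] / 3.0)

# Surface (thin-plate) spline: a smooth surface through all points.
tps = RBFInterpolator(pts, vals, kernel="thin_plate_spline")

# Piecewise-linear interpolation for comparison (continuous but with
# slope breaks along the triangulation edges).
lin = LinearNDInterpolator(pts, vals)

q = np.array([[5.0, 5.0]])
print(float(tps(q)[0]), float(lin(q)[0]))
```

The thin-plate surface is smooth everywhere, which is the advantage over linear interpolation that the abstract points to.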


2020 ◽  
Vol 10 (1) ◽  
pp. 39-48
Author(s):  
László Rónai

The development of an electric measurement system for rapid determination of the friction coefficient is discussed in this paper. The system can be used with a ball cage guide bush unit. Two beam load cells are included in the system, and the measured force values are processed by microcontrollers. During the measurements, the normal and tangential forces on the inner or outer surfaces of different enamelled specimens can be determined. A data acquisition program was developed to record the force values on a personal computer. Linear interpolation is required to synchronize the readings of the two load cells, which is necessary to calculate the coefficient of friction.
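The synchronization step can be sketched with NumPy; the timestamps and force readings are hypothetical, and the friction coefficient is taken as the ratio of tangential to normal force at each synchronized instant:

```python
import numpy as np

# Hypothetical timestamps (s) and readings (N) from the two beam
# load cells, sampled at slightly different instants.
t_normal = np.array([0.00, 0.10, 0.20, 0.30])
f_normal = np.array([50.0, 52.0, 54.0, 56.0])
t_tang = np.array([0.05, 0.15, 0.25])
f_tang = np.array([10.2, 10.6, 11.0])

# Linearly interpolate the normal force onto the tangential-force
# timestamps so both signals refer to the same instants.
f_normal_sync = np.interp(t_tang, t_normal, f_normal)

# Coefficient of friction at each synchronized sample.
mu = f_tang / f_normal_sync
print(mu)
```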


2020 ◽  
Vol 5 (1) ◽  
pp. 31-36
Author(s):  
Lina Lina ◽  
Kelly Anthony

Over time, the role of technology has become very important, because the function of technology is to facilitate human work. Since human needs are increasingly complex, technological developments are created to meet them. Experts in the medical field currently depend heavily on technology to do their jobs, in order to obtain effective and efficient results. The application system designed here aims to help experts in the medical field diagnose diseases through the recognition of white blood cell types. The recognition system was developed using the Nearest Feature Line (NFL) method. In this NFL method, characteristic lines are formed using the linear interpolation, linear spline, quadratic spline, and cubic spline methods. Aside from presenting the recognition system, this paper also compares these interpolation methods for forming characteristic lines. Testing was carried out using blood cell data from the FTI Untar Pattern Recognition laboratory. The test results show that forming characteristic lines with the linear interpolation method provides better recognition results than the spline interpolation methods.
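The distance-to-characteristic-line computation at the core of NFL (with linear interpolation, i.e. a straight line through two prototype feature points) can be sketched as follows; the 2-D points are illustrative, not the laboratory blood-cell features:

```python
import numpy as np

def feature_line_distance(x, p1, p2):
    """Distance from a query point x to the feature line through
    prototype points p1 and p2 (linear interpolation/extrapolation)."""
    d = p2 - p1
    t = np.dot(x - p1, d) / np.dot(d, d)  # projection parameter
    proj = p1 + t * d                     # foot of the perpendicular
    return np.linalg.norm(x - proj)

# Illustrative 2-D prototypes of one class and a query point.
p1 = np.array([0.0, 0.0])
p2 = np.array([1.0, 0.0])
x = np.array([0.5, 2.0])
print(feature_line_distance(x, p1, p2))  # 2.0
```

Classification assigns the query to the class whose prototype-pair line is nearest; the spline variants replace the straight line with a curve through three or more prototypes.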


2012 ◽  
Vol 229-231 ◽  
pp. 2100-2105
Author(s):  
Son Duy Dao ◽  
Kazem Abhary

Tolerance parameters have different effects on robot accuracy. Therefore, it is better to tighten the tolerances of the factors that have a statistically significant effect on robot accuracy and widen the tolerances of insignificant ones. By doing so, one not only achieves the given robot accuracy but also reduces manufacturing costs. The objective of this paper is to present an approach for determining the statistical significance of each tolerance parameter of a robot manipulator with respect to robot accuracy, which can assist robot designers in making decisions regarding tolerance design. In this paper, a comprehensive model of an industrial robot manipulator capable of carrying out various applications is developed and computer simulated. Then Taguchi's Tolerance Design Experiment is applied to determine the statistical significance of the tolerances with respect to robot accuracy. The approach is illustrated by a case study dealing with the 6-DOF PUMA 560 robot manipulator.
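A minimal sketch of a Taguchi-style orthogonal-array experiment for ranking tolerance factors; the L4 array, the three factors, and the error responses are made-up stand-ins for the paper's simulated robot model:

```python
import numpy as np

# L4(2^3) orthogonal array: three two-level tolerance factors in
# four runs (0 = tight tolerance, 1 = wide tolerance).
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Hypothetical positioning-error responses (mm) for the four runs;
# in the paper these come from the simulated robot model instead.
y = np.array([0.10, 0.30, 0.12, 0.32])

# Main effect of each factor: mean response at level 1 minus mean
# response at level 0. A large |effect| marks an influential factor
# whose tolerance is worth tightening.
effects = np.array([y[L4[:, j] == 1].mean() - y[L4[:, j] == 0].mean()
                    for j in range(3)])
print(effects)
```

A full tolerance-design study would use a larger array and an ANOVA on the effects; the main-effect calculation above is the core of that ranking.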


2014 ◽  
Vol 555 ◽  
pp. 57-65
Author(s):  
Adrian Mihail Stoica

This paper presents a Kalman-type method for attitude determination of satellites. It is shown that the linearized model describing the satellite kinematics may be expressed as a stochastic system corrupted by both additive and multiplicative white noise. For this class of stochastic models, a hybrid Kalman filter is used to estimate the quaternion expressing the satellite attitude and the bias of the inertial measurement unit. In contrast to classical Kalman filters, the hybrid filter used in this paper includes both a continuous-time and a discrete-time component. The advantage is that it may provide a time-varying estimation of the states between the sampling moments, which are usually sparse in space applications. The theoretical developments are illustrated via a numerical case study.
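A hedged sketch of the continuous/discrete structure of a hybrid Kalman filter, reduced to a scalar linear state with additive noise only (the paper's model also has multiplicative noise and estimates a quaternion and an IMU bias):

```python
import numpy as np

# Scalar stand-in: x' = a*x + w (continuous), z_k = x_k + v_k (discrete).
a, q, r = -0.5, 0.01, 0.04   # dynamics, process and measurement noise
dt, n_sub = 1.0, 100         # sampling interval, Euler substeps

def propagate(x, P):
    """Continuous-time prediction between sampling moments."""
    h = dt / n_sub
    for _ in range(n_sub):
        x += h * a * x              # dx/dt = a*x
        P += h * (2 * a * P + q)    # dP/dt = 2aP + q (Riccati)
    return x, P

def update(x, P, z):
    """Discrete-time measurement update at a sampling moment."""
    K = P / (P + r)
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0
for z in [1.0, 0.7, 0.4]:
    x, P = propagate(x, P)
    x, P = update(x, P, z)
print(x, P)
```

The substeps inside `propagate` are what give the state estimate between sparse sampling moments, which is the advantage the abstract highlights.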

