Numerical Analysis of Double Integral of Trigonometric Function Using Romberg Method

2020 ◽  
Vol 8 (2) ◽  
pp. 131
Author(s):  
Andika Saputra ◽  
Rizal Bakri ◽  
Ramlan Mahmud

In general, double integrals of trigonometric functions are difficult to solve analytically, so a numerical method is needed to obtain a solution. Numerical methods can only provide solutions that approach the true value; a numerical solution is therefore an approximate solution, although the difference between the two (the error) can be made as small as desired. The numerical solution is obtained by successive estimates (an iterative method). The numerical method used in this study is the Romberg method. Romberg integration is based on Richardson extrapolation: the integral is estimated with two step sizes, I(h1) and I(h2), and combining the two estimates raises the order of the error by two, which motivates a brief review of the method's accuracy. The results of this study indicate that, across several simulations, the Romberg method gives the same values as the analytical (exact) method.
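A minimal Python sketch (not the authors' code) of the idea described above: Romberg integration built from trapezoidal estimates and Richardson extrapolation, applied once in each variable to a double integral of a trigonometric function. The integrand sin(x + y), the limits [0, π/2] × [0, π/2], and the function names are illustrative assumptions; the exact value of this particular integral is 2.

```python
import numpy as np

def romberg(f, a, b, levels=6):
    """Romberg integration: trapezoidal estimates refined by Richardson extrapolation."""
    R = np.zeros((levels, levels))
    R[0, 0] = 0.5 * (b - a) * (f(a) + f(b))
    for k in range(1, levels):
        n = 2 ** k                              # subintervals at this refinement level
        h = (b - a) / n
        new_x = a + h * np.arange(1, n, 2)      # only the newly added midpoints
        R[k, 0] = 0.5 * R[k - 1, 0] + h * np.sum(f(new_x))
        for j in range(1, k + 1):               # each extrapolation column gains two orders
            R[k, j] = R[k, j - 1] + (R[k, j - 1] - R[k - 1, j - 1]) / (4 ** j - 1)
    return R[levels - 1, levels - 1]

# Double integral of sin(x + y) over [0, pi/2] x [0, pi/2]; the exact value is 2.
def inner(x):
    return romberg(lambda y: np.sin(x + y), 0.0, np.pi / 2)

approx = romberg(np.vectorize(inner), 0.0, np.pi / 2)
print(approx, abs(approx - 2.0))                # the error is tiny for this smooth integrand
```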

1982 ◽  
Vol 47 (5) ◽  
pp. 1301-1309 ◽  
Author(s):  
František Kaštánek ◽  
Marie Fialová

The possibility of using approximate models to calculate the selectivity of consecutive reactions is critically analysed. Simple empirical criteria are proposed which enable safer application of the approximate analytical solutions. A more universal modification has been formulated, for which the difference between the selectivity calculated by the exact numerical method and by the approximate analytical method is at most 12%.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Roger P. A’Hern

Abstract Background Accuracy can be improved by taking multiple synchronous samples from each subject in a study to estimate the endpoint of interest if sample values are not highly correlated. If feasible, it is useful to assess the value of this cluster approach when planning studies. Multiple assessments may be the only method to increase power to an acceptable level if the number of subjects is limited. Methods The main aim is to estimate the difference in outcome between groups of subjects by taking one or more synchronous primary outcome samples or measurements. A summary statistic from multiple samples per subject will typically have a lower sampling error. The number of subjects can be balanced against the number of synchronous samples to minimize the sampling error, subject to design constraints. This approach can include estimating the optimum number of samples given the cost per subject and the cost per sample. Results The accuracy improvement achieved by taking multiple samples depends on the intra-class correlation (ICC). The lower the ICC, the greater the benefit that can accrue. If the ICC is high, then a second sample will provide little additional information about the subject’s true value. If the ICC is very low, adding a sample can be equivalent to adding an extra subject. Benefits of multiple samples include the ability to reduce the number of subjects in a study and increase both the power and the available alpha. If, for example, the ICC is 35%, adding a second measurement can be equivalent to adding 48% more subjects to a single measurement study. Conclusion A study’s design can sometimes be improved by taking multiple synchronous samples. It is useful to evaluate this strategy as an extension of a single sample design. An Excel workbook is provided to allow researchers to explore the most appropriate number of samples to take in a given setting.
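The 48% figure quoted above can be reproduced from the standard design-effect formula for the mean of m exchangeable measurements with intra-class correlation ρ: relative to a single measurement, m measurements per subject are worth m / (1 + (m − 1)ρ) subjects. A minimal sketch (not the author's Excel workbook):

```python
def equivalent_subject_gain(icc: float, m: int) -> float:
    """Relative efficiency of averaging m measurements per subject versus one measurement.

    The variance of a subject's mean of m exchangeable measurements with
    intra-class correlation `icc` is sigma^2 * (1 + (m - 1) * icc) / m,
    so m measurements are worth m / (1 + (m - 1) * icc) single-measurement subjects.
    """
    return m / (1.0 + (m - 1) * icc)

# ICC = 0.35: a second measurement is equivalent to ~48% more subjects.
print(f"{(equivalent_subject_gain(0.35, 2) - 1) * 100:.0f}% more subjects")  # -> 48%
```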


2020 ◽  
Vol 25 (6) ◽  
pp. 997-1014
Author(s):  
Ozgur Yildirim ◽  
Meltem Uzun

In this paper, we study the existence and uniqueness of the weak solution for the system of finite difference schemes for coupled sine-Gordon equations. A novel first-order accurate, unconditionally stable difference scheme is considered. The variational method, also known as the energy method, is applied to prove unique weak solvability. We also present a new unified numerical method for the approximate solution of this problem by combining the difference scheme with fixed-point iteration. A test problem is considered, and the results of numerical experiments are presented with an error analysis to verify the accuracy of the proposed numerical method.
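The paper's scheme for the coupled system is not reproduced here; the sketch below only illustrates, under simplifying assumptions, how an implicit difference scheme can be combined with fixed-point iteration on the nonlinear term, using a single 1-D sine-Gordon equation with hypothetical parameters and initial data.

```python
import numpy as np

# Illustrative 1-D sine-Gordon problem u_tt = u_xx - sin(u) on [0, L] with
# homogeneous Dirichlet boundaries; an implicit (backward-in-time) scheme is
# solved at each step by fixed-point iteration on the lagged sine term.
L, T, nx, nt = 10.0, 1.0, 100, 200
h, tau = L / nx, T / nt
x = np.linspace(0.0, L, nx + 1)

# Second-difference operator on interior nodes (dense here for brevity).
D2 = (np.diag(-2.0 * np.ones(nx - 1)) +
      np.diag(np.ones(nx - 2), 1) +
      np.diag(np.ones(nx - 2), -1)) / h**2

# Implicit step: (u^{n+1} - 2 u^n + u^{n-1}) / tau^2 = D2 u^{n+1} - sin(u^{n+1}).
# Fixed-point iteration: lag sin(.) and solve the linear system for each iterate.
A = np.eye(nx - 1) / tau**2 - D2

def step(u_prev, u_curr, tol=1e-10, max_iter=50):
    rhs_lin = (2.0 * u_curr - u_prev) / tau**2
    u_new = u_curr.copy()                        # initial guess for the iteration
    for _ in range(max_iter):
        u_next = np.linalg.solve(A, rhs_lin - np.sin(u_new))
        if np.max(np.abs(u_next - u_new)) < tol:
            return u_next
        u_new = u_next
    return u_new

# Localized pulse as initial data (interior nodes only); the first step uses u_t(x, 0) = 0.
u0 = np.exp(-(x[1:-1] - L / 2) ** 2)
u1 = step(u0, u0)
for _ in range(nt - 1):
    u0, u1 = u1, step(u0, u1)
print("max |u| at final time:", np.max(np.abs(u1)))
```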


2021 ◽  
Author(s):  
Stephan van der Westhuizen ◽  
Gerard Heuvelink ◽  
David Hofmeyr

Digital soil mapping (DSM) may be defined as the use of a statistical model to quantify the relationship between a soil property observed at various geographic locations and a collection of environmental covariates, and then using this relationship to predict the soil property at locations where it was not measured. It is also important to quantify the uncertainty of the predictions in these soil maps. An important source of uncertainty in DSM is measurement error, defined as the difference between a measured value and the true value of a soil property.

The use of machine learning (ML) models such as random forests (RF) has become a popular trend in DSM, because ML models tend to be capable of accommodating highly non-linear relationships between the soil property and the covariates. However, it is not clear how to incorporate measurement error into ML models. In this presentation we discuss how to incorporate measurement error into some popular ML models, starting with incorporating weights into the objective function of ML models that implicitly assume a Gaussian error. We discuss the effect that these modifications have on prediction accuracy, with reference to simulation studies.
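One simple way to realize the weighting idea described above (an illustration, not necessarily the authors' implementation) is to give a random forest per-observation weights that are inversely proportional to the measurement-error variance, mirroring weighted least squares under a Gaussian error assumption. The covariate names, variances, and data below are made up.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Illustrative data: a measured soil property, its measurement-error variance,
# and environmental covariates (all names and values are hypothetical).
rng = np.random.default_rng(0)
n = 500
covariates = pd.DataFrame({
    "elevation": rng.normal(300, 50, n),
    "ndvi": rng.uniform(0, 1, n),
    "rainfall": rng.normal(800, 120, n),
})
true_property = 0.02 * covariates["elevation"] + 5 * covariates["ndvi"]
error_var = rng.uniform(0.1, 2.0, n)             # per-sample measurement-error variance
measured = true_property + rng.normal(0, np.sqrt(error_var))

# Down-weight noisy measurements: weights proportional to 1 / error variance.
weights = 1.0 / error_var
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(covariates, measured, sample_weight=weights)
print("training R^2:", rf.score(covariates, measured))
```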


2020 ◽  
Vol 80 (11) ◽  
Author(s):  
Jun-Wang Lu ◽  
Ya-Bo Wu ◽  
Bao-Ping Dong ◽  
Yu Zhang

Abstract At the probe approximation, we construct a holographic p-wave conductor/superconductor model in the five-dimensional Lifshitz black hole with the Weyl correction via both numerical and analytical methods, and study the effects of the Lifshitz parameter z as well as the Weyl parameter γ on the superconductor model. When either of the two corrections is taken into account separately, increasing z (γ) inhibits (enhances) the superconductor phase transition. When the two corrections are considered together, they display clearly competitive effects on both the critical temperature and the vector condensate. In particular, the promoting effects of the Weyl parameter γ on the critical temperature are strongly suppressed by the increasing Lifshitz parameter. Meanwhile, in the case of z < 2.35 (z > 2.35), the condensate at lower temperature decreases (increases) with increasing Weyl parameter γ. What is more, the difference among the condensates with fixed Weyl parameter (γ = −6/100, 0, 4/100) decreases (increases) with increasing Lifshitz parameter z in the region z < 2.35 (z > 2.35). Furthermore, increasing z clearly suppresses the real part of the conductivity for all values of the Weyl parameter γ. In addition, the analytical results agree well with those from the numerical method.


Author(s):  
Shuichi Fukuda

Our traditional design has been producer-centric. But to respond to frequent and extensive changes and increasing diversification, we have to change our design to be user-centric. This is not a straightforward extension, and just listening to the voice of the customer is not enough. Value is defined as value = performance/cost, but in current design practice performance has been interpreted solely as the functions of a final product, and all other factors, such as manufacturing, are considered as cost. This framework was effective until recently because there was an asymmetry of information between the producer and the customer. As the producer had a greater amount of information, they only had to produce the product they thought best, and it satisfied the customer who needed a product. The 20th century was the age of products. But as we approached the 21st century, we entered the information society, and sometimes the customer knows more than the producer. Thus, such a one-way flow of development to fill the information (water-level) gap does not work any more, because the gap is quickly disappearing. In traditional design this difference was evaluated as value, and it meant profit for the producer. Therefore, a new approach to creating value is called for. One solution is for the producer and the customer to raise the water level together, so that the increase in level provides profit for the producer and true value for the customer. In order to achieve this goal, we have to identify what the true value for the customer is. We have to step outside our traditional notion of value as the functions of a final product. What is the true value for the customer? It is the customer's satisfaction. Then how can we satisfy our customers? This paper points out that, because our customers are very active and creative, we can provide satisfaction by getting them involved in the whole process of product development. Our customers can then enjoy not only the product experience but also the process experience, which will satisfy their needs for self-actualization and challenge, i.e., their highest human needs.


Author(s):  
Yanjun Zhang ◽  
Tingting Xia ◽  
Mian Li

Abstract Various types of uncertainties, such as parameter uncertainty, model uncertainty, and metamodeling uncertainty, may lead to low robustness. Parameter uncertainty can be either epistemic or aleatory in physical systems, and the two kinds are widely represented by intervals and probability distributions, respectively. Model uncertainty is formally defined as the difference between the true value of the real-world process and the code output of the simulation model at the same value of the inputs. Additionally, metamodeling uncertainty is introduced by the use of metamodels. To reduce the effects of uncertainties, robust optimization (RO) algorithms have been developed to obtain solutions that are not only optimal but also less sensitive to uncertainties. Based on how parameter uncertainty is modeled, there are two categories of RO approaches: interval-based and probability-based. In real-world engineering problems, interval and probabilistic parameter uncertainties are likely to exist simultaneously in a single problem. However, few works have considered mixed interval and probabilistic parameter uncertainties together with other types of uncertainties. In this work, a general RO framework is proposed to deal with mixed interval and probabilistic parameter uncertainties, model uncertainty, and metamodeling uncertainty simultaneously in design optimization problems using intervals-of-statistics approaches. The consideration of multiple types of uncertainties will improve the robustness of optimal designs and reduce the risk of inappropriate decision-making, low robustness, and low reliability in engineering design. Two test examples are utilized to demonstrate the applicability and effectiveness of the proposed RO approach.
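The paper's intervals-of-statistics formulation is not reproduced here; the sketch below only illustrates one common way to handle mixed parameter uncertainty, nesting a Monte Carlo estimate of mean plus k standard deviations (probabilistic part) inside a worst-case search over an interval grid. The performance function, bounds, and distribution are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative performance model: design variable d, interval-uncertain
# parameter a in [A_LO, A_HI], probabilistic parameter b ~ N(MU_B, SIGMA_B).
def performance(d, a, b):
    return (d - 2.0) ** 2 + a * d + 0.5 * b * np.sin(d)

A_LO, A_HI = -0.3, 0.3
MU_B, SIGMA_B = 1.0, 0.2
rng = np.random.default_rng(1)
b_samples = rng.normal(MU_B, SIGMA_B, 2000)      # Monte Carlo for the probabilistic part

def robust_objective(d, k=2.0, n_grid=21):
    # For each interval value, estimate mean + k * std over the probabilistic
    # uncertainty, then take the worst case over the interval grid.
    worst = -np.inf
    for a in np.linspace(A_LO, A_HI, n_grid):
        f = performance(d, a, b_samples)
        worst = max(worst, f.mean() + k * f.std())
    return worst

res = minimize_scalar(robust_objective, bounds=(0.0, 4.0), method="bounded")
print("robust design:", res.x, "worst-case robust objective:", res.fun)
```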


2013 ◽  
Vol 411-414 ◽  
pp. 572-575
Author(s):  
Gang Ding ◽  
Hong Min Nie ◽  
Ying Zhe Liu

Nowadays, a variety of database systems coexist, and the application software that accesses and operates on these databases is often custom-written, which leads to complicated cross-database access. Through an analysis of the operational process of database middleware, a generic database query middleware, eQueryMW, which shields the differences between data sources, is added to the traditional three-tier architecture as the middleware layer. This realizes access to heterogeneous relational databases and provides users with a unified database query interface and a cross-database integration method.
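The internals of eQueryMW are not described in the abstract; the sketch below is only a generic illustration of the adapter idea behind such middleware, where per-database adapters hide driver and dialect differences behind one query interface. All class and method names are hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class DatabaseAdapter(ABC):
    """Hides driver- and dialect-specific details behind a common interface."""

    @abstractmethod
    def execute(self, query: str, params: Dict[str, Any]) -> List[Dict[str, Any]]:
        ...

class SQLiteAdapter(DatabaseAdapter):
    def __init__(self, path: str):
        import sqlite3
        self._conn = sqlite3.connect(path)
        self._conn.row_factory = sqlite3.Row

    def execute(self, query, params):
        rows = self._conn.execute(query, params).fetchall()
        return [dict(row) for row in rows]

class QueryMiddleware:
    """Routes a logical source name to the adapter that can serve it."""

    def __init__(self):
        self._sources: Dict[str, DatabaseAdapter] = {}

    def register(self, name: str, adapter: DatabaseAdapter) -> None:
        self._sources[name] = adapter

    def query(self, source: str, sql: str, **params) -> List[Dict[str, Any]]:
        return self._sources[source].execute(sql, params)

# Usage: callers see one interface regardless of the underlying database.
mw = QueryMiddleware()
mw.register("inventory", SQLiteAdapter(":memory:"))
print(mw.query("inventory", "SELECT :x AS answer", x=42))
```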


2012 ◽  
Vol 20 (3) ◽  
pp. 46-49 ◽  
Author(s):  
C. T. Schamp

The transmission electron microscope (TEM) is well known as the technique of choice for visualization and measurement of features at near-atomic length scales, particularly for semiconductor devices. For example, a critical measurement of interest may be the thickness of the gate oxide in a transistor. The accuracy of these measurements is based on calibrated distances at each magnification. The term accuracy conveys the extent to which the measurement minimizes the difference between the measured value and the true value. The associated term precision refers to the closeness of agreement among a series of measurements, such as repeated locations of the end-points of a measurement line. This article describes a method that increases the accuracy of metrology measurements applied to a high-resolution TEM image.
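A small numeric illustration (not from the article) of the two terms as defined above, using made-up repeated measurements of a feature whose calibrated true size is assumed known:

```python
import numpy as np

true_value_nm = 2.00                                          # assumed calibrated gate-oxide thickness, nm
measurements_nm = np.array([2.08, 2.11, 2.07, 2.10, 2.09])    # hypothetical repeated measurements

bias = measurements_nm.mean() - true_value_nm   # accuracy: closeness to the true value
spread = measurements_nm.std(ddof=1)            # precision: agreement among repeats
print(f"bias = {bias:+.3f} nm, spread = {spread:.3f} nm")
# A tight spread with a large bias means the measurements are precise but not accurate.
```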


2015 ◽  
Vol 21 (2) ◽  
pp. 344-348 ◽  
Author(s):  
Elitsa Petrova

Abstract The efficient market hypothesis holds that, because many talented analysts are constantly searching the market for good deals, after a short time few attractive deals remain. According to the hypothesis, successful investors owe their success more to luck than to skill. This paper presents value investing, an investment paradigm that stems from the ideas of Benjamin Graham and David Dodd. Its proponents, including the chairperson of Berkshire Hathaway, Warren Buffett, argue that the essence of value investing is to buy shares at a price lower than their true value. Graham called the difference between the market price and the intrinsic value of the stock the “margin of safety”.

