Per cent error
Recently Published Documents


TOTAL DOCUMENTS: 33 (FIVE YEARS: 9)
H-INDEX: 7 (FIVE YEARS: 1)

2021 · Vol 228 (2) · pp. 857–875
Author(s): Ashley Bellas, Shijie Zhong, Anthony B Watts

SUMMARY Recent modelling studies have shown that laboratory-derived rheology is too strong to reproduce observations of flexure at the Hawaiian Islands, while the same rheology appears consistent with outer rise–trench flexure at circum-Pacific subduction zones. Collectively, these results indicate that oceanic lithosphere is rheologically stronger at a plate boundary than in the plate interior, which, if correct, presents a challenge to understanding the formation of trenches and subduction initiation. To understand this dilemma, we first investigate laboratory-derived rheology using fully dynamic viscoelastic loading models and find that it is too strong to reproduce the observationally inferred elastic thickness, Te, at most plate interior settings. The Te can, however, be explained if the yield stress of low-temperature plasticity is significantly reduced, for example, by reducing the activation energy from 320 kJ mol⁻¹, as in Mei et al., to 190 kJ mol⁻¹, as was required by previous studies of the Hawaiian Islands, implying that the lithosphere beneath Hawaii is not anomalous. Second, we test the accuracy of the modelling methods used to constrain the rheology of subducting lithosphere, including the yield stress envelope (YSE) method and the broken elastic plate model (BEPM). We show that the YSE method accurately reproduces the model Te to within ∼10 per cent error, with only modest sensitivity to the assumed strain rate and curvature. Finally, we show that the response of a continuous plate is significantly enhanced when a free edge is introduced at or near an edge load, as in the BEPM, and is sensitive to the degree of viscous coupling at the free edge. Since subducting lithosphere is continuous and generally mechanically coupled to a sinking slab, the BEPM may falsely introduce a weakness and hence, because of this trade-off, overestimate Te at a trench. This could explain the results of recent modelling studies that suggest the rheology of a subducting oceanic plate is stronger than that of the plate interior. However, further studies using more advanced thermal and mechanical models will be required to quantify this.
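
For context, the elastic thickness Te discussed throughout enters these flexure models through the standard thin-plate rigidity relation (not spelled out in the abstract, but standard in flexure studies), where E is Young's modulus and ν is Poisson's ratio:

```latex
% Flexural rigidity D of a thin elastic plate of effective thickness T_e
D = \frac{E \, T_e^{3}}{12\,(1 - \nu^{2})}
```

Because D grows with the cube of Te, even a modest weakening of the rheology translates into a markedly softer flexural response.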


BISMA · 2020 · Vol 14 (2) · pp. 101
Author(s): John Henry Wijaya, Nugi Mohammad Nugraha

This study aims to determine how the performance of banking stocks in 2017, measured weekly, can be forecast using the ARCH-GARCH method. There were 43 banking companies listed on the Indonesia Stock Exchange, but only 39 were used as the research sample, based on data completeness. The ARCH-GARCH method was used in the forecasting process. Results showed that the mean absolute per cent error was 8.52%, below the 10% threshold; the ARCH-GARCH method was therefore quite good at predicting the performance of the banking sector. Given its high level of complexity, the ARCH-GARCH method can provide a more realistic description than other methods to help investors make decisions. The banking sector tends to experience a downturn, so it would be better for investors to hold back from investing in banking stocks unless they are risk-takers.

Keywords: ARCH-GARCH, banking sector, stock performance
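
As a rough illustration of the method described above, the sketch below fits a GARCH(1,1) model to a return series and scores forecasts with the mean absolute per cent error. It is a minimal sketch only: the `arch` package, the synthetic input series, and the forecast horizon are my assumptions, since the abstract does not state the exact specification.

```python
# Minimal sketch: fit a GARCH(1,1) model to weekly returns and score
# forecasts with the mean absolute per cent error (MAPE), the accuracy
# metric used in the study. Input data here are synthetic placeholders.
import numpy as np
from arch import arch_model  # assumed library choice, not the paper's

def mape(actual, predicted):
    """Mean absolute per cent error, in per cent."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

print(mape([1.00, 2.00, 4.00], [1.10, 1.90, 4.20]))  # -> 6.67 (per cent)

weekly_returns = np.random.default_rng(0).normal(0.0, 1.0, 200)
model = arch_model(weekly_returns, vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
forecast = result.forecast(horizon=4)   # four weeks ahead
print(forecast.variance.iloc[-1])       # predicted conditional variances
```

A MAPE below 10 per cent, as reported here, is a common rule-of-thumb threshold for an acceptable forecasting model.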


2020 · Vol 12 (11) · pp. 1823
Author(s): Max Yaremchuk, Joseph M. D’Addezio, Gregg Jacobs

Wide-swath satellite altimeter observations are contaminated by errors caused by uncertainties in the geometry and orientation of the on-board interferometer. These errors are strongly correlated across the track, while also having similar error structures in the along-track direction. We describe a method for modifying the geometric component of the error covariance matrix that improves the accuracy with which the respective error modes are removed from the signal, and the computational efficiency of data assimilation schemes involving wide-swath altimeter observations. The method has been tested using the Surface Water and Ocean Topography (SWOT) simulator. We show substantial computational cost savings in the pseudo-inversion of the respective error covariance matrix. This efficiency improvement comes at the cost of only a few per cent error in the approximation of the original covariance model simulating uncertainties in the geometry and orientation of the on-board interferometer.
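
The kind of saving involved can be sketched in a few lines of numpy. The structure below, a handful of smooth cross-track error modes plus an uncorrelated noise floor, is my assumption for illustration, not the paper's actual covariance model:

```python
# Minimal sketch: a covariance built from k smooth error modes can be
# (pseudo-)inverted via the Woodbury identity using only k x k inverses,
# instead of a direct O(n^3) inversion of the full n x n matrix.
# The polynomial mode shapes are hypothetical stand-ins for the
# interferometer geometry/orientation error modes.
import numpy as np

n, k = 2000, 4                       # observations per swath, error modes
x = np.linspace(-1.0, 1.0, n)        # normalized cross-track coordinate
H = np.stack([x**j for j in range(k)], axis=1)  # mode shapes, (n, k)
lam = np.diag([10.0, 5.0, 2.0, 1.0])            # mode variances
sigma2 = 0.01                                    # noise floor

R = H @ lam @ H.T + sigma2 * np.eye(n)           # full covariance, (n, n)

# Woodbury identity: only k x k matrices are inverted
Rinv = (np.eye(n) - H @ np.linalg.inv(
    sigma2 * np.linalg.inv(lam) + H.T @ H) @ H.T) / sigma2

print(np.allclose(Rinv @ R, np.eye(n), atol=1e-6))  # True
```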


2020 · Vol 221 (2) · pp. 928–937
Author(s): Denys Grombacher, Mason Andrew Kass, Esben Auken, Jakob Juul Larsen

SUMMARY A surface nuclear magnetic resonance (NMR) forward model based on the full Bloch equation improves the accuracy of the forward response for an arbitrary excitation pulse and a wider range of relaxation conditions. However, the full-Bloch solution imposes a significant slowdown in inversion times compared to the traditional forward model. We present a fast-mapping approach capable of dramatic increases in inversion speed with minimal sacrifice in forward response accuracy. We show that the look-up tables used to calculate the transverse magnetization, and the full surface NMR forward response, are smoothly varying functions of the underlying T2* and T2 values. We exploit this smoothness to form a polynomial representation of the look-up tables and surface NMR forward responses, whereby a fast-mapping approximation of each is reduced to a simple matrix multiplication. Accurate approximations with less than 1 per cent error can be produced using 21-coefficient representations of the look-up tables for each B1 value and for the signal expected from a particular depth layer for a particular pulse moment. In essence, the proposed fast-mapping approach front-loads all expensive calculations and stores the results in compressed form as a coefficient matrix containing fewer than half a million elements. This allows all subsequent inversions to be performed at greatly improved speed.
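
The core of the fast-mapping idea, replacing a smooth look-up table by a low-order polynomial whose evaluation is a single matrix multiplication, can be sketched as follows. The 21-coefficient count comes from the abstract; the tabulated function itself is a hypothetical stand-in for the transverse-magnetization tables:

```python
# Minimal sketch: fit a smooth look-up table once with 21 polynomial
# coefficients, then evaluate it for any new T2* value via one matrix
# multiplication. The tabulated function is a smooth placeholder, not
# the actual surface NMR forward response.
import numpy as np
from numpy.polynomial import chebyshev as cheb

t2 = np.linspace(0.05, 0.5, 200)        # T2* grid (seconds)
table = np.exp(-0.1 / t2)               # placeholder smooth response

u = (t2 - 0.275) / 0.225                # map [0.05, 0.5] -> [-1, 1]
V = cheb.chebvander(u, 20)              # 21 basis functions
coeffs, *_ = np.linalg.lstsq(V, table, rcond=None)  # one-time cost

t2_new = np.array([0.12, 0.23, 0.41])   # per-inversion cost: one matmul
approx = cheb.chebvander((t2_new - 0.275) / 0.225, 20) @ coeffs
exact = np.exp(-0.1 / t2_new)
print(100 * np.max(np.abs(approx - exact) / exact))  # far below 1 per cent
```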


2019 · Vol 491 (2) · pp. 1600–1621
Author(s): Yi Mao, Jun Koda, Paul R Shapiro, Ilian T Iliev, Garrelt Mellema, et al.

ABSTRACT Cosmic reionization was driven by the imbalance between early sources and sinks of ionizing radiation, both of which were dominated by small-scale structure and are thus usually treated in cosmological reionization simulations by subgrid modelling. The recombination rate of intergalactic hydrogen is customarily boosted by a subgrid clumping factor, ⟨n²⟩/⟨n⟩², which corrects for unresolved fluctuations in gas density n on scales below the grid spacing of coarse-grained simulations. We investigate in detail the impact of this inhomogeneous subgrid clumping on reionization and its observables, as follows: (1) Previous attempts generally underestimated the clumping factor because of insufficient mass resolution. We perform a high-resolution N-body simulation that resolves haloes down to the pre-reionization Jeans mass to derive the time-dependent, spatially varying local clumping factor, and a fitting formula for its correlation with local overdensity. (2) We then perform a large-scale N-body and radiative transfer simulation that accounts for this inhomogeneous subgrid clumping by applying this clumping factor–overdensity correlation. Boosting recombination significantly slows the expansion of ionized regions, which delays completion of reionization and suppresses 21 cm power spectra on large scales in the later stages of reionization. (3) We also consider a simplified prescription in which the globally averaged, time-evolving clumping factor from the same high-resolution N-body simulation is instead applied uniformly to all cells in the reionization simulation. Observables computed with this model agree fairly well with those from the inhomogeneous clumping model, e.g. predicting 21 cm power spectra to within 20 per cent error, suggesting it may be a useful approximation.
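
The subgrid clumping factor itself is a one-line statistic of the density field inside a coarse cell, as the following sketch shows (the density values are random placeholders):

```python
# Minimal sketch: the clumping factor <n^2>/<n>^2 over the subgrid gas
# densities n inside one coarse cell; it equals 1 for uniform gas and
# boosts the recombination rate in proportion to its excess over 1.
import numpy as np

def clumping_factor(n):
    n = np.asarray(n, dtype=float)
    return np.mean(n**2) / np.mean(n)**2

rng = np.random.default_rng(1)
print(clumping_factor(np.full(512, 1.0)))                  # uniform: 1.0
print(clumping_factor(rng.lognormal(0.0, 1.0, size=512)))  # clumpy: > 1
```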


2019 · Vol 19 (3) · pp. 186–200
Author(s): Deborah B. Kim, Edward D. White, Jonathan D. Ritschel, Chad A. Millette

Purpose Within earned value management, the cost performance index (CPI) and the critical ratio (CR) are used to generate estimates at completion (EACs). According to research from the 1990s, estimating a contract's final cost at completion (CAC) using EAC_CR gives a quicker prediction of the actual final cost than using EAC_CPI. This paper aims to investigate whether this trend still holds for modern Department of Defense contracts. Design/methodology/approach Accessing the Cost Assessment Data Enterprise (CADE) database, 451 contracts consisting of 863 contract line item numbers (CLINs) were initially retrieved and analyzed in three stages. The first replicated the work conducted in the 1990s. The second stage entailed calculating 95 per cent confidence intervals and hypothesis tests regarding the percentage accuracy of EACs for a contract's final CAC. Lastly, regression analysis was conducted to characterize major, moderate and minor influences on EAC reliability. Findings For modern contracts, EAC_CR aligns more closely with EAC_CPI and no longer provides an early, accurate prediction of a contract's final CAC. Contract percentage completion strongly reduced the per cent error of estimating the CAC, while cost-plus-fixed-fee contracts and those with no work breakdown structure deeper than Level 2 negatively affected accuracy. Social implications The findings militate against over-optimism in early assessments of a contract's true cost. Originality/value This paper provides empirical evidence that EAC_CR behaves more like EAC_CPI for modern contracts, suggesting that today's contracts have relatively high schedule performance indices (SPI). Therefore, caution is warranted for program managers estimating the CAC from contract initiation up to and slightly beyond the mid-point of completion.
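
For readers outside the earned value management community, the two estimators compared in the paper follow from the standard earned-value identities. The sketch below uses those identities with illustrative dollar figures, not data from the study:

```python
# Minimal sketch of the two index-based estimates at completion (EACs):
# CPI = EV / AC (cost performance index), SPI = EV / PV (schedule
# performance index), CR = CPI * SPI (critical ratio), with BAC the
# budget at completion. Inputs below are illustrative only.
def eac_cpi(bac, ev, ac):
    cpi = ev / ac
    return ac + (bac - ev) / cpi        # algebraically equal to BAC / CPI

def eac_cr(bac, ev, ac, pv):
    cr = (ev / ac) * (ev / pv)
    return ac + (bac - ev) / cr

def per_cent_error(estimate, final_cac):
    return 100.0 * (estimate - final_cac) / final_cac

bac, ev, ac, pv = 10_000_000, 4_000_000, 4_500_000, 4_200_000
print(eac_cpi(bac, ev, ac))             # 11,250,000.0
print(eac_cr(bac, ev, ac, pv))          # higher still, since SPI < 1
```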


2019 · Vol 487 (3) · pp. 3419–3426
Author(s): Valerio Marra, Eddy G Chirinos Isidro

ABSTRACT Using almost one million galaxies from the final Data Release 12 of the SDSS Baryon Oscillation Spectroscopic Survey (BOSS), we have obtained, albeit with low significance, a first model-independent determination of the radial baryon acoustic oscillation (BAO) peak, with 9 per cent error: Δz_BAO(z_eff = 0.51) = 0.0456 ± 0.0042. To obtain this measurement, the radial correlation function was computed in 7700 angular pixels, from which the mean correlation function and covariance matrix were obtained, making the analysis completely model-independent. This novel method of obtaining the covariance matrix was validated via comparison with 500 BOSS mock catalogues. This Δz_BAO determination can be used to constrain the background expansion of exotic models for which the assumptions adopted in the standard analysis are not satisfied. Future galaxy catalogues from J-PAS, DESI, and Euclid are expected to significantly increase the quality and significance of model-independent determinations of the BAO peak, possibly at various redshifts and angular positions. We stress that it is imperative to test the standard paradigm in a model-independent way, in order to probe its foundations, maximize the extraction of information from the data, and look for clues regarding the poorly understood dark energy and dark matter.
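
The model-independent covariance estimate described here reduces to a standard sample-covariance computation across pixels. A minimal numpy sketch, with a hypothetical (n_pixels, n_bins) array standing in for the per-pixel radial correlation functions:

```python
# Minimal sketch: mean radial correlation function and its covariance
# estimated directly from per-pixel measurements, with no model input.
# xi_pix is a placeholder for the (7700 pixels) x (radial bins) data.
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_bins = 7700, 40
xi_pix = rng.normal(0.0, 1.0, (n_pix, n_bins))  # placeholder measurements

xi_mean = xi_pix.mean(axis=0)                    # mean correlation function
cov_mean = np.cov(xi_pix, rowvar=False) / n_pix  # covariance of the mean,
                                                 # pixels treated as
                                                 # independent realizations
sigma = np.sqrt(np.diag(cov_mean))               # 1-sigma error per bin
```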


Water SA · 2019 · Vol 45 (2 April)
Author(s): Sezar Gulbaz, Cevza Melek Kazezyılmaz-Alhan, Rasim Temür

Urbanization of a watershed affects both surface water and groundwater resources. As impervious area increases, the excess runoff and the volume of water collected at the downstream end of the watershed also increase, owing to decreases in groundwater recharge, depression storage, infiltration and evapotranspiration. Low-impact development (LID) methods have been developed to diminish the adverse effects of excess stormwater runoff. Bioretention is one LID type used to prevent flooding, by decreasing runoff volume and peak flow rate, and to manage stormwater by improving water quality. In this study, an empirical formula is derived to predict the peak outflow from a bioretention column as a function of the ponding depth above the column and the hydraulic conductivity, porosity, suction head, initial moisture content and height of the soil mixture used in the column. Coefficients of the empirical formula are determined using metaheuristic algorithms. For the analyses, experimental data obtained from a rainfall-watershed-bioretention (RWB) system are used. The reliability of the empirical formula is evaluated by calculating the absolute per cent error between the peak value of the measured outflow and the calculated outflow of the bioretention columns. The results show that the performance of the empirical formula is satisfactory.
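
As a sketch of how such a calibration can be set up, the snippet below fits the coefficients of an assumed power-law formula by minimizing the mean absolute per cent error with differential evolution, one representative metaheuristic. The functional form, variable names, and data are all hypothetical; the abstract does not state the paper's actual formula or algorithms:

```python
# Minimal sketch: calibrate coefficients of a hypothetical empirical
# peak-outflow formula with a metaheuristic, scoring candidates by the
# absolute per cent error against measured peak outflows.
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical data: ponding depth h (m), hydraulic conductivity k (m/h),
# initial moisture content theta (-), measured peak outflow q_obs (L/s)
h = np.array([0.10, 0.15, 0.20, 0.25])
k = np.array([0.05, 0.05, 0.10, 0.10])
theta = np.array([0.20, 0.25, 0.20, 0.30])
q_obs = np.array([0.8, 1.1, 1.9, 2.4])

def q_peak(c):
    """Assumed power-law form: Q = c0 * h^c1 * k^c2 * theta^c3."""
    return c[0] * h**c[1] * k**c[2] * theta**c[3]

def objective(c):
    return 100.0 * np.mean(np.abs(q_peak(c) - q_obs) / q_obs)

res = differential_evolution(objective,
                             bounds=[(0, 50), (0, 3), (0, 3), (0, 3)],
                             seed=0)
print(res.x, res.fun)   # fitted coefficients, mean abs per cent error
```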


2019 · Vol 27 (1) · pp. 151–188
Author(s): Sherwood Lane Lambert, Kevin Krieger, Nathan Mauck

Purpose To the authors’ knowledge, this paper is the first to use Detail I/B/E/S to study directly the timeliness of security analysts’ next-year earnings-per-share (EPS) estimates relative to the SEC filings of annual (10-K) and quarterly (10-Q) financial statements. Although the authors do not prove a causal relationship, they provide evidence that the average time from firms’ filings of 10-Ks and 10-Qs to the release of analysts’ annual EPS forecasts during short timeframes (for example, the 15-day timeframe from a 10-K’s SEC file date) significantly shortened with XBRL implementation and then remained relatively constant following implementation. Design/methodology/approach Using filing dates hand-collected from the SEC website for 10-Ks during 2009-2011, filing dates for 10-Ks and 10-Qs during 2003-2014 from Compustat, and analysts’ estimated values for next-year EPS, the actual next-year EPS realized and the estimate announcement dates in Detail I/B/E/S, the authors study the days from 10-K and 10-Q file dates to announcement dates and the per cent errors of individual estimates during the pre- and post-XBRL eras. Findings The authors find that analysts announce next-year EPS forecasts significantly more frequently, and in significantly shorter time, in the zero to 15 days immediately following 10-K and 10-Q file dates post-XBRL as compared to pre-XBRL. However, the authors do not find a significant change in forecast accuracy post-XBRL as compared to pre-XBRL. Research limitations/implications Because this study uses short timeframes immediately following the events (filings of 10-Ks and 10-Qs), the inferred relationship between 10-Ks and 10-Qs with and without XBRL and improved forecast timeliness is strengthened. However, even this strengthened difference-in-difference methodology does not establish causality. Future research may determine whether XBRL or other factors caused the improved forecast timeliness evidenced here. Practical implications This improved efficiency may become critical if financial statement reporting expands as a result of new innovations such as Big Data and continuous reporting. In the future, users may be able to connect electronically to financial statement data that firms maintain on a perpetual basis on the SEC website and continuously monitor and analyze the data dynamically in real time. If so, then unquestionably XBRL will have played a critical role in bringing about this innovation. Originality/value Whereas previous studies have utilized Summary I/B/E/S data to assess the impact of XBRL on analyst forecasts, the authors use Detail I/B/E/S to study the effects of XBRL adoption directly, by measuring the days from the 10-K and 10-Q file dates in Compustat to each estimate’s announcement date recorded in I/B/E/S, and by computing the per cent error using each estimate’s VALUE and ACTUAL recorded in Detail I/B/E/S. The authors are the first to evidence a significant shortening in average days, and an increase in the per cent of 30-day counts falling in the zero- to 15-day timeframe, immediately following the filings of 10-Ks and 10-Qs.
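
The two quantities at the heart of the study, days from filing to forecast and the per cent error of each estimate, are straightforward to compute once the filing and I/B/E/S data are merged. A minimal pandas sketch, with hypothetical column names and values:

```python
# Minimal sketch: days from the 10-K/10-Q file date to each estimate's
# announcement date, and the per cent error of each estimate computed
# from its VALUE (forecast) and ACTUAL (realized EPS), as in Detail
# I/B/E/S. The rows below are invented illustrations.
import pandas as pd

df = pd.DataFrame({
    "file_date": pd.to_datetime(["2013-02-20", "2013-02-20", "2013-05-05"]),
    "announce_date": pd.to_datetime(["2013-02-26", "2013-03-04", "2013-05-08"]),
    "VALUE": [2.10, 2.05, 1.40],   # analyst next-year EPS estimate
    "ACTUAL": [2.00, 2.00, 1.50],  # realized EPS
})

df["days_to_forecast"] = (df["announce_date"] - df["file_date"]).dt.days
df["pct_error"] = 100 * (df["VALUE"] - df["ACTUAL"]).abs() / df["ACTUAL"].abs()

# Estimates announced in the 0- to 15-day window after the filing
window = df[df["days_to_forecast"].between(0, 15)]
print(window[["days_to_forecast", "pct_error"]])
```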


2018 · Vol 90 (9) · pp. 1445–1463
Author(s): Hyeong-Uk Park, Joon Chung, Ohyun Kwon

Purpose The purpose of this paper is the development of a virtual flight test framework with derivative design optimization. Aircraft manufacturers and engineers put significant effort into the design process to keep development cost and time to a minimum. In terms of flight tests and aircraft certification, implementing simulation and virtual test techniques can be an effective means of achieving these goals. In addition to simulation and virtual testing, a derivative design can be implemented to satisfy different market demands and technical changes while reducing development cost and time. Design/methodology/approach In this paper, derivative design optimization was applied to the Expedition 350, a small piston-engine aircraft developed by Found Aircraft in Canada. A derivative that converts the manned aircraft into an unmanned aerial vehicle (UAV) for payload delivery was considered. An optimum configuration was obtained while enhancing the endurance of the UAV. The multidisciplinary design optimization (MDO) module of the framework produces the optimized configuration and additional parameters for the simulator. These values were implemented in the simulator to generate the aircraft model for simulation. Two aircraft models were generated for the flight test. Findings The optimization process delivered the UAV derivative of the Expedition E350, with endurance increased to 21.7 hours. The original and optimized models were implemented in the virtual flight test. Cruise performance exhibited less than 10 per cent error between the original model and the Pilot's Operating Handbook (POH). The dynamic stability of the original and optimized models was tested by checking the phugoid, short-period, Dutch roll and spiral modes. Both models exhibited stable dynamic characteristics. Practical implications The original Expedition 350 model was generated to verify the accuracy of the simulation data by comparing its results with actual flight test data. The optimized model was generated to evaluate the optimization results. Ultimately, a virtual flight test framework with aircraft derivative design is proposed in this research. An additional module for derivative design optimization was developed, and its results were implemented in commercial off-the-shelf simulators. Originality/value This paper proposes the application of UAV derivative design optimization within a virtual flight test framework. The methodology includes the optimization of a UAV derivative using MDO and the virtual flight testing of the optimized result with a flight simulator.

