error tolerance
Recently Published Documents





Geophysics ◽  
2021 ◽  
pp. 1-42
Hanjie Song ◽  
Jinhai Zhang ◽  
Yongliao Zou

The Fourier method for one-way wave propagation is efficient but potentially inaccurate in complex media. The implicit finite-difference method can handle arbitrarily complex media, but can be inefficient in 3D and has limited dip bandwidth. We propose a new Fourier method based on Chebyshev expansion of the second kind. Both theoretical analyses and numerical experiments show that the proposed method is comprehensively superior to a similar method based on Chebyshev expansion of the first kind in terms of amplitude balance and error tolerance. Within the dip bandwidth from 0 to 65°, the fourth-order form of our method has an error tolerance of 2%, about one-third that of the first-kind expansion. Our method is also superior to the implicit finite-difference method in several important respects: effective bandwidth, computational efficiency, numerical dispersion, and two-way splitting error. It is easily extended from 2D to 3D, unlike the finite-difference method, and from low orders to high orders, unlike the optimized Chebyshev-Fourier method. On the SEG/EAGE model, the proposed method yields better imaging results than the implicit finite-difference method and the first-kind Chebyshev expansion, providing a well-focused salt dome, flank, and bottom as well as detailed structures beneath the salt body; it also produces fewer imaging artifacts because it positions reflectors more accurately.
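The abstract contrasts expansions in Chebyshev polynomials of the first and second kind. As a minimal sketch of the second-kind machinery only (a toy target function, not the paper's one-way propagator), a smooth function can be approximated by a low-order expansion in `U_n` fitted by least squares:

```python
import numpy as np
from scipy.special import eval_chebyu

# Toy illustration (not the paper's propagator): approximate a smooth
# phase-like function f on [-1, 1] with Chebyshev polynomials of the
# second kind U_n, fitted by least squares on a dense grid.
def fit_chebyu(f, order, n_samples=200):
    x = np.linspace(-1.0, 1.0, n_samples)
    # Design matrix: columns are U_0(x) ... U_order(x)
    A = np.column_stack([eval_chebyu(n, x) for n in range(order + 1)])
    coeffs, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    return coeffs

def eval_fit(coeffs, x):
    return sum(c * eval_chebyu(n, x) for n, c in enumerate(coeffs))

f = lambda x: np.cos(2.5 * x)    # stand-in smooth function (assumed)
c4 = fit_chebyu(f, order=4)      # a "fourth-order form" of the expansion
xs = np.linspace(-1, 1, 101)
max_err = np.max(np.abs(eval_fit(c4, xs) - f(xs)))
```

The maximum residual `max_err` plays the role of the error tolerance discussed in the abstract: the order of the expansion is raised until the residual falls below the required bound.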

2021 ◽  
Vol 9 ◽  
Fuquan Zhang ◽  
Junyu Zhang ◽  
Yan Lu ◽  
Yixiangzi Sheng ◽  
Yun Sun ◽  

Purpose: The radioactivity induced by proton and heavy-ion beams is ultra-low activity (ULA). The radioactivity and spatial range of commercial off-line positron emission tomography (PET) acquisition at ULA must therefore be evaluated accurately to guarantee the reliability of clinical verification. The purpose of this study is to quantify the radioactivity and spatial range of off-line PET acquisition by simulating the ULA induced by proton and heavy-ion beams. Methods: A PET equipment validation phantom and low-activity 18F-FDG were used to simulate ULA with radioactivity of 11.1–1480 Bq/mL. The radioactivity of ULA was evaluated by comparing the radioactivity in the images with the values calculated from the decay function, with a radioactivity error tolerance of 5%. The spatial range of ULA was evaluated by comparing the R50 width of the analyzed activity-distribution curve with the actual width of the container, with a spatial-range error tolerance of 4 mm. Results: When the radioactivity of ULA was >148 Bq/mL, the radioactivity error was <5%. When the radioactivity of ULA was >30 Bq/mL, the spatial-range error was below 4 mm. Conclusions: Off-line PET can be used to quantify the radioactivity of proton and heavy-ion beams when the ULA exceeds 148 Bq/mL, in both radioactivity and spatial range.
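The radioactivity check described above compares measured values against the decay law. A minimal sketch of that comparison (illustrative numbers; the 18F half-life of ~109.77 minutes is standard, but the sample measurement is invented):

```python
import math

# Sketch of the radioactivity check: compare a measured concentration
# against the value predicted by the decay law
#   A(t) = A0 * exp(-ln(2) * t / T_half)
# using the 18F half-life, with a 5% relative-error tolerance.
F18_HALF_LIFE_MIN = 109.77

def expected_activity(a0_bq_ml, minutes):
    return a0_bq_ml * math.exp(-math.log(2) * minutes / F18_HALF_LIFE_MIN)

def within_tolerance(measured, expected, tol=0.05):
    return abs(measured - expected) / expected <= tol

a0 = 1480.0                            # starting concentration, Bq/mL
pred = expected_activity(a0, 60)       # predicted value after one hour
# e.g. a (hypothetical) measurement of 1000 Bq/mL at t = 60 min:
ok = within_tolerance(1000.0, pred)
```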

2021 ◽  
pp. 001316442110497
Robert L. Brennan ◽  
Stella Y. Kim ◽  
Won-Chan Lee

This article extends multivariate generalizability theory (MGT) to tests with different random-effects designs for each level of a fixed facet. There are numerous situations in which the design of a test and the resulting data structure are not definable by a single design. One example is mixed-format tests that are composed of multiple-choice and free-response items, with the latter involving variability attributable to both items and raters. In this case, two distinct designs are needed to fully characterize the design and capture potential sources of error associated with each item format. Another example involves tests containing both testlets and one or more stand-alone sets of items. Testlet effects need to be taken into account for the testlet-based items, but not the stand-alone sets of items. This article presents an extension of MGT that faithfully models such complex test designs, along with two real-data examples. Among other things, these examples illustrate that estimates of error variance, error–tolerance ratios, and reliability-like coefficients can be biased if there is a mismatch between the user-specified universe of generalization and the complex nature of the test.

Aleksandr Zatsarinny ◽  
Yuri Stepchenkov ◽  
Yuri Diachenko ◽  
Yuri Rogdestvenski

The article considers the problem of developing synchronous and self-timed (ST) digital circuits that are tolerant to soft errors. Synchronous circuits traditionally use the 2-of-3 voting principle to mask a single failure, which triples the hardware cost. In ST circuits, thanks to dual-rail signal coding and two-phase control, even simple duplication provides a soft-error tolerance level 2.1 to 3.5 times higher than that of the triple modular redundant synchronous counterpart. The development of new high-precision software for simulating microelectronic failure mechanisms will provide more accurate estimates of the failure tolerance of electronic circuits.
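The 2-of-3 voting mentioned above can be sketched in a few lines: three replicas of a module compute in parallel, and a bitwise majority masks a soft error in any single replica.

```python
# Minimal sketch of the 2-of-3 majority voting used in triple modular
# redundant (TMR) synchronous circuits.
def vote_2_of_3(a, b, c):
    """Bitwise majority: each output bit is the majority of the
    corresponding input bits, (a & b) | (a & c) | (b & c)."""
    return (a & b) | (a & c) | (b & c)

# A single flipped bit in one replica is masked by the other two:
correct = 0b1011
flipped = correct ^ 0b0100     # soft error flips one bit in one copy
result = vote_2_of_3(correct, correct, flipped)   # result == correct
```

The cost the abstract refers to is visible here: three module instances plus the voter, versus the two instances that suffice for the dual-rail ST scheme.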

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7121
Chiu-Han Hsiao ◽  
Frank Yeong-Sung Lin ◽  
Hao-Jyun Yang ◽  
Yennun Huang ◽  
Yu-Fang Chen ◽  

As wireless sensor networks have become more prevalent, data from sensors in daily life are constantly being recorded. Due to cost and energy-consumption considerations, optimization-based approaches are proposed to reduce the number of deployed sensors while keeping results within the error tolerance. A correlation-aware method is also formulated as a mathematical model that combines theoretical and practical perspectives. Sensor deployment strategies based on XGBoost, Pearson correlation, and Lagrangian Relaxation (LR) are determined to minimize deployment costs while maintaining estimation errors below a given threshold. The results ensure the accuracy of the gathered information while minimizing the cost of deployment and maximizing the lifetime of the WSN. Furthermore, the proposed solution can be readily applied to sensor distribution problems in various fields.
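The correlation-aware idea can be sketched greedily (toy data and a simple linear-regression check stand in for the paper's XGBoost/LR formulation, which the abstract does not detail): a sensor is dropped when an already-kept sensor predicts its readings within the error tolerance.

```python
import numpy as np

# Hedged sketch of correlation-aware deployment reduction (toy data;
# the paper's actual method combines XGBoost, Pearson correlation, and
# Lagrangian Relaxation). A sensor is dropped when an already-kept
# sensor predicts its readings, via least-squares regression, within
# the error tolerance.
def greedy_prune(readings, tol):
    """readings: (n_samples, n_sensors) array. Returns indices kept."""
    kept = []
    for j in range(readings.shape[1]):
        redundant = False
        for k in kept:
            slope, intercept = np.polyfit(readings[:, k], readings[:, j], 1)
            pred = slope * readings[:, k] + intercept
            if np.max(np.abs(pred - readings[:, j])) <= tol:
                redundant = True   # sensor j is recoverable from sensor k
                break
        if not redundant:
            kept.append(j)
    return kept

rng = np.random.default_rng(0)
base = rng.normal(20.0, 2.0, size=100)            # e.g. a temperature signal
data = np.column_stack([
    base,                                          # sensor 0
    1.8 * base + 5.0 + rng.normal(0, 0.01, 100),   # sensor 1: ~linear in 0
    rng.normal(50.0, 5.0, size=100),               # sensor 2: independent
])
kept = greedy_prune(data, tol=0.1)                 # sensor 1 is pruned
```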

2021 ◽  
Irati Ibanez-Hidalgo ◽  
Alain Sanchez-Ruiz ◽  
Angel Perez-Basante ◽  
Salvador Ceballos ◽  
Asier Zubizarreta ◽  

Atayi Abraham Vincent ◽  

This study seeks to examine the relationship between entrepreneurship practices and the level of profitability among farmers in Jos. The study covered small- and medium-scale farmer entrepreneurs within Jos North, Jos South, and Jos East. A sample size of 518 was obtained from a population of 834 at a 5% error tolerance and a 95% confidence level, using simple random sampling. A self-structured questionnaire was used to collect data; 505 (97.5%) of the questionnaires distributed were returned. The study conducted a pre-test on the questionnaire to ensure the validity of the instrument. The data collected were presented in descriptive statistics and frequency tables. Profitability was measured using financial ratios such as gross profit margin, net profit margin, return on assets, sales per year, and total assets. The average values for gross profit margin, net profit margin, and return on assets were 29.47%, 19.2%, and 8.2%, respectively; the results show that the individual farmers in this study achieve a high level of profit. The study recommends, among other things, that governments at all levels work to create a more conducive environment for farmer entrepreneurs to make profitable investments in agriculture.

Nuodi Huang ◽  
Li Hua ◽  
Xi Huang ◽  
Yang Zhang ◽  
Li-Min Zhu ◽  

Abstract: A toolpath represented by linear segments has tangency discontinuities between blocks, which cause feedrate fluctuation and reduce machining efficiency and quality. To eliminate these effects, an optimal corner-smoothing operation is essential for CNC systems to achieve a smooth toolpath. This work proposes a corner-smoothing approach that generates a B-spline transition curve with seven control points. By adjusting the positions of the control points, the transition curve is not limited to smoothing the corner on its convex side; it shuttles back and forth between the convex and concave sides to decrease the maximum curvature while respecting the given error tolerance. The approximation errors on the convex and concave sides can be calculated analytically. Experimental results demonstrate the effectiveness of the proposed method in improving machining efficiency.
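The basic ingredients — a seven-control-point B-spline transition at a corner, checked against an error tolerance — can be sketched as follows (the control-point layout and tolerance are illustrative, not the paper's optimized construction):

```python
import numpy as np
from scipy.interpolate import BSpline

# Hedged sketch (not the paper's construction): smooth a 90-degree
# corner at the origin with a clamped cubic B-spline defined by seven
# control points, then check the corner-rounding error against a
# given tolerance.
corner = np.array([0.0, 0.0])
ctrl = np.array([             # 7 control points straddling the corner
    [-0.6, 0.0], [-0.4, 0.0], [-0.2, 0.0],
    [ 0.0, 0.0],              # middle point placed at the corner
    [ 0.0, 0.2], [ 0.0, 0.4], [ 0.0, 0.6],
])
k = 3                         # cubic
# Clamped knot vector: 7 control points + k + 1 = 11 knots
knots = np.concatenate([[0.0] * 4, [0.25, 0.5, 0.75], [1.0] * 4])
spline = BSpline(knots, ctrl, k)

u = np.linspace(0.0, 1.0, 501)
pts = spline(u)
# Approximation error: closest approach of the curve to the sharp corner
err = np.min(np.linalg.norm(pts - corner, axis=1))
within = err <= 0.05          # illustrative error tolerance
```

Moving the middle control points toward or across the corner is what trades curvature against this approximation error, which is the lever the abstract describes.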

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Dongming Li ◽  
Peng Tang ◽  
Run Zhang ◽  
Changming Sun ◽  
Yong Li ◽  

For the analysis of medical images, one of the most basic methods is to diagnose diseases by examining blood smears under a microscope to check the morphology, number, and ratio of red and white blood cells. Accurate segmentation of blood cell images is therefore essential for cell counting and identification. The aim of this paper is to perform blood smear image segmentation by combining neural ordinary differential equations (NODEs) with a U-Net network to improve segmentation accuracy. To study the effect of the ODE solver on the speed and accuracy of the network, ODE-block modules were added to the nine convolutional layers of the U-Net. First, the blood cell images are preprocessed to enhance the contrast of the regions to be segmented; second, the same dataset is used for the training and testing sets to evaluate segmentation results. Based on the experimental results, we select where to insert the ordinary-differential-equation block (ODE-block) module and choose an error tolerance that balances computation time against segmentation accuracy; finally, the error tolerance of the ODE-block is adjusted to increase the effective network depth, and the trained NODEs-UNet model is used for cell image segmentation. On the testing set, the proposed model achieves 95.3% pixel accuracy and 90.61% mean intersection over union. Compared with the U-Net and ResNet networks, the pixel accuracy of our model increases by 0.88% and 0.46%, respectively, and the mean intersection over union increases by 2.18% and 1.13%, respectively. The proposed model improves the accuracy of blood cell image segmentation and reduces the computational cost of the network.
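The error-tolerance/compute trade-off at the heart of this abstract is a general property of adaptive ODE solvers. A minimal illustration with SciPy's `solve_ivp` on a toy equation (not the paper's NODEs-UNet, which integrates learned dynamics): loosening the tolerance cuts the number of function evaluations at the cost of accuracy.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustration: the error tolerance of an adaptive ODE solver trades
# accuracy against compute. Looser rtol/atol means fewer right-hand-
# side evaluations (less "depth") but a larger solution error.
def rhs(t, y):
    return -y                   # dy/dt = -y, exact solution is exp(-t)

def solve_with_tol(tol):
    sol = solve_ivp(rhs, (0.0, 5.0), [1.0], rtol=tol, atol=tol)
    err = abs(sol.y[0, -1] - np.exp(-5.0))
    return err, sol.nfev        # nfev counts function evaluations

err_loose, nfev_loose = solve_with_tol(1e-2)
err_tight, nfev_tight = solve_with_tol(1e-8)
```

In a NODE, `nfev` is effectively the network's depth at inference time, which is why tuning the tolerance directly trades segmentation accuracy against computation time.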
