significant error
Recently Published Documents


TOTAL DOCUMENTS: 108 (five years: 33)

H-INDEX: 11 (five years: 1)

2021 ◽  
Vol 8 (6) ◽  
pp. 923-927
Author(s):  
Akram K. Mohammed ◽  
Raad H. Irzooki ◽  
Asmaa A. Jamel ◽  
Wesam S. Mohammed-Ali ◽  
Suhad S. Abbas

The computation of critical depth and normal depth is essential for hydraulic engineers to understand the characteristics of varied flow in open channels. These depths are fundamental to flow analysis for irrigation, drainage, and sewer pipes. Several explicit solutions for calculating critical and normal depths in open channels of different shapes have been proposed over time. Despite their complexity, these explicit formulas carry a significant error percentage relative to the exact solution. Therefore, this research explicitly calculates the normal and critical depths in circular channels and derives simple, fast, and accurate equations. First, dimensional analysis was used to propose an analytical equation for the critical and normal depths of circular channels. Then, regression analysis was carried out on 2160 sets of discharge versus critical and normal depth data in a circular open channel. The results show that the equation proposed in this study reduces the error percentage found in previous studies, offering greater efficiency and precision than earlier solutions.
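The abstract does not reproduce the paper's explicit equations, but the exact solution they are benchmarked against can be computed numerically. A minimal sketch (not the authors' formula; all names illustrative): for a circular channel of diameter D carrying discharge Q, the critical depth satisfies Q²T = gA³, which can be solved by bisection on the flow depth.

```python
import math

def critical_depth_circular(Q, D, g=9.81, tol=1e-10):
    """Bisection for the critical depth y_c in a circular channel of
    diameter D carrying discharge Q, from the condition Q^2 * T = g * A^3.
    Assumes Q is small enough that a critical depth exists inside the pipe."""
    def residual(y):
        # Central angle subtended by the free surface (radians).
        theta = 2.0 * math.acos(1.0 - 2.0 * y / D)
        A = (D ** 2 / 8.0) * (theta - math.sin(theta))   # flow area
        T = D * math.sin(theta / 2.0)                    # top width
        return Q ** 2 * T - g * A ** 3

    lo, hi = 1e-9 * D, (1.0 - 1e-9) * D
    # residual > 0 for shallow depths, < 0 near the pipe crown.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

For Q = 1 m³/s in a 1 m pipe this converges to roughly 0.57 m; the exact iterate is what explicit formulas such as the one proposed here are judged against.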


2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Joon Soo Yoo ◽  
Ji Won Yoon

Homomorphic encryption (HE) is notable for enabling computation on encrypted data while guaranteeing high-level security based on the hardness of lattice problems. This advantage has facilitated research on performing data analysis in an encrypted state, achieving security and privacy for both clients and the cloud. However, much of the literature centers on building a network that only provides an encrypted prediction result, rather than constructing a system that can learn from encrypted data to provide more accurate answers for clients. Moreover, such research uses simple polynomial approximations to design the activation function, causing possibly significant errors in prediction results. Our approach is more fundamental: we present t-BMPNet, a neural network over a fully homomorphic encryption scheme that is built upon primitive gates and fundamental bitwise homomorphic operations. Our model can thus tackle the nonlinearity problem of approximating the activation function in a more sophisticated way. Moreover, we show that t-BMPNet can perform training—backpropagation and feedforward algorithms—in the encrypted domain, unlike prior work. Finally, we apply our approach to a small dataset to demonstrate the feasibility of our model.
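The primitive-gate approach can be illustrated in plaintext: over a gate-level FHE scheme, every arithmetic step is expressed as boolean gates applied bit by bit. The sketch below is a plaintext analogy (not the authors' encrypted implementation): an n-bit ripple-carry adder composed purely from AND/XOR/OR gates, the kind of building block such bitwise networks rest on.

```python
def AND(a, b): return a & b
def XOR(a, b): return a ^ b
def OR(a, b):  return a | b

def full_adder(a, b, cin):
    """One-bit full adder built only from primitive gates."""
    s = XOR(XOR(a, b), cin)
    cout = OR(AND(a, b), AND(cin, XOR(a, b)))
    return s, cout

def add_bits(x_bits, y_bits):
    """Ripple-carry addition of two equal-length little-endian bit lists."""
    out, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # final carry-out becomes the top bit
    return out

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))
```

In the encrypted setting each 0/1 here would be a ciphertext and each gate a homomorphic gate evaluation; the circuit structure is unchanged.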


2021 ◽  
Vol 2131 (4) ◽  
pp. 042097
Author(s):  
S Ivankov ◽  
S Zagulyaev ◽  
D Gukov

Abstract Data on the magnetizing current of power transformers are taken from the experience of idling. It is considered that it does not change under load. The experience of idling does not take into account the uneven saturation of the magnetic core when working under load. The hypothesis of a significant error caused by this assumption is put forward. The experiments carried out confirmed the hypothesis. The differences in the measurement of the magnetizing current at idle and under load in the experiments reached 28-32%. This determines the inaccuracy in the calculations of currents and losses in power transformers, which, taking into account the continuous operation of transformers and their large number, can be significant. It is proposed to add the experience of working at rated load when testing power transformers. This experience will not only allow us to clarify the val-ue of the magnetizing current under load and magnetic losses, but also to re-fine the design of the transformer in the direction of reducing the magnetizing current by eliminating uneven saturation of the magnetic circuit when working under load, due to the influence of magnetic scattering fields. This is possible by locally increasing the cross-section of the magnetic circuit in the busiest places of the magnetic circuit.


2021 ◽  
Vol 1 (1) ◽  
pp. 32-34
Author(s):  
Nefeli Panagiota Tzavara ◽  
Bjørn-Jostein Singstad

Colorectal cancer is one of the deadliest and most widespread types of cancer in the world. Colonoscopy is the procedure used to detect and diagnose polyps in the colon, but today's detection rates show a significant error rate that affects diagnosis and treatment. An automatic image segmentation algorithm may help doctors improve the detection rate of pathological polyps in the colon. Furthermore, segmenting endoscopic tools in images taken during colonoscopy may contribute towards robot-assisted surgery. In this study, we trained and validated both pre-trained and non-pre-trained segmentation models on two different data sets containing images of polyps and endoscopic tools. Finally, we applied the models to two separate test sets; the best polyp model achieved a Dice score of 0.857 and the best instrument model achieved a Dice score of 0.948. Moreover, we found that pre-training increased the models' performance in segmenting both polyps and endoscopic tools.
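The Dice scores quoted above measure overlap between a predicted segmentation mask and the ground truth. A minimal sketch of the metric (names illustrative):

```python
def dice_score(pred, truth, smooth=1e-8):
    """Dice coefficient for binary masks given as flat 0/1 lists:
    2 * |A ∩ B| / (|A| + |B|). `smooth` avoids 0/0 on empty masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return (2.0 * inter + smooth) / (total + smooth)
```

A perfect prediction scores 1.0; a mask with no overlap scores near 0, so the reported 0.857 and 0.948 indicate substantial overlap with the annotations.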


2021 ◽  
pp. 87-90
Author(s):  
Igor Lvovich Abramov

The lack of strength-calculation methods for operating elements that provide sufficient accuracy is a significant problem in modern agricultural machinery development. The calculation models in use do not take into account the microrelief of the treated surface, which leads to a significant error in determining both extreme and long-term loads on the working bodies of mechanisms. In this article, the author analyzes the influence of the microgeometry of the cultivated soil surface on the forces arising in the tillage tool. The existing design model is examined using the needle harrow as an example; its disadvantages are identified and a way to eliminate them is proposed. Experimental data on the size of microroughnesses of the soil surface profile are presented, and regularities of their random distribution are revealed; in particular, the assumption that this distribution is normal is confirmed. Based on the obtained data, dependences are proposed for a more accurate calculation of the loads acting on the tillage tool.


Author(s):  
Matthew Faiella ◽  
Corwin G. J. MacMillan ◽  
Jared Whitehead ◽  
Zhao Pan

This work investigates the propagation of error in a velocimetry-based pressure field reconstruction (V-Pressure) problem to determine and explain how the error profile of the data affects error propagation. The results discussed extend those found in Pan et al. (2016). We first show how to determine the upper bound of the error in the pressure field, and that this worst-case error profile in the data is unique and depends on the characteristics of the domain. We then show that error propagation in a V-Pressure problem is analogous to elastic deformation of, for example, an Euler-Bernoulli beam or a Kirchhoff-Love plate for one- and two-dimensional problems, respectively. Finally, we discuss the difference in error propagation near Dirichlet and Neumann boundary conditions, and explain the behavior using Green's function and the solid-mechanics analogy. The methods discussed in this paper will benefit the community in two ways: i) giving experimentalists intuitive and quantitative insights for designing tests that minimize error propagation in a V-Pressure problem, and ii) creating tests with significant error propagation for benchmarking V-Pressure solvers or algorithms. This paper is intended as a summary of recent research by the authors; the full work has been published separately (Faiella et al., 2021).
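The beam analogy for the one-dimensional case can be seen numerically: with homogeneous Dirichlet ends, the pressure error produced by a point error in the data field of the pressure Poisson equation is the discrete Green's function, a triangular "deflection" profile peaking at the error location, like a taut string loaded at a point. A minimal finite-difference sketch (illustrative, not the authors' code; grid size arbitrary):

```python
def solve_poisson_dirichlet(f, h):
    """Solve u'' = f on a uniform interior grid with u = 0 at both ends,
    using the Thomas algorithm for the tridiagonal second-difference system."""
    n = len(f)
    a = [1.0] * n           # sub-diagonal
    b = [-2.0] * n          # main diagonal
    c = [1.0] * n           # super-diagonal
    d = [h * h * fi for fi in f]
    for i in range(1, n):   # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n           # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# A point data error at the midpoint produces a tent-shaped pressure error,
# the discrete Green's function of the Laplacian with Dirichlet ends.
n, h = 99, 1.0 / 100
f = [0.0] * n
f[49] = 1.0 / h             # discrete delta at x = 0.5
err = solve_poisson_dirichlet(f, h)
```

The resulting profile is piecewise linear, symmetric about the error location, and largest in magnitude there (value -0.25 for a unit delta at the midpoint of a unit interval), mirroring a point-loaded string's deflection.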


Author(s):  
Shaowei Wang ◽  
Jin Li ◽  
Yuqiu Qian ◽  
Jiachun Du ◽  
Wenqing Lin ◽  
...  

Numerical vector aggregation has numerous applications in privacy-sensitive scenarios, such as distributed gradient estimation in federated learning and statistical analysis on key-value data. Within the framework of local differential privacy, this work gives tight minimax error bounds of O(d s/(n ε²)), where d is the dimension of the numerical vector and s is the number of non-zero entries. An attainable mechanism is then designed that improves on existing approaches, which suffer error rates of O(d²/(n ε²)) or O(d s²/(n ε²)). To break the error barrier of local privacy, this work further considers privacy amplification in the shuffle model with anonymous channels, and shows the mechanism satisfies centralized (sqrt(14 ln(2/δ) (s e^ε + 2s - 1)/(n - 1)), δ)-differential privacy, which is domain independent and thus scales to federated learning of large models. We experimentally validate the mechanism, compare it with existing approaches, and demonstrate its significant error reduction.
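The amplified central privacy parameter quoted in the abstract is a closed-form expression and can be evaluated directly. A sketch of that evaluation (variable names illustrative):

```python
import math

def shuffle_amplified_epsilon(eps_local, delta, s, n):
    """Central epsilon after shuffling n locally eps_local-DP reports of
    s-sparse vectors, per the bound quoted in the abstract:
    sqrt(14 * ln(2/delta) * (s * e^eps + 2s - 1) / (n - 1))."""
    return math.sqrt(14.0 * math.log(2.0 / delta)
                     * (s * math.exp(eps_local) + 2 * s - 1) / (n - 1))
```

For example, with one million reports (n = 10^6), s = 1, local ε = 1, and δ = 10^-8, the central ε is about 0.03, and it shrinks further as n grows, which is the amplification effect the abstract describes.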


2021 ◽  
Vol 3 (2) ◽  
pp. 114-126
Author(s):  
Jihan Della Safegi ◽  
Hapizah Hapizah ◽  
Cecil Hiltrimartin ◽  
Made Sukaryawan ◽  
Kodri Madang ◽  
...  

This study aimed to identify student errors and the factors that caused students to make mistakes in solving PISA-type mathematics problems. Error analysis was carried out based on Newman's analysis through tests and interviews. The research was conducted in a junior high school in the Bangka Belitung Islands Province, involving 26 students. Subjects were selected by purposive sampling based on three considerations: academic ability, teacher recommendations, and student willingness. The study used a descriptive qualitative method. The PISA mathematics test questions covered the content areas of change and relationships, space and shape, quantity, and uncertainty and data. The results showed reading errors of 40.21%, comprehension errors of 41.86%, transformation errors of 87.29%, process-skill errors of 90.26%, and answer-writing errors of 88.46%. Uncertainty and data was the content area with the largest error percentage, at 82.31%. In general, errors were caused by students' inability to relate PISA questions to the material they usually study and by their lack of familiarity with working on PISA questions.


Author(s):  
Khyrinaairinfariza Abu Samah, Et. al.

We present the real-world public sentiment expressed on Twitter using the proposed conceptual model (CM) to visualize the reputation of communication service providers (CSPs) during the Covid-19 pandemic in Malaysia from March 18 until August 18, 2020. The CM is a guideline covering public tweets that directly or indirectly mention the three biggest CSPs in Malaysia: Celcom, Maxis, and Digi. A text classifier model optimized for short snippets like tweets was developed to make bilingual sentiment analysis possible. The two languages explored are Bahasa Malaysia and English, the two most spoken languages in Malaysia. The classifier model was trained and tested on a large multidomain dataset pre-labeled with "0" and "1", representing "positive" and "negative", respectively. We used the Naïve Bayes (NB) technique as the core of the classifier model. Functionality testing was carried out to ensure no significant error would render the application useless, and the accuracy score of 89% is considered quite impressive. We visualized the results through word clouds and obtained Net Brand Reputation scores of -56%, -42%, and -43% for Celcom, Maxis, and Digi, respectively.
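The NB core of such a classifier can be sketched as a minimal bag-of-words multinomial Naïve Bayes with Laplace smoothing. This is an illustrative toy (not the authors' dataset, labels, or code; example tweets invented):

```python
import math
from collections import Counter

class NaiveBayesSentiment:
    """Multinomial Naive Bayes over whitespace tokens, add-one smoothing."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.priors = {c: math.log(labels.count(c) / len(labels))
                       for c in self.classes}
        self.word_counts = {c: Counter() for c in self.classes}
        for doc, c in zip(docs, labels):
            self.word_counts[c].update(doc.lower().split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        self.totals = {c: sum(self.word_counts[c].values())
                       for c in self.classes}
        return self

    def predict(self, doc):
        V = len(self.vocab)
        best, best_lp = None, float("-inf")
        for c in self.classes:
            lp = self.priors[c]
            for w in doc.lower().split():
                # Laplace-smoothed log-likelihood of each token.
                lp += math.log((self.word_counts[c][w] + 1)
                               / (self.totals[c] + V))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

Following the abstract's label convention, class 0 stands for positive and class 1 for negative; a production system would add tokenization and preprocessing suited to bilingual tweets.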


2021 ◽  
Vol 14 (3) ◽  
pp. 2441-2450
Author(s):  
Hengnan Guo ◽  
Zefeng Zhang ◽  
Lin Jiang ◽  
Junlin An ◽  
Bin Zhu ◽  
...  

Abstract. Visibility is an indicator of atmospheric transparency, and it is widely used in many research fields, including air pollution, climate change, ground transportation, and aviation. Although efforts have been made to improve the performance of visibility meters, a significant error remains in measured visibility data. This study conducts a well-designed simulation calibration of visibility meters, which shows that current methods of visibility measurement rest on a false assumption, leading to the long-term neglect of an important source of visibility error caused by erroneous values of the Ångström exponent. This error has two characteristics. The first is (1) independence: the magnitude of the error is independent of the performance of the visibility meter, so it cannot be reduced by improving the meter. The second is (2) uncertainty: the magnitude of the error does not follow a clear pattern but can be substantially larger than the measurement error of visibility meters, so neither its magnitude nor its influence on visibility measurements can be accurately estimated. Our simulations indicate that, since errors caused by erroneous values of the Ångström exponent are inevitable under current methods of visibility measurement, reliable visibility data cannot be obtained without major adjustments to those methods.
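The role of the Ångström exponent can be made concrete: a meter measures extinction at its working wavelength and converts to visibility at the 550 nm reference via Koschmieder's relation, scaling extinction with the Ångström law σ(λ) ∝ λ^(-α). A sketch of the relative visibility error when an assumed exponent replaces the true one (illustrative, not the paper's simulation; the 875 nm working wavelength and 3.912 contrast constant are common conventions, assumed here):

```python
def visibility_550(sigma_meas, wavelength_nm, alpha, contrast_const=3.912):
    """Koschmieder visibility at 550 nm from extinction sigma_meas (1/km)
    measured at `wavelength_nm`, scaled with Angstrom exponent `alpha`."""
    sigma_550 = sigma_meas * (550.0 / wavelength_nm) ** (-alpha)
    return contrast_const / sigma_550

def relative_error(wavelength_nm, alpha_true, alpha_assumed):
    """Relative visibility error from using the wrong Angstrom exponent.
    The ratio cancels sigma_meas, so the error is independent of the
    extinction level itself."""
    v_true = visibility_550(1.0, wavelength_nm, alpha_true)
    v_assumed = visibility_550(1.0, wavelength_nm, alpha_assumed)
    return (v_assumed - v_true) / v_true
```

For a meter working at 875 nm, assuming α = 0 when the true exponent is 1 inflates reported visibility by about 59%, which illustrates how an erroneous exponent can dwarf the instrument's own measurement error.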

