Localization-compensation algorithm based on the Mean Shift and the Kalman filter

2015 ◽  
Vol 29 (06n07) ◽  
pp. 1540020
Author(s):  
Dong Myung Lee ◽  
Tae Wan Kim ◽  
Yun-Hae Kim

In this paper, we propose a localization simulator based on the random walk/waypoint mobility model, together with a hybrid location-compensation algorithm using the Mean Shift/Kalman filter (MSKF), to enhance the precision of the estimated location of mobile modules. Analysis of our experimental results shows that the proposed MSKF algorithm compensates for the error rates, namely the average error rate per estimated distance moved by the mobile node (Err_Rate_DV) and the error rate per estimated trace value of the mobile node (Err_Rate_TV), better than the Mean Shift or the Kalman filter alone, by up to 29% in a random mobility environment across the three scenarios.
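A hybrid scheme like the MSKF feeds noisy position estimates through a filtering stage. As a minimal illustration (a plain scalar Kalman filter, not the authors' MSKF; the noise variances are illustrative assumptions):

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter that smooths noisy position estimates.

    q: process-noise variance, r: measurement-noise variance
    (illustrative values, not taken from the paper).
    """
    x, p = measurements[0], 1.0   # initial state and covariance
    out = []
    for z in measurements:
        p += q                    # predict step (static motion model)
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the innovation z - x
        p *= (1 - k)
        out.append(x)
    return np.array(out)
```

On a stationary node with Gaussian measurement noise, the filtered trace has markedly lower variance than the raw measurements.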

1976 ◽  
Vol 42 (2) ◽  
pp. 487-490 ◽  
Author(s):  
Victor M. Catano

Two groups of helicopter technicians filled out data forms after completing maintenance and repairs as a routine procedure. When the experimental group was informed of the changes that had been made to the system as a result of the data collected from the forms, their mean error rate in completing the forms fell significantly, from 58.2% to 46.5%. The experimental group had a significantly lower average error rate (24.6%) than the control group (50.6%). The control group's performance did not differ from pre-experimental levels.


2016 ◽  
Vol 55 (04) ◽  
pp. 373-380 ◽  
Author(s):  
Matthias Ganzinger ◽  
Karsten Senghas ◽  
Stefan Riezler ◽  
Petra Knaup ◽  
Martin Löpprich ◽  
...  

Summary
Objectives: In the multiple myeloma clinical registry at Heidelberg University Hospital, most data are extracted from discharge letters. Our aim was to analyze whether the manual documentation process can be made more efficient by using natural language processing (NLP) methods for multiclass classification of free-text diagnostic reports, in order to automatically document the diagnosis and state of disease of myeloma patients. The first objective was to create a corpus of free-text diagnosis paragraphs of patients with multiple myeloma from German diagnostic reports, manually annotated with the relevant data elements by documentation specialists. The second objective was to construct and evaluate a framework using different NLP methods to enable automatic multiclass classification of relevant data elements from free-text diagnostic reports.
Methods: The main diagnosis paragraph was extracted from the clinical reports of one third of the patients in the multiple myeloma research database of Heidelberg University Hospital, selected at random (737 patients in total). An electronic data capture (EDC) system was set up, and two data entry specialists independently performed manual documentation of at least nine data elements specific to multiple myeloma characterization. Both data entries were compared and adjudicated by a third specialist, yielding an annotated text corpus. A framework was constructed, consisting of a self-developed package to split multiple-diagnosis sequences into subsequences, four different preprocessing steps to normalize the input data, and two classifiers: a maximum entropy classifier (MEC) and a support vector machine (SVM). In total, 15 different pipelines were examined and assessed by ten-fold cross-validation, repeated 100 times. The average error rate and the average F1-score served as quality indicators.
For significance testing, the approximate randomization test was used.
Results: The annotated corpus consists of 737 diagnosis paragraphs with a total of 865 coded diagnoses. The dataset is publicly available in the supplementary online files for training and testing of further NLP methods. Both classifiers showed low average error rates (MEC: 1.05; SVM: 0.84) and high F1-scores (MEC: 0.89; SVM: 0.92). However, the results varied widely depending on the classified data element. Preprocessing methods amplified this effect and had a significant impact on the classification, both positive and negative. The automatic diagnosis splitter increased the average error rate significantly, even though the F1-score decreased only slightly.
Conclusions: The low average error rates and high average F1-scores of each pipeline demonstrate the suitability of the investigated NLP methods. However, it was also shown that there is no single best practice for the automatic classification of data elements from free-text diagnostic reports.
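The two quality indicators used in such evaluations can be computed directly from labels and predictions; a minimal plain-Python sketch (not the paper's framework, and the example labels are made up):

```python
def error_rate(y_true, y_pred):
    """Fraction of misclassified items: the 'average error rate' is
    this quantity averaged over cross-validation repetitions."""
    wrong = sum(t != p for t, p in zip(y_true, y_pred))
    return wrong / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores over all observed classes."""
    labels = set(y_true) | set(y_pred)
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

For one misclassification in five items, `error_rate` returns 0.2 regardless of which classes are involved, while `macro_f1` penalizes errors in rare classes more heavily.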


Quantum ◽  
2018 ◽  
Vol 2 ◽  
pp. 85 ◽  
Author(s):  
A. K. Hashagen ◽  
S. T. Flammia ◽  
D. Gross ◽  
J. J. Wallman

Randomized benchmarking provides a tool for obtaining precise quantitative estimates of the average error rate of a physical quantum channel. Here we define real randomized benchmarking, which enables a separate determination of the average error rate in the real and complex parts of the channel. This provides more fine-grained information about average error rates with approximately the same cost as the standard protocol. The protocol requires only averaging over the real Clifford group, a subgroup of the full complex Clifford group, and makes use of the fact that it forms an orthogonal 2-design. It therefore allows benchmarking of fault-tolerant gates for an encoding which does not contain the full Clifford group transversally. Furthermore, our results are especially useful when considering quantum computations on rebits (or real encodings of complex computations), in which case the real Clifford group now plays the role of the complex Clifford group when studying stabilizer circuits.
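In standard (complex) randomized benchmarking, the average error rate is extracted by fitting the survival probability to an exponential decay F(m) = A p^m + B in the sequence length m. A sketch of that extraction on synthetic noiseless single-qubit data (A = B = 0.5 and p = 0.98 are illustrative values; the real-RB protocol of this paper splits the estimate into real and complex parts, which is not reproduced here):

```python
import numpy as np

def rb_error_rate(lengths, survival, b=0.5):
    """Estimate the depolarizing parameter p from survival probabilities
    F(m) ~ A * p**m + b via a log-linear fit, then convert to the
    average error rate r = (1 - p) * (d - 1) / d for a qubit (d = 2).
    b is the known asymptote (1/d for the maximally mixed state)."""
    y = np.log(np.asarray(survival) - b)   # log(A) + m * log(p)
    slope, _ = np.polyfit(lengths, y, 1)
    p = np.exp(slope)
    return (1 - p) * (2 - 1) / 2

# Synthetic noiseless decay with p = 0.98, i.e. r = 0.01
m = np.arange(1, 50)
F = 0.5 * 0.98 ** m + 0.5
```

With noiseless input the fit recovers r = (1 - 0.98)/2 = 0.01 essentially exactly; on real data the same fit is applied to averaged sequence fidelities.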


2014 ◽  
Vol 556-562 ◽  
pp. 5017-5020
Author(s):  
Ting Ting Wang

Three-dimensional stereo vision can overcome drawbacks caused by lighting, posture, and occlusion. A novel image processing method based on three-dimensional stereoscopic vision is proposed, which optimizes the binocular camera model by adding constraints to the traditional model, thereby ensuring the accuracy of subsequent location and recognition. To verify the validity of the proposed method, marking experiments were first conducted for fruit location, yielding an average error rate of 0.65%; centroid feature experiments were then carried out, with errors ranging from 5.77 mm to 68.15 mm, reference error rates from 1.44% to 5.68%, and an average error rate of 3.76% as the distance varied from 300 mm to 1200 mm. These experimental data demonstrate that the proposed method meets the requirements of three-dimensional image processing.
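The triangulation underlying any binocular model, and the reference error rate reported above, can be sketched as follows. The pinhole relation Z = f·B/d is standard; the constraint terms the paper adds to it are not reproduced, and the numbers in the comments are illustrative:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Basic pinhole binocular triangulation: depth Z = f * B / d,
    where f is the focal length in pixels, B the camera baseline in mm,
    and d the disparity in pixels between the two views."""
    return focal_px * baseline_mm / disparity_px

def reference_error_rate(estimated_mm, true_mm):
    """Relative localization error |estimate - truth| / truth, the
    'reference error rate' style of metric quoted in the abstract."""
    return abs(estimated_mm - true_mm) / true_mm

# Illustrative numbers: f = 700 px, B = 60 mm, d = 35 px -> Z = 1200 mm
z = depth_from_disparity(700, 60, 35)
```

A 12 mm error at 1200 mm range corresponds to a 1% reference error rate, comparable in order of magnitude to the 1.44-5.68% range reported.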


2013 ◽  
Vol 378 ◽  
pp. 478-482
Author(s):  
Yoshihiro Mitani ◽  
Toshitaka Oki

The microbubble has been widely used and shown to be effective in various fields. It is therefore important to measure its size accurately using image processing techniques. In this paper, we propose a method for detecting microbubbles based on the Hough transform. Experimental results show an average error rate of only 4.49% for undetected and incorrectly detected microbubbles. This low error rate demonstrates the effectiveness of the proposed method.
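The circular Hough transform behind such a detector can be sketched in a few lines: each edge pixel votes for every candidate centre at the bubble radius, and accumulator peaks mark detected bubbles. This is a minimal fixed-radius version on synthetic data, not the paper's multi-radius detector:

```python
import numpy as np

def hough_circle(edge_points, radius, shape):
    """Fixed-radius circular Hough transform. Each edge point (y, x)
    votes for all centres at distance `radius`; the accumulator peak
    is returned as the detected circle centre."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), shape)

# Synthetic bubble: edge points on a circle of radius 10 centred at (30, 40)
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(30 + 10 * np.sin(a), 40 + 10 * np.cos(a)) for a in t]
center = hough_circle(pts, 10, (64, 64))
```

Undetected bubbles correspond to accumulator peaks below threshold, and spurious peaks to incorrect detections; the 4.49% figure above aggregates both.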


2021 ◽  
Author(s):  
Abdulqader Mahmoud ◽  
Frederic Vanderveken ◽  
Florin Ciubotaru ◽  
Christoph Adelmann ◽  
Said Hamdioui ◽  
...  

In this paper, we propose an energy-efficient SW-based approximate 4:2 compressor comprising a 3-input and a 5-input Majority gate. We validate our proposal by means of micromagnetic simulations, and assess and compare its performance with state-of-the-art SW, 45 nm CMOS, and Spin-CMOS counterparts. The evaluation results indicate that the proposed compressor consumes 31.5% less energy than its accurate SW design version. Furthermore, it has the same energy consumption and error rate as the approximate compressor with a Directional Coupler (DC), but exhibits 3x lower delay. In addition, it consumes 14% less energy while having a 17% lower average error rate than the approximate 45 nm CMOS counterpart. Compared with the other emerging technologies, the proposed compressor outperforms the approximate Spin-CMOS based compressor by 3 orders of magnitude in terms of energy consumption while providing the same error rate. Finally, the proposed compressor requires the smallest chip real estate, measured in number of devices.
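The Majority-gate primitives and the way average error rates of approximate arithmetic are obtained (exhaustive enumeration of all input patterns against the exact function) can be sketched in software. The MAJ3/MAJ5 mapping below is a hypothetical illustration, not the paper's spin-wave design:

```python
from itertools import product

def maj(*bits):
    """n-input Majority gate (n odd): outputs 1 iff more than half the
    inputs are 1. MAJ3 and MAJ5 are the compressor's primitives."""
    return int(sum(bits) > len(bits) // 2)

def avg_error_rate(approx, exact, n_inputs):
    """Exhaustively enumerate all 2**n input patterns and report the
    fraction where the approximate output differs from the exact one."""
    errs = sum(approx(*x) != exact(*x)
               for x in product((0, 1), repeat=n_inputs))
    return errs / 2 ** n_inputs

# Hypothetical MAJ3/MAJ5 mapping of an approximate 4:2 compressor
# (illustration only): output value = sum_bit + 2 * carry.
def approx_value(x1, x2, x3, x4, cin):
    carry = maj(x1, x2, x3, x4, cin)   # 5-input Majority gate
    s = maj(x1, x2, x3)                # 3-input Majority gate as sum guess
    return s + 2 * carry

def exact_value(x1, x2, x3, x4, cin):
    return x1 + x2 + x3 + x4 + cin
```

Reported error-rate figures for approximate compressors come from exactly this kind of exhaustive comparison, possibly weighted by input probabilities.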


2021 ◽  
Author(s):  
Abdulqader Mahmoud ◽  
Frederic Vanderveken ◽  
Florin Ciubotaru ◽  
Christoph Adelmann ◽  
Said Hamdioui ◽  
...  

By their very nature, Spin Waves (SWs) enable the realization of energy-efficient circuits, as they propagate and interfere within waveguides without consuming noticeable energy. SW computing can be made even more energy efficient by taking advantage of the approximate computing paradigm, since many applications, such as multimedia and social media, are error-tolerant. In this paper, we propose a novel ultra-low-energy Approximate Full Adder (AFA) and a 2-bit input Multiplier (AMUL). The approximate FA consists of one Majority gate, while the approximate MUL is built from 3 AND gates. We validate the correct functionality of our proposal by means of micromagnetic simulations and evaluate the approximate FA figures of merit against state-of-the-art accurate SW, 7 nm CMOS, Spin Hall Effect (SHE), Domain Wall Motion (DWM), accurate and approximate 45 nm CMOS, Magnetic Tunnel Junction (MTJ), and Spin-CMOS FA implementations. Our results indicate that AFA consumes 43% and 33% less energy than the state-of-the-art accurate SW and 7 nm CMOS FAs, respectively, saves 69% and 44% compared with the accurate and approximate 45 nm CMOS FAs, respectively, and provides a 2-orders-of-magnitude energy reduction compared with the accurate SHE, accurate and approximate DWM, MTJ, and Spin-CMOS counterparts. In addition, it achieves the same error rate as the approximate 45 nm CMOS and Spin-CMOS FAs, whereas it exhibits a 50% lower error rate than the approximate DWM FA. Furthermore, it outperforms its contenders in terms of area, saving at least 29% of chip real estate. AMUL is evaluated and compared with state-of-the-art accurate SW designs and accurate and approximate 16 nm CMOS designs.
The evaluation results indicate that it saves at least 2x and 5x energy in comparison with the state-of-the-art SW designs and the accurate and approximate 16 nm CMOS designs, respectively, has an average error rate of 10% (versus 12.5% for the approximate CMOS MUL), and requires at least 64% less chip real estate.
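A one-Majority-gate approximate full adder can be modelled logically as follows: the carry is the exact MAJ3 of the inputs, and the sum is approximated by the inverted carry, a common majority-logic approximation (the paper's exact SW mapping may differ). Exhaustive enumeration then gives the sum-bit error rate:

```python
from itertools import product

def maj3(a, b, c):
    """3-input Majority gate, the single logic primitive of the AFA."""
    return int(a + b + c >= 2)

def approx_full_adder(a, b, cin):
    """Approximate FA: exact MAJ3 carry, sum approximated as the
    inverted carry (assumed mapping, for illustration)."""
    carry = maj3(a, b, cin)
    return 1 - carry, carry        # (approximate sum, exact carry)

def exact_full_adder(a, b, cin):
    return a ^ b ^ cin, maj3(a, b, cin)

# Exhaustive check over all 8 input patterns: under this mapping the
# sum bit is wrong only when a = b = cin, and the carry is always exact.
wrong = sum(approx_full_adder(*x) != exact_full_adder(*x)
            for x in product((0, 1), repeat=3))
err_rate = wrong / 8
```

Under this assumed mapping the error rate is 2/8 = 25% on the sum bit and 0% on the carry, which is why majority-based approximate adders keep multi-bit carry chains exact.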


2010 ◽  
pp. 1741-1752
Author(s):  
A. Chandra ◽  
C. Bose

Simple closed-form solutions for the average error rate of several coherent modulation schemes, including square M-QAM, DBPSK, and QPSK, operating over a slow, flat Rician fading channel are derived. Starting from a novel unified expression for the conditional error probability, the error rates are analysed using a PDF-based approach. The derived end expressions, composed of infinite series summations of the Gauss hypergeometric function, are accurate, free from numerical integration, and general enough to encompass, as special cases, non-diversity reception and Rayleigh fading. Error probabilities are displayed graphically for the modulation schemes for different values of the Rician parameter K. In addition, to examine the dependence of the M-QAM error rate performance on constellation size, numerical results are plotted for various values of M. The generality of the analytical results offers valuable insight into performance evaluation over a fading channel in a unified manner.
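Closed forms like these are commonly cross-checked numerically by averaging the conditional (AWGN) error probability over the fading distribution. A Monte Carlo sketch for BPSK over a Rician channel (a numerical check only; the paper's derivation is the PDF-based hypergeometric series, and BPSK stands in for the schemes treated there):

```python
import math
import random

def avg_ber_bpsk_rician(K, snr_db, trials=100_000, seed=1):
    """Monte Carlo average BPSK error rate over a flat Rician channel
    with Rician factor K (linear) and mean SNR snr_db. The channel gain
    is h = s + n with a fixed specular part s and complex Gaussian
    scatter, normalized so that E[|h|^2] = 1."""
    rng = random.Random(seed)
    g = 10 ** (snr_db / 10)                 # mean SNR (linear)
    s = math.sqrt(K / (K + 1))              # LOS (specular) amplitude
    sigma = math.sqrt(1 / (2 * (K + 1)))    # per-dimension scatter std
    total = 0.0
    for _ in range(trials):
        re = s + rng.gauss(0, sigma)
        im = rng.gauss(0, sigma)
        snr = g * (re * re + im * im)       # instantaneous SNR
        total += 0.5 * math.erfc(math.sqrt(snr))  # BPSK BER in AWGN
    return total / trials
```

For K = 0 the channel reduces to Rayleigh fading, where the exact average BER is 0.5 * (1 - sqrt(g / (1 + g))), giving a convenient sanity check; increasing K drives the result toward the AWGN error rate.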


2020 ◽  
pp. 014459872097336
Author(s):  
Fan Cui ◽  
Jianyu Ni ◽  
Yunfei Du ◽  
Yuxuan Zhao ◽  
Yingqing Zhou

Determining the quantitative relationship between the soil dielectric constant and water content is an important basis for measuring soil water content with ground-penetrating radar (GPR). The calculation of soil volumetric water content from GPR data is usually based on the classic Topp formula. However, there are large errors between measured and calculated values when using this formula, and it cannot be flexibly applied to different media. To solve these problems, first, a combination of GPR and shallow drilling is used to calibrate the wave velocity and obtain an accurate dielectric constant. Then, combined with the experimentally measured moisture content, an intelligent group algorithm is applied to build accurate mathematical models of the relative dielectric constant and volumetric water content, and the Topp formula is revised for sand and clay media. Compared with the classic Topp formula, the average error rate for sand is decreased by nearly 15.8% and that for clay by 31.75%, greatly improving the calculation accuracy of the formula. This proves that the revised model is accurate and that the GPR wave-velocity calibration method can be used to accurately calculate the volumetric water content.
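The baseline being recalibrated here is the classic Topp (1980) polynomial, together with the velocity-to-permittivity step that the borehole calibration pins down. A sketch of both (the revised, medium-specific coefficients fitted in the paper are not reproduced):

```python
def topp_vwc(eps_r):
    """Classic Topp (1980) empirical fit: volumetric water content
    (m^3/m^3) from the relative dielectric permittivity eps_r."""
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r ** 2 + 4.3e-6 * eps_r ** 3)

def eps_from_velocity(v_m_per_ns, c=0.2998):
    """Relative permittivity from the GPR wave velocity (m/ns),
    the quantity the drilling calibration provides: eps_r = (c/v)^2."""
    return (c / v_m_per_ns) ** 2
```

For example, a calibrated permittivity of 25 maps to a volumetric water content of about 0.40 m^3/m^3 under the classic fit; the paper's revision replaces the fixed coefficients with ones fitted per medium.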

