decision error
Recently Published Documents

TOTAL DOCUMENTS: 74 (last five years: 14)
H-INDEX: 13 (last five years: 1)

2021, pp. 1-13
Author(s): Tao Yin, Xiaojuan Mao, Xingtan Wu, Hengrong Ju, Weiping Ding, ...

Neighborhood classifiers are widely used for classification in pattern recognition and data mining. A neighborhood classifier relies mainly on a majority voting strategy to judge each category; this strategy considers only the number of samples in the neighborhood and ignores their distribution, which lowers classification accuracy. To overcome this shortcoming and improve classification performance, D-S evidence theory is applied to represent the evidential support contributed by the other samples in the neighborhood, and the distances between samples in the neighborhood are taken into account. In this paper, a novel attribute reduction method for neighborhood rough sets with a dynamic updating strategy is developed. Unlike the traditional heuristic algorithm, the termination threshold of the proposed reduction algorithm is dynamically optimized, so when the attribute significance is not monotonic, this method can retrieve a better value than the traditional method. Moreover, a new classification approach based on D-S evidence theory is proposed. Compared with the classical neighborhood classifier, this method considers the distribution of samples in the neighborhood and uses evidence theory to describe the closeness between samples. Finally, experiments on datasets from the UCI repository show that the improved reduction achieves a lower neighborhood decision error rate than classical heuristic reduction, and the improved classifier achieves higher classification performance than the traditional neighborhood classifier. This research provides a new direction for improving the accuracy of neighborhood classification.
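
As a rough illustration of the idea, the sketch below contrasts plain majority voting with a distance-weighted vote in which closer neighbors contribute more support. The function name, the radius parameter delta, and the inverse-distance weighting are illustrative assumptions; the paper's actual D-S evidence combination is more elaborate.

```python
import numpy as np

def neighborhood_classify(X_train, y_train, x, delta=0.5):
    """Classify x from the samples inside its delta-neighborhood.

    Plain majority voting would simply count neighbors per class; the
    distance-weighted variant below (a simplified stand-in for the
    evidence-based closeness described above) lets nearer neighbors
    contribute more support.
    """
    d = np.linalg.norm(X_train - x, axis=1)     # distances to all training samples
    mask = d <= delta                           # neighborhood membership
    if not mask.any():                          # empty neighborhood: fall back to 1-NN
        return y_train[np.argmin(d)]
    labels, dists = y_train[mask], d[mask]
    support = {}
    for lbl, dist in zip(labels, dists):
        support[lbl] = support.get(lbl, 0.0) + 1.0 / (1.0 + dist)  # closeness-weighted vote
    return max(support, key=support.get)
```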


2021, Vol. Publish Ahead of Print
Author(s): Steven Long, Geb W. Thomas, Matthew D. Karam, J. Lawrence Marsh, Donald D. Anderson

2020, Vol. 63 (9), pp. 680-685
Author(s): R. Z. Khayrullin, A. S. Kornev, A. S. Kostoglotov, S. V. Lazarenko

Sensors, 2020, Vol. 20 (21), pp. 6397
Author(s): Binqi Wu, Jin Lu, Mingyi Gao, Hongliang Ren, Zichun Le, ...

A blind discrete-cosine-transform-based phase noise compensation (BD-PNC) method is proposed to compensate the inter-carrier interference (ICI) in a coherent optical offset-quadrature-amplitude-modulation-based filter-bank multicarrier (CO-FBMC/OQAM) transmission system. Since the phase noise samples can be approximated by a discrete cosine transform (DCT) expansion in the time domain, a time-domain compensation model is built for the transmission system. According to this model, phase noise compensation (PNC) depends only on the DCT coefficients. Common phase error (CPE) compensation is first performed on the received signal. After that, a pre-decision is made on the subset of compensated signals with low decision error probability, and the pre-decision results are used as estimates of the transmitted signals to calculate the DCT coefficients. This partial pre-decision process reduces not only the decision error but also the complexity of the BD-PNC method, while keeping almost the same performance as pre-deciding all compensated signals. Numerical simulations are performed to evaluate the proposed scheme for a 30 GBaud CO-FBMC/OQAM system. The results show that its bit error rate (BER) performance is improved by more than one order of magnitude through mitigation of the ICI, in comparison with the traditional blind PNC scheme that only compensates the CPE.
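
A minimal sketch of the underlying idea, assuming NumPy/SciPy and illustrative names (truncated_dct_phase, n_coeff): a slowly varying phase-noise trajectory is well approximated by its first few DCT coefficients, with the zeroth coefficient playing the role of the common phase error. The full BD-PNC scheme additionally estimates those coefficients blindly from pre-decided symbols, which is not shown here.

```python
import numpy as np
from scipy.fft import dct, idct

def truncated_dct_phase(phase, n_coeff=4):
    """Approximate a slowly varying phase-noise trajectory by its first few
    DCT coefficients, as in the time-domain model described above. The
    zeroth coefficient corresponds to the common phase error (CPE); the
    higher coefficients capture the residual ICI-inducing variation.
    """
    c = dct(phase, norm='ortho')   # full DCT of the phase samples
    c[n_coeff:] = 0.0              # keep only the leading coefficients
    return idct(c, norm='ortho')   # reconstructed (smoothed) phase estimate

# toy check: a Wiener phase-noise walk is captured well by a few coefficients
rng = np.random.default_rng(0)
phase = np.cumsum(0.01 * rng.standard_normal(256))
approx = truncated_dct_phase(phase, n_coeff=8)
print(np.max(np.abs(phase - approx)))
```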


Sensors, 2020, Vol. 20 (13), pp. 3614
Author(s): Alexandru Martian, Mahmood Jalal Ahmad Al Sammarraie, Călin Vlădeanu, Dimitrie C. Popescu

Implementation of dynamic spectrum access (DSA) in cognitive radio (CR) systems requires the unlicensed secondary users (SUs) to perform spectrum sensing in order to monitor the activity of the licensed primary users (PUs). Energy detection (ED) is one of the most widely used methods for spectrum sensing in CR systems, and in this paper we present a novel ED algorithm with an adaptive sensing threshold. The three-event ED (3EED) algorithm for spectrum sensing is considered, for which an accurate approximation of the optimal decision threshold that minimizes the decision error probability (DEP) is found using Newton's method with forced convergence in one iteration. The proposed algorithm is analyzed and illustrated with numerical results obtained from simulations that closely match the theoretical results and show that it outperforms the conventional ED (CED) algorithm for spectrum sensing.
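
The "one-iteration Newton" idea can be sketched as follows for a generic binary energy detector under a Gaussian approximation. The function name, the illustrative numbers, and the two-hypothesis simplification are assumptions for this sketch; the paper's 3EED expressions differ.

```python
from math import erf, sqrt, exp, pi

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * (1.0 - erf(x / sqrt(2.0)))

def newton_threshold(mu0, s0, mu1, s1, p0=0.5, lam0=None):
    """One Newton step toward the threshold minimising the decision error
    probability p0*Pfa + (1-p0)*Pmd for a binary energy detector whose test
    statistic is approximately Gaussian under each hypothesis.
    """
    p1 = 1.0 - p0
    lam = 0.5 * (mu0 + mu1) if lam0 is None else lam0   # start between the two means
    def pdf(x, m, s):
        return exp(-0.5 * ((x - m) / s) ** 2) / (s * sqrt(2.0 * pi))
    # first and second derivatives of the error probability at lam
    g  = -p0 * pdf(lam, mu0, s0) + p1 * pdf(lam, mu1, s1)
    gp = ( p0 * (lam - mu0) / s0**2 * pdf(lam, mu0, s0)
         - p1 * (lam - mu1) / s1**2 * pdf(lam, mu1, s1) )
    return lam - g / gp                                  # single forced Newton iteration

# example usage with illustrative statistics of the decision variable
lam = newton_threshold(mu0=1.0, s0=0.2, mu1=1.8, s1=0.3)
dep = 0.5 * Q((lam - 1.0) / 0.2) + 0.5 * (1.0 - Q((lam - 1.8) / 0.3))
print(lam, dep)
```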


2020, pp. 1-10
Author(s): Gordon A. Fenton, Craig B. Lake, Rukhsana Liza

This paper presents statistical analyses of hydraulic conductivity data collected from an existing cement-based solidification/stabilization (S/S) system. The goal is to characterize the spatial variability of hydraulic conductivity and to examine sampling recommendations for the quality control (QC) program of that system so that target decision error probabilities, regarding acceptance or rejection of the system with respect to hydraulic conductivity, are achieved. Over 2000 QC hydraulic conductivity samples, taken over an area of 300 000 m², are used as the basis for these analyses. The spatial variability of hydraulic conductivity is described by a marginal lognormal distribution with a correlation function parameterized by directional correlation lengths, which are estimated by best-fitting an exponentially decaying correlation model to the sample correlation functions. The spatial variability of hydraulic conductivity in the studied S/S system is then used to assess sampling requirements for the QC program of that system. Considering the “worst case” correlation length and the hydraulic conductivity mean and variance, hypothesis test error probabilities are used to provide recommendations for conservative sampling requirements. It is believed that the analysis of this large construction project represents a unique opportunity to review the current practice of S/S field sampling requirements.
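
A hedged sketch of the correlation-length estimation step, assuming equally spaced log-conductivity samples along one line and a Markov-type model rho(tau) = exp(-2*tau/theta). The parameterisation convention, the helper name fit_correlation_length, and the lag cutoff are assumptions for illustration, not taken from the paper (which fits directional correlation lengths over a 2D area).

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_correlation_length(lnK, dx):
    """Fit an exponentially decaying correlation model to the sample
    correlation function of log-hydraulic-conductivity data sampled at a
    uniform spacing dx along one line. Returns the estimated correlation
    length theta of rho(tau) = exp(-2*tau/theta).
    """
    z = (lnK - lnK.mean()) / lnK.std()
    n = len(z)
    lags = np.arange(1, n // 4)                              # restrict to short lags
    rho = np.array([np.mean(z[:-k] * z[k:]) for k in lags])  # sample correlations
    tau = lags * dx
    model = lambda t, theta: np.exp(-2.0 * t / theta)
    popt, _ = curve_fit(model, tau, rho, p0=[tau.mean()])
    return float(popt[0])
```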


2020, Vol. 12 (5), pp. 860
Author(s): Vinicius Francisco Rofatto, Marcelo Tomio Matsuoka, Ivandro Klein, Maurício Roberto Veronez, Luiz Gonzaga da Silveira

An iterative outlier elimination procedure based on hypothesis testing, commonly known among geodesists as Iterative Data Snooping (IDS), is often used for the quality control of modern measurement systems in geodesy and surveying. The test statistic associated with IDS is the extreme normalised least-squares residual. It is well known in the literature that critical values (quantiles) of this test statistic cannot be derived from standard test distributions but must be computed numerically by Monte Carlo simulation. This paper provides the first results on Monte Carlo-based critical values under different scenarios of correlation between the outlier test statistics. From the Monte Carlo evaluation, we compute the probabilities of correct identification, missed detection, wrong exclusion, over-identification and statistical overlap associated with IDS in the presence of a single outlier. On the basis of these probability levels, we obtain the Minimal Detectable Bias (MDB) and Minimal Identifiable Bias (MIB) for cases in which IDS is in play. The MDB and MIB are sensitivity indicators for outlier detection and identification, respectively. The results show that there are circumstances in which the larger the Type I decision error (i.e., the smaller the critical value), the higher the rate of outlier detection but the lower the rate of outlier identification; in such cases, the larger the Type I error, the larger the ratio between the MIB and the MDB. We also highlight that an outlier becomes identifiable when the contributions of the measurements to the wrong exclusion rate decline simultaneously. In this case, we verify that the effect of the correlation between outlier statistics on the wrong exclusion rate becomes insignificant beyond a certain outlier magnitude, which increases the probability of identification.
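
The Monte Carlo computation of the critical value can be illustrated as follows, assuming the correlation matrix R of the w-test statistics under the null hypothesis is available. The function name and the plain quantile-based procedure are a bare-bones sketch of the idea, not the authors' exact implementation.

```python
import numpy as np

def monte_carlo_critical_value(R, alpha=0.001, n_sim=200_000, seed=0):
    """Monte Carlo critical value for the extreme normalised residual
    (max |w_i|) used in data snooping. R is the correlation matrix of the
    w-test statistics under the null hypothesis; standard tables do not
    apply because the maximum of correlated statistics has no simple
    closed-form distribution.
    """
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(R)                      # correlate standard normal draws
    w = rng.standard_normal((n_sim, R.shape[0])) @ L.T
    max_abs_w = np.abs(w).max(axis=1)              # extreme statistic per simulated experiment
    return np.quantile(max_abs_w, 1.0 - alpha)     # (1 - alpha) quantile = critical value
```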


Author(s): Christopher D. Wickens, Adam Williams, Benjamin A. Clegg, C. A. P. Smith

Objective: To experimentally investigate maneuver decision preferences in navigating ships to avoid a collision, and how safety (collision avoidance) is balanced against efficiency (deviation from path and delay) and the rules of the road under conditions of both trajectory certainty and uncertainty. Background: Human decision error is a prominent factor in nautical collisions, but the interacting factors of collision geometry and uncertainty have been little studied in the empirical human factors literature. Approach: Eighty-seven Mechanical Turk participants performed a lower-fidelity ship control simulation depicting ownship and a cargo-ship hazard on collision or near-collision trajectories of various conflict geometries, while controlling heading and speed with sluggish relative dynamics. In Experiment 1 the hazard followed a straight trajectory; in Experiment 2 the hazard could turn on unpredictable trials. Participants were rewarded for efficiency and penalized for collisions or close passes. Results: Participants made few collisions, but did so more often when on a collision path. They sometimes violated the instructed rules of the road by maneuvering in front of the hazard ship’s path, and they preferred speed control to heading control. Performance degraded under uncertainty. Conclusion: The data reveal an understanding of maneuver decisions and of the conditions that affect the balance between safety and efficiency. Application: The simulation and data highlight the degrading role of uncertainty and provide a foundation upon which more complex questions can be asked, more highly trained navigators can be studied, and decision support tools can be examined.

