Double Threshold
Recently Published Documents

TOTAL DOCUMENTS: 337 (five years: 85)
H-INDEX: 18 (five years: 3)

Algorithmica ◽  
2022 ◽  
Author(s):  
Yusuke Kobayashi ◽  
Yoshio Okamoto ◽  
Yota Otachi ◽  
Yushi Uno

Abstract: A graph G = (V, E) is a double-threshold graph if there exist a vertex-weight function w : V → ℝ and two real numbers lb, ub ∈ ℝ such that uv ∈ E if and only if lb ≤ w(u) + w(v) ≤ ub. In the literature, these graphs have also been studied as the pairwise compatibility graphs whose underlying trees are stars. We give a new characterization of double-threshold graphs that relates them to bipartite permutation graphs. Using the new characterization, we present a linear-time algorithm for recognizing double-threshold graphs. Prior to our work, the fastest known algorithm, by Xiao and Nagamochi [Algorithmica 2020], ran in O(n³m) time, where n and m are the numbers of vertices and edges, respectively.
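The definition above can be checked directly for a candidate weight assignment. Below is a minimal Python sketch of that check; it only verifies a given (w, lb, ub), and is not the paper's linear-time recognition algorithm, whose details are not reproduced in the abstract.

```python
from itertools import combinations

def realizes_double_threshold(vertices, edges, w, lb, ub):
    """Check whether the weight function w and bounds (lb, ub) realize
    G = (vertices, edges) as a double-threshold graph, i.e.
    uv is an edge if and only if lb <= w(u) + w(v) <= ub."""
    edge_set = {frozenset(e) for e in edges}
    for u, v in combinations(vertices, 2):
        predicted = lb <= w[u] + w[v] <= ub
        if predicted != (frozenset((u, v)) in edge_set):
            return False
    return True

# The path a-b-c is a double-threshold graph: weights 1, 2, 1 with bounds [3, 10].
print(realizes_double_threshold(["a", "b", "c"], [("a", "b"), ("b", "c")],
                                {"a": 1.0, "b": 2.0, "c": 1.0}, lb=3.0, ub=10.0))  # True
```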


2021 ◽  
pp. 1-13
Author(s):  
Géraud M. F. C. Dautzenberg ◽  
Jeroen G. Lijmer ◽  
Aartjan T. F. Beekman

ABSTRACT Objectives: Diagnosis of patients suspected of mild dementia (MD) is a challenge, and patient numbers continue to rise. A short test for triaging patients in need of a neuropsychological assessment (NPA) is welcome. The Montreal Cognitive Assessment (MoCA) has high sensitivity for MD at the original cutoff of <26, but in clinical practice this results in too many false-positive (FP) referrals (low specificity). A cutoff is needed that finds all patients at high risk of MD without referring too many patients not (yet) in need of an NPA. A difficulty is deciding who is to be considered at risk, as definitions of disease (e.g. MD) do not always define health at the same time and thereby create subthreshold disorders. Design: In this study, we compared different selection strategies to efficiently identify patients in need of an NPA. Using the MoCA with a double threshold tackles the dilemma of increasing specificity without decreasing sensitivity, and creates the opportunity to distinguish the clinical (MD) from the subclinical (mild cognitive impairment, MCI) state and hence to apply the appropriate policy to each. Setting/participants: Patients referred to old-age psychiatry with suspected cognitive impairment who could benefit from an NPA (n = 693). Results: The optimal strategy was a two-stage selection process using the MoCA with a double threshold as an add-on after the initial assessment, selecting who is likely to have dementia and should be assessed further (MoCA < 21), who should be discharged (MoCA ≥ 26), and whose course should be monitored actively because of increased risk (21 ≤ MoCA < 26). Conclusion: Using two cutoffs improved the clinical value of the MoCA for triaging. A double-threshold MoCA not only gave the best results in terms of accuracy, PPV, and NPV, reducing FP referrals by 65% while still correctly triaging most MD patients; it also identified most MCI patients, whose intermediate state justifies active monitoring.
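At the MoCA step, the two-stage strategy reduces to a three-way split on the total score. A minimal sketch of that triage rule follows, using the cutoffs 21 and 26 stated in the abstract; the action labels are illustrative, not the study's clinical protocol.

```python
def triage_moca(score: int) -> str:
    """Double-threshold triage on the MoCA total score (cutoffs 21 and 26)."""
    if score < 21:
        return "refer for NPA (high risk of dementia)"
    if score < 26:
        return "monitor actively (intermediate risk, e.g. MCI)"
    return "discharge (no NPA referral)"

for s in (18, 23, 28):
    print(s, "->", triage_moca(s))
```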


2021 ◽  
Vol 6 (3) ◽  
pp. 70
Author(s):  
Erik Kowalski ◽  
Danilo S. Catelli ◽  
Mario Lamontagne

Electromyography (EMG) onsets determined by computerized detection methods have been compared against onsets selected by experts through visual inspection. However, with this type of approach the true onset remains unknown, making it impossible to determine whether computerized detection methods are better than visual detection (VD), as they can only be as good as what the experts select. The use of simulated signals allows all aspects of the signal to be precisely controlled, including the onset and the signal-to-noise ratio (SNR). This study compared three onset detection methods: approximated generalized likelihood ratio, double threshold (DT), and VD performed by eight trained individuals. The selected onset was compared against the true onset in simulated signals whose SNR varied from 5 to 40 dB. For signals with 5 dB SNR, the VD method was significantly better, but for SNRs of 20 dB or greater, no differences existed between the VD and DT methods. The DT method is recommended, as it can improve objectivity and reduce analysis time when determining EMG onsets. Even for the best-quality signals (SNR of 40 dB), all the detection methods were off by 15–30 ms from the true onset and became progressively more inaccurate as the SNR decreased. Consequently, although all the detection methods provided similar results, they can be off by 50–80 ms from the true onset as the SNR decreases to 10 dB. Caution must be used when interpreting EMG onsets, especially for signals where the SNR is low or not reported at all.
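The DT method referred to above is, in its usual form, an amplitude threshold that must be exceeded for a minimum duration. The study's exact implementation and parameters are not given in the abstract, so the following is a generic envelope-based double-threshold sketch with illustrative parameters (baseline mean + 3 SD amplitude threshold, 25 ms minimum duration).

```python
import numpy as np

def dt_onset(emg, fs, smooth_ms=20, baseline_ms=200, k_sd=3.0, min_on_ms=25):
    """Generic double-threshold EMG onset detector: the smoothed, rectified
    signal must exceed an amplitude threshold (baseline mean + k_sd * SD)
    for at least min_on_ms consecutive milliseconds.
    Returns the onset sample index, or None if no onset is found."""
    rect = np.abs(emg)
    win = max(1, int(fs * smooth_ms / 1000))
    env = np.convolve(rect, np.ones(win) / win, mode="same")  # moving-average envelope
    n_base = int(fs * baseline_ms / 1000)
    amp_thr = env[:n_base].mean() + k_sd * env[:n_base].std()
    min_on = max(1, int(fs * min_on_ms / 1000))
    run = 0
    for i, above in enumerate(env > amp_thr):
        run = run + 1 if above else 0
        if run >= min_on:
            return i - min_on + 1  # first sample of the qualifying run
    return None

# Simulated trial: 1 s of baseline noise, then a burst of higher-variance
# activity starting at sample 1000 (fs = 1 kHz).
rng = np.random.default_rng(0)
fs = 1000
signal = np.concatenate([rng.normal(0, 0.05, fs), rng.normal(0, 0.5, fs)])
print(dt_onset(signal, fs))  # expected to land close to sample 1000
```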


2021 ◽  
Vol 263 (2) ◽  
pp. 4570-4580
Author(s):  
Liu Ting ◽  
Luo Xinwei

The accuracy of discriminating speech from noise degrades greatly at low signal-to-noise ratios (SNRs). A neural network whose parameters are obtained from a training set can achieve good results on data it has seen, but performs poorly on samples with different environmental noises. The proposed method first extracts features based on the physical characteristics of the speech signal, which are robust. It takes 3-second segments as samples, judges whether a speech component is present under low SNR, and assigns a decision tag to each segment. If a plausible speech-like trajectory is found, the 3-second segment is judged to contain speech. A dynamic double threshold is then used for preliminary detection, and a global double threshold is obtained by K-means clustering. Finally, the detection result is obtained by sequential decision. The method has low complexity, strong robustness, and adapts to multiple languages. Experimental results show that it outperforms traditional methods at various SNRs and adapts well to multiple languages.
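The abstract does not detail the features or the sequential decision, but the K-means step can be sketched: cluster frame-level energies into a noise-like and a speech-like group and place a low and a high threshold between the two centres. The 2-means loop and the 0.25/0.75 placement fractions below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def global_double_threshold(frame_energies, n_iter=50):
    """Derive a (low, high) threshold pair from frame energies via 2-means
    clustering: split frames into a noise-like and a speech-like cluster,
    then place two thresholds between the cluster centres."""
    e = np.asarray(frame_energies, dtype=float)
    lo_c, hi_c = e.min(), e.max()                    # initial cluster centres
    for _ in range(n_iter):
        to_hi = np.abs(e - hi_c) < np.abs(e - lo_c)  # assign each frame to nearest centre
        if to_hi.all() or not to_hi.any():
            break
        lo_c, hi_c = e[~to_hi].mean(), e[to_hi].mean()
    low_thr = lo_c + 0.25 * (hi_c - lo_c)            # lenient threshold, near the noise cluster
    high_thr = lo_c + 0.75 * (hi_c - lo_c)           # strict threshold, near the speech cluster
    return low_thr, high_thr

# Toy usage: synthetic short-time energies, mostly noise frames plus some speech frames.
rng = np.random.default_rng(1)
energies = np.concatenate([rng.normal(1.0, 0.2, 200), rng.normal(5.0, 1.0, 100)])
low, high = global_double_threshold(energies)
confident_speech = energies > high                    # frames above the strict threshold
possible_speech = (energies > low) & ~confident_speech
print(round(low, 2), round(high, 2))
```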


2021 ◽  
Vol 2 (5) ◽  
Author(s):  
Anindita Sarkar Mondal ◽  
Somnath Mukhopadhyay ◽  
Kartick Chandra Mondal ◽  
Samiran Chattopadhyay

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Jiahui Zhang ◽  
Xiao Wang ◽  
Mingchi Ju ◽  
Tailin Han ◽  
Yingzhi Wang

In compressed sensing (CS) reconstruction, the Sparsity Adaptive Matching Pursuit (SAMP) algorithm suffers from sparsity overestimation and a large, redundant candidate atom set, which degrade reconstruction accuracy and the probability of exact recovery. In this paper, we propose an improved SAMP algorithm based on double-threshold, candidate-set-reduction, and adaptive backtracking methods. The algorithm uses a double-threshold variable step-size method to improve the accuracy of the sparsity estimate, and reduces the undetermined atom candidate set in the small-step stage to enhance stability. Combining this with backtracking further improves the sparsity estimation accuracy. We verify the algorithm on a Gaussian sparse signal and on a shock wave signal measured with a 15 psi range sensor. Experimental results show that the proposed DBCSAMP algorithm is the most stable of the iterative greedy algorithms compared, that its sparsity estimate is more accurate than SAMP's, and that its reconstruction accuracy and computational efficiency are greatly improved over SAMP.
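The abstract does not spell out the exact double-threshold and candidate-reduction rules, so the sketch below is only a generic SAMP-style reconstruction in which the stage step size drops from a large to a small value once the relative residual improvement falls below an assumed threshold; the step sizes and switching threshold are illustrative, not the DBCSAMP parameters.

```python
import numpy as np

def samp_variable_step(A, y, step_large=4, step_small=1, switch_thr=0.05,
                       tol=1e-6, max_iter=100):
    """SAMP-style greedy reconstruction with a two-level step size: start with
    a large stage step and drop to a small one once the relative residual
    improvement per iteration falls below switch_thr (assumed rule)."""
    m, n = A.shape
    support = np.array([], dtype=int)
    residual = y.copy()
    step, size = step_large, step_large                # current step and stage size
    for _ in range(max_iter):
        corr = np.abs(A.T @ residual)
        candidates = np.union1d(support, np.argsort(corr)[-size:])
        x_c, *_ = np.linalg.lstsq(A[:, candidates], y, rcond=None)
        keep = candidates[np.argsort(np.abs(x_c))[-size:]]   # backtracking prune
        x_k, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
        new_residual = y - A[:, keep] @ x_k
        if np.linalg.norm(new_residual) < tol:
            support = keep
            break
        if np.linalg.norm(new_residual) >= np.linalg.norm(residual):
            size += step                               # stage change: grow the sparsity estimate
        else:
            rel_impr = 1.0 - np.linalg.norm(new_residual) / np.linalg.norm(residual)
            if rel_impr < switch_thr:
                step = step_small                      # marginal gain: refine with the small step
            support, residual = keep, new_residual
    x = np.zeros(n)
    if support.size:
        x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x

# Toy usage: recover a 4-sparse signal from 32 random Gaussian measurements.
rng = np.random.default_rng(3)
n, m, k = 64, 32, 4
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
x_hat = samp_variable_step(A, A @ x0)
print(np.linalg.norm(x_hat - x0) < 1e-4)               # typically True for this well-posed toy problem
```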

