An Enhanced Wu-Huberman Algorithm with Pole Point Selection Strategy

2013 ◽  
Vol 2013 ◽  
pp. 1-6
Author(s):  
Yan Sun ◽  
Shuxue Ding

The Wu-Huberman algorithm is a typical linear-time method among the many clustering algorithms: it models the relationships among data points as an artificial “circuit” and then applies the Kirchhoff equations to obtain the voltage values across this circuit. However, the performance of the algorithm depends crucially on the selection of the pole points. In this paper, we present a novel pole point selection strategy for the Wu-Huberman algorithm (named the PSWH algorithm), which aims at preserving the merits and increasing the robustness of the algorithm. The pole point selection strategy filters the pole points by introducing a sparse rate. Experimental results demonstrate that the PSWH algorithm significantly improves clustering accuracy and efficiency compared with the original Wu-Huberman algorithm.
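The circuit analogy above can be made concrete. The sketch below shows the classic Wu-Huberman voltage computation (not the paper's PSWH selection strategy): two pole nodes are fixed at voltages 1 and 0, the Kirchhoff equations, i.e. the graph Laplacian system, are solved for the remaining nodes, and thresholding the voltages splits the graph in two. The example graph and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def wu_huberman_voltages(adj, pole_hi, pole_lo):
    """Fix pole_hi at voltage 1 and pole_lo at 0, then solve the
    Kirchhoff (graph Laplacian) equations for the free nodes."""
    n = adj.shape[0]
    laplacian = np.diag(adj.sum(axis=1)) - adj
    free = [i for i in range(n) if i not in (pole_hi, pole_lo)]
    v = np.zeros(n)
    v[pole_hi] = 1.0
    # Move the known pole voltages to the right-hand side.
    rhs = adj[free, pole_hi] * v[pole_hi] + adj[free, pole_lo] * v[pole_lo]
    v[free] = np.linalg.solve(laplacian[np.ix_(free, free)], rhs)
    return v

# Two triangles (nodes 0-2 and 3-5) joined by the single edge 2-3.
adj = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[a, b] = adj[b, a] = 1.0
v = wu_huberman_voltages(adj, pole_hi=0, pole_lo=5)
clusters = (v > 0.5).astype(int)  # threshold halfway between the poles
```

On this toy graph the voltages split cleanly at the bridge edge, recovering the two triangles as clusters, which illustrates why a poor pole choice (e.g. both poles in one triangle) would degrade the result.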

Author(s):  
Walid Atwa ◽  
◽  
Abdulwahab Ali Almazroi

Semi-supervised clustering algorithms aim to enhance clustering performance by using pairwise constraints. However, selecting these constraints randomly or improperly can degrade clustering performance in certain situations and across different applications. In this paper, we select the most informative constraints to improve semi-supervised clustering algorithms. We present an active selection of constraints, including active must-link (AML) and active cannot-link (ACL) constraints. Based on the Radial Basis Function, we compute lower and upper bounds between data points to select the constraints that improve performance. We compare the proposed algorithm with baseline methods and show that our active pairwise constraints outperform the other algorithms.
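As one illustration of the general idea, the sketch below scores candidate pairs with an RBF similarity and keeps those falling between a lower and an upper bound, ranking the most ambiguous pairs first as candidates for a must-link/cannot-link query. The bound values, the midpoint ranking and the function names are assumptions for illustration, not the paper's actual AML/ACL procedure.

```python
import numpy as np

def rbf_similarity(x, y, gamma=1.0):
    """Radial Basis Function similarity between two points."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def select_informative_pairs(X, lower=0.3, upper=0.7, gamma=1.0):
    """Pairs whose RBF similarity lies between the bounds are the
    ambiguous ones; querying constraint labels for them is assumed
    to be most informative."""
    candidates = []
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            s = rbf_similarity(X[i], X[j], gamma)
            if lower < s < upper:
                candidates.append((i, j, s))
    # Most ambiguous first: similarity closest to the midpoint.
    mid = (lower + upper) / 2
    candidates.sort(key=lambda t: abs(t[2] - mid))
    return candidates
```

Pairs that are clearly similar or clearly dissimilar are skipped, since a constraint on them would add little information the clustering could not infer on its own.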


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Baicheng Lyu ◽  
Wenhua Wu ◽  
Zhiqiang Hu

Abstract With the wide application of cluster analysis, the number of clusters is gradually increasing, as is the difficulty of selecting judgment indicators for the number of clusters. Also, small clusters are crucial to discovering the extreme characteristics of data samples, but current clustering algorithms focus mainly on analyzing large clusters. In this paper, a bidirectional clustering algorithm based on local density (BCALoD) is proposed. BCALoD establishes connections between data points based on local density, can automatically determine the number of clusters, is more sensitive to small clusters, and reduces the number of adjustable parameters to a minimum. On the basis of the robustness of the cluster number to noise, a denoising method suitable for BCALoD is proposed. A different cutoff distance and cutoff density are assigned to each data cluster, which results in improved clustering performance. The clustering ability of BCALoD is verified on randomly generated datasets and city light satellite images.
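The local-density notion that BCALoD builds on can be sketched as a simple neighbour count within a cutoff distance, as in density-peaks-style methods; BCALoD's per-cluster cutoffs and bidirectional procedure are not reproduced here.

```python
import numpy as np

def local_density(X, d_cut):
    """Local density of each point: the number of neighbours lying
    within the cutoff distance d_cut (the point itself is excluded)."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return (dist < d_cut).sum(axis=1) - 1
```

Assigning a separate `d_cut` per cluster, as the abstract describes, lets small dense clusters keep a meaningful density estimate instead of being absorbed by a global cutoff tuned to the large clusters.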


2013 ◽  
Vol 368 (1628) ◽  
pp. 20130056 ◽  
Author(s):  
Matteo Toscani ◽  
Matteo Valsecchi ◽  
Karl R. Gegenfurtner

When judging the lightness of objects, the visual system has to take into account many factors such as shading, scene geometry, occlusions or transparency. The problem then is to estimate global lightness based on a number of local samples that differ in luminance. Here, we show that eye fixations play a prominent role in this selection process. We explored a special case of transparency for which the visual system separates surface reflectance from interfering conditions to generate a layered image representation. Eye movements were recorded while the observers matched the lightness of the layered stimulus. We found that observers focused their fixations on the target layer, and this sampling strategy affected their lightness perception. The effect of image segmentation on perceived lightness was highly correlated with the fixation strategy and was strongly affected when we manipulated it using a gaze-contingent display. Finally, we disrupted the segmentation process, showing that it causally drives the selection strategy. Selection through eye fixations can thus serve as a simple heuristic to estimate the target reflectance.


2021 ◽  
Vol 12 (1) ◽  
pp. 89-102
Author(s):  
Bjørn-Jostein Singstad ◽  
Naomi Azulay ◽  
Andreas Bjurstedt ◽  
Simen S. Bjørndal ◽  
Magnus F. Drageseth ◽  
...  

Abstract Due to the possibilities for miniaturization and wearability, photoplethysmography (PPG) has recently gained large interest not only for heart rate measurement, but also for estimating heart rate variability (HRV), which by convention is derived from the ECG. The agreement between PPG-based and ECG-based HRV has been assessed in several studies, but the feasibility of PPG-based HRV estimation is still largely unknown for many conditions. In this study, we assess the feasibility of HRV estimation based on finger PPG during rest, mild physical exercise and mild mental stress. In addition, we compare different variants of signal processing methods, including the selection of the fiducial point and outlier correction. Based on five-minute synchronous recordings of PPG and ECG from 15 healthy participants during each of these three conditions, PPG-based HRV estimation was assessed for the SDNN and RMSSD parameters, calculated from two different fiducial points (foot point and maximum slope), with and without outlier correction. The results show that HRV estimation based on finger PPG is feasible during rest and mild mental stress, but can give large errors during mild physical exercise. A good estimation depends strongly on outlier correction and fiducial point selection, and SDNN appears to be a more robust parameter than RMSSD for PPG-based HRV estimation.
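The two HRV parameters assessed are standard and can be computed directly from a series of interbeat (RR) intervals. The sketch below uses the conventional definitions (sample standard deviation for SDNN), which may differ in minor details, such as the degrees-of-freedom convention, from the study's implementation.

```python
import numpy as np

def sdnn(rr_ms):
    """SDNN: standard deviation of the RR (NN) intervals, in ms.
    Sample standard deviation (ddof=1) is used here."""
    rr = np.asarray(rr_ms, dtype=float)
    return rr.std(ddof=1)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval
    differences, in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    return np.sqrt(np.mean(np.diff(rr) ** 2))
```

Because RMSSD is built from beat-to-beat differences, a single misplaced fiducial point corrupts two successive differences, which is consistent with the abstract's finding that RMSSD is less robust than SDNN without outlier correction.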


2021 ◽  
Vol 20 ◽  
pp. 133-139
Author(s):  
Alexander Zemliak

Different design trajectories have been analyzed in the design space on the basis of the new system design methodology. The optimal position of the design algorithm's start point was analyzed to minimize the CPU time. The initial point was selected on the basis of the previously discovered acceleration effect of the system design process. A geometrical dividing surface was defined and analyzed to obtain the optimal position of the algorithm's start point. The numerical results of the design of passive and active nonlinear electronic circuits confirm the possibility of optimally selecting the starting point of the design algorithm.


2002 ◽  
Vol 126 (1) ◽  
pp. 19-27
Author(s):  
Dana Marie Grzybicki ◽  
Thomas Gross ◽  
Kim R. Geisinger ◽  
Stephen S. Raab

Abstract Context.—Measuring variation in clinician test ordering behavior for patients with similar indications is an important focus for quality management and cost containment. Objective.—To obtain information from physicians and nonphysicians regarding their test-ordering behavior and their knowledge of test performance characteristics for diagnostic tests used to work up patients with lung lesions suspicious for cancer. Design.—A self-administered, voluntary, anonymous questionnaire was distributed to 452 multiple-specialty physicians and 500 nonphysicians in academic and private practice in Pennsylvania, Iowa, and North Carolina. Respondents indicated their estimates of test sensitivities for multiple tests used in the diagnosis of lung lesions and provided their test selection strategy for case simulations of patients with solitary lung lesions. Data were analyzed using descriptive statistics and the χ2 test. Results.—The response rate was 11.2%. Both physicians and nonphysicians tended to underestimate the sensitivities of all minimally invasive tests, with the greatest underestimations reported for sputum cytology and transthoracic fine-needle aspiration biopsy. There was marked variation in sequential test selection for all the case simulations and no association between respondent perception of test sensitivity and their selection of first diagnostic test. Overall, the most frequently chosen first diagnostic test was bronchoscopy. Conclusions.—Physicians and nonphysicians tend to underestimate the performance of diagnostic tests used to evaluate solitary lung lesions. However, their misperceptions do not appear to explain the wide variation in test-ordering behavior for patients with lung lesions suspicious for cancer.


Author(s):  
Ping Deng ◽  
Qingkai Ma ◽  
Weili Wu

Clustering can be considered the most important unsupervised learning problem. It has been discussed thoroughly by both the statistics and database communities due to its numerous applications in problems such as classification, machine learning, and data mining. A summary of clustering techniques can be found in (Berkhin, 2002). Most known clustering algorithms, such as DBSCAN (Ester, Kriegel, Sander, & Xu, 1996) and CURE (Guha, Rastogi, & Shim, 1998), cluster data points based on the full set of dimensions. As the dimensionality of the space grows, these algorithms lose efficiency and accuracy because of the so-called “curse of dimensionality”. It is shown in (Beyer, Goldstein, Ramakrishnan, & Shaft, 1999) that computing distances over the full dimensions is not meaningful in high-dimensional space, since the distance from a point to its nearest neighbor approaches the distance to its farthest neighbor as the dimensionality increases. In fact, natural clusters might exist in subspaces: data points in different clusters may be correlated with respect to different subsets of dimensions. To address this problem, feature selection (Kohavi & Sommerfield, 1995) and dimension reduction (Raymer, Punch, Goodman, Kuhn, & Jain, 2000) have been proposed to find the closely correlated dimensions for all the data and the clusters in those dimensions. Although both methods reduce the dimensionality of the space before clustering, they do not handle well the case where clusters exist in different subspaces of the full dimensions. Projected clustering has recently been proposed to deal effectively with high-dimensional data. Finding the clusters and their relevant dimensions is the objective of projected clustering algorithms. Instead of projecting the entire dataset onto the same subspace, projected clustering finds a specific projection for each cluster such that similarity is preserved as much as possible.
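The distance-concentration effect cited from Beyer et al. is easy to demonstrate numerically: for points drawn uniformly at random, the ratio of the farthest to the nearest distance from a query point shrinks toward 1 as the dimensionality grows. A small illustrative experiment (the sample sizes and seed are arbitrary choices):

```python
import numpy as np

def distance_contrast(n_points, dim, rng):
    """Ratio of the farthest to the nearest distance from one random
    query point to a random sample of points in the unit cube."""
    X = rng.random((n_points, dim))
    q = rng.random(dim)
    d = np.sqrt(((X - q) ** 2).sum(axis=1))
    return d.max() / d.min()

rng = np.random.default_rng(0)
low_dim = distance_contrast(1000, 2, rng)
high_dim = distance_contrast(1000, 1000, rng)
# In 2 dimensions the nearest and farthest neighbours differ greatly;
# in 1000 dimensions the contrast all but vanishes.
```

With the contrast gone, full-dimensional distances no longer discriminate between neighbours, which is exactly why projected clustering looks for cluster-specific subspaces instead.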


Author(s):  
Deepali Virmani ◽  
Nikita Jain ◽  
Ketan Parikh ◽  
Shefali Upadhyaya ◽  
Abhishek Srivastav

This article describes how data, if relevant, can be organized, linked with other data and grouped into clusters. Clustering is the process of organizing a given set of objects into a set of disjoint groups called clusters. There are a number of clustering algorithms, such as k-means, k-medoids and normalized k-means, so the focus remains on the efficiency and accuracy of the algorithms, as well as on the time clustering takes and on reducing the overlap between clusters. K-means is one of the simplest unsupervised learning algorithms that solves the well-known clustering problem. The k-means algorithm partitions the data into K clusters around randomly chosen centroids, and its restriction to numeric values prevents it from being used to cluster real-world data containing categorical values. Poor selection of the initial centroids can result in poor clustering. This article proposes a variant of k-means with modifications that yield better clustering, reduced overlap and less clustering time, by selecting the initial centres and normalizing the data.
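The article's exact centre-selection rule is not given in the abstract; as one illustration of the general idea, the sketch below combines min-max normalisation with a deterministic farthest-point initialisation before running standard Lloyd iterations.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Lloyd's k-means with min-max normalisation and a farthest-point
    choice of initial centres (an illustrative alternative to random
    initialisation; the article's actual rule may differ)."""
    X = np.asarray(X, dtype=float)
    span = X.max(axis=0) - X.min(axis=0)
    Xn = (X - X.min(axis=0)) / np.where(span == 0, 1, span)
    # Start from the point nearest the overall mean, then repeatedly
    # add the point farthest from the centres chosen so far.
    centres = [Xn[np.argmin(((Xn - Xn.mean(axis=0)) ** 2).sum(axis=1))]]
    for _ in range(k - 1):
        d = np.min([((Xn - c) ** 2).sum(axis=1) for c in centres], axis=0)
        centres.append(Xn[np.argmax(d)])
    centres = np.array(centres)
    for _ in range(n_iter):
        labels = np.argmin(((Xn[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = Xn[labels == j].mean(axis=0)
    return labels
```

Normalisation keeps any one feature from dominating the distance, and a deterministic, well-spread initialisation avoids the degenerate partitions that random seeding can produce.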


1969 ◽  
Vol 91 (1) ◽  
pp. 193-197 ◽  
Author(s):  
William H. Bussell

A method intended for programming on a computer is presented for designing four-bar function generators based on infinitesimal kinematic synthesis. By using the outlined procedure, one can obtain mechanism linkage specifications and performance tables for a large number of the possible mechanisms for a single design point. Selection of the most suitable mechanism by inspection of the tables is then possible.

