Comparing distributions and shapes using the kernel distance

Author(s):  
Sarang Joshi ◽  
Raj Varma Kommaraji ◽  
Jeff M. Phillips ◽  
Suresh Venkatasubramanian
1998 ◽  
pp. 81-101
Author(s):  
Allan J. Rossman ◽  
Beth L. Chance

Author(s):  
Zuherman Rustam ◽  
Aldi Purwanto ◽  
Sri Hartini ◽  
Glori Stephani Saragih

Cancer is one of the diseases with the highest mortality rates in the world. It is a disease in which abnormal cells grow out of control, invading adjacent organs or spreading to other parts of the body. Lung cancer is a condition in which malignant cells form in the lungs. Lung cancer can be diagnosed using X-ray imaging, CT scans, and lung tissue biopsy. In this modern era, technology is expected to support research in the field of health. Therefore, in this study, features extracted from CT images were used as data to classify lung cancer. We used CT scan image data from the SPIE-AAPM Lung CT Challenge 2015. Fuzzy C-Means and fuzzy kernel C-Means were used to classify each patient's lung nodule as benign or malignant. Fuzzy C-Means is a soft clustering method that uses Euclidean distance to compute the cluster centers and membership matrix, whereas fuzzy kernel C-Means uses kernel distance instead. In addition, the support vector machine was used in another study to obtain 72% average AUC. Simulations were performed using different k-folds. The results showed that fuzzy kernel C-Means had the highest accuracy at 74%, while fuzzy C-Means obtained 73% accuracy.
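The abstract contrasts the Euclidean distance used by fuzzy C-Means with the kernel distance used by fuzzy kernel C-Means. The following is a minimal NumPy sketch of those two distance computations and the standard FCM membership update; the Gaussian kernel, its bandwidth `gamma`, and the fuzzifier `m` are illustrative assumptions, not parameters reported in the study.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Standard fuzzy C-Means membership update using Euclidean distance.
    u[i, k] is the degree to which sample i belongs to cluster k."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def kernel_distance_sq(X, centers, gamma=0.5):
    """Kernel-induced squared distance with a Gaussian (RBF) kernel:
    ||phi(x) - phi(c)||^2 = K(x, x) + K(c, c) - 2 K(x, c) = 2 * (1 - K(x, c)),
    since K(x, x) = 1 for the RBF kernel."""
    sq = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    K = np.exp(-gamma * sq)
    return 2.0 * (1.0 - K)
```

In a kernelized variant, the membership update keeps the same form but substitutes the kernel-induced distance for the Euclidean one.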


2021 ◽  
pp. 2150027
Author(s):  
Junlan Nie ◽  
Ruibo Gao ◽  
Ye Kang

Prediction of urban noise is becoming more significant for tackling noise pollution and protecting human mental health. However, existing noise prediction algorithms neglect both the correlation between noise regions and the nonlinearity and sparsity of the data, which results in low accuracy when filling in the missing entries of the data. In this paper, we propose a model based on multiple views and kernel-matrix tensor decomposition to predict the noise situation at different times of day in each region. We first construct a kernel tensor decomposition model using kernel mapping in order to speed up the decomposition and obtain a stable estimate of the prediction system. Then, we analyze the causes of noise from multiple views, computing the similarity of regions and the correlation between noise categories with the kernel distance, which improves the credibility of inferring the noise situation and the categories of regions. Finally, we devise a prediction algorithm based on the kernel-matrix tensor factorization model. We evaluate our method on a real dataset, and the experiments verify its advantages over existing baselines.
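The abstract describes scoring region similarity with a kernel distance. Below is a minimal sketch of one plausible reading of that step, a pairwise region-similarity matrix built from an RBF kernel over per-region feature vectors; the feature vectors and the bandwidth `gamma` are hypothetical placeholders, not the paper's actual views or settings.

```python
import numpy as np

def region_similarity(features, gamma=0.1):
    """Pairwise region similarity from a Gaussian (RBF) kernel over region
    feature vectors (hypothetical features, e.g. counts of noise complaints
    per category in each region). Entries close to 1 indicate regions whose
    noise patterns are expected to correlate."""
    sq = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * sq)

# Example: 4 regions described by 3 hypothetical noise-related features.
F = np.array([[3., 1., 0.], [2., 1., 1.], [9., 0., 4.], [8., 1., 5.]])
print(region_similarity(F))
```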


1980 ◽  
Vol 37 (4) ◽  
pp. 576-582 ◽  
Author(s):  
R. R. Reisenbichler ◽  
N. A. Hartmann Jr.

Methods are developed for predicting the expected precision for studies of the contribution of fish to a fishery, based upon the number of fish marked and the number of years an experiment is repeated. Studies concerned with estimating catch–release ratios, comparing catch–release ratios, and comparing distributions of catch are considered. It is suggested that releases of marked fish should be repeated for at least three or four broods, and often there is little advantage in releasing more than 50 000 marked fish per release group. Although we explicitly address studies of contribution to ocean fisheries, the methods apply directly to a broad range of studies involving marked fish, from evaluations of harvest rates on catchable-trout plants to estimates of catch–escapement ratios for Pacific salmon.
Key words: precision, experimental design, number of fish to mark, number of years to release marked fish, catch–release ratios, distribution of catch
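As a rough back-of-the-envelope illustration of the kind of precision reasoning the abstract describes, and not the authors' actual formulas, a binomial approximation to the coefficient of variation of a catch–release ratio could be sketched as follows; the recovery rate, pooling rule, and function names are assumptions for illustration only.

```python
import math

def expected_cv(releases, recovery_rate, broods=3):
    """Illustrative approximation: if recoveries of marked fish are binomial
    with probability `recovery_rate`, the catch-release ratio estimate
    r = recoveries / releases has coefficient of variation
    sqrt((1 - p) / (n * p)); averaging over independent brood-year releases
    shrinks it by sqrt(broods)."""
    cv_single = math.sqrt((1 - recovery_rate) / (releases * recovery_rate))
    return cv_single / math.sqrt(broods)

# e.g. 50 000 marked fish per release group, 1% recovery, 3 broods
print(expected_cv(50_000, 0.01, broods=3))  # about 0.026
```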


2020 ◽  
Vol 14 ◽  
pp. 174830262093142 ◽  
Author(s):  
Noor Badshah ◽  
Ali Ahmad ◽  
Fazli Rehman

One of the crucial challenges in image segmentation is intensity inhomogeneity. Because most region-based models rely on intensity distributions, they cannot completely segment images with severe intensity inhomogeneity and complex structure. In this work, we propose a new hybrid model that blends kernel and Euclidean distance metrics. Experimental results on real and synthetic images suggest that the proposed model outperforms the models of Chan and Vese, Wu and He, and Salah et al.
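To illustrate how a kernel metric can be blended with a Euclidean one in a region-based data term, here is a minimal per-pixel sketch in the style of a Chan–Vese fidelity term; the Gaussian kernel, the bandwidth `sigma`, and the mixing weight `alpha` are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def hybrid_fidelity(img, c, sigma=0.1, alpha=0.5):
    """Per-pixel data term pulling intensities toward a region value c.
    Blends the Euclidean term (I - c)^2 with the kernel-induced distance
    2 * (1 - K(I, c)) for a Gaussian kernel K; `alpha` weights the two."""
    eucl = (img - c) ** 2
    kern = 2.0 * (1.0 - np.exp(-((img - c) ** 2) / (2 * sigma ** 2)))
    return alpha * eucl + (1 - alpha) * kern

# Example: fidelity of a small intensity patch toward region mean 0.7
patch = np.array([[0.68, 0.71], [0.30, 0.72]])
print(hybrid_fidelity(patch, c=0.7))
```

The kernel term saturates for outlying intensities, which is one common motivation for mixing it with the plain Euclidean term under inhomogeneity.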

