A New Approach of 2D Measurement of Injury Rate on Fish by a Modified K-means Clustering Algorithm Based on L*A*B* Color Space

Author(s):  
Minh Thien Tran ◽  
Huy Hung Nguyen ◽  
Jotje Rantung ◽  
Hak Kyeong Kim ◽  
Sea June Oh ◽  
...  
2020 ◽  
Vol 2020 (1) ◽  
pp. 100-104
Author(s):  
Hakki Can Karaimer ◽  
Rang Nguyen

Colorimetric calibration computes the color space transformation needed to map a camera's device-specific color space to a device-independent perceptual color space. Color calibration is most commonly performed by imaging a color rendition chart with a fixed number of color patches with known colorimetric values (e.g., CIE XYZ values). The color space transformation is estimated from the correspondences between the camera's image and the chart's colors. We present a new approach to colorimetric calibration that does not require explicit color correspondences. Our approach computes a color space transformation by aligning the color distributions of the captured image to the known distribution of a calibration chart containing thousands of colors. We show that a histogram-based colorimetric calibration approach provides results that are on par with the traditional patch-based method without the need to establish correspondences.
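As a rough illustration of correspondence-free calibration, the sketch below aligns only the first two moments (mean and covariance) of two unpaired color clouds with a linear transform. The paper aligns full color histograms, so this moment-matching transform is a simplified stand-in, and every name in it is invented for the example:

```python
import numpy as np

def moment_match_transform(src, tgt):
    """Estimate a linear color transform (3x3 M, offset b) that aligns the
    mean and covariance of the src color cloud to those of tgt.
    A crude stand-in for full distribution alignment: no per-patch
    correspondences are needed, only the two unordered color clouds."""
    mu_s, mu_t = src.mean(0), tgt.mean(0)
    cov_s = np.cov(src, rowvar=False)
    cov_t = np.cov(tgt, rowvar=False)

    def sqrtm(c):
        # matrix square root via eigendecomposition (covariances are symmetric PSD)
        w, v = np.linalg.eigh(c)
        return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.T

    M = sqrtm(cov_t) @ np.linalg.inv(sqrtm(cov_s))
    b = mu_t - M @ mu_s
    return M, b

# toy check: align two clouds whose row order has been shuffled,
# so no correspondences survive
rng = np.random.default_rng(0)
src = rng.normal(size=(5000, 3))
true_M = np.array([[1.2, 0.1, 0.0], [0.0, 0.9, 0.2], [0.1, 0.0, 1.1]])
tgt = rng.permutation(src) @ true_M.T + np.array([0.05, -0.02, 0.1])
M, b = moment_match_transform(src, tgt)
aligned = src @ M.T + b
```

By construction the aligned cloud matches the target's mean and covariance exactly, even though the rows were shuffled; full histogram alignment would additionally match higher-order structure.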


2020 ◽  
Vol 15 ◽  
pp. 155892502097832
Author(s):  
Jiaqin Zhang ◽  
Jingan Wang ◽  
Le Xing ◽  
Hui’e Liang

As precious cultural heritage of the Chinese nation, traditional costumes are in urgent need of scientific research and protection. Studies on costume silhouettes in particular are scanty, owing to cultural-relic-protection constraints and the strong subjectivity of manual measurement, which limit the accuracy of quantitative research. This paper presents an automatic measurement method for traditional Chinese costume dimensions based on fuzzy C-means (FCM) clustering and silhouette feature point location. The method consists of six steps: (1) costume image acquisition; (2) costume image preprocessing; (3) color space transformation; (4) object clustering segmentation; (5) costume silhouette feature point location; and (6) costume measurement. First, the relative total variation model was used to achieve robustness to the environment and adaptability to costume colors. Second, the FCM clustering algorithm was used to segment the image and extract the outer silhouette of the costume. Finally, automatic measurement of the costume silhouette was achieved by locating its feature points. The experimental results demonstrated that the proposed method could effectively segment the outer silhouette of a costume image and locate the feature points of the silhouette. The measurement accuracy meets the requirements of industrial application, providing dual value for costume culture research and industrial use.
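The clustering-segmentation step of the pipeline can be sketched with a minimal fuzzy C-means implementation. This is a generic FCM on toy color samples, not the authors' code, and the example data are invented:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy C-means: returns (centers, membership matrix U).
    X: (n, d) samples; m: fuzzifier (> 1). Each row of U holds the soft
    memberships of one sample and sums to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(0)[:, None]          # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                   # standard FCM update
        U /= U.sum(1, keepdims=True)
    return centers, U

# toy "costume vs. background" pixels: two well-separated color blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.05, (200, 3)),
               rng.normal(0.8, 0.05, (200, 3))])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(1)   # hard assignment for silhouette extraction
```

In the paper's setting X would hold per-pixel colors after the color space transformation, and the hard labels would yield the binary mask whose contour is the costume's outer silhouette.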


Author(s):  
Ralf Schleiffer ◽  
Hans-Jürgen Sebastian ◽  
Erik K. Antonsson

Problems in the field of engineering design represent an important class of real-world problems that typically require a fuzzy and imprecise representation. This article presents and discusses a new approach to modeling this type of problem by incorporating linguistic descriptions together with a variety of user-defined trade-off strategies. An interactive computer application is introduced that uses stochastic optimization to solve the design task, producing a specifically desired output under the given environmental conditions, which are shaped partly by the personal preferences of the engineer and partly by the expectations of the customer. It employs a randomized evolutionary technique, adapted to the class of problems at hand, to generate and optimize design solutions that are later identified by a clustering algorithm. Test problems solved by the application are also considered. In all cases, good solutions were obtained by evaluating only an extremely small fraction of all possible designs.
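The kind of randomized evolutionary search the article relies on can be illustrated with a minimal elitist loop over a toy design objective. The actual application, its linguistic preference model, and the clustering stage are not reproduced here; the objective and all parameters below are invented:

```python
import random

def evolve(fitness, dim, pop=20, gens=100, sigma=0.3, seed=42):
    """Minimal elitist evolutionary loop: mutate every parent, pool
    parents and children, keep the fittest. Only a tiny fraction of the
    design space is ever evaluated (pop * gens candidates)."""
    rnd = random.Random(seed)
    population = [[rnd.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        children = [[x + rnd.gauss(0, sigma) for x in p] for p in population]
        population = sorted(population + children, key=fitness, reverse=True)[:pop]
    return population[0]

# toy design objective: maximize closeness to a target specification
target = [0.5, -0.25, 0.75]
best = evolve(lambda v: -sum((a - b) ** 2 for a, b in zip(v, target)), dim=3)
```

A real engineering-design run would replace the quadratic objective with the fuzzy, preference-weighted evaluation the article describes, and would cluster the surviving solutions afterwards.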


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Abdelaaziz Mahdaoui ◽  
El Hassan Sbai

While the reconstruction of 3D objects is increasingly common today, simplification of the 3D point cloud has become an essential phase in this reconstruction process, owing to the huge amounts of dense 3D point-cloud data produced by 3D scanning devices. In this paper, a new approach is proposed to simplify 3D point clouds based on the k-nearest neighbor (k-NN) method and a clustering algorithm. Initially, the 3D point cloud is divided into clusters using the k-means algorithm. Then, an entropy estimate is computed for each cluster, and the clusters with minimal entropy are removed. MATLAB is used to carry out the simulation, and the performance of the method is verified on test datasets. Numerous experiments demonstrate the effectiveness of the proposed 3D point-cloud simplification method.
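The described pipeline (k-means partition, per-cluster entropy, removal of the minimal-entropy clusters) can be sketched as follows. The Gaussian log-determinant entropy proxy is an assumption, since the paper's estimator is not given here, and all parameter values are illustrative:

```python
import numpy as np

def simplify_cloud(points, k=8, drop=2, seed=0):
    """Sketch of the pipeline: k-means the cloud, score each cluster with
    an entropy proxy, discard the lowest-entropy clusters."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(20):                                   # plain Lloyd iterations
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(0)

    def entropy_proxy(c):
        # ~ Gaussian differential entropy up to constants: 0.5 * log det cov
        cov = np.cov(points[labels == c], rowvar=False)
        return 0.5 * np.log(np.linalg.det(cov + 1e-9 * np.eye(3)))

    scores = {c: entropy_proxy(c) for c in range(k) if np.sum(labels == c) > 3}
    keep = sorted(scores, key=scores.get)[drop:]          # drop minimal-entropy clusters
    return points[np.isin(labels, keep)]

rng = np.random.default_rng(2)
cloud = rng.normal(size=(1000, 3))
simplified = simplify_cloud(cloud)
```

Low-entropy clusters correspond to flat, low-detail regions, so removing them thins the cloud where little geometric information is lost.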


Author(s):  
Preeti Mulay

Cluster members are decided based on how close they are to each other, and cluster compactness plays an important role in forming better-quality clusters. The ICNBCF incremental clustering algorithm computes a closeness factor between every pair of data series. To decide cluster membership, one more decisive factor is needed for comparison: a threshold. Internal cluster evaluation measures such as variance and the Dunn index provide this decisive factor. In the initial phase of ICNBCF, this factor was set manually by inspecting the computed closeness factors. With values generated by the internal evaluation measure formulae, this process can be automated. This paper presents a detailed study of various evaluation measures for use with the new incremental clustering algorithm ICNBCF.
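The Dunn index mentioned above can be computed directly. This is a generic implementation of the measure, not of ICNBCF itself, and the toy data are invented:

```python
import numpy as np

def dunn_index(X, labels):
    """Dunn index: minimum inter-cluster distance divided by maximum
    intra-cluster diameter. Higher is better (compact, well-separated
    clusters), which is what makes it usable as an automatic threshold."""
    clusters = [X[labels == c] for c in np.unique(labels)]
    diam = max(np.linalg.norm(a - b) for c in clusters for a in c for b in c)
    sep = min(np.linalg.norm(a - b)
              for i, ci in enumerate(clusters)
              for cj in clusters[i + 1:]
              for a in ci for b in cj)
    return sep / diam

# two tight, well-separated clusters -> large Dunn index
X = np.array([[0.0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]])
labels = np.array([0, 0, 0, 1, 1, 1])
```

Scrambling the labels mixes the clusters and drives the index down, which is the behavior an automated threshold search would exploit.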


2020 ◽  
Vol 28 (4) ◽  
pp. 531-561 ◽  
Author(s):  
Andrew Lensen ◽  
Bing Xue ◽  
Mengjie Zhang

Clustering is a difficult and widely studied data mining task, with many varieties of clustering algorithms proposed in the literature. Nearly all algorithms use a similarity measure such as a distance metric (e.g., Euclidean distance) to decide which instances to assign to the same cluster. These similarity measures are generally predefined and cannot be easily tailored to the properties of a particular dataset, which leads to limitations in the quality and the interpretability of the clusters produced. In this article, we propose a new approach to automatically evolving similarity functions for a given clustering algorithm by using genetic programming. We introduce a new genetic programming-based method which automatically selects a small subset of features (feature selection) and then combines them using a variety of functions (feature construction) to produce dynamic and flexible similarity functions that are specifically designed for a given dataset. We demonstrate how the evolved similarity functions can be used to perform clustering using a graph-based representation. The results of a variety of experiments across a range of large, high-dimensional datasets show that the proposed approach can achieve higher and more consistent performance than the benchmark methods. We further extend the proposed approach to automatically produce multiple complementary similarity functions by using a multi-tree approach, which gives further performance improvements. We also analyse the interpretability and structure of the automatically evolved similarity functions to provide insight into how and why they are superior to standard distance metrics.
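How an arbitrary similarity function plugs into graph-based clustering can be sketched as follows. The genetic programming that evolves the function is not shown; a hand-written similarity stands in for an evolved one, and the connectivity rule is a plain nearest-neighbor graph rather than the paper's exact representation:

```python
import numpy as np

def graph_cluster(X, similarity, n_neighbors=3):
    """Link each instance to its n most similar instances (under the
    supplied similarity function) and take connected components of the
    resulting undirected graph as clusters."""
    n = len(X)
    sim = np.array([[similarity(X[i], X[j]) for j in range(n)] for i in range(n)])
    np.fill_diagonal(sim, -np.inf)                 # no self-edges
    adj = [set(np.argsort(sim[i])[-n_neighbors:]) for i in range(n)]
    labels, cur = [-1] * n, 0
    for start in range(n):                         # flood-fill components
        if labels[start] != -1:
            continue
        stack = [start]
        while stack:
            i = stack.pop()
            if labels[i] != -1:
                continue
            labels[i] = cur
            stack.extend(j for j in adj[i] if labels[j] == -1)
            stack.extend(j for j in range(n) if i in adj[j] and labels[j] == -1)
        cur += 1
    return labels

# a hand-written similarity standing in for an evolved one
sim_fn = lambda a, b: -np.linalg.norm(a - b)
X = np.vstack([np.random.default_rng(3).normal(0, 0.1, (20, 2)),
               np.random.default_rng(4).normal(5, 0.1, (20, 2))])
labels = graph_cluster(X, sim_fn)
```

Because the similarity function is just a parameter, an evolved function operating on a selected feature subset can be dropped in without changing the clustering machinery.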


Author(s):  
J. M. Soto-Hidalgo ◽  
J. Chamorro-Martinez ◽  
D. Sanchez

2017 ◽  
Vol 23 (6) ◽  
pp. 1130-1142 ◽  
Author(s):  
Muhammad Burhan Khan ◽  
Humaira Nisar ◽  
Choon Aun Ng ◽  
Kim Ho Yeap ◽  
Koon Chun Lai

Image processing and analysis is an effective tool for monitoring and fault diagnosis of activated sludge (AS) wastewater treatment plants. AS images comprise flocs (microbial aggregates) and filamentous bacteria. In this paper, nine different approaches are proposed for segmenting phase-contrast microscopic (PCM) images of AS samples, and each is assessed for its effectiveness with respect to the microscopic artifacts associated with PCM. The first approach is based on the idea that color space representations other than red-green-blue may offer better contrast. The second uses edge detection. The third employs a clustering algorithm for segmentation, and the fourth applies local adaptive thresholding. The fifth is based on texture, and the sixth uses the watershed algorithm. The seventh adopts a split-and-merge approach, the eighth employs Kittler's thresholding, and the ninth uses top-hat and bottom-hat filtering. The approaches are assessed and analyzed critically with reference to the artifacts of PCM, using gold approximations of ground-truth images to evaluate the segmentations. Overall, the edge detection-based approach exhibits the best results in terms of accuracy, and the texture-based algorithm in terms of false negative ratio; the scenarios in which each is suitable are explained.
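The edge detection-based approach (the overall accuracy winner) can be illustrated with a bare Sobel gradient threshold on a synthetic bright "floc". This is only the edge-map stage; a real floc segmentation would add morphological closing and hole filling, and the threshold fraction is an invented parameter:

```python
import numpy as np

def sobel_edges(img, thresh=0.5):
    """Sobel gradient magnitude, thresholded at a fraction of its maximum,
    as a minimal edge-based foreground cue."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for di in range(3):                       # direct 3x3 correlation, valid region
        for dj in range(3):
            patch = img[di:di + h - 2, dj:dj + w - 2]
            gx += kx[di, dj] * patch
            gy += ky[di, dj] * patch
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

# synthetic "floc": bright disc on a dark background
yy, xx = np.mgrid[:64, :64]
img = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)
edges = sobel_edges(img)
```

The resulting edge map is a ring along the disc boundary, with nothing fired in the flat interior; the halo artifacts of PCM are precisely what make such boundary responses both useful and tricky in practice.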


Author(s):  
NACER FARAJZADEH ◽  
GANG PAN ◽  
ZHAOHUI WU ◽  
MIN YAO

This paper proposes a new approach to improving multiclass classification performance by employing a Stacked Generalization structure and a One-Against-One decomposition strategy. The proposed approach encodes the outputs of all pairwise classifiers by implicitly embedding two-class discriminative information in a probabilistic manner. The encoded outputs, called Meta Probability Codes (MPCs), are interpreted as projections of the original features, and prove more suitable for clustering than the original features. Based on MPC, we introduce a cluster-based multiclass classification algorithm called MPC-Clustering. It projects the original feature space to MPC, clusters the MPCs, and then trains individual multiclass classifiers on the resulting clusters to complete the multiclass classifier induction. The performance of the proposed algorithm is extensively evaluated on 20 datasets from the UCI machine learning repository. The results show that MPC-Clustering is quite efficient, with a 2.4% improvement in overall classification rate over state-of-the-art multiclass classifiers.
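The one-against-one probabilistic encoding can be sketched with toy nearest-mean pairwise classifiers. The paper's actual base classifiers and probability estimates differ, so this only shows the shape of an MPC-style code; every model choice below is an assumption for illustration:

```python
import numpy as np

def pairwise_prob_code(X, y, Xq):
    """Encode query samples Xq as vectors of pairwise (one-against-one)
    posterior probabilities, one entry per class pair, in the spirit of
    Meta Probability Codes. Each pairwise 'classifier' is a toy
    nearest-mean model with a logistic link on the distance gap."""
    classes = np.unique(y)
    codes = []
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            da = np.linalg.norm(Xq - X[y == a].mean(0), axis=1)
            db = np.linalg.norm(Xq - X[y == b].mean(0), axis=1)
            codes.append(1 / (1 + np.exp(da - db)))   # P(class a vs class b)
    return np.stack(codes, axis=1)

# 3 classes -> 3 class pairs -> 3-dimensional code per sample
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 0.2, (30, 2)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)
codes = pairwise_prob_code(X, y, X)
```

Samples of the same class land near the same corner of the code space, which is why the codes cluster more cleanly than raw features and can feed the subsequent per-cluster classifier training.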

