Morphological Reconstruction-Based Image-Guided Fuzzy Clustering with a Novel Impact Factor

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Qingxue Qin ◽  
Guangmei Xu ◽  
Jin Zhou ◽  
Rongrong Wang ◽  
Hui Jiang ◽  
...  

The guided filter is a novel explicit image filtering method, which applies smoothing on "flat patch" regions and preserves edges on "high variance" regions. Recently, the guided filter has been successfully incorporated into the fuzzy c-means (FCM) process to boost clustering results on noisy images. However, the adaptability of existing guided filter-based FCM methods to different images deteriorates because the factor ε of the guided filter is fixed to a single scalar. To solve this issue, this paper proposes a new guided filter-based FCM method (IFCM_GF), in which the guidance image of the guided filter is adjusted by a newly defined impact factor ρ. By dynamically changing the impact factor ρ, IFCM_GF achieves excellent segmentation results on various noisy images. Furthermore, to improve the segmentation accuracy on images with heavy noise and to simplify the selection of the impact factor ρ, we further propose a morphological reconstruction-based improved FCM clustering algorithm with guided filter (MRIFCM_GF). In this approach, the original noisy image is reconstructed by morphological reconstruction (MR) before clustering, and IFCM_GF is performed on the reconstructed image using the adjusted guidance image. Because MR removes noise efficiently, MRIFCM_GF achieves better segmentation results than IFCM_GF on images with heavy noise, and the selection of the impact factor for MRIFCM_GF is simple. Experiments demonstrate the effectiveness of the presented methods.
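The paper's IFCM_GF and MRIFCM_GF build on the standard fuzzy c-means updates. As a point of reference, here is a minimal sketch of plain FCM on 1-D pixel intensities; the guided-filter step, the morphological reconstruction, and the factor ρ are specific to the paper and are not reproduced, and the function name and parameters are illustrative:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on a 1-D feature array X (e.g. pixel intensities).

    Returns (centers, U) where U[i, k] is the membership of sample k
    in cluster i, with columns of U summing to 1.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                              # memberships sum to 1 per sample
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1)         # fuzzified weighted means
        d = np.abs(X[None, :] - centers[:, None]) + 1e-12
        # membership update: u_ik = d_ik^(-p) / sum_j d_jk^(-p)
        U = (d ** -p) / (d ** -p).sum(axis=0)
    return centers, U
```

On a clean two-mode intensity distribution the centers converge to the two modes, which is the behaviour the guided-filter variants then refine on noisy images.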

2011 ◽  
Vol 90-93 ◽  
pp. 1245-1249 ◽  
Author(s):  
Xiang Rong Yuan

The natural frequencies and mode shapes of continuous beams with uniform cross section and 2, 3 and 4 spans are calculated, and these dynamic parameters are also measured on a 2-span continuous beam model. From the analysis and the model test, the locations of the maximum curvatures of the mode shapes are determined and compared with the locations of the maximum bending moments of the beam under uniformly distributed load; on this basis, the selection of the natural frequency used when calculating the impact factor of the beam according to the General Code for Design of Highway Bridges and Culverts is discussed. The results of the analysis and the test show that, for the impact factor, the fundamental frequency must be used, as specified in the General Code, when calculating the effect of the positive bending moment caused by the impact force, whereas the 2nd or 3rd frequency must be used when calculating the effect of the negative bending moment caused by the impact force. In specific circumstances, the frequency should be selected with the corresponding mode shape taken into account.


2019 ◽  
Vol 20 (2) ◽  
pp. 237-258
Author(s):  
Avinash Kaur ◽  
Pooja Gupta ◽  
Manpreet Singh

Scientific workflow is a composition of both coarse-grained and fine-grained computational tasks with varying execution requirements. Large-scale data transfer is involved in scientific workflows, so efficient techniques are required to reduce the makespan of the workflow. Task clustering is an efficient technique in this scenario: multiple tasks with short execution times are combined into a single cluster to be executed on one resource. This reduces scheduling overheads in scientific workflows and thus improves performance. However, available task clustering methods cluster tasks horizontally without considering the structure of tasks in a workflow. We propose a hybrid balanced task clustering algorithm that uses the impact factor of workflow tasks along with the workflow structure. In this technique, tasks are considered for clustering either vertically or horizontally based on the value of the impact factor, which minimizes system overheads and the makespan of workflow execution. A simulation-based evaluation on real workflows shows that the proposed algorithm recommends clusters efficiently, improving workflow makespan by 5-10% depending on the type of workflow used.
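In the task-clustering literature this abstract builds on, a task's impact factor is commonly computed over the workflow DAG: sink tasks get an impact of 1.0, and each task's impact is the sum of its children's impacts, each divided by that child's number of parents. Tasks with equal impact factors are candidates for clustering together. The authors' exact formula may differ; this is a hedged sketch with illustrative names:

```python
def impact_factors(dag):
    """dag maps each task to its list of children. Returns an impact factor
    per task: sinks get 1.0, and a task's impact is the sum over its children
    of (child's impact / child's number of parents)."""
    parents = {}
    for task, kids in dag.items():
        for k in kids:
            parents.setdefault(k, []).append(task)

    memo = {}
    def impact(task):
        if task not in memo:
            kids = dag.get(task, [])
            memo[task] = 1.0 if not kids else sum(
                impact(k) / len(parents[k]) for k in kids)
        return memo[task]

    return {task: impact(task) for task in dag}
```

On a diamond-shaped workflow (one source fanning out to two parallel tasks that merge into one sink), the two parallel tasks share an impact factor of 0.5, signalling that they belong in the same horizontal cluster, while source and sink both score 1.0.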


Author(s):  
A. Murugan ◽  
D. Gobinath ◽  
S. Ganesh Kumar ◽  
B. Muruganantham ◽  
Sarubala Velusamy

Massive growth in big data makes it difficult to analyse and retrieve useful information from the available data. Statistical analysis: existing approaches cannot guarantee efficient retrieval of data from the database. In the existing work, stratified sampling is used to partition the tables in terms of static variables. However, the k-means clustering algorithm cannot guarantee efficient retrieval, since choosing centroids in a large volume of data is difficult, and limited knowledge of the static variable may lead to less efficient partitioning of the tables. Findings: the proposed methodology overcomes this problem by using FCM clustering instead of k-means, which can cluster large volumes of data that are similar in nature. The stratification problem is overcome by introducing a post-stratification approach, which leads to more efficient selection of the static variable. Improvements: this methodology yields an efficient retrieval process for user queries, with less time and higher accuracy.


2013 ◽  
Vol 54 (2) ◽  
pp. 327-356 ◽  
Author(s):  
Christian Fleck

Abstract. One of the most popular indicators is the Impact Factor. This paper examines the coming into being of this highly influential figure. It is the offspring of Eugene Garfield’s experimentation with the huge amounts of data available at his Institute for Scientific Information and the result of a number of attempts to find appropriate measurements for the success (“impact”) of articles and journals. The completely inductive procedure was initially adjusted by examining the data thoughtfully and by consulting with experts from different scientific disciplines. Later, its calculation modes were imposed on other disciplines without further consideration. The paper demonstrates in detail the inopportune consequences of this, in particular for sociology. Neither the definition of disciplines, nor the selection of journals for the Web of Science/Social Science Citation Index follows any comprehensible rationale. The procedures for calculating the impact factor are inappropriate. Despite its obvious unsuitability, the impact factor is used by editors of sociological journals for marketing and impression management purposes. Fetishism!


Methodology ◽  
2007 ◽  
Vol 3 (1) ◽  
pp. 14-23 ◽  
Author(s):  
Juan Ramon Barrada ◽  
Julio Olea ◽  
Vicente Ponsoda

Abstract. The Sympson-Hetter (1985) method provides a means of controlling the maximum exposure rate of items in Computerized Adaptive Testing. Through a series of simulations, control parameters are set that mark the probability of an item being administered once it is selected. This method presents two main problems: it requires a long computation time for calculating the parameters, and the maximum exposure rate ends up slightly above the fixed limit. Van der Linden (2003) presented two alternatives which appear to solve both problems. The impact of these methods on measurement accuracy had not yet been tested. We show how these methods over-restrict the exposure of some highly discriminating items and thus decrease accuracy. It is also shown that, when the desired maximum exposure rate is near the minimum possible value, these methods yield an empirical maximum exposure rate clearly above the goal. A new method, based on the initial estimation of the probability of administration and the probability of selection of the items with the restricted method (Revuelta & Ponsoda, 1998), is presented in this paper. It can be used with the Sympson-Hetter method and with the two van der Linden methods. This option, when used with Sympson-Hetter, speeds the convergence of the control parameters without decreasing accuracy.
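The core of the Sympson-Hetter method is an exposure-control parameter K_i = min(1, r_max / P(S_i)): an item that would be selected more often than the target rate r_max is, once selected, administered only with probability K_i. The sketch below is simplified in that selection probabilities are held fixed, whereas in real CAT they depend on interim ability estimates, which is why the published method needs iterative simulation to tune the parameters; function names are illustrative:

```python
import numpy as np

def sh_parameters(p_select, r_max=0.2):
    """Exposure-control parameters: K_i = min(1, r_max / P(S_i))."""
    p = np.asarray(p_select, dtype=float)
    return np.minimum(1.0, r_max / np.maximum(p, 1e-12))

def simulate_exposure(p_select, K, n_examinees=200_000, seed=0):
    """Empirical administration rate when each selected item is
    administered with probability K_i (Sympson-Hetter lottery)."""
    rng = np.random.default_rng(seed)
    selected = rng.random((n_examinees, len(p_select))) < np.asarray(p_select)
    administered = selected & (rng.random(selected.shape) < K)
    return administered.mean(axis=0)
```

With p_select = [0.5, 0.1] and r_max = 0.2, the first item gets K ≈ 0.4 and its empirical exposure drops to about 0.2, while the second item is unaffected (K = 1). In the fixed-probability setting this works in one pass; the difficulties the abstract describes arise because real selection probabilities shift as the K_i change.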


2007 ◽  
Vol 148 (4) ◽  
pp. 165-171
Author(s):  
Anna Berhidi ◽  
Edit Csajbók ◽  
Lívia Vasas

Nobody doubts the importance of evaluating scientific performance; at the same time, the way to do so divides the experts. The present study mostly deals with models of citation-analysis-based evaluation. The aim of the authors is to present the background of the best-known tool, the impact factor, since, in the authors' experience, many people use it without knowing it well. In addition to the "non-official impact factor" and the Euro-factor, the most promising index number, the h-index, is presented. Finally, a new initiative, the Index Copernicus Master List, which is suitable for ranking journals, is delineated. Studying the different indexes, the authors make a proposal that complements the long-standing method for evaluating scientific performance.


2018 ◽  
Vol 3 (1) ◽  
pp. 001
Author(s):  
Zulhendra Zulhendra ◽  
Gunadi Widi Nurcahyo ◽  
Julius Santony

This study uses data mining, namely K-means clustering. Data mining can be applied to a sufficiently large dataset to enable Indocomputer to identify and classify service data based on customer complaints, using the Weka software. The K-means clustering algorithm is used to predict and classify complaints about hardware damage at Indocomputer Payakumbuh, and to identify the laptop brands most frequently brought in for service there, as a recommendation to consumers when selecting a laptop.
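For reference, the K-means procedure the study applies (via Weka) alternates two steps: assign each point to its nearest centroid, then recompute each centroid as the mean of its assigned points. A minimal sketch on numeric features follows; encoding the complaint records as numeric vectors is assumed and not shown:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm: returns (centroids, labels) for points X of shape (n, d)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # update step: centroid = mean of assigned points (keep old if cluster empty)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels
```

On well-separated data the labels recover the underlying groups, which is what makes the method usable for grouping service records by complaint type.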

