AN UNSUPERVISED COLOR-TEXTURE SEGMENTATION USING TWO-STAGE FUZZY c-MEANS ALGORITHM

Author(s):  
SHAOPING XU ◽  
LINGYAN HU ◽  
CHUNQUAN LI ◽  
XIAOHUI YANG ◽  
XIAOPING P. LIU

Unsupervised image segmentation is a fundamental but challenging problem in computer vision. In this paper, we propose a novel unsupervised segmentation algorithm with diverse potential applications in pattern recognition, particularly in computer vision. The algorithm, named the Two-stage Fuzzy c-means Hybrid Approach (TFHA), adaptively clusters image pixels according to their multichannel Gabor responses taken at multiple scales and orientations. In the first stage, the fuzzy c-means (FCM) algorithm is applied to estimate the number of centroids and initialize the cluster centroids, which endows the segmentation algorithm with adaptivity. To improve efficiency, we extract the Gray Level Co-occurrence Matrix (GLCM) feature at the hyperpixel level rather than the pixel level to estimate the centroid number and the hyperpixel-cluster memberships; these serve as initialization parameters for the subsequent main clustering stage, reducing the computational cost while keeping segmentation accuracy close to that of pixel-level estimation. In the second stage, the FCM algorithm is applied again, this time at the pixel level, to improve the compactness of the clusters that form the final homogeneous regions. Extensive experiments show that, compared with state-of-the-art segmentation methods recently proposed in the literature, the proposed algorithm reduces execution time while improving the quality of the segmentation results.
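Both stages of TFHA rely on the standard FCM update, alternating between membership and centroid estimates. As a rough, self-contained sketch of that core update in plain NumPy (parameter names and defaults are illustrative, not the authors' implementation):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: alternate membership and centroid updates.

    X: (n, d) feature matrix; c: number of clusters; m: fuzzifier (> 1).
    Returns (centroids, U) where U[i, k] is point i's membership in cluster k.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        # Centroids are membership-weighted means of the data.
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)), then normalize.
        U = 1.0 / (d ** (2.0 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centroids, U
```

Taking the argmax over memberships yields a hard segmentation; in the paper's setting each row of `X` would be a Gabor (stage 2) or GLCM (stage 1) feature vector.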

2011 ◽  
Vol 421 ◽  
pp. 465-469
Author(s):  
Yan Ling Li ◽  
Gang Li

The fuzzy c-means (FCM) algorithm is one of the most popular methods for image segmentation, but it is in essence a local search for an optimal solution: because its initial cluster centers are selected at random, its results depend excessively on that initialization. For this reason, an FCM cluster segmentation algorithm based on bacterial colony chemotaxis (BCC) is proposed in this paper. First, the initial cluster centers of the FCM algorithm are obtained by the BCC algorithm; then the images are segmented using FCM. Experimental results show that the proposed algorithm segments images more effectively and provides more robust segmentation results.
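The abstract does not give the BCC update rules, so as a hedged illustration of the general idea only (seed FCM with centers found by a chemotaxis-like random walk that accepts a move only when it lowers the FCM objective), one might sketch:

```python
import numpy as np

def fcm_objective(X, centers, m=2.0):
    """FCM cost J for given centers, using the optimal soft memberships."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
    U = 1.0 / (d2 ** (1.0 / (m - 1)))
    U /= U.sum(axis=1, keepdims=True)
    return float(((U ** m) * d2).sum())

def chemotaxis_init(X, c, steps=200, sigma=1.0, seed=0):
    """Random-walk search over center sets, keeping only improving moves.

    A stand-in for BCC seeding: perturb the centers (a 'swim'), keep the
    move if the FCM objective drops, otherwise stay put (a 'tumble').
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)].astype(float)
    best = fcm_objective(X, centers)
    for _ in range(steps):
        trial = centers + rng.normal(0.0, sigma, centers.shape)
        j = fcm_objective(X, trial)
        if j < best:
            centers, best = trial, j
    return centers
```

The returned centers would then replace FCM's random initialization; the actual BCC algorithm uses a population of "bacteria" rather than a single walker.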


Energies ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 2181
Author(s):  
Rafik Nafkha ◽  
Tomasz Ząbkowski ◽  
Krzysztof Gajowniczek

The electricity tariffs available to customers in Poland depend on the connection voltage level and the contracted capacity, which reflect the customer's demand profile. Therefore, before connecting to the power grid, each consumer declares a maximum power demand. This amount, referred to as the contracted capacity, is used by the electricity provider to assign the proper connection type to the power grid, including the size of the security breaker. The maximum power is also the basis for calculating fixed charges for electricity consumption, which is controlled and metered through peak meters. If the peak demand exceeds the contracted capacity, a penalty charge of up to ten times the basic rate is applied to the excess. In this article, we present several solutions for entrepreneurs based on two-stage and deep learning approaches to predict maximum load values and the moments when the contracted capacity will be exceeded in the short term, i.e., up to one month ahead. The forecast is then used to optimize the capacity volume to be contracted in the following month so as to minimize the network charge for exceeding the contracted level. As confirmed experimentally on two datasets, applying a multiple-output artificial neural network forecast model together with a genetic algorithm (the two-stage approach) for load optimization delivers significant benefits to customers. As an alternative, the same benefit is delivered by a deep learning architecture (the hybrid approach) that predicts the maximum capacity demands and simultaneously determines the optimal capacity contract.
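The cost structure being optimized (a fixed fee on the contracted capacity plus a roughly tenfold rate on exceedances) can be illustrated with a toy model; the rates and candidate grid below are hypothetical placeholders, not the tariff values or the optimizer from the article:

```python
def network_charge(peaks, contracted, base_rate=1.0, penalty_mult=10.0):
    """Monthly charge: fixed fee on the contracted kW plus a penalty on
    every kW by which a metered peak exceeds the contracted capacity."""
    fixed = contracted * base_rate
    excess = sum(max(p - contracted, 0.0) for p in peaks)
    return fixed + penalty_mult * base_rate * excess

def best_contract(peak_forecast, candidates, **kw):
    """Pick the candidate capacity minimizing the charge implied by the
    forecast peaks (the article uses a genetic algorithm for this step)."""
    return min(candidates, key=lambda c: network_charge(peak_forecast, c, **kw))
```

With forecast peaks of 100, 120 and 90 kW, contracting below 120 kW incurs penalties that dwarf the fixed-fee savings, so the search settles on 120 kW.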


Atmosphere ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 64
Author(s):  
Feng Jiang ◽  
Yaqian Qiao ◽  
Xuchu Jiang ◽  
Tianhai Tian

The randomness, nonstationarity and irregularity of air pollutant data make forecasting difficult. To improve forecast accuracy, we propose a novel hybrid approach based on two-stage decomposition with embedded sample entropy, the group teaching optimization algorithm (GTOA), and the extreme learning machine (ELM) to forecast particulate matter concentrations (PM10 and PM2.5). First, the improved complementary ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) is employed to decompose the PM10 and PM2.5 concentration data into a set of intrinsic mode functions (IMFs) with different frequencies. The wavelet transform (WT) is then used to further decompose the high-frequency IMFs, identified by their sample entropy values. Next, the GTOA is used to optimize the ELM, and the resulting GTOA-ELM model predicts each subseries; the final forecast is obtained by ensembling the forecasts of all subseries. To further demonstrate the predictive performance of the hybrid approach on air pollutants, the hourly PM2.5 and PM10 concentration data are used to make one-step-, two-step- and three-step-ahead predictions. The empirical results demonstrate that the hybrid ICEEMDAN-WT-GTOA-ELM approach has superior forecasting performance and stability over other methods. This novel method thus provides an effective and efficient approach to forecasting nonlinear, nonstationary and irregular data.
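Of the pipeline's components, the ELM is the simplest to sketch: a random, untrained hidden layer followed by a linear least-squares solve for the output weights. A minimal NumPy version (hyperparameters are illustrative, and the GTOA optimization of the hidden weights is omitted):

```python
import numpy as np

def elm_fit(X, y, hidden=100, seed=0):
    """Extreme learning machine: freeze random hidden weights, then solve
    the output weights in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))   # random input-to-hidden weights
    b = rng.normal(size=hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

In the paper's setting, one such model would be fit per decomposed subseries (IMF or wavelet band), with GTOA tuning the otherwise-random hidden parameters, and the per-subseries forecasts summed into the final prediction.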


Author(s):  
Wei Zhang ◽  
Saad Ahmed ◽  
Jonathan Hong ◽  
Zoubeida Ounaies ◽  
Mary Frecker

Different types of active materials have been used to actuate origami-inspired self-folding structures. To model the highly nonlinear deformation and material responses, as well as the coupled field equations and boundary conditions of such structures, high-fidelity models such as finite element (FE) models are needed but are usually computationally expensive, which makes optimization intractable. In this paper, a computationally efficient two-stage optimization framework is developed as a systematic method for the multi-objective design of such multifield self-folding structures, where the deformations are concentrated in crease-like areas, active and passive materials are assumed to behave linearly, and low- and high-fidelity models of the structures can be developed. In Stage 1, low-fidelity models are used to determine the topology of the structure. At the end of Stage 1, a distance measure [Formula: see text] is applied as the metric to select the best design, which then serves as the baseline design in Stage 2. In Stage 2, designs are further optimized from the baseline design with greatly reduced computing time compared to a full FEA-based topology optimization. The design framework is first described in a general formulation. To demonstrate its efficacy, the framework is implemented in two case studies: a three-finger soft gripper actuated by a PVDF-based terpolymer, and a 3D multifield example actuated by both the terpolymer and a magneto-active elastomer (MAE). The key steps are elaborated in detail, including the variable filter, the metrics used to select the best design, the determination of the design domains, and the material conversion methods from low- to high-fidelity models. Analytical models and rigid-body dynamic models are developed as the low-fidelity models for the terpolymer- and MAE-based actuations, respectively, and the FE model of the MAE-based actuation is generalized from previous work. Additional generalizable techniques to further reduce the computational cost are elaborated. As a result, designs with better overall performance than the baseline were achieved at the end of Stage 2, with computing times of 15 days for the gripper and 9 days for the multifield example, versus over 3 and 2 months, respectively, for full FEA-based optimizations. Tradeoffs between the competing design objectives were achieved. In both case studies, the efficacy and computational efficiency of the two-stage optimization framework are successfully demonstrated.
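Stripped of the mechanics, the two-stage idea (a coarse scan with a cheap low-fidelity model to fix a baseline, then local refinement of that baseline with the expensive model) can be sketched on a one-dimensional toy problem; the function names and grid sizes below are assumptions for illustration, not the paper's framework:

```python
import numpy as np

def two_stage_minimize(cheap_f, expensive_f, bounds, coarse=21, fine=21, width=0.1):
    """Stage 1: scan the cheap (low-fidelity) model on a coarse grid over the
    whole design space. Stage 2: refine the resulting baseline with the
    expensive (high-fidelity) model on a narrow local grid around it."""
    lo, hi = bounds
    grid1 = np.linspace(lo, hi, coarse)
    x0 = grid1[np.argmin([cheap_f(x) for x in grid1])]   # Stage-1 baseline design
    half = width * (hi - lo)                             # local search radius
    grid2 = np.linspace(max(lo, x0 - half), min(hi, x0 + half), fine)
    return min(grid2, key=expensive_f)
```

The payoff is the same as in the paper: the expensive model is evaluated only `fine` times in a small neighborhood, instead of everywhere, at the cost of trusting the cheap model to land the baseline near the true optimum.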


Author(s):  
Weilin Nie ◽  
Cheng Wang

Online learning is a classical algorithm for optimization problems. Due to its low computational cost, it has been widely used in many areas of machine learning and statistical learning. Its convergence performance depends heavily on the step size. In this paper, a two-stage step size is proposed for the unregularized online learning algorithm based on reproducing kernels. Theoretically, we prove that such an algorithm can achieve a nearly minimax convergence rate, up to a logarithmic term, without any capacity condition.
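A two-stage step size of the flavor discussed (constant over an initial phase, then polynomially decaying) can be illustrated with plain online gradient descent on the squared loss; the schedule constants are illustrative and the reproducing-kernel machinery of the paper is omitted:

```python
import numpy as np

def online_ls(stream, dim, eta0=0.5, switch=100, decay=0.5):
    """Online least squares with a two-stage step size: constant eta0 for
    the first `switch` rounds, then eta0 * (t / switch)^(-decay) after."""
    w = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):
        eta = eta0 if t <= switch else eta0 * (t / switch) ** (-decay)
        w -= eta * (w @ x - y) * x          # stochastic gradient of (w.x - y)^2 / 2
    return w
```

The intuition the schedule captures: a large constant step early on moves the iterate quickly toward the target, while the decaying tail averages out per-round fluctuations; the paper's contribution is proving this achieves a near-minimax rate in the kernel setting.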


2018 ◽  
Vol 2018 ◽  
pp. 1-12
Author(s):  
Yun-Hua Wu ◽  
Lin-Lin Ge ◽  
Feng Wang ◽  
Bing Hua ◽  
Zhi-Ming Chen ◽  
...  

To satisfy the real-time requirements of spacecraft autonomous navigation using natural landmarks, a novel algorithm called CSA-SURF (chessboard segmentation algorithm and speeded-up robust features) is proposed to accelerate image registration without loss of repeatability. It is a combination of the chessboard segmentation algorithm (CSA) and SURF. Here, SURF is used to extract features from satellite images because of its scale- and rotation-invariant properties and low computational cost. CSA is based on image segmentation and aims to find representative blocks, which are allocated to different tasks to speed up the image registration process. To illustrate the advantages of the proposed algorithm, PCA-SURF, the combination of principal component analysis and SURF, is also analyzed in this paper for comparison. Furthermore, the random sample consensus (RANSAC) algorithm is applied to eliminate false matches for further accuracy improvement. The simulation results show that the proposed strategy performs well, especially under scaling and rotation variation. Moreover, compared with the SURF algorithm, CSA-SURF reduces extraction time by 50% and matching time by 90% without losing repeatability. The proposed method is thus demonstrated as an alternative for image registration in spacecraft autonomous navigation using natural landmarks.
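The abstract does not spell out how CSA selects its representative blocks, so the following is only a hedged sketch of one plausible criterion: tile the image like a chessboard and rank tiles by intensity variance, so that textured tiles (the likeliest to yield SURF matches) are dispatched to feature-extraction tasks first.

```python
import numpy as np

def chessboard_blocks(img, bs):
    """Split a grayscale image into bs x bs tiles and score each by
    intensity variance; returns (top-left corner, score) pairs, most
    textured first. Flat tiles score zero and can be skipped entirely."""
    h, w = img.shape
    tiles = []
    for r in range(0, h - bs + 1, bs):
        for c in range(0, w - bs + 1, bs):
            tile = img[r:r + bs, c:c + bs]
            tiles.append(((r, c), float(tile.var())))
    return sorted(tiles, key=lambda t: -t[1])
```

In a full pipeline, only the top-ranked tiles would be handed to SURF (possibly in parallel), which is consistent with the reported savings in extraction and matching time.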

