An Efficient Algorithm for Convex Biclustering

Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3021
Author(s):  
Jie Chen ◽  
Joe Suzuki

We consider biclustering, which clusters both samples and features, and propose efficient convex biclustering procedures. The convex biclustering algorithm (COBRA) solves the standard convex clustering problem, which involves optimizing a non-differentiable function, twice. We instead convert the original optimization problem into a differentiable one and improve another approach based on the augmented Lagrangian method (ALM). Our proposed method combines the basic procedures of the ALM with Nesterov's accelerated gradient method, which attains an O(1/k²) convergence rate. It uses only first-order gradient information, and its efficiency is largely insensitive to the tuning parameter λ. This advantage allows users to iterate quickly over various values of λ and explore the resulting changes in the biclustering solutions. Numerical experiments demonstrate that our proposed method is highly accurate and much faster than currently known algorithms, even for large-scale problems.
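As a point of reference for the acceleration step, here is a minimal sketch of Nesterov's accelerated gradient iteration on a generic smooth convex objective. The objective, its gradient, and the step size 1/L are placeholders, not the paper's actual ALM subproblem.

```python
import numpy as np

def nesterov_agd(grad_f, x0, L, n_iter=500):
    """Nesterov's accelerated gradient method for a smooth convex
    objective with L-Lipschitz gradient; attains an O(1/k^2) rate
    in the objective gap."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(n_iter):
        x_next = y - grad_f(y) / L                        # gradient step at the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Toy usage: minimize f(x) = 0.5 * ||A x - b||^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
L = np.linalg.norm(A.T @ A, 2)  # Lipschitz constant of the gradient
x_star = nesterov_agd(lambda x: A.T @ (A @ x - b), np.zeros(2), L)
```

The momentum extrapolation is what lifts the plain gradient method's O(1/k) rate to the O(1/k²) rate quoted above.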

2018 ◽  
Vol 98 (2) ◽  
pp. 331-338 ◽  
Author(s):  
STEFAN PANIĆ ◽  
MILENA J. PETROVIĆ ◽  
MIROSLAVA MIHAJLOV CAREVIĆ

We improve the convergence properties of the iterative scheme for solving unconstrained optimisation problems introduced in Petrović et al. ['Hybridization of accelerated gradient descent method', Numer. Algorithms (2017), doi:10.1007/s11075-017-0460-4] by optimising the value of the initial step length parameter in the backtracking line search procedure. We prove the validity of the algorithm and illustrate its advantages through numerical experiments and comparisons.
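For context, here is a minimal sketch of the Armijo backtracking rule with the initial step length t0 exposed as an explicit parameter, which is the quantity the paper tunes. Function names and the default constants are illustrative, not the authors'.

```python
def backtracking_step(f, grad, x, d, t0=1.0, beta=0.5, sigma=1e-4):
    """Armijo backtracking line search: start from the initial trial
    step t0 and shrink it by the factor beta until the sufficient
    decrease condition holds. d must be a descent direction."""
    g = grad(x)
    t = t0
    while f(x + t * d) > f(x) + sigma * t * g.dot(d):
        t *= beta
    return t
```

A well-chosen t0 means fewer shrinkage steps per iteration, which is precisely where optimising the initial step length pays off.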


Author(s):  
WANG XIANGDONG ◽  
WANG SHOUJUE

In this paper, we present a neural-network-based manufacturing process control system for semiconductor factories to improve die yield. A model based on neural networks is proposed to simulate the Very Large-Scale Integration (VLSI) manufacturing process. Learning from historical processing records with radial basis function (RBF) networks, we model the functional relationship between the wafer-probing parameters and the die yield. We then use a gradient-descent method to search for a set of 'optimal' parameters that maximize the yield predicted by the model. Finally, we adjust the specifications in the practical semiconductor manufacturing process accordingly. The average die yield increased from 51.7% to 57.5% after the system was applied at Huajing Corporation.
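A toy sketch of the two stages the abstract describes, under assumed shapes: an RBF model maps wafer-probing parameters to predicted yield, and gradient ascent on the model's input searches for high-yield parameter settings. The centers, widths, and weights here stand in for values that would be fitted from the historical processing records.

```python
import numpy as np

def rbf_predict(x, centers, widths, weights):
    """RBF network: yield_hat = sum_j w_j * exp(-||x - c_j||^2 / (2 s_j^2))."""
    phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    return weights @ phi

def rbf_grad_x(x, centers, widths, weights):
    """Gradient of the predicted yield with respect to the process parameters x."""
    phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    return ((weights * phi) / widths ** 2) @ (centers - x)

def search_optimal_params(x0, centers, widths, weights, lr=0.05, n_iter=200):
    """Gradient ascent on the fitted model to maximize predicted yield."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x += lr * rbf_grad_x(x, centers, widths, weights)
    return x
```

The key design point is that the gradient search runs on the cheap surrogate model, not on the physical process, so many candidate parameter settings can be evaluated before any specification is changed on the line.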


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Jinhuan Duan ◽  
Xianxian Li ◽  
Shiqi Gao ◽  
Zili Zhong ◽  
Jinyan Wang

With the vigorous development of artificial intelligence technology, various engineering applications have been deployed one after another. The gradient descent method plays an important role in solving optimization problems because of its simple structure, good stability, and easy implementation. However, in multinode machine learning systems, gradients usually need to be shared, which can leak privacy: attackers can infer training data from the gradient information. In this paper, to prevent gradient leakage while preserving model accuracy, we propose the super stochastic gradient descent approach, which updates parameters by concealing the modulus length of each gradient vector and converting it into a unit vector. Furthermore, we analyze the security of the super stochastic gradient descent approach and demonstrate that our algorithm can defend against attacks on the gradient. Experimental results show that our approach is clearly superior to prevalent gradient descent approaches in terms of accuracy, robustness, and adaptability to large-scale batches. Interestingly, our algorithm can also resist model poisoning attacks to a certain extent.
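A minimal sketch of the concealment idea as stated in the abstract: only the direction of the gradient is shared, so the modulus length never leaves the node. The finer details and security analysis of the authors' full scheme are not reproduced here.

```python
import numpy as np

def unit_gradient(grad, eps=1e-12):
    """Conceal the modulus length of a gradient vector by normalizing
    it to a unit vector before it is shared across nodes."""
    g = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(g)
    return g / norm if norm > eps else g

def super_sgd_step(w, grad, lr=0.01):
    """Parameter update that uses only the shared direction information."""
    return w - lr * unit_gradient(grad)
```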


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Predrag S. Stanimirović ◽  
Gradimir V. Milovanović ◽  
Milena J. Petrović ◽  
Nataša Z. Kontrec

A reduction of the original double step size iteration to a single step length scheme is derived under a proposed condition that relates the two step lengths in the accelerated double step size gradient descent scheme. The proposed transformation is tested numerically. The results confirm substantial progress over the single step size accelerated gradient descent method defined in the classical way with respect to all analyzed characteristics: number of iterations, CPU time, and number of function evaluations. Linear convergence of the derived method is proved.
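For orientation only, here is a heavily hedged sketch of a single step length accelerated gradient iteration of the general kind the paper compares against: a scalar acceleration parameter gamma scales the gradient step and is refreshed from a second-order Taylor estimate. The paper's actual transformed method and its step-relating condition are not reproduced here.

```python
import numpy as np

def accelerated_gd(f, grad, x0, n_iter=100, t0=1.0, beta=0.5, sigma=1e-4):
    """Sketch of an accelerated single step length scheme:
    x_{k+1} = x_k - t_k * g_k / gamma_k, with t_k from backtracking and
    gamma_k updated via a Taylor-based estimate of local curvature."""
    x = np.asarray(x0, dtype=float)
    gamma = 1.0
    for _ in range(n_iter):
        g = grad(x)
        d = -g / gamma
        t = t0
        while f(x + t * d) > f(x) + sigma * t * g.dot(d):  # backtracking
            t *= beta
        x_new = x + t * d
        # Fit gamma so that f(x_new) matches a quadratic model with Hessian gamma*I.
        num = 2.0 * gamma ** 2 * (f(x_new) - f(x) + t * g.dot(g) / gamma)
        den = t ** 2 * g.dot(g)
        gamma = num / den if den > 0 and num > 0 else 1.0  # keep gamma positive
        x = x_new
    return x
```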


Author(s):  
Dian Puspita Hapsari ◽  
Imam Utoyo ◽  
Santi Wulan Purnami

Data classification faces several problems, one of which is that large amounts of data increase computing time. The support vector machine (SVM) is a reliable classifier for linear or non-linear data, but for large-scale data it runs into computational time constraints. The fractional gradient descent method is an unconstrained optimization algorithm for training SVM classifiers, whose objective is convex. Compared with the classic integer-order model, a model built with fractional calculus has a significant advantage in accelerating computation. This research investigates the current state of this new optimization method based on fractional derivatives and how it can be implemented in the classifier algorithm. The SVM classifier with fractional gradient descent optimization reaches its convergence point at approximately 50 iterations, fewer than SVM-SGD requires. Each model update is smaller in the fractional case because the multiplier value is fractional, i.e., less than 1. The SVM-Fractional SGD algorithm proves to be an effective method for rainfall forecasting decisions.
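The 'multiplier less than 1' remark can be made concrete with one common simplified Caputo-type fractional-order update; the exact formulation in the paper is not reproduced here, and the hinge-loss subgradient below is the standard primal linear SVM one.

```python
import math
import numpy as np

def fractional_sgd_step(w, grad, alpha=0.9, lr=0.01):
    """One common simplified fractional-order update:
    w <- w - lr * grad * |w|^(1-alpha) / Gamma(2-alpha).
    For 0 < alpha < 1 the fractional factor rescales the usual SGD step."""
    frac = np.abs(w) ** (1.0 - alpha) / math.gamma(2.0 - alpha)
    return w - lr * grad * frac

def hinge_subgrad(w, X, y, C=1.0):
    """Subgradient of the primal linear SVM objective
    0.5*||w||^2 + C * mean(max(0, 1 - y * (X @ w))), with y in {-1, +1}."""
    margins = 1.0 - y * (X @ w)
    active = (margins > 0).astype(float)  # margin violators
    return w - C * (X.T @ (active * y)) / len(y)
```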


2021 ◽  
Author(s):  
Haimonti Dutta

In the era of big data, an important weapon in a machine learning researcher's arsenal is a scalable support vector machine (SVM) algorithm. Traditional algorithms for learning SVMs scale superlinearly with the training set size, which quickly becomes infeasible for large data sets. In recent years, scalable algorithms have been designed that study the primal or dual formulations of the problem. These often suggest a way to decompose the problem and facilitate the development of distributed algorithms. In this paper, we present a distributed algorithm for learning linear SVMs in the primal form for binary classification, called the gossip-based subgradient (GADGET) SVM. The algorithm is designed so that it can be executed locally on the sites of a distributed system. Each site processes its local, homogeneously partitioned data and learns a primal SVM model; it then gossips with random neighbors about the classifier learnt and uses this information to update the model. To learn the model, the SVM optimization problem is solved using several techniques, including a gradient estimation procedure, the stochastic gradient descent method, and several variants including minibatches of varying sizes. Our theoretical results indicate that the rate at which the GADGET SVM algorithm converges to the global optimum at each site is dominated by an [Formula: see text] term, where λ measures the degree of convexity of the function at the site. Empirical results suggest that this anytime algorithm—where the quality of results improves gradually as computation time increases—has performance comparable to its centralized, pseudodistributed, and other state-of-the-art gossip-based SVM solvers. It is at least 1.5 times (often several orders of magnitude) faster than other gossip-based SVM solvers known in the literature and has a message complexity of O(d) per iteration, where d represents the number of features of the data set. Finally, a large-scale case study is presented wherein the consensus-based SVM algorithm is used to predict failures of advanced mechanical components in a chocolate manufacturing process using more than one million data points. This paper was accepted by J. George Shanthikumar, big data analytics.
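A minimal sketch of the gossip pattern the abstract describes, not GADGET's actual protocol: each site takes a local subgradient step on its own partition, then averages its model with a randomly chosen neighbor. The data layout, regularization, and constants are assumptions.

```python
import numpy as np

def gossip_svm(partitions, labels, n_rounds=200, lr=0.01, lam=0.1, rng=None):
    """Gossip-style distributed linear SVM sketch: partitions[i] and
    labels[i] (labels in {-1, +1}) are site i's local data. Each round,
    every site takes a local hinge-loss subgradient step, then one
    random pair of sites averages their models (the gossip exchange)."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = partitions[0].shape[1]
    models = [np.zeros(d) for _ in partitions]
    for _ in range(n_rounds):
        for i, (X, y) in enumerate(zip(partitions, labels)):
            w = models[i]
            active = ((1.0 - y * (X @ w)) > 0).astype(float)   # margin violators
            grad = lam * w - (X.T @ (active * y)) / len(y)
            models[i] = w - lr * grad                          # local subgradient step
        i, j = rng.choice(len(models), size=2, replace=False)
        avg = 0.5 * (models[i] + models[j])                    # pairwise gossip average
        models[i] = avg.copy()
        models[j] = avg.copy()
    return models
```

Exchanging only the d-dimensional model in each gossip round is consistent with the O(d) per-iteration message complexity quoted above.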


2017 ◽  
Vol 79 (3) ◽  
pp. 769-786 ◽  
Author(s):  
Milena Petrović ◽  
Vladimir Rakočević ◽  
Nataša Kontrec ◽  
Stefan Panić ◽  
Dejan Ilić
