Correction to: Using Positive Spanning Sets to Achieve d-Stationarity with the Boosted DC Algorithm

2020 ◽  
Vol 48 (2) ◽  
pp. 377-377
Author(s):  
F. J. Aragón Artacho ◽  
R. Campoy ◽  
P. T. Vuong
Author(s):  
Tao Pham Dinh ◽  
Van Ngai Huynh ◽  
Hoai An Le Thi ◽  
Vinh Thanh Ho

2020 ◽  
Vol 32 (4) ◽  
pp. 759-793 ◽  
Author(s):  
Hoai An Le Thi ◽  
Vinh Thanh Ho

We investigate an approach based on DC (Difference of Convex functions) programming and DCA (DC Algorithm) for online learning techniques. The prediction problem of an online learner can be formulated as a DC program to which online DCA is applied. We propose two versions of the online DCA scheme, a complete one and an approximate one, and prove their logarithmic and sublinear regret bounds, respectively. Six online DCA-based algorithms are developed for online binary linear classification. Numerical experiments on a variety of benchmark classification data sets show the efficiency of the proposed algorithms in comparison with state-of-the-art online classification algorithms.
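As a rough illustration of the DC structure behind one online DCA step for binary linear classification, the Python sketch below uses the ramp loss, a standard DC decomposition of a bounded margin loss. The loss choice, step size, and regularizer are assumptions for this sketch, not the authors' scheme.

```python
import numpy as np

def online_dca_ramp_step(w, x, y, lam=0.1, lr=0.5):
    """One illustrative online DCA step for binary linear classification.

    The ramp loss  r(z) = max(0, 1 - z) - max(0, -z),  z = y * <w, x>,
    is a DC function g - h with g(z) = max(0, 1 - z) and h(z) = max(0, -z).
    Online DCA linearizes the concave part -h at the current iterate and
    takes a step on the resulting convex upper model (here, a single
    subgradient step with an l2 regularizer).  Loss, step size `lr`, and
    regularization `lam` are assumptions, not the authors' exact scheme.
    """
    z = y * np.dot(w, x)
    # Subgradient of h(z) = max(0, -z) with respect to w.
    grad_h = -y * x if z < 0 else np.zeros_like(w)
    # Subgradient of g(z) = max(0, 1 - z) with respect to w.
    grad_g = -y * x if z < 1 else np.zeros_like(w)
    # Convex subproblem step: grad g minus the linearized grad h, plus regularizer.
    grad = grad_g - grad_h + lam * w
    return w - lr * grad

# Toy usage: a stream of (x, y) pairs with y in {-1, +1}.
rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(200):
    x = rng.normal(size=2)
    y = 1.0 if x[0] + x[1] > 0 else -1.0
    w = online_dca_ramp_step(w, x, y)
print("learned weights:", w)
```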


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Peixin Zhang ◽  
Jianxin Wang ◽  
Peng Ren ◽  
Shushu Yang ◽  
Haiwei Song

To detect terrestrial application-specific messages (ASM-TER) signals from a satellite, a novel detection method based on fast computation of the cross ambiguity function is proposed in this paper. Because the computational burden of the classic cross ambiguity function is heavy, we transform it into a frequency-domain version, using Parseval's theorem, to reduce the computational complexity. The computationally efficient sliding discrete Fourier transform (SDFT) is used to calculate the frequency spectrum of the windowed received signal, from which the Doppler frequency can be coarsely estimated. Only the subbands around this Doppler frequency are selected when calculating the ambiguity function, which reduces the computational complexity. Furthermore, two local sequences, each half the length of the training sequence, are used to obtain a better Doppler frequency tolerance; the frequency search step can therefore be enlarged and the computational complexity reduced further. Once an ASM-TER signal is detected by the proposed algorithm, a fine Doppler frequency estimate is easily obtained from the correlation peaks of the two local sequences. Simulation results show that the proposed algorithm achieves almost the same performance as the classic cross-ambiguity-function-based method while greatly reducing the computational complexity. Simulation results also show that the proposed algorithm is more resistant to cochannel interference (CCI) than the differential correlation (DC) algorithm, and that the performance of the fine Doppler frequency estimation is close to the Cramér–Rao lower bound (CRLB).
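For readers unfamiliar with the sliding DFT building block mentioned above, here is a minimal sketch of the standard SDFT recursion, which updates a single DFT bin of a sliding window in O(1) per sample. The window length, bin index, and test signal are illustrative and unrelated to the ASM-TER parameters.

```python
import numpy as np

def sliding_dft(x, N, k):
    """Illustrative sliding DFT (SDFT) of bin k over a length-N window.

    The textbook SDFT recursion obtains the DFT bin of the window ending
    at sample n from the bin of the previous window:
        X_k(n) = (X_k(n-1) + x[n] - x[n-N]) * exp(j*2*pi*k/N)
    This is a generic sketch of the building block named in the abstract.
    """
    twiddle = np.exp(1j * 2 * np.pi * k / N)
    X = np.fft.fft(x[:N])[k]          # initialize with a direct DFT of the first window
    out = [X]
    for n in range(N, len(x)):
        X = (X + x[n] - x[n - N]) * twiddle
        out.append(X)
    return np.array(out)

# Sanity check against direct DFTs of each window.
rng = np.random.default_rng(1)
x = rng.normal(size=64) + 1j * rng.normal(size=64)
N, k = 16, 3
ref = np.array([np.fft.fft(x[m:m + N])[k] for m in range(len(x) - N + 1)])
assert np.allclose(sliding_dft(x, N, k), ref)
```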


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Abdellatif Moudafi

The focus of this paper is the Q-Lasso introduced in Alghamdi et al. (2013), which extended the Lasso of Tibshirani (1996). The closed convex subset Q of a Euclidean m-space, for m ∈ ℕ, is the set of errors when linear measurements are taken to recover a signal/image via the Lasso. Based on a recent work by Wang (2013), we are interested in two new penalty methods for Q-Lasso relying on two types of difference-of-convex-functions (DC for short) programming, in which the DC objective functions are the difference of the l1 and lσq norms and the difference of the l1 and lr norms with r > 1. By means of a generalized q-term shrinkage operator that exploits the special structure of the lσq norm, we design a proximal gradient algorithm for the DC l1−lσq model. Then, based on a majorization scheme, we develop a majorized penalty algorithm for the DC l1−lr model. Convergence results for the new algorithms are presented as well. We emphasize that extensive simulation results in the case Q = {b} show that these two new algorithms offer improved signal recovery performance and require reduced computational effort relative to state-of-the-art l1 and lp (p ∈ (0, 1)) models; see Wang (2013). We also devise two DC algorithms in the spirit of a paper in which an exact DC representation of the cardinality constraint is investigated and which also used the largest-q norm lσq, and we present numerical results that show the efficiency of our DC algorithm in comparison with methods using other penalty terms in the context of quadratic programming; see Jun-ya et al. (2017).
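A minimal sketch of one common form of a q-term shrinkage operator for the l1−lσq penalty follows. The exact operator in the paper may differ, so the form used here (soft-threshold everything except the q largest-magnitude entries) should be read as an assumption.

```python
import numpy as np

def largest_q_norm(x, q):
    """The largest-q norm lσq: the sum of the q largest magnitudes of x."""
    return np.sort(np.abs(x))[::-1][:q].sum()

def q_term_shrinkage(v, lam, q):
    """A hedged sketch of a q-term shrinkage operator for the DC l1 - lσq penalty.

    One DCA/proximal-gradient view of  min_x f(x) + lam*(||x||_1 - |||x|||_σq):
    linearizing the convex term |||.|||_σq contributes +lam*sign on the q
    largest-magnitude entries, which cancels the l1 soft-threshold there.
    The net operator below soft-thresholds all entries except the q largest
    in magnitude, which are left unshrunk.  This is illustrative, not
    necessarily the exact operator used in the paper.
    """
    out = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)   # standard soft-thresholding
    keep = np.argsort(np.abs(v))[::-1][:q]                # indices of the q largest entries
    out[keep] = v[keep]                                   # leave them unshrunk
    return out

v = np.array([3.0, -0.2, 1.5, 0.05, -2.4])
print(q_term_shrinkage(v, lam=0.5, q=2))   # the two largest entries pass through unshrunk
```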


2020 ◽  
Vol 34 (06) ◽  
pp. 9851-9858
Author(s):  
Michael Gao ◽  
Lindsay Popowski ◽  
Jim Boerkoel

The controllability of a temporal network is defined as an agent's ability to navigate around the uncertainty in its schedule and is well-studied for certain networks of temporal constraints. However, many interesting real-world problems can be better represented as Probabilistic Simple Temporal Networks (PSTNs) in which the uncertain durations are represented using potentially-unbounded probability density functions. This can make it inherently impossible to control for all eventualities. In this paper, we propose two new dynamic controllability algorithms that attempt to maximize the likelihood of successfully executing a schedule within a PSTN. The first approach, which we call Min-Loss DC, finds a dynamic scheduling strategy that minimizes loss of control by using a conflict-directed search to decide where to sacrifice the control in a way that optimizes overall success. The second approach, which we call Max-Gain DC, works in the other direction: it finds a dynamically controllable schedule and then attempts to progressively strengthen it by capturing additional uncertainty. Our approaches are the first known that work by finding maximally dynamically controllable schedules. We empirically compare our approaches against two existing PSTN offline dispatch approaches and one online approach and show that our Min-Loss DC algorithm outperforms the others in terms of maximizing execution success while maintaining competitive runtimes.
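As a back-of-the-envelope illustration of the trade-off the paper optimizes, the sketch below computes how much probability mass is sacrificed when an unbounded contingent duration (Gaussian here, by assumption) is truncated to a finite interval so that classical controllability reasoning can be applied. It is not the Min-Loss DC or Max-Gain DC algorithm itself.

```python
from math import erf, sqrt

def truncation_risk(mu, sigma, lo, hi):
    """Probability mass of a Normal(mu, sigma) duration falling outside [lo, hi].

    Bounding an unbounded probabilistic duration to a finite interval (so that
    standard controllability checking applies) 'loses' exactly this much
    probability mass; the Gaussian model and the interval are assumptions
    made only for illustration.
    """
    cdf = lambda z: 0.5 * (1.0 + erf((z - mu) / (sigma * sqrt(2.0))))
    return 1.0 - (cdf(hi) - cdf(lo))

# Example: a travel time ~ N(30, 5) minutes squeezed into [20, 40] minutes.
print(f"risk = {truncation_risk(30.0, 5.0, 20.0, 40.0):.4f}")  # about 0.0455
```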


Survey Review ◽  
2014 ◽  
Vol 46 (339) ◽  
pp. 426-431 ◽  
Author(s):  
J. M. Guo ◽  
M. D. Zhou ◽  
J. B. Shi ◽  
C. J. Huang

Algorithms ◽  
2019 ◽  
Vol 12 (12) ◽  
pp. 249 ◽  
Author(s):  
Annabella Astorino ◽  
Antonio Fuduli ◽  
Giovanni Giallombardo ◽  
Giovanna Miglionico

A multiple instance learning problem consists of categorizing objects, each represented as a set (bag) of points. Unlike the supervised classification paradigm, where each point of the training set is labeled, the labels are only associated with bags, while the labels of the points inside the bags are unknown. We focus on the binary classification case, where the objective is to discriminate between positive and negative bags using a separating surface. Adopting a support vector machine setting at the training level, the problem of minimizing the classification-error function can be formulated as a nonconvex nonsmooth unconstrained program. We propose a difference-of-convex (DC) decomposition of the nonconvex function, which we tackle using an appropriate nonsmooth DC algorithm. Numerical results on benchmark data sets are reported.
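The sketch below shows one textbook DC decomposition of the multiple-instance classification error under the usual rule that a bag is positive iff at least one of its instances lies on the positive side of the hyperplane. The decomposition and the toy data are illustrative and not necessarily those used by the authors.

```python
import numpy as np

def mil_dc_objective(w, b, bags, labels, C=1.0):
    """A hedged sketch of a DC decomposition of the MIL classification error.

    The error of a positive bag,
        max(0, 1 - max_j(<w, x_j> + b)),
    is nonconvex but equals  max(1, max_j(<w, x_j> + b)) - max_j(<w, x_j> + b),
    a difference of two convex functions; negative-bag errors are already
    convex.  Returns (g, h) with objective = g - h.
    """
    g = 0.5 * np.dot(w, w)     # convex part starts with the SVM regularizer
    h = 0.0                    # concave part (subtracted)
    for X, y in zip(bags, labels):          # X: (n_instances, n_features)
        scores = X @ w + b
        m = scores.max()
        if y > 0:
            g += C * max(1.0, m)
            h += C * m
        else:
            g += C * max(0.0, 1.0 + m)
    return g, h

# Toy usage: two bags in the plane, one positive and one negative.
bags = [np.array([[2.0, 0.5], [0.1, 0.2]]), np.array([[-1.0, -0.3]])]
labels = [+1, -1]
g, h = mil_dc_objective(np.array([1.0, 0.0]), 0.0, bags, labels)
print("objective =", g - h)
```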


Author(s):  
Yoshifumi Kusunoki ◽  
Chiharu Wakou ◽  
Keiji Tatsumi

In this paper, we study nearest prototype classifiers, which classify data instances into the class of their nearest prototype. We propose a maximum-margin model for nearest prototype classifiers. To define the margin, we introduce a class-wise discriminant function for each instance, given by the negative of the distance to its nearest prototype of that class. The margin of an instance is then the difference between the discriminant function value for the class it belongs to and the largest value among the other classes, and the margin of the classifier is the minimum of these values over the instances. The optimization problem corresponding to the maximum-margin model is a difference-of-convex-functions (DC) program. It is solved using a DC algorithm that is a k-means-like algorithm: the memberships and positions of the prototypes are optimized alternately. Through a numerical study, we analyze the effects of the hyperparameters of the maximum-margin model, focusing in particular on classification performance.
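The class-wise discriminant and instance margin described above can be written down directly. The sketch below is a toy illustration (the prototypes and data point are made up), not the paper's optimization procedure.

```python
import numpy as np

def class_discriminant(x, prototypes):
    """Class-wise discriminant: the negative distance from x to the nearest
    prototype of each class (larger is better), as described in the abstract."""
    return {c: -min(np.linalg.norm(x - p) for p in P) for c, P in prototypes.items()}

def instance_margin(x, y, prototypes):
    """Margin of instance (x, y): discriminant of its own class minus the best
    discriminant over the other classes.  The classifier margin in the paper is
    the minimum of this quantity over the training set."""
    d = class_discriminant(x, prototypes)
    return d[y] - max(v for c, v in d.items() if c != y)

# Toy usage: two classes with two prototypes each.
prototypes = {
    0: [np.array([0.0, 0.0]), np.array([1.0, 0.0])],
    1: [np.array([3.0, 3.0]), np.array([4.0, 3.0])],
}
x, y = np.array([0.5, 0.5]), 0
d = class_discriminant(x, prototypes)
print("predicted:", max(d, key=d.get))
print("margin   :", instance_margin(x, y, prototypes))
```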

