Scalable Algorithms for Large and Dynamic Networks: Reducing Big Data for Small Computations

2015 ◽  
Vol 20 ◽  
pp. 23-33 ◽  
Author(s):  
Iraj Saniee

2021 ◽  
Author(s):  
Haimonti Dutta

In the era of big data, an important weapon in a machine learning researcher's arsenal is a scalable support vector machine (SVM) algorithm. Traditional algorithms for learning SVMs scale superlinearly with the training set size, which quickly becomes infeasible for large data sets. In recent years, scalable algorithms have been designed that work with the primal or dual formulations of the problem. These often suggest a way to decompose the problem and facilitate the development of distributed algorithms. In this paper, we present a distributed algorithm for learning linear SVMs in the primal form for binary classification, called the gossip-based subgradient (GADGET) SVM. The algorithm is designed so that it can be executed locally on the sites of a distributed system. Each site processes its local, homogeneously partitioned data and learns a primal SVM model; it then gossips with random neighbors about the classifier learnt and uses this information to update its model. To learn the model, the SVM optimization problem is solved using several techniques, including a gradient estimation procedure, the stochastic gradient descent method, and variants with minibatches of varying sizes. Our theoretical results indicate that the rate at which the GADGET SVM algorithm converges to the global optimum at each site is dominated by an [Formula: see text] term, where λ measures the degree of convexity of the function at the site. Empirical results suggest that this anytime algorithm, whose results improve gradually as computation time increases, performs comparably to centralized, pseudodistributed, and other state-of-the-art gossip-based SVM solvers. It is at least 1.5 times (and often several orders of magnitude) faster than other gossip-based SVM solvers known in the literature, and it has a message complexity of O(d) per iteration, where d is the number of features in the data set. Finally, a large-scale case study is presented in which the consensus-based SVM algorithm is used to predict failures of advanced mechanical components in a chocolate manufacturing process from more than one million data points. This paper was accepted by J. George Shanthikumar, big data analytics.
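The abstract describes local primal SVM updates interleaved with gossip averaging over a network of sites. The following is a minimal sketch of that pattern, assuming a Pegasos-style hinge-loss subgradient step and pairwise weight averaging; the function names, neighbor structure, and step-size schedule are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a gossip-based primal SVM update in the spirit of the
# algorithm described above. Names, the neighbor structure, and the step-size
# schedule are illustrative assumptions, not the authors' implementation.
import numpy as np

def local_subgradient_step(w, X, y, lam, t, batch_size=8, rng=None):
    """One Pegasos-style stochastic subgradient step on a site's local data.

    Minimizes (lam/2)*||w||^2 plus hinge loss on a random minibatch;
    labels y are assumed to be in {-1, +1}.
    """
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(y), size=min(batch_size, len(y)), replace=False)
    Xb, yb = X[idx], y[idx]
    eta = 1.0 / (lam * t)                  # decaying step size (assumed schedule)
    margins = yb * (Xb @ w)
    viol = margins < 1                     # minibatch points violating the margin
    grad = lam * w
    if viol.any():
        grad = grad - (yb[viol, None] * Xb[viol]).sum(axis=0) / len(idx)
    return w - eta * grad

def gossip_round(weights, neighbors, rng=None):
    """Each site averages its weight vector with one randomly chosen neighbor.

    The only message exchanged is the d-dimensional weight vector, i.e. O(d).
    """
    rng = rng or np.random.default_rng()
    new_w = [w.copy() for w in weights]
    for i, nbrs in enumerate(neighbors):
        j = rng.choice(nbrs)
        avg = 0.5 * (new_w[i] + new_w[j])
        new_w[i], new_w[j] = avg, avg.copy()
    return new_w
```

In this sketch, each site would interleave calls to local_subgradient_step on its own data partition with gossip_round exchanges; the per-iteration message is just the weight vector sent to the chosen neighbor, consistent with the O(d) message complexity noted in the abstract.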


ASHA Leader ◽  
2013 ◽  
Vol 18 (2) ◽  
pp. 59-59

Find Out About 'Big Data' to Track Outcomes


2014 ◽  
Vol 35 (3) ◽  
pp. 158-165 ◽  
Author(s):  
Christian Montag ◽  
Konrad Błaszkiewicz ◽  
Bernd Lachmann ◽  
Ionut Andone ◽  
Rayna Sariyska ◽  
...  

In the present study, we link self-report data on personality to behavior recorded on the mobile phone. This new approach from Psychoinformatics collects data from humans in everyday life and demonstrates the fruitful collaboration between psychology and computer science, combining Big Data with psychological variables. Given the large number of variables that can be tracked on a smartphone, the present study focuses on the traditional features of mobile phones – namely incoming and outgoing calls and SMS. We observed N = 49 participants with respect to their telephone/SMS usage for 5 weeks via our custom-developed mobile phone app. Extraversion was positively associated with nearly all related telephone call variables; in particular, Extraverts directly reach out to their social network via voice calls.
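As a rough illustration of the kind of analysis reported here, the sketch below correlates a self-reported Extraversion score with phone-logged call counts; the column names and values are hypothetical assumptions, not the study's data.

```python
# Illustrative only: correlating a self-reported Extraversion score with
# phone-logged call counts. Column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "extraversion": [3.2, 4.1, 2.8, 4.6, 3.9],   # hypothetical questionnaire scores
    "outgoing_calls": [12, 25, 8, 31, 22],       # hypothetical 5-week call counts
})

# Pearson correlation between the trait score and the logged behavior
r = df["extraversion"].corr(df["outgoing_calls"])
print(f"r = {r:.2f}")
```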


2017 ◽  
Vol 225 (3) ◽  
pp. 287-288

An associated conference will take place at ZPID – Leibniz Institute for Psychology Information in Trier, Germany, on June 7–9, 2018. For further details, see: http://bigdata2018.leibniz-psychology.org


PsycCRITIQUES ◽  
2014 ◽  
Vol 59 (2) ◽  
Author(s):  
David J. Pittenger
