A novel class of stabilized greedy kernel approximation algorithms: Convergence, stability and uniform point distribution

2021 ◽  
Vol 262 ◽  
pp. 105508
Author(s):  
Tizian Wenzel ◽  
Gabriele Santin ◽  
Bernard Haasdonk


Author(s):  
Kai Han ◽  
Shuang Cui ◽  
Tianshuai Zhu ◽  
Enpei Zhang ◽  
Benwei Wu ◽  
...  

Data summarization, i.e., selecting representative subsets of manageable size out of massive data, is often modeled as a submodular optimization problem. Although a large body of algorithms exists for submodular optimization, many of them incur large computational overheads and hence are not suitable for mining big data. In this work, we consider the fundamental problem of (non-monotone) submodular function maximization with a knapsack constraint, and propose simple yet effective and efficient algorithms for it. Specifically, we propose a deterministic algorithm with approximation ratio 6 and a randomized algorithm with approximation ratio 4, and show that both of them can be accelerated to achieve nearly linear running time at the cost of weakening the approximation ratio by an additive factor of ε. We then consider a more restrictive setting without full access to the whole dataset, and propose streaming algorithms with approximation ratios of 8+ε and 6+ε that make one pass and two passes over the data stream, respectively. As a by-product, we also propose a two-pass streaming algorithm with an approximation ratio of 2+ε when the considered submodular function is monotone. To the best of our knowledge, our algorithms achieve the best performance bounds among state-of-the-art approximation algorithms with efficient implementations for this problem. Finally, we evaluate our algorithms in two concrete submodular data summarization applications, revenue maximization in social networks and image summarization, and the empirical results show that our algorithms outperform existing ones in terms of both effectiveness and efficiency.
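For context, summarization objectives in this setting are typically coverage-style submodular functions, sometimes with a diversity penalty that makes them non-monotone, maximized under a knapsack (budget) constraint. The sketch below is illustrative only and is not the paper's algorithms or exact objectives; the similarity matrix, item costs, budget, and the weight lam are all hypothetical.

```python
# Illustrative sketch (not the paper's algorithms): a typical data-summarization
# objective -- facility-location coverage minus a pairwise redundancy penalty --
# evaluated under a knapsack (budget) constraint. All inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                        # number of items in the dataset
sim = rng.random((n, n))
sim = (sim + sim.T) / 2                       # symmetric pairwise similarities
cost = rng.uniform(1.0, 3.0, size=n)          # knapsack cost of each item
budget = 10.0
lam = 0.5                                     # weight of the redundancy penalty

def f(S):
    """Coverage minus redundancy; submodular, and non-monotone for lam > 0."""
    if not S:
        return 0.0
    idx = list(S)
    coverage = sim[:, idx].max(axis=1).sum()      # how well S represents all items
    redundancy = sim[np.ix_(idx, idx)].sum()      # similarity within S
    return coverage - lam * redundancy

def feasible(S):
    """Knapsack feasibility check for a candidate summary S."""
    return sum(cost[i] for i in S) <= budget

S = {0, 7, 23}
print(f(S), feasible(S))
```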


2021 ◽  
Vol 11 (9) ◽  
pp. 4274
Author(s):  
Song Fang ◽  
Jianxiao Ma

Through an urban tunnel-driving experiment, this paper studies how drivers' visual characteristics change in tunnels. A Tobii Pro Glasses 2 wearable eye tracker was used to measure the pupil diameter, scanning time, and fixation point distribution of the driver during driving. A two-step clustering algorithm and data fitting were used to analyze the experimental data. The results show that univariate clustering of the pupil diameter change rate discriminates poorly, because the change rate is large during "dark adaptation" but relatively smooth during "bright adaptation". The univariate and bivariate clustering results for drivers' pupil diameters both fall into three categories, with a reasonable distribution and clear differentiation, and the clusters correspond accurately to different locations in the tunnel. The clustering method proposed in this paper can identify similar driver behaviors in the transition section at the tunnel entrance, the inner section, and the area outside the tunnel. Data fitting of drivers' visual characteristic parameters in different tunnels shows that a short tunnel, with a length of less than 1 km, has little influence on visual characteristics: the maximum pupil diameter is small and the percentage of saccades is relatively low. An urban tunnel with a length between 1 and 2 km has a significant influence on visual characteristics; in this range, as tunnel length increases, the maximum pupil diameter increases significantly and the percentage of saccades increases rapidly. When the tunnel length exceeds 2 km, the maximum pupil diameter does not continue to increase. The longer the urban tunnel, the more dispersed the distribution of drivers' gaze points. These results can provide a scientific basis for the design of urban tunnel traffic safety facilities and traffic organization.
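As a rough stand-in for the bivariate clustering step described above (the study uses a two-step clustering algorithm, not necessarily the one shown here), a k-means with three clusters on pupil diameter and its change rate illustrates the analysis; the export file and column names are hypothetical.

```python
# Rough stand-in for the paper's two-step clustering: bivariate k-means (k=3)
# on pupil diameter and its change rate. File and column names are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("eye_tracker_export.csv")                    # hypothetical eye-tracker export
X = df[["pupil_diameter_mm", "pupil_change_rate"]].dropna()   # bivariate features
X_scaled = StandardScaler().fit_transform(X)                  # standardize before clustering
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
df.loc[X.index, "cluster"] = labels

# Cluster means, which can then be matched to tunnel locations (entrance
# transition, inner section, outside area) as done in the study.
print(df.groupby("cluster")[["pupil_diameter_mm", "pupil_change_rate"]].mean())
```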


2020 ◽  
Vol 8 (1) ◽  
pp. 1-28
Author(s):  
Siddharth Barman ◽  
Sanath Kumar Krishnamurthy

Processes ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 1172
Author(s):  
Leonard Moser ◽  
Christina Penke ◽  
Valentin Batteiger

One of the more promising technologies for future renewable fuel production from biomass is hydrothermal liquefaction (HTL). Although enormous progress has been made in recent years with continuous experiments on demonstration plants, many research questions concerning the understanding of the HTL reaction network remain unanswered. In this study, a unique process model of an HTL process chain has been developed in Aspen Plus® for three feedstocks: microalgae, sewage sludge and wheat straw. A process chain consisting of HTL, hydrotreatment (HT) and catalytic hydrothermal gasification (cHTG) forms the core of the model, which represents the biomass by 51 model compounds for the hydrolysis products of the biochemical groups lipids, proteins, carbohydrates, lignin, extractives and ash. Two extensive reaction networks of 272 and 290 reactions for the HTL and HT process steps, respectively, lead to the intermediate biocrude (~200 model compounds) and the final upgraded biocrude product (~130 model compounds). The model reproduces important characteristics of continuous experiments as well as literature values, such as yields, elemental analyses, boiling point distributions, product fractions, densities and higher heating values. It can serve as a basis for techno-economic and environmental assessments of HTL fuel production, and may be further developed into a predictive yield modeling tool.
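Among the characteristics listed above, the link between an elemental analysis and the higher heating value can be illustrated with the common Channiwala-Parikh correlation; this is a generic textbook correlation given for context, not necessarily the method used inside the Aspen Plus® model, and the sample composition is hypothetical.

```python
# Generic illustration: higher heating value (HHV) estimated from an elemental
# analysis with the Channiwala-Parikh correlation (result in MJ/kg, inputs in
# wt%, dry basis). Not necessarily the correlation used in the process model.
def hhv_channiwala_parikh(C, H, O, N, S, ash):
    return 0.3491 * C + 1.1783 * H + 0.1005 * S - 0.1034 * O - 0.0151 * N - 0.0211 * ash

# Hypothetical elemental analysis of an upgraded biocrude (wt%):
print(hhv_channiwala_parikh(C=78.0, H=11.0, O=9.5, N=1.0, S=0.2, ash=0.3), "MJ/kg")
```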


Author(s):  
Jing Tang ◽  
Xueyan Tang ◽  
Andrew Lim ◽  
Kai Han ◽  
Chongshou Li ◽  
...  

Monotone submodular maximization with a knapsack constraint is NP-hard. Various approximation algorithms have been devised to address this optimization problem. In this paper, we revisit the widely known modified greedy algorithm. First, we show that this algorithm can achieve an approximation factor of 0.405, which significantly improves the known factors of 0.357 given by Wolsey and (1 − 1/e)/2 ≈ 0.316 given by Khuller et al. More importantly, our analysis closes a gap in Khuller et al.'s proof for the extensively mentioned approximation factor of (1 − 1/√e) ≈ 0.393 in the literature to clarify a long-standing misconception on this issue. Second, we enhance the modified greedy algorithm to derive a data-dependent upper bound on the optimum. We empirically demonstrate the tightness of our upper bound with a real-world application. The bound enables us to obtain a data-dependent ratio typically much higher than 0.405 between the solution value of the modified greedy algorithm and the optimum. It can also be used to significantly improve the efficiency of algorithms such as branch and bound.
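For context, a minimal sketch of the modified greedy algorithm discussed above, in the form commonly attributed to Khuller et al.: run the cost-effective (density) greedy among elements that still fit the budget, then return the better of the greedy set and the best single feasible element. The value oracle f, costs, and budget are placeholders, and the paper's data-dependent upper bound is not shown.

```python
# Minimal sketch of the modified greedy algorithm (density greedy vs. best
# feasible singleton) for monotone submodular maximization under a knapsack
# constraint. f is a value oracle on frozensets; cost and budget are placeholders.
def modified_greedy(elements, f, cost, budget):
    selected, spent = set(), 0.0
    while True:
        base = f(frozenset(selected))
        best, best_ratio = None, 0.0
        for e in elements:
            if e in selected or spent + cost[e] > budget:
                continue
            ratio = (f(frozenset(selected | {e})) - base) / cost[e]
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:
            break
        selected.add(best)
        spent += cost[best]
    # Compare with the best single feasible element; this comparison is what
    # lifts the plain density greedy to a constant-factor guarantee.
    singletons = [e for e in elements if cost[e] <= budget]
    best_single = max(singletons, key=lambda e: f(frozenset({e})), default=None)
    if best_single is not None and f(frozenset({best_single})) > f(frozenset(selected)):
        return {best_single}
    return selected
```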


2021 ◽  
Vol 50 (1) ◽  
pp. 33-40
Author(s):  
Chenhao Ma ◽  
Yixiang Fang ◽  
Reynold Cheng ◽  
Laks V.S. Lakshmanan ◽  
Wenjie Zhang ◽  
...  

Given a directed graph G, the directed densest subgraph (DDS) problem is to find the subgraph of G whose density is the highest among all subgraphs of G. The DDS problem is fundamental to a wide range of applications, such as fraud detection, community mining, and graph compression. However, existing DDS solutions suffer from efficiency and scalability problems: on a three-thousand-edge graph, one of the best exact algorithms takes three days to complete. In this paper, we develop an efficient and scalable DDS solution. We introduce the notion of the [x, y]-core, which is a dense subgraph of G, and show that the densest subgraph can be accurately located through the [x, y]-core with theoretical guarantees. Based on the [x, y]-core, we develop both exact and approximation algorithms. We have performed an extensive evaluation of our approaches on eight large real datasets. The results show that our proposed solutions are up to six orders of magnitude faster than the state of the art.
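The density being maximized here is, in the standard formulation of the DDS problem, defined over a pair of (possibly overlapping) vertex sets S and T as ρ(S, T) = |E(S, T)| / √(|S| · |T|). The small sketch below only evaluates this objective for a candidate pair on a hypothetical toy graph; it is not the paper's [x, y]-core-based algorithm.

```python
# Evaluates the standard directed density rho(S, T) = |E(S, T)| / sqrt(|S| * |T|)
# for a candidate pair of vertex sets. This is only the objective of the DDS
# problem, not the paper's [x, y]-core-based exact or approximation algorithms.
from math import sqrt

def directed_density(edges, S, T):
    S, T = set(S), set(T)
    if not S or not T:
        return 0.0
    crossing = sum(1 for (u, v) in edges if u in S and v in T)  # |E(S, T)|
    return crossing / sqrt(len(S) * len(T))

# Hypothetical toy graph:
edges = [(1, 2), (1, 3), (2, 3), (3, 1), (4, 2)]
print(directed_density(edges, S={1, 3, 4}, T={2, 3}))
```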

