Location histogram privacy by Sensitive Location Hiding and Target Histogram Avoidance/Resemblance

2019, Vol 62 (7), pp. 2613-2651
Author(s): Grigorios Loukides, George Theodorakopoulos

A location histogram comprises the number of times a user has visited each location while moving in an area of interest, and it is often obtained from the user in the context of applications such as recommendation and advertising. However, a location histogram that leaves a user’s computer or device may threaten privacy when it contains visits to locations that the user does not want to disclose (sensitive locations), or when it can be used to profile the user in a way that leads to price discrimination and unsolicited advertising (e.g., as “wealthy” or “minority member”). Our work introduces two privacy notions to protect a location histogram from these threats: Sensitive Location Hiding, which aims at concealing all visits to sensitive locations, and Target Avoidance/Resemblance, which aims at concealing the similarity/dissimilarity of the user’s histogram to a target histogram that corresponds to an undesired/desired profile. We formulate an optimization problem around each notion: Sensitive Location Hiding (SLH), which seeks to construct a histogram that is as similar as possible to the user’s histogram but associates all visits with nonsensitive locations, and Target Avoidance/Resemblance (TA/TR), which seeks to construct a histogram that is as dissimilar/similar as possible to a given target histogram but remains useful for getting a good response from the application that analyzes the histogram. We develop an optimal algorithm for each notion, which operates on a notion-specific search space graph and finds a shortest or longest path in the graph that corresponds to a solution histogram. In addition, we develop a greedy heuristic for the TA/TR problem, which operates directly on a user’s histogram. Our experiments demonstrate that all algorithms are effective at preserving the distribution of locations in a histogram and the quality of location recommendation. They also demonstrate that the heuristic produces near-optimal solutions while being orders of magnitude faster than the optimal algorithm for TA/TR.
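As a concrete illustration of the setting of the greedy heuristic, the sketch below operates directly on a histogram represented as a Python dict of location -> visit count and greedily moves one visit at a time to increase the L1 distance from a target histogram. The distance measure, the one-visit-per-step move, and the move budget are assumptions for illustration; the paper's actual heuristic and utility constraints are not reproduced here.

# Hedged sketch of a greedy Target Avoidance step: not the authors' exact
# heuristic, just an illustration of operating directly on a histogram.
# Assumptions: histograms are dicts mapping location -> visit count, the
# dissimilarity measure is L1 distance, and utility loss is bounded by
# limiting the total number of visits moved.

def l1(h1, h2):
    """L1 distance between two histograms."""
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(k, 0) - h2.get(k, 0)) for k in keys)

def greedy_target_avoidance(user_hist, target_hist, max_moves):
    """Greedily move single visits to push the histogram away from the target."""
    hist = dict(user_hist)
    for _ in range(max_moves):
        best = None
        best_dist = l1(hist, target_hist)
        # Try moving one visit from location `src` to location `dst`.
        for src in [k for k, v in hist.items() if v > 0]:
            for dst in set(hist) | set(target_hist):
                if dst == src:
                    continue
                hist[src] -= 1
                hist[dst] = hist.get(dst, 0) + 1
                d = l1(hist, target_hist)
                if d > best_dist:
                    best, best_dist = (src, dst), d
                # Undo the tentative move.
                hist[dst] -= 1
                hist[src] += 1
        if best is None:   # no single move increases the distance any further
            break
        src, dst = best
        hist[src] -= 1
        hist[dst] = hist.get(dst, 0) + 1
    return hist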

2021, Vol 12 (4), pp. 81-100
Author(s): Yao Peng, Zepeng Shen, Shiqi Wang

A multimodal optimization problem has multiple global and many local optimal solutions. The difficulty in solving such problems lies in finding as many local optimal peaks as possible while still guaranteeing the precision of the global optima. This article presents adaptive grouping brainstorm optimization (AGBSO) for solving these problems. An adaptive grouping strategy is proposed that groups the population adaptively without requiring any prior knowledge from the user. To enhance the diversity and accuracy of the algorithm, an elite reservation strategy places central particles into an elite pool, and a peak detection strategy deletes particles in the elite pool that are far from optimal peaks. Finally, the article uses test functions of different dimensions to compare the convergence, accuracy, and diversity of AGBSO with those of BSO. Experiments verify that AGBSO has strong localization ability for local optimal solutions while ensuring the accuracy of the global optimal solutions.
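A minimal sketch of the elite-pool bookkeeping described above, assuming minimization, Euclidean distance between particles, and a fixed niche radius; the radius threshold and the rule of keeping only the best particle within each neighborhood are illustrative assumptions rather than the exact strategies of AGBSO.

import numpy as np

# Hedged sketch of an elite pool with distance-based peak detection, as an
# illustration of the strategy described above (not the paper's exact rules).
# Assumptions: minimization, Euclidean distance, and a fixed niche radius.

def update_elite_pool(elites, candidates, fitness, radius=0.1):
    """Keep at most one elite per detected peak (one per `radius` neighborhood)."""
    pool = list(elites) + list(candidates)
    pool.sort(key=fitness)                  # best (lowest fitness) first
    kept = []
    for x in pool:
        # Discard a particle that sits within `radius` of an already kept,
        # better elite: both are assumed to track the same peak.
        if all(np.linalg.norm(np.asarray(x) - np.asarray(y)) > radius for y in kept):
            kept.append(x)
    return kept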


2011, Vol 421, pp. 559-563
Author(s): Yong Chao Gao, Li Mei Liu, Heng Qian, Ding Wang

The scale and complexity of the search space are important factors in determining how difficult an optimization problem is to solve, and information about the solution space can guide the search toward optimal solutions. Based on this observation, an algorithm for combinatorial optimization is proposed. The algorithm uses the good solutions found by intelligent algorithms to contract the search space and partition it into one or several optimal regions defined by the backbones of the combinatorial optimization solutions, and optimization of small-scale problems is then carried out within these optimal regions. No statistical analysis is required before or during the solving process; instead, solution information is used to estimate the landscape of the search space, which improves both solving speed and solution quality. The algorithm opens a new path for solving combinatorial optimization problems, and experimental results confirm its efficiency.
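A minimal sketch of the backbone idea, assuming candidate solutions are encoded as equal-length tuples of variable assignments. The backbone is taken here in its standard sense (variables that share the same value across all good solutions); fixing those variables is what contracts the search space, leaving a much smaller subproblem to optimize.

# Hedged sketch of search-space contraction via a backbone, assuming solutions
# are equal-length tuples of variable assignments (e.g., bit strings). This is
# the standard backbone definition, not necessarily the paper's exact procedure.

def backbone(good_solutions):
    """Variables (by index) that take the same value in every good solution."""
    first = good_solutions[0]
    return {
        i: first[i]
        for i in range(len(first))
        if all(sol[i] == first[i] for sol in good_solutions)
    }

def free_variables(num_vars, fixed):
    """Indices left to optimize after fixing the backbone assignments."""
    return [i for i in range(num_vars) if i not in fixed]

# Example: three good solutions to a 6-variable 0/1 problem.
solutions = [(1, 0, 1, 1, 0, 1), (1, 0, 0, 1, 0, 1), (1, 0, 1, 1, 1, 1)]
fixed = backbone(solutions)          # {0: 1, 1: 0, 3: 1, 5: 1}
remaining = free_variables(6, fixed) # [2, 4] -> much smaller subproblem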


Author(s): Hicham El Hassani, Said Benkachcha, Jamal Benhra

Inspired by nature, genetic algorithms (GAs) are among the most widely used meta-heuristic optimization methods and have proved effective on conventional NP-hard problems, especially the traveling salesman problem (TSP), one of the most studied supply chain management problems. This paper proposes a new crossover operator called Jump Crossover (JMPX) for solving the travelling salesman problem to near-optimality with a genetic algorithm, and assesses its efficiency against the solution quality obtained with other conventional operators for the same problem, namely Partially Matched Crossover (PMX), Edge Recombination Crossover (ERX), and the r-opt heuristic, while also taking the computational overhead into account. We adopt the path representation for our chromosomes, which is the most direct representation, and a low mutation rate in order to isolate the search-space exploration ability of each crossover. The experimental results show that in most cases JMPX can remarkably improve the solution quality of the GA compared with the two existing classic crossover approaches and the r-opt heuristic.
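The mechanics of JMPX are not spelled out here, so the sketch below only illustrates the path representation and one of the baseline operators, Partially Matched Crossover (PMX); the random cut-point choice is an assumption added for the example.

import random

# Hedged sketch: path-representation tours and the PMX baseline operator
# mentioned above. JMPX itself is not specified in the abstract, so it is not
# reproduced here.

def pmx(parent1, parent2, a=None, b=None):
    """Partially Matched Crossover on two tours given as permutations of cities."""
    n = len(parent1)
    if a is None or b is None:
        a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = parent1[a:b]           # copy the segment from parent 1
    for i in range(a, b):
        v = parent2[i]
        if v in child[a:b]:
            continue                    # already placed via the copied segment
        pos = i
        # Follow the PMX mapping until we land outside the copied segment.
        while True:
            pos = parent2.index(parent1[pos])
            if not (a <= pos < b):
                break
        child[pos] = v
    # Remaining positions are inherited directly from parent 2.
    for i in range(n):
        if child[i] is None:
            child[i] = parent2[i]
    return child

# Classic textbook example, cutting between positions 3 and 7.
p1 = [1, 2, 3, 4, 5, 6, 7, 8, 9]
p2 = [9, 3, 7, 8, 2, 6, 5, 1, 4]
print(pmx(p1, p2, 3, 7))  # -> [9, 3, 2, 4, 5, 6, 7, 1, 8]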


2016, pp. 1739-1752
Author(s): Hicham El Hassani, Said Benkachcha, Jamal Benhra

Inspired by nature, genetic algorithms (GAs) are among the most widely used meta-heuristic optimization methods and have proved effective on conventional NP-hard problems, especially the traveling salesman problem (TSP), one of the most studied supply chain management problems. This paper proposes a new crossover operator called Jump Crossover (JMPX) for solving the travelling salesman problem to near-optimality with a genetic algorithm, and assesses its efficiency against the solution quality obtained with other conventional operators for the same problem, namely Partially Matched Crossover (PMX), Edge Recombination Crossover (ERX), and the r-opt heuristic, while also taking the computational overhead into account. The authors adopt a low mutation rate in order to isolate the search-space exploration ability of each crossover. The experimental results show that in most cases JMPX can remarkably improve the solution quality of the GA compared with the two existing classic crossover approaches and the r-opt heuristic.


2015, Vol 6 (2), pp. 33-44
Author(s): Hicham El Hassani, Said Benkachcha, Jamal Benhra

Inspired by nature, genetic algorithms (GAs) are among the most widely used meta-heuristic optimization methods and have proved effective on conventional NP-hard problems, especially the traveling salesman problem (TSP), one of the most studied supply chain management problems. This paper proposes a new crossover operator called Jump Crossover (JMPX) for solving the travelling salesman problem to near-optimality with a genetic algorithm, and assesses its efficiency against the solution quality obtained with other conventional operators for the same problem, namely Partially Matched Crossover (PMX), Edge Recombination Crossover (ERX), and the r-opt heuristic, while also taking the computational overhead into account. The authors adopt a low mutation rate in order to isolate the search-space exploration ability of each crossover. The experimental results show that in most cases JMPX can remarkably improve the solution quality of the GA compared with the two existing classic crossover approaches and the r-opt heuristic.


2012, Vol 20 (1), pp. 27-62
Author(s): Kalyanmoy Deb, Amit Saha

In a multimodal optimization task, the main purpose is to find multiple optimal solutions (global and local), so that the user gains better knowledge of the different optimal solutions in the search space and, when needed, can switch from the current solution to another suitable optimum. To this end, evolutionary optimization algorithms (EAs) stand as viable methodologies, mainly due to their ability to find and capture multiple solutions within a population in a single simulation run. Since the preselection method suggested in 1970, new algorithms have been proposed steadily. Most of these methodologies employ a niching scheme within an existing single-objective evolutionary algorithm framework, so that similar solutions in a population are deemphasized in order to focus on and maintain multiple distant yet near-optimal solutions. In this paper, we use a completely different strategy in which the single-objective multimodal optimization problem is converted into a suitable bi-objective optimization problem, so that all optimal solutions become members of the resulting weak Pareto-optimal set. With modified definitions of domination and different formulations of an artificially created additional objective function, we present successful results on problems with as many as 500 optima. Most past multimodal EA studies considered problems having only a few variables. In this paper, we have solved up to 16-variable test problems having as many as 48 optimal solutions, and for the first time we suggest multimodal constrained test problems that are scalable in terms of the number of optima, constraints, and variables. The concept of using bi-objective optimization for solving single-objective multimodal optimization problems seems novel and interesting and, more importantly, opens up further avenues for research and application.
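A minimal sketch of the bi-objectivization idea, assuming minimization and using the number of better points in a small sampled neighborhood as the artificial second objective; the authors' exact secondary objective and modified domination rules are not reproduced here.

import numpy as np

# Hedged sketch of turning a single-objective multimodal problem into a
# bi-objective one: minimize f(x) together with an artificial second objective
# that is small near any local optimum. Counting how many sampled neighbors
# improve on x is one simple choice, not necessarily the authors' formulation.

def bi_objective(f, x, rng, n_neighbors=20, eps=0.05):
    x = np.asarray(x, dtype=float)
    neighbors = x + rng.uniform(-eps, eps, size=(n_neighbors, x.size))
    better = sum(f(z) < f(x) for z in neighbors)
    return f(x), better          # (original objective, "non-optimality" count)

# Example on a 1-D multimodal function: points near any local minimum get a
# second objective close to zero, so all optima tend to lie on the weak
# Pareto front of the objective pair.
f = lambda x: np.sin(5 * x[0]) + 0.1 * x[0] ** 2
rng = np.random.default_rng(0)
print(bi_objective(f, [0.94], rng))   # objective pair near a local minimum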


Author(s): Fatma Tangour, Ihsen Saad

This paper deals with the multiobjective optimization of an agroalimentary production workshop. Three criteria are considered in addition to the initial production cost: the cost of out-of-date products, the cost of the distribution discount, and the makespan; a new coding is also proposed for this type of workshop. The adopted approach consists of generating optimal solutions that are diversified across the search space and of helping the decision maker, when no particular preference can be given to one of the objective functions, to make a good decision with respect to the stated criteria.
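Because the approach returns a diversified set of trade-off schedules rather than a single optimum, a Pareto-dominance filter of the kind sketched below is the usual way such a set is handed to the decision maker; the assumption that all four criteria are minimized and the tuple encoding are illustrative, not the paper's proposed coding.

# Hedged sketch of a Pareto filter over candidate schedules, assuming all four
# criteria (production cost, out-of-date cost, discount cost, makespan) are to
# be minimized and each candidate is a tuple of objective values.

def dominates(a, b):
    """a dominates b if it is no worse on every criterion and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated candidates to present to the decision maker."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Example: (production cost, out-of-date cost, discount cost, makespan)
schedules = [(10, 2, 1, 30), (12, 1, 1, 28), (11, 3, 2, 35)]
print(pareto_front(schedules))  # the third schedule is dominated by the first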


2020, Vol 2020 (14), pp. 306-1-306-6
Author(s): Florian Schiffers, Lionel Fiske, Pablo Ruiz, Aggelos K. Katsaggelos, Oliver Cossairt

Imaging through scattering media finds applications in diverse fields from biomedicine to autonomous driving. However, interpreting the resulting images is difficult due to blur caused by the scattering of photons within the medium. Transient information, captured with fast temporal sensors, can be used to significantly improve the quality of images acquired in scattering conditions. Photon scattering within a highly scattering medium is well modeled by the diffusion approximation of the Radiative Transport Equation (RTE). Its solution is easily derived and can be interpreted as a spatio-temporal point spread function (ST-PSF). In this paper, we first discuss the properties of the ST-PSF and subsequently use this knowledge to simulate transient imaging through highly scattering media. We then propose a framework to invert the forward model, which assumes Poisson noise, to recover a noise-free, unblurred image by solving an optimization problem.
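For a convolutional forward model with a known PSF and Poisson noise, one standard way to pose the inversion is Richardson-Lucy deconvolution, sketched below with SciPy's FFT convolution; this is a generic illustration of Poisson-noise deblurring, not the paper's framework, and the Gaussian PSF stands in for a purely spatial slice of an ST-PSF.

import numpy as np
from scipy.signal import fftconvolve

# Hedged sketch: Richardson-Lucy deconvolution, a standard maximum-likelihood
# iteration for a convolutional forward model with Poisson noise.

def richardson_lucy(measured, psf, n_iter=50, eps=1e-12):
    estimate = np.full_like(measured, measured.mean(), dtype=float)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same") + eps
        ratio = measured / blurred
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy usage: blur a synthetic scene with a Gaussian PSF, add Poisson noise,
# then deblur.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(-7, 8), np.arange(-7, 8))
psf = np.exp(-(x**2 + y**2) / 8.0)
psf /= psf.sum()
scene = np.zeros((64, 64)); scene[20:25, 30:35] = 100.0
measured = rng.poisson(fftconvolve(scene, psf, mode="same").clip(min=0)).astype(float)
recovered = richardson_lucy(measured, psf)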


Author(s): Tianqi Jing, Shiwen He, Fei Yu, Yongming Huang, Luxi Yang, ...

Cooperation between mobile edge computing (MEC) and mobile cloud computing (MCC) in computation offloading could improve the quality of service (QoS) of user equipments (UEs) with computation-intensive tasks. In this paper, in order to minimize the expected charge, we focus on the problem of how to offload computation-intensive tasks from resource-scarce UEs to access points (APs) and the cloud, and on the density allocation of APs at the mobile edge. We consider three offloading computing modes and focus on the coverage probability of each mode and the corresponding ergodic rates. The resulting optimization problem is mixed-integer and non-convex in both the objective function and the constraints. We propose a low-complexity suboptimal algorithm called Iteration of Convex Optimization and Nonlinear Programming (ICONP) to solve it. Numerical results verify the better performance of the proposed algorithm: the optimal computing ratios and AP density allocation contribute to the charge savings.
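The algorithm's name suggests alternating between two sub-solvers; the skeleton below only illustrates that general pattern, alternating a bounded search over an offloading ratio with one over an AP density using SciPy. The toy cost function and the split into subproblems are assumptions for illustration, not the ICONP formulation.

from scipy.optimize import minimize_scalar

# Hedged sketch of the alternating pattern suggested by the algorithm's name:
# iterate between two subproblems until the objective stops improving. The toy
# objective (a stand-in "expected charge" depending on an offloading ratio `x`
# and an AP density `y`) is an assumption for illustration only.

def charge(x, y):
    # Local cost falls as more work is offloaded, edge cost falls with AP
    # density, and deploying APs itself costs 0.5 * y.
    return (1 - x) * 2.0 + x * (1.0 + 1.0 / y) + 0.5 * y

def alternate(x0=0.5, y0=1.0, tol=1e-6, max_iter=100):
    x, y = x0, y0
    prev = charge(x, y)
    for _ in range(max_iter):
        # Subproblem 1: offloading ratio with density fixed.
        x = minimize_scalar(lambda x_: charge(x_, y), bounds=(0.0, 1.0),
                            method="bounded").x
        # Subproblem 2: AP density with the ratio fixed.
        y = minimize_scalar(lambda y_: charge(x, y_), bounds=(0.1, 10.0),
                            method="bounded").x
        cur = charge(x, y)
        if prev - cur < tol:
            break
        prev = cur
    return x, y, cur

print(alternate())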


Symmetry, 2020, Vol 12 (1), pp. 94
Author(s): Dario Fasino, Franca Rinaldi

The core–periphery structure is one of the key concepts in the structural analysis of complex networks. It consists of a partitioning of the node set of a given graph or network into two groups, called core and periphery, where the core nodes induce a well-connected subgraph and share connections with peripheral nodes, while the peripheral nodes are loosely connected to the core nodes and other peripheral nodes. We propose a polynomial-time algorithm to detect core–periphery structures in networks having a symmetric adjacency matrix. The core set is defined as the solution of a combinatorial optimization problem, which has a pleasant symmetry with respect to graph complementation. We provide a complete description of the optimal solutions to that problem and an exact and efficient algorithm to compute them. The proposed approach is extended to networks with loops and oriented edges. Numerical simulations are carried out on both synthetic and real-world networks to demonstrate the effectiveness and practicability of the proposed algorithm.
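The abstract does not reproduce the paper's objective or algorithm, so the sketch below shows only a simple degree-ordering heuristic for extracting a core set from a symmetric adjacency matrix; the scoring rule (penalizing missing core-core edges and present periphery-periphery edges) is an illustrative assumption, not the combinatorial optimization problem studied in the paper.

import numpy as np

# Hedged sketch of a simple core-periphery heuristic on a symmetric adjacency
# matrix: order nodes by degree and pick the core size that best matches the
# ideal pattern (core-core edges all present, periphery-periphery edges absent).

def core_periphery(adj):
    adj = np.asarray(adj)
    n = adj.shape[0]
    order = np.argsort(-adj.sum(axis=1))       # candidate cores: top-degree nodes
    best_core, best_cost = [], np.inf
    for k in range(1, n):
        core, peri = order[:k], order[k:]
        core_block = adj[np.ix_(core, core)]
        peri_block = adj[np.ix_(peri, peri)]
        # Missing core-core edges (off-diagonal zeros) plus present periphery edges.
        missing = (core_block.size - k) - (core_block.sum() - np.trace(core_block))
        extra = peri_block.sum() - np.trace(peri_block)
        cost = missing + extra
        if cost < best_cost:
            best_core, best_cost = core.tolist(), cost
    return sorted(best_core)

# Toy network: nodes 0-2 form a clique, each attached to one pendant node 3-5.
A = np.array([[0, 1, 1, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [1, 1, 0, 0, 0, 1],
              [1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0]])
print(core_periphery(A))  # -> [0, 1, 2]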

