optimal algorithm
Recently Published Documents


TOTAL DOCUMENTS: 1249 (FIVE YEARS: 302)

H-INDEX: 47 (FIVE YEARS: 6)

2022 · Vol 40 (2) · pp. 1-24
Author(s): Franco Maria Nardini, Roberto Trani, Rossano Venturini

Modern search services often provide multiple options to rank the search results, e.g., sort “by relevance”, “by price”, or “by discount” in e-commerce. While the traditional rank by relevance effectively places the relevant results in the top positions of the results list, the rank by attribute can place many marginally relevant results at the head of the list, leading to a poor user experience. In the past, this issue has been addressed by investigating the relevance-aware filtering problem, which asks to select the subset of results maximizing the relevance of the attribute-sorted list. Recently, an exact algorithm was proposed to solve this problem optimally. However, its high computational cost makes it impractical for the Web search scenario, which is characterized by huge result lists and strict time constraints. For this reason, the problem is often solved using efficient yet inaccurate heuristic algorithms. In this article, we first prove performance bounds for the existing heuristics. We then propose two efficient and effective algorithms to solve the relevance-aware filtering problem. First, we propose OPT-Filtering, a novel exact algorithm that is faster than the existing state-of-the-art optimal algorithm. Second, we propose an approximate and even more efficient algorithm, ϵ-Filtering, which, given an allowed approximation error ϵ, finds a (1-ϵ)-optimal filtering, i.e., the relevance of its solution is at least (1-ϵ) times the optimum. We conduct a comprehensive evaluation of the two proposed algorithms against state-of-the-art competitors on two real-world public datasets. Experimental results show that OPT-Filtering achieves a speedup of up to two orders of magnitude over the existing optimal solution, while ϵ-Filtering further improves this result by trading effectiveness for efficiency. In particular, experiments show that ϵ-Filtering can achieve quasi-optimal solutions while being faster than all state-of-the-art competitors in most of the tested configurations.
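To make the problem concrete, here is a toy Python sketch with an assumed position-discounted (DCG-style) list-relevance objective and a simple relevance-threshold heuristic; both the objective and the heuristic are illustrative assumptions, not the paper's OPT-Filtering or ϵ-Filtering algorithms.

```python
# Toy illustration of the relevance-aware filtering problem (assumed
# DCG-style objective; not the paper's OPT-Filtering / eps-Filtering).
import math

def list_relevance(results):
    """Position-discounted relevance of an attribute-sorted list (assumption)."""
    return sum(r["rel"] / math.log2(i + 2) for i, r in enumerate(results))

def threshold_filter(results, tau):
    """Simple heuristic: keep results whose relevance is at least a
    fraction tau of the maximum relevance, preserving attribute order."""
    if not results:
        return []
    max_rel = max(r["rel"] for r in results)
    return [r for r in results if r["rel"] >= tau * max_rel]

# Results already sorted by the chosen attribute (e.g., price, ascending).
results = [
    {"id": 1, "price": 9.99, "rel": 0.05},
    {"id": 2, "price": 12.50, "rel": 0.90},
    {"id": 3, "price": 14.00, "rel": 0.10},
    {"id": 4, "price": 19.90, "rel": 0.85},
]
filtered = threshold_filter(results, tau=0.5)
print(list_relevance(results), list_relevance(filtered))
```

Filtering out the marginally relevant head items raises the discounted relevance of the attribute-sorted list, which is exactly the quantity the exact and approximate algorithms in the paper optimize.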


2022 · Vol 133 · pp. 102281
Author(s): Ewa M. Kubicka, Grzegorz Kubicki, Małgorzata Kuchta, Małgorzata Sulkowska

2022
Author(s): David Simchi-Levi, Rui Sun, Huanan Zhang

In this paper, we study a revenue-management problem with add-on discounts. The problem is motivated by the practice in the video game industry whereby a retailer offers discounts on selected supportive products (e.g., video games) to customers who have also purchased the core products (e.g., video game consoles). We formulate this problem as an optimization problem that determines the prices of different products and the selection of products for add-on discounts. In the base model, we focus on an independent demand structure. To overcome the computational challenge of this optimization problem, we propose an efficient fully polynomial-time approximation scheme (FPTAS) that solves the problem approximately to any desired accuracy. Moreover, we consider the setting in which the retailer has no prior knowledge of the demand functions of the different products. To solve this joint learning and optimization problem, we propose an upper confidence bound-based learning algorithm that uses the FPTAS optimization algorithm as a subroutine. We show that our learning algorithm converges to the optimal algorithm that has access to the true demand functions, and that the convergence rate is tight up to a logarithmic term. We further show that these results for the independent demand model extend to multinomial logit choice models. In addition, we conduct numerical experiments with real-world transaction data collected from a popular video gaming brand's online store on Tmall.com. The results illustrate our learning algorithm's robust performance and fast convergence in various scenarios. We also compare our algorithm with the optimal policy that does not use any add-on discount; the comparison shows the advantages of using the add-on discount strategy in practice. This paper was accepted by J. George Shanthikumar, big data analytics.
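The following minimal sketch illustrates the upper-confidence-bound idea for a single product over an assumed discrete price grid with simulated Bernoulli demand; the paper's actual algorithm couples such estimates with the FPTAS over prices and add-on discount selections, which is not reproduced here.

```python
# Minimal UCB-style pricing loop for one product (illustrative assumption;
# price grid and demand probabilities are invented for the example).
import math, random

prices = [5.0, 7.5, 10.0, 12.5]          # candidate price grid (assumed)
true_demand = [0.9, 0.7, 0.45, 0.2]      # unknown purchase probabilities

counts = [0] * len(prices)
means = [0.0] * len(prices)              # running demand estimates

for t in range(1, 5001):
    # Upper confidence bound on expected revenue for each candidate price.
    ucb = [
        p * (means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        if counts[i] else float("inf")
        for i, p in enumerate(prices)
    ]
    i = ucb.index(max(ucb))              # play the most optimistic price
    sale = random.random() < true_demand[i]   # simulated customer response
    counts[i] += 1
    means[i] += (sale - means[i]) / counts[i] # update the demand estimate

best = max(range(len(prices)), key=lambda i: prices[i] * true_demand[i])
print("most-played price:", prices[counts.index(max(counts))],
      "revenue-optimal price:", prices[best])
```

Over time the optimistic bonus shrinks on well-sampled prices, so the loop concentrates on the revenue-maximizing price, mirroring the convergence-to-optimal guarantee stated in the abstract.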


2022 · pp. 465-486
Author(s): Qiang Wang, Hai-Lin Liu

In this chapter, the authors propose a joint BS sleeping strategy, resource allocation, and energy procurement scheme to maximize the profit of the network operators and minimize carbon emissions. A joint optimization problem is formulated, which is a mixed-integer programming problem. To solve it, they adopt the bi-velocity discrete particle swarm optimization (BVDPSO) algorithm to optimize the BS sleeping strategy. With the BS sleeping strategy fixed, the authors propose an optimal algorithm based on the Lagrangian dual method to optimize the power allocation, subcarrier assignment, and energy procurement. Numerical results illustrate the effectiveness of the proposed scheme and algorithm.
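As a flavour of the Lagrangian dual approach, here is a minimal water-filling sketch for power allocation under a single total-power constraint, solved by bisection on the dual variable; this is an assumed textbook instance, not the chapter's full joint formulation with BS sleeping, subcarrier assignment, and energy procurement.

```python
# Water-filling power allocation via the dual domain (assumed textbook
# instance): maximize sum(log(1 + p_i * g_i)) subject to sum(p_i) <= P.
def water_filling(gains, p_total, tol=1e-9):
    """Bisect on the dual variable lam; KKT gives p_i = max(0, 1/lam - 1/g_i)."""
    def alloc(lam):
        return [max(0.0, 1.0 / lam - 1.0 / g) for g in gains]

    lo, hi = 1e-12, max(gains)           # bracket: hi yields zero total power
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if sum(alloc(lam)) > p_total:
            lo = lam                     # over budget -> raise the "water price"
        else:
            hi = lam
    return alloc(hi)                     # feasible side of the bracket

# Channels with gains 2.0, 1.0, 0.5 sharing a power budget of 3.0 (assumed).
print(water_filling([2.0, 1.0, 0.5], p_total=3.0))
```

Stronger channels receive more power, and weak channels may get none; the same dual-variable search generalizes to the per-subcarrier assignment handled in the chapter.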


Author(s): Pham Quy Muoi Pham

In [1], Nesterov introduced an optimal algorithm with constant step size $1/L$, where $L$ is the Lipschitz constant of the gradient of the objective function. The algorithm is proved to converge at the optimal rate $O(1/k^2)$. In this paper, we propose a new algorithm that allows nonconstant step sizes $\alpha_k$. We prove the convergence and convergence rate of the new algorithm; it attains the same $O(1/k^2)$ convergence rate as the original one. The advantage of our algorithm is that nonconstant step sizes give more freedom in the choice of step size while the convergence rate remains optimal, so it is a generalization of Nesterov's algorithm. We apply the new algorithm to the problem of finding an approximate solution to an integral equation.
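For reference, a short sketch of Nesterov's baseline method with constant step size $1/L$, the algorithm this paper generalizes; the paper's variant with nonconstant step sizes $\alpha_k$ is not reproduced here.

```python
# Nesterov's accelerated gradient method with constant step size 1/L
# (the baseline; the paper's nonconstant-step-size variant differs).
import numpy as np

def nesterov(grad, x0, L, iters=500):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                  # gradient step at extrapolated point
        t_next = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        y = x_next + ((t - 1) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Example: least squares f(x) = 0.5 * ||A x - b||^2, with L = ||A^T A||_2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
L = np.linalg.norm(A.T @ A, 2)
x = nesterov(lambda z: A.T @ (A @ z - b), np.zeros(2), L)
print(x, np.linalg.solve(A.T @ A, A.T @ b))       # matches the exact solution
```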


Author(s): Rob Heylen, Aditi Thanki, Dries Verhees, Domenico Iuso, Jan De Beenhouwer, ...

X-ray computed tomography (X-CT) plays an important role in non-destructive quality inspection and process evaluation in metal additive manufacturing, as several types of defects, such as keyhole and lack-of-fusion pores, can be observed in these 3D images as local changes in material density. Segmentation of these defects often relies on threshold methods applied to the reconstructed attenuation values of the 3D image voxels. However, segmentation accuracy is affected by unavoidable X-CT reconstruction features such as partial volume effects, voxel noise, and imaging artefacts. These effects create false positives, difficulties in threshold value selection, and unclear or jagged defect edges. In this paper, we present a new X-CT defect segmentation method based on preprocessing the X-CT image with a 3D total variation denoising method. By comparing the changes in the histogram, threshold selection becomes significantly more reliable, and the resulting segmentation is of much higher quality. We derive the optimal algorithm parameter settings and demonstrate robustness to deviations from these settings. The technique is presented on simulated data sets, compared between low- and high-quality X-CT scans, and evaluated with optical microscopy after destructive tests.
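A hedged sketch of the preprocessing idea using scikit-image's nD total-variation denoiser followed by histogram-based (Otsu) thresholding; the paper's exact TV solver, parameter settings, and threshold-selection procedure may differ, and treating low-density voxels as pores is an assumption of this example.

```python
# 3D TV denoising then histogram thresholding (a sketch of the pipeline;
# the paper's solver and threshold-selection procedure may differ).
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.filters import threshold_otsu

def segment_defects(volume, tv_weight=0.1):
    """volume: 3D array of reconstructed attenuation values."""
    smooth = denoise_tv_chambolle(volume, weight=tv_weight)  # 3D TV denoise
    tau = threshold_otsu(smooth)        # threshold from the cleaned histogram
    return smooth < tau                 # pores = low-density voxels (assumption)

# Synthetic test: dense block with one low-density spherical pore plus noise.
vol = np.ones((32, 32, 32)) + 0.05 * np.random.randn(32, 32, 32)
zz, yy, xx = np.ogrid[:32, :32, :32]
vol[(zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 < 16] = 0.2
mask = segment_defects(vol)
print(mask.sum(), "voxels flagged as defect")
```

TV denoising flattens voxel noise while preserving the sharp density step at the pore boundary, which is why the post-denoising histogram separates the two modes much more cleanly.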


2021
Author(s): Hongxiang Chang, Rongtao Su, Jinhu LONG, Qi Chang, Pengfei Ma, ...

2021
Author(s): Anna Kutschireiter, Melanie A Basnak, Rachel I Wilson, Jan Drugowitsch

Efficient navigation requires animals to track their position, velocity, and heading direction (HD). Bayesian inference provides a principled framework for estimating these quantities from unreliable sensory observations, yet little is known about how and where Bayesian algorithms could be implemented in the brain's neural networks. Here, we propose a class of recurrent neural networks that track both a dynamic HD estimate and its associated uncertainty. They do so according to a circular Kalman filter, a statistically optimal algorithm for circular estimation. Our network generalizes standard ring attractor models by encoding uncertainty in the amplitude of a bump of neural activity. More generally, we show that near-Bayesian integration is inherent in ring attractor networks, as long as their connectivity strength allows them to sufficiently deviate from the attractor state. Furthermore, we identify the basic network motifs that are required to implement Bayesian inference and show that these motifs are present in the Drosophila HD system connectome. Overall, our work demonstrates that the Drosophila HD system can in principle implement a dynamic Bayesian inference algorithm in a biologically plausible manner, consistent with recent findings that suggest ring-attractor dynamics underlie the Drosophila HD system.
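As a toy illustration of the amplitude-codes-certainty idea, the sketch below represents the HD belief as a single 2D vector whose angle is the heading estimate and whose length is the certainty; this is a drastic simplification of the circular Kalman filter and the network model in the paper, with all parameter values invented for illustration.

```python
# Toy vector model of a circular filter: angle = HD estimate,
# vector length = certainty (a simplification, not the paper's network).
import numpy as np

def predict(belief, ang_vel, dt, diffusion):
    """Rotate the belief by the angular velocity and shrink its length
    to reflect growing uncertainty from noisy self-motion."""
    c, s = np.cos(ang_vel * dt), np.sin(ang_vel * dt)
    rotated = np.array([[c, -s], [s, c]]) @ belief
    return rotated * np.exp(-diffusion * dt)

def update(belief, obs_angle, reliability):
    """Add an observation vector; reliable cues pull the estimate harder
    and lengthen (sharpen) the belief."""
    return belief + reliability * np.array([np.cos(obs_angle), np.sin(obs_angle)])

belief = np.array([1.0, 0.0])            # heading 0 rad, moderate certainty
belief = predict(belief, ang_vel=0.5, dt=0.1, diffusion=0.2)
belief = update(belief, obs_angle=0.1, reliability=0.8)
print("estimate:", np.arctan2(belief[1], belief[0]),
      "certainty:", np.linalg.norm(belief))
```

The two steps mirror the paper's account of ring attractors: diffusion shrinks the bump amplitude between observations, and sensory input both shifts the bump and restores its amplitude.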


2021 · Vol 9 (12) · pp. 1432
Author(s): Zhizun Xu, Maryam Haroutunian, Alan J. Murphy, Jeff Neasham, Rose Norman

Underwater navigation presents crucial challenges because of the rapid attenuation of electromagnetic waves in water. Conventional underwater navigation methods rely on acoustic equipment, such as ultra-short-baseline localisation systems and Doppler velocity logs. However, these suffer from low update rates, low bandwidth, environmental disturbance, and high cost. In this paper, a novel underwater visual navigation method based on multiple ArUco markers is investigated. Unlike other underwater navigation approaches based on artificial markers, a noise model for the pose estimation of a single marker and an optimal algorithm for fusing the multiple markers are developed to increase the precision of the method. Experimental tests were conducted in a towing tank. The results show that the proposed method is able to localise the underwater vehicle accurately.
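A sketch of single-marker detection and pose estimation with OpenCV's aruco module is given below; the API names vary across OpenCV versions, and the camera intrinsics, distortion coefficients, marker size, and input file are placeholder values, not those of the paper.

```python
# Marker detection and pose estimation with OpenCV's aruco module
# (classic API; names vary across OpenCV versions). All numeric
# calibration values below are placeholders.
import cv2
import numpy as np

camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist_coeffs = np.zeros(5)                # assume an undistorted camera
marker_length = 0.10                     # marker side length in metres (assumed)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("underwater_frame.png")   # hypothetical input image
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)

if ids is not None:
    # One rotation/translation vector per detected marker, in the camera frame.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    for i, marker_id in enumerate(ids.flatten()):
        print(f"marker {marker_id}: position {tvecs[i].ravel()} (camera frame)")
```

With several markers detected in one frame, each yields an independent pose estimate; the paper's contribution is a noise model and an optimal scheme for fusing these estimates, which this sketch does not implement.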


Author(s): Pinjari Vali Basha

With the rapid transformation of technology, a huge amount of data (structured and unstructured) is generated every day. With the aid of 5G technology and the IoT, the volume of data generated and processed daily is very large: approximately 2.5 quintillion bytes.

This data (Big Data) is stored and processed with the help of the Hadoop framework, which has two components for storing and retrieving data in the network:

- Hadoop Distributed File System (HDFS)
- the MapReduce algorithm

The MapReduce algorithm in native Hadoop has some limitations. If the same job is submitted again, all the steps of native Hadoop must be carried out once more before the results are available, which wastes time and resources. Improving the capabilities of the NameNode by maintaining a Common Job Block Table (CJBT) at the NameNode improves performance, at the cost of maintaining the table. The Common Job Block Table contains the metadata of files that are requested repeatedly. This avoids recomputation, reduces the number of computations, saves resources, and speeds up processing. Since the size of the Common Job Block Table keeps growing, its size should be bounded by an algorithm that keeps track of the jobs. The optimal Common Job Block Table is derived by employing an optimal algorithm at the NameNode, as sketched below.
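The sketch below illustrates the caching idea behind the Common Job Block Table as a bounded LRU map keyed by a job signature; the class and its fields are hypothetical, since the actual CJBT stores block-level metadata inside the Hadoop NameNode rather than in application code.

```python
# Hypothetical sketch of the CJBT caching idea: a bounded LRU table
# mapping a job signature to the location of previously computed results.
import hashlib
from collections import OrderedDict

class CommonJobBlockTable:
    def __init__(self, max_entries=1000):
        self.table = OrderedDict()       # insertion/LRU order bounds the size
        self.max_entries = max_entries

    def signature(self, job_name, input_blocks):
        key = job_name + "|" + ",".join(sorted(input_blocks))
        return hashlib.sha256(key.encode()).hexdigest()

    def lookup(self, job_name, input_blocks):
        sig = self.signature(job_name, input_blocks)
        if sig in self.table:
            self.table.move_to_end(sig)  # refresh LRU position
            return self.table[sig]       # repeated job: skip recomputation
        return None

    def store(self, job_name, input_blocks, result_location):
        sig = self.signature(job_name, input_blocks)
        self.table[sig] = result_location
        if len(self.table) > self.max_entries:
            self.table.popitem(last=False)  # evict least recently used entry

cjbt = CommonJobBlockTable()
cjbt.store("wordcount", ["blk_1", "blk_2"], "/results/wc_0001")
print(cjbt.lookup("wordcount", ["blk_2", "blk_1"]))  # hit: same job and blocks
```

The LRU eviction is one simple way to keep the table bounded, matching the abstract's requirement that an algorithm limit the table's growth while retaining the most frequently repeated jobs.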

