computational cost
Recently Published Documents

2022 ◽  
Vol 40 (2) ◽  
pp. 1-24
Franco Maria Nardini ◽  
Roberto Trani ◽  
Rossano Venturini

Modern search services often provide multiple options to rank the search results, e.g., sort “by relevance”, “by price”, or “by discount” in e-commerce. While the traditional rank by relevance effectively places the relevant results in the top positions of the results list, the rank by attribute can place many marginally relevant results at the head of the list, leading to a poor user experience. In the past, this issue has been addressed by investigating the relevance-aware filtering problem, which asks for the subset of results that maximizes the relevance of the attribute-sorted list. Recently, an exact algorithm was proposed to solve this problem optimally. However, its high computational cost makes it impractical for the Web search scenario, which is characterized by huge result lists and strict time constraints. For this reason, the problem is often solved using efficient yet inaccurate heuristic algorithms. In this article, we first prove performance bounds for the existing heuristics. We then propose two efficient and effective algorithms to solve the relevance-aware filtering problem. First, we propose OPT-Filtering, a novel exact algorithm that is faster than the existing state-of-the-art optimal algorithm. Second, we propose an approximate and even more efficient algorithm, ϵ-Filtering, which, given an allowed approximation error ϵ, finds a (1-ϵ)-optimal filtering, i.e., the relevance of its solution is at least (1-ϵ) times the optimum. We conduct a comprehensive evaluation of the two proposed algorithms against state-of-the-art competitors on two real-world public datasets. Experimental results show that OPT-Filtering achieves a speedup of up to two orders of magnitude with respect to the existing optimal solution, while ϵ-Filtering further improves this result by trading effectiveness for efficiency.
In particular, experiments show that ϵ-Filtering can achieve quasi-optimal solutions while being faster than all state-of-the-art competitors in most of the tested configurations.
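As a concrete, deliberately small illustration of the underlying problem, the sketch below contrasts an exhaustive search over subsets (a brute-force reference, not OPT-Filtering) with a naive relevance-threshold heuristic. The catalogue, the DCG relevance measure, and the 0.5 threshold are all illustrative assumptions, not the paper's formulation.

```python
from itertools import combinations
from math import log2

# Toy catalogue of (price, relevance) pairs. Hypothetical data for illustration.
results = [(5, 0.1), (8, 0.9), (12, 0.7), (15, 0.2), (20, 0.8)]

def dcg(selection):
    """Relevance (DCG) of the selection once it is re-sorted 'by price'."""
    ordered = sorted(selection, key=lambda r: r[0])
    return sum(rel / log2(pos + 2) for pos, (_, rel) in enumerate(ordered))

def brute_force_filtering(results):
    """Exhaustive search over all subsets: exponential, reference only."""
    best, best_score = (), 0.0
    for k in range(1, len(results) + 1):
        for subset in combinations(results, k):
            score = dcg(subset)
            if score > best_score:
                best, best_score = subset, score
    return best, best_score

def threshold_heuristic(results, min_rel=0.5):
    """Simple heuristic: drop results below a relevance threshold."""
    return [r for r in results if r[1] >= min_rel]

best, score = brute_force_filtering(results)
print("optimal subset:", sorted(best), "score: %.3f" % score)
print("heuristic     :", sorted(threshold_heuristic(results)),
      "score: %.3f" % dcg(threshold_heuristic(results)))
```

Keeping a low-relevance cheap item at the top of the price-sorted list pushes relevant items to discounted positions, which is why the optimal subset is not simply "everything".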

2022 ◽  
pp. 0272989X2110730
Anna Heath

Background The expected value of sample information (EVSI) calculates the value of collecting additional information through a research study with a given design. However, standard EVSI analyses do not account for the slow and often incomplete implementation of the treatment recommendations that follow research. Thus, standard EVSI analyses do not correctly capture the value of the study. Previous research has developed measures to calculate the research value while adjusting for implementation challenges, but estimating these measures is a challenge. Methods Based on a method that assumes the implementation level is related to the strength of evidence in favor of the treatment, 2 implementation-adjusted EVSI calculation methods are developed. These novel methods circumvent the need for analytical calculations, which were restricted to settings in which normality could be assumed. The first method developed in this article uses computationally demanding nested simulations, based on the definition of the implementation-adjusted EVSI. The second method is based on adapting the moment matching method, a recently developed efficient EVSI computation method, to adjust for imperfect implementation. The implementation-adjusted EVSI is then calculated with the 2 methods across 3 examples. Results The maximum difference between the 2 methods is at most 6% in all examples. The efficient computation method is between 6 and 60 times faster than the nested simulation method in this case study and could be used in practice. Conclusions This article permits the calculation of an implementation-adjusted EVSI using realistic assumptions. The efficient estimation method is accurate and can estimate the implementation-adjusted EVSI in practice. By adapting standard EVSI estimation methods, adjustments for imperfect implementation can be made with the same computational cost as a standard EVSI analysis. 
Highlights Standard expected value of sample information (EVSI) analyses do not account for the fact that treatment implementation following research is often slow and incomplete, meaning they incorrectly capture the value of the study. Two methods, based on nested Monte Carlo sampling and the moment matching EVSI calculation method, are developed to adjust EVSI calculations for imperfect implementation when the speed and level of the implementation of a new treatment depends on the strength of evidence in favor of the treatment. The 2 methods we develop provide similar estimates for the implementation-adjusted EVSI. Our methods extend current EVSI calculation algorithms and thus require limited additional computational complexity.
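A minimal sketch of the nested simulation idea may help fix the definitions. It assumes a toy conjugate normal model for the incremental net benefit of a new treatment with a simple adopt/reject decision; the inner posterior loop is redundant under conjugacy but is kept to show the nesting. It does not model the implementation adjustment or the moment matching method, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conjugate model: incremental net benefit theta ~ N(mu0, sd0^2);
# a proposed study of size n yields a sample mean X_bar ~ N(theta, sd_x^2 / n).
mu0, sd0 = 0.0, 2.0      # prior on incremental net benefit (illustrative units)
sd_x, n = 4.0, 50        # sampling sd and proposed sample size

def evsi_nested(n_outer=5000, n_inner=2000):
    """Nested Monte Carlo EVSI for the adopt/reject decision (sketch)."""
    prior_value = max(0.0, mu0)            # value of deciding now
    post_values = np.empty(n_outer)
    for i in range(n_outer):
        theta = rng.normal(mu0, sd0)                 # outer: draw a 'truth'
        xbar = rng.normal(theta, sd_x / np.sqrt(n))  # outer: draw study data
        # Conjugate posterior for theta | xbar
        prec = 1 / sd0**2 + n / sd_x**2
        mu_post = (mu0 / sd0**2 + n * xbar / sd_x**2) / prec
        # Inner loop: posterior expectation of net benefit per decision
        # (redundant under conjugacy, kept to show the nesting)
        draws = rng.normal(mu_post, np.sqrt(1 / prec), n_inner)
        post_values[i] = max(0.0, draws.mean())
    return post_values.mean() - prior_value

print("EVSI estimate:", round(evsi_nested(), 3))
```

The nesting is what makes the method expensive: the inner expectation is re-estimated for every simulated data set, which is exactly the cost the moment matching adaptation avoids.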

2022 ◽  
Vol 12 (2) ◽  
pp. 837
Jian Xu ◽  
Kean Chen ◽  
Lei Wang ◽  
Jiangong Zhang

Low-frequency sound field reconstruction in an enclosed space has many applications where the plane wave approximation of acoustic modes plays a crucial role. However, the basis mismatch of the plane wave directions degrades the approximation accuracy. In this study, a two-stage method combining ℓ1-norm relaxation and parametric sparse Bayesian learning is proposed to address this problem. This method involves selecting sparse dominant plane wave directions from pre-discretized directions and constructing a parameterized dictionary of low dimensionality. This dictionary is used to re-estimate the plane wave complex amplitudes and directions based on the sparse Bayesian framework using the variational Bayesian expectation-maximization method. Numerical simulations show that the proposed method can efficiently optimize the plane wave directions to reduce the basis mismatch and improve acoustic mode approximation accuracy. The proposed method incurs a slightly increased computational cost but obtains a higher reconstruction accuracy at extrapolated field points and is more robust under low signal-to-noise ratios compared with conventional methods.
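The two-stage structure can be sketched on a toy problem. Below, greedy matching pursuit stands in for the ℓ1-relaxation selection stage and a plain least-squares refit stands in for the sparse Bayesian re-estimation; the microphone geometry, frequency, source amplitudes, and noise level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 16 microphones, a coarse grid of candidate plane-wave directions.
k = 2 * np.pi * 500 / 343.0                  # wavenumber at 500 Hz in air
mics = rng.uniform(-1, 1, (16, 2))           # microphone positions (m)
grid = np.linspace(0, 2 * np.pi, 72, endpoint=False)  # candidate directions

def dictionary(angles):
    d = np.stack([np.cos(angles), np.sin(angles)])     # unit direction vectors
    return np.exp(-1j * k * mics @ d)                  # shape (n_mics, n_angles)

A = dictionary(grid)
p = A[:, [10, 40]] @ np.array([1.0 + 0.5j, 0.8 - 0.2j])    # two true waves
p += 0.01 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))

def select_directions(A, y, n_atoms=3):
    """Stage 1 (stand-in): greedily pick dominant plane-wave directions."""
    support, residual = [], y.copy()
    for _ in range(n_atoms):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        amps, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ amps
    return support, amps

support, amps = select_directions(A, p)   # stage 2 (stand-in) = the final refit
rel_residual = np.linalg.norm(p - A[:, support] @ amps) / np.linalg.norm(p)
print("selected direction indices:", sorted(support))
print("relative reconstruction residual: %.3f" % rel_residual)
```

The paper's contribution is what replaces both stand-ins: the ℓ1 relaxation handles the selection robustly, and the parametric Bayesian stage moves the selected directions off the fixed grid, which is what actually reduces basis mismatch.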

Wind ◽  
2022 ◽  
Vol 2 (1) ◽  
pp. 51-67
Lun Ma ◽  
Pierre-Luc Delafin ◽  
Panagiotis Tsoutsanis ◽  
Antonis Antoniadis ◽  
Takafumi Nishino

A fully resolved (FR) NREL 5 MW turbine model is employed in two unsteady Reynolds-averaged Navier–Stokes (URANS) simulations (one with and one without the turbine tower) of a periodic atmospheric boundary layer (ABL) to study the performance of an infinitely large wind farm. The results show that the power reduction due to the tower drag is about 5% under the assumption that the driving force of the ABL is unchanged. Two additional simulations using an actuator disc (AD) model are also conducted. The AD and FR results show nearly identical tower-induced reductions of the wind speed above the wind farm, supporting the argument that the AD model is sufficient to predict the wind farm blockage effect. We also investigate the feasibility of performing delayed-detached-eddy simulations (DDES) using the same FR turbine model and periodic domain setup. The results show complex turbulent flow characteristics within the farm, such as the interaction of large-scale hairpin-like vortices with smaller-scale blade-tip vortices. The computational cost of the DDES required for a given number of rotor revolutions is found to be similar to the corresponding URANS simulation, but the sampling period required to obtain meaningful time-averaged results seems much longer due to the existence of long-timescale fluctuations.

2022 ◽  
pp. 1-13
James J A Hammond ◽  
Francesco Montomoli ◽  
Marco Pietropaoli ◽  
Richard Sandberg ◽  
Vittorio Michelassi

Abstract This work shows the application of Gene Expression Programming to augment RANS turbulence closure modelling for flows through complex geometries designed for additive manufacturing, specifically the design of optimised internal cooling channels in turbine blades. One of the challenges in internal cooling design is the heat transfer accuracy of the RANS formulation in comparison to higher fidelity methods, which are still not used in design on account of their computational cost. However, high fidelity data can be extremely valuable for improving current lower fidelity models, and this work shows the application of data driven approaches to develop turbulence closures for an internally ribbed duct. Different approaches are compared and the results of the improved model are illustrated, first on the same geometry and then for an unseen predictive case. The work shows the potential of using data driven models for accurate heat transfer predictions even in non-conventional configurations and indicates the ability of closures learnt from complex flow cases to adapt successfully to unseen test cases.

2022 ◽  
Wenmin Zhang ◽  
Hamed Najafabadi ◽  
Yue Li

Abstract Identifying causal variants from genome-wide association studies (GWASs) is challenging due to widespread linkage disequilibrium (LD). Functional annotations of the genome may help prioritize variants that are biologically relevant and thus improve fine-mapping of GWAS results. However, classical fine-mapping methods have a high computational cost, particularly when the underlying genetic architecture and LD patterns are complex. Here, we propose a novel approach, SparsePro, to efficiently conduct genome-wide fine-mapping. Our method features two major innovations: first, by creating a sparse low-dimensional projection of the high-dimensional genotype data, we enable a linear search of causal variants instead of the combinatorial search of causal configurations used in most existing methods; second, we adopt a probabilistic framework with a highly efficient variational expectation-maximization algorithm to integrate statistical associations and functional priors. We evaluate SparsePro through extensive simulations using resources from the UK Biobank. Compared to state-of-the-art methods, SparsePro achieved more accurate and well-calibrated posterior inference with greatly reduced computation time. We demonstrate the utility of SparsePro by investigating the genetic architecture of five functional biomarkers of vital organs. We show that, compared to other methods, the causal variants identified by SparsePro are highly enriched for expression quantitative trait loci and explain a larger proportion of trait heritability. We also identify potential causal variants contributing to the genetically encoded coordination mechanisms between vital organs, and pinpoint target genes with potential pleiotropic effects. In summary, we have developed an efficient genome-wide fine-mapping method with the ability to integrate functional annotations.
Our method may have wide utility in understanding the genetics of complex traits as well as in increasing the yield of functional follow-up studies of GWASs. SparsePro software is available on GitHub at
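The flavour of probabilistic fine-mapping can be illustrated with the classic single-causal-variant approximation (Wakefield's approximate Bayes factors), which is far simpler than SparsePro but produces the same kind of output: a posterior inclusion probability (PIP) per variant. The simulated genotypes, effect size, and prior effect variance W below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy GWAS locus: 200 individuals, 10 correlated variants, one causal (index 3).
n, m, causal = 200, 10, 3
base = rng.binomial(2, 0.3, (n, 1)).astype(float)     # shared haplotype -> LD
G = np.clip(base + rng.binomial(1, 0.2, (n, m))
                 - rng.binomial(1, 0.2, (n, m)), 0, 2)
y = 0.5 * G[:, causal] + rng.standard_normal(n)

# Marginal regression per variant -> effect estimate and standard error.
betas, ses = np.empty(m), np.empty(m)
for j in range(m):
    g = G[:, j] - G[:, j].mean()
    beta = g @ y / (g @ g)
    resid = y - y.mean() - beta * g
    betas[j] = beta
    ses[j] = np.sqrt(resid @ resid / (n - 2) / (g @ g))

# Wakefield approximate Bayes factors and PIPs under the (strong) assumption
# of exactly one causal variant. W is the prior effect variance (illustrative).
W = 0.04
z = betas / ses
log_abf = 0.5 * np.log(ses**2 / (ses**2 + W)) + 0.5 * z**2 * W / (ses**2 + W)
pip = np.exp(log_abf - log_abf.max())
pip /= pip.sum()
print("PIP per variant:", np.round(pip, 3))
```

Methods like SparsePro generalize this picture to multiple causal variants and functional priors, where the naive combinatorial search over causal configurations becomes the computational bottleneck.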

Mathematics ◽  
2022 ◽  
Vol 10 (2) ◽  
pp. 223
Pedro González-Rodelas ◽  
Miguel Pasadas ◽  
Abdelouahed Kouibia ◽  
Basim Mustafa

In this paper we propose an approximation method for solving second kind Volterra integral equation systems by radial basis functions. It is based on the minimization of a suitable functional in a discrete space generated by compactly supported radial basis functions of Wendland type. We prove two convergence results; we highlight this because most recently published papers in this area do not include any. We present some numerical examples in order to show and justify the validity of the proposed method. Our proposed technique achieves acceptable accuracy using only a small amount of data, which also results in a low computational cost.
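For readers unfamiliar with the equation class, the sketch below solves a single second kind Volterra equation with a classical trapezoidal Nyström scheme rather than the paper's RBF-minimization method; the kernel and right-hand side are chosen so that the exact solution exp(x) is known.

```python
import numpy as np

# Second-kind Volterra equation u(x) = f(x) + ∫_0^x K(x,t) u(t) dt,
# solved by the classical trapezoidal Nystrom scheme (not the paper's
# RBF-minimization method) to show the structure of the problem.
f = lambda x: 1.0
K = lambda x, t: 1.0          # with these choices the exact solution is exp(x)

def volterra_trapezoid(f, K, a=0.0, b=1.0, n=200):
    h = (b - a) / n
    x = a + h * np.arange(n + 1)
    u = np.empty(n + 1)
    u[0] = f(x[0])
    for i in range(1, n + 1):
        # trapezoid rule over [a, x_i]; the unknown u[i] appears on both sides
        s = 0.5 * K(x[i], x[0]) * u[0] \
            + sum(K(x[i], x[j]) * u[j] for j in range(1, i))
        u[i] = (f(x[i]) + h * s) / (1.0 - 0.5 * h * K(x[i], x[i]))
    return x, u

x, u = volterra_trapezoid(f, K)
print("max error vs exp(x): %.2e" % np.max(np.abs(u - np.exp(x))))
```

Because the integral only runs up to x, the discrete system is triangular and can be solved by forward substitution; the RBF approach instead posits a global approximant and minimizes a residual functional over its coefficients.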

Drones ◽  
2022 ◽  
Vol 6 (1) ◽  
pp. 18
Salvatore Rosario Bassolillo ◽  
Egidio D’Amato ◽  
Immacolata Notaro ◽  
Gennaro Ariante ◽  
Giuseppe Del Core ◽  

In recent years, the use of Unmanned Aerial Vehicles (UAVs) has grown considerably in the civil sector, due to their high flexibility of use. Two key factors are currently making them more and more successful in the civil field, namely decreasing production costs and increasing navigation accuracy. In this paper, we propose a Kalman filtering-based sensor fusion algorithm using a low-cost navigation platform that contains an inertial measurement unit (IMU), five ultrasonic ranging sensors, and an optical flow camera. The aim is to improve navigation in indoor or GPS-denied environments. A multi-rate version of the Extended Kalman Filter is considered to deal with the use of heterogeneous sensors with different sampling rates and the presence of non-linearities in the model. The effectiveness of the proposed sensor platform is evaluated by means of numerical tests on the dynamic flight simulator of a quadrotor. Results show high precision and robustness of the attitude estimation algorithm, with a reduced computational cost, making it ready to be implemented on low-cost platforms.
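The multi-rate idea, predict at the fastest sensor rate and apply each correction only when that sensor's sample arrives, can be sketched with a linear 1-D stand-in for the EKF; the state, sensor rates, and noise levels below are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal 1-D multi-rate Kalman filter sketch (linear stand-in for the EKF):
# predict at the IMU rate, apply each range/flow update only when that
# sensor's sample arrives. All rates and noise levels are illustrative.
dt = 0.01                                   # 100 Hz prediction step
F = np.array([[1, dt], [0, 1]])             # state: [height, vertical speed]
Q = np.diag([1e-5, 1e-3])                   # process noise
H_range, R_range = np.array([[1.0, 0.0]]), 4e-2   # sonar: height, 10 Hz
H_flow, R_flow = np.array([[0.0, 1.0]]), 1e-2     # optical flow: speed, 20 Hz

def update(x, P, z, H, R):
    """Standard Kalman measurement update for a scalar observation."""
    S = H @ P @ H.T + R
    K_gain = P @ H.T / S
    x = x + (K_gain * (z - H @ x)).ravel()
    P = (np.eye(2) - K_gain @ H) @ P
    return x, P

x, P = np.array([0.0, 0.0]), np.eye(2)
truth = np.array([1.0, 0.2])
for step in range(500):                     # 5 s of simulated flight
    truth = F @ truth
    x, P = F @ x, F @ P @ F.T + Q           # predict every IMU tick
    if step % 10 == 0:                      # sonar sample available (10 Hz)
        z = truth[0] + rng.normal(0, np.sqrt(R_range))
        x, P = update(x, P, z, H_range, R_range)
    if step % 5 == 0:                       # flow sample available (20 Hz)
        z = truth[1] + rng.normal(0, np.sqrt(R_flow))
        x, P = update(x, P, z, H_flow, R_flow)
print("final estimate:", np.round(x, 2), "truth:", np.round(truth, 2))
```

Because the covariance P is propagated at every prediction step, measurements arriving at different rates are weighted consistently without any resampling or interpolation of the sensor streams.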

Abstract A Valid Time Shifting (VTS) method is explored for the GSI-based ensemble variational (EnVar) system modified to directly assimilate radar reflectivity at convective scales. VTS is a cost-efficient method to increase ensemble size by including subensembles before and after the central analysis time. Additionally, VTS addresses common time and phase model error uncertainties within the ensemble. VTS is examined here for assimilating radar reflectivity in a continuous hourly analysis system for a case study of 1–2 May 2019. The VTS implementation is compared against a 36-member control experiment (ENS-36), both to increase ensemble size (3×36 VTS) and as a cost-savings method (3×12 VTS), with time-shifting intervals τ between 15 and 120 min. The 3×36 VTS experiments increased the ensemble spread, with the largest subjective benefits in early-cycle analyses during convective development. The 3×12 VTS experiments captured the analysis with accuracy similar to ENS-36 by the third hourly analysis. Control forecasts launched from hourly EnVar analyses show significant skill increases in 1-h precipitation over ENS-36 out to hour 12 for the 3×36 VTS experiments, subjectively attributable to more accurate placement of the convective line. For 3×12 VTS, experiments with τ ≥ 60 min met and exceeded the skill of ENS-36 out to forecast hour 15, with VTS-3×12τ90 maximizing skill. Sensitivity results demonstrate a preference for τ = 30–60 min for 3×36 VTS and 60–120 min for 3×12 VTS. The best 3×36 VTS experiments add a computational cost of 45–67%, compared to the near tripling of costs when directly increasing ensemble size, while the best 3×12 VTS experiments save about 24–41% of costs relative to ENS-36.
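The core bookkeeping of VTS, treating each member's states shortly before and after the analysis time as additional members valid at that time, reduces to a simple array-stacking operation; the trajectory archive below is synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch of Valid Time Shifting: a 12-member ensemble is tripled by also
# treating each member's states one shift before and after the analysis time
# as extra members valid at the analysis time. Data are synthetic.
n_members, n_times, n_grid = 12, 25, 100
traj = rng.standard_normal((n_members, n_times, n_grid)).cumsum(axis=1)

def vts_ensemble(traj, t_analysis, shift):
    """Stack members at t-shift, t, t+shift into one 3N-member ensemble."""
    slabs = [traj[:, t_analysis + s, :] for s in (-shift, 0, shift)]
    return np.concatenate(slabs, axis=0)

ens = vts_ensemble(traj, t_analysis=12, shift=2)
print("VTS ensemble size:", ens.shape[0])          # 3 x 12 members
spread_base = traj[:, 12, :].std(axis=0).mean()
spread_vts = ens.std(axis=0).mean()
print("mean spread: base %.2f vs VTS %.2f" % (spread_base, spread_vts))
```

Since the shifted slabs come from integrations that already exist, the extra members cost only the bookkeeping above plus the larger ensemble update, which is why VTS is far cheaper than directly tripling the ensemble.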
