Data-driven Targeted Advertising Recommendation System for Outdoor Billboard

2022 ◽  
Vol 13 (2) ◽  
pp. 1-23
Author(s):  
Liang Wang ◽  
Zhiwen Yu ◽  
Bin Guo ◽  
Dingqi Yang ◽  
Lianbo Ma ◽  
...  

In this article, we propose and study a novel data-driven framework for Targeted Outdoor Advertising Recommendation (TOAR) with special consideration of user profiles and advertisement topics. Given an advertisement query and a set of outdoor billboards with different spatial locations and rental prices, our goal is to find a subset of billboards such that the total targeted influence is maximized under a limited budget constraint. Achieving this goal raises two challenges: (1) it is difficult to estimate targeted advertising influence in the physical world; (2) owing to NP-hardness, many common search techniques fail to provide a satisfactory solution within an acceptable time, especially for large-scale problem settings. Taking into account exposure strength, advertisement matching degree, and the advertising repetition effect, we first build a targeted influence model that characterizes how advertising influence spreads along with users' mobility. Subsequently, based on a divide-and-conquer strategy, we develop two effective approaches, i.e., a master–slave-based sequential optimization method, TOAR-MSS, and a cooperative co-evolution-based optimization method, TOAR-CC, to solve the studied problem. Extensive experiments on two real-world datasets clearly validate the effectiveness and efficiency of our proposed approaches.
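The abstract's own algorithms (TOAR-MSS and TOAR-CC) are not specified here, but the underlying problem, budgeted influence maximization over billboards, admits a standard cost-aware greedy baseline. The sketch below is a hypothetical illustration of that baseline, not the paper's method; the `influence` oracle stands in for the paper's targeted influence model.

```python
def greedy_select(billboards, influence, budget):
    """Cost-aware greedy baseline for budgeted billboard selection:
    repeatedly pick the billboard with the best marginal-influence-per-cost
    ratio that still fits within the budget.

    billboards: dict mapping billboard id -> rental price
    influence:  callable(set of ids) -> total targeted influence
    """
    chosen, spent = set(), 0.0
    while True:
        base = influence(chosen)
        best, best_ratio = None, 0.0
        for b, cost in billboards.items():
            if b in chosen or spent + cost > budget:
                continue
            gain = influence(chosen | {b}) - base
            if gain / cost > best_ratio:
                best, best_ratio = b, gain / cost
        if best is None:  # nothing affordable improves influence
            return chosen
        chosen.add(best)
        spent += billboards[best]
```

With a submodular influence function, variants of this greedy rule carry approximation guarantees, which is why it is a common point of comparison for metaheuristics like the co-evolutionary method above.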

Author(s):  
Cheng Meng ◽  
Ye Wang ◽  
Xinlian Zhang ◽  
Abhyuday Mandal ◽  
Wenxuan Zhong ◽  
...  

With advances in technologies in the past decade, the amount of data generated and recorded has grown enormously in virtually all fields of industry and science. This extraordinary amount of data provides unprecedented opportunities for data-driven decision-making and knowledge discovery. However, the task of analyzing such large-scale datasets poses significant challenges and calls for innovative statistical methods specifically designed for faster speed and higher efficiency. In this chapter, we review currently available methods for big data, with a focus on subsampling methods based on statistical leveraging and on divide-and-conquer methods.
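As a concrete illustration of the leveraging idea the chapter surveys: rows of the design matrix are sampled with probability proportional to their statistical leverage scores (the squared row norms of the left singular vectors), and a reweighted least-squares fit is computed on the subsample. The following is a minimal sketch, assuming a dense matrix small enough for an exact SVD; production methods approximate the scores.

```python
import numpy as np

def leverage_subsample(X, y, r, rng=None):
    """Fit least squares on r rows sampled proportionally to their
    statistical leverage scores, with inverse-probability reweighting."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    # Leverage scores: squared row norms of the left singular vectors U.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    probs = (U ** 2).sum(axis=1)
    probs /= probs.sum()
    idx = rng.choice(n, size=r, replace=True, p=probs)
    # Reweight rows by 1/sqrt(r * p_i) so the estimator stays unbiased.
    w = 1.0 / np.sqrt(r * probs[idx])
    beta, *_ = np.linalg.lstsq(X[idx] * w[:, None], y[idx] * w, rcond=None)
    return beta

# Toy check: the subsampled estimate approaches the full-data solution.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=5000)
beta_hat = leverage_subsample(X, y, r=500, rng=1)
```

The payoff is that the expensive solve runs on r ≪ n rows while high-leverage (influential) observations are rarely missed.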


2019 ◽  
Vol 8 (1) ◽  
pp. 21 ◽  
Author(s):  
Mengyu Ma ◽  
Ye Wu ◽  
Luo Chen ◽  
Jun Li ◽  
Ning Jing

Buffer and overlay analysis are fundamental operations widely used in Geographic Information Systems (GIS) for resource allocation, land planning, and other relevant fields. Real-time buffer and overlay analysis for large-scale spatial data remains a challenging problem because the computational scale of conventional data-oriented methods expands rapidly with data volume. In this paper, we present HiBO, a visualization-oriented buffer-overlay analysis model which is less sensitive to data volumes. In HiBO, the core task is to determine the value of pixels for display. Therefore, we introduce an efficient spatial-index-based buffer generation method and an effective set-transformation-based overlay optimization method. Moreover, we propose a fully optimized hybrid-parallel processing architecture to ensure the real-time capability of HiBO. Experiments on real-world datasets show that our approach is capable of handling ten-million-scale spatial data in real time. An online demonstration of HiBO is provided (http://www.higis.org.cn:8080/hibo).
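The key shift in a visualization-oriented model is that work scales with the number of display pixels rather than with the data volume. A minimal sketch of that idea for point buffers, with a brute-force distance test standing in for HiBO's spatial index (names and parameters are illustrative, not HiBO's API):

```python
import numpy as np

def pixel_buffer(points, radius, extent, shape):
    """Visualization-oriented buffer: decide, per display pixel, whether
    it lies within `radius` of any input point, instead of constructing
    exact buffer geometries.

    extent: (xmin, ymin, xmax, ymax) of the viewport
    shape:  (rows, cols) of the output raster
    """
    xmin, ymin, xmax, ymax = extent
    h, w = shape
    gx, gy = np.meshgrid(np.linspace(xmin, xmax, w),
                         np.linspace(ymin, ymax, h))
    mask = np.zeros(shape, dtype=bool)
    for px, py in points:  # a real system would query a spatial index here
        mask |= (gx - px) ** 2 + (gy - py) ** 2 <= radius ** 2
    return mask

m = pixel_buffer([(0.0, 0.0)], radius=1.0, extent=(-2, -2, 2, 2), shape=(5, 5))
```

Whatever the input size, the output is a fixed-size boolean raster ready for display, which is why the approach stays responsive at ten-million-point scale when paired with indexing and parallelism.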


Author(s):  
Xiao He ◽  
Francesco Alesiani ◽  
Ammar Shaker

Many real-world large-scale regression problems can be formulated as Multi-task Learning (MTL) problems with a massive number of tasks, as in the retail and transportation domains. However, existing MTL methods still fail to offer both the generalization performance and the scalability required for such problems, and scaling MTL methods up to a tremendous number of tasks is a major challenge. Here, we propose a novel algorithm, named Convex Clustering Multi-Task regression Learning (CCMTL), which integrates convex clustering on the k-nearest-neighbor graph of the prediction models. Further, CCMTL efficiently solves the underlying convex problem with a newly proposed optimization method. CCMTL is accurate, efficient to train, and empirically scales linearly in the number of tasks. On both synthetic and real-world datasets, the proposed CCMTL outperforms seven state-of-the-art (SoA) multi-task learning methods in terms of prediction accuracy as well as computational efficiency. On a real-world retail dataset with 23,812 tasks, CCMTL requires only around 30 seconds to train on a single thread, while the SoA methods need hours or even days.
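To make the "convex clustering on a graph of prediction models" idea concrete: each task fits its own weight vector, and a graph penalty pulls the weights of neighboring tasks together. The sketch below uses a *squared* pairwise penalty and plain gradient descent for simplicity; CCMTL itself uses the non-squared norm (which fuses weights exactly) and a specialized solver, so treat this only as an assumed illustration of the objective's structure.

```python
import numpy as np

def mtl_graph_regression(Xs, ys, edges, lam=1.0, lr=0.005, steps=3000):
    """Joint least-squares regression over many tasks, with a squared
    graph penalty lam * sum_(i,j) ||w_i - w_j||^2 coupling neighbors.

    Xs, ys: per-task design matrices and target vectors
    edges:  list of (i, j) index pairs from a k-NN graph over the models
    """
    d = Xs[0].shape[1]
    W = np.zeros((len(Xs), d))
    for _ in range(steps):
        grad = np.zeros_like(W)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            grad[t] = 2 * X.T @ (X @ W[t] - y)  # per-task loss gradient
        for i, j in edges:                      # graph-coupling gradient
            diff = 2 * lam * (W[i] - W[j])
            grad[i] += diff
            grad[j] -= diff
        W -= lr * grad
    return W
```

With lam = 0 the tasks decouple into independent regressions; increasing lam shrinks connected tasks toward shared weights, which is the clustering effect exploited at scale.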


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing simulation data sets on the fly. We present a method that evaluates the importance of different regions of simulation data, and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions to accelerate the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method’s efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and data decompression time was sped up by 2× compared to using a single compression method uniformly.
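The core mechanism, ranking data regions by an importance metric and assigning each region a different compression method, can be sketched in a few lines. The block below is a simplified stand-in, assuming local variance as the importance metric and zlib versus block-mean reduction as the two "methods"; the paper's actual metrics and compressors differ.

```python
import zlib
import numpy as np

def adaptive_compress(field, block=16, top_frac=0.25):
    """Split a 2D field into blocks, rank them by an importance metric
    (here: local variance), keep the most important blocks losslessly
    compressed, and reduce the rest to a single mean value."""
    h, w = field.shape
    blocks = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            sub = field[i:i + block, j:j + block]
            blocks.append(((i, j), sub, float(sub.var())))
    blocks.sort(key=lambda b: b[2], reverse=True)   # most important first
    keep = max(1, int(len(blocks) * top_frac))
    out = {}
    for k, ((i, j), sub, _) in enumerate(blocks):
        if k < keep:
            out[(i, j)] = ("lossless", zlib.compress(sub.tobytes()))
        else:
            out[(i, j)] = ("mean", float(sub.mean()))
    return out
```

Regions of interest (e.g., the instability front in a Richtmyer–Meshkov simulation) retain full fidelity, while quiescent regions cost almost nothing to transfer, which is where the in-transit speedup comes from.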


Author(s):  
Ekaterina Kochmar ◽  
Dung Do Vu ◽  
Robert Belfer ◽  
Varun Gupta ◽  
Iulian Vlad Serban ◽  
...  

Intelligent tutoring systems (ITS) have been shown to be highly effective at promoting learning as compared to other computer-based instructional approaches. However, many ITS rely heavily on expert design and hand-crafted rules. This makes them difficult to build and transfer across domains and limits their potential efficacy. In this paper, we investigate how feedback in a large-scale ITS can be automatically generated in a data-driven way, and more specifically how personalization of feedback can lead to improvements in student performance outcomes. First, we propose a machine learning approach to generate personalized feedback in an automated way, which takes the individual needs of students into account while alleviating the need for expert intervention and hand-crafted rules. We leverage state-of-the-art machine learning and natural language processing techniques to provide students with personalized feedback using hints and Wikipedia-based explanations. Second, we demonstrate that personalized feedback leads to improved success rates at solving exercises in practice: our personalized feedback model is deployed in a large-scale dialogue-based ITS with around 20,000 students, launched in 2019. We present the results of experiments with students and show that the automated, data-driven, personalized feedback leads to a significant overall improvement of 22.95% in student performance outcomes and substantial improvements in the subjective evaluation of the feedback.


Nanophotonics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 3003-3010
Author(s):  
Jiacheng Shi ◽  
Wen Qiao ◽  
Jianyu Hua ◽  
Ruibin Li ◽  
Linsen Chen

Glasses-free augmented reality is of great interest because it fuses virtual 3D images naturally with the physical world without the aid of any wearable equipment. Here we propose a large-scale spatial-multiplexing holographic see-through combiner for full-color 3D display. The pixelated metagratings, with varied orientation and spatial frequency, discretely reconstruct the propagating lightfield. The irradiance pattern of each view is tailored to form a super-Gaussian distribution with minimized crosstalk. Moreover, a spatial-multiplexing holographic combiner with a customized aperture size is adopted for white balance of the virtually displayed full-color 3D scene. In a 32-inch prototype, 16 views form smooth parallax across a viewing angle of 47°. A high transmission (>75%) over the entire visible spectrum is achieved. We demonstrate that the displayed virtual 3D scene not only preserves natural motion parallax but also mixes well with natural objects. Potential applications of this study include education, communication, product design, advertisement, and head-up displays.


2021 ◽  
Vol 10 (1) ◽  
pp. e001087
Author(s):  
Tarek F Radwan ◽  
Yvette Agyako ◽  
Alireza Ettefaghian ◽  
Tahira Kamran ◽  
Omar Din ◽  
...  

A quality improvement (QI) scheme was launched in 2017, covering a large group of 25 general practices serving a deprived registered population. The aim was to improve the measurable quality of care in a population where type 2 diabetes (T2D) care had previously proved challenging. A complex set of QI interventions was co-designed by a team of primary care clinicians, educationalists, and managers. These interventions included organisation-wide goal setting, using a data-driven approach, ensuring staff engagement, implementing an educational programme for pharmacists, facilitating web-based QI learning at scale, and using methods which ensured sustainability. This programme was used to optimise the management of T2D by improving the eight care processes and three treatment targets which form part of the annual national diabetes audit for patients with T2D. With the implemented improvement interventions, there was significant improvement in all care processes and all treatment targets for patients with diabetes. Achievement of all eight care processes improved by 46.0% (p<0.001), while achievement of all three treatment targets improved by 13.5% (p<0.001). The QI programme provides an example of a data-driven, large-scale, multicomponent intervention delivered in primary care in ethnically diverse and socially deprived areas.
