Empirical performance of the multivariate normal universal portfolio

Author(s): Choon Peng Tan, Sook Theng Pang
2021, Vol 36, pp. 02002
Author(s): Sook Theng Pang, How Hui Liew

In this research, four proposed finite order universal portfolios were used to study Malaysia's stock market comprehensively, with the constant rebalanced portfolio (CRP) as a benchmark for comparison. The empirical performance of the four universal portfolio strategies was analysed experimentally on 95 stocks from different categories in the Kuala Lumpur Stock Exchange (KLSE) from 1 January 2000 to 31 December 2015. Combinations of three stocks drawn from the selected 95 stocks were studied over short-term (1-year), middle-term (4-year and 8-year) and long-term (12-year and 16-year) durations. The empirical results showed that the proposed universal strategies outperform the CRP over the 1-year and 4-year durations, but perform poorly over the 8-year, 12-year and 16-year durations. Therefore, these four UP strategies are empirically considered to be good investment strategies in the short term.
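The CRP benchmark used above rebalances a portfolio back to fixed proportions each trading day, so its final wealth is the product of daily weighted price relatives. A minimal sketch of that computation (the function name and the example numbers are illustrative, not from the article):

```python
import numpy as np

def crp_wealth(price_relatives, weights):
    """Wealth achieved by a constant rebalanced portfolio (CRP).

    price_relatives: (T, n) array; entry [t, i] is the close/open price
        ratio of stock i on day t.
    weights: (n,) array of fixed proportions summing to 1; the portfolio
        is rebalanced back to these proportions every day.
    """
    x = np.asarray(price_relatives, dtype=float)
    b = np.asarray(weights, dtype=float)
    # Each day's growth factor is the weighted sum of price relatives;
    # total wealth is the product of the daily growth factors.
    return float(np.prod(x @ b))

# Hypothetical three-stock example over two days, uniform weights.
x = np.array([[1.02, 0.99, 1.01],
              [0.98, 1.03, 1.00]])
b = np.array([1 / 3, 1 / 3, 1 / 3])
print(crp_wealth(x, b))
```

A universal portfolio strategy, by contrast, adapts its weights over time while aiming to track the wealth of the best CRP in hindsight.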


2014, Vol 8 (2), pp. 83-127
Author(s): Jessica M McLay, Roy Lay-Yee, Barry J Milne, Peter Davis

2021, Vol 39 (2), pp. 1-29
Author(s): Qingyao Ai, Tao Yang, Huazheng Wang, Jiaxin Mao

How to obtain an unbiased ranking model by learning to rank with biased user feedback is an important research question for IR. Existing work on unbiased learning to rank (ULTR) can be broadly categorized into two groups: studies on unbiased learning algorithms with logged data, namely offline unbiased learning, and studies on unbiased parameter estimation with real-time user interactions, namely online learning to rank. While their definitions of unbiasedness differ, these two types of ULTR algorithms share the same goal: to find the best models that rank documents based on their intrinsic relevance or utility. However, most studies on offline and online unbiased learning to rank have been carried out in parallel, without detailed comparisons of their underlying theories and empirical performance. In this article, we formalize the task of unbiased learning to rank and show that existing algorithms for offline unbiased learning and online learning to rank are two sides of the same coin. We evaluate eight state-of-the-art ULTR algorithms and find that many of them can be used in both offline settings and online environments, with or without minor modifications. Further, we analyze how different offline and online learning paradigms affect the theoretical foundation and empirical effectiveness of each algorithm on both synthetic and real search data. Our findings provide important insights and guidelines for choosing and deploying ULTR algorithms in practice.
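A standard way the offline ULTR literature corrects for position bias in logged clicks is inverse propensity scoring: each clicked document is reweighted by the inverse of its estimated examination probability, so the expected loss matches the loss under true relevance labels. A minimal sketch (the function name and the pairwise hinge form are illustrative, not the article's specific algorithms):

```python
import numpy as np

def ips_pairwise_loss(scores, clicks, propensities):
    """Inverse-propensity-scored pairwise hinge loss for one ranked list.

    scores: model scores for the documents in the list.
    clicks: 1 if the document was clicked in the log, else 0.
    propensities: estimated probability that each position was examined.
    """
    scores = np.asarray(scores, dtype=float)
    clicks = np.asarray(clicks, dtype=int)
    p = np.asarray(propensities, dtype=float)
    loss = 0.0
    # Sum a hinge loss over (clicked, unclicked) pairs, dividing each
    # clicked document's contribution by its examination propensity to
    # debias the click signal.
    for i in np.flatnonzero(clicks == 1):
        for j in np.flatnonzero(clicks == 0):
            loss += max(0.0, 1.0 - (scores[i] - scores[j])) / p[i]
    return loss

scores = np.array([2.0, 1.0, 0.5])
clicks = np.array([0, 1, 0])
p = np.array([1.0, 0.5, 0.33])
print(ips_pairwise_loss(scores, clicks, p))  # → 5.0
```

Online ULTR methods pursue the same debiasing goal but estimate the needed quantities through interventions on live rankings rather than from a fixed click log.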

