Intertemporal Content Variation with Customer Learning

Author(s): Fernando Bernstein, Soudipta Chakraborty, Robert Swinney

Problem definition: We analyze a firm that sells repeatedly to a customer population over multiple periods. Although this setting has been studied extensively in the context of dynamic pricing—selling the same product in each period at a varying price—we consider intertemporal content variation, wherein the price is the same in every period but the firm varies the content available over time. Customers learn their utility upon purchasing and decide whether to purchase again in subsequent periods. The firm faces a budget for the total amount of content available during a finite planning horizon and allocates content to maximize revenue. Academic/practical relevance: A number of new business models, including video streaming services and curated subscription boxes, face the situation we model. Our results show how such firms can use content variation to increase their revenues. Methodology: We employ an analytical model in which customers decide whether to purchase in multiple successive periods and a firm determines a content allocation policy to maximize revenue. Results: Using a lower bound approximation to the problem for a horizon of general length T, we show that, although the optimal allocation policy is not, in general, constant over time, it is monotone: content value increases over time if customer heterogeneity is low and decreases otherwise. We demonstrate that the optimal policy for this lower bound problem is either optimal or very close to optimal for the general T-period problem. Furthermore, for the case of T = 2 periods, we show how two critical factors—the fraction of “new” versus “repeat” customers in the population and the size of the content budget—affect the optimal allocation policy and the importance of varying content value over time. Managerial implications: We show how firms that sell at a fixed price over multiple periods can vary content value over time to increase revenues.
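
To make the budget trade-off concrete, here is a minimal numerical sketch of a two-period version of the problem under a hypothetical customer model (the utility specification, parameters, and learning rule are illustrative assumptions, not the paper’s model, and new arrivals in period 2 are ignored): customers buy in period 1 if the expected utility of the period-1 content covers the fixed price, learn their idiosyncratic match quality upon purchasing, and repeat-purchase only if the period-2 content still clears the price; the firm grid-searches the split of its content budget.

```python
import numpy as np

rng = np.random.default_rng(0)

price = 1.0           # fixed price charged in both periods
budget = 3.0          # total content value to split across the two periods
mu, sigma = 0.6, 0.3  # mean and heterogeneity of customer match quality (assumed)
theta = rng.normal(mu, sigma, 100_000)  # simulated customer population

def revenue(v1, v2):
    """Per-customer revenue in a hypothetical two-period model: buy in period 1
    if the expected utility mu*v1 covers the price; repeat in period 2 only if
    the learned match theta makes the period-2 content v2 worth the price."""
    if mu * v1 < price:          # no one buys in period 1, so no one learns theta
        return 0.0
    repeat = theta * v2 >= price
    return price * (1.0 + repeat.mean())

splits = np.linspace(0.0, budget, 301)
revenues = [revenue(v1, budget - v1) for v1 in splits]
v1_best = splits[int(np.argmax(revenues))]
print(f"best split: period 1 = {v1_best:.2f}, period 2 = {budget - v1_best:.2f}")
```

Which split wins depends entirely on the assumed parameters; the sketch is only meant to make the budget split and the learning-driven repeat-purchase decision concrete, not to reproduce the paper’s characterization of monotone policies.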

2020, Vol. 22 (4), pp. 812-831
Author(s): Shiman Ding, Philip M. Kaminsky

Problem definition: We bound the value of collaboration in a decentralized multisupplier, multiretailer setting in which several suppliers ship to several retailers through a shared warehouse and outbound trucks from the warehouse carry the products of multiple suppliers. Academic/practical relevance: In an emerging trend in the grocery industry, multiple suppliers and retailers share a warehouse to facilitate horizontal collaboration, lower transportation costs, and increase delivery frequencies. Thus far, these so-called mixing and consolidation centers are operated in a decentralized manner, with little effort to coordinate shipments from multiple suppliers with shipments to multiple retailers. Facilitating collaboration in this setting would be challenging (both technically and in terms of the level of trust required), so it is useful to understand the potential gains from collaboration. Methodology: We extend the classic one-warehouse, multiretailer analysis to incorporate multiple suppliers and a per-truck outbound transportation cost from the warehouse, and we develop a lower bound on the cost of centralized operation as a benchmark. We then analyze decentralized versions of the system, in which each retailer and each supplier optimizes its own objective in a variety of settings, and we analytically bound the ratio of decentralized to centralized cost to quantify the loss resulting from decentralization. Results: We derive analytical bounds on the performance of several decentralized policies. The best of these, a decentralized zero-inventory ordering policy, incurs a cost of no more than 3/2 times a lower bound on the centralized cost. In computational studies, we find that the costs of decentralized policies are even closer to those of centralized policies. Managerial implications: Easy-to-implement decentralized policies are efficient and effective in this setting, suggesting that centralization (and thus a potentially complex and expensive coordination effort) is unlikely to yield significant benefits.


Author(s): C. Gizem Korpeoglu, Ersin Körpeoğlu, Sıdıka Tunç

Problem definition: We study the contest duration and the award scheme of an innovation contest in which an organizer elicits solutions to an innovation-related problem from a group of agents. Academic/practical relevance: Our interviews with practitioners at crowdsourcing platforms have revealed that the duration of a contest is an important operational decision, yet the theoretical literature has long overlooked this decision. The literature also fails to adequately explain why giving multiple unequal awards is so common on crowdsourcing platforms. We aim to fill these gaps between theory and practice and generate insights that seem consistent with both practice and empirical evidence. Methodology: We use a game-theoretic model in which the organizer decides on the contest duration and the award scheme, while each agent decides whether to participate and determines her effort over the contest duration by considering potential changes in her productivity over time. The quality of an agent’s solution improves with her effort but is also subject to output uncertainty. Results: We show that the optimal contest duration increases as the relative impact of uncertainty on an agent’s output increases, and it decreases if agent productivity increases over time. We characterize an optimal award scheme and show that giving multiple (almost always) unequal awards is optimal when the organizer’s urgency in obtaining solutions is below a certain threshold. We also show that this threshold is larger when agent productivity increases over time. Finally, consistent with empirical findings, we show that there is a positive correlation between the optimal contest duration and the optimal total award. Managerial implications: Our results suggest that the optimal contest duration increases with the novelty or sophistication of the solutions the organizer seeks and decreases when the organizer can offer support tools that increase agent productivity over time. These insights and their drivers seem consistent with practice. Our findings also suggest that giving multiple unequal awards is advisable for an organizer who has low urgency in obtaining solutions. Finally, giving multiple awards goes hand in hand with offering support tools that increase agent productivity over time. These results help explain why many contests on crowdsourcing platforms give multiple unequal awards.


2021, pp. 002224292110130
Author(s): Neeraj Bharadwaj, Michel Ballings, Prasad A. Naik, Miller Moore, Mustafa Murat Arat

At the intersection of technology and marketing, the authors develop a framework to unobtrusively detect salespersons’ faces and simultaneously extract six emotions: happiness, sadness, surprise, anger, fear, and disgust. They analyze 99,451 sales pitches on a livestream retailing platform and match them with actual sales transactions. Results reveal that each emotional display, including happiness, uniformly exhibits a negative U-shaped effect on sales over time. The maximum sales resistance appears in the middle rather than at the beginning or the end of sales pitches. Taken together, in one-to-many screen-mediated communications, salespersons should sell with a straight face. In addition, the authors derive closed-form formulae for the optimal allocation of the presence of a face and emotional displays over the presentation span. In contrast to the U-shaped effects, the optimal face presence wanes at the start, gradually builds to a crescendo, and eventually ebbs. Finally, they show how to objectively rank salespeople and circumvent biases in performance appraisals, thereby making novel contributions to people analytics. This research integrates new types of data and methods, key theoretical insights, and important managerial implications to inform the expanding opportunity that livestreaming presents to marketers to create, communicate, deliver, and capture value.
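
The measurement step of such a pipeline can be approximated with open-source tools. The sketch below uses OpenCV’s stock Haar-cascade face detector as a stand-in (it is not the authors’ detector and omits the six emotion classifiers) to log face presence over the span of a recorded pitch.

```python
import cv2  # OpenCV: pip install opencv-python

def face_presence(video_path: str, sample_every_n_frames: int = 30):
    """Return (timestamp_in_seconds, face_detected) pairs for sampled frames.

    Uses OpenCV's bundled frontal-face Haar cascade; in a fuller pipeline an
    emotion classifier would be applied to each detected face region."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unavailable
    presence, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every_n_frames == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            presence.append((frame_idx / fps, len(faces) > 0))
        frame_idx += 1
    cap.release()
    return presence
```

Aggregating such per-frame signals across the presentation span is what allows time-varying effects like those described above to be estimated against sales outcomes.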


Author(s): Tingliang Huang, Zhe Yin

Problem definition: The existing literature on probabilistic or opaque selling has largely focused on understanding why it is attractive to firms. In this paper, we answer a follow-up question: How should opaque selling be managed in a firm’s operations over time? Academic/practical relevance: Answering this question is relevant yet complex, because in practice (i) the profitability of opaque selling depends on how customers respond to the firm’s product-offering strategies and (ii) the firm’s strategies have to be responsive to customers’ purchasing decisions to maximize its total profit. Methodology: We develop a simple game-theoretic framework to capture the dynamic nature of the problem over multiple periods when customers form boundedly rational expectations of the firm’s strategies through anecdotal reasoning. We characterize the firm’s optimal pricing and product-offering policy. Results: We find that offering the high-value product with a high probability followed by a lower probability is typically optimal over time. Finally, we analyze several model extensions, such as different numbers of customers, multiple anecdotes, infinitely many periods, and limited inventory, and show the robustness of our results. Managerial implications: We demonstrate the value of using a dynamic probabilistic selling policy and prove that our dynamic policy can double the firm’s profit compared with the static policy proposed in the existing literature. In a dynamic programming model, we prove that a cycle policy oscillating between two product-offering probabilities is typically optimal in the steady state over infinitely many periods.


2016, Vol. 15 (1), pp. 67-90
Author(s): Adrien Querbes, Koen Frenken

We propose a generalized NK-model of late-mover advantage where late-mover firms leapfrog first-mover firms as user needs evolve over time. First movers face severe trade-offs between the provision of functionalities in which their products already excel and the additional functionalities requested by users later on. Late movers, by contrast, start searching when more functionalities are already known and typically come up with superior product designs. We also show that late-mover advantage is more probable for more complex technologies. Managerial implications follow.
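
For readers unfamiliar with the underlying machinery, here is a compact sketch of a standard NK fitness landscape searched by one-bit hill climbing (the late-mover dynamics, evolving user needs, and other generalized features of the model above are not reproduced; N, K, and the search rule are illustrative choices).

```python
import random

def nk_landscape(N=10, K=3, seed=0):
    """Random NK landscape: the contribution of bit i depends on bit i and K other bits."""
    rng = random.Random(seed)
    neighbors = [rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    tables = [dict() for _ in range(N)]

    def fitness(design):
        total = 0.0
        for i in range(N):
            key = (design[i],) + tuple(design[j] for j in neighbors[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()  # contribution drawn lazily, then fixed
            total += tables[i][key]
        return total / N

    return fitness

def hill_climb(fitness, N=10, seed=1):
    """One-bit-flip local search from a random starting design."""
    rng = random.Random(seed)
    design = [rng.randint(0, 1) for _ in range(N)]
    improved = True
    while improved:
        improved = False
        for i in range(N):
            candidate = design.copy()
            candidate[i] ^= 1  # flip one design choice
            if fitness(candidate) > fitness(design):
                design, improved = candidate, True
    return design, fitness(design)

print(hill_climb(nk_landscape()))
```

Higher K makes the landscape more rugged and local search more prone to lock-in, which is consistent with the finding above that late-mover advantage is more probable for more complex technologies.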


Author(s): Can Zhang, Atalay Atasu, Karthik Ramachandran

Problem definition: Faced with the challenge of serving beneficiaries with heterogeneous needs under budget constraints, some nonprofit organizations (NPOs) have adopted an innovative solution: providing partially complete products or services to beneficiaries. We seek to understand what drives an NPO’s choice of partial completion as a design strategy and how it interacts with the level of variety offered in the NPO’s product or service portfolio. Academic/practical relevance: Although partial product or service provision has been observed in nonprofit operations, there is limited understanding of when it is an appropriate strategy—a void that we seek to fill in this paper. Methodology: We synthesize the practices of two NPOs operating in different contexts to develop a stylized analytical model of an NPO’s product/service completion and variety choices. Results: We identify when and to what extent partial completion is optimal for an NPO. We also characterize how an NPO should allocate its budget between product/service variety and completion. Our analysis sheds light on how beneficiary characteristics (e.g., the heterogeneity of their needs and their capability to self-complete) and NPO objectives (e.g., total-benefit maximization versus fairness) affect the optimal levels of variety and completion. Managerial implications: We provide three key observations. (1) Partial completion is not a compromise solution to budget limitations but can be an optimal strategy for NPOs under a wide range of circumstances, even in the presence of ample resources. (2) Partial provision is particularly valuable when beneficiary needs are highly heterogeneous or when beneficiaries have high self-completion capabilities. A higher self-completion capability generally implies a lower optimal completion level; however, it may lead to either a higher or a lower optimal variety level. (3) Although providing incomplete products may appear to burden beneficiaries, a lower completion level can be optimal when fairness is factored into an NPO’s objective or when beneficiary capabilities are more heterogeneous.
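
A toy brute-force search can illustrate the variety-versus-completion trade-off; the benefit function, cost structure, and parameters below are invented for illustration and are not the paper’s model: beneficiary needs are uniform on [0, 1], each variant costs an amount proportional to its completion level, mismatch with the nearest variant is penalized steeply, and self-completion capability partially substitutes for missing completion.

```python
import numpy as np

def best_design(budget=2.0, alpha=0.4, n_beneficiaries=10_000, seed=0):
    """Brute-force search over variety (number of variants) and completion level
    in a toy NPO model. alpha is the beneficiaries' self-completion capability."""
    rng = np.random.default_rng(seed)
    needs = rng.uniform(0.0, 1.0, n_beneficiaries)
    best = (None, None, -1.0)
    for n in range(1, 21):                        # candidate variety levels
        max_c = min(1.0, budget / n)              # budget constraint: n * c <= budget
        for c in np.linspace(0.05, max_c, 20):    # candidate completion levels
            variants = (np.arange(n) + 0.5) / n   # equally spaced designs on [0, 1]
            dist = np.abs(needs[:, None] - variants[None, :]).min(axis=1)
            effective = c + alpha * (1.0 - c)     # self-completion fills part of the gap
            benefit = (np.maximum(0.0, 1.0 - 4.0 * dist) * effective).mean()
            if benefit > best[2]:
                best = (n, float(c), float(benefit))
    return best  # (variety, completion level, average benefit)

print(best_design())
```

For these particular (arbitrary) parameters the search settles on a completion level strictly below one, echoing observation (1) above that partial completion need not be a forced compromise; raising alpha, the self-completion capability, pushes the toy search toward more variety and lower completion, in line with observation (2).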


Author(s): Andres Alban, Philippe Blaettchen, Harwin de Vries, Luk N. Van Wassenhove

Problem definition: Achieving broad access to health services (a target within the sustainable development goals) requires reaching rural populations. Mobile healthcare units (MHUs) visit remote sites to offer health services to these populations. However, limited exposure, health literacy, and trust can lead to sigmoidal (S-shaped) adoption dynamics, which present a difficult obstacle in allocating limited MHU resources. It is tempting to allocate resources in line with current demand, as seen in practice. However, to maximize access in the long term, this may be far from optimal, and insights into allocation decisions are limited. Academic/practical relevance: We present a formal model of the long-term allocation of MHU resources as the optimization of a sum of sigmoidal functions. We develop insights into optimal allocation decisions and propose pragmatic methods for estimating our model’s parameters from data available in practice. We demonstrate the potential of our approach by applying our methods to family planning MHUs in Uganda. Methodology: We use nonlinear optimization of sigmoidal functions and machine learning, especially gradient boosting. Results: Although the problem is NP-hard, we provide closed-form solutions to particular cases of the model that yield insights into the optimal allocation. Operationalizable heuristic allocations grounded in these insights outperform allocations based on current demand. Our estimation approach, designed for interpretability, achieves better predictions than standard methods in the application. Managerial implications: Incorporating the future evolution of demand, driven by community interaction and saturation effects, is key to maximizing access with limited resources. Instead of proportionally assigning more visits to sites with high current demand, a group of sites should be prioritized. The optimal allocation among prioritized sites aims to equalize demand at the end of the planning horizon. Therefore, more visits should generally be allocated to sites where the cumulative demand potential is higher and, counterintuitively, often to sites where demand is currently lower.
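
A minimal sketch of the structure of this allocation problem, under assumed logistic adoption curves (the site parameters and the greedy marginal-gain rule are illustrative, not the paper’s estimated model or proposed heuristics): each site’s cumulative demand is S-shaped in the number of visits received, and a fixed budget of visits is assigned one at a time to the site with the largest marginal gain.

```python
import math

def adoption(visits, potential, steepness, midpoint):
    """Cumulative adoption at a site after a given number of visits (assumed S-shaped)."""
    return potential / (1.0 + math.exp(-steepness * (visits - midpoint)))

# (potential, steepness, midpoint) per site -- illustrative numbers only
sites = [(900, 0.9, 6), (700, 0.7, 4), (500, 1.2, 8), (400, 0.5, 3)]
budget = 20          # total number of visits to allocate
visits = [0] * len(sites)

for _ in range(budget):
    # give the next visit to the site with the largest marginal adoption gain
    gains = [adoption(v + 1, *s) - adoption(v, *s) for v, s in zip(visits, sites)]
    visits[gains.index(max(gains))] += 1

print(visits)  # here, two of the four sites end up receiving all of the visits
```

Because the objective is a sum of sigmoidal rather than concave functions, this greedy pass is only a heuristic and is myopic: a site whose S-curve takes off late can receive nothing even when its demand potential is large, which is why the allocations described above prioritize a group of sites and aim to equalize demand at the end of the planning horizon rather than chase current marginal demand.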


Author(s): Tianqin Shi, Nicholas C. Petruzzi, Dilip Chhajed

Problem definition: The eco-toxicity arising from unused pharmaceuticals has regulators advocating the benign design concept of “green pharmacy,” but high research and development expenses can be prohibitive. We therefore examine the impacts of two regulatory mechanisms, patent extension and take-back regulation, on inducing drug manufacturers to go green. Academic/practical relevance: One incentive suggested by the European Environment Agency is a patent extension for a company that redesigns its already patented pharmaceutical to be more environmentally friendly. This incentive can encourage both the development of degradable drugs and the disclosure of technical information. Yet it is unclear how effective the extension would be in inducing green pharmacy and in maximizing social welfare. Methodology: We develop a game-theoretic model in which an innovative company collects monopoly profits on a patented pharmaceutical but faces competition from a generic rival after the patent expires. A social-welfare-maximizing regulator is the Stackelberg leader. The regulator leads by offering a patent extension to the innovative company while also imposing take-back regulation on the pharmaceutical industry. The two profit-maximizing companies then respond by setting drug prices and choosing whether to invest in green pharmacy. Results: The regulator’s optimal patent extension offer can induce green pharmacy, but only if the offer exceeds a threshold length that depends on the degree of product differentiation in the pharmaceutical industry. The regulator’s correspondingly optimal take-back regulation generally prescribes a required collection rate that decreases as the optimal patent extension offer increases, and vice versa. Managerial implications: By isolating green pharmacy as a target for addressing pharmaceutical eco-toxicity at its source, the regulatory policy we consider, which combines the incentive of earning a patent extension with the penalty of complying with take-back regulation, serves as a useful starting point for policymakers seeking to balance economic welfare with environmental stewardship.


2021, Vol. ahead-of-print
Author(s): Manfred Bornemann, Kay Alwert, Markus Will

Purpose: This article reports on the background, the conceptual ideas, and the lessons learned from more than 20 years of intellectual capital (IC) statements and management, with a country focus on Germany and some international developments. It calls for an integrated management approach for IC and offers case study evidence on how to accomplish this quest. Design/methodology/approach: The article reports on the German initiative “Intellectual Capital Statement made in Germany” (ICS m.i.G.). A brief review of the literature describes the background and theoretical foundation of the German IC method. A short description of the method is followed by four detailed case studies that illustrate the long-term impact of IC management in very different organizations. A discussion of lessons learned from more than 200 implementations and an outlook on current and future developments conclude the article. Findings: ICS m.i.G. was successful in providing a framework to systematically identify IC, evaluate the status quo of IC relative to strategic requirements, visualize interdependencies among IC, business processes, and business results, and connect IC reporting with internal management routines and external communication. However, ICS is not an isolated method; it delivers the maximum benefit when integrated with strategy development, strategy implementation, and business process optimization, accompanied by change management routines. Strong ties to human resource management, information technology departments, quality management, research and development teams, and business operations as the core of an organization help to yield the most from ICS m.i.G. Over time, the focus of managing IC changes, and maturity leads to deutero learning. Practical implications: ICS m.i.G. proved easy to apply and cost-efficient for SMEs, larger corporations, and networks. It helps them to better accomplish their objectives and to adjust their business models. The guidelines, in German and English, as well as the released software application, have been downloaded more than 100,000 times. A certification process based on a three-tier training module is available and has been successfully completed by more than 400 practitioners. ICS m.i.G. supports current standards of knowledge management, such as ISO 9001, ISO 30401, or DIN SPEC PAS 91443, and therefore will most likely have a continuing impact on knowledge-based value creation. Originality/value: This paper reports lessons learned from the country-wide IC initiative in Germany over the last 20 years, initiated and supported by the authors. Several elements of the method have been published over time, but no comprehensive view of the lessons learned has been published so far.


Author(s): Bin Lu, Jiandong Zhang, Rongfang Yan

This paper studies the optimal allocation policy of a coherent system with independent heterogeneous components and dependent subsystems. The system is assumed to consist of two groups of components whose lifetimes follow proportional hazard (PH) or proportional reversed hazard (PRH) models. We investigate the optimal allocation strategy by finding the number $k$ of components from Group A placed in the up-series system. First, some sufficient conditions are provided, in the sense of the usual stochastic order, to compare the lifetimes of two parallel–series systems with dependent subsystems, and we obtain the hazard rate and reversed hazard rate orders when the two subsystems have independent lifetimes. Second, similar results are obtained for two series–parallel systems under certain conditions. Finally, we generalize the corresponding results to parallel–series and series–parallel systems with multiple subsystems from the viewpoint of the minimal path and minimal cut sets, respectively. Some numerical examples are presented to illustrate the theoretical findings.
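
A small simulation can make the allocation question concrete. The sketch below assumes independent exponential component lifetimes (a special case of the PH model) and therefore does not capture the dependent-subsystem structure or the stochastic-order machinery above; it simply compares the expected lifetime of a parallel–series system as the number $k$ of Group A components placed in the first series subsystem varies.

```python
import numpy as np

def mean_system_lifetime(k, n_a=4, n_b=4, size1=4, rate_a=1.0, rate_b=2.0,
                         n_sims=200_000, seed=0):
    """Parallel-series system of two series subsystems (sizes size1 and
    n_a + n_b - size1). Group A components (rate_a) are the more reliable ones;
    k of them go into subsystem 1 and the rest into subsystem 2."""
    rng = np.random.default_rng(seed)
    rates1 = np.array([rate_a] * k + [rate_b] * (size1 - k))
    rates2 = np.array([rate_a] * (n_a - k) + [rate_b] * (n_b - size1 + k))
    t1 = rng.exponential(1.0 / rates1, size=(n_sims, len(rates1))).min(axis=1)
    t2 = rng.exponential(1.0 / rates2, size=(n_sims, len(rates2))).min(axis=1)
    return np.maximum(t1, t2).mean()  # the system works while either series path works

for k in range(5):
    print(k, round(float(mean_system_lifetime(k)), 4))
```

For these illustrative rates, the extreme allocations (k = 0 or k = 4) outperform the balanced one (k = 2), the kind of comparison the results above formalize via the usual stochastic order and the (reversed) hazard rate orders.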

