Is the Simple Price Premium that Simple?

2019 ◽  
Vol 16 (1) ◽  
pp. 25-47
Author(s):  
Yuying Shi

Abstract: Brand equity has been a perennial topic in marketing. Among all types of measures, price premium is widely accepted because it is simple yet efficient. However, there is some disagreement about whether this measure captures enough facets of brand equity and whether price premium appropriately reflects the market position of a brand. With recent methodological advances, more sophisticated brand equity measures have become available, and a natural question arises as to whether they are better than price premium. Using large-scale national scanner data covering 47 major U.S. markets, we employ an aggregated demand model to estimate a complex brand equity measure and compare it with price premium. Our results suggest that the simple price premium is consistent with the complex measure under certain conditions. We provide managers with suggestions for choosing the most appropriate brand equity measures to signify their brands’ values.
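
To make the comparison concrete, here is a minimal sketch of one common aggregated demand approach, a Berry-style logit inversion that recovers brand intercepts (a structural "brand equity" measure) alongside a price coefficient. The specification, synthetic data, and variable names below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical market-level data: shares and prices for two brands plus an
# outside option, observed across T markets (values are illustrative only).
rng = np.random.default_rng(0)
T = 47
prices = rng.uniform(2.0, 4.0, size=(T, 2))             # brand A, brand B
true_delta = np.array([1.0, 0.4])                        # "brand equity" intercepts
alpha = 1.2                                              # price sensitivity
util = true_delta - alpha * prices + rng.normal(0, 0.05, size=(T, 2))
expu = np.exp(util)
shares = expu / (1.0 + expu.sum(axis=1, keepdims=True))  # logit shares, outside good = 1
s0 = 1.0 - shares.sum(axis=1)

# Berry (1994) inversion: ln(s_j) - ln(s_0) = delta_j - alpha * p_j.
# Regress the inverted shares on brand dummies and price to recover both.
y = (np.log(shares) - np.log(s0)[:, None]).ravel()
brand_dummy = np.tile(np.eye(2), (T, 1))
X = np.column_stack([brand_dummy, -prices.ravel()])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
delta_hat, alpha_hat = coef[:2], coef[2]

# Compare the monetized brand-equity gap with the observed price premium.
print("estimated brand intercepts:", delta_hat, "alpha:", alpha_hat)
print("monetized equity gap:", (delta_hat[0] - delta_hat[1]) / alpha_hat)
print("average price premium of brand A:", (prices[:, 0] - prices[:, 1]).mean())
```

Dividing the estimated intercept gap by the price coefficient converts the equity difference into price units, which can then be set against the observed price premium.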

2016 ◽  
Vol 50 (9/10) ◽  
pp. 1672-1702 ◽  
Author(s):  
Ji Yan ◽  
Kun Tian ◽  
Saeed Heravi ◽  
Peter Morgan

Purpose: This paper aims to investigate consumers’ demand patterns for products with nutritional benefits and products with no nutritional benefits across processed healthy and unhealthy foods. It integrates price changes (i.e. increases and decreases) into a demand model and quantifies their relative impact on the quantity of food purchased. First, how demand patterns vary across processed healthy and unhealthy products is investigated; second, how demand patterns vary across nutrition-benefited (NB) products and non-nutrition-benefited (NNB) products is examined; and third, how consumers respond to price increases and decreases for NB products across processed healthy and unhealthy foods is investigated. Design/methodology/approach: A demand model quantifying price-change scenarios in consumer food choice behaviour is proposed, controlling for heterogeneity at the household, store and brand levels. Findings: Consumers exhibit greater sensitivity to price decreases and less sensitivity to price increases across both processed healthy and unhealthy foods. Moreover, the research shows that consumers’ demand sensitivity is greater for NNB products than for NB products, supporting our prediction that NB products have higher brand equity than NNB products. Furthermore, the research shows that consumers are more responsive to price decreases than price increases for processed healthy NB foods, but more responsive to price increases than price decreases for unhealthy NB foods. These findings suggest that consumers exhibit a desirable demand pattern for products with nutritional benefits. Originality/value: Although studies on the effects of nutritional benefits on demand have proliferated in recent years, researchers have only estimated their impact without considering the effect of price changes. This paper contributes by examining consumers’ price sensitivity for NB products across processed healthy and unhealthy foods, based on consumer scanner data and considering both directions of price change.
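
A minimal sketch of how the direction of a price change can be separated in a demand regression so that increases and decreases receive their own elasticities. The specification and the synthetic data below are illustrative assumptions; the paper's model additionally controls for household, store and brand heterogeneity.

```python
import numpy as np

# Synthetic purchase records; the paper's scanner data are not reproduced here.
rng = np.random.default_rng(1)
n = 5000
price = rng.uniform(1.0, 3.0, n)
lag_price = rng.uniform(1.0, 3.0, n)
dlog_p = np.log(price) - np.log(lag_price)

# Split each log price change by direction so the regression can assign
# separate elasticities to increases and decreases.
dlog_p_up = np.where(dlog_p > 0, dlog_p, 0.0)
dlog_p_down = np.where(dlog_p < 0, dlog_p, 0.0)

# Synthetic demand with a stronger response to decreases than to increases.
dlog_q = -0.8 * dlog_p_up - 1.6 * dlog_p_down + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), dlog_p_up, dlog_p_down])
beta, *_ = np.linalg.lstsq(X, dlog_q, rcond=None)
print("elasticity to price increases:", round(beta[1], 2))
print("elasticity to price decreases:", round(beta[2], 2))
```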


2019 ◽  
Vol 19 (3) ◽  
pp. 253-277
Author(s):  
Pankaj Kaprwan ◽  
Sameer Mathur

Practitioners and marketers have leveraged brand equity to charge a price premium for their products relative to the competition, yet we do not find a systematic literature review that captures the importance of this metric as an outcome of brand equity. The purpose of this review is to fill this gap. This review (a) identifies and summarises the relevant literature, providing an understanding of brand equity; (b) highlights the academic literature that compares various brand equity measures and identifies price premium as a key metric; and (c) highlights empirical research that identifies and validates price premium as a key metric in the B2C marketplace. The review concludes by suggesting future research directions that explore the relevance of the price premium metric as a measure of service brand equity.


2018 ◽  
Vol 17 (2) ◽  
pp. 150-165 ◽  
Author(s):  
Rafael Barreiros Porto

Objective: Identifying which brand in a category conveys more or less value to the consumer raises questions about the composition of brand equity measures and the brands that make up the category. Measures of Consumer-Based Brand Equity (CBBE) may include functional assessments of consumers’ brand choice and firms’ brand performance, as long as they embrace competing brands. In view of this, this study validates a measurement model of Consumer-Based Brand Equity for competing brands of products and services, testing for possible moderation (product/service and experienced/non-experienced consumers). Method: The model, applied to 39 brands, comprised six metrics: awareness, perceived quality, loyalty, association, exclusiveness and willingness to pay a price premium. Confirmatory factor analysis revealed the CBBE structure, and multigroup moderation tests compared products with services and experienced with non-experienced consumers. Main Result: The metrics show convergent validity with very good model fit. The metrics are similar for products/services, but differ for consumers with/without experience (evidence of moderation). Contributions: Based on this measure, researchers and marketers can identify whether their brand's performance is perceived as better or worse than that of its competitors. Relevance/Originality: This article is the first to offer a more complete scale to assess the consumer-based brand equity of products and services, allowing researchers to compare competitiveness between brands.
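
Convergent validity of a reflective factor such as the CBBE metrics above is commonly checked with the average variance extracted (AVE) and composite reliability (CR) computed from standardized loadings. The sketch below uses made-up loadings purely for illustration; it is not the study's estimation.

```python
import numpy as np

# Hypothetical standardized loadings for one CBBE factor's indicators
# (e.g. awareness items); values are illustrative, not the study's estimates.
loadings = np.array([0.78, 0.82, 0.74, 0.69])
error_var = 1.0 - loadings**2                  # assuming standardized indicators

ave = np.mean(loadings**2)                     # average variance extracted
cr = loadings.sum()**2 / (loadings.sum()**2 + error_var.sum())  # composite reliability

# Common rules of thumb: AVE > 0.5 and CR > 0.7 support convergent validity.
print(f"AVE = {ave:.2f}, CR = {cr:.2f}")
```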


2018 ◽  
Vol 16 (1) ◽  
pp. 67-76
Author(s):  
Disyacitta Neolia Firdana ◽  
Trimurtini Trimurtini

This research aimed to determine the suitability and effectiveness of big book media for learning equivalent fractions among fourth-grade students. The research method is Research and Development (R&D). The study was conducted in the fourth grade of SDN Karanganyar 02, Kota Semarang. Data sources included media validation, material validation, learning outcomes, and teacher and student responses to the developed media. A pre-experimental design with a one-group pretest-posttest design was used. The developed big book consists of equivalent fractions material, student learning activity sheets with rectangle- and circle-shaped pictures, and questions about equivalent fractions, and it was developed based on student and teacher needs. The big book achieved a media validity score of 3.75 (very good criteria) and a score of 3 from material experts (good criteria). In the large-scale trial, 82.14% of students reached mastery on the posttest. The N-gain of 0.55 falls in the "medium" category, and the t-test result (9.6320 > 2.0484) indicates that the average posttest score is higher than the average pretest score. Based on these data, this study has produced big book media that is suitable and effective for learning equivalent fractions in the fourth grade of elementary school.
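
The normalized gain and paired t-test reported above follow standard formulas; a minimal sketch with made-up pretest/posttest scores (not the study's data) is shown below. The critical value 2.0484 corresponds to a two-tailed test at alpha = 0.05 with 28 degrees of freedom.

```python
import numpy as np
from scipy import stats

# Made-up pretest/posttest scores (0-100) for illustration; not the study's data.
pre = np.array([45, 50, 55, 60, 40, 52, 48, 58, 62, 44], dtype=float)
post = np.array([75, 80, 78, 85, 70, 82, 76, 88, 90, 72], dtype=float)

# Normalized gain (Hake): g = (<post> - <pre>) / (max_score - <pre>);
# 0.3 <= g < 0.7 is conventionally classified as "medium".
n_gain = (post.mean() - pre.mean()) / (100.0 - pre.mean())

# Paired t-test on the same students' scores; compare t with the critical value.
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"N-gain = {n_gain:.2f}, t = {t_stat:.3f}, p = {p_value:.4f}")
```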


2021 ◽  
Vol 9 (3) ◽  
pp. 264
Author(s):  
Shanti Bhushan ◽  
Oumnia El Fajri ◽  
Graham Hubbard ◽  
Bradley Chambers ◽  
Christopher Kees

This study evaluates the capability of Navier–Stokes solvers in predicting forward and backward plunging breaking, including an assessment of the effect of grid resolution, turbulence model, and VoF and CLSVoF interface models on the predictions. For this purpose, 2D simulations are performed for four test cases: dam break, solitary wave run-up on a slope, flow over a submerged bump, and solitary wave over a submerged rectangular obstacle. Plunging wave breaking involves a high wave crest, plunger formation, and splash-up, followed by a second plunger and chaotic water motions. Coarser grids reasonably predict the wave-breaking features, but finer grids are required for accurate prediction of the splash-up events. However, instabilities are triggered at the air–water interface (primarily in the air flow) on very fine grids, which induces surface peel-off or kinks and roll-up of the plunger tips. Reynolds-averaged Navier–Stokes (RANS) turbulence models result in high eddy viscosity in the air–water region, which dissipates the fluid momentum and adversely affects the predictions. Both VoF and CLSVoF methods predict the large-scale plunging-breaking characteristics well; however, they vary in the prediction of the finer details. The CLSVoF solver predicts the splash-up event and secondary plunger better than the VoF solver; however, the latter predicts the plunger shape better than the former for the solitary-wave run-up on a slope case.


2012 ◽  
Vol 8 (S291) ◽  
pp. 375-377 ◽  
Author(s):  
Gregory Desvignes ◽  
Ismaël Cognard ◽  
David Champion ◽  
Patrick Lazarus ◽  
Patrice Lespagnol ◽  
...  

Abstract: We present an ongoing survey with the Nançay Radio Telescope at L-band. The targeted area is 74° ≲ l < 150° and 3.5° < |b| < 5°. This survey is characterized by a long integration time (18 min), large bandwidth (512 MHz), and high time and frequency resolution (64 μs and 0.5 MHz), giving a nominal sensitivity limit of 0.055 mJy for long-period pulsars. This is about two times better than the mid-latitude HTRU survey and is designed to be complementary to current large-scale surveys. This survey will be more sensitive to transients (RRATs, intermittent pulsars), distant and faint millisecond pulsars, and scintillating sources (or any other kind of radio-faint source) than all previous short-integration surveys.
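
The quoted sensitivity limit follows from the standard pulsar radiometer equation. The sketch below plugs in the integration time and bandwidth from the abstract together with assumed values for the system temperature, gain, detection threshold and pulse duty cycle (illustrative values, not the survey's published parameters), which lands in the same ballpark as the quoted 0.055 mJy.

```python
import numpy as np

# Pulsar-survey sensitivity from the standard radiometer equation.
# Tsys, gain, S/N threshold and duty cycle are assumed illustrative values,
# not the published Nançay survey parameters.
snr_min = 8.0          # detection threshold
t_sys = 35.0           # system temperature [K] (assumed)
gain = 1.4             # telescope gain [K/Jy] (assumed)
n_pol = 2              # summed polarizations
t_int = 18 * 60.0      # integration time [s], from the abstract
bw = 512e6             # bandwidth [Hz], from the abstract
duty = 0.05            # assumed pulse duty cycle W/P

s_min = (snr_min * t_sys / (gain * np.sqrt(n_pol * t_int * bw))
         * np.sqrt(duty / (1.0 - duty)))
print(f"minimum detectable flux density ~ {s_min * 1e3:.3f} mJy")
```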


2018 ◽  
Vol 5 (3) ◽  
pp. 172265 ◽  
Author(s):  
Alexis R. Hernández ◽  
Carlos Gracia-Lázaro ◽  
Edgardo Brigatti ◽  
Yamir Moreno

We introduce a general framework for exploring the problem of selecting a committee of representatives, with the aim of studying a networked voting rule based on a decentralized large-scale platform that can ensure strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representativeness exists in the form of an inverse square root law, and that the normalized committee size approximately scales with the inverse of the community size, allowing scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals’ interactions, except for the presence of a few individuals with very high connectivity, which can have a marginally negative effect on the committee selection process.
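
One way to read the reported scaling relations, written out explicitly. The symbols and constants below are my own labels (k: committee size, N: community size, eps: representativeness gap, c1 and c2: unspecified constants), not notation taken from the paper.

```latex
% Hedged restatement of the abstract's two scaling claims
% (k: committee size, N: community size, \varepsilon: representativeness gap,
%  c_1, c_2: unspecified proportionality constants).
\varepsilon(k) \propto \frac{c_1}{\sqrt{k}},
\qquad
\frac{k}{N} \approx \frac{c_2}{N}
```

Read literally, the second relation would mean the absolute committee size stays roughly constant as the community grows, which would account for the claimed scalability to very large populations.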


Energies ◽  
2021 ◽  
Vol 14 (21) ◽  
pp. 7422
Author(s):  
Min-Kyu Son

Upscaling photoelectrodes for a practical photoelectrochemical (PEC) water splitting system is still challenging because the PEC performance of large-scale photoelectrodes is significantly lower than that of lab-scale photoelectrodes. In an effort to overcome this challenge, sputtered gold (Au) and copper (Cu) grid lines were introduced in this work to improve the PEC performance of a large-scale cuprous oxide (Cu2O) photocathode. It was demonstrated that Cu grid lines are more effective than Au grid lines for improving the PEC performance of the large-scale Cu2O photocathode, because the intrinsic conductivity and quality of the Cu grid lines are better than those of the Au grid lines. As a result, the PEC performance of a 25-cm2 Cu2O photocathode with Cu grid lines was almost double that of one without grid lines, owing to the improved charge transport across the large-area substrate provided by the Cu grid lines. Finally, a 50-cm2 Cu2O photocathode with Cu grid lines was tested under outdoor conditions in natural sunlight. This is the first outdoor PEC demonstration of a large-scale Cu2O photocathode with Cu grid lines, and it gives insight into the development of efficient upscaled PEC photoelectrodes.


2016 ◽  
Author(s):  
Dominik Paprotny ◽  
Oswaldo Morales Nápoles

Abstract. Large-scale hydrological modelling of flood hazard requires adequate extreme discharge data. Models based on physics are applied alongside those utilizing only statistical analysis. The former require enormous computational power, while the latter are mostly limited in accuracy and spatial coverage. In this paper we introduce an alternative statistical approach based on Bayesian Networks (BN), a graphical model for dependent random variables. We use a non-parametric BN to describe the joint distribution of extreme discharges in European rivers and variables describing the geographical characteristics of their catchments. Data on annual maxima of daily discharges from more than 1800 river gauge stations were collected, together with information on the terrain, land use and climate of the catchments that drain to those locations. The (conditional) correlations between the variables are modelled through copulas, with the dependency structure defined in the network. The results show that, using this method, mean annual maxima and return periods of discharges can be estimated with an accuracy similar to existing studies using physical models for Europe, and better than a comparable global statistical method. Performance of the model varies slightly between regions of Europe, but is consistent between different time periods and is not affected by a split-sample validation. The BN was applied to a large domain covering rivers of all sizes across the continent, for both present and future climate, showing large variation in the influence of climate change on river discharges, as well as large differences between emission scenarios. The method can be used to provide quick estimates of extreme discharges at any location as input for hydraulic modelling.
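
As a rough illustration of the copula-based conditioning that underlies a non-parametric BN, the sketch below collapses the idea to a single Gaussian copula over a discharge variable and two catchment descriptors: margins are handled empirically, dependence is captured by the correlation of the normal scores, and a conditional estimate is mapped back through the empirical margin. All data and variable names are synthetic assumptions; the paper's model uses the full network structure and many more covariates.

```python
import numpy as np
from scipy import stats

# Toy data: annual-maximum discharge plus two catchment descriptors
# (e.g. area and mean precipitation). Values are synthetic and illustrative.
rng = np.random.default_rng(2)
n = 1800
area = rng.lognormal(6, 1, n)
precip = rng.normal(800, 150, n)
discharge = 0.05 * area * (precip / 800) ** 1.5 * rng.lognormal(0, 0.3, n)
data = np.column_stack([discharge, area, precip])

# 1) Transform each margin to standard normal via empirical ranks
#    (the "non-parametric" part: no parametric marginal distributions).
u = (stats.rankdata(data, axis=0) - 0.5) / n
z = stats.norm.ppf(u)

# 2) Fit a Gaussian copula: the correlation matrix of the normal scores.
R = np.corrcoef(z, rowvar=False)

# 3) Condition discharge (index 0) on the descriptors (indices 1, 2) using the
#    standard multivariate-normal conditioning formula.
idx_y, idx_x = 0, [1, 2]
w = np.linalg.solve(R[np.ix_(idx_x, idx_x)], R[idx_y, idx_x])

def predict_discharge(area_new, precip_new):
    """Conditional median discharge, mapped back through the empirical margin."""
    # Normal scores of the new covariates via the empirical CDFs of the sample.
    zx = stats.norm.ppf([
        (np.searchsorted(np.sort(area), area_new) + 0.5) / (n + 1),
        (np.searchsorted(np.sort(precip), precip_new) + 0.5) / (n + 1),
    ])
    zy = w @ zx                      # conditional mean/median in normal space
    q = stats.norm.cdf(zy)           # back to a quantile level
    return np.quantile(discharge, q) # empirical quantile of discharge

print(predict_discharge(500.0, 900.0))
```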


2006 ◽  
Vol 3 (4) ◽  
pp. 777-803
Author(s):  
W. Connolley ◽  
A. Keen ◽  
A. McLaren

Abstract. We present results of an implementation of the Elastic Viscous Plastic (EVP) sea ice dynamics scheme in the Hadley Centre coupled ocean-atmosphere climate model HadCM3. Although the large-scale simulation of sea ice in HadCM3 is quite good, the lack of a full dynamical model leads to errors in the detailed representation of sea ice and limits our confidence in its future predictions. We find that introducing the EVP scheme results in a worse initial simulation of the sea ice. This paper documents the various modifications made to improve the simulation, resulting in a sea ice simulation that is better overall than the original HadCM3 scheme. Importantly, it is more physically based and provides a more solid foundation for future improvement. We then consider the interannual variability of the sea ice in the new model and demonstrate improvements over the HadCM3 simulation.

