Academic Publications

Can Consumers Learn Price Dispersion? Evidence for Dispersion Spillover Across Categories.

Journal of Consumer Research. With Quentin André and Nicholas Reinholtz.

Price knowledge is a key antecedent of many consumer judgments and decisions. This paper examines consumers’ ability to form accurate beliefs about the minimum, the maximum, and the overall variability of prices for multiple product categories. Eight experiments provide evidence for a novel phenomenon we call dispersion spillover: Consumers tend to overestimate price dispersion in a category after encountering another category in which prices are more dispersed (versus equally or less dispersed). We show that this dispersion spillover is consequential: It influences the likelihood that consumers will search for (and find) better prices and offers, and how much consumers bid in auctions. Finally, we disentangle two cognitive processes that might underlie dispersion spillover. Our results suggest that judgments of dispersion are not only based on specific prices stored in memory, and that dispersion spillover does not simply reflect the inappropriate activation of prices from other categories. Instead, it appears that consumers also form “intuitive statistics” of dispersion: Summary representations that encode the dispersion of prices in the environment, but that are insufficiently category-specific.

Do People Understand the Benefit of Diversification?

Management Science. With Nicholas Reinholtz and Philip Fernbach.

Diversification—investing in imperfectly correlated assets—reduces volatility without sacrificing expected returns. While the expected return of a diversified portfolio is the weighted average return of its constituent parts, the variance of the portfolio is less than the weighted average variance of its constituent parts. Our results suggest that very few people have correct statistical intuitions about the effects of diversification. The average person in our data sees no benefit of diversification in terms of reducing portfolio volatility. Many people, especially those low in financial literacy, believe diversification actually increases the volatility of a portfolio. These people seem to believe that the unpredictability of individual assets compounds when aggregated together. Additionally, most people believe diversification increases the expected return of a portfolio. Many of these people correctly link diversification with the concept of risk reduction, but seem to understand risk reduction to mean greater returns on average. We show that these beliefs can lead people to construct investment portfolios that mismatch investors’ risk preferences. Further, these beliefs may help explain why many investors are underdiversified.
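The variance claim in this abstract can be made concrete with a small numerical sketch. All figures below (returns, volatilities, correlation, weights) are hypothetical, not from the paper; they simply illustrate that a portfolio of imperfectly correlated assets keeps the weighted-average expected return while its volatility falls below the weighted-average volatility.

```python
# Hypothetical two-asset illustration of the diversification arithmetic
# described above; the returns, volatilities, and correlation are made up.

def portfolio_stats(w1, mu1, sigma1, mu2, sigma2, rho):
    """Expected return and volatility of a two-asset portfolio."""
    w2 = 1 - w1
    exp_return = w1 * mu1 + w2 * mu2  # weighted average of the parts
    variance = ((w1 * sigma1) ** 2 + (w2 * sigma2) ** 2
                + 2 * w1 * w2 * rho * sigma1 * sigma2)
    return exp_return, variance ** 0.5

# Two assets, each with an 8% expected return and 20% volatility,
# correlated at 0.3 and held 50/50.
mu, vol = portfolio_stats(0.5, 0.08, 0.20, 0.08, 0.20, rho=0.3)
print(mu)   # 0.08: the expected return of either asset, unchanged
print(vol)  # ~0.16: below the 0.20 weighted-average volatility
```

Contrary to the intuition the abstract documents, the unpredictability of the parts does not compound: with any correlation below 1, the cross term shrinks the portfolio variance below the weighted average of the assets' variances.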

Read the full article here.

How (Not) To Test Theory with Data: Illustrations from Walasek, Mullett, and Stewart (2021).

Journal of Experimental Psychology: General. With Quentin André.

André and de Langhe (2021) pointed out that Walasek and Stewart (2015) estimated loss aversion on different lotteries in different conditions. Because of this flaw in the experimental design, their results should not be taken as evidence that loss aversion can disappear and reverse, or that decision by sampling is the origin of loss aversion. In their response to André and de Langhe (2021), Walasek, Mullett, and Stewart (2021) defend the link between decision by sampling and loss aversion. We take their response as an opportunity to emphasize three guiding principles when testing theory with data: 1) Look for data that are uniquely predicted by the theory, 2) Do not ignore data that contradict the theory, and 3) If an experiment is flawed, fix it. In light of these principles, we do not believe that Walasek, Mullett, and Stewart (2021) provide new insights about the origin and stability of loss aversion.

Read the full article here.

No Evidence for Loss Aversion Disappearance and Reversal in Walasek and Stewart (2015).

Journal of Experimental Psychology: General. With Quentin André.

Loss aversion—the idea that losses loom larger than equivalent gains—is one of the most important ideas in Behavioral Economics. In an influential article published in the Journal of Experimental Psychology: General, Walasek and Stewart (2015) test an implication of decision by sampling theory: Loss aversion can disappear, and even reverse, depending on the distribution of gains and losses people have encountered. In this manuscript, we show that the pattern of results reported in Walasek and Stewart (2015) should not be taken as evidence that loss aversion can disappear and reverse, or that decision by sampling is the origin of loss aversion. It emerges because the estimates of loss aversion are computed on different lotteries in different conditions. In other words, the experimental paradigm violates measurement invariance, and is thus invalid. We show that analyzing only the subset of lotteries that are common across conditions eliminates the pattern of results. We note that other recently published articles use similar experimental designs, and we discuss general implications for empirical examinations of utility functions.

Read the full article here.

System 1 is Not Scope Insensitive: A New Dual-Process Account of Subjective Value.

Journal of Consumer Research. With Dan Schley and Andrew Long.

Companies can create value by differentiating their products and services along quantitative attributes. Existing research suggests that consumers’ tendency to rely on relatively effortless and affect-based processes reduces their sensitivity to the scope of quantitative attributes and that this explains why increments along quantitative attributes often have diminishing marginal value. The current article sheds new light on how “system 1” processes moderate the effect of quantitative product attributes on subjective value. Seven studies provide evidence that system 1 processes can produce diminishing marginal value, but also increasing marginal value, or any combination of the two, depending on the composition of the choice set. This is because system 1 processes facilitate ordinal comparisons (e.g., 256 GB is more than 128 GB, which is more than 64 GB) while system 2 processes, which are relatively more effortful and calculation based, facilitate cardinal comparisons (e.g., the difference between 256 and 128 GB is twice as large as the difference between 128 and 64 GB).

Read the full article here.

Circle of Incompetence: Sense of Understanding as an Improper Guide to Investment Risk.

Journal of Marketing Research. With Andrew Long and Philip Fernbach.

Consumers incorrectly rely on their sense of understanding of what a company does to evaluate investment risk. In three correlational studies, greater sense of understanding was associated with lower risk ratings (Study 1) and with prediction distributions of future stock performance that had lower standard deviations and higher means (Studies 2 and 3). In all studies, sense of understanding was unassociated with objective risk measures. Risk perceptions increased when the authors degraded sense of understanding by presenting company information in an unstructured versus structured format (Study 4). Sense of understanding also influenced downstream investment decisions. In a portfolio construction task, both novices and seasoned investors allocated more money to hard-to-understand companies for a risk-tolerant client relative to a risk-averse one (Study 5). Study 3 ruled out an alternative explanation based on familiarity. The results may explain both the enduring popularity and common misinterpretation of the “invest in what you know” philosophy.

Read the full article here.

The Marketing Manager as an Intuitive Statistician.

Journal of Marketing Behavior.

Business decisions are increasingly based on data and statistical analyses. Managerial intuition plays an important role at various stages of the analytics process. It is thus important to understand how managers intuitively think about data and statistics. This article reviews a wide range of empirical results from almost a century of research on intuitive statistics. The results support four key insights: (1) Variance is not intuitive; (2) Perfect correlation is the intuitive reference point; (3) People conflate correlation with slope; and (4) Nonlinear functions and interaction effects are not intuitive. These insights have implications for the development, implementation, and evaluation of statistical models in marketing and beyond. I provide several such examples and offer suggestions for future research.
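Insight (3) turns on the fact that correlation and slope are mathematically distinct quantities: correlation measures how tightly points cluster around a line, slope measures how steep that line is. A minimal sketch with made-up data:

```python
# Hypothetical data illustrating insight (3): two relationships can be
# equally (even perfectly) correlated yet have very different slopes.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def ols_slope(xs, ys):
    """Slope of the least-squares regression line of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    return cov / vx

xs = [1, 2, 3, 4, 5]
steep = [2.0 * x for x in xs]    # y doubles with every unit of x
shallow = [0.5 * x for x in xs]  # y rises far more slowly

print(pearson_r(xs, steep), ols_slope(xs, steep))      # 1.0 2.0
print(pearson_r(xs, shallow), ols_slope(xs, shallow))  # 1.0 0.5
```

Both relationships are perfectly correlated with x (r = 1.0), yet their slopes differ by a factor of four; an intuition that conflates the two will misjudge one quantity or the other.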

Read the full article here.

Productivity Metrics and Consumers' Misunderstanding of Time Savings.

Journal of Marketing Research. With Stefano Puntoni.

The marketplace is replete with productivity metrics that put units of output in the numerator and one unit of time in the denominator (e.g., megabits per second [Mbps] to measure download speed). In this article, three studies examine how productivity metrics influence consumer decision making. Many consumers have incorrect intuitions about the impact of productivity increases on time savings: they do not sufficiently realize that productivity increases at the high end of the productivity range (e.g., from 40 to 50 Mbps) imply smaller time savings than productivity increases at the low end of the productivity range (e.g., from 10 to 20 Mbps). Consequently, the availability of productivity metrics increases willingness to pay for products and services that offer higher productivity levels. This tendency is smaller when consumers receive additional information about time savings through product experience or through metrics that are linearly related to time savings. Consumers’ intuitions about time savings are also more accurate when they estimate time savings than when they rank them. Estimates are based less on absolute than on proportional changes in productivity (and proportional changes correspond more with actual time savings).
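The Mbps comparison in the abstract can be worked through numerically. The file size below is a hypothetical stand-in chosen for round numbers; the point is that because transfer time is inversely related to speed, equal productivity increments save far less time at the high end of the range.

```python
# Hypothetical worked example of the abstract's Mbps comparison:
# time saved by a +10 Mbps boost depends on where in the range it occurs.

def seconds_to_download(size_megabits, mbps):
    """Transfer time in seconds for a file of the given size and speed."""
    return size_megabits / mbps

FILE = 6000  # a made-up 6,000-megabit (750-megabyte) file

low_end = seconds_to_download(FILE, 10) - seconds_to_download(FILE, 20)
high_end = seconds_to_download(FILE, 40) - seconds_to_download(FILE, 50)

print(low_end)   # 300.0 seconds saved going from 10 to 20 Mbps
print(high_end)  # 30.0 seconds saved going from 40 to 50 Mbps
```

The same +10 Mbps increment saves ten times as much time at the low end of the range, which is exactly the relationship the abstract says many consumers fail to appreciate.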

Read the full article here.

Star Wars: Response to Simonson, Winer/Fader, and Kozinets.

Journal of Consumer Research. With Philip Fernbach and Donald Lichtenstein.

In de Langhe, Fernbach, and Lichtenstein (2016), we argue that consumers trust average user ratings as indicators of objective product performance much more than they should. This simple idea has provoked passionate commentaries from eminent researchers across three subdisciplines of marketing: experimental consumer research, modeling, and qualitative consumer research. Simonson challenges the premise of our research, asking whether objective performance even matters. We think it does and explain why in our response. Winer and Fader argue that our results are neither insightful nor important. We believe that their reaction is due to a fundamental misunderstanding of our goals, and we show that their criticisms do not hold up to scrutiny. Finally, Kozinets points out how narrow a slice of consumer experience our article covers. We agree, and build on his observations to reflect on some big-picture issues about the nature of research and the interaction between the subdisciplines.

Read the full article here.

Navigating by the Stars: Investigating the Actual and Perceived Validity of Online User Ratings.

Journal of Consumer Research. With Philip Fernbach and Donald Lichtenstein.

This research documents a substantial disconnect between the objective quality information that online user ratings actually convey and the extent to which consumers trust them as indicators of objective quality. Analyses of a data set covering 1,272 products across 120 vertically differentiated product categories reveal that average user ratings (1) lack convergence with Consumer Reports scores, the most commonly used measure of objective quality in the consumer behavior literature, (2) are often based on insufficient sample sizes, which limits their informativeness, (3) do not predict resale prices in the used-product marketplace, and (4) are higher for more expensive products and premium brands, controlling for Consumer Reports scores. However, when forming quality inferences and purchase intentions, consumers heavily weight the average rating compared to other cues for quality like price and the number of ratings. They also fail to moderate their reliance on the average user rating as a function of sample size sufficiency. Consumers’ trust in the average user rating as a cue for objective quality appears to be based on an “illusion of validity.”

Read the full article here.

Bang for the Buck: Gain-Loss Ratio as a Driver of Judgment and Choice.

Management Science. With Stefano Puntoni.

Prominent decision-making theories propose that individuals (should) evaluate alternatives by combining gains and losses in an additive way. Instead, we suggest that individuals seek to maximize the rate of exchange between positive and negative outcomes and thus combine gains and losses in a multiplicative way. Sensitivity to gain-loss ratio provides an alternative account for several existing findings and implies a number of novel predictions. It implies greater sensitivity to losses and risk aversion when expected value is positive, but greater sensitivity to gains and risk seeking when expected value is negative. It also implies more extreme preferences when expected value is positive than when expected value is negative. These predictions are independent of decreasing marginal sensitivity, loss aversion, and probability weighting—three key properties of prospect theory. Five new experiments and re-analyses of two recently published studies support these predictions.

Read the full article here.

Fooled by Heteroscedastic Randomness: Local Consistency Breeds Extremity in Price-Based Quality Inferences.

Journal of Consumer Research. With Stijn van Osselaer, Stefano Puntoni, and Ann McGill.

In some product categories, low-priced brands are consistently of low quality, but high-priced brands can be anything from terrible to excellent. In other product categories, high-priced brands are consistently of high quality, but quality of low-priced brands varies widely. Three experiments demonstrate that such heteroscedasticity leads to more extreme price-based quality predictions. This finding suggests that quality inferences stem not only from what consumers have learned about the average level of quality at different price points through exemplar memory or rule abstraction. Instead, quality predictions are also based on learning about the covariation between price and quality. That is, consumers inappropriately conflate the conditional mean of quality with the predictability of quality. We discuss implications for theories of quantitative cue learning and selective information processing, for pricing strategies and luxury branding, and for our understanding of the emergence and persistence of erroneous beliefs and stereotypes beyond the consumer realm.

Read the full article here.

The Effects of Process and Outcome Accountability on Judgment Process and Performance.

Organizational Behavior and Human Decision Processes. With Stijn van Osselaer and Berend Wierenga.

This article challenges the view that it is always better to hold decision makers accountable for their decision process rather than their decision outcomes. In three multiple-cue judgment studies, the authors show that process accountability, relative to outcome accountability, consistently improves judgment quality in relatively simple elemental tasks. However, this performance advantage of process accountability does not generalize to more complex configural tasks. This is because process accountability improves an analytical process based on cue abstraction, while it does not change a holistic process based on exemplar memory. Cue abstraction is only effective in elemental tasks (in which outcomes are a linear additive combination of cues) but not in configural tasks (in which outcomes depend on interactions between the cues). In addition, Studies 2 and 3 show that the extent to which process and outcome accountability affect judgment quality depends on individual differences in analytical intelligence and rational thinking style.

Read the full article here.

The Anchor Contraction Effect in International Marketing Research.

Journal of Marketing Research. With Stefano Puntoni, Daniel Fernandes, and Stijn van Osselaer.

In an increasingly globalized marketplace, it is common for marketing researchers to collect data from respondents who are not native speakers of the language in which the questions are formulated. Examples include online customer ratings and internal marketing initiatives in multinational corporations. This raises the issue of whether providing responses on rating scales in a person's native versus second language exerts a systematic influence on the responses obtained. This article documents the anchor contraction effect (ACE), the systematic tendency to report more intense emotions when answering questions using rating scales in a nonnative language than in the native language. Nine studies (1) establish ACE, test the underlying process, and rule out alternative explanations; (2) examine the generalizability of ACE across a range of situations, measures, and response scale formats; and (3) explore managerially relevant and easily implementable corrective techniques.

Read the full article here.

Bilingualism and the Emotional Intensity of Advertising Language.

Journal of Consumer Research. With Stefano Puntoni and Stijn van Osselaer.

This research contributes to the current understanding of language effects in advertising by uncovering a previously ignored mechanism shaping consumer response to an increasingly globalized marketplace. We propose a language-specific episodic trace theory of language emotionality to explain how language influences the perceived emotionality of marketing communications. Five experiments with bilingual consumers show (1) that textual information (e.g., marketing slogans) expressed in consumers' native language tends to be perceived as more emotional than messages expressed in their second language, (2) that this effect is not uniquely due to the activation of stereotypes associated with specific languages or to a lack of comprehension, and (3) that the effect depends on the frequency with which words have been experienced in native- versus second-language contexts.

Read the full article here.