Recent Submissions

  • A perfect match or an arranged marriage? How chief digital officers and chief information officers perceive their relationship: a dyadic research design

    Lorenz, Felix; Buchwald, Arne
    Several organisations have introduced a new leadership role, the Chief Digital Officer (CDO), as a centralised role in their top management team (TMT), tasked with accelerating and coordinating their digital transformation. While previous research proposes a complementary, tight alignment between the CDO and the Chief Information Officer (CIO), role redundancies and the fight for recognition and resources also suggest an inherent tension. Drawing on role theory, TMT cooperation, and conflict theory, and using a dyadic design covering 11 CIO-CDO relationships and 33 expert interviews in two waves, we provide insights into the quality of CIO-CDO collaboration. Our findings indicate that the CIO-CDO relationship may not always be as complementary as proposed in the literature; instead, in the vast majority of our dyads, there is too much role conflict to achieve tight alignment, leading to separation behaviour between the roles. We identify each role holder's involvement in the introduction of the other role, the CIO's demand-side orientation, and the CDO's supply-side orientation as important contingency factors determining the quality of the CIO-CDO relationship. Finally, unless the CIO-CDO relationship resembles a perfect match, a unified Chief Digital and Information Officer (CDIO) role may better resolve the challenges we identify in our sample's dyads. Our insights extend the understanding of the CIO-CDO relationship.
  • Multi-project scheduling: A benchmark analysis of metaheuristic algorithms on various optimisation criteria and due dates

    Bredael, Dries; Vanhoucke, Mario (European Journal of Operational Research, 2023)
    This paper reviews a set of ten existing metaheuristic solution procedures for the resource-constrained multi-project scheduling problem. Algorithmic implementations are constructed based on the descriptions of the original procedures in the literature. Equivalence is verified on the original test instances for the original objective and parameters through a comparison with the reported results. An extensive benchmark analysis is performed on a novel, publicly available dataset for a variety of optimisation criteria and due date settings on which the original algorithms have not previously been tested. The impact of the different objectives, due dates and test instance parameters is analysed, and an overall ranking of the metaheuristic solution methods for different situations is discussed. Key insights into the structure of competitive solutions for disparate objectives and due date settings are presented and effective algorithmic components are revealed.
  • New resource-constrained project scheduling instances for testing (meta-)heuristic scheduling algorithms

    Coelho, José; Vanhoucke, Mario (Computers & Operations Research, 2023)
    The resource-constrained project scheduling problem (RCPSP) is a well-known scheduling problem that has attracted attention for several decades. Despite the rapid progress of exact and (meta-)heuristic procedures, the problem still cannot be solved to optimality for many problem instances of relatively small size. Given this known complexity, many researchers have proposed fast and efficient meta-heuristic solution procedures that can solve the problem to near optimality. Despite the excellent results obtained in recent decades, little is known about why some heuristics perform better than others. If researchers better understood why some meta-heuristic procedures generate good solutions for some project instances while falling short for others, this could yield insights to improve these meta-heuristics, ultimately leading to stronger algorithms and better overall solution quality. In this study, a new hardness indicator is proposed to measure how difficult it is for meta-heuristic procedures to provide near-optimal solutions. The indicator is based on a new concept that uses the distance metric to describe the solution space of a problem instance, and it relies on current knowledge of lower and upper bound calculations for problem instances from five known datasets in the literature. The new indicator is used not only to measure the hardness of existing project datasets, but also to generate a new benchmark dataset for future research. The new dataset contains project instances with different values of the indicator, and it is shown, using two fast and efficient meta-heuristic procedures from the literature, that the value of the distance metric indeed reflects the difficulty of the project instances.
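
    The abstract does not spell out the indicator's formula. As a rough illustration only, the sketch below computes a simple hardness proxy from the lower- and upper-bound information the abstract mentions; the relative-gap formula and function name are assumptions for this sketch, not the paper's actual distance-based indicator.

      # Hypothetical hardness proxy built only from lower/upper bounds.
      # This is NOT the paper's distance-based indicator; it is a placeholder
      # showing how bound information can rank instance difficulty.

      def hardness_proxy(lower_bound: int, best_known: int) -> float:
          """Relative gap between the best known makespan and a lower bound.

          0.0 means the instance is solved to proven optimality; larger
          values suggest the instance is harder for current methods.
          """
          if lower_bound <= 0:
              raise ValueError("lower bound must be positive")
          return (best_known - lower_bound) / lower_bound

      # Example with invented (lower bound, best known makespan) pairs.
      for lb, ub in [(50, 50), (60, 66), (40, 52)]:
          print(f"LB={lb}, UB={ub}, proxy={hardness_proxy(lb, ub):.2f}")
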
  • The effect of Corporate Governance on corporate environmental sustainability: A multilevel review and research agenda

    Karn, Ina; Mendiratta, Esha; Fehre, Kerstin; Oehmichen, Jana (Business Strategy and the Environment, 2022)
    Climate change is a major challenge facing society. Given its intricate links with business, governance scholars have shown an increasing interest in understanding how corporate governance (CG) actors can improve corporate environmental sustainability (CES). In this article, we review the literature focusing on CG and CES. We assess the importance of the motivations, expertise, and power of CG actors at different levels (individual, team, firm, and supra-firm) for CES. Using this as a guiding framework, we build a multilevel synthesis highlighting the theoretical mechanisms around motivations, expertise, and power suggested in the literature. Based on this synthesis, we present critical reflections on the extant knowledge. Finally, we develop a future research agenda that calls not only for research at single levels but also for integrative research examining how CG actors at multiple levels collectively shape CES.
  • Bilingualism and the emotional intensity of advertising language

    Puntoni, Stefano; De Langhe, Bart; Van Osselaer, Stijn (Journal of Consumer Research, 2009)
    This research contributes to the current understanding of language effects in advertising by uncovering a previously ignored mechanism shaping consumer response to an increasingly globalized marketplace. We propose a language-specific episodic trace theory of language emotionality to explain how language influences the perceived emotionality of marketing communications. Five experiments with bilingual consumers show (1) that textual information (e.g., marketing slogans) expressed in consumers' native language tends to be perceived as more emotional than messages expressed in their second language, (2) that this effect is not uniquely due to the activation of stereotypes associated with specific languages or to a lack of comprehension, and (3) that the effect depends on the frequency with which words have been experienced in native- versus second-language contexts.
  • The anchor contraction effect in international marketing research

    De Langhe, Bart; Puntoni, Stefano; Fernandes, Daniel; van Osselaer, Stijn (Journal of Marketing Research, 2011)
    In an increasingly globalized marketplace, it is common for marketing researchers to collect data from respondents who are not native speakers of the language in which the questions are formulated. Examples include online customer ratings and internal marketing initiatives in multinational corporations. This raises the issue of whether providing responses on rating scales in a person's native versus second language exerts a systematic influence on the responses obtained. This article documents the anchor contraction effect (ACE), the systematic tendency to report more intense emotions when answering questions using rating scales in a nonnative language than in the native language. Nine studies (1) establish ACE, test the underlying process, and rule out alternative explanations; (2) examine the generalizability of ACE across a range of situations, measures, and response scale formats; and (3) explore managerially relevant and easily implementable corrective techniques.
  • The effects of process and outcome accountability on judgment process and performance

    De Langhe, Bart; van Osselaer, Stijn; Wierenga, Berend (Organizational Behavior and Human Decision Processes, 2011)
    This article challenges the view that it is always better to hold decision makers accountable for their decision process rather than their decision outcomes. In three multiple-cue judgment studies, the authors show that process accountability, relative to outcome accountability, consistently improves judgment quality in relatively simple elemental tasks. However, this performance advantage of process accountability does not generalize to more complex configural tasks. This is because process accountability improves an analytical process based on cue abstraction, while it does not change a holistic process based on exemplar memory. Cue abstraction is only effective in elemental tasks (in which outcomes are a linear additive combination of cues) but not in configural tasks (in which outcomes depend on interactions between the cues). In addition, Studies 2 and 3 show that the extent to which process and outcome accountability affect judgment quality depends on individual differences in analytical intelligence and rational thinking style.
  • Fooled by heteroscedastic randomness: Local consistency breeds extremity in price-based quality inferences

    De Langhe, Bart; Van Osselaer, Stijn; Puntoni, Stefano; McGill, Ann L. (Journal of Consumer Research, 2014)
    In some product categories, low-priced brands are consistently of low quality, but high-priced brands can be anything from terrible to excellent. In other product categories, high-priced brands are consistently of high quality, but the quality of low-priced brands varies widely. Three experiments demonstrate that such heteroscedasticity leads to more extreme price-based quality predictions. This finding suggests that quality inferences do not stem only from what consumers have learned about the average level of quality at different price points through exemplar memory or rule abstraction. Instead, quality predictions are also based on learning about the covariation between price and quality. That is, consumers inappropriately conflate the conditional mean of quality with the predictability of quality. We discuss implications for theories of quantitative cue learning and selective information processing, for pricing strategies and luxury branding, and for our understanding of the emergence and persistence of erroneous beliefs and stereotypes beyond the consumer realm.
  • Bang for the buck: Gain-loss ratio as a driver of judgment and choice

    De Langhe, Bart; Puntoni, Stefano (Management Science, 2015)
    Prominent decision-making theories propose that individuals (should) evaluate alternatives by combining gains and losses in an additive way. Instead, we suggest that individuals seek to maximize the rate of exchange between positive and negative outcomes and thus combine gains and losses in a multiplicative way. Sensitivity to gain-loss ratio provides an alternative account for several existing findings and implies a number of novel predictions. It implies greater sensitivity to losses and risk aversion when expected value is positive, but greater sensitivity to gains and risk seeking when expected value is negative. It also implies more extreme preferences when expected value is positive than when expected value is negative. These predictions are independent of decreasing marginal sensitivity, loss aversion, and probability weighting, three key properties of prospect theory. Five new experiments and reanalyses of two recently published studies support these predictions.
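
    A tiny numeric sketch (with invented payoffs) shows how the additive and multiplicative rules described above can reverse preferences:

      # Toy comparison of additive (gain - loss) vs. multiplicative
      # (gain / loss) evaluation rules. Payoffs are invented for illustration.

      options = {
          "A": {"gain": 100.0, "loss": 50.0},   # net +50, ratio 2.00
          "B": {"gain": 200.0, "loss": 120.0},  # net +80, ratio 1.67
      }

      for name, o in options.items():
          net = o["gain"] - o["loss"]      # classic additive rule
          ratio = o["gain"] / o["loss"]    # gain-loss ratio rule
          print(f"{name}: net = {net:+.0f}, gain-loss ratio = {ratio:.2f}")

      # An additive decision maker prefers B (+80 > +50); a ratio-based
      # decision maker prefers A (2.00 > 1.67).
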
  • Navigating by the stars: Investigating the actual and perceived validity of online user ratings

    De Langhe, Bart; Fernbach, Philip M.; Lichtenstein, Donald R. (Journal of Consumer Research, 2016)
    This research documents a substantial disconnect between the objective quality information that online user ratings actually convey and the extent to which consumers trust them as indicators of objective quality. Analyses of a data set covering 1,272 products across 120 vertically differentiated product categories reveal that average user ratings (1) lack convergence with Consumer Reports scores, the most commonly used measure of objective quality in the consumer behavior literature, (2) are often based on insufficient sample sizes, which limits their informativeness, (3) do not predict resale prices in the used-product marketplace, and (4) are higher for more expensive products and premium brands, controlling for Consumer Reports scores. However, when forming quality inferences and purchase intentions, consumers weight the average rating heavily relative to other quality cues, such as price and the number of ratings. They also fail to moderate their reliance on the average user rating as a function of sample size sufficiency. Consumers' trust in the average user rating as a cue for objective quality appears to be based on an "illusion of validity."
  • Star Wars: Response to Simonson, Winer/Fader, and Kozinets

    De Langhe, Bart; Fernbach, Philip M.; Lichtenstein, Donald R. (Journal of Consumer Research, 2016)
    In de Langhe, Fernbach, and Lichtenstein (2016), we argue that consumers trust average user ratings as indicators of objective product performance much more than they should. This simple idea has provoked passionate commentaries from eminent researchers across three subdisciplines of marketing: experimental consumer research, modeling, and qualitative consumer research. Simonson challenges the premise of our research, asking whether objective performance even matters. We think it does and explain why in our response. Winer and Fader argue that our results are neither insightful nor important. We believe that their reaction is due to a fundamental misunderstanding of our goals, and we show that their criticisms do not hold up to scrutiny. Finally, Kozinets points out how narrow a slice of consumer experience our article covers. We agree, and build on his observations to reflect on some big-picture issues about the nature of research and the interaction between the subdisciplines.
  • Productivity metrics and consumers’ misunderstanding of time savings

    De Langhe, Bart; Puntoni, Stefano (Journal of Marketing Research, 2016)
    The marketplace is replete with productivity metrics that put units of output in the numerator and one unit of time in the denominator (e.g., megabits per second [Mbps] to measure download speed). In this article, three studies examine how productivity metrics influence consumer decision making. Many consumers have incorrect intuitions about the impact of productivity increases on time savings: they do not sufficiently realize that productivity increases at the high end of the productivity range (e.g., from 40 to 50 Mbps) imply smaller time savings than productivity increases at the low end of the range (e.g., from 10 to 20 Mbps). Consequently, the availability of productivity metrics increases willingness to pay for products and services that offer higher productivity levels. This tendency is smaller when consumers receive additional information about time savings through product experience or through metrics that are linearly related to time savings. Consumers' intuitions about time savings are also more accurate when they estimate time savings than when they rank them. Estimates are based less on absolute than on proportional changes in productivity (and proportional changes correspond more closely to actual time savings).
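
    The underlying arithmetic is simple to verify: for a fixed workload, time equals size divided by speed, so equal speed increments save less time at the high end. A minimal sketch, assuming a 1,000-megabit download:

      # Time to finish a fixed workload: time = size / speed.
      # The 1,000-megabit workload is an assumption for illustration.

      SIZE_MEGABITS = 1000

      def seconds(speed_mbps: float) -> float:
          return SIZE_MEGABITS / speed_mbps

      print(f"10 -> 20 Mbps saves {seconds(10) - seconds(20):.0f} s")  # 50 s
      print(f"40 -> 50 Mbps saves {seconds(40) - seconds(50):.0f} s")  # 5 s
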
  • The marketing manager as an intuitive statistician

    De Langhe, Bart (Journal of Marketing Behavior, 2016)
    Business decisions are increasingly based on data and statistical analyses. Managerial intuition plays an important role at various stages of the analytics process. It is thus important to understand how managers intuitively think about data and statistics. This article reviews a wide range of empirical results from almost a century of research on intuitive statistics. The results support four key insights: (1) Variance is not intuitive; (2) Perfect correlation is the intuitive reference point; (3) People conflate correlation with slope; and (4) Nonlinear functions and interaction effects are not intuitive. These insights have implications for the development, implementation, and evaluation of statistical models in marketing and beyond. I provide several such examples and offer suggestions for future research.
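
    Insight (3), conflating correlation with slope, is easy to demonstrate with synthetic data: two datasets can share the same regression slope while differing sharply in correlation. A minimal sketch:

      # Same true slope (1.0), different noise, hence different correlations.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(size=10_000)

      for noise_sd in (0.1, 2.0):
          y = x + rng.normal(scale=noise_sd, size=x.size)  # slope stays 1.0
          slope = np.polyfit(x, y, 1)[0]
          r = np.corrcoef(x, y)[0, 1]
          print(f"noise sd={noise_sd}: slope={slope:.2f}, r={r:.2f}")
      # Both fitted slopes are ~1.0, yet r falls from ~1.00 to ~0.45.
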
  • Linear thinking in a nonlinear world

    De Langhe, Bart; Puntoni, Stefano; Larrick, Richard (Harvard Business Review, 2017)
    The human brain likes simple straight lines. As a result, people tend to expect that relationships between variables and outcomes will be linear. Often this is the case: The amount of data an iPad will hold increases at the same rate as its storage capacity. But frequently relationships are not linear: The time savings from upgrading a broadband connection get smaller and smaller as download speed increases. Would it surprise you to know that upgrading a car from 10 MPG to 20 MPG saves more gas than upgrading from 20 MPG to 50 MPG? Because it does. As fuel efficiency increases, gas consumption falls sharply at first and then more gradually. This is just one of four nonlinear patterns the authors identify in their article. Nonlinear phenomena are all around in business: in the relationship between price, volume, and profits; between retention rate and customer lifetime value; between search rankings and sales. If you don’t recognize when they’re in play, you’re likely to make poor decisions. But if you map out relationships in data visualizations, you can actually see whether they are nonlinear and how—and then make choices that maximize your desired outcome.
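
    The fuel claim checks out with one line of arithmetic: gas burned over a fixed distance is distance divided by MPG, so the first efficiency gains save the most fuel. A quick sketch, assuming 10,000 miles driven:

      # Gas used falls as 1/MPG, so equal MPG gains save less and less fuel.
      # The 10,000-mile distance is an assumption for illustration.

      MILES = 10_000

      def gallons(mpg: float) -> float:
          return MILES / mpg

      print(f"10 -> 20 MPG saves {gallons(10) - gallons(20):.0f} gal")  # 500
      print(f"20 -> 50 MPG saves {gallons(20) - gallons(50):.0f} gal")  # 300
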
  • Circle of incompetence: Sense of understanding as an improper guide to investment risk

    Long, Andrew R.; Fernbach, Philip M.; De Langhe, Bart (Journal of Marketing Research, 2018)
    Consumers incorrectly rely on their sense of understanding of what a company does to evaluate investment risk. In three correlational studies, greater sense of understanding was associated with lower risk ratings (Study 1) and with prediction distributions of future stock performance that had lower standard deviations and higher means (Studies 2 and 3). In all studies, sense of understanding was unassociated with objective risk measures. Risk perceptions increased when the authors degraded sense of understanding by presenting company information in an unstructured versus structured format (Study 4). Sense of understanding also influenced downstream investment decisions. In a portfolio construction task, both novices and seasoned investors allocated more money to hard-to-understand companies for a risk-tolerant client relative to a risk-averse one (Study 5). Study 3 ruled out an alternative explanation based on familiarity. The results may explain both the enduring popularity and common misinterpretation of the “invest in what you know” philosophy.
  • The dangers of categorical thinking

    De Langhe, Bart; Fernbach, Philip (Harvard Business Review, 2019)
    Human beings are categorization machines, taking in voluminous amounts of messy data and then simplifying and structuring it. That’s how we make sense of the world and communicate our ideas to others. But according to the authors, categorization comes so naturally to us that we often see categories where none exist. That warps our view of the world and harms our ability to make sound decisions—a phenomenon that should be of special concern to any business that relies on data collection and analysis for decision making. Categorical thinking, the authors argue, creates four dangerous consequences. When we categorize, we compress category members, treating them as more alike than they are; we amplify differences between members of different categories; we discriminate, favoring certain categories over others; and we fossilize, treating the categorical structure we’ve imposed as static. In the years ahead, companies will have to focus attention on how best to mitigate those consequences.
  • System 1 is not scope insensitive: A new, dual-process account of subjective value

    Schley, Dan R.; De Langhe, Bart; Long, Andrew R. (Journal of Consumer Research, 2020)
    Companies can create value by differentiating their products and services along quantitative attributes. Existing research suggests that consumers’ tendency to rely on relatively effortless and affect-based processes reduces their sensitivity to the scope of quantitative attributes and that this explains why increments along quantitative attributes often have diminishing marginal value. The current article sheds new light on how “system 1” processes moderate the effect of quantitative product attributes on subjective value. Seven studies provide evidence that system 1 processes can produce diminishing marginal value, but also increasing marginal value, or any combination of the two, depending on the composition of the choice set. This is because system 1 processes facilitate ordinal comparisons (e.g., 256 GB is more than 128 GB, which is more than 64 GB) while system 2 processes, which are relatively more effortful and calculation based, facilitate cardinal comparisons (e.g., the difference between 256 and 128 GB is twice as large as between 128 and 64 GB).
  • Unanswered questions in entrepreneurial finance

    Manigart, Sophie; Khosravi, Sara (Venture Capital, 2023)
    While the academic literature on entrepreneurial finance has expanded exponentially, many gaps in our knowledge remain. These gaps are driven, first, by digitalization, which impacts the development of new investment types such as crowdfunding and initial coin offerings (ICOs), the emergence of new investors based on digital technologies, and the functioning of existing investors. Next, the supply of entrepreneurial finance has become more diverse as new types of investors have developed, such as incubators and accelerators, family funds, impact investors, and sovereign wealth funds; this broadens the sources and types of funding to which new ventures can gain access. Third, investors pay increasing attention to non-financial goals, such as providing solutions to environmental or societal challenges. This paper explores these trends and suggests avenues for future research.
  • Increased bullwhip in retail: A side effect of improving forecast accuracy with more data?

    Wellens, Arnoud P.; Boute, Robert; Udenio, Maximiliano (Foresight: The International Journal of Applied Forecasting, 2023)
    Can there be side effects of improved forecast accuracy? In this study of the Belgian food retailer Colruyt Group, we show how adding explanatory variables (such as promotions, weather forecasts, and national events) increases forecast accuracy compared to methods using only historical sales data. Furthermore, when using these sales forecasts to determine inventory levels and order decisions in a numerical experiment, we see that the more accurate forecasts require less inventory to maintain a target service level, indicating that more accurate predictions may reduce stockouts and the operational costs of high inventories. These are expected findings. We also found that the use of explanatory variables makes the sales forecasts (and consequently the replenishment) more responsive to changes in customer demand patterns. This creates a higher bullwhip effect in the variability of the supermarket's replenishment orders, a less desirable outcome of more accurate forecasting with explanatory variables.
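
    A toy simulation reproduces the mechanism: under an order-up-to policy, a forecast that reacts faster to demand (as one enriched with explanatory variables would) makes the base-stock level, and hence the orders, swing more than demand itself. All parameters below (demand process, forecast windows, lead time) are invented for illustration and are not Colruyt Group data:

      # Minimal bullwhip sketch: order-up-to replenishment driven by a
      # moving-average forecast. A shorter window = a more responsive forecast.
      import numpy as np

      rng = np.random.default_rng(1)
      demand = 100 + rng.normal(scale=10, size=5_000)  # i.i.d. demand
      LEAD_TIME = 2

      def order_variance(window: int) -> float:
          kernel = np.ones(window) / window
          forecast = np.convolve(demand, kernel, mode="valid")
          base_stock = forecast * (LEAD_TIME + 1)   # cover lead time + review
          orders = demand[window:] + np.diff(base_stock)
          return orders.var()

      for window in (20, 4):  # sluggish vs. responsive forecast
          ratio = order_variance(window) / demand.var()
          print(f"window={window:2d}: order var / demand var = {ratio:.2f}")
      # The responsive forecast yields a clearly higher ratio, i.e. more
      # bullwhip, even though demand itself is unchanged.
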
  • Data-driven preventive maintenance for a heterogeneous machine portfolio

    Deprez, Laurens; Antonio, Katrien; Arts, Joachim; Boute, Robert (Operations Research Letters, 2023)
    We describe a data-driven approach to optimize periodic maintenance policies for a heterogeneous portfolio with different machine profiles. When insufficient data are available per profile to assess failure intensities and costs accurately, we pool the data of all machine profiles and evaluate the effect of (observable) machine characteristics by calibrating appropriate statistical models. This reduces maintenance costs compared to a stratified approach that splits the data into subsets per profile and a uniform approach that treats all profiles the same.
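
    As a rough sketch of the pooling idea (not the authors' actual model), one can fit a single Poisson regression of failure counts on observable machine characteristics across the whole portfolio, rather than estimating a separate rate from each profile's small subset. The data, covariates, and functional form below are all invented for illustration:

      # Pooled estimation of failure intensities across a heterogeneous
      # portfolio. Synthetic data; the paper calibrates richer models.
      import numpy as np
      from sklearn.linear_model import PoissonRegressor

      rng = np.random.default_rng(7)

      # Observable machine characteristics assumed to drive failures.
      n = 300
      age = rng.uniform(0, 10, n)
      load = rng.uniform(0, 1, n)
      failures = rng.poisson(np.exp(-1.0 + 0.15 * age + 0.8 * load))

      # One log-linear model over all profiles pools the data and borrows
      # strength across machines, instead of splitting into tiny subsets.
      X = np.column_stack([age, load])
      model = PoissonRegressor(alpha=1e-6).fit(X, failures)

      new_machine = np.array([[6.0, 0.5]])  # age 6 years, medium load
      print(f"predicted failure rate: {model.predict(new_machine)[0]:.2f}")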
