Browsing Research Output by Title
-
Cadbury Schweppes (C): The performance management process
This is the third case in a three-case series. The (A) case describes the situation of Cadbury Schweppes (CS) and its sugar confectionery business, in a state of 'satisfactory underperformance' in which past strategies and practices make it hard for new management to initiate change in this widely respected company. The (B) case shows how, from 1997 to 1999, John Sunderland, the new CEO, and a new divisional manager used value based management (VBM) as a vehicle for transforming the company and the sugar confectionery division respectively, with a strong emphasis on people and leadership practices. The (C) case describes how CS's performance management system was redesigned in line with the Managing for Value (MfV) philosophy. It illustrates the new performance management process in action in the beverages business in Spain, where the country manager is faced with major competitive challenges. The immediate purpose of the Cadbury Schweppes series is to allow an informed discussion of the use and implementation of value based management, from a broader managerial rather than the typical financial perspective. The broader purpose is to illustrate how VBM can lead to corporate transformation and a sharpening of leadership practices in large firms. The series further describes how the design of the performance management system supports the implementation of MfV.
-
Can deep reinforcement learning improve inventory management? Performance on lost sales, dual-sourcing, and multi-echelon problems
Problem definition: Is deep reinforcement learning (DRL) effective at solving inventory problems? Academic/practical relevance: Given that DRL has successfully been applied in computer games and robotics, supply chain researchers and companies are interested in its potential in inventory management. We provide a rigorous performance evaluation of DRL in three classic and intractable inventory problems: lost sales, dual sourcing, and multi-echelon inventory management. Methodology: We model each inventory problem as a Markov decision process and apply and tune the Asynchronous Advantage Actor-Critic (A3C) DRL algorithm for a variety of parameter settings. Results: We demonstrate that the A3C algorithm can match the performance of state-of-the-art heuristics and other approximate dynamic programming methods. Although the initial tuning was computationally demanding and time consuming, only small changes to the tuning parameters were needed for the other studied problems. Managerial implications: Our study provides evidence that DRL can effectively solve stationary inventory problems. This is especially promising when problem-dependent heuristics are lacking. Yet, generating structural policy insight or designing specialized policies that are (ideally provably) near optimal remains desirable.
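To make the lost-sales setting concrete, the sketch below simulates a single-item lost-sales inventory system under a simple base-stock policy. It is not the paper's A3C method; the policy, uniform demand distribution, cost parameters, and lead time are all illustrative assumptions chosen to show the state dynamics (on-hand stock plus a pipeline of outstanding orders) that the Markov decision process formulation captures.

```python
import random

def simulate_lost_sales(base_stock, lead_time=2, horizon=10_000,
                        mean_demand=5, holding_cost=1.0,
                        lost_sale_penalty=9.0, seed=0):
    """Average per-period cost of a base-stock policy in a lost-sales system.

    Each period: place an order restoring the inventory position to
    `base_stock`, receive the oldest outstanding order, serve demand
    (unmet demand is lost, not backordered), then accrue holding and
    lost-sale costs. All parameters are illustrative, not from the paper.
    """
    rng = random.Random(seed)
    on_hand = base_stock
    pipeline = [0] * lead_time  # orders placed but not yet arrived
    total_cost = 0.0
    for _ in range(horizon):
        # Order up to the base-stock level (inventory position rule).
        position = on_hand + sum(pipeline)
        pipeline.append(max(base_stock - position, 0))
        # Oldest outstanding order arrives.
        on_hand += pipeline.pop(0)
        # Illustrative demand: uniform on [0, 2 * mean_demand].
        demand = rng.randint(0, 2 * mean_demand)
        sold = min(on_hand, demand)
        lost = demand - sold
        on_hand -= sold
        total_cost += holding_cost * on_hand + lost_sale_penalty * lost
    return total_cost / horizon

# Crude benchmark: grid search over base-stock levels, the kind of
# simple heuristic a DRL policy would be compared against.
best_level = min(range(1, 25), key=lambda s: simulate_lost_sales(s))
```

A DRL agent such as A3C would replace the fixed base-stock rule with a learned mapping from the state (on-hand stock and pipeline contents) to an order quantity, which is where its advantage over parametric heuristics can appear.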