    Can deep reinforcement learning improve inventory management? Performance on lost sales, dual-sourcing, and multi-echelon problems

Publication type: FT ranked journal article
Authors: Gijsbrechts, Joren; Boute, Robert; Van Mieghem, Jan A.; Zhang, Dennis J.
Publication year: 2022
Journal: Manufacturing & Service Operations Management
Volume: 24
Issue: 3
Pages: 1349–1368
    
    Abstract
Problem definition: Is deep reinforcement learning (DRL) effective at solving inventory problems?

Academic/practical relevance: Given that DRL has been successfully applied in computer games and robotics, supply chain researchers and companies are interested in its potential for inventory management. We provide a rigorous performance evaluation of DRL on three classic and intractable inventory problems: lost sales, dual sourcing, and multi-echelon inventory management.

Methodology: We model each inventory problem as a Markov decision process and apply and tune the Asynchronous Advantage Actor-Critic (A3C) DRL algorithm across a variety of parameter settings.

Results: We demonstrate that the A3C algorithm can match the performance of state-of-the-art heuristics and other approximate dynamic programming methods. Although the initial tuning was computationally and time demanding, only small changes to the tuning parameters were needed for the other problems studied.

Managerial implications: Our study provides evidence that DRL can effectively solve stationary inventory problems, which is especially promising when problem-dependent heuristics are lacking. Yet generating structural policy insight or designing specialized policies that are (ideally provably) near optimal remains desirable.
Keywords: Artificial Intelligence, Deep Reinforcement Learning, Inventory Control, Dual Sourcing, Lost Sales, Multi-Echelon
Knowledge Domain/Industry: Operations & Supply Chain Management
DOI: 10.1287/msom.2021.1064
URI: http://hdl.handle.net/20.500.12127/7011
Collections: Articles
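The methodology described in the abstract models each inventory problem as a Markov decision process (MDP) on which a DRL agent is trained. As an illustration of the kind of environment involved, here is a minimal, hypothetical sketch of a single-item lost-sales inventory MDP in Python; the state representation, cost parameters, and Poisson demand process are assumptions for illustration, not the paper's exact formulation.

```python
import math
import random


class LostSalesInventoryEnv:
    """Minimal lost-sales inventory MDP sketch (illustrative only).

    State: on-hand inventory plus the pipeline of outstanding orders.
    Action: order quantity placed this period.
    Cost: holding cost on ending inventory plus a penalty per lost sale
    (unmet demand is lost, not backordered).
    """

    def __init__(self, lead_time=2, holding_cost=1.0, penalty_cost=9.0,
                 demand_mean=5, seed=0):
        self.lead_time = lead_time
        self.h = holding_cost        # holding cost per unit per period
        self.p = penalty_cost        # penalty per unit of lost demand
        self.demand_mean = demand_mean
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.on_hand = 10
        # pipeline[i] arrives in i+1 periods
        self.pipeline = [0] * self.lead_time
        return self._state()

    def _state(self):
        return (self.on_hand, tuple(self.pipeline))

    def step(self, order_qty):
        # Oldest outstanding order arrives; new order enters the pipeline.
        self.on_hand += self.pipeline.pop(0)
        self.pipeline.append(order_qty)
        # Demand realizes; any shortfall is lost.
        demand = self._poisson(self.demand_mean)
        sold = min(self.on_hand, demand)
        lost = demand - sold
        self.on_hand -= sold
        cost = self.h * self.on_hand + self.p * lost
        return self._state(), -cost  # reward = negative cost, as usual in RL

    def _poisson(self, lam):
        # Knuth's method; adequate for small lambda.
        threshold, k, prod = math.exp(-lam), 0, 1.0
        while True:
            prod *= self.rng.random()
            if prod <= threshold:
                return k
            k += 1
```

A DRL agent such as A3C would map these states to order quantities; a simple base-stock policy (order up to a fixed level) is a natural baseline against which to compare the learned policy.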
