The Vlerick Repository is a searchable Open Access publication database, containing the complete archive of research output (articles, books, cases, doctoral dissertations,…) written by Vlerick faculty and researchers and preserved by the Vlerick Library.
Find out more about Open Access.
Call to action!
Making your past and future work Open Access in the Vlerick Repository is easy. Send the details of your research output (incl. post-print version) to firstname.lastname@example.org.
Communities in Vlerick Repository
Select a community to browse its collections.
Analysis of lead time correlation under a base-stock policy (Accepted) (Elsevier, 2019)
We analyze the impact of lead time correlation on the inventory distribution, assuming a periodic review base-stock policy. We present an efficient method to compute the shortfall distribution for any Markovian lead time process, and we provide structural results when lead times are characterized by a 2-state Markov-modulated process. The latter reveals how lead time correlation increases the inventory variance and enables a closed form for the asymptotic behavior of the shortfall's variance in case the two possible lead time values are sufficiently different. We also establish upper and lower bounds on the inventory variance, which hold for any general time-homogeneous lead time process. Our results are complemented by a numerical experiment that indicates how commonly used approximations of the shortfall distribution mis-specify base-stock levels in the presence of lead time correlation. Not only does the inventory distribution increase in variance as the lead time correlation increases, it also becomes multi-modal.
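The effect the abstract describes can be illustrated with a small simulation. The sketch below is a hypothetical setup, not the paper's method: it assumes unit demand, one order per period, and measures the shortfall as the number of outstanding orders. It compares a persistent 2-state Markov-modulated lead-time process against an i.i.d. process with the same marginal distribution:

```python
import numpy as np

rng = np.random.default_rng(42)

L_SHORT, L_LONG = 1, 8  # two possible lead-time values (illustrative)

def markov_lead_times(n, p_stay):
    """2-state Markov-modulated lead times: remain in the current state w.p. p_stay."""
    states = np.empty(n, dtype=int)
    states[0] = rng.integers(2)
    for t in range(1, n):
        states[t] = states[t - 1] if rng.random() < p_stay else 1 - states[t - 1]
    return np.where(states == 0, L_SHORT, L_LONG)

def shortfall_series(lead_times):
    """Shortfall proxy under unit demand: number of outstanding orders,
    where the order placed in period t arrives in period t + L_t."""
    n = len(lead_times)
    arrivals = np.arange(n) + lead_times
    # only orders placed in the last L_LONG periods can still be outstanding
    return np.array([(arrivals[max(0, t - L_LONG): t + 1] > t).sum()
                     for t in range(n)])

n = 20_000
correlated = shortfall_series(markov_lead_times(n, p_stay=0.95))
independent = shortfall_series(markov_lead_times(n, p_stay=0.5))  # p_stay=0.5 -> i.i.d.

print(round(correlated.var(), 2), round(independent.var(), 2))
```

With a persistent chain the shortfall lingers near 8 or near 1 for long stretches, so its sample variance far exceeds the i.i.d. case even though both processes share the same marginal lead-time distribution.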
The impact of solution representations on heuristic net present value optimization in discrete time/cost trade-off project scheduling with multiple cash flow and payment models (Elsevier, 2019)
The goal of this paper is to investigate the impact of different solution representations, as part of a metaheuristic approach, on net present value optimization in project scheduling. We specifically consider the discrete time/cost trade-off problem with net present value optimization and apply three payment models from the literature. Each of these models determines the timing and size of cash flows from the contractor’s viewpoint. The contribution of this paper to the literature is twofold. First, we include cash flow distribution variants in the payment models, to also distinguish between the different ways in which value is created and costs are incurred, as part of a general model for the contractor’s cash flow management. This general model is developed in order to explicitly include the progress of activities in the determination of the timing and size of payments to the contractor, which is currently lacking in the literature. Second, we employ an iterated local search framework to compare different solution representations and their corresponding local search and repair heuristics. The goal is to unambiguously show that the choice of a solution representation deserves a fair amount of attention, alongside the selection of appropriate diversification and intensification operators, even though it does not always receive it in the literature. Each part of the proposed algorithm is validated on a large dataset of test instances, generated to allow for a broad comparison of the solution representations. Our results clearly quantify the statistically significant differences between three types of representations for the project scheduling problem under study.
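To make the role of payment timing in net present value optimization concrete, here is a minimal NPV sketch (with illustrative numbers, not taken from the paper) showing how the same total payment is worth more to the contractor when received as progress payments rather than as a single payment at completion:

```python
def npv(cash_flows, rate):
    """Net present value of (period, amount) cash flows at a per-period discount rate."""
    return sum(amount / (1 + rate) ** t for t, amount in cash_flows)

rate = 0.01  # hypothetical per-period discount rate
lump_sum = [(12, 1200)]                      # one payment at project completion
progress = [(t, 100) for t in range(1, 13)]  # monthly progress payments, same total

print(round(npv(lump_sum, rate), 2), round(npv(progress, rate), 2))
```

The earlier the cash flows arrive, the less they are discounted, which is why the timing rules encoded in a payment model directly change the objective value a scheduling heuristic is optimizing.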
Computing project makespan distributions: Markovian PERT networks revisited (Elsevier, 2019)
This paper analyses the project completion time distribution in a Markovian PERT network. Several techniques to obtain exact or numerical expressions for the project completion time distribution are evaluated, with the underlying assumption that the activity durations are exponentially distributed random variables. We show that some of the methods advocated in the project scheduling literature are unable to solve standard datasets from the literature. We propose a framework to analyse the applicability, accuracy and sensitivity of different methods to compute project makespan distributions. An alternative data generation process is proposed to benchmark the different methods and the influence of project dataset parameters on the obtained results is extensively assessed.
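For exponentially distributed activity durations, very small networks admit closed-form checks. The sketch below uses a hypothetical three-activity network (not one of the paper's datasets) and verifies a Monte Carlo estimate of the makespan distribution against the exact CDF of its parallel part:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical network: activities A and B in parallel, followed by C in series.
# All durations exponential, so makespan = max(A, B) + C.
lam_a, lam_b, lam_c = 1.0, 0.5, 2.0
n = 200_000
a = rng.exponential(1 / lam_a, n)
b = rng.exponential(1 / lam_b, n)
c = rng.exponential(1 / lam_c, n)
makespan = np.maximum(a, b) + c

# Exact CDF of the parallel part: P(max(A, B) <= t) = P(A <= t) * P(B <= t)
t = 2.0
exact = (1 - np.exp(-lam_a * t)) * (1 - np.exp(-lam_b * t))
empirical = (np.maximum(a, b) <= t).mean()
print(round(exact, 4), round(empirical, 4))
```

For larger networks such closed forms quickly become intractable, which is exactly why the exact and numerical methods compared in the paper are needed.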
Tolerance limits for project control: An overview of different approaches (Published Online) (Elsevier, 2018)
Monitoring the performance of projects in progress and controlling their expected outcome by taking corrective actions is a crucial task for any project manager. Project control systems are in use to quantify the project performance at a certain moment in time, and allow the project manager to predict the expected outcome if no action is taken. Consequently, these systems serve as mechanisms that provide warning signals telling the project manager when it is time to take corrective actions to bring the expected project outcome back on track. In order to trust these generated warning signals, the project manager has to set limits on the provided performance metrics that serve as thresholds for these actions. This paper gives an overview of different approaches discussed in the literature to control projects using such action thresholds. First and foremost, the paper discusses three classes of action thresholds, ranging from very easy-to-use rules of thumb to more advanced statistical project control methodologies. Each of these tools has been the subject of research studies aiming to show its power to predict project problems while the project is in progress. In addition, the paper emphasizes the fundamental difference between statistical project control using tolerance limits and statistical process control for projects. Finally, three different quality metrics to evaluate the performance of such control methods are presented and discussed.
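A minimal sketch of the statistical tolerance-limit idea (a purely hypothetical illustration, not one of the paper's methodologies): simulate the performance metric of projects assumed to be on track, and derive warning thresholds from the percentiles of that simulated distribution:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical: schedule performance index (SPI) values of 1000 simulated
# projects that are on track at a given review point; the distribution's
# shape and spread are assumptions for illustration only.
spi_on_track = rng.normal(loc=1.0, scale=0.08, size=1000)

# Tolerance limits at the 5th and 95th percentiles of the on-track distribution.
lower, upper = np.percentile(spi_on_track, [5, 95])

def warning_signal(observed_spi):
    """Trigger a warning when observed performance leaves the tolerance interval."""
    return observed_spi < lower or observed_spi > upper

print(round(lower, 3), round(upper, 3))
```

An observed SPI outside the interval then acts as the warning signal prompting the project manager to investigate and, if needed, take corrective action.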
The value of neighborhood information in prospect selection models: Investigating the optimal level of granularity (2013)
Within analytical customer relationship management (CRM), customer acquisition models suffer the most from a lack of data quality because the information on potential customers is mostly limited to socio-demographic and lifestyle variables obtained from external data vendors. Particularly in this situation, taking advantage of the spatial correlation between customers can improve the predictive performance of these models. This study compares the predictive performance of an autoregressive and a hierarchical technique in an application that identifies potential new customers for 25 products and brands. In addition, this study shows that the predictive improvement can vary significantly depending on the granularity level at which the neighborhoods are composed. Therefore, a model is introduced that simultaneously incorporates multiple levels of granularity, resulting in even more accurate predictions.
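A minimal sketch of the multiple-granularity idea (synthetic data and zone definitions, purely illustrative and not the study's dataset or model): compute a leave-one-out neighborhood ownership rate at both a fine and a coarse geographic level and stack them as candidate model features:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic illustration: 0/1 product ownership for 1000 customers, each
# assigned to a fine-grained zone and a coarser zone grouping ten fine zones.
n = 1000
fine = rng.integers(0, 50, n)   # fine-grained zone id (e.g. street level)
coarse = fine // 10             # coarse zone id (e.g. district level)
owns = (rng.random(n) < 0.2).astype(float)  # observed ownership indicator

def loo_rate(zone_ids, owns):
    """Leave-one-out ownership rate among a customer's zone neighbours."""
    sums = np.bincount(zone_ids, weights=owns)
    counts = np.bincount(zone_ids)
    return (sums[zone_ids] - owns) / np.maximum(counts[zone_ids] - 1, 1)

# Stack the rates at both granularity levels as features for an acquisition model.
X = np.column_stack([loo_rate(fine, owns), loo_rate(coarse, owns)])
print(X.shape)
```

The leave-one-out construction excludes each customer's own label from their neighborhood rate, which avoids leaking the target into the feature; a classifier can then weigh the fine and coarse signals jointly, in the spirit of the multi-granularity model the abstract describes.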