Cluster Events

Research Seminar 2020/2021

  • Presentation, Leandro C. COELHO - Université Laval, Canada (March 4th 2021) Online
    • Title: Traffic and the city: Traffic, Data and Optimization
    • Abstract: All major cities face problems related to the number of cars on the streets, urban congestion, and hours lost in traffic. This leads to longer travel times, uncertainty about the best routes, and irregular public transportation, among other issues. In this presentation we will see how massive data can help us address these and other problems, what techniques and operations research methods have been used, and what trends and opportunities lie ahead.
      We describe a practical approach to measure and compute not only traffic but also the time it takes to traverse any arc in a large network, based on a large amount of data obtained through crowdsourcing. Our method is capable of providing the state of the network at 15-minute intervals for any day of the week. Using this large database, we propose new algorithms for the time-dependent quickest path problem (point to point) and for vehicle routing problems with a comprehensive objective function encompassing not only distance but also time, costs, and emissions. We also compare the results of classical emission estimation models against new machine learning tools that can significantly better predict the emissions incurred by driving in different conditions.
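The time-dependent quickest-path computation described above can be illustrated with a small label-setting search in which an arc's travel time depends on the 15-minute interval at which the arc is entered. This is only a sketch under assumptions of our own (a toy graph, a hypothetical `travel_time` table, and the FIFO property that entering an arc later never yields an earlier exit), not the speakers' algorithm.

```python
import heapq

def quickest_path(graph, travel_time, source, target, start_minute):
    """Time-dependent Dijkstra: labels are earliest arrival times in minutes.

    travel_time[(u, v)] is a (hypothetical) list of 96 values, one per
    15-minute interval of the day, giving the minutes needed to traverse
    arc (u, v) when entering it during that interval.
    """
    best = {source: start_minute}
    heap = [(start_minute, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale heap entry
        for v in graph.get(u, []):
            interval = int(t // 15) % 96  # 15-minute slot at which we enter the arc
            arrival = t + travel_time[(u, v)][interval]
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(heap, (arrival, v))
    return float("inf")
```

For example, with constant travel times of 5 minutes on A-B and B-C and 20 minutes on A-C, `quickest_path` returns 10 for a trip from A to C starting at minute 0, preferring the two-arc route.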

  • Presentation, Maryam DARVISH - Université Laval, Canada (March 30th 2021) Online
    • Title: Logic-Based Benders Algorithms for a Time-Dependent Vehicle Routing Problem
    • Abstract: The classic Vehicle Routing Problem (VRP) is very well studied in the operations research literature. Time-Dependent Vehicle Routing Problem (TDVRP) is an extension of the VRP, which has gained considerable attention during the past years due to its applicability to transportation planning in urban areas. In this talk, we present two models for the TDVRP and show the results obtained by a commercial solver for instances generated based on Quebec City’s road network. Furthermore, we propose a Logic-Based Benders decomposition framework and demonstrate its success in obtaining quality solutions.

Research Seminar 2019/2020

  • Presentation, Victor MARTINEZ DE ALBENIZ, IESE Business School - University of Navarra, Spain (October 3rd 2019) N516
    • Title: A model for Integrated Inventory and Assortment Planning
    • Abstract: Integrating inventory and assortment planning decisions is a challenging task that requires comparing the value of demand expansion through broader choice for consumers, with the value of higher product availability. We develop a model for trading off these values in a setting with limited store capacities and inventory replenishment, two features missing in the literature. The exact analysis of the single product case allows us to develop an accurate approximation for the multi-product case, which incorporates the cost of higher availability in a tractable manner, and hence can be embedded in an optimization framework. We provide algorithms to find optimal or near-optimal solutions.
                 Joint work with Sumit Kunnumkal

  • Presentation, Saed ALIZAMIR, School of Management - Yale University, United States (October 7th 2019) N405
    • Title: Electricity Pricing with Limited Consumer Response
    • Abstract: Matching demand with supply has been a long-standing challenge in operating residential electricity markets. The utility firms often face stochastic demand functions that are affected by the unpredictable exogenous random shocks (e.g., outdoor weather condition). Although various Demand Response programs are in place to regulate electricity consumption, the effectiveness of these programs has been undermined, largely because the consumers have demonstrated limited capability in adjusting their household appliances' settings. In this paper, we construct a demand model to describe how consumers make consumption decisions in response to random external factors representing their ambient environment at a given price. To that end, we adopt the notion of "rational inattention" to capture the consumers' inertia in readjusting their decisions over time. Subsequently, we investigate an electricity firm’s pricing decision as well as its impact on social welfare and system reliability. Our findings highlight the nuanced implications of rationally inattentive consumers, and lead to guidelines for better regulating retail electricity markets.

  • Presentation, Guillaume ROELS, INSEAD, France (November 7th 2019) N305
    • Title: Competitive Bundling in a Bertrand Duopoly
    • Abstract: In competitive industries, some firms bundle their products whereas others unbundle them; still other firms occupy a niche position and offer only a subset of products. No simple theory has been advanced to explain this variety of bundling strategies. We characterize the strategies of two symmetric firms competing (in a Bertrand fashion) with regard to two homogeneous components, where the firms' bundling and pricing decisions are modeled as a two-stage non-cooperative game. Firms in the first stage select their product offerings, which may include any single-component product and/or the bundle; in the second stage, firms simultaneously set their products' prices. We show that, under pure bundling, there is always an equilibrium in which one firm bundles while the other offers only a single-component product. Such bundling serves to soften price competition via product differentiation, and the bundling firm earns more profit---indicative of a first-mover advantage. Under mixed bundling, almost any combination of offerings can be sustained in equilibrium when customer valuations are sufficiently homogeneous. Yet when those valuations are heterogeneous, only three types of equilibria emerge. Our analysis yields these main results: various strategies can be sustained in equilibrium; asymmetric bundling equilibria can emerge even when firms are symmetric; and the more homogeneous are customer valuations, the larger is the set of equilibria.
                Joint work with Araz Khodabakhshian and Uday S. Karmarkar. 

  • Presentation, Roberto ROBERTI, School of Business and Economics, Vrije Universiteit Amsterdam, Netherlands (November 20th 2019) N517
    • Title: Exact Methods for the Traveling Salesman Problem with Drone 
    • Abstract: Efficiently handling last-mile deliveries becomes more and more important nowadays. Using drones to support classical vehicles allows improving delivery schedules as long as efficient solution methods to plan last-mile deliveries with drones are available. We study exact solution approaches for some variants of the traveling salesman problem with drone (TSP-D) in which a truck and a drone are teamed up to serve a set of customers. This combination of truck and drone can exploit the benefits of both vehicle types: the truck has a large capacity but usually low travel speed in urban areas; the drone is faster and not restricted to street networks, but its range and carrying capacity are limited. We present a compact mixed-integer linear program (MILP) for several TSP-D variants that is based on timely synchronizing truck and drone flows; this MILP is easy to implement but nevertheless leads to competitive results compared to the state-of-the-art MILPs. Furthermore, we introduce dynamic programming recursions to model several TSP-D variants. We show how these dynamic programming recursions can be exploited in an exact branch-and-price approach based on a set partitioning formulation and using ng-route relaxation, dual variable stabilization, and a three-level hierarchical branching. The proposed branch-and-price approach can solve instances with up to 39 customers to optimality, outperforming the state of the art by more than doubling the manageable instance size.
                 Joint work with Mario Ruthmair.

  • Presentation, Maria Grazia SPERANZA, University of Brescia, Italy (January 16th 2020) N516
    • Title: A General Heuristic Approach to MILP Problems
    • Abstract: The Kernel Search (KS) is a general and simple heuristic framework for the solution of Mixed Integer Linear Programming (MILP) problems. The basic idea of the KS is to solve a sequence of MILP sub-problems, each restricted to a subset of variables. The KS has been applied to a variety of problems, often taking advantage of the specific problem structure. The first applications of the KS have been to the multi-dimensional knapsack problem, to a portfolio selection problem and to an index tracking problem. Afterwards, the KS has been successfully applied to capacitated facility location problems, both in the multi-source version and in the single-source variant. More recently, the KS has been extended to the Adaptive Kernel Search (AKS), a heuristic framework for the solution of any MILP problem. The AKS solves a sequence of carefully constructed restricted MILP problems (using a commercial solver). The computational effort required to solve the first restricted MILP problem guides the construction of the subsequent MILP problems. The restricted MILP problems are constructed around a kernel, which contains the variables that are presumably non-zero in an optimal solution. Computational results, on a set of benchmark instances from the MIPLIB library, show that the AKS significantly outperforms other state-of-the-art heuristics for the solution of MILP problems. The AKS also compares favorably to CPLEX and offers more flexibility to trade off solution quality and computing time. In this talk the KS and the AKS will be presented together with their most relevant applications. Research on the application of the KS to a routing problem will also be presented.
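The restricted-MILP loop of the Kernel Search can be sketched on a toy 0-1 knapsack. This is only an illustration under strong simplifications: the brute-force `solve_restricted` stands in for the commercial MILP solver, and the value/weight ratio stands in for the LP-relaxation information the actual method uses to build the initial kernel.

```python
from itertools import combinations

def solve_restricted(values, weights, cap, allowed):
    """Stand-in for the MILP solver: brute-force the 0-1 knapsack
    restricted to the variables in `allowed`."""
    allowed = list(allowed)
    best_val, best_set = 0, set()
    for r in range(len(allowed) + 1):
        for combo in combinations(allowed, r):
            if sum(weights[i] for i in combo) <= cap:
                v = sum(values[i] for i in combo)
                if v > best_val:
                    best_val, best_set = v, set(combo)
    return best_val, best_set

def kernel_search(values, weights, cap, kernel_size=2, bucket_size=2):
    # Rank variables by a relaxation proxy (value/weight ratio here;
    # the real method uses the LP relaxation of the MILP).
    order = sorted(range(len(values)), key=lambda i: -values[i] / weights[i])
    kernel = set(order[:kernel_size])
    rest = order[kernel_size:]
    best_val, best_set = solve_restricted(values, weights, cap, kernel)
    # Solve a sequence of restricted problems: kernel plus one bucket each.
    for b in range(0, len(rest), bucket_size):
        bucket = rest[b:b + bucket_size]
        val, sel = solve_restricted(values, weights, cap, kernel | set(bucket))
        if val > best_val:
            best_val, best_set = val, sel
            kernel |= sel  # promote newly used variables into the kernel
    return best_val, best_set
```

On the toy instance with values (10, 7, 6, 5), weights (5, 4, 3, 2) and capacity 7, this sketch recovers the optimal value 15.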

  • Presentation, Raghu RAGHAVAN, Robert H. Smith School of Business - University of Maryland, United States (February 11th 2020) N405
    • Title: Targeted Online Advertising with Multi-Dimensional Bid Adjustments
    • Abstract: The year 2020 marks the 25th anniversary of online advertising. No other major advertising medium (radio, television, or cable) achieved the growth online advertising had in its first twenty-five years. With new ad delivery methods, platforms, and formats on the rise, advertisers are now able to collect user characteristic data at an unprecedented level of granularity and harvest the data to gain insights into these characteristics.
      Online advertising platforms like Google and Bing allow for bid adjustments that permit advertisers to target desired demographics. In addition to bids, the advertisers submit bid adjustments for ad query features such as geographical location, time of day, device, and audience. We introduce the Bid and Bid Adjustment Problem in Online Advertising (BBAP) where an advertiser determines base bids and bid adjustments to maximize their return on investment.
      The BBAP can be modeled as a nonlinear mathematical program. However, the multiplicative nature of bid adjustments makes the problem very hard to solve optimally. We develop an approach in which the mathematical programming formulation is reduced to two tractable subproblems: one to determine the base bids and one to determine the bid adjustments. We show that both subproblems can be modeled as Multiple-Choice Knapsack Problems (MCKPs) when the domains of base bids and bid adjustments are discretized. Furthermore, we show how the solution to the linear programming relaxation of the MCKP can be used to obtain a feasible solution to the BBAP. We iteratively create and solve subproblems, in what we call the Iterative Adjustment Algorithm (IAA), to determine a high-quality solution to the BBAP. The IAA is particularly attractive from a practical standpoint since the linear programming relaxation of the MCKP can be solved easily even for large problem sizes.

      To evaluate the quality of the IAA and the benefits of using bid adjustments, we performed computational experiments on data simulated from sample data in Google Keyword Planner. We show that the IAA provides near-optimal solutions by comparing the revenue obtained from the IAA with an upper bound obtained from a MIP formulation on small instances. Our findings indicate that the revenue benefit of using bid adjustments increases as the revenue-per-click variation across features increases and as the budget decreases. We observe that the performance of the IAA is robust under varying problem sizes, budget amounts, and revenue-per-click variations.
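The discretize-and-solve idea can be made concrete on a toy Multiple-Choice Knapsack instance: pick exactly one discretized level per feature, subject to a budget. Everything below is hypothetical (the feature names, the (revenue, spend) levels, and the brute-force enumeration, which stands in for the LP-relaxation approach the talk describes).

```python
from itertools import product

def solve_mckp(classes, budget):
    """Brute-force MCKP: choose exactly one (value, cost) item per class,
    keeping total cost within budget and maximizing total value.
    Fine only for tiny illustrative instances."""
    best = (float("-inf"), None)
    for choice in product(*classes):
        cost = sum(c for _, c in choice)
        if cost <= budget:
            value = sum(v for v, _ in choice)
            best = max(best, (value, choice))
    return best

# Hypothetical classes: discretized bid-adjustment levels for two features,
# each item being (expected revenue, expected spend) at that level.
geo = [(0, 0), (8, 5), (12, 9)]
device = [(0, 0), (6, 4), (9, 8)]
```

With a budget of 9, the best feasible choice pairs the mid-level geo adjustment with the mid-level device adjustment for a total expected revenue of 14.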

Research Seminar 2018/2019

  • Presentation, Georges ZACCOUR, HEC Montréal / GERAD, Canada (September 7th 2018) N231 (The Club)
    • Title: Brand Imitation: A Dynamic-Game Approach
    • Abstract: Brand imitation is a common practice that can take different forms, i.e., legal copying, as in the case of clones and knockoffs, or illegal, in the case of counterfeiting. We consider a scenario in which a producer enters the market with a "similar" product to the incumbent's and we assess the impact of this entry on the incumbent's strategies and outcomes. A distinctive feature of our model is that it allows for brand dilution, which means that the original brand suffers due to imitation, and for brand enhancement, when the availability of the imitation product actually promotes the original brand. We characterize and contrast the solutions for the scenario with entry and the benchmark case where no entry occurs, in a fully dynamic context and we examine the effect of a change in the date of entry on the entrant's profit.
      Keywords: Brand Imitation; Counterfeiting; Pricing Strategy; Advertising Strategy; Differential Games.
                  Joint work with Bertrand Crettez and Naila Hayek

  • Presentation, Hassan BENCHEKROUN, McGill University, Canada (October 4th 2018) Learning Lab (K-Lab)
    • Title: OPEC, Shale Oil and Global Warming: On the Importance of the Order of Extraction
    • Abstract: We show that OPEC’s market power contributes to global warming by enabling producers of relatively expensive and dirty oil to start producing before OPEC reserves are depleted. We fully characterize the equilibrium of a cartel-fringe model and use a calibration to examine the importance of this extraction sequence effect. While welfare under the cartel-fringe equilibrium can be significantly lower than under a first-best outcome, almost all of this welfare loss is due to the sequence effect. Moreover, the recent boom in shale oil reserves may reduce social welfare.

  • Presentation, Pietro DE GIOVANNI, ESSEC Business School, France (November 15th 2018) N305 (Nautile building)
    • Title: Conformance Quality, Goodwill and Cooperative Programs in a Dynamic Supply Chain
    • Abstract: This research explores a supply chain game with operations and marketing problems. It focuses on the detrimental effect of non-conformance quality on goodwill and how it can be mitigated by using a cooperative program. To address this question, we adopt a dynamic approach, as both conformance quality and goodwill are dynamic phenomena. We use a proactive approach to conformance quality in which firms invest in appraisal and prevention to avoid selling defective items. High conformance quality rates translate into high marginal production costs, while low conformance quality rates generate many returns and consumer dissatisfaction. The latter will be explored by modelling the interface between conformance quality and goodwill. Several types of cooperative programs can be offered to alleviate the negative effect of conformance quality on supply chain members' profits. We investigate various forms of cooperation to make firms economically better off and minimize the number of dissatisfied consumers.

  • Presentation, Raouf BOUCEKKINE, Aix-Marseille Université, France (December 4th 2018) N231 (The Club)
    • Title: On a Class of Stochastic Dynamic Lobbying Games: Theory and application to Petropolitics
    • Abstract: We study a two-player stochastic dynamic symmetric lobbying game. Players have opposite interests; at any date, each player invests in lobbying activities to alter the legislation to her own benefit. The payoffs are quadratic and uncertainty is driven by a Wiener process. We prove that while a symmetric Markov Perfect Equilibrium (MPE) always exists, (two) asymmetric MPE only emerge when uncertainty is large enough. In the latter case, the legislative state converges to a stationary invariant distribution. More importantly, we show that rent dissipation is higher in the asymmetric equilibrium (compared to the symmetric one) provided uncertainty is high but remains moderate. If uncertainty is too high, lobbying efforts drop in both types of equilibria. But since the specific uncertainty-induced mechanism is inherently more affected, rent dissipation is lower in the asymmetric equilibrium. Last but not least, we depart from the latter model by introducing stochastic resource revenues (as a second state equation) and asymmetric lobbying power depending on the level of resources. We use this enlarged model to study a stochastic version of the so-called "First Law of Petropolitics".

  • Presentation, Alessandra BURATTO, University of Padova, Italy (January 11th 2019 cancelled) Learning Lab (K-Lab)

  • Presentation, Bartholomew McCARTHY, Nottingham University Business School, United Kingdom (February 7th 2019) Learning Lab (K-Lab)
    • Title: "Opportunities & Challenges for OM & OR Research in Omni-Channel Retailing"
    • Abstract: The nature of retailing is changing. Historically, all customers were served from a network of bricks-and-mortar stores. Today’s retail customers seek to access information and make purchase decisions in whatever way, and on whatever device they wish. They also want to receive their orders whenever, wherever, and however they wish.  Thus, retailers are under pressure to offer multiple retail channels and multiple fulfilment modes in order to satisfy very different customer journeys. ‘Omni-Channel’ retailing changes the nature and granularity of retail order fulfilment fundamentally.  We first discuss different types of multi-channel retailing and provide a conceptual map for omni-channel order fulfilment decisions at strategic, operational, and tactical levels. We illustrate models for one of the most popular modes of Omni-Channel fulfilment - ‘Buy-online-pickup-in-store’ (BOPS) – where retailers use their store network for fulfilment and advertise an ordering window with a guarantee of when their order will be available for collection, often the same day. Omni-Channel retailing is practice-led and retailers are experimenting, trialling, and implementing different solutions and systems, not always successfully. The wide opportunities for both modelling research and empirical OM research across the omni-channel retailing landscape are discussed.

  • Presentation, Claudia ARCHETTI, University of Brescia, Italy (March 6th 2019) Learning Lab (K-Lab)
    • Title: "The Flexible Periodic Vehicle Routing Problem"
    • Abstract: The Flexible Periodic Vehicle Routing Problem (FPVRP) is the problem where a carrier has to establish a distribution plan to serve the customers over a planning horizon. Each customer has a total demand that must be served within the horizon and a limit on the maximum quantity that can be delivered at each visit. A fleet of homogeneous capacitated vehicles is available to perform the services and the objective is to minimize the total routing cost. The FPVRP can be seen as a generalization of the Periodic Vehicle Routing Problem (PVRP), which instead has fixed service frequencies and schedules and where the quantity delivered at each visit is fixed. Moreover, the FPVRP shares some common characteristics with the Inventory Routing Problem (IRP), where inventory levels are considered at each time period and, typically, an inventory cost is involved in the objective function. This study presents a worst-case analysis which shows the advantages of the FPVRP with respect to both the PVRP and the IRP. Moreover, a mathematical formulation for the problem is proposed, together with some valid inequalities. Computational results show that adding flexibility meaningfully reduces routing costs in comparison with both the PVRP and the IRP.

  • Presentation, Foad MAHDAVI PAJOUH, University of Massachusetts, United States (April 29th 2019) N231 (Club)
    • Title: "Detecting a Most Closeness-Central Clique in Complex Networks"
    • Abstract: Centrality is a powerful concept for detecting influential components of a graph applicable to various areas such as analysis of social, collaboration, and biological networks. In this study, we employ one of the well-known centrality measures, closeness centrality, to detect a group of pairwise connected members (a highly connected community known as a clique) with the highest accessibility to the entire network. To measure accessibility of a clique, we use two metrics, the maximum distance and the total distance to the clique from other members of the network. Hence, we are dealing with two variants of the most central clique problem referred to as maximum-distance-closeness-central clique and total-distance-closeness-central clique problems. We study the computational complexity of these two problems, and prove that their decision versions are NP-complete. We also propose new mixed 0–1 integer programming formulations and the first combinatorial branch-and-bound algorithms to solve these problems exactly. We show that our algorithmic approaches offer at least 83-fold speed-up on over 97% of benchmark instances in comparison to existing approaches. 
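The two accessibility metrics can be sketched in a few lines, assuming the distance from a node to a clique is the hop distance to its nearest member. This is a pure-Python BFS illustration on a hypothetical graph, not the authors' branch-and-bound code.

```python
from collections import deque

def bfs_dist(adj, source):
    """Hop distances from `source` to every reachable node."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def clique_closeness(adj, clique):
    """Maximum and total distance from the rest of the network to the clique,
    where a node's distance to the clique is the distance to its nearest member."""
    dists = {c: bfs_dist(adj, c) for c in clique}
    outside = [v for v in adj if v not in clique]
    to_clique = [min(dists[c][v] for c in clique) for v in outside]
    return max(to_clique), sum(to_clique)
```

On the path graph 1-2-3-4-5, the clique {2, 3} has maximum distance 2 (node 5) and total distance 4, the two objective values the talk's problems minimize.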

  • Presentation, Jon LEE, University of Michigan, United States (May 27th 2019) Learning Lab (K-Lab)
    • Title:  "Explainable Models via Sparse Reflexive Generalized Inverses"
    • Abstract: The Moore-Penrose (M-P) pseudoinverse of a matrix A is a standard tool for fitting least-squares models and solving other key problems. But even when A is sparse, the M-P pseudoinverse is usually dense. This can be problematic for situations in which A is huge. Focusing on the rank-deficient situation, we propose many new sparse generalized inverses, examine their properties, and give efficient and simple approximation algorithms for them. Our sacrifice in approximating is compensated for by explainability.
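The fill-in phenomenon motivating the talk is easy to reproduce (assuming NumPy is available; the small difference matrix below is merely an illustrative choice, not an example from the talk): a matrix with half of its entries zero has a completely dense pseudoinverse.

```python
import numpy as np

# A sparse 3x4 "difference" matrix of rank 3: 6 of its 12 entries are zero.
A = np.array([[1., -1., 0., 0.],
              [0., 1., -1., 0.],
              [0., 0., 1., -1.]])

P = np.linalg.pinv(A)  # Moore-Penrose pseudoinverse (4x3)

sparse_nnz = np.count_nonzero(A)                 # 6 nonzeros: sparse input
dense_nnz = np.count_nonzero(np.abs(P) > 1e-12)  # 12 nonzeros: dense output
```

Every entry of `P` is nonzero here, which is the kind of fill-in that makes the exact pseudoinverse impractical for huge sparse matrices and motivates sparse generalized inverses.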

  • Presentation, Hamed GHODDUSI, Stevens Institute of Technology, United States (May 29th 2019) Learning Lab (K-Lab)
    • Title: "Risk Management of Commodity Processing Firms: An Equilibrium View"
    • Abstract: We provide a stylized model of a commodity production chain with endogenously determined input and output prices. We characterize the hedging policies with a financial contract on the price of the input, and show that hedging effectiveness is bounded. Our model allows firms to include forward-looking information, e.g. regarding future volatility, in the calculation of their hedging strategy. We estimate a parametrized version of our model for the crude oil to refined products supply chain, using the simulated method of moments (SMM). Our model approximates the dynamics in the crude oil and refined products markets, and can be used to study the impact of technological change, changes in volatility, and shocks to the supply side in the crude oil, refined products, and the refinery industries on the optimal hedging strategy.
                   Joint work with Sheridan Titman and Stathis Tompaidis.

  • Presentation, Hamed QAHRI SAREMI,  DePaul University, United States (June 13th 2019) Learning Lab (K-Lab) 
    • Title: "How Does Supplier Integration Influence Firm Performance? Insights from a Meta-Analytic Structural Equation Modeling Study"
    • Abstract: Prior research shows an overall positive effect of supplier integration on firm performance, but also suggests the presence of a series of contingencies that may affect this relation. In this study, we propose a structural model of four performance mediators –– efficiency, effectiveness, product innovation, and customer service –– that shape the effects of three dimensions of supplier integration –– informational, operational, and relational integration –– on the financial and market performance of a firm. We test our structural model using a meta-analysis of 146 studies comprising 31,129 observations and a state-of-the-art two-stage meta-analytic structural equation modeling (TSSEM) technique. Our findings reveal a myriad of ways that the three dimensions of supplier integration can indirectly influence financial and market performance of a firm and the interactions between the four performance mediators that form these indirect effects. These findings shed light on the mechanisms through which supplier integration can influence firm performance, thereby providing important theoretical and practical contributions.

Research Seminar 2017/2018

    • Presentation, Fabio D'ANDREAGIOVANNI, CNRS - Laboratory Heudiasyc, Université de Technologie de Compiègne, France (September 27th 2017)
      • Title: Zero-price Energy Offering by (Multiband) Robust Optimization
      • Abstract: We consider the problem of a price-taker generating company that wants to select energy offering strategies for its generation units, to maximize the profit while considering the uncertainty of market price. First, we review central references available in the literature on the use of Robust Optimization (RO) for price-uncertain energy offering, pointing out how they can expose the company to the risk of suboptimal and even infeasible offering. We then propose a new RO-based offering method, which is characterized by making offers at zero price and overcomes all the limits of the benchmark methods. We show the effectiveness of the new method on realistic instances provided by our industrial partners, obtaining very high increases in profit. Our method is based on Multiband Robustness (MR - Büsing, D'Andreagiovanni, 2012), an RO model that refines the classical Bertsimas-Sim model while maintaining its computational tractability and accessibility. MR is essentially based on the use of histogram-like uncertainty sets, which are particularly suitable to represent empirical distributions commonly available in uncertain real-world optimization problems.

    • Presentation, Laurent ALFANDARI, ESSEC Business School - France (October 23rd 2017)
      • Title: Tighter Models for Barge Container Ship Routing 
      • Abstract: This paper addresses the problem of optimal planning of a liner service for a barge container shipping company. Given estimated weekly demands between pairs of ports, our goal is to determine the subset of ports to be called and the amount of containers to be shipped between each pair of ports, so as to maximize the profit of the shipping company. In order to save possible leasing or storage costs of empty containers at the respective ports, our approach takes into account the repositioning of empty containers. The line has to follow the outbound-inbound principle, starting from the port at the river mouth. We propose a novel integrated approach in which the shipping company can simultaneously optimize the route (along with repositioning of empty containers), the choice of the final port, the length of the turnaround time and the size of its fleet. To solve this problem, a new mixed integer programming model is proposed. On the publicly available set of benchmark instances for barge container routing, we demonstrate that this model provides very tight dual bounds and significantly outperforms the existing approaches from the literature for splittable demands. We also show how to further improve this model by projecting out arc variables for modeling the shipping of empty containers. Our numerical study indicates that the latter model improves the computing times for the challenging case of unsplittable demands. We also study the impact of the turnaround time optimization on the total profit of the company. Joint work with T. Damjanovic, F. Furini, I. Ljubic, V. Maras, S. Martin.

    • Presentation, Jean-François CORDEAU, HEC Montréal - Canada (December 13th 2017)
      • Title:  A Unified Decomposition Matheuristic for Assembly, Production and Inventory Routing
      • Abstract: While the joint optimization of production and outbound distribution decisions in a manufacturing context has been intensively studied in the past decade, the integration of production, inventory and inbound transportation from suppliers has received much less attention despite its practical relevance. We aim to fill the gap by introducing a general model for the assembly routing problem (ARP), which consists of simultaneously planning the assembly of a finished product at a plant and the routing of vehicles collecting materials from suppliers to meet the inventory requirements imposed by the production. We formulate the problem as a mixed-integer linear program and we propose a three-phase decomposition matheuristic that relies on the iterative solution of different subproblems. The first phase determines a setup schedule while the second phase optimizes production quantities, supplier visit schedules and shipment quantities. The third phase solves a vehicle routing problem for each period in the planning horizon. The algorithm is flexible and we show how it can also be used to solve two well-known problems related to the ARP: the production routing problem (PRP) and the inventory routing problem (IRP). Using the same parameter setting for all problems and instances, we obtain 781 new best known solutions out of 2,628 standard IRP and PRP test instances. In particular, on large-scale multi-vehicle instances, the new algorithm outperforms specialized state-of-the-art heuristics for these two problems.

    • Presentation, Herbert DAWID, Bielefeld University -  Germany (January 26th 2018)
      • Title: Dynamic Investment Strategies and Leadership in Product Innovation
      • Abstract: We employ a dynamic market model with endogenous creation of submarkets to study the optimal product innovation strategies of incumbent firms. Firms invest in production capacity and R&D knowledge stock, where the latter determines the hazard rate of innovation. We find that under Markov Perfect Equilibrium behavior the firm with a larger market share on the established market is less likely to be the first innovator. Investment in R&D knowledge is negatively affected by the opponent’s production capacity on the established market if the opponent has not innovated yet. However, this effect is reversed after the opponent has successfully introduced the new product. The firm with higher costs of adjusting capacity for the established product has a larger incentive to engage in product innovation and might even achieve higher long run profit than its more efficient competitor.
                           Joint work with Michel Y. Keoula, Michael Kopel, and Peter M. Kort.

    • Presentation, Tom ARCHIBALD, University of Edinburgh, Business School - United Kingdom (February 22nd 2018)
      • Title: Modelling transshipments in retail networks when demand is not fully observed
      • Abstract: Transshipments are a means to redistribute inventory in a retail network in order to better meet customer demand.  Models of transshipments generally assume that customer demand is observed.  This research relaxes this assumption and proposes a model that can be used to plan transshipments in a general setting.  A numerical study shows that a proposed heuristic based on approximate dynamic programming can lead to substantial cost savings compared to commonly used policies.

    • Presentation, Fabio FURINI, LAMSADE Paris-Dauphine University - France (June 1st 2018)
      • Title: Reformulation and Decompositions of Mixed Integer Linear Programs
      • Abstract: The talk will present basic and advanced techniques to identify and take advantage of innovative decomposition and reformulation methods for hard Combinatorial Optimisation Problems. Divide and conquer, from Latin divide et impera, is one of the key techniques for tackling combinatorial optimization problems. It relies on the idea of decomposing complex problems into a sequence of subproblems that are then easier to handle. Decomposition techniques (such as Dantzig-Wolfe, Lagrangian, or Benders decomposition) are extremely effective in a wide range of applications, including cutting & packing, production & scheduling, routing & logistics, telecommunications, transportation and many others. Moreover, decomposition techniques are playing an important role in many different fields of mixed-integer linear and non-linear optimization, multi-objective optimization, optimization under uncertainty, bilevel optimization, etc. Despite the tremendous amount of research on these topics, the mathematical optimization community is constantly faced with new challenges coming from theoretical aspects and real world applications that require the development of new advanced tools.

    • Presentation, Bissan GHADDAR, Ivey Business School at Western University - Canada (June 12th 2018)
      • Title: The Vehicle Routing Problem with Floating Targets: Formulation and Solution Approaches
      • Abstract: We present a generalization of the vehicle routing problem which consists of intercepting non-stationary targets with a fleet of vehicles in order to bring them to a common destination. This problem models new applications in drone routing, ridesharing, and logistics where a vehicle agrees to meet another vehicle or a customer at a location that is away from the designated home location. We propose a Mixed Integer Second Order Cone Program formulation for the problem, along with valid inequalities for strengthening the continuous relaxation. We further exploit the problem structure using a Lagrangian decomposition and propose an exact branch-and-price algorithm. Computational results on instances with varying characteristics are presented and the results are compared to the solution of the
        full problem using CPLEX. The proposed valid inequalities reduce the computational time of CPLEX by up to 30% on average while the proposed branch-and-price is capable of solving instances where CPLEX fails in finding the optimal solution within the imposed time limit.

Research Seminar 2016/2017

    • Presentation, Pietro DE GIOVANNI, ESSEC Business School, France (October 3rd 2016)
      • Title: Environmental Collaboration and Process Innovation in Supply Chain Management
      • Abstract: This paper investigates a dynamic supply chain model in which a manufacturer decides on process innovation investments and a retailer sets the price. The process innovation investments contribute to the development of the environmental performance, which represents our state variable. The environmental performance is damaged by negative externalities implied by the demand. This generates an interesting operational trade-off between sales and environmental damages that both players seek to resolve. We show the overall benefits that cooperation in a process innovation program generates for all supply chain members.
    • Presentation, Jean-François CORDEAU, HEC Montréal, Canada (October 31st 2016)
      • Title: 
      • Abstract:

    • Presentation, Elena BELAVINA, University of Chicago Booth School of Business, United States (October 13th 2016)
      • Title: Grocery Access, Market Structure and Food Waste
      • Abstract: This paper studies how access to grocery stores, and the extent and nature of competition in the grocery retail market, influence food waste. Access to grocery stores, i.e., how dense the network of retail stores in a neighborhood is, varies extensively as a result of zoning laws and other city government initiatives. Similarly, some markets are dominated by one chain, while others have a high degree of competition with many independent grocery stores. And finally, consumers in some markets are more price-sensitive, while in others the degree of product availability or service level is the key competitive variable. We build a multi-echelon arborescent supply chain model whose lowest tier consists of heterogeneous customers, each making optimal perishable inventory replenishment timing and level decisions in the face of demand uncertainty. Competing grocery stores are the next tier; their demand arises as the superposition of the stochastic order processes of customers, and they themselves manage store inventories. We use this model to compute food waste and its dependence on store density and market structure.
        Our analysis reveals that, independent of the market structure, denser grocery store networks result in higher food waste at the store level but lower consumer food waste, in contrast with the conventional wisdom that a higher level of price competition would lead to higher consumer food waste due to the resulting lower grocery prices. The conventional logic does not take into account the reduction in food waste due to the increased convenience of grocery shopping. Overall, denser grocery store networks have lower food waste, as consumer-side waste is substantially higher than waste at the store level. Further, keeping store density fixed, when price is the main competitive dimension, market structures with a low degree of competition (a single dominant chain) are more environmentally friendly than ones with many independent retailers. On the other hand, when the main competitive dimension is service level, a higher degree of competition is preferred. That is, even without changing grocery store density, city governments can influence food waste levels by instilling the “right” competitive structure.

    • Presentation, Bernard GENDRON, CIRRELT, Canada (November 21st 2016)
      • Title: Branch-and-Price-and-Cut for Multicommodity Network Design
      • Abstract: We consider a mixed-integer programming model that represents a large number of network design applications in transportation and logistics. We consider several alternatives for solving this model, in particular a column-and-row generation approach, recently introduced by Frangioni and Gendron (2013) under the name "Structured Dantzig-Wolfe Decomposition". We present preliminary computational results that show comparisons of the different variants on a set of large-scale network design instances.

    • Presentation, Bernard FORTZ, Université Libre de Bruxelles, Belgium  (December 15th 2016)
      • Title: Computational Strategies for a Multi-Period Network Design and Routing Problem
      • Abstract: The conventional multicommodity capacitated network design problem deals with the simultaneous optimization of capacity installation and traffic flow routing, where a fixed cost is incurred for opening a link and a linear routing cost is paid for sending traffic flow over a link. The routing decision must be performed such that traffic flows remain
        bounded by the installed capacities. In this talk, we generalize this problem over multiple time periods using an increasing convex cost function which takes into account congestion (number of routing paths per edge) and delay (routing path length). We propose a compact Mixed Integer Linear Program (MILP) formulation for this problem, based on the aggregation of traffic flows by destination following the per-destination routing decision process underlying packet networks. We observe that the resolution with realistic topologies and traffic demands becomes rapidly intractable with state-of-the-art solvers due to the weak linear programming bound of the proposed MILP formulation. We also introduce an extended formulation where traffic flows are disaggregated by source-destination pairs, while keeping the requirement of destination-based routing decisions. This extended formulation provides for all evaluated topologies stronger linear programming lower bounds than the base formulation. However, this formulation still suffers from the large size of the resulting variables and constraints sets; hence, solving the linear relaxation of the problem becomes intractable when the network size increases. In this talk, we investigate different computational strategies to overcome the computational limits of the formulations. We propose different branch-and-cut strategies and a Lagrangian relaxation approach.
        Joint work with Enrico Gorgone (ULB) and Dimitri Papadimitriou (Nokia - Bell Labs)

    • Presentation, Moritz FLEISCHMANN, University of Mannheim, Germany  (January 30th 2017)
      • Title: Strategic Grading in the Product Acquisition Process of a Reverse Supply Chain
      • Abstract: Most recommerce providers are applying a quality-dependent process for the acquisition of used products. They acquire the products via websites at which product holders submit upfront quality statements and receive quality-dependent acquisition prices for their used devices. This presentation is based on two papers that are motivated by this development of reverse logistics practice and aim to analyse the product assessment process of a recommerce provider in detail. We first propose a sequential bargaining model with complete information which captures the individual behaviour of the recommerce provider and the product holder. We determine the optimal strategies of the product holder and the recommerce provider in this game. We find that the resulting strategies lead to an efficient allocation, although the recommerce provider can absorb most of the bargaining potential due to his last mover advantage. We then relax the assumption of complete information and include uncertainty about the product holder's valuation of the product. We show the trade-off underlying the recommerce provider's optimal counteroffer decision and analyse the optimal strategy, using a logistic regression approach on a real-life data set of nearly 6,000 product submissions. The results reveal a significant improvement potential, compared to the currently applied strategy. The second paper takes the analysis of incomplete information further and derives the equilibrium strategies if both players are facing some uncertainties. At its core, it then addresses the recommerce provider’s optimization of the price-quality menu. Finally, we propose an alternative acquisition process that overcomes some observed deficiencies of the current process.

    • Presentation, Jean-Charles CHEBAT, HEC Montréal, Canada (February 20th 2017)
      • Title: Counterproductive Effects of Commonsensical Marketing Strategies in Services Marketing
      • Abstract: Customers are increasingly violent toward frontline employees. Service corporations have developed three commonsensical strategies to deal with consumers, namely "customer is king", "service with a smile", and "corporation as a family". Our empirical data (some 500 service employees) show that such strategies bring about paradoxical negative consequences for the employees' behaviors toward the service corporation, especially in terms of reduced commitment to the employer and deviant behavior. Employees also show increased anger toward the corporation and emotional exhaustion. Managerial conclusions are drawn from the findings.

    • Presentation, Zeynep AKSIN, Koç University, Turkey (March 27th 2017)
      • Title: How Experienced Waits Drive Queue Behavior in the Lab
      • Abstract: Using laboratory experiments, we study join and quit decisions by subjects facing a single-server, observable,
        first-come-first-served queue. In a set-up that incentivizes decisions that would maximize an expected utility function that is linear in waiting costs, we explore the role that queue length and encountered service-time experience play in these decisions. We show that both the probability of quitting a queue and the survival time in a queue are affected by the queue length as well as by experienced service times, for the same total waiting times. We further find that subjects are less inclined to join queues with random service times relative to a benchmark queue with deterministic service times. The implications of the results in terms of queue design and delay announcements in queues are discussed, along with ongoing experiments that explore the role of a waiting time announcement upon entry.
        Joint work with Busra Gencer, Evrim Gunes, Ozge Pala.

    • Presentation, Jamal OUENNICHE, University of Edinburgh, United Kingdom (April 10th 2017)
      • Title: A Dual Local Search Framework for Combinatorial Optimization Problems with TSP Application
      • Abstract: In practice, solving realistically sized combinatorial optimization problems to optimality is often too time consuming to be affordable; therefore, heuristics are typically implemented within most application software. A specific category of heuristics has attracted considerable attention, namely local search methods. Most local search methods are primal in nature; that is, they start the search with a feasible solution and explore the feasible space for better feasible solutions. In this research, we propose a dual local search method and customize it to solve the traveling salesman problem (TSP); that is, a search method that starts with an infeasible solution, explores the dual space—each time reducing infeasibility, and lands in the primal space to deliver a feasible solution. The proposed design aims to replicate the designs of optimal solution methodologies in a heuristic way. To be more specific, we solve a combinatorial relaxation of a TSP formulation, design a neighborhood structure to repair such an infeasible starting solution, and improve components of intermediate dual solutions locally. Sample-based evidence along with statistically significant t-tests support the superiority of this dual design compared to its primal design counterpart.
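The primal/dual contrast is easy to see on a small example. The sketch below is not the authors' method: the instance, the subtour starting point, and the merge rule are all invented for illustration. It starts from disjoint subtours (a stand-in for the solution of a combinatorial relaxation), repeatedly applies the cheapest subtour merge so that each step reduces infeasibility by one subtour, and only then improves the resulting feasible tour with an ordinary primal 2-opt pass:

```python
import math
import random

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(24)]

def d(i, j):
    return math.dist(pts[i], pts[j])

def length(tour):
    return sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

# "Dual" starting point: an infeasible solution made of disjoint subtours.
subtours = [list(range(k, min(k + 4, len(pts)))) for k in range(0, len(pts), 4)]

def best_splice(t1, t2):
    """Cheapest 2-exchange reconnection of two subtours into one."""
    best = (float("inf"), 0, 0)
    for i in range(len(t1)):
        a, b = t1[i], t1[(i + 1) % len(t1)]
        for j in range(len(t2)):
            c, e = t2[j], t2[(j + 1) % len(t2)]
            delta = d(a, c) + d(b, e) - d(a, b) - d(c, e)
            if delta < best[0]:
                best = (delta, i, j)
    return best

# Repair phase: every merge reduces infeasibility (one subtour fewer).
while len(subtours) > 1:
    pick = min(((best_splice(subtours[p], subtours[q]), p, q)
                for p in range(len(subtours))
                for q in range(p + 1, len(subtours))),
               key=lambda z: z[0][0])
    (delta, i, j), p, q = pick
    t1, t2 = subtours[p], subtours[q]
    merged = t1[(i + 1) % len(t1):] + t1[:(i + 1) % len(t1)]  # ends at t1[i]
    merged += t2[j::-1] + t2[:j:-1]                           # c .. back to e
    subtours = [t for k, t in enumerate(subtours) if k not in (p, q)] + [merged]

tour = subtours[0]

# Landed in the primal space: improve locally with 2-opt.
improved = True
while improved:
    improved = False
    n = len(tour)
    for i in range(n - 1):
        for j in range(i + 2, n - (i == 0)):
            a, b, c, e = tour[i], tour[i + 1], tour[j], tour[(j + 1) % n]
            if d(a, c) + d(b, e) < d(a, b) + d(c, e) - 1e-12:
                tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                improved = True

print(len(tour), round(length(tour), 3))
```

The repair loop plays the role of "reducing infeasibility at each step"; the actual relaxation and neighborhood structure used in the talk are more sophisticated.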

    • Presentation, Emanuele BORGONOVO, Bocconi University, Italy (May 22th 2017)
      • Title: Sensitivity Analysis in the Management Sciences: A Review
      • Abstract: The solution of several operations research problems requires the creation of a quantitative model. Sensitivity analysis is a crucial step in the model building and result communication process. Through sensitivity analysis, we gain essential insights on model behavior, on its structure and on its response to changes in the model inputs. Several interrogations are possible and several sensitivity analysis methods have been developed, giving rise to a vast and growing literature. We present an overview of available methods, structuring them into local and global methods. For local methods, we discuss Tornado diagrams, one-way sensitivity functions, differentiation-based methods and scenario decomposition through finite change sensitivity indices, providing a unified view of the associated sensitivity measures. We then analyze global sensitivity methods, first discussing screening methods such as sequential bifurcation and the Morris method. We then address variance-based, moment-independent and value of information-based sensitivity methods. We discuss their formalization in a common rationale and present recent results that permit the estimation of global sensitivity measures by post-processing the sample generated by a traditional Monte Carlo simulation. We then investigate in detail the methodological issues concerning the crucial step of correctly interpreting the results of a sensitivity analysis. A classical example is worked out to illustrate some of the approaches.

        Joint work with Elmar Plischke, Clausthal University of Technology.
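As a minimal illustration of the "post-processing a Monte Carlo sample" idea mentioned in the abstract, the sketch below estimates a first-order variance-based sensitivity index from a single given sample by binning on one input. The linear test model and all constants are hypothetical, chosen only so the analytical answer (0.8) is known; this is a common given-data estimator, not necessarily the one presented in the talk.

```python
import random
import statistics

random.seed(0)
N = 20000

# Hypothetical model: Y = X1 + 0.5*X2, X1, X2 ~ U(0,1) independent.
# Analytically, S1 = Var(X1) / Var(Y) = (1/12) / (1.25/12) = 0.8.
xs1 = [random.random() for _ in range(N)]
xs2 = [random.random() for _ in range(N)]
ys = [x1 + 0.5 * x2 for x1, x2 in zip(xs1, xs2)]

def first_order_index(x, y, bins=50):
    """Given-data estimator of the first-order Sobol' index:
    Var(E[Y | X in bin]) / Var(Y), post-processing one MC sample."""
    pairs = sorted(zip(x, y))
    m = len(pairs) // bins
    bin_means, weights = [], []
    for b in range(bins):
        chunk = [yy for _, yy in pairs[b * m:(b + 1) * m]]
        if chunk:
            bin_means.append(statistics.fmean(chunk))
            weights.append(len(chunk))
    overall = statistics.fmean(y)
    var_cond = (sum(w * (mu - overall) ** 2
                    for w, mu in zip(weights, bin_means)) / sum(weights))
    return var_cond / statistics.pvariance(y)

s1 = first_order_index(xs1, ys)
print(round(s1, 2))  # close to the analytical value 0.8
```

No extra model runs are needed beyond the original Monte Carlo sample, which is exactly the practical appeal the abstract highlights.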

Research Seminar 2015/2016

    • Presentation, Immanuel Bomze, University of Vienna, Austria (October 26th 2015)
      • Title: Data Analysis, Machine Learning, Ternary and other Hard Decision Problems: How Copositive Optimization Can Help
      • Abstract: Automated data analysis by now has reached an impressive impact level on everyday life and will most certainly intensify its influence on an overwhelming part of our society. Most of the algorithms currently employed are based on criteria leading to optimization problems which are notoriously hard to solve. By consequence, often heuristic and/or incomplete routines are used. However, both false-positive and false-negative results can be literally lethal, so it is mandatory to have quality guarantees, at least if human control is only superficially involved in key decision processes. Only exact optimization methods can assist in this situation. The task complexity is partly due to the discrete (or mixed-integer) structure, and partly to non-convexity of the functions involved, even if only continuous decision variables are considered. Here we will deal with one example of this phenomenon, namely yes/no decisions with an abstention possibility. A similarity-based clustering or prediction criterion, which considers this third option seriously, leads to a ternary fractional quadratic optimization problem [1]. This class has recently been shown to be very hard from the worst-case complexity point of view [2] and has several applications, e.g. via graph tripartitioning in the analysis of social networks [5]. Copositive optimization is a 15-year-old paradigm [3,4] which transforms both above-mentioned sources of difficulty, discreteness and nonconvexity, into an optimization problem with a linear objective function, subject to linear constraints, over a convex (matrix) cone, thus pushing all problem complexity into the description of the cone. This approach allows for developing tractable approximations of these hard problems, building upon the by now well-established interior-point optimization technology with its polynomial-time worst-case complexity. The resulting bounds can serve as a benchmark on the quality of a decision, and may be used, e.g., for alerting human supervision in an automated way.
    • Presentation, Richard Hartl, University of Vienna, Austria (November 19th 2015)
      • Title: A Multi-Newsvendor Distribution System with Resupply and Vehicle Routing
      • Abstract: This paper analyzes the problem of delivering perishable products from a depot to stores and introduces the option of performing a second delivery per day to allow a retailer to better deal with uncertainty. For each store we have to determine the initial delivery quantity, whether or not the store will receive a resupply, the timing and quantity of resupply and the sequence in which stores have to be visited. The problem consists of two intertwined sub-problems, a stochastic inventory optimization problem for given delivery routes and delivery times (second stage), and a deterministic profitable tour problem for the a-priori route, determining the stores receiving a second resupply, the sequence of the stores, and the resupply timing. We provide lower and upper bounds on profits and propose a hybrid solution procedure. The stochastic inventory optimization problem is solved by stochastic dynamic programming, while the profitable tour problem is solved using variable neighborhood search. In order to provide an efficient solution method, the second stage profit is not always computed exactly but approximations and bounds are used whenever possible. The algorithm is tested on randomly generated test instances. Furthermore, a real-world scenario is investigated. The results show a considerable improvement compared to the "single-order" model.
    • Presentation, Eugene Khmelnitsky, Tel-Aviv University, Israel (December 15th 2015)
      • Title: Bucket Brigade with Stochastic Worker Pace
      • Abstract: Work-sharing in production systems is a modern approach that improves throughput rate. Work is shifted between cross-trained workers in order to better balance the material flow in the system. When a serial system is concerned, a common work-sharing approach is the Bucket-Brigade (BB), by which downstream workers sequentially take over items from adjacent upstream workers. When the workers are located from slowest-to-fastest and their speeds are deterministic, it is known that the line does not suffer from blockage or starvation, and achieves the maximal theoretical throughput rate. Very little is known in the literature on stochastic self-balancing systems with work-sharing, and on BB in particular. This paper studies a basic BB model of Bartholdi & Eisenstein (1996) under the assumption of stochastic speeds. We identify settings in which conclusions that emerge from deterministic analysis fail to hold when speeds are stochastic, in particular relating to worker order assignment. Significantly, in a stochastic environment the BB can improve the throughput rate compared to parallel workers, despite the fact that no blockage or starvation occurs in the latter. Joint work with Y. Bukchin and E. Hanany.
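The basic dynamics are easy to reproduce. The toy simulation below is only an illustrative sketch of the Bartholdi & Eisenstein hand-off recurrence for two workers on a line of normalized length 1, with instantaneous walk-back; the speed distributions are invented and the sketch makes no claim about the paper's results. With deterministic slowest-to-fastest speeds, throughput converges to the sum of the speeds, as the classical theory predicts; the same recurrence runs unchanged when each worker's speed is redrawn per item.

```python
import random

def throughput(cycles, speed1, speed2):
    """Normalized 2-worker bucket brigade (hand-off recurrence, instantaneous
    walk-back). speed1()/speed2() draw the workers' speeds for each cycle."""
    x, total_time = 0.0, 0.0          # x = current hand-off position in [0, 1]
    for _ in range(cycles):
        v1, v2 = speed1(), speed2()
        t = (1.0 - x) / v2            # time until worker 2 completes the item
        total_time += t
        x = min(v1 * t, 1.0)          # next hand-off; min() models blocking
    return cycles / total_time        # items per unit time

random.seed(7)
# Deterministic slowest-to-fastest: converges to throughput v1 + v2 = 3.
det = throughput(10_000, lambda: 1.0, lambda: 2.0)
# Same mean speeds, but redrawn per item (hypothetical uniform noise).
sto = throughput(10_000, lambda: random.uniform(0.5, 1.5),
                         lambda: random.uniform(1.5, 2.5))
print(round(det, 3), round(sto, 3))
```

In the deterministic run the hand-off position settles at v1/(v1+v2) and the line attains the maximal rate; under stochastic speeds the hand-off position keeps fluctuating, which is the regime the talk analyzes.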
    • Presentation, Pietro De Giovanni, ESSEC Business School, France (January 15th 2016)
      • Title: Sharing Contracts: Legend or Reality? A Triangulation Analysis to Discover the Truth
      • Abstract: While the effectiveness of sharing contracts has been widely lauded by the theoretical research, their real applicability and concrete advantages remain vague and ambiguous. Starting from a case study, we have developed a game theory model to replicate several scenarios in which the adoption of sharing contracts depends on the partnership life cycle. Interestingly, we are able to demonstrate that a sharing contract is a very useful mechanism to coordinate a supply chain. Then, we have used the related findings to develop an empirical analysis as well as a qualitative investigation. In the empirical analysis we have collected some survey data and run various Multinomial Logit Regression Models to statistically verify the suitability of sharing contracts. Similarly, we have conducted some comparative case studies within seven different sectors in the Netherlands to qualitatively check the applicability as well as the success of sharing contracts in real business. Both the empirical and the qualitative research reveal a clear finding: Sharing contracts are marginally used in practice while their real advantages are negligible. So, are they a legend or a reality?
    • Presentation, Guiomar Martín-Herrán, University of Valladolid, Spain  (February 19th 2016)
      • Title: Local and National Advertising Pulsing in a Marketing Channel
      • Abstract: Despite the fact that the use of sporadic advertising schedules is well established in both the advertising literature and the marketplace, the marketing channel literature that focuses on vertical interactions has consistently prescribed continuous advertising strategies over time. This paper investigates the optimal scheduling of local and national advertising over a planning horizon of three periods, in a bilateral monopoly in which a manufacturer and a retailer control their pricing and advertising decisions. We find that, consistent with the advertising literature, the integrated channel adopts pulsing to benefit from advertising's positive carryover effects. Conversely, when the channel is uncoordinated, three advertising schedules are identified as equilibria: the continuous schedule, in which channel members advertise in all three periods; the pulsing schedule, in which the two channel members advertise only in the first and third periods; and the mix schedule, in which the retailer advertises in all three periods while the manufacturer advertises only in the first and third periods. The conditions under which each schedule can be implemented critically depend on the long-term effects of local and national advertising. Among other results, pulsing is the optimal schedule when local advertising has very large negative or positive long-term effects.
    • Presentation, Wout Dullaert, VU University Amsterdam, Netherlands (March 4th 2016)
      • Title: Efficient Local Search for Routing Problems
      • Abstract: In the Vehicle Routing Problem with Multiple Time Windows (VRPMTW) a time window per customer has to be selected to minimize the total duration of the solution. This increases the complexity of the routing problem compared to the VRP with a single time window per customer. By determining the optimal selection of time windows the waiting time in a route can be reduced. We present an exact polynomial time algorithm to efficiently determine the optimal time window selection when a neighborhood operation is applied. This algorithm is embedded in a variable neighborhood tabu search metaheuristic to solve the VRPMTW and the results are compared to the best-known solutions of the VRPMTW instances from the literature. The second topic of the seminar discusses several strategies for a more efficient implementation of the concept of Static Move Descriptors (SMDs), a recently developed technique that drastically speeds up Local Search based algorithms. SMDs exploit the fact that each local search step affects only a small part of the solution and allow for efficient tracking of changes at each iteration, such that unnecessary reevaluations can be avoided. The concept is highly effective at reducing computation times and is sufficiently generic to be applied in any Local Search-based algorithm. Despite its significant advantages, the design proposed in the literature suffers from high overhead and high implementational complexity. Our proposals lead to a much leaner and simpler implementation that offers better extendibility and significant further speedups of local search algorithms. We compare implementations for the Capacitated Vehicle Routing Problem (CVRP) - a well-studied, complex problem that serves as a benchmark for a wide variety of optimization techniques.
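To make the time-window selection sub-problem concrete: for a fixed visiting sequence, choosing one window per customer to minimize route duration can be sketched as below. This is brute-force enumeration over window combinations on an invented three-customer instance (all data and the `route_duration` helper are hypothetical), whereas the talk presents an exact polynomial-time algorithm; the sketch only illustrates what is being optimized.

```python
from itertools import product

# Hypothetical instance: fixed sequence depot -> c1 -> c2 -> c3.
travel = [1, 3, 2]                 # travel time into each stop
service = [1, 1, 1]                # service time at each stop
windows = [[(0, 1), (8, 12)],      # two candidate windows per customer
           [(6, 7), (12, 16)],
           [(9, 9), (15, 20)]]

def route_duration(sel):
    """Minimum duration of the fixed sequence for one chosen window per stop.
    Service-start times move by at most 1 unit per unit of departure delay,
    so duration is nonincreasing in the departure time: depart as late as
    feasibility allows, then roll the schedule forward."""
    n = len(sel)
    # Backward pass: latest feasible service-start time at each stop.
    latest = [0] * n
    latest[-1] = sel[-1][1]
    for i in range(n - 2, -1, -1):
        latest[i] = min(sel[i][1], latest[i + 1] - travel[i + 1] - service[i])
    if any(latest[i] < sel[i][0] for i in range(n)):
        return None                # this window combination is infeasible
    depart = latest[0] - travel[0]
    # Forward pass from the latest departure: earliest schedule, waiting included.
    t = depart
    for i in range(n):
        t = max(t + travel[i], sel[i][0]) + service[i]
    return t - depart

best = min(d for sel in product(*windows)
           if (d := route_duration(sel)) is not None)
print(best)  # 9 on this toy instance
```

On this instance the early narrow windows force one unit of waiting (duration 10), while the late wide windows allow a waiting-free route of duration 9; the enumeration over `product(*windows)` is what the polynomial algorithm in the talk avoids.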
    • Presentation, Kalyan Talluri, Imperial College, United Kingdom (April 15th 2016)
      • Title: The Network Revenue Management Dynamic Program and Approximations
      • Abstract: We first survey the applications of network revenue management and its various formulations as a stochastic dynamic program. Given the difficulty of solving these dynamic programs, a number of good approximation algorithms have been proposed. We describe some of the theoretical results on approximation bounds as well as the complexity limits.
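As a minimal illustration of the kind of dynamic program being approximated, here is the classic single-leg model (not the network version discussed in the talk): in each period at most one request of a given fare class arrives, and a request is accepted exactly when its fare beats the marginal value of a seat. Fares, arrival probabilities, horizon and capacity below are invented for the example.

```python
# Single-leg revenue management DP (illustrative; all data are hypothetical).
fares = [400, 150]        # fare per class
probs = [0.3, 0.4]        # per-period request probabilities (sum <= 1)
T, C = 30, 10             # decision periods remaining, seat capacity

# V[t][x]: max expected revenue with t periods to go and x seats left.
V = [[0.0] * (C + 1) for _ in range(T + 1)]
for t in range(1, T + 1):
    for x in range(C + 1):
        v = V[t - 1][x]                       # value if nothing is sold now
        if x > 0:
            for f, p in zip(fares, probs):
                # Accept a class request iff its fare exceeds the marginal
                # (bid-price) value of one seat, V(t-1,x) - V(t-1,x-1).
                v += p * max(0.0, f - (V[t - 1][x] - V[t - 1][x - 1]))
        V[t][x] = v

print(round(V[T][C], 1))
```

The network problem replaces the scalar seat count by a vector of leg capacities, which is what makes the exact DP intractable and motivates the approximation algorithms the talk surveys.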
    • Presentation, Vinay Ramani, Indian Institute of Management, India (April 27th 2016)
      • Title: Product Cannibalization and the Effect of a Service Strategy
      • Abstract: Product cannibalization can push some consumers to shift their purchasing preferences from new to used products. This is a costly issue for manufacturers, who have to adjust their pricing strategies accordingly to mitigate the negative effect of cannibalization. In this paper, we characterize a channel to examine the effect of product cannibalization on firms' profits. In particular, we investigate how the presence of a Goodwill agency in a second-hand market impacts the business of a manufacturer in a new market through cannibalization, and how the manufacturer reacts to mitigate its effects. We show that even if the manufacturer adjusts its price to decrease the negative effects of cannibalization, this effect is so severe that it always loses some profits. Nevertheless, when the manufacturer provides some additional services to new consumers, the negative effects of cannibalization can be partially overcome, while the channel achieves a profit-Pareto-improving outcome.
    • Presentation, Bruno Viscolani, University of Padova, Italy (May 13th 2016)
      • Title: Age-structured Goodwill: from Optimal Control to Differential Games
      • Abstract: An age-dependent market segmentation is often suitable for real life products. In the first part of the talk, we introduce a simple age-structured model for the advertising process of a firm and the consequent goodwill evolution. The model's formal structure is characterized by a first order linear partial differential equation. We formulate the advertising problem for a new product introduction as a distributed parameter optimal control problem and we use the suitable Maximum Principle conditions to characterize the optimal advertising flow. In the second part of the talk we move to differential games. As one may expect, the necessary conditions for an open-loop Nash equilibrium in a distributed parameter differential
        game are hard to discuss. Therefore, we search for conditions on age-structured differential games to make their analysis more tractable. We focus on a class of games which show the features of ordinary linear-state differential games, and we prove that their open-loop Nash equilibria are subgame perfect. By means of a simple age-structured advertising problem, we provide an example of the theoretical results presented in the paper, and we show how to determine an open-loop Nash equilibrium. This is a joint work with Luca Grosset.

Research Seminar 2014/2015

    • Presentation, Aurelie Thiele, Lehigh University, USA (May 18th 2015)
      • Title: Robust Design of (American) Health Insurance Plans
      • Abstract: We investigate optimization models to design a menu of health insurance plans offered by a large (American) employer self-insuring through a private health exchange, in the presence of uncertainty in employees’ choice of plans, health status and health resource utilization. While the application of this talk is grounded in the American healthcare landscape, it incorporates risk management, choice models, robust optimization, insurance concepts and, more broadly, decision-making under high uncertainty and customer choice. Our goal is to optimize, subject to a budget constraint on the employer’s part, an employee-driven criterion such as the minimization of employees’ out-of-pocket costs or the maximization of the fairness of the design, with a focus on healthcare cost as a fraction of employee’s income. We analyze the employer’s cost/long-term risk trade-off and examine policy choices to promote employees’ health while maintaining premiums at a sustainable level. Of key interest is the allocation of the employer’s budget to various health care costs, such as prescription drugs or hospital stays, to achieve health objectives for the employee population without waste. We consider both traditional health plans and High-Deductible Health Plans (HDHPs), which give employees an incentive to be cost-conscious in incurring health expenses below the deductible – with the exception of preventive services required to be offered for free by the plan – but may also delay care.
        We provide a heterogeneous view of the employee population based on the type of plan employees select (from Health Maintenance Organizations with a local provider network and a requirement to have a referral to access the specialty-care network, to more expensive Preferred Provider Organizations that do not require referrals to make appointments with specialists) and plans’ actuarial value or metal level (bronze, silver, gold and platinum having actuarial values of 60%, 70%, 80% and 90%, respectively, which represents the share of healthcare costs the payer is expected to shoulder for a benchmark enrollee population). In addition, we discuss how to select the optimal number of plans on offer in the private health exchange. Joint work with Dimitris Bertsimas and Jerry Chen of MIT.
  • Presentation, Felix Papier, ESSEC Business School, France (April 13th 2015)
  • Title: How to Split the Pie ? Sequential Supply Allocation with Forecast Updates
  • Abstract: We study the problem of allocating supply under advance demand information (ADI). We consider a company that must allocate limited inventory to different markets that open sequentially. To reduce uncertainty, the company receives advance demand information and updates forecasts about its markets each time it makes an allocation decision. We study the value and optimal use of this information. Our research is motivated by an agri-food manufacturer that operates in several European countries. We develop the optimal policy under relaxed conditions and an efficient heuristic policy that performs close to optimal under general conditions. We derive structural properties of the model to gain managerial insights, and we derive the optimal policy in closed form for the case of markets with identical prices. We use numerical experiments to demonstrate that the value of ADI can be significant. The managerial insights of this study include the observations that in environments such as the one that motivated our research, early markets systematically receive less supply than late markets, and that the value of ADI is greatest when the initial supply is close to the initial forecasts.
  • Presentation, Francis De Vericourt, ESMT Berlin, Germany (March 2nd 2015)
  • Title: Financing Capacity Investment under Demand Uncertainty
  • Abstract: This paper studies the interplay between the operational and financial facets of capacity investment. We consider the capacity choice problem of a firm with limited liquidity and whose access to external capital markets is hampered by moral hazard. The firm must therefore not only calibrate its capacity investment and the corresponding funding needs, but also optimize its sourcing of funds. Importantly, the set of available sources of funds is derived endogenously and includes standard financial claims (debt, equity, etc.). We find that when higher demand realizations are more indicative of high effort, debt financing is optimal for any given capacity level. In this case, the optimal capacity is never below the efficient capacity level but sometimes strictly above that level. Further, the optimal capacity level increases with the moral hazard problem's severity and decreases with the firm's internal funds. This runs counter to the newsvendor logic and to the common intuition that by raising the cost of external capital and hence the unit capacity cost, financial market frictions should lower the optimal capacity level. We trace the value of increasing capacity beyond the efficient level to a bonus effect and a demand elicitation effect. Both stem from the risk of unmet demand, which is characteristic of capacity decisions under uncertainty. Joint work with Denis Gromb.
  • Presentation, Oualid Jouini, Ecole Centrale Paris, France (February 16th 2015)
  • Title: Call Centers with Delay Information: Models and Insights
  • Abstract: We analyze a call center with impatient customers. We study how informing customers about their anticipated delays affects performance. Customers react by balking upon hearing the delay announcement, and may subsequently renege, particularly if the realized waiting time exceeds the delay that was originally announced to them. The balking and reneging from such a system are a function of the delay announcement. Modeling the call center as an M/M/s+M queue with endogenized customer reactions to announcements, we analytically characterize performance measures for this model. The analysis allows us to explore the role that announcing different percentiles of the waiting time distribution, i.e., the announcement coverage, plays in subsequent performance in terms of balking and reneging. Through a numerical study we explore when informing customers about delays is beneficial, and what the optimal coverage should be in these announcements. It is shown how managers of a call center with delay announcements can control the tradeoff between balking and reneging through their choice of announcements to be made.
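As background on the queueing model named in this abstract, the stationary distribution of an M/M/s+M (Erlang-A) queue follows a simple birth-death recursion. The sketch below is our own illustration of that baseline model only; it does not implement the talk's announcement-dependent balking and reneging, and the function name is ours.

```python
def erlang_a_measures(lam, mu, s, theta, n_max=500):
    """Stationary delay and abandonment probabilities of an M/M/s+M
    (Erlang-A) queue, computed from the birth-death recursion and
    truncated at n_max customers in the system."""
    p = [1.0]  # unnormalized stationary probabilities p[n]
    for n in range(1, n_max + 1):
        # Service completions, plus reneging when a queue is present,
        # drive departures in state n.
        rate_out = n * mu if n <= s else s * mu + (n - s) * theta
        p.append(p[-1] * lam / rate_out)
    total = sum(p)
    p = [x / total for x in p]
    prob_wait = sum(p[s:])  # by PASTA, fraction of arrivals finding all s agents busy
    abandon_rate = sum((n - s) * theta * p[n] for n in range(s + 1, n_max + 1))
    prob_abandon = abandon_rate / lam  # fraction of arrivals who renege
    return prob_wait, prob_abandon
```

For example, with arrival rate 10, unit service rate, 12 agents and patience rate 1, the recursion yields the fraction of customers who are delayed and the (smaller) fraction who renege.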
  • Presentation, Mohammed Abdellaoui, HEC-Paris/CNRS, France (December 15th 2014)
  • Title: Recursive Rank-Dependent Utility for Ambiguity
  • Abstract: This paper proposes an ambiguity model that accounts for Ellsberg- and Allais-type behavior in the famous Anscombe and Aumann framework. Ambiguity attitudes are captured through both utility (as in recursive expected utility) and non-additive probabilities (as in Choquet expected utility), hence combining 'smooth' and 'kinked' approaches to ambiguity. The model is based on a single preference principle called substitution consistency. Due to a natural embedding of backward induction, substitution consistency allows for one-stroke representations of preferences including Schmeidler's Choquet expected utility, recursive expected utility and subjective expected utility as particular cases, without referring to mixtures of lotteries. In addition to the provision of a unified setup in which different ambiguity models can jointly be analyzed and compared, substitution consistency simplifies the formal study of relative concavity of utility for risk and ambiguity without committing to recursive expected utility. We also show how our general recursive model can facilitate the descriptive study of ambiguity attitudes. Joint work with Horst Zank (University of Manchester).
  • Presentation, Gila E. Fruchter, Bar-Ilan University, Israel (November 24th 2014)
  • Title: Problem-Driven Theory with Theory-Driven Solutions
  • Abstract: In my research in marketing I seek to develop problem-driven theory and to find theory-driven solutions. In finding such solutions I draw on my long-standing experience with optimal control. Here I take a closer look at two of my recent publications to give a taste of my work: we start with dynamic pricing for subscription services and continue with production location decisions of brands.
  • Presentation, Georges Zaccour, HEC Montréal, Canada (November 3rd 2014)
  • Title: Optional-Contingent-Products Pricing in Supply Chains
  • Abstract: This paper studies the pricing strategies of firms belonging to a vertical channel structure where optional contingent products are sold. Optional contingent products are characterized by unilateral demand interdependencies. That is, the base product can be used independently of a contingent product. On the other hand, the contingent product's purchase is conditional on the possession of the base product. We find that the retailer decreases the price of the base product to stimulate demand on the contingent-product market. Even a loss-leader strategy could be optimal, which happens when reducing the base product's price has a large positive effect on demand of this base product, and thus on the number of potential consumers of the contingent product. The price reduction of the base product also mitigates the double-marginalization problem, which is well known in a supply-chain setting with one manufacturer and one retailer, in a large part of the parameter space. Joint work with P.M. Kort (Tilburg University), and S. Taboubi (GERAD, HEC Montréal).
  • Presentation, Steffen Jorgensen, University of Southern Denmark, Denmark (October 6th 2014)
  • Title: Recent Developments in Lanchester Differential Games
  • Abstract: Lanchester games have their origin in the work of F.W. Lanchester (F.W. Lanchester: Aircraft in Warfare: The Dawn of the Fourth Arm. Constable, London, UK, 1916) who used ordinary differential equations to study stylized problems of military combat. Later on, one of Lanchester’s models has been used - quite successfully - to describe the effects of advertising competition in oligopolistic markets. A stream of literature in the field of advertising has used this model as a component of a dynamic game. I introduce a couple of Lanchester’s models and a simple dynamic advertising game model. The advertising game has been extended in various directions, for example, to take into account market growth/contraction and multiple types of advertising efforts. Such extensions are discussed. I present two examples of my own current research in the area and conclude by suggesting some avenues for future research.
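For readers unfamiliar with Lanchester's combat models, his 'square law' is the pair of coupled ODEs dx/dt = -beta*y, dy/dt = -alpha*x, whose trajectories conserve the quantity alpha*x^2 - beta*y^2. The sketch below is our own minimal forward-Euler illustration of that law, not the speaker's advertising-game model.

```python
def lanchester_square(x0, y0, alpha, beta, dt=1e-4, t_end=1.0):
    """Forward-Euler integration of Lanchester's square law
        dx/dt = -beta * y,    dy/dt = -alpha * x,
    stopping when time runs out or one force is wiped out."""
    x, y, t = x0, y0, 0.0
    while t < t_end and x > 0.0 and y > 0.0:
        # Both updates use the state at the start of the step.
        x, y = x - beta * y * dt, y - alpha * x * dt
        t += dt
    return x, y
```

Along the numerical trajectory, alpha*x^2 - beta*y^2 stays (approximately) constant, which is exactly the square-law prediction that effectiveness scales with the square of force size.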
  • Presentation, Alain Haurie, HEC-Genève, Switzerland (canceled)

  • PhD Thesis Defense of affiliated cluster member Cerasela Tanasescu, ESSEC Business School (November 5th 2014, 10:00 am, room N517, presentation in French)

  • Research Seminar 2013/2014

    • Presentation, Sourour Elloumi, CEDRIC, CNAM Paris, France (19 May 2014)
    • Title: Facility location and network design: the p-median problem and the notion of good formulation in discrete mathematical optimization
    • Abstract: We consider the p-median problem where one has to open a given number p of facilities, and assign customers to their closest open facilities in such a way that the total distance between customers and facilities is as small as possible. We show some applications of this fundamental problem in discrete location theory. We then present several formulations of the p-median problem by mixed integer linear programming. We compare these formulations from different aspects. Our objective will be to illustrate how the choice of a formulation may have an important influence on the running time needed to compute an optimal solution.
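To make the p-median objective concrete before any MILP formulation, very small instances can be solved by brute-force enumeration. The function name and toy instance below are ours; real instances require the formulations discussed in the talk.

```python
from itertools import combinations

def p_median(dist, p):
    """Brute-force p-median: open p of the candidate sites (columns of
    `dist`) so that the total distance of every customer (row of `dist`)
    to its closest open site is minimized.  Viable only for tiny
    instances, since it enumerates all C(m, p) site subsets."""
    m = len(dist[0])
    best_cost, best_sites = float("inf"), None
    for sites in combinations(range(m), p):
        # Each customer is assigned to its closest open site.
        cost = sum(min(row[j] for j in sites) for row in dist)
        if cost < best_cost:
            best_cost, best_sites = cost, sites
    return best_cost, best_sites
```

For a 3-customer, 3-site instance with distances [[0, 4, 7], [4, 0, 3], [7, 3, 0]], opening one facility at the middle site is optimal, illustrating how the objective trades off customer-to-facility distances.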
    • Presentation, Virginie Gabrel, LAMSADE, University of Paris-Dauphine, France (5 May 2014)
    • Title: Robust Portfolio Optimization with the pw-Robustness Criterion
    • Abstract: We consider the problem of portfolio optimization with uncertainty on asset returns. In the context of a scenario-based approach, we want to determine a robust portfolio achieving an acceptable compromise between the expected returns and the risk of making losses. Many approaches have been suggested, all based on the formulation of two criteria: one for expected return and the other for measuring risk. Some approaches propose to determine the set of Pareto-optimal solutions, while others consist in optimizing one criterion and transforming the second into a constraint. Within this latter approach, we propose a new criterion (inducing a new optimization model), called the pw-robustness criterion: the parameter w manages the risk while the parameter p handles the maximization of expected return. This criterion generalizes the classical risk measure Value-at-Risk and can be compared to the Conditional Value-at-Risk measure regarding the worst cases. Joint work with Cécile Murat.
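As background on the two risk measures named in the abstract, both Value-at-Risk and Conditional Value-at-Risk have simple empirical versions over a list of equally likely scenario losses. This is a simplified textbook sketch with our own naming; it does not implement the talk's pw-robustness criterion itself.

```python
import math

def value_at_risk(losses, alpha=0.95):
    """Empirical VaR: the smallest loss level that at least a fraction
    alpha of the equally likely scenarios do not exceed."""
    ordered = sorted(losses)
    k = math.ceil(alpha * len(ordered)) - 1
    return ordered[k]

def conditional_value_at_risk(losses, alpha=0.95):
    """Simple empirical CVaR: the average loss over the worst scenarios,
    from the empirical VaR upwards (a common textbook approximation)."""
    ordered = sorted(losses)
    k = math.ceil(alpha * len(ordered)) - 1
    tail = ordered[k:]
    return sum(tail) / len(tail)
```

On 100 scenarios with losses 1 through 100, the 95% VaR is the 95th-smallest loss, while the CVaR averages the remaining tail, which is why CVaR is the more conservative measure "regarding the worst cases".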
  • Presentation, Ulrich Thonemann, University of Cologne, Germany (31 March 2014)
    • Title: Designing Incentive Systems for Truthful Information Sharing in Supply Chains
    • Abstract: We consider a firm where sales is responsible for demand forecasting and operations is responsible for ordering. Sales has better information about the demand than operations and sends a non-binding demand forecast to operations. Based on the demand forecast, operations determines the order quantity. To incentivize truthful demand information sharing, we include a penalty for forecast errors in the incentive system of sales. In the utility function of sales, we also include the behavioral factors of lying aversion and loss aversion. We model the setting as a signaling game and derive equilibria of the game. In a laboratory experiment, we observe human behavior that is in line with the model predictions, but deviates substantially from expected payoff maximizing behavior. Finally, we use the behavioral model to design incentive systems for truthful information sharing and conduct an experiment to validate the approach with out-of-sample treatments and out-of-sample subjects.

  • Presentation, Victor Martinez de Albéniz Margalef, IESE Business School, Spain (17 February 2014)
    • Title: A Closed-Loop Approach to Dynamic Assortment Planning
    • Abstract: Firms are constantly trying to keep the customers interested by refreshing their assortments. In industries such as fashion retailing, products are becoming short-lived and, without product introductions or in-store novelties, category sales quickly decrease. We model these dynamics by assuming that products lose their attractiveness over time and we let the firm enhance the assortment at a cost, for single or multiple categories. We characterize the optimal closed-loop policy that maximizes firm profits. When adjustment costs are linear in the attractiveness, we find that an assort-up-to policy is best: it is optimal to increase category attractiveness to a target level, which is independent of the current attractiveness. Furthermore, we develop heuristics to quickly determine good assort-up-to levels.  Finally, we show that a closed-loop approach is valuable compared to open-loop strategies, especially when there is significant uncertainty about the decay rate of products. Joint work with Esra Çinar.
  • Presentation, Fouad El Ouardighi, ESSEC Business School, France (10 February 2014)
    • Title: Operations and Marketing Strategies under Wholesale Price and Revenue Sharing Contracts in a Dynamic Supply Chain
    • Abstract: The objective of the paper is to study how wholesale price and revenue sharing contracts affect operations and marketing strategies in a supply chain under different dynamic informational structures. We suggest a differential game model of a stylized supply chain consisting of a manufacturer and a single retailer. The focus is on the production and sales of a single product. The model includes key operational and marketing activities in the supply chain. The manufacturer decides a production rate and the rate of national advertising efforts while the retailer chooses a purchase rate and the consumer price. Depending on whether information on the current state of key operational and marketing variables of the supply chain is available or not, firms may either make decisions contingent on the current state of the game (feedback Nash equilibrium strategy), or commit to a predetermined plan of action for the whole game (open-loop Nash equilibrium strategy). The state of the game is summarized in the firms’ backlogs and the manufacturer’s advertising goodwill. A main result suggests that double marginalization can be better mitigated if the supply chain members adopt a feedback Nash equilibrium strategy under a wholesale price contract and an open-loop Nash equilibrium strategy under a revenue-sharing contract. Work co-authored with Gary Erickson, Dieter Grass, and Steffen Jorgensen.
  • Presentation, Konstantin Kogan, Bar-Ilan University, Israel (2 December 2013) 
    • Title: A generalized Two-Agent Location Problem: Asymmetric Dynamics and Coordination
    • Abstract: We generalize a static two-agent location problem into dynamic, asymmetric settings. The dynamics are due to the ability of the agents to move at limited speeds. Since each agent has its own objective (demand) function and these functions are interdependent, decisions made by each agent may affect the performance of the other agent and thus the overall performance of the system. We show that under a broad range of system parameters, centralized (system-wide optimal) and non-cooperative (Nash) behaviors of the agents are characterized by a similar structure. The timing of these trajectories and the intermediate speeds are, however, different. Moreover, non-cooperative agents travel more and may never rest, and thus the system performance deteriorates under decentralized decision-making. We show that a static linear reward approach recently developed in Golany and Rothblum (2006) can be generalized to provide coordination of the moving agents, and we suggest its dynamic modification. When the reward scheme is applied, the agents are induced to choose the system-wide optimal solution even though they operate in a decentralized decision-making mode.

    • Presentation, Gustav Feichtinger, Vienna University of Technology, Austria (4 November 2013)
      • Title: Dynamics and Control of Deviant Behavior
      • Abstract: Two essential aspects of deviant behavior are the time aspect and social interactions. The consumption of illicit drugs, the spread of corruption and the incidence of violence exhibit epidemic structure. Thus, mathematical models describing deviant behavior are intertemporal and non-linear. We illustrate how the efficient control of such behavior can be analysed by using dynamic optimisation models. Typically, one gets multiple equilibria and tipping behavior (history-dependence) of optimal solutions. In particular, it is shown how the optimal mix of various control instruments evolves over time. Homogeneous socio-economic agents are often an unrealistic simplification. A first step towards including heterogeneity is the distinction between light and heavy deviance. It is the escalation from light to heavy deviance that has to be controlled efficiently. Finally, we discuss age-specific extensions as well as dynamic game issues in the economics of crime.

      • Presentation, Yael Perlman, Bar-Ilan University, Israel (9 September 2013)
      • Title: Reducing shoplifting by investment in security
      • Abstract: We consider a single retailer with a number of potential customers, who sells a product that is subject to shoplifting. In order to decrease losses due to shoplifting and to maximize his profit, the retailer can invest in security measures. In particular, we assume that the retailer hires the security services of a single security supplier. While the retailer decides how many security services to buy, the security supplier decides which price to charge the retailer for these services, with the purpose of maximizing his own profit. We address this problem using a game theoretic approach, where the retailer competes with the supplier—the leader—who specifies first the service price. The retailer responds by deciding how much to invest in security. We study the conditions under which both players are profitable and the extent to which the double marginalization affects the supply chain performance.

  • Research Seminar 2012/2013
    • Presentation, Raik Stolletz, University of Mannheim (3 June 2013)
      • Title: Designing Lean Manufacturing Systems: Stationary and Dynamic Buffer Allocation in Flow Lines
      • Abstract: KANBAN-controlled production systems are often installed when the effective processing times of machines in production systems are stochastic. These stochastic influences are due to machine breakdowns, uncertain repair times, and random processing times. The optimal allocation of KANBAN cards (buffer spaces) throughout a line guarantees a certain average throughput while minimizing the required buffer space. Besides this uncertainty, such flow lines often operate under time-dependent influences, caused by learning effects during the ramp-up phase, changing station capacities, or time-dependent demand patterns. We discuss different decision models for the static and dynamic buffer allocation in stochastic flow lines. Sampling approaches for the analytical performance evaluation are proposed; they are based on large mixed-integer decision models in discrete and continuous time. Efficient solution approaches for the respective optimization models of the buffer allocation are presented. The numerical study demonstrates the accuracy of the proposed approaches.
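The sampling idea behind such performance evaluation can be illustrated on a saturated two-station line: given one sample of service times, departure epochs follow a Lindley-type recursion, and throughput is read off the last departure. The sketch below is our own illustration, assuming one common blocking-after-service convention; it is not the decision models of the talk.

```python
def throughput(service1, service2, buffer):
    """Sample-path throughput of a saturated two-station flow line with
    `buffer` spaces between the stations, under blocking-after-service:
    station 1 cannot release job i until job i - buffer - 1 has left
    station 2."""
    n = len(service1)
    d1 = [0.0] * (n + 1)  # release epochs from station 1 (index 0 is time 0)
    d2 = [0.0] * (n + 1)  # departure epochs from station 2
    for i in range(1, n + 1):
        # Buffer space for job i frees up when job i - buffer - 1 departs.
        freed = d2[i - buffer - 1] if i - buffer - 1 >= 1 else 0.0
        d1[i] = max(d1[i - 1] + service1[i - 1], freed)
        d2[i] = max(d1[i], d2[i - 1]) + service2[i - 1]
    return n / d2[n]
```

On the same sample of service times, enlarging the buffer relaxes the blocking term in the recursion, so throughput is non-decreasing in the buffer size, which is the trade-off the buffer-allocation models optimize.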
    • Presentation, Laurent Alfandari, ESSEC Business School (18 March 2013)
      • Title: A Column-Generation Based Method for Optimal Electricity Production and Maintenance Planning
      • Abstract: This talk presents a heuristic method based on column generation for the EDF (Electricité De France) long-term electricity production planning problem proposed as the subject of the ROADEF/EURO 2010 Challenge. To our knowledge, this is the first-ranked method among those based on mathematical programming; it was ranked fourth overall. The problem consists in determining a production plan over the time horizon for each thermal power plant of the French electricity company and, for nuclear plants, a schedule of plant outages, which are necessary for refueling and maintenance operations. The average cost of the overall outage and production planning, computed over a set of demand scenarios, is to be minimized, so as to find a robust solution. The method proceeds in two stages. In the first stage, dates for outages are fixed once and for all for each nuclear plant. Data are aggregated with a single average scenario and reduced time steps, and a set-partitioning reformulation is solved to fix outage dates with a heuristic based on column generation.
        The pricing problem associated with each nuclear plant is a shortest path problem in a specific graph. In the second stage, the reload level is determined at each date of an outage, considering all scenarios. Finally, the production quantities between two outages are optimized for each plant and each scenario by solving independent linear programs. The efficiency of the approach is demonstrated by numerical results.
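To illustrate the kind of pricing subproblem mentioned above, a shortest path with non-negative arc costs can be solved with Dijkstra's algorithm. The graph below is a hypothetical toy instance of our own, not the Challenge's outage graph.

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest-path cost from source to target, with non-negative arc
    costs; `graph` maps each node to a list of (neighbor, cost) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == target:
            return d
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")  # target unreachable
```

In a column-generation scheme, such a subproblem is re-solved at every iteration with arc costs updated from the master problem's dual values, each solution yielding a candidate column.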
    • Presentation, Dominique de Werra, EPFL Lausanne (22 April 2013)
      • Title: Grouping Processors in Open Shop Scheduling
      • Abstract: In the basic classical open shop model we have processors and jobs to be processed on the processors according to some requirements. We shall concentrate on the case in which some processors have to be grouped (becoming so multiprocessors) in order to perform some of the tasks required by the jobs. Such a model is motivated by applications in timetabling as well as in testing of electronic systems in particular. We shall describe these applications and review the basic results related to the simple cases of ordinary open shop. We will examine the implications of processor grouping on the form of optimal schedules. Complexity issues will be discussed for the situations with some grouping of processors. The presentation is based on joint work with W. Kubiak and T. Kis.
    • Presentation, Matthew Myers, University of Tennessee at Knoxville (25 February 2013)
      • Title: The Value of Collaborative Knowledge Sharing in Global Supply Chains
      • Abstract: Research in collaborative inter-organizational relationships has typically focused on the value of these relationships to a specific supply chain partner. Furthermore, the literature has largely ignored the potential benefits of knowledge sharing between independent buyers and suppliers in a global setting. In this study, we investigate the influence of inter-firm knowledge sharing on the performance of both the buyer and the supplier within global dyads, testing the contention that both members benefit from knowledge sharing efforts, and both enjoy equal pieces of the benefits pie. Using primary data from 132 cross-national dyads representing relationships in Asia-Pacific, Europe, Latin America, and the United States, we investigate three specific types of knowledge sharing and their influence on firm performance.

  • Workshop on G-GRAPHS defined from groups, organized by ESSEC Business School (12 February 2013).


To register for an event, please send an e-mail to the organizers. Everyone is welcome, from within ESSEC as well as from outside.