Eduardo MORENO - Universidad Adolfo Ibanez, Chile
June 13th, 4.15pm - 5.30pm / Room LearningLab (KLAB)
Abstract: Decomposition methods for optimization problems are crucial for solving large-scale instances, especially those arising from real-world applications. Two of the best-known examples of decomposition methods are the Dantzig-Wolfe decomposition (column generation) and the Benders decomposition (row generation), broadly used in many applications. In this talk, we present the Bienstock-Zuckerberg (BZ) algorithm, which can be seen as a different way of doing column generation (correspondingly, an alternative way of generating Benders cuts). This method maintains an orthogonal basis as the column set and produces linear combinations of its elements. We present two families of applications where this method has been very successful. The first is precedence-constrained problems (e.g., precedence-constrained knapsack or production scheduling), notably when the number of tasks to include/schedule and the precedences among them are very large, for example in the mining industry. The second is two-stage stochastic problems with fixed recourse, such as most stochastic network flow problems, especially when the number of scenarios in the uncertainty set is very large.
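As a rough illustration of the idea of keeping the column pool orthogonal, here is a toy sketch (made-up vectors and a plain Gram-Schmidt filter; this is not the BZ algorithm's actual master-problem machinery): a candidate column is kept only if it contributes a genuinely new direction.

```python
import numpy as np

def add_orthogonalized_column(pool, candidate, tol=1e-10):
    """Orthogonalize `candidate` against the columns already in `pool`
    (modified Gram-Schmidt) and append it only if a nonzero component
    remains. `pool` holds unit-norm 1-D arrays."""
    v = candidate.astype(float).copy()
    for q in pool:
        v -= (q @ v) * q          # remove the component along q
    norm = np.linalg.norm(v)
    if norm > tol:                # candidate adds a new direction
        pool.append(v / norm)
        return True
    return False                  # candidate is spanned by the pool

# toy pricing loop: propose columns, keep only new directions
pool = []
for cand in [np.array([1., 0., 0.]),
             np.array([1., 1., 0.]),   # overlaps the first column
             np.array([2., 2., 0.])]:  # fully spanned -> rejected
    add_orthogonalized_column(pool, cand)
```

Keeping the pool orthogonal is what keeps the restricted master problem small and well-conditioned as columns accumulate.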
Michel BIERLAIRE - EPFL, SWITZERLAND
May 16th, 12pm - 1.15pm / Room F134 (Galion Building, 1st floor)
Abstract: Mathematical modeling of choice behavior is rooted in microeconomic theory, and relies on the assumption that economic actors behave rationally. However, there is empirical evidence that the rationality assumption does not always hold. In this talk, we will present some of this evidence and describe how to develop behavioral models that can account for this apparent irrationality. We will then consider a formulation of these models that is designed to be integrated into optimization models.
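As background, the workhorse rational-choice model in this literature is the random-utility (logit) model; in standard notation, decision-maker $n$ facing choice set $C_n$ chooses the alternative with the highest utility:

```latex
U_{in} = V_{in} + \varepsilon_{in}, \qquad
P_n(i \mid C_n) = \frac{e^{V_{in}}}{\sum_{j \in C_n} e^{V_{jn}}},
```

where $V_{in}$ is the systematic utility and $\varepsilon_{in}$ an i.i.d. extreme-value error term. The behavioral models discussed in the talk relax the rationality assumptions built into this baseline.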
Bilal GOKPINAR - UCL School of Management, United Kingdom
April 26th, 12pm - 1.15pm / Room F135
Abstract: This paper examines the impact of robot use on manufacturing firms’ innovation performance. The analysis uses a rich panel dataset of Spanish manufacturing firms over 27 years (1990-2016). Our findings suggest that robot use has a negative effect on firms’ process innovation; we do not observe a similar effect on firms’ product innovation. We also explore possible mechanisms by which robot use may affect process innovation. Our analysis reveals that the negative effect of robot use on process innovation is salient only in complex manufacturing industries, rather than in light or heavy manufacturing. In addition, we find that the negative effects of robot use on process innovation are smaller for older firms. Our results point to a possible mechanism whereby robots impede process innovation by reducing human involvement in the manufacturing process. Our findings highlight possible disadvantages of robots in manufacturing, a notion neglected in the prior literature.
Emma FREJINGER - Université de Montréal (Canada)
April 20th, 12pm - 1.15pm / Room F137
Abstract: Predictions play a central role when solving various types of optimization problems. Yet prediction tasks and optimization problems are typically studied separately. In this talk we describe settings where integrating prediction and optimization may be of high value. We discuss related challenges for both machine learning and optimization, using transport applications as illustrations. The objective of the talk is threefold: (i) provide background on some of the literature situated at the intersection of machine learning and operations research, (ii) outline a few open research topics, and (iii) present some of our research results at a high level.
Dolores ROMERO MORALES - Copenhagen Business School (Denmark)
March 23rd, 12pm - 1.15pm / Room LearningLab
Abstract: Traditional benchmarking based on simple key performance indicators is widely used and easy to understand. Unfortunately, such indicators cannot fully capture the complex relationship between multiple inputs and outputs in most firms. Data Envelopment Analysis (DEA) offers an attractive alternative. It builds an activity analysis model of best practices considering the multiple inputs used and products and services produced. This allows more substantial evaluations and also offers a framework that can support many other operational, tactical and strategic planning efforts.
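For readers unfamiliar with DEA, the standard input-oriented CCR envelopment model scores each decision-making unit (DMU) by one small linear program: shrink its inputs by a factor theta while a convex-cone combination of all DMUs still dominates it. A minimal sketch using scipy (toy data and variable names ours):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR (constant returns to scale) efficiency of DMU j0.
    X: (m, n) input matrix, Y: (s, n) output matrix, columns are DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, j0],
                             Y @ lam >= Y[:, j0],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                       # decision vars: [theta, lam_1..lam_n]
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]          # input rows:  X lam - theta x0 <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                # output rows: -Y lam <= -y0
    b_ub[m:] = -Y[:, j0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.x[0]

X = np.array([[2.0, 4.0]])   # one input, two DMUs
Y = np.array([[2.0, 2.0]])   # one output
e0 = ccr_efficiency(X, Y, 0)  # efficient DMU -> score 1.0
e1 = ccr_efficiency(X, Y, 1)  # uses twice the input for the same output -> 0.5
```

This is the textbook single-LP building block; the richer benchmarking frameworks discussed in the talk build on such models.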
Suresh P. SETHI - Naveen Jindal School of Management, University of Texas (USA)
March 13th, 12pm - 1.15pm / Room F026 (Galion Building, ground floor)
Abstract: We consider two applications in dynamic stochastic supply chains. The first is a decentralized two-period supply chain in which a manufacturer produces a product with the benefit of cost learning and sells it through a retailer facing price-dependent demand. The manufacturer’s second-period production cost declines linearly in the first-period production, but with a random learning rate. The manufacturer may or may not have the inventory carryover option. We formulate the resulting problems as two-period Stackelberg games and obtain their feedback equilibrium solutions explicitly. We then examine the impact of the mean learning rate and of learning rate variability on the pricing strategies of the channel members, on the manufacturer’s production decisions, and on the retailer’s procurement decisions. We show that as the mean learning rate or the learning rate variability increases, the traditional double marginalization problem becomes more severe, leading to greater efficiency loss in the channel. We obtain revenue-sharing contracts that can coordinate the dynamic supply chain. The second application studies a novel manufacturer-retailer cooperative advertising game in which, in addition to the traditional setup where the manufacturer subsidizes the retailer's advertising effort, we also allow reverse support from the retailer to the manufacturer. This is modeled as a mixed leadership game in which one player is a leader on some decisions and a follower on others. We find an equilibrium that can be expressed as the solution of a set of algebraic equations. We then conduct an extensive numerical study to assess the impact of the model parameters on the equilibrium.
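In symbols (our notation, for illustration only): if $q_1$ denotes first-period production and $c_1$ the first-period unit cost, the cost-learning dynamic described in the first application reads

```latex
c_2 = c_1 - \Lambda\, q_1, \qquad
\mathbb{E}[\Lambda] = \mu_\Lambda, \quad
\operatorname{Var}(\Lambda) = \sigma_\Lambda^2,
```

where the learning rate $\Lambda \ge 0$ is random; the comparative statics in the talk concern how the equilibrium changes as $\mu_\Lambda$ or $\sigma_\Lambda^2$ grows.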
Suresh P. Sethi is Eugene McDermott Chair Professor of Operations Management and Director of the Center for Intelligent Supply Networks (C4ISN) at The University of Texas at Dallas. He has contributed significantly to manufacturing and operations management, finance and economics, marketing, industrial engineering, operations research, and optimal control. He is well known for his development of the Sethi advertising model, Sethi-Skiba points, and his textbook on optimal control.
Mathieu DAHAN - Georgia Institute of Technology (USA)
December 14th, 12pm - 1.15pm / F124, Campus Cergy
Abstract: In this work, we propose a prescriptive approach for designing a middle mile consolidation network that aims to maximize the profit of large e-commerce retailers. We embed lead-time-dependent sales volume predictions into a new mixed-integer program (MIP) that determines shipment lead times and consolidation plans to maximize sales revenue net of logistics costs. The MIP extends traditional flat network designs to capture waiting delays between load dispatches and to ensure that shipment lead-time requirements are satisfied with a desired probability. We propose a solution approach that uses an IP-based local search and dynamic lead-time selection to find excellent consolidation plans for large, practical-scale instances. Computational experiments using data from a large U.S.-based e-commerce partner specializing in large and bulky items demonstrate the significant impact of flexible lead-time selection on the structure of the consolidation network designs and their concomitant operating costs.
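The probabilistic lead-time requirement mentioned above is a chance constraint; in generic notation (ours, not the paper's):

```latex
\mathbb{P}\!\left( T_o \le \ell_o \right) \ge 1 - \varepsilon_o
\quad \text{for each order class } o,
```

where $T_o$ is the realized origin-to-destination transit time under the chosen consolidation plan, $\ell_o$ the promised lead time, and $\varepsilon_o$ the tolerated violation probability.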
This work is joint with Lacy Greening, Jisoo Park, Alan Erera, and Benoit Montreuil.
Martin Schmidt - Trier University (Germany)
November 17th, 12pm - 1.15pm / F022, Galion Building, Campus Cergy
Abstract: The minimum sum-of-squares clustering (MSSC) problem is a very important problem in data mining and (unsupervised) machine learning, with many applications in, e.g., medicine or the social sciences. However, it is known to be NP-hard in all relevant cases and is notoriously hard to solve to global optimality in practice. Moreover, in many modern applications the clustering suffers from unstructured measurement errors, because the MSSC result then represents a clustering of the erroneous measurements instead of the true but unknown underlying data.
In this talk, we discuss both issues. First, we present different tailored mixed-integer programming techniques to improve the performance of state-of-the-art MINLP solvers when applied to the problem, among them cutting planes, propagation techniques, branching rules, and primal heuristics. Our numerical study shows that these techniques significantly improve the performance of the open-source MINLP solver SCIP. Second, we tackle the other issue by applying techniques from robust optimization to hedge the clustering result against unstructured errors in the observed data. To this end, we derive strictly robust and Gamma-robust counterparts. Since the nominal problem is already NP-hard, global approaches are often not feasible in practice. As a remedy, we develop tailored alternating direction methods that decompose the search space of the robustified problems to quickly obtain feasible points of good quality. Our numerical results reveal an interesting feature: the less conservative Gamma-approach is clearly outperformed by the strictly robust clustering method. In particular, the strictly robustified clustering method is able to recover clusterings of the original data even when only erroneous measurements are observed.
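To fix ideas on what an alternating scheme for MSSC looks like, here is the classical Lloyd alternation for the nominal problem (not the authors' robustified decomposition): repeatedly alternate an assignment step and a center step, each solving its subproblem exactly.

```python
import numpy as np

def lloyd_mssc(X, k, iters=50, seed=0):
    """Plain Lloyd alternation for nominal MSSC.
    Assignment step: assign each point to its nearest center.
    Center step: move each center to the mean of its points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                       # assignment step
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break                                  # fixed point reached
        centers = new                              # center step
    sse = ((X - centers[labels]) ** 2).sum()       # sum of squared errors
    return labels, centers, sse

# two well-separated toy clusters
X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
labels, centers, sse = lloyd_mssc(X, k=2)
```

Each step decreases the objective, so the alternation converges, but only to a local optimum; this is exactly why fast alternating heuristics and exact global MINLP approaches are complementary.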
Yanlu ZHOU - ESSEC Business School (France)
September 2nd, 12pm - 1pm / room to be announced - Campus Cergy
in the food processing times. Traditional methods in this field include greedy (online matching) and batching (matching in periodic intervals) policies. We introduce a new level-k policy that delays the assignment until no more than k matching options remain, and we show numerically that this policy performs exceedingly well on real data. Numerical experiments based on a real dataset from Meituan show that our policy can reduce total costs by 42% compared to the matching policies currently in use. To explain the advantage of the level-k policy, we evaluate its performance on a simplified model and derive several analytical properties: the total costs are quasiconvex in the level of market thickness, implying that an intermediate level of thickness is optimal. We also find that the food processing time is key information for the matching decisions. Finally, our findings indicate that if the unit delay penalty cost is high, the platform's decision-maker should match conservatively.
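A toy sketch of the level-k mechanism described above (our own simplification, not the paper's model: one order, a fixed list of candidate matches that expire over time, made-up cost and expiry numbers):

```python
def level_k_assign(options, k):
    """Toy level-k rule: keep waiting while more than k candidate
    matches remain; once at most k are left, take the cheapest.
    `options` is a list of (cost, expiry_time) pairs; each step the
    clock advances by one and expired options drop out."""
    t = 0
    alive = list(options)
    while len(alive) > k:
        t += 1
        alive = [(c, e) for (c, e) in alive if e > t]
    if not alive:
        return None          # waited too long: no match left
    return min(alive)[0]     # cheapest surviving option

# (cost, expiry) candidates for a single order
opts = [(5, 3), (2, 1), (4, 10)]
# with k >= 3 the rule never waits and behaves like greedy;
# with k = 1 it waits until one option remains before committing
greedy_cost = level_k_assign(opts, 3)
level1_cost = level_k_assign(opts, 1)
```

The sketch only illustrates the mechanism (delaying commitment trades off option value against expiry risk); the cost advantage reported in the abstract comes from the full model and real data, not from this toy.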