2022-2023
Eduardo MORENO - Universidad Adolfo Ibáñez (Chile)
June 13th, 4.15pm - 5.30pm / Room LearningLab (KLAB)
BZ Algorithm: a novel column-generation approach and its applications to scheduling and stochastic optimization
Abstract: Decomposition methods for optimization problems are crucial for solving large-scale instances, especially those arising from real-world applications. Two of the best-known decomposition methods are the Dantzig-Wolfe decomposition (column generation) and the Benders decomposition (row generation), broadly used in many applications. In this talk, we present the Bienstock-Zuckerberg (BZ) algorithm, which can be seen as a different way of doing column generation (correspondingly, an alternative way of generating Benders cuts). The method keeps an orthogonal basis as its column set and produces linear combinations of these columns. We present two families of applications where the method has been very successful. The first is precedence-constrained problems (e.g., precedence-constrained knapsack or production scheduling), notably when the number of tasks to include/schedule and their precedences are very large, as in the mining industry. The second is two-stage stochastic problems with fixed recourse, such as most stochastic network flow problems, especially when the number of scenarios in the uncertainty set is very large.
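As a rough illustration of the mechanism, the Python sketch below (with invented data; not the authors' implementation) shows the partition-refinement step that keeps master columns orthogonal: columns are indicator vectors of a partition of the variable set, and each new pricing column is absorbed by splitting every block against its support.

```python
# A minimal sketch of the partition-refinement step at the heart of the
# BZ idea: master-problem columns are kept as indicator vectors of a
# partition of the variable set (hence mutually orthogonal), and a new
# pricing column is absorbed by splitting each block against its support.
# All names and data here are illustrative, not the authors' code.

def refine_partition(partition, new_column):
    """partition: list of disjoint sets of variable indices (blocks).
    new_column: set of variable indices in the support of the new column.
    Returns a finer partition whose blocks lie each fully inside or fully
    outside the new column's support, so block indicators stay orthogonal."""
    refined = []
    for block in partition:
        inside = block & new_column
        outside = block - new_column
        if inside:
            refined.append(inside)
        if outside:
            refined.append(outside)
    return refined

# Toy example: variables {0..5}, one block, two pricing columns arrive.
partition = [{0, 1, 2, 3, 4, 5}]
partition = refine_partition(partition, {0, 1, 2})   # -> {0,1,2}, {3,4,5}
partition = refine_partition(partition, {2, 3})      # -> {2}, {0,1}, {3}, {4,5}
print(partition)
```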
Michel BIERLAIRE - EPFL (Switzerland)
May 16th, 12pm - 1.15pm / Room F134 (Galion Building, 1st floor)
Mathematical Modeling of Irrational Behavior: towards a linear formulation
Abstract: Mathematical modeling of choice behavior is rooted in microeconomic theory, and relies on the assumption that economic actors behave rationally. However, there is empirical evidence that the rationality assumption does not always hold. In this talk, we will present some of this evidence and describe how to develop behavioral models that can account for this apparent irrationality. We will then consider a formulation of these models that is designed to be integrated into optimization models.
Bilal GOKPINAR - UCL School of Management (United Kingdom)
April 26th, 12pm - 1.15pm / Room F135
How Do Robots Affect Firms’ Innovation Performance? Evidence from Spanish Manufacturers
Abstract: This paper examines the impact of robot use on manufacturing firms’ innovation performance. The analysis uses a rich panel dataset of Spanish manufacturing firms over 27 years (1990-2016). Our findings suggest that robot use has a negative effect on firms’ process innovation, whereas we do not observe a similar effect on firms’ product innovation. We also explore possible mechanisms through which robot use may affect process innovation. Our analysis reveals that the negative effect of robot use on process innovation is only salient for complex manufacturing industries, not for light or heavy manufacturing. In addition, we find that the negative effect of robot use on process innovation is smaller for older firms. Our results point to a possible mechanism whereby robots impede process innovation by reducing human involvement in the manufacturing process. Our findings highlight possible disadvantages of robots in manufacturing, a notion neglected in the previous literature.
Emma FREJINGER - Université de Montréal (Canada)
April 20th, 12pm - 1.15pm / Room F137
Integrated Prediction and Optimization for Better Decisions
Abstract: Predictions play a central role in solving various types of optimization problems. Yet prediction tasks and optimization problems are typically studied separately. In this talk, we describe settings where integrating prediction and optimization may be of high value. We discuss related challenges for both machine learning and optimization, using transport applications as illustrations. The objective of the talk is threefold: (i) provide background on some of the literature situated at the intersection of machine learning and operations research, (ii) outline a few open research topics, and (iii) present some of our research results at a high level.
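For readers new to the area, here is a minimal, hypothetical predict-then-optimize pipeline in Python: fit a model that predicts arc travel times and feed the predictions into a shortest-path problem. This is the sequential baseline that integrated approaches of the kind discussed in the talk aim to improve upon; the data, network, and features are toy stand-ins.

```python
# A minimal, hypothetical predict-then-optimize pipeline: predict arc
# travel times from features, then optimize over the predictions.
# Data and network are invented, not from the speaker's work.
import numpy as np
import networkx as nx
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Historical observations: features (e.g., hour of day, weather index)
# and realized travel times.
X = rng.uniform(0, 1, size=(200, 2))
y = 10 + 5 * X[:, 0] + 3 * X[:, 1] + rng.normal(0, 0.5, 200)
model = LinearRegression().fit(X, y)

# Predict travel times for today's conditions on each arc, then solve
# a shortest-path problem using the predicted weights.
G = nx.DiGraph()
arcs = {("s", "a"): [0.2, 0.1], ("s", "b"): [0.9, 0.8],
        ("a", "t"): [0.3, 0.2], ("b", "t"): [0.1, 0.1]}
for (u, v), feats in arcs.items():
    G.add_edge(u, v, time=float(model.predict([feats])[0]))

path = nx.shortest_path(G, "s", "t", weight="time")
print(path)
```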
Dolores ROMERO MORALES - Copenhagen Business School (Denmark)
March 23rd, 12pm - 1.15pm / Room LearningLab
From Counterfactual Analysis in Machine Learning to Benchmarking
Abstract: Traditional benchmarking based on simple key performance indicators is widely used and easy to understand. Unfortunately, such indicators cannot fully capture the complex relationship between multiple inputs and outputs in most firms. Data Envelopment Analysis (DEA) offers an attractive alternative. It builds an activity analysis model of best practices considering the multiple inputs used and products and services produced. This allows more substantial evaluations and also offers a framework that can support many other operational, tactical and strategic planning efforts.
Unfortunately, a DEA model may be hard to understand by managers. In turn, this may lead to mistrust in the model, and to difficulties in deriving actionable information from the model beyond the efficiency scores. In this paper, we propose the use of counterfactual analysis to overcome these problems. We define DEA counterfactual instances as alternative combinations of inputs and outputs that are "close" to the original inputs and outputs of the firm and lead to desired improvements in its performance. We formulate the problem of finding counterfactual explanations in DEA as a bilevel optimization model. For a rich class of cost functions, reflecting the effort an inefficient firm will need to spend to change to its counterfactual, finding counterfactual explanations boils down to solving Mixed Integer Convex Quadratic Problems with linear constraints. We illustrate our approach using both a small numerical example and a real-world dataset on banking branches. This is joint work with Peter Bogetoft and Jasone Ramírez-Ayerbe.
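As background to the counterfactual model, the following Python sketch computes the plain input-oriented CCR DEA efficiency score with a small LP, using hypothetical data; the bilevel counterfactual formulation in the talk builds on top of scores like these.

```python
# A minimal sketch of the CCR (constant returns to scale) DEA efficiency
# score that counterfactual analysis builds on: for a firm under
# evaluation, find the smallest radial input contraction theta such that
# a nonnegative combination of observed peers still dominates it.
# Data below are made up for illustration; the bilevel counterfactual
# model in the talk goes beyond this plain LP.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])  # inputs: firms x inputs
Y = np.array([[1.0], [1.0], [1.2]])                  # outputs: firms x outputs

def ccr_efficiency(o):
    n, m = X.shape
    s = Y.shape[1]
    # Decision vector: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(1 + n)
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(m):   # sum_j lambda_j * x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):   # sum_j lambda_j * y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    bounds = [(0, None)] * (1 + n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0]

for o in range(3):
    print(f"firm {o}: efficiency {ccr_efficiency(o):.3f}")
```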
Suresh P. SETHI - Naveen Jindal School of Management, University of Texas at Dallas (USA)
March 13th, 12pm - 1.15pm / F026 (Galion Building, ground floor)
Hierarchical and Mixed Leadership Games for Dynamic Supply Chains: Applications to Cost Learning and Co-op Advertising
Abstract: We consider two applications in dynamic stochastic supply chains. The first is a decentralized two-period supply chain in which a manufacturer produces a product with the benefit of cost learning and sells it through a retailer facing price-dependent demand. The manufacturer’s second-period production cost declines linearly in the first-period production, but with a random learning rate. The manufacturer may or may not have the inventory carryover option. We formulate the problem as two-period Stackelberg games and obtain their feedback equilibrium solutions explicitly. We then examine the impact of the mean learning rate and of learning rate variability on the pricing strategies of the channel members, on the manufacturer’s production decisions, and on the retailer’s procurement decisions. We show that as the mean learning rate or the learning rate variability increases, the traditional double marginalization problem becomes more severe, leading to greater efficiency loss in the channel. We obtain revenue-sharing contracts that can coordinate the dynamic supply chain. The second application studies a novel manufacturer-retailer cooperative advertising game where, in addition to the traditional setup in which the manufacturer subsidizes the retailer's advertising effort, we also allow reverse support from the retailer to the manufacturer. This is modeled as a mixed leadership game in which one player is a leader on some decisions and a follower on others. We find an equilibrium that can be expressed by the solution of a set of algebraic equations. We then conduct an extensive numerical study to assess the impact of model parameters on the equilibrium.
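To make the backward-induction logic of the first application concrete, here is a toy numerical sketch in Python (not the paper's model): a two-period wholesale-price Stackelberg game in which the manufacturer's period-2 unit cost falls with period-1 output. All parameters are invented, and the random learning rate is replaced by its mean.

```python
# A toy sketch of backward induction in a two-period wholesale-price
# Stackelberg game with cost learning: the manufacturer's period-2 unit
# cost falls with period-1 output, so the leader distorts the period-1
# wholesale price downward relative to the myopic optimum.
# All parameters are hypothetical; the talk treats a *random* learning
# rate, while this sketch plugs in its mean.
import numpy as np

a, b = 100.0, 1.0        # linear demand d = a - b*p
c1, gamma = 40.0, 0.2    # period-1 unit cost, mean learning rate

def retailer_quantity(w):
    # Retailer best response to wholesale price w: p* = (a + b*w)/(2b),
    # so the channel quantity is q = (a - b*w)/2.
    return max(0.0, (a - b * w) / 2.0)

def period2_profit(c2):
    # Manufacturer's optimal period-2 wholesale price against the
    # retailer's best response: w2* = (a/b + c2)/2.
    w2 = (a / b + c2) / 2.0
    return (w2 - c2) * retailer_quantity(w2)

# Grid search over the period-1 wholesale price, accounting for learning.
w1_grid = np.linspace(c1 * 0.5, a / b, 2001)
total = [(w1 - c1) * retailer_quantity(w1)
         + period2_profit(max(0.0, c1 - gamma * retailer_quantity(w1)))
         for w1 in w1_grid]
w1_star = w1_grid[int(np.argmax(total))]

w1_myopic = (a / b + c1) / 2.0  # optimum ignoring the learning benefit
print(f"myopic w1 = {w1_myopic:.2f}, learning-aware w1 = {w1_star:.2f}")
```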
Suresh P. Sethi is Eugene McDermott Chair Professor of Operations Management and Director of the Center for Intelligent Supply Networks (C4ISN) at The University of Texas at Dallas. He has contributed significantly to manufacturing and operations management, finance and economics, marketing, industrial engineering, operations research, and optimal control. He is well known for his developments of the Sethi advertising model, Sethi-Skiba points, and his textbook on optimal control.
He has written 11 books and published over 400 research papers in manufacturing and operations management, finance and economics, marketing, and optimization theory. He teaches a course on optimal control theory and applications and organizes a seminar series on operations management topics. He initiated and developed doctoral programs in operations management at The University of Texas at Dallas (UTD) and the University of Toronto. He built the operations management area of UTD’s Naveen Jindal School of Management into its current research powerhouse.
He has received prestigious honors and awards, including IEEE Fellow, INFORMS Fellow, SIAM Fellow, POMS Fellow, AAAS Fellow, IITB Distinguished Alum, and the Alumni Achievement Award of the Tepper School of Business, Carnegie Mellon University (2015). He served as POMS President (2012) and on the INFORMS Fellows Selection Committee (2014-16).
Two conferences have been organized in his honor: in Aix en Provence in 2005 and at The University of Texas at Dallas in 2006, with Harry M. Markowitz, a 1990 Nobel Laureate in Economics, as the keynote speaker.
Also, two books have been edited in his honor. The Production and Operations Management Society (POMS) has instituted the biennial Suresh Sethi Best Interdisciplinary Paper Award, beginning in 2023.
He serves/served on the editorial boards of several journals, including Production and Operations Management, SIAM Journal on Control and Optimization, Manufacturing & Service Operations Management, Operations Research, Optimal Control Applications and Methods, and Journal of Mathematical Analysis & Applications.
Mathieu DAHAN - Georgia Institute of Technology (USA)
December 14th, 12pm - 1.15pm / F124, Campus Cergy
Prescriptive Analytics for Profit Maximization in Time-Sensitive E-Commerce Retailing
Abstract: In this work, we propose a prescriptive approach for designing a middle-mile consolidation network that maximizes the profit of large e-commerce retailers. We embed lead-time-dependent sales volume predictions into a new mixed-integer program (MIP) that determines shipment lead times and consolidation plans to maximize sales revenue net of logistics costs. The MIP extends traditional flat network designs to capture waiting delays between load dispatches and to ensure that shipment lead-time requirements are satisfied with a desired probability. We propose a solution approach that uses an IP-based local search and dynamic lead-time selection to find excellent consolidation plans for large, practically sized instances. Computational experiments using data from a large U.S.-based e-commerce partner specializing in large and bulky items demonstrate the significant impact of flexible lead-time selection on the structure of the consolidation network designs and their concomitant operating costs.
This work is joint with Lacy Greening, Jisoo Park, Alan Erera, and Benoit Montreuil.
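The following deliberately tiny PuLP sketch captures the core trade-off only: choose one lead-time option per lane, where shorter lead times are predicted to sell more but cost more to ship, subject to a shared linehaul capacity. Lanes, numbers, and the capacity constraint are invented; the actual MIP also models consolidation waiting delays and probabilistic lead-time requirements.

```python
# A toy lead-time selection MIP in the spirit of the talk: pick one
# lead-time option per lane to maximize predicted revenue net of
# logistics cost, with a shared capacity coupling the lanes.
# All data are invented for illustration.
import pulp

# (lane, lead_time_days): (predicted revenue, logistics cost, trailer load)
options = {
    ("A", 2): (120, 55, 3), ("A", 5): (90, 35, 2),
    ("B", 2): (200, 95, 5), ("B", 5): (150, 50, 3),
}
lanes = {"A", "B"}
capacity = 7  # shared trailer capacity across lanes

prob = pulp.LpProblem("lead_time_selection", pulp.LpMaximize)
x = {k: pulp.LpVariable(f"x_{k[0]}_{k[1]}", cat="Binary") for k in options}

# Objective: revenue net of logistics cost.
prob += pulp.lpSum((rev - cost) * x[k] for k, (rev, cost, _) in options.items())
# Exactly one lead-time option per lane.
for lane in lanes:
    prob += pulp.lpSum(x[k] for k in options if k[0] == lane) == 1
# Shared capacity couples the lanes.
prob += pulp.lpSum(load * x[k] for k, (_, _, load) in options.items()) <= capacity

prob.solve(pulp.PULP_CBC_CMD(msg=0))
for k in options:
    if x[k].value() > 0.5:
        print(f"lane {k[0]}: lead time {k[1]} days")
```

In this toy instance the capacity is what pushes one lane onto the longer lead time, mirroring how network-wide constraints shape lead-time choices in the full model.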
Martin SCHMIDT - Trier University (Germany)
November 17th, 12pm - 1.15pm / F022, Galion Building, Campus Cergy
The minimum sum-of-squares clustering problem: Techniques for computing global solutions and a robust approach to deal with measurement errors
Abstract: The minimum sum-of-squares clustering (MSSC) problem is an important problem in data mining and (unsupervised) machine learning, with many applications in, e.g., medicine or the social sciences. However, it is known to be NP-hard in all relevant cases and is notoriously hard to solve to global optimality in practice. Moreover, in many modern applications the clustering suffers from unstructured measurement errors, because the MSSC result then represents a clustering of the erroneous measurements instead of the true but unknown underlying data.
In this talk, we discuss both issues. First, we present different tailored mixed-integer programming techniques to improve the performance of state-of-the-art MINLP solvers when applied to the problem, among them cutting planes, propagation techniques, branching rules, and primal heuristics. Our numerical study shows that our techniques significantly improve the performance of the open-source MINLP solver SCIP. Second, we tackle the other issue by applying techniques from robust optimization to hedge the clustering result against unstructured errors in the observed data. To this end, we derive strictly robust and Gamma-robust counterparts. Since the nominal problem is already NP-hard, global approaches are often not feasible in practice. As a remedy, we develop tailored alternating direction methods that decompose the search space of the robustified problems to quickly obtain feasible points of good quality. Our numerical results reveal an interesting feature: the less conservative Gamma-approach is clearly outperformed by the strictly robust clustering method. In particular, the strictly robustified clustering method is able to recover clusterings of the original data even if only erroneous measurements are observed.
Presentation: https://drive.google.com/file/d/1k2Wp9QDSauMwsaWiYoMPtYmbL8ctFTK/view?usp=share_link
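For orientation, the sketch below states the nominal MSSC objective and runs the classic alternating (Lloyd's/k-means) heuristic on synthetic data; this local heuristic offers no global-optimality guarantee, which is precisely why the tailored exact and robust techniques in the talk are needed.

```python
# A minimal numpy sketch of the nominal MSSC objective and the classic
# alternating heuristic the exact methods aim to improve upon: assign
# each point to its nearest center, then recenter each cluster at its
# mean. Data are synthetic; no global optimality is guaranteed.
import numpy as np

def mssc_objective(X, labels, centers):
    # Sum over points of the squared distance to the assigned center.
    return float(((X - centers[labels]) ** 2).sum())

def lloyd(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest center per point.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: recenter at cluster means (keep empty clusters fixed).
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels, centers = lloyd(X, k=2)
print(f"MSSC objective: {mssc_objective(X, labels, centers):.2f}")
```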
Yanlu ZHOU - ESSEC Business School (France)
September 2nd, 12pm - 1pm / room to be announced - Campus Cergy
Market Thickness in Online Food Delivery Platforms: The Impact of Food Processing Times
Abstract: Online food delivery (OFD) platforms are expanding rapidly worldwide, allowing customers to order food from a wide array of restaurants on their mobile phones. The pace of this expansion accelerated in part because of changes in consumer preferences during the COVID-19 pandemic. We study new algorithms for matching drivers with orders on OFD platforms, a core function of many such platforms. We develop real-time matching algorithms that use the (highly variable) food processing time to "delay" the assignment of drivers to orders and thereby thicken the "market" of orders and drivers. The policy relies on machine learning techniques to predict the food processing time of an order, and the dispatch timings are obtained through a careful trade-off analysis of the supply-demand dynamics and the variability in the food processing times. Traditional methods in this field include greedy (online matching) and batching (matching in periodic intervals) policies. We introduce a new level-k policy that delays the assignment until no more than k matching options remain, and we show numerically that this policy performs exceedingly well on real data. Numerical experiments based on a real dataset from Meituan show that our policy can reduce total costs by 42% compared to matching policies currently in use. To explain the advantage of the level-k policy, we evaluate its performance on a simplified model and derive several analytical properties: the total costs are quasiconvex in the level of market thickness, implying that an intermediate level of thickness is optimal. We also find that the food processing time is key information for the matching decisions. Finally, our findings indicate that if the unit delay penalty cost is high, the decision-maker of the platform should match conservatively.
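A stylized discrete-time simulation of the level-k dispatch rule, on synthetic data (the paper's experiments use Meituan data), might look as follows; every parameter and the cost model are invented for illustration.

```python
# A stylized simulation of the level-k rule described in the abstract:
# hold an order while many driver options remain, and commit once at
# most k candidates are left (or the food is ready). Cost: a driver who
# arrives before the food is ready waits; food that is ready before the
# driver arrives goes stale. All numbers are invented.
import random

def simulate(k, horizon=200, p_order=0.4, p_driver=0.5, travel=3, seed=0):
    rng = random.Random(seed)   # same scenario for every policy
    idle, held, cost = [], [], 0.0
    for t in range(horizon):
        if rng.random() < p_driver:
            idle.append(t)                        # a driver becomes idle
        if rng.random() < p_order:
            held.append(t + rng.randint(2, 12))   # predicted food ready time
        for ready in held[:]:
            # Dispatch when few options remain (level-k) or food is ready.
            if idle and (len(idle) <= k or t >= ready):
                idle.pop()                        # commit one driver now
                arrival = t + travel
                cost += abs(arrival - ready)      # wait or staleness penalty
                held.remove(ready)
    return cost

greedy = simulate(k=10**9)   # effectively: match as soon as possible
for k in (1, 2, 4):
    print(f"level-{k}: cost {simulate(k):.0f} vs greedy {greedy:.0f}")
```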