Research Seminar 2016/2017
- Presentation, Pietro DE GIOVANNI, ESSEC Business School, France (October 3rd 2016)
- Title: Environmental Collaboration and Process Innovation in Supply Chain Management
- Abstract: This paper investigates a dynamic supply chain model in which a manufacturer chooses process innovation investments and a retailer sets the price. The process innovation investments improve the environmental performance, which represents our state variable. The environmental performance is damaged by negative externalities implied by the demand. This generates an interesting operational trade-off between sales and environmental damage that both players seek to resolve. We show the overall benefits that cooperation in a process innovation program generates for all supply chain members.
- Presentation, Jean-François CORDEAU, HEC Montréal, Canada (October 31st 2016)
- Presentation, Elena BELAVINA, University of Chicago Booth School of Business, United States (October 13th 2016)
- Title: Grocery Access, Market Structure and Food Waste
- Abstract: This paper studies how access to grocery stores, and the extent and nature of competition in the grocery retail market, influence food waste. Access to grocery, that is, how dense the network of retail stores in a neighborhood is, varies extensively as a result of zoning laws and other city government initiatives. Similarly, some markets are dominated by one chain, while others have a high degree of competition with many independent grocery stores. Finally, consumers in some markets are more price-sensitive, while in others the degree of product availability or service level is the key competitive variable. We build a multi-echelon arborescent supply chain model whose lowest tier consists of heterogeneous customers, each making optimal perishable inventory replenishment timing and level decisions in the face of demand uncertainty. Competing grocery stores form the next tier: their demand arises as the superposition of the customers' stochastic order processes, and they in turn manage store inventories. We use this model to compute food waste and its dependence on store density and market structure.
The analysis reveals that, independent of the market structure, denser grocery store networks result in higher food waste at the store level but lower consumer food waste, in contrast with the conventional wisdom that a higher level of price competition would lead to higher consumer food waste through the resulting lower grocery prices. The conventional logic does not take into account the reduction in food waste due to the increased convenience of grocery shopping. Overall, denser grocery store networks have lower food waste, as consumer-side waste is substantially higher than waste at the store level. Further, keeping store density fixed, when price is the main competitive dimension, market structures with a low degree of competition (a single dominant chain) are more environmentally friendly than ones with many independent retailers. On the other hand, when the main competitive dimension is service level, a higher degree of competition is preferred. That is, even without changing grocery store density, city governments can influence food waste levels by instilling the "right" competitive structure.
- Presentation, Bernard GENDRON, CIRRELT, Canada (November 21st 2016)
- Title: Branch-and-Price-and-Cut for Multicommodity Network Design
- Abstract: We consider a mixed-integer programming model that represents a large number of network design applications in transportation and logistics. We discuss several alternatives for solving this model, in particular a column-and-row generation approach recently introduced by Frangioni and Gendron (2013) under the name "Structured Dantzig-Wolfe Decomposition". We present preliminary computational results that compare the different variants on a set of large-scale network design instances.
- Presentation, Bernard FORTZ, Université Libre de Bruxelles, Belgium (December 15th 2016)
- Title: Computational Strategies for a Multi-Period Network Design and Routing Problem
- Abstract: The conventional multicommodity capacitated network design problem deals with the simultaneous optimization of capacity installation and traffic flow routing, where a fixed cost is incurred for opening a link and a linear routing cost is paid for sending traffic flow over a link. The routing decision must be performed such that traffic flows remain
bounded by the installed capacities. In this talk, we generalize this problem over multiple time periods using an increasing convex cost function which takes into account congestion (number of routing paths per edge) and delay (routing path length). We propose a compact Mixed Integer Linear Program (MILP) formulation for this problem, based on the aggregation of traffic flows by destination following the per-destination routing decision process underlying packet networks. We observe that the resolution with realistic topologies and traffic demands becomes rapidly intractable with state-of-the-art solvers due to the weak linear programming bound of the proposed MILP formulation. We also introduce an extended formulation where traffic flows are disaggregated by source-destination pairs, while keeping the requirement of destination-based routing decisions. This extended formulation provides for all evaluated topologies stronger linear programming lower bounds than the base formulation. However, this formulation still suffers from the large size of the resulting variables and constraints sets; hence, solving the linear relaxation of the problem becomes intractable when the network size increases. In this talk, we investigate different computational strategies to overcome the computational limits of the formulations. We propose different branch-and-cut strategies and a Lagrangian relaxation approach.
Joint work with Enrico Gorgone (ULB) and Dimitri Papadimitriou (Nokia - Bell Labs)
- Presentation, Moritz FLEISCHMANN, University of Mannheim, Germany (January 30th 2017)
- Title: Strategic Grading in the Product Acquisition Process of a Reverse Supply Chain
- Abstract: Most recommerce providers apply a quality-dependent process for the acquisition of used products. They acquire the products via websites at which product holders submit upfront quality statements and receive quality-dependent acquisition prices for their used devices. This presentation is based on two papers that are motivated by this development of reverse logistics practice and aim to analyse the product assessment process of a recommerce provider in detail. We first propose a sequential bargaining model with complete information which captures the individual behaviour of the recommerce provider and the product holder. We determine the optimal strategies of the product holder and the recommerce provider in this game. We find that the resulting strategies lead to an efficient allocation, although the recommerce provider can absorb most of the bargaining potential due to its last-mover advantage. We then relax the assumption of complete information and include uncertainty about the product holder's valuation of the product. We show the trade-off underlying the recommerce provider's optimal counteroffer decision and analyse the optimal strategy, using a logistic regression approach on a real-life data set of nearly 6,000 product submissions. The results reveal a significant improvement potential compared to the currently applied strategy. The second paper takes the analysis of incomplete information further and derives the equilibrium strategies when both players face uncertainty. At its core, it then addresses the recommerce provider's optimization of the price-quality menu. Finally, we propose an alternative acquisition process that overcomes some observed deficiencies of the current process.
- Presentation, Jean-Charles CHEBAT, HEC Montréal, Canada (February 20th 2017)
- Title: Counterproductive Effects of Commonsensical Marketing Strategies in Services Marketing
- Abstract: Customers are increasingly violent toward frontline employees. Service corporations have developed three commonsensical strategies to deal with consumers, namely "the customer is king", "service with a smile", and "the corporation as a family". Our empirical data (some 500 service employees) show that such strategies bring about paradoxical negative consequences for the employees' behaviors toward the service corporation, especially in terms of reduced commitment to the employer and deviant behavior. Employees also show increased anger toward the corporation and emotional exhaustion. Managerial conclusions are drawn from the findings.
- Presentation, Zeynep AKSIN, Koç University, Turkey (March 27th 2017)
- Title: How Experienced Waits Drive Queue Behavior in the Lab
- Abstract: Using laboratory experiments, we study join and quit decisions by subjects from a single server, observable,
first come first served queue. In a set-up that incentivizes decisions that would maximize an expected utility function that is linear in waiting costs, we explore the role that queue length and encountered service times play in these decisions. We show that, for the same total waiting times, both the probability of quitting a queue and the survival time in a queue are affected by the queue length as well as by experienced service times. We further find that subjects are less inclined to join queues with random service times relative to a benchmark queue with deterministic service times. The implications of the results for queue design and delay announcements in queues are discussed, along with ongoing experiments that explore the role of a waiting time announcement upon entry. Joint work with Busra Gencer, Evrim Gunes, Ozge Pala.
- Presentation, Jamal OUENNICHE, University of Edinburgh, United Kingdom (April 10th 2017)
- Title: A Dual Local Search Framework for Combinatorial Optimization Problems with TSP Application
- Abstract: In practice, solving realistically sized combinatorial optimization problems to optimality is often too time consuming to be affordable; therefore, heuristics are typically implemented within most applications software. A specific category of heuristics has attracted considerable attention, namely local search methods. Most local search methods are primal in nature; that is, they start the search with a feasible solution and explore the feasible space for better feasible solutions. In this research, we propose a dual local search method and customize it to solve the traveling salesman problem (TSP); that is, a search method that starts with an infeasible solution, explores the dual space—each time reducing infeasibility, and lands in the primal space to deliver a feasible solution. The proposed design aims to replicate the designs of optimal solution methodologies in a heuristic way. To be more specific, we solve a combinatorial relaxation of a TSP formulation, design a neighborhood structure to repair such an infeasible starting solution, and improve components of intermediate dual solutions locally. Sample-based evidence along with statistically significant t-tests support the superiority of this dual design compared to its primal design counterpart.
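To illustrate the repair-then-improve idea behind such a dual design (start from an infeasible solution, reduce infeasibility step by step, land in the primal space, then improve there), the following sketch builds a TSP tour by cheapest insertion, where each step routes one more city, and then polishes the feasible tour with a primal 2-opt pass. This is a generic toy example, not the authors' method:

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tour_length(tour, pts):
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def repair(pts):
    """Start 'infeasible' (no city routed) and reduce infeasibility by one
    unit per step via cheapest insertion until a full tour exists."""
    unrouted = set(range(len(pts)))
    tour = [unrouted.pop()]
    while unrouted:
        best = None
        for c in unrouted:
            for i in range(len(tour)):
                j = (i + 1) % len(tour)
                delta = (dist(pts[tour[i]], pts[c]) + dist(pts[c], pts[tour[j]])
                         - dist(pts[tour[i]], pts[tour[j]]))
                if best is None or delta < best[0]:
                    best = (delta, c, j)
        _, c, j = best
        tour.insert(j, c)
        unrouted.remove(c)
    return tour

def two_opt(tour, pts):
    """Primal improvement once feasibility has been reached."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                if i == 0 and j == len(tour) - 1:
                    continue  # same edge pair on a closed tour
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                if (dist(pts[a], pts[c]) + dist(pts[b], pts[d])
                        < dist(pts[a], pts[b]) + dist(pts[c], pts[d]) - 1e-12):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
t = two_opt(repair(pts), pts)
print(round(tour_length(t, pts), 3))
```

The repair phase mirrors the "each time reducing infeasibility" idea in spirit only; the paper's dual local search operates on a combinatorial relaxation of a TSP formulation rather than on partial tours.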
- Presentation, Emanuele BORGONOVO, Bocconi University, Italy (May 2017)
- Title: Sensitivity Analysis in the Management Sciences
- Abstract: The solution of several operations research problems requires the creation of a quantitative model. Sensitivity analysis is a crucial step in the model building and result communication process. Through sensitivity analysis, we gain essential insights on model behavior, on its structure and on its response to changes in the model inputs. Many questions can be posed, and several sensitivity analysis methods have been developed to answer them, giving rise to a vast and growing literature. We present an overview of available methods, structuring them into local and global methods. For local methods, we discuss Tornado diagrams, one-way sensitivity functions, differentiation-based methods and scenario decomposition through finite change sensitivity indices, providing a unified view of the associated sensitivity measures. We then analyze global sensitivity methods, first discussing screening methods such as sequential bifurcation and the Morris method. We then address variance-based, moment-independent and value-of-information-based sensitivity methods. We discuss their formalization in a common rationale and present recent results that permit the estimation of global sensitivity measures by post-processing the sample generated by a traditional Monte Carlo simulation. We then investigate in detail the methodological issues concerning the crucial step of correctly interpreting the results of a sensitivity analysis. A classical example is worked out to illustrate some of the approaches.
Joint work with Elmar Plischke, Clausthal University of Technology.
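A Tornado diagram, the first local method mentioned above, amounts to a few lines of one-at-a-time evaluation: vary each input over its range with all others at base values, record the output swing, and sort the bars widest first. The profit model and input ranges below are purely hypothetical:

```python
def profit(price=10.0, volume=1000.0, unit_cost=6.0, fixed_cost=2000.0):
    """Toy profit model; base-case output is 2000."""
    return price * volume - unit_cost * volume - fixed_cost

base = profit()

# Hypothetical low/high scenarios for each input.
ranges = {
    "price": (9.0, 11.0),
    "volume": (800.0, 1200.0),
    "unit_cost": (5.0, 7.0),
    "fixed_cost": (1500.0, 2500.0),
}

swings = {}
for name, (lo, hi) in ranges.items():
    out_lo = profit(**{name: lo})   # one-at-a-time: only `name` is varied
    out_hi = profit(**{name: hi})
    swings[name] = abs(out_hi - out_lo)  # width of the tornado bar

# Print bars widest first, as in a tornado diagram.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name:<10} swing = {swing:.0f}")
```

Note that such one-way measures ignore interactions between inputs, which is precisely the limitation the global (variance-based and moment-independent) methods in the talk are designed to overcome.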
Research Seminar 2015/2016
- Presentation, Immanuel Bomze, University of Vienna, Austria (October 26th 2015)
- Title: Data Analysis, Machine Learning, Ternary and other Hard Decision Problems: How Copositive Optimization Can Help
- Abstract: Automated data analysis has by now reached an impressive level of impact on everyday life and will most certainly intensify its influence on an overwhelming part of our society. Most of the algorithms currently employed are based on criteria leading to optimization problems which are notoriously hard to solve. As a consequence, heuristic and/or incomplete routines are often used. However, both false-positive and false-negative results can be literally lethal, so it is mandatory to have quality guarantees, at least if human control is only superficially involved in key decision processes. Only exact optimization methods can assist in this situation. The task complexity is partly due to the discrete (or mixed-integer) structure, and partly to the non-convexity of the functions involved, even if only continuous decision variables are considered. Here we deal with one example of this phenomenon, namely yes/no decisions with an abstention possibility. A similarity-based clustering or prediction criterion which takes this third option seriously leads to a ternary fractional quadratic optimization problem. This class has recently been shown to be very hard from the worst-case complexity point of view and has several applications, e.g. via graph tripartitioning in the analysis of social networks. Copositive optimization is a 15-year-old paradigm [3,4] which transforms both above-mentioned sources of difficulty, discreteness and nonconvexity, into an optimization problem with a linear objective function, subject to linear constraints, over a convex (matrix) cone, thus pushing all problem complexity into the description of the cone. This approach allows for developing tractable approximations of these hard problems, building upon the by now well-established interior-point optimization technology with its polynomial-time worst-case complexity.
The resulting bounds can serve as a benchmark on the quality of decision, and may be used, e.g., for alerting human supervision in an automated way.
- Presentation, Richard Hartl, University of Vienna, Austria (November 19th 2015)
- Title: A Multi-Newsvendor Distribution System with Resupply and Vehicle Routing
- Abstract: This paper analyzes the problem of delivering perishable products from a depot to stores and introduces the option of performing a second delivery per day to allow a retailer to better deal with uncertainty. For each store we have to determine the initial delivery quantity, whether or not the store will receive a resupply, the timing and quantity of that resupply, and the sequence in which the stores are visited. The problem consists of two intertwined sub-problems: a stochastic inventory optimization problem for given delivery routes and delivery times (second stage), and a deterministic profitable tour problem for the a-priori route, determining which stores receive a resupply, the sequence of the stores, and the resupply timing. We provide lower and upper bounds on profits and propose a hybrid solution procedure. The stochastic inventory optimization problem is solved by stochastic dynamic programming, while the profitable tour problem is solved using variable neighborhood search. In order to provide an efficient solution method, the second-stage profit is not always computed exactly; approximations and bounds are used whenever possible. The algorithm is tested on randomly generated test instances. Furthermore, a real-world scenario is investigated. The results show a considerable improvement compared to the "single-order" model.
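To give a feel for why a second, information-driven delivery can pay off, here is a minimal Monte Carlo sketch (not the authors' model, which also involves routing): a single store faces correlated morning and afternoon demand, and a stylized resupply policy tops up the afternoon stock after observing morning sales. All prices and demand parameters are hypothetical:

```python
import math
import random
import statistics

price, cost, salvage = 10.0, 4.0, 1.0
crit = (price - cost) / (price - salvage)       # newsvendor critical fractile
z = statistics.NormalDist().inv_cdf(crit)

mu1, sd1 = 50.0, 15.0   # morning demand ~ N(50, 15)
sd2 = 5.0               # afternoon demand ~ N(d1, 5): correlated with morning
sd_total = math.sqrt(4 * sd1 ** 2 + sd2 ** 2)   # total demand = 2*d1 + noise

def profit(stock, demand, purchased):
    sold = min(stock, demand)
    return price * sold + salvage * (stock - sold) - cost * purchased

rng = random.Random(7)
n = 20000
single = resupply = 0.0
q_single = 2 * mu1 + z * sd_total               # one delivery for the whole day
q1 = mu1 + z * sd1                              # first delivery (morning only)
for _ in range(n):
    d1 = max(rng.gauss(mu1, sd1), 0.0)
    d2 = max(rng.gauss(d1, sd2), 0.0)
    single += profit(q_single, d1 + d2, q_single)
    # Resupply policy: observe d1, then top up toward the updated fractile.
    sold1 = min(q1, d1)
    leftover = q1 - sold1
    q2 = max(d1 + z * sd2 - leftover, 0.0)
    resupply += price * sold1 - cost * q1 + profit(leftover + q2, d2, q2)
print(round(single / n, 1), round(resupply / n, 1))
```

Because morning sales are informative about the afternoon, the resupply policy carries far less safety stock; with independent half-day demands the gap would largely disappear, which is why the forecast-update aspect matters.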
- Presentation, Eugene Khmelnitsky, Tel-Aviv University, Israel (December 15th 2015)
- Title: Bucket Brigade with Stochastic Worker Pace
- Abstract: Work-sharing in production systems is a modern approach that
improves the throughput rate. Work is shifted between cross-trained workers in
order to better balance the material flow in the system. When a serial system
is concerned, a common work-sharing approach is the Bucket-Brigade (BB), by
which downstream workers sequentially take over items from adjacent upstream
workers. When the workers are located from slowest-to-fastest and their speeds
are deterministic, it is known that the line does not suffer from blockage or
starvation, and achieves the maximal theoretical throughput rate. Very little
is known in the literature on stochastic self-balancing systems with
work-sharing, and on BB in particular. This paper studies a basic BB model of
Bartholdi & Eisenstein (1996) under the assumption of stochastic speeds. We
identify settings in which conclusions that emerge from deterministic analysis
fail to hold when speeds are stochastic, in particular relating to worker order
assignment. Significantly, in a stochastic environment the BB can improve the
throughput rate compared to parallel workers, despite the fact that no blockage
or starvation occurs in the latter. Joint work with Y. Bukchin and E. Hanany.
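The self-balancing dynamics can be illustrated with a toy simulation of a two-worker bucket brigade on a line of unit work content with instantaneous walk-back, much simpler than the Bartholdi-Eisenstein model studied in the talk; speeds, distributions and the seed below are illustrative assumptions. With deterministic speeds v1 and v2 the throughput converges to v1 + v2, whereas random speeds introduce blocking losses:

```python
import random

def bb_throughput(speed1, speed2, items=20000):
    """Two-worker bucket brigade on work content [0, 1], instant walk-back.
    speed1/speed2 are callables returning a (possibly random) speed per item."""
    x1, x2 = 0.0, 0.5          # positions of each worker's current item
    v1, v2 = speed1(), speed2()
    t_total = 0.0
    for _ in range(items):
        t = (1.0 - x2) / v2            # time until worker 2 finishes an item
        t_total += t
        x1 = min(x1 + v1 * t, 1.0)     # worker 1 may be blocked behind worker 2
        x2 = x1                        # hand-off: worker 2 takes worker 1's item
        x1 = 0.0                       # worker 1 starts a fresh item at the line head
        v1, v2 = speed1(), speed2()    # speeds redrawn per item
    return items / t_total

rng = random.Random(0)
det = bb_throughput(lambda: 1.0, lambda: 2.0)
stoch = bb_throughput(lambda: rng.uniform(0.5, 1.5),
                      lambda: rng.uniform(1.0, 3.0))
print(round(det, 3), round(stoch, 3))
```

With the same mean speeds (1 and 2), the stochastic line loses capacity both to blocking and to the longer time spent in intervals where the downstream worker happens to be slow, which is the kind of effect the deterministic analysis cannot capture.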
- Presentation, Pietro De Giovanni, ESSEC Business School, France (January 15th 2016)
- Title: Sharing Contracts: Legend or Reality? A Triangulation Analysis to Discover the Truth
- Abstract: While the effectiveness of sharing contracts has been widely lauded in theoretical research, their real applicability and concrete advantages remain vague and ambiguous. Starting from a case study, we have developed a game theory model to replicate several scenarios in which the adoption of sharing contracts depends on the partnership life cycle. Interestingly, we are able to demonstrate that a sharing contract is a very useful mechanism to coordinate a supply chain. Then, we have used the related findings to develop an empirical analysis as well as a qualitative investigation. In the empirical analysis we have collected survey data and run various multinomial logit regression models to statistically verify the suitability of sharing contracts. Similarly, we have conducted comparative case studies within seven different sectors in the Netherlands to qualitatively check the applicability as well as the success of sharing contracts in real business. Both the empirical and the qualitative research reveal a clear finding: sharing contracts are marginally used in practice, while their real advantages are negligible. So, are they a legend or a reality?
- Presentation, Guiomar Martín-Herrán, University of Valladolid, Spain (February 19th 2016)
- Title: Local and National Advertising Pulsing in a Marketing Channel
- Abstract: Despite the fact that the use of sporadic advertising schedules is well established in both the advertising literature and the market place, the marketing channel literature that focuses on vertical interactions has consistently prescribed continuous advertising strategies over time. This paper investigates the optimal scheduling of local and national advertising over a planning horizon of three periods, in a bilateral monopoly in which a manufacturer and a retailer control their pricing and advertising decisions. We find that, consistent with the advertising literature, the integrated channel adopts pulsing to benefit from advertising's positive carryover effects. Conversely, when the channel is uncoordinated, three advertising schedules are identified as equilibria: a continuous schedule in which both channel members advertise in all three periods; a pulsing schedule in which both channel members advertise only in the first and third periods; and a mixed schedule in which the retailer advertises in all three periods while the manufacturer advertises only in the first and third. The conditions under which each schedule can be implemented depend critically on the long-term effects of local and national advertising. In particular, pulsing is the optimal schedule when local advertising has very large negative or positive long-term effects.
- Presentation, Wout Dullaert, VU University Amsterdam, Netherlands (March 4th 2016)
- Title: Efficient Local Search for Routing Problems
- Abstract: In the Vehicle Routing Problem with Multiple Time Windows (VRPMTW), a time window per customer has to be selected to minimize the total duration of the solution. This increases the complexity of the routing problem compared to the VRP with a single time window per customer. By determining the optimal selection of time windows, the waiting time in a route can be reduced. We present an exact polynomial-time algorithm to efficiently determine the optimal time window selection when a neighborhood operation is applied. This algorithm is embedded in a variable neighborhood tabu search metaheuristic to solve the VRPMTW, and the results are compared to the best-known solutions of the VRPMTW instances from the literature. The second topic of the seminar discusses several strategies for a more efficient implementation of the concept of Static Move Descriptors (SMDs), a recently developed technique that drastically speeds up local search based algorithms. SMDs exploit the fact that each local search step affects only a small part of the solution and allow for efficient tracking of changes at each iteration, such that unnecessary re-evaluations can be avoided. The concept is highly effective at reducing computation times and is sufficiently generic to be applied in any local search based algorithm. Despite its significant advantages, the design proposed in the literature suffers from high overhead and high implementation complexity. Our proposals lead to a much leaner and simpler implementation that offers better extendibility and significant further speedups of local search algorithms. We compare implementations for the Capacitated Vehicle Routing Problem (CVRP), a well-studied, complex problem that serves as a benchmark for a wide variety of optimization techniques.
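The core observation behind SMDs, namely that a local search step touches only a few edges so most move evaluations can be reused, can be illustrated with a constant-time delta evaluation of a relocate move on a single route. This shows only the delta-evaluation idea, not the full SMD bookkeeping from the literature:

```python
import math
import random

def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def route_cost(route, pts):
    return sum(d(pts[route[k]], pts[route[k + 1]]) for k in range(len(route) - 1))

def relocate_delta(route, pts, i, j):
    """O(1) cost change of moving route[i] between route[j] and route[j+1]
    (assumes i < j). Only the six edges touching the move are evaluated;
    the rest of the route need not be re-examined."""
    a, b, c = route[i - 1], route[i], route[i + 1]
    p, q = route[j], route[j + 1]
    return (d(pts[a], pts[c]) - d(pts[a], pts[b]) - d(pts[b], pts[c])
            + d(pts[p], pts[b]) + d(pts[b], pts[q]) - d(pts[p], pts[q]))

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(12)]
route = [0] + list(range(1, 12)) + [0]     # depot 0, customers 1..11, back to depot
i, j = 4, 8                                # move customer at slot 4 after slot 8
delta = relocate_delta(route, pts, i, j)

# Sanity check against a full O(n) recomputation.
new_route = route[:i] + route[i + 1:]      # remove the customer...
new_route.insert(j, route[i])              # ...and reinsert it after old slot j
full_delta = route_cost(new_route, pts) - route_cost(route, pts)
print(abs(delta - full_delta) < 1e-9)
```

An SMD implementation would additionally keep such deltas for all candidate moves in a priority structure and, after applying a move, refresh only the descriptors whose edges were affected.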
- Presentation, Kalyan Talluri, Imperial College, United Kingdom (April 15th 2016)
- Title: The Network Revenue Management Dynamic Program and Approximations
- Abstract: We first survey the applications of network revenue management and its various formulations as a stochastic dynamic program. Given the difficulty of solving these dynamic programs, a number of good approximation algorithms have been proposed. We describe some of the theoretical results on approximation bounds as well as the complexity limits.
- Presentation, Vinay Ramani, Indian Institute of Management, India (April 27th 2016)
- Title: Product Cannibalization and the Effect of a Service Strategy
- Abstract: Product cannibalization can push some consumers to shift their purchasing preferences from new to used products. This is a costly issue for manufacturers, who have to adjust their pricing strategies to mitigate the negative effect of cannibalization. In this paper, we characterize a channel to examine the effect of product cannibalization on firms' profits. In particular, we investigate how the presence of a Goodwill agency in a second-hand market impacts the business of a manufacturer in a new market through cannibalization, and how the manufacturer reacts to mitigate its effects. We show that even if the manufacturer adjusts its price to lessen the negative effects of cannibalization, this effect is so severe that the manufacturer always loses some profit. Nevertheless, when the manufacturer provides additional services to new consumers, the negative effects of cannibalization can be partially overcome, while the channel achieves a profit-Pareto-improving outcome.
- Presentation, Bruno Viscolani, University of Padova, Italy (May 13th 2016)
- Title: Age-structured Goodwill: from Optimal Control to Differential Games
- Abstract: An age-dependent market segmentation is often suitable for real life products. In the first part of the talk, we introduce a simple age-structured model for the advertising process of a firm and the consequent goodwill evolution. The model's formal structure is characterized by a first-order linear partial differential equation. We formulate the advertising problem for a new product introduction as a distributed parameter optimal control problem and use the appropriate Maximum Principle conditions to characterize the optimal advertising flow. In the second part of the talk we move to differential games. As one may expect, the necessary conditions for an open-loop Nash equilibrium in a distributed parameter differential
game are hard to discuss. Therefore, we search for conditions on age-structured differential games that make their analysis more tractable. We focus on a class of games which show the features of ordinary linear-state differential games, and we prove that their open-loop Nash equilibria are subgame perfect. By means of a simple age-structured advertising problem, we provide an example of the theoretical results presented in the paper, and we show how to determine an open-loop Nash equilibrium. This is joint work with Luca Grosset.
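Age-structured goodwill dynamics of the kind mentioned above are typically written as a McKendrick-type first-order linear PDE. The following is a generic sketch of such a model; the symbols G, u and delta are our own illustrative notation, not necessarily the authors':

```latex
% G(t,a): goodwill at time t in the age-a segment,
% u(t,a): advertising effort, \delta(a): age-specific decay rate.
\frac{\partial G}{\partial t}(t,a) + \frac{\partial G}{\partial a}(t,a)
  = u(t,a) - \delta(a)\, G(t,a),
\qquad G(0,a) = G_0(a), \quad G(t,0) = 0 .
```

Information propagates along the characteristic lines t - a = const, which is what keeps distributed parameter Maximum Principle conditions tractable for this class of dynamics.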
Research Seminar 2014/2015
- Presentation, Aurelie Thiele, Lehigh University, USA (May 18th 2015)
- Title: Robust Design of (American) Health Insurance Plans
- Abstract: We
investigate optimization models to design a menu of health insurance plans
offered by a large (American) employer self-insuring through a private health
exchange, in the presence of uncertainty in employees’ choice of plans, health
status and health resource utilization. While the application of this talk is
grounded in the American healthcare landscape, it incorporates risk management,
choice models, robust optimization, insurance concepts and, more broadly,
decision-making under high uncertainty and customer choice. Our goal is to optimize, subject
to a budget constraint on the employer’s part, an employee-driven criterion
such as the minimization of employees’ out-of-pocket costs or the maximization
of the fairness of the design, with a focus on healthcare cost as a fraction of
employee’s income. We analyze the employer’s cost/long-term risk trade-off and
analyze policy choices to promote employees’ health while maintaining premiums at
a sustainable level. Of key interest is the allocation of the employer’s budget
to various health care costs, such as prescription drugs or hospital stays, to
achieve health objectives for the employee population without waste. We
consider both traditional health plans and High-Deductible Health Plans
(HDHPs), which give employees an incentive to be cost-conscious in incurring
health expenses below the deductible – with the exception of preventive
services required to be offered for free by the plan – but may also delay care.
We provide a heterogeneous view of the employee population based on the type
of plan employees select (from Health Maintenance Organizations with a local
provider network and a requirement to have a referral to access the specialty-care
network, to more expensive Preferred Provider Organizations that do not require
referrals to make appointments with specialists) and plans’ actuarial value or
metal level (bronze, silver, gold and platinum having actuarial values of 60%,
70%, 80% and 90%, respectively, which represents the part of healthcare costs
the payer is expected to shoulder for a benchmark enrollee population). In
addition, we discuss how to select the optimal number of plans on offer in the
private health exchange. Joint work with Dimitris Bertsimas and Jerry Chen of MIT.
- Presentation, Felix Papier, ESSEC Business School, France (April 13th 2015)
- Title: How to Split the Pie? Sequential Supply Allocation with Forecast Updates
- Abstract: We study the problem of allocating supply under advance demand information (ADI). We consider a company that must allocate limited inventory to different markets that open sequentially. To reduce uncertainty, the company receives advance demand information and updates forecasts about its markets each time it makes an allocation decision. We study the value and optimal use of this information. Our research is motivated by an agri-food manufacturer that operates in several European countries. We develop the optimal policy under relaxed conditions and an efficient heuristic policy that performs close to optimal under general conditions. We derive structural properties of the model to gain managerial insights, and we derive the optimal policy in closed form for the case of markets with identical prices. We use numerical experiments to demonstrate that the value of ADI can be significant. The managerial insights of this study include the observations that, in environments such as the one that motivated our research, early markets systematically receive less supply than late markets, and that the value of ADI is greatest when the initial supply is close to the initial forecasts.
- Presentation, Francis De Vericourt, ESMT Berlin, Germany (March 2nd 2015)
- Title: Financing Capacity Investment under Demand Uncertainty
- Abstract: This paper studies the interplay between the operational and financial facets of capacity investment. We consider the capacity choice problem of a firm with limited liquidity and whose access to external capital markets is hampered by moral hazard. The firm must therefore not only calibrate its capacity investment and the corresponding funding needs, but also optimize its sourcing of funds. Importantly, the set of available sources of funds is derived endogenously and includes standard financial claims (debt, equity, etc.). We find that when higher demand realizations are more indicative of high effort, debt financing is optimal for any given capacity level. In this case, the optimal capacity is never below the efficient capacity level but sometimes strictly above that level. Further, the optimal capacity level increases with the moral hazard problem's severity and decreases with the firm's internal funds. This runs counter to the newsvendor logic and to the common intuition that by raising the cost of external capital and hence the unit capacity cost, financial market frictions should lower the optimal capacity level. We trace the value of increasing capacity beyond the efficient level to a bonus effect and a demand elicitation effect. Both stem from the risk of unmet demand, which is characteristic of capacity decisions under uncertainty. Joint work with Denis Gromb.
- Presentation, Oualid Jouini, Ecole Centrale Paris, France (February 16th 2015)
- Title: Call Centers with Delay Information: Models and Insights
- Abstract: We analyze a call center with impatient customers and study how informing customers about their anticipated delays affects performance. Customers react by balking upon hearing the delay announcement, and may subsequently renege, particularly if the realized waiting time exceeds the delay originally announced to them. Balking and reneging in such a system are therefore functions of the delay announcement. Modeling the call center as an M/M/s+M queue with endogenized customer reactions to announcements, we analytically characterize performance measures for this model. The analysis allows us to explore the role that announcing different percentiles of the waiting time distribution, i.e., the announcement coverage, plays in subsequent balking and reneging. Through a numerical study we explore when informing customers about delays is beneficial, and what the optimal coverage of these announcements should be. We show how managers of a call center with delay announcements can control the trade-off between balking and reneging through their choice of announcements.
- Presentation, Mohammed Abdellaoui, HEC-Paris/CNRS, France (December 15th 2014)
- Title: Recursive Rank-dependent Utility for Ambiguity
- Abstract: This paper proposes an ambiguity model that accounts for Ellsberg- and Allais-type behavior in the famous Anscombe and Aumann framework. Ambiguity attitudes are captured through both utility (as in recursive expected utility) and non-additive probabilities (as in Choquet expected utility), hence combining `smooth' and `kinked' approaches to ambiguity. The model is based on a single preference principle called substitution consistency. Due to a natural embedding of backward induction, substitution consistency allows for one-stroke representations of preferences, including Schmeidler's Choquet expected utility, recursive expected utility and subjective expected utility as particular cases, without referring to mixtures of lotteries. In addition to providing a unified setup in which different ambiguity models can be jointly analyzed and compared, substitution consistency simplifies the formal study of relative concavity of utility for risk and ambiguity without committing to recursive expected utility. We also show how our general recursive model can facilitate the descriptive study of ambiguity attitudes. Joint work with Horst Zank (University of Manchester).
- Presentation, Gila E. Fruchter, Bar-Ilan University, Israel (November 24th 2014)
- Title: Problem-Driven Theory with Theory-Driven Solutions
- Abstract: In my research in marketing I seek to develop problem-driven theory and to find theory-driven solutions. In finding such solutions I draw on my long-standing experience with optimal control. I take a closer look at two of my recent publications to give a taste of this approach: I start with dynamic pricing for subscription services and continue with production location decisions of brands.
- Presentation, Georges Zaccour, HEC Montréal, Canada (November 3rd 2014)
- Title: Optional-Contingent-Products Pricing in Supply Chains
- Abstract: This paper studies the pricing strategies of firms belonging to a vertical channel structure in which optional contingent products are sold. Optional contingent products are characterized by unilateral demand interdependencies: the base product can be used independently of a contingent product, whereas the contingent product's purchase is conditional on possession of the base product. We find that the retailer decreases the price of the base product to stimulate demand in the contingent-product market. Even a loss-leader strategy can be optimal, which happens when reducing the base product's price has a large positive effect on demand for the base product, and thus on the number of potential consumers of the contingent product. In a large part of the parameter space, the price reduction of the base product also mitigates the double-marginalization problem, which is well known in a supply-chain setting with one manufacturer and one retailer. Joint work with P.M. Kort (Tilburg University) and S. Taboubi (GERAD, HEC Montréal).
- Presentation, Steffen Jorgensen, University of Southern Denmark, Denmark (October 6th 2014)
- Title: Recent Developments in Lanchester Differential Games
- Abstract: Lanchester games have their origin in the work of F.W. Lanchester (F.W. Lanchester: Aircraft in Warfare: The Dawn of the Fourth Arm. Constable, London, UK, 1916) who used ordinary differential equations to study stylized problems of military combat. Later on, one of Lanchester’s models has been used - quite successfully - to describe the effects of advertising competition in oligopolistic markets. A stream of literature in the field of advertising has used this model as a component of a dynamic game. I introduce a couple of Lanchester’s models and a simple dynamic advertising game model. The advertising game has been extended in various directions, for example, to take into account market growth/contraction and multiple types of advertising efforts. Such extensions are discussed. I present two examples of my own current research in the area and conclude by suggesting some avenues for future research.
- Presentation, Alain Haurie, HEC-Genève, Switzerland (canceled)
- PhD Thesis Defense of affiliated cluster member Cerasela Tanasescu, ESSEC Business School (November 5th 2014, 10h, room N517, presentation in French)
- Research Seminar 2013/2014
- Presentation, Sourour Elloumi, CEDRIC, CNAM Paris, France (19 May 2014)
- Title: Facility location and network design: the p-median problem and the notion of good formulation in discrete mathematical optimization
- Abstract: We consider the p-median problem, in which one has to open a given number p of facilities and assign customers to their closest open facilities in such a way that the total distance between customers and facilities is as small as possible. We show some applications of this fundamental problem in discrete location theory. We then present several mixed-integer linear programming formulations of the p-median problem and compare them from different aspects. Our objective is to illustrate how the choice of a formulation may have an important influence on the running time needed to compute an optimal solution.
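The problem itself is easy to state in code. As an illustration, the sketch below solves an assumed toy instance by exhaustive search over all p-subsets, standing in for the MILP formulations discussed in the talk:

```python
from itertools import combinations

# Tiny p-median sketch (assumed data): choose p facility sites among the
# candidates so that the total distance from each customer to its closest
# open facility is minimized. Exhaustive search stands in for the MILP here.
def p_median(dist, p):
    """dist[i][j]: distance from customer i to candidate site j."""
    n_sites = len(dist[0])
    best = None
    for open_sites in combinations(range(n_sites), p):
        # Each customer is served by its closest open facility.
        cost = sum(min(row[j] for j in open_sites) for row in dist)
        if best is None or cost < best[0]:
            best = (cost, open_sites)
    return best

# 4 customers x 3 candidate sites, p = 2 (illustrative numbers).
dist = [[2, 7, 9],
        [5, 3, 8],
        [8, 4, 2],
        [6, 9, 1]]
cost, sites = p_median(dist, 2)  # → cost 10, sites (0, 2)
```

Enumeration explodes combinatorially, which is exactly why the choice among MILP formulations, the subject of the talk, matters for realistic instances.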
- Presentation, Virginie Gabrel, LAMSADE, University of Paris-Dauphine, France (5 May 2014)
- Title: Portfolio Optimization with the pw-Robustness Criterion
- Abstract: We consider the problem of portfolio optimization with uncertainty on asset returns. In the context of a scenario-based approach, we want to determine a robust portfolio achieving an acceptable compromise between expected returns and the risk of making losses. Many approaches have been suggested, all based on the formulation of two criteria: one for expected return and the other for measuring risk. Some approaches determine the set of Pareto-optimal solutions; others consist in optimizing one criterion and transforming the second into a constraint. Within this latter approach, we propose a new criterion (inducing a new optimization model), called the pw-robustness criterion: the parameter w controls the risk while the parameter p handles the maximization of expected return. This criterion generalizes the classical risk measure Value-at-Risk and can be compared to the Conditional Value-at-Risk measure regarding the worst cases. Joint work with Cécile
- Presentation, Ulrich Thonemann, University of Cologne, Germany (31 March 2014)
- Title: Designing Incentive Systems for Truthful Information Sharing in Supply Chains
- Abstract: We consider a firm in which sales is responsible for demand forecasting and operations is responsible for ordering. Sales has better information about demand than operations and sends a non-binding demand forecast to operations. Based on the demand forecast, operations determines the order quantity. To incentivize truthful demand information sharing, we include a penalty for forecast errors in the incentive system of sales. In the utility function of sales, we also include the behavioral factors of lying aversion and loss aversion. We model the setting as a signaling game and derive equilibria of the game. In a laboratory experiment, we observe human behavior that is in line with the model predictions but deviates substantially from expected-payoff-maximizing behavior. Finally, we use the behavioral model to design incentive systems for truthful information sharing and conduct an experiment to validate the approach with out-of-sample treatments and out-of-sample subjects.
- Presentation, Victor Martinez de Albéniz Margalef, IESE Business School, Spain (17 February 2014)
- Title: A Closed-Loop Approach to Dynamic Assortment Planning
- Abstract: Firms constantly try to keep customers interested by refreshing their assortments. In industries such as fashion retailing, products are becoming short-lived and, without product introductions or in-store novelties, category sales quickly decrease. We model these dynamics by assuming that products lose their attractiveness over time, and we let the firm enhance the assortment at a cost, for single or multiple categories. We characterize the optimal closed-loop policy that maximizes firm profits. When adjustment costs are linear in the attractiveness, we find that an assort-up-to policy is best: it is optimal to increase category attractiveness to a target level, which is independent of the current attractiveness. Furthermore, we develop heuristics to quickly determine good assort-up-to levels. Finally, we show that a closed-loop approach is valuable compared to open-loop strategies, especially when there is significant uncertainty about the decay rate of products. Joint work with Esra Çinar.
- Presentation, Fouad El Ouardighi, ESSEC Business School, France (10 February 2014)
- Title: Operations and Marketing Strategies under Wholesale Price and Revenue Sharing Contracts in a Dynamic Supply Chain
- Abstract: The objective of the paper is to study how wholesale price and revenue sharing contracts affect operations and marketing strategies in a supply chain under different dynamic informational structures. We suggest a differential game model of a stylized supply chain consisting of a manufacturer and a single retailer. The focus is on the production and sales of a single product, and the model includes key operational and marketing activities in the supply chain. The manufacturer decides a production rate and the rate of national advertising efforts, while the retailer chooses a purchase rate and the consumer price. Depending on whether information on the current state of key operational and marketing variables of the supply chain is available, firms may either make decisions contingent on the current state of the game (feedback Nash equilibrium strategy) or commit to a predetermined plan of action for the whole game (open-loop Nash equilibrium strategy). The state of the game is summarized by the firms' backlogs and the manufacturer's advertising goodwill. A main result suggests that double marginalization can be better mitigated if the supply chain members adopt a feedback Nash equilibrium strategy under a wholesale price contract and an open-loop Nash equilibrium strategy under a revenue-sharing contract. Work co-authored with Gary Erickson, Dieter Grass, and Steffen Jorgensen.
- Presentation, Konstantin Kogan, Bar-Ilan University, Israel (2 December 2013)
- Title: A Generalized Two-Agent Location Problem: Asymmetric Dynamics and Coordination
- Abstract: We generalize a static two-agent location problem into dynamic, asymmetric settings. The dynamics are due to the ability of the agents to move at limited speeds. Since each agent has its own objective (demand) function and these functions are interdependent, decisions made by each agent may affect the performance of the other agent and thus the overall performance of the system. We show that under a broad range of system parameters, centralized (system-wide optimal) and non-cooperative (Nash) behaviors of the agents are characterized by a similar structure. The timing of these trajectories and the intermediate speeds are, however, different. Moreover, non-cooperative agents travel more and may never rest, so system performance deteriorates under decentralized decision-making. We show that the static linear reward approach recently developed by Golany and Rothblum (2006) can be generalized to coordinate the moving agents, and we suggest a dynamic modification of it. When the reward scheme is applied, the agents are induced to choose the system-wide optimal solution even though they operate in a decentralized decision-making mode.
- Presentation, Gustav Feichtinger, Vienna University of Technology, Austria (4 November 2013)
- Title: Dynamics and Control of Deviant Behavior
- Abstract: Two essential aspects of deviant behavior are the time aspect and social interactions. The consumption of illicit drugs, the spread of corruption and the incidence of violence exhibit epidemic structure. Thus, mathematical models describing deviant behavior are intertemporal and non-linear. We illustrate how the efficient control of such behavior can be analysed by using dynamic optimisation models. Typically, one gets multiple equilibria and tipping behavior (history-dependence) of optimal solutions. In particular, it is shown how the optimal mix of various control instruments evolves over time. Homogeneous socio-economic agents are often an unrealistic simplification. A first step towards including heterogeneity is the distinction between light and heavy deviance; it is the escalation from light to heavy that has to be controlled efficiently. Finally, we discuss age-specific extensions as well as dynamic game issues in the economics of crime.
- Presentation, Yael Perlman, Bar-Ilan University, Israel (9 September 2013)
- Title: Reducing shoplifting by investment in security
- Abstract: We consider a single retailer with a number of potential customers, who sells a product that is subject to shoplifting. In order to decrease losses due to shoplifting and to maximize his profit, the retailer can invest in security measures. In particular, we assume that the retailer hires the security services of a single security supplier. While the retailer decides how much security service to buy, the security supplier decides what price to charge the retailer for these services, with the purpose of maximizing his own profit. We address this problem using a game-theoretic approach in which the supplier, as the leader, first specifies the service price, and the retailer responds by deciding how much to invest in security. We study the conditions under which both players are profitable and the extent to which double marginalization affects supply chain performance.
- Research Seminar 2012/2013
- Presentation, Raik Stolletz, University of Mannheim (3 June 2013)
- Title: Designing Lean Manufacturing Systems: Stationary and Dynamic Buffer Allocation in Flow Lines
- Abstract: KANBAN-controlled production systems are often installed when the effective processing times of machines in production systems are stochastic. These stochastic influences are due to machine breakdowns, uncertain repair times, and random processing times. The optimal allocation of KANBAN cards (buffer spaces) throughout a line guarantees a certain average throughput while minimizing the required buffer space. Besides the uncertainty, such flow lines often operate under time-dependent influences, caused by learning effects during the ramp-up phase, changing station capacities, or time-dependent demand patterns. We discuss different decision models for the static and dynamic buffer allocation in stochastic flow lines. Sampling approaches for the analytical performance evaluation are proposed, based on large mixed-integer decision models in discrete and continuous time. Efficient solution approaches for the respective optimization models of the buffer allocation are presented, and a numerical study demonstrates the accuracy of the proposed approaches.
- Presentation, Laurent Alfandari, ESSEC Business School (18 March 2013)
- Title: A Column-Generation Based Method for Optimal Electricity Production and Maintenance Planning
- Abstract: This talk presents a heuristic method based on column generation for the EDF (Electricité De France) long-term electricity production planning problem, proposed as the subject of the ROADEF/EURO 2010 Challenge. This is, to our knowledge, the first-ranked method among those based on mathematical programming, and it was ranked fourth overall. The problem consists in determining a production plan over the time horizon for each thermal power plant of the French electricity company and, for nuclear plants, a schedule of plant outages, which are necessary for refueling and maintenance operations. The average cost of the overall outage and production planning, computed over a set of demand scenarios, is to be minimized so as to find a robust solution. The method proceeds in two stages. In the first stage, outage dates are fixed once and for all for each nuclear plant: data are aggregated with a single average scenario and reduced time steps, and a set-partitioning reformulation is solved with a heuristic based on column generation. The pricing problem associated with each nuclear plant is a shortest path problem in a specific graph. In the second stage, the reload level is determined at each outage date, considering all scenarios. Finally, the production quantities between two outages are optimized for each plant and each scenario by solving independent linear programs. The efficiency of the approach is demonstrated by numerical results.
- Presentation, Dominique de Werra, EPFL Lausanne (22 April 2013)
- Title: Grouping Processors in Open Shop Scheduling
- Abstract: In the basic classical open shop model, we have processors and jobs to be processed on the processors according to some requirements. We concentrate on the case in which some processors have to be grouped (thus becoming multiprocessors) in order to perform some of the tasks required by the jobs. Such a model is motivated by applications in timetabling as well as in the testing of electronic systems. We describe these applications and review the basic results for the simple cases of the ordinary open shop. We then examine the implications of processor grouping on the form of optimal schedules, and discuss complexity issues for situations with some grouping of processors. The presentation is based on joint work with W. Kubiak and
- Presentation, Matthew Myers, University of Tennessee at Knoxville (25 February 2013)
- Title: The Value of Collaborative Knowledge Sharing in Global Supply Chains
- Abstract: Research on collaborative inter-organizational relationships has typically focused on the value of these relationships to a specific supply chain partner. Furthermore, the literature has largely ignored the potential benefits of knowledge sharing between independent buyers and suppliers in a global setting. In this study, we investigate the influence of inter-firm knowledge sharing on the performance of both the buyer and the supplier within global dyads, testing the contention that both members benefit from knowledge sharing efforts and that both enjoy equal pieces of the benefits pie. Using primary data from 132 cross-national dyads representing relationships in Asia-Pacific, Europe, Latin America, and the United States, we investigate three specific types of knowledge sharing and their influence on firm performance.
AGENDA OF UPCOMING EVENTS
- Workshop on G-GRAPHS defined from groups, organized by ESSEC Business School (12 February 2013).
To register for an event, please send an e-mail to email@example.com. Everyone is welcome, from within ESSEC as well as from outside.