Research Seminars

2015/2016



Bruno VISCOLANI - University of Padova, Italy

May 13th 2016



Age-structured Goodwill: from Optimal Control to Differential Games

Abstract: An age-dependent market segmentation is often suitable for real-life products. In the first part of the talk, we introduce a simple age-structured model for the advertising process of a firm and the consequent goodwill evolution. The formal structure of the model is characterized by a first-order linear partial differential equation. We formulate the advertising problem for a new product introduction as a distributed-parameter optimal control problem and use suitable Maximum Principle conditions to characterize the optimal advertising flow. In the second part of the talk we move to differential games. As one may expect, the necessary conditions for an open-loop Nash equilibrium in a distributed-parameter differential game are hard to discuss. Therefore, we search for conditions on age-structured differential games that make their analysis more tractable. We focus on a class of games which show the features of ordinary linear-state differential games, and we prove that their open-loop Nash equilibria are subgame perfect. By means of a simple age-structured advertising problem, we provide an example of the theoretical results presented in the paper, and we show how to determine an open-loop Nash equilibrium. Joint work with Luca Grosset.
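
For orientation, a minimal sketch of the kind of age-structured goodwill dynamics described above (an illustrative form, not necessarily the exact model of the talk) is

\[
\frac{\partial G}{\partial t}(a,t) + \frac{\partial G}{\partial a}(a,t) = -\delta\, G(a,t) + u(a,t), \qquad G(a,0) = G_0(a),
\]

where \(G(a,t)\) denotes the goodwill in the age-\(a\) segment at time \(t\), \(\delta > 0\) is a decay rate and \(u(a,t)\) is the advertising effort addressed to that segment; the firm chooses \(u\) to maximize a discounted profit functional, and the Maximum Principle for distributed-parameter systems characterizes the optimal control.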





Vinay RAMANI - Indian Institute of Management, India

April 27th 2016



Product Cannibalization and the Effect of a Service Strategy

Abstract: Product cannibalization can push some consumers to shift their purchasing preferences from new to used products. This is a costly issue for manufacturers, who have to adjust their pricing strategies accordingly to mitigate the negative effect of cannibalization. In this paper, we characterize a channel to examine the effect of product cannibalization on firms' profits. In particular, we investigate how the presence of a Goodwill agency in a second-hand market impacts the business of a manufacturer in a new market through cannibalization, and how the manufacturer reacts to mitigate its effects. We show that even if the manufacturer adjusts its price to reduce the negative effects of cannibalization, this effect is so severe that the manufacturer always loses some profit. Nevertheless, when the manufacturer provides some additional services to new consumers, the negative effects of cannibalization can be partially overcome, while the channel achieves a Pareto-improving profit outcome.




Kalyan TALLURI - Imperial College, United Kingdom

April 15th 2016



The Network Revenue Management Dynamic Program and Approximations

Abstract: We first survey the applications of network revenue management and its various formulations as a stochastic dynamic program. Given the difficulty of solving these dynamic programs, a number of good approximation algorithms have been proposed. We describe some of the theoretical results on approximation bounds as well as the complexity limits.
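
As a concrete illustration of the dynamic-programming viewpoint, the following minimal sketch (our own single-resource simplification with made-up fares and arrival probabilities, not the network formulation of the talk) computes the value function by backward recursion over time and remaining capacity:

    # Single-resource revenue management DP: V[t][x] is the optimal expected revenue
    # with x units of capacity left and t periods to go; in each period at most one
    # request of class j arrives with probability probs[j] and pays fares[j].
    def single_resource_dp(capacity, periods, fares, probs):
        V = [[0.0] * (capacity + 1) for _ in range(periods + 1)]
        for t in range(1, periods + 1):
            for x in range(capacity + 1):
                # baseline: carry yesterday's value forward, then add the expected
                # gain from accepting each possible arrival when it is worthwhile
                value = V[t - 1][x]
                for f, p in zip(fares, probs):
                    if x > 0:
                        accept = f + V[t - 1][x - 1]
                        reject = V[t - 1][x]
                        value += p * (max(accept, reject) - reject)
                V[t][x] = value
        return V

    # Hypothetical example: 10 seats, 50 booking periods, two fare classes
    V = single_resource_dp(capacity=10, periods=50, fares=[200, 100], probs=[0.05, 0.20])
    print(round(V[50][10], 2))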




Wout DULLAERT - VU University Amsterdam, Netherlands

March 4th 2016



Efficient Local Search for Routing Problems

Abstract: In the Vehicle Routing Problem with Multiple Time Windows (VRPMTW) a time window per customer has to be selected to minimize the total duration of the solution. This increases the complexity of the routing problem compared to the VRP with a single time window per customer. By determining the optimal selection of time windows, the waiting time in a route can be reduced. We present an exact polynomial-time algorithm to efficiently determine the optimal time window selection when a neighborhood operation is applied. This algorithm is embedded in a variable neighborhood tabu search metaheuristic to solve the VRPMTW, and the results are compared to the best-known solutions of the VRPMTW instances from the literature. The second part of the seminar discusses several strategies for a more efficient implementation of the concept of Static Move Descriptors (SMDs), a recently developed technique that drastically speeds up Local Search-based algorithms. SMDs exploit the fact that each local search step affects only a small part of the solution and allow for efficient tracking of changes at each iteration, such that unnecessary re-evaluations can be avoided. The concept is highly effective at reducing computation times and is sufficiently generic to be applied in any Local Search-based algorithm. Despite its significant advantages, the design proposed in the literature suffers from high overhead and high implementation complexity. Our proposals lead to a much leaner and simpler implementation that offers better extensibility and significant further speedups of local search algorithms. We compare implementations for the Capacitated Vehicle Routing Problem (CVRP), a well-studied, complex problem that serves as a benchmark for a wide variety of optimization techniques.
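
The flavour of the time-window-selection step can be conveyed by a small sketch (our own simplification with hypothetical data, not the exact algorithm of the talk): for a fixed visiting sequence, one forward pass over (customer, window) pairs selects a window per customer and detects infeasibility in polynomial time.

    INF = float("inf")

    def best_window_selection(travel, windows, service=0.0):
        # travel[i]  : travel time from the previous stop (or the depot) to stop i
        # windows[i] : list of (earliest, latest) service-start windows for stop i
        starts = [0.0]                        # feasible departure times before the first stop
        for t, wins in zip(travel, windows):
            cur = []
            for (e, l) in wins:
                best = INF
                for dep in starts:
                    begin = max(dep + t, e)   # wait if arriving early
                    if begin <= l:            # window missed if arriving too late
                        best = min(best, begin)
                if best < INF:
                    cur.append(best + service)
            if not cur:
                return None                   # no feasible window selection
            starts = cur
        return min(starts)                    # earliest completion time at the last stop

    # Hypothetical instance: three stops, two candidate windows each
    travel = [2.0, 3.0, 1.0]
    windows = [[(0, 5), (8, 10)], [(4, 6), (9, 12)], [(5, 7), (11, 15)]]
    print(best_window_selection(travel, windows, service=1.0))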




Guiomar MARTIN-HERRAN - University of Valladolid, Spain

February 19th 2016



Local and National Advertising Pulsing in a Marketing Channel

Abstract: Although the use of sporadic advertising schedules is well established in both the advertising literature and the marketplace, the marketing channel literature that focuses on vertical interactions has consistently prescribed continuous advertising strategies over time. This paper investigates, in a bilateral monopoly in which a manufacturer and a retailer control their pricing and advertising decisions, the optimal scheduling of local and national advertising over a planning horizon of three periods. We find that, consistent with the advertising literature, the integrated channel adopts pulsing to benefit from the positive carryover effects of advertising. Conversely, when the channel is uncoordinated, three advertising schedules are identified as equilibria: a continuous schedule, where both channel members advertise in all three periods; a pulsing schedule, in which the two channel members advertise only in the first and third periods; and a mixed schedule, where the retailer advertises in all three periods and the manufacturer advertises exclusively in the first and third periods. The conditions under which each schedule can be implemented critically depend on the long-term effects of local and national advertising. In particular, pulsing is the optimal schedule when local advertising has very large negative or positive long-term effects.




Pietro DE GIOVANNI - ESSEC Business School, France

January 15th 2016



Sharing Contracts: Legend or Reality? A Triangulation Analysis to Discover the Truth

Abstract: While the effectiveness of sharing contracts has been widely lauded in theoretical research, their real applicability and concrete advantages remain vague and ambiguous. Starting from a case study, we have developed a game theory model to replicate several scenarios in which the adoption of sharing contracts depends on the partnership life cycle. Interestingly, we are able to demonstrate that a sharing contract is a very useful mechanism to coordinate a supply chain. Then, we have used the related findings to develop an empirical analysis as well as a qualitative investigation. In the empirical analysis we have collected survey data and run various multinomial logit regression models to statistically verify the suitability of sharing contracts. Similarly, we have conducted comparative case studies within seven different sectors in the Netherlands to qualitatively check the applicability as well as the success of sharing contracts in real business. Both the empirical and the qualitative research reveal a clear finding: sharing contracts are marginally used in practice, while their real advantages are negligible. So, are they a legend or a reality?
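
For readers unfamiliar with the econometric step, a minimal sketch of a multinomial logit regression of the kind mentioned above (with synthetic data and made-up variable names, purely illustrative of the method rather than the authors' study) could look as follows:

    import numpy as np
    import statsmodels.api as sm

    # Synthetic data: the categorical outcome is the contract type adopted
    # (0 = none, 1 = revenue sharing, 2 = cost sharing) and the regressors stand in
    # for firm/partnership characteristics collected in a survey.
    rng = np.random.default_rng(0)
    n = 300
    X = np.column_stack([
        rng.normal(size=n),          # e.g. partnership age (standardized)
        rng.binomial(1, 0.5, n),     # e.g. long-term relationship indicator
    ])
    y = rng.integers(0, 3, size=n)   # synthetic contract choice, 3 categories

    model = sm.MNLogit(y, sm.add_constant(X))
    result = model.fit(disp=False)
    print(result.summary())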




Eugene KHMELNITSKY - Tel-Aviv University, Israel

December 15th 2015



Bucket Brigade with Stochastic Worker Pace

Abstract: Work-sharing in production systems is a modern approach that improves the throughput rate. Work is shifted between cross-trained workers in order to better balance the material flow in the system. In a serial system, a common work-sharing approach is the Bucket Brigade (BB), by which downstream workers sequentially take over items from adjacent upstream workers. When the workers are ordered from slowest to fastest and their speeds are deterministic, it is known that the line does not suffer from blockage or starvation and achieves the maximal theoretical throughput rate. Very little is known in the literature on stochastic self-balancing systems with work-sharing, and on BB in particular. This paper studies a basic BB model of Bartholdi & Eisenstein (1996) under the assumption of stochastic speeds. We identify settings in which conclusions that emerge from deterministic analysis fail to hold when speeds are stochastic, in particular relating to worker order assignment. Significantly, in a stochastic environment the BB can improve the throughput rate compared to parallel workers, despite the fact that no blockage or starvation occurs in the latter. Joint work with Y. Bukchin and E. Hanany.
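
A minimal simulation sketch (a strong simplification of ours, not the authors' model) shows how stochastic paces can be plugged into a two-worker bucket brigade on a normalized line [0, 1] and how the estimated throughput rate depends on the worker ordering; walk-back time is neglected and raw items are always available at the start of the line.

    import random

    def bb_throughput(speed_scales, items=100_000, seed=1):
        # speed_scales[i] is the average pace of worker i; the realized pace for each
        # item is drawn uniformly around it, so speeds are stochastic and item-specific.
        random.seed(seed)
        x = 0.0                # position of worker 1's item at the current hand-off
        total_time = 0.0
        for _ in range(items):
            v1 = random.uniform(0.5, 1.5) * speed_scales[0]
            v2 = random.uniform(0.5, 1.5) * speed_scales[1]
            t = (1.0 - x) / v2          # time for worker 2 to finish the item in hand
            x = min(v1 * t, 1.0)        # worker 1 advances but cannot pass worker 2
            total_time += t
        return items / total_time

    # Same pair of average paces, slowest-to-fastest versus fastest-to-slowest
    print(bb_throughput([1.0, 2.0]))
    print(bb_throughput([2.0, 1.0]))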




Richard HARTL - University of Vienna, Austria

November 19th 2015



A Multi-Newsvendor Distribution System with Resupply and Vehicle Routing

Abstract: This paper analyzes the problem of delivering perishable products from a depot to stores and introduces the option of performing a second delivery per day to allow a retailer to better deal with uncertainty. For each store we have to determine the initial delivery quantity, whether or not the store will receive a resupply, the timing and quantity of the resupply, and the sequence in which the stores have to be visited. The problem consists of two intertwined sub-problems: a stochastic inventory optimization problem for given delivery routes and delivery times (second stage), and a deterministic profitable tour problem for the a priori route (first stage), determining the stores receiving a resupply, the sequence of the stores, and the resupply timing. We provide lower and upper bounds on profits and propose a hybrid solution procedure. The stochastic inventory optimization problem is solved by stochastic dynamic programming, while the profitable tour problem is solved using variable neighborhood search. In order to provide an efficient solution method, the second-stage profit is not always computed exactly; approximations and bounds are used whenever possible. The algorithm is tested on randomly generated test instances. Furthermore, a real-world scenario is investigated. The results show a considerable improvement compared to the "single-order" model.
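
To fix ideas, a minimal Monte Carlo sketch of the second-stage trade-off for a single store (our own simplification with hypothetical prices and demands, not the authors' algorithm) compares a single-order policy with a smaller initial delivery plus an order-up-to resupply after the morning demand has been observed:

    import random

    def expected_profit(q0, S=None, price=10.0, cost=4.0, salvage=1.0,
                        mean_am=20, mean_pm=30, runs=20_000, seed=7):
        # Morning/afternoon demands are drawn as Binomial(2*mean, 0.5); if S is given,
        # a second delivery tops the stock up to S after the morning demand is known.
        random.seed(seed)
        total = 0.0
        for _ in range(runs):
            d_am = sum(random.random() < 0.5 for _ in range(2 * mean_am))
            d_pm = sum(random.random() < 0.5 for _ in range(2 * mean_pm))
            stock = q0
            am_sold = min(stock, d_am)
            stock -= am_sold
            resupply = max(0, S - stock) if S is not None else 0
            stock += resupply
            pm_sold = min(stock, d_pm)
            total += price * (am_sold + pm_sold) - cost * (q0 + resupply) \
                     + salvage * (stock - pm_sold)
        return total / runs

    print(expected_profit(q0=55))            # single delivery only ("single-order" model)
    print(expected_profit(q0=25, S=35))      # smaller first delivery plus a resupply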




Immanuel BOMZE - University of Vienna, Austria

October 26th 2015



Data Analysis, Machine Learning, Ternary and Other Hard Decision Problems: How Copositive Optimization Can Help

Abstract: Automated data analysis has by now reached an impressive level of impact on everyday life and will most certainly intensify its influence on an overwhelming part of our society. Most of the algorithms currently employed are based on criteria leading to optimization problems which are notoriously hard to solve. As a consequence, heuristic and/or incomplete routines are often used. However, both false-positive and false-negative results can be literally lethal, so it is mandatory to have quality guarantees, at least if human control is only superficially involved in key decision processes. Only exact optimization methods can assist in this situation. The task complexity is partly due to the discrete (or mixed-integer) structure, and partly to the non-convexity of the functions involved, even if only continuous decision variables are considered. Here we will deal with one example of this phenomenon, namely a yes/no decision with an abstention possibility. A similarity-based clustering or prediction criterion which takes this third option seriously leads to a ternary fractional quadratic optimization problem [1]. This class has recently been shown to be very hard from the worst-case complexity point of view [2] and has several applications, e.g. via graph tripartitioning in the analysis of social networks [5]. Copositive optimization is a 15-year-old paradigm [3,4] which transforms both above-mentioned sources of difficulty, discreteness and non-convexity, into an optimization problem with a linear objective function, subject to linear constraints, over a convex (matrix) cone, thus pushing all problem complexity into the description of the cone. This approach allows for developing tractable approximations of these hard problems, building upon the by now well-established interior-point optimization technology with its polynomial-time worst-case complexity. The resulting bounds can serve as a benchmark on the quality of a decision, and may be used, e.g., for alerting human supervision in an automated way.
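
A classical instance of this transformation (stated here only for orientation, in our own notation) is the standard quadratic optimization problem, whose nonconvex and combinatorial difficulty is absorbed entirely into the cone:

\[
\min_{x \ge 0,\; e^\top x = 1} x^\top Q x \;=\; \min\big\{ \langle Q, X \rangle \;:\; \langle E, X \rangle = 1,\; X \in \mathcal{C}^\ast \big\},
\]

where \(e\) is the all-ones vector, \(E = ee^\top\), and \(\mathcal{C}^\ast\) is the cone of completely positive matrices, the dual of the copositive cone; the right-hand side is linear in \(X\), and replacing \(\mathcal{C}^\ast\) by tractable outer approximations (e.g. doubly nonnegative matrices) yields computable bounds of the kind mentioned in the abstract.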