
Past Seminars
Fall 2010
Winter/Spring 2010
Fall 2009
Spring 2009
Fall 2008
Fall 2007 / Spring 2008
Spring 2007
Fall 2006
Spring 2006
Winter 2006
Fall 2005
Fall 2010
"Health Savings Accounts: Consumer Contribution
Strategies & Policy Implications"
Talk by Stefanos Zenios, Stanford GSB
September 23rd, 2010
12:00–1:00 PM, room 561
A Health Savings Account (HSA) is a tax-advantaged savings account available only to households with high-deductible health insurance. Each year, the household contributes a pre-tax dollar amount to the HSA and uses the account to cover its out-of-pocket medical expenses. This paper provides initial answers to two questions related to HSAs: 1) How should a household's annual contributions be influenced by the health status of its members? and 2) Do current contribution limits provide households with the flexibility to use HSAs efficiently? To answer these questions, we formulate the household's decision problem as one of determining annual contributions that minimize total expected discounted medical costs, with costs not covered by the HSA balance paid with after-tax dollars and thus carrying an extra penalty. In this formulation, costs vary from year to year according to a discrete-time, continuous-space Markov process with the state reflecting the household's health status. An optimal dynamic threshold policy, in which the contribution each year brings the HSA balance to a health-state-dependent threshold, is derived. This is compared to a simpler static policy in which the annual contribution is state-independent. The parameters of the Markovian cost model are estimated using longitudinal cost data for 43,000 households from the 1996–2004 Medical Expenditure Panel Survey. Policies are then derived and tested for the Stanford University HSA plan using the estimated cost model. The results show that: a) Optimal annual contributions can vary by as much as $3,500 because of differences in health status; b) Total discounted costs for static policies are $12,000–$150,000 higher than under the corresponding dynamic policies; c) A two-tiered form for tax-advantaged contribution limits, in which the contribution size is unrestricted up to a certain HSA balance threshold (tied to the plan's out-of-pocket maximum) and restricted beyond it, is necessary for the household to enjoy the benefits of the optimal threshold policies.
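As a rough illustration of the threshold structure described above (not the paper's estimated model; the health states and dollar thresholds below are invented for the example), the dynamic policy contributes just enough each year to raise the HSA balance to the threshold for the household's current health state:

```python
# Sketch of a dynamic threshold contribution policy. The health
# states and threshold values are hypothetical; the paper estimates
# them from MEPS cost data.

def hsa_contribution(balance, health_state, thresholds):
    """Contribute enough to bring the HSA balance up to the
    health-state-dependent threshold (never a negative amount)."""
    return max(0.0, thresholds[health_state] - balance)

# Hypothetical state-dependent thresholds, in dollars.
thresholds = {"healthy": 2000.0, "chronic": 5500.0}

print(hsa_contribution(1200.0, "healthy", thresholds))  # 800.0
print(hsa_contribution(1200.0, "chronic", thresholds))  # 4300.0
```

A static policy would instead contribute the same fixed amount regardless of health state, which is the comparison the abstract reports.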
"Is Outsourcing a WinWin Game? The Effect of Competition, Contractual Form, and Merger"
Talk by Lauren Lu, UNC
September 29th, 2010
12:00–1:00 PM, room 561
Two well-accepted notions exist in the outsourcing literature: first, outsourcing is a win-win game when suppliers possess operational advantage; second, outsourcing mitigates competition. This paper challenges these notions using a differentiated duopoly model with two competing supply chains. Each supply chain consists of an upstream supplier and a downstream manufacturer, who engage in a bilateral negotiation of an outsourcing contract. We demonstrate that the benefits of outsourcing depend on some key economic primitives of the model, such as the mode of competition, contractual form, and upstream industry structure. With a wholesale price contract, outsourcing is always a win-win game as long as the suppliers possess operational advantage. When the outsourcing contract takes the form of a two-part tariff, however, the suppliers' operational advantage is a double-edged sword for outsourcing. On the one hand, it reduces the manufacturers' costs. On the other hand, it may intensify downstream competition or weaken the manufacturers' bargaining position, depending on the mode of competition. Specifically, outsourcing imposes a strategic liability by intensifying the competition between manufacturers who compete in quantities. Because quantities are strategic substitutes, the suppliers are trapped in a race to oversubsidize downstream production, in an attempt to gain market share by negotiating a two-part tariff with a below-cost unit price. The suppliers' operational advantage exacerbates the intensity of this race. When the manufacturers compete in prices, in contrast, the suppliers have an incentive to charge above-cost unit prices to soften downstream competition, because prices are strategic complements. However, the manufacturers may still be hurt by the upstream's operational advantage: their bargaining position is weakened due to a less profitable option to insource when competing against a rival who outsources to a low-cost supplier.
These results continue to hold qualitatively when the upstream suppliers merge into a single firm. Whether the suppliers have an incentive to merge depends on the changes in both the competition externality and the bargaining structure, which again hinge on the mode of competition and the contractual form.
"Dynamic Assortment Customization with Limited Inventories"
Talk by Gurhan Kok, Fuqua School of Business
October 6th, 2010
12:00–1:00 PM, room 561
We consider a retailer with limited inventories of identically priced, substitutable products. Customers arrive sequentially and the firm decides which subset of the products to offer to each arriving customer depending on the customer's preferences, the inventory levels and the remaining time in the season. We show that the optimal assortment policy is to offer all available products if the customer base is homogeneous with respect to their product preferences. However, with multiple customer segments characterized by different product preferences, it may be optimal to limit the choice set of some customers. That is, it may be optimal not to offer products with low inventories to some customer segments and reserve them for future customers (who may have a stronger preference for those products). For the case of two products and two customer segments and for a special case with multiple products and multiple customer segments, we show that the optimal assortment policy is a threshold policy under which a product is offered to a customer segment if its inventory level is higher than a threshold value. The threshold levels are decreasing in time and increasing in the inventory levels of other products. For the general case, we perform a large numerical study, and confirm that the optimal policy continues to be of the threshold type. We find that the revenue impact of assortment customization can be significant, especially when customer heterogeneity is high and the starting inventory levels of the products are asymmetric. This demonstrates the use of assortment customization as another lever for revenue maximization in addition to pricing.
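To make the threshold-type policy concrete, here is a minimal sketch (the segment names, inventory levels, and threshold values are invented for illustration): a product is shown to an arriving customer's segment only if its inventory exceeds that segment's threshold, so low-stock items can be reserved for future customers who may value them more.

```python
# Hypothetical threshold-type assortment customization. Real
# thresholds would also depend on the remaining time in the season.

def offered_assortment(inventory, segment, thresholds):
    """Products shown to this segment: those whose inventory level
    exceeds the segment-specific threshold (default threshold 0)."""
    return {p for p, level in inventory.items()
            if level > thresholds[segment].get(p, 0)}

inventory = {"A": 5, "B": 1}
thresholds = {
    "strong_pref_B": {"A": 0, "B": 0},  # sees everything, including scarce B
    "indifferent":   {"A": 0, "B": 2},  # scarce B withheld from this segment
}

print(sorted(offered_assortment(inventory, "strong_pref_B", thresholds)))  # ['A', 'B']
print(sorted(offered_assortment(inventory, "indifferent", thresholds)))    # ['A']
```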
Winter/Spring 2010
"Estimating the value of waiting list information in liver transplant decision making"
Talk by Burhaneddin Sandikci, Chicago Booth
January 27th, 2010
2:00–3:00 PM, room 561
In the United States, patients with end-stage liver disease must join a waiting list to be eligible for cadaveric liver transplantation.
Due to privacy concerns, the details of the composition of this waiting list are not publicly available. This
paper considers the benefits associated with creating a more transparent waiting list. We study these benefits by modeling
the organ accept/reject decision faced by these patients as a Markov decision process in which the state of the process is
described by patient health, quality of the offered liver, and a measure of the rank of the patient in the waiting list. We
prove conditions under which there exist structured optimal solutions, such as monotone value functions and control-limit
optimal policies. We define the concept of the patient’s price of privacy, namely, the number of expected life days lost
due to the lack of complete waiting list information. We conduct extensive numerical studies based on clinical data, which
indicate that this price of privacy is typically on the order of 5% of the optimal solution value.
"Coordinating leadtime and safety stock decisions
in a twoechelon supply chain"
Talk by Robert Boute, Vlerick Leuven Gent Management School, visiting Kellogg School of Management
February 10th, 2010
12:00–1:00 PM, room 561
We study a two-echelon (retailer–manufacturer) supply chain, modeled as a discrete-time production/inventory system with random consumer demand in each period. The retailer's inventory levels are reviewed periodically and managed using a base-stock policy. The manufacturer produces the retailer's orders on a make-to-order basis and decides on the lead time based on the retailer's order process. The manufacturer's production system is capacitated in the sense that a single server processes units one at a time with stochastic unit processing times. The resulting lead times determine safety stock levels at the retailer. Making use of matrix-analytic methods and phase-type distributions, we analyze the interaction between the consumer demand process, the retailer's replenishment decision (and corresponding safety stocks), and the manufacturer's production lead time. We show that by including endogenous lead times in our analysis, the retailer's order variability can be dampened without increasing his stock levels. This leads to a situation where both supply chain echelons are better off.
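As a minimal sketch of the retailer's periodic-review base-stock policy (the base-stock level and the demand stream below are made-up example values), each period the retailer orders just enough to restore its inventory position to the base-stock level S, so orders simply replicate demand:

```python
# Illustrative base-stock replenishment; S and the demands are
# arbitrary example values, not from the talk.

def base_stock_order(inventory_position, S):
    """Order quantity that raises the inventory position back to S."""
    return max(0, S - inventory_position)

S = 20
position = S
for demand in [3, 7, 0]:
    position -= demand                      # demand depletes the position
    order = base_stock_order(position, S)   # the order restores it to S
    position += order
    print(order)                            # prints 3, then 7, then 0
```

This pass-through of demand into orders is why the manufacturer's production lead time, and hence the retailer's safety stock, depends directly on the consumer demand process.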
"Is Leasing Greener than Selling?"
Talk by Beril Toktay, GIT
February 17th, 2010
12:00–1:00 PM, room 561
Based on the proposition that leasing is environmentally superior to selling, some firms have adopted a leasing strategy and others promote their existing leasing programs as environmentally superior in order to "green" their image. The argument is that as a leasing firm retains ownership of the off-lease units, it has an incentive to remarket the products, resulting in a lower production and disposal volume. However, some argue that leasing might be environmentally inferior due to the direct control the firm has over the off-lease products, which may prompt their premature disposal to avoid cannibalizing the demand for new products. Motivated by these issues, we adopt a life-cycle environmental impact perspective and analytically investigate whether either leasing or selling can be both more profitable for a monopolist and have a lower total environmental impact. We identify conditions where each of these outcomes can occur, depending on the magnitude of the disposal cost, the differential in disposal costs faced by the firm and consumers, and the environmental impact profile of the product. These results provide insights for firms who want to promote their marketing strategy as the "greener" choice.
"Distributional sensitivity in manyserver queues"
Talk by Jim Dai, Georgia Tech
March 1st, 2010
2:00–3:00 PM, room 561
Many-server queues are building blocks for modeling large-scale service systems such as call centers and hospitals. In such a system, customer abandonment can have a significant impact on revenue and system performance. When a customer's waiting time in queue exceeds his patience time, he abandons the system without service. We assume that customer service and patience times form two sequences of independent, identically distributed (iid) nonnegative random variables, having general distributions. Recent call center and hospital data show that these distributions are not exponential, whereas most of the research to date assumes that at least one of these two distributions is exponential. I will discuss the sensitivity of queues in many-server heavy traffic to the service and patience time distributions, and its implications for model identification, numerical algorithms, and asymptotic analysis. This is joint work with Shuangchi He at Georgia Tech.
"Capacity TradeOff and Trading Capacity"
Talk by Nicole Adler, The Hebrew University of Jerusalem
April 21st, 2010
12:00–1:00 PM, room 561
We present a modeling approach that accounts for desire for capacity within the consumer demand function. Using a two-stage hybrid competitive–cooperative game, we analyze differentiated oligopolies under varying market structures, from competition through different pooling contracts up to antitrust-immune alliances and mergers. The results suggest that pooling agreements maximize consumer surplus and social welfare, which may be of interest to competition authorities. Moreover, cooperatively setting capacity and pooling, but competing in price, appears to be preferable to no agreement for both consumers and overall social welfare alike in 'thin' markets, defined as low demand with weak initial profit margins. A numerical analysis applying the model to the airline industry demonstrates our findings under asymmetric and uncertain demand, suggesting that code-sharing on parallel links may sometimes be preferable to the competitive outcome for multiple consumer types.
"Optimal Path Finding in Direction, Location and Time Dependent Environments "
Talk by Irina Dolinskaya, Northwestern University
April 28th, 2010
12:00–1:00 PM, room 561
Real-time determination of an optimal path in a changing medium (such as winds and ocean waves) requires explicit incorporation of the cost function's location and time dependency into the model. Furthermore, the direction dependency of the cost function adds another layer of difficulty to the problem at hand. In this talk, we present methods to efficiently incorporate the complex structure of the cost function into the path planning process. We also integrate the system's operability and dynamics constraints into the optimization model, hence combining the traditionally separate optimal-path-finding and path-following stages of problem solving. An application to ship routing is introduced throughout the talk to motivate this research.
"Dynamic Inventory Competition with StockoutBased Substitution "
Talk by Rodney Parker, Chicago Booth
May 5th, 2010
12:00–1:00 PM, room 561
This paper continues the stream of literature begun in Olsen and Parker (2008), where retailers compete under a Markov equilibrium solution concept. In this presentation, we consider a duopoly where retailers compete by providing inventory under circumstances where unsatisfied customers may seek satisfaction elsewhere or leave. A very general framework is formulated to address a variety of customer avenues when stock is unavailable. We find a base-stock inventory policy is the equilibrium policy in the infinite horizon (open loop) under several mild conditions; this model's solution is known as an equilibrium in stationary strategies (ESS). We consequently determine conditions under which the parsimonious base-stock policy is the Markov equilibrium (closed loop) in a discrete-time dynamic game for a general time horizon, coinciding with the ESS base-stock levels. Importantly, when these conditions do not apply, we have counterexamples where a firm has a unilateral incentive to deviate from the ESS, stocking at a higher level. These examples demonstrate a value of inventory commitment, where the retailer may extract a benefit over multiple periods by committing to a higher stocking level and forcing her rival to understock. Our conclusion is that we establish conditions for a Markov solution to coincide with the ESS and this policy is base-stock, but other Markov solutions also exist.
"Optimal Preorder Discount and Information Release"
Talk by Leon Yang Chu, USC
May 12th, 2010
12:00–1:00 PM, room 561
In this paper, we investigate the information release and pricing strategies for a seller who can take customer
preorders before the release of a product. The preorder option enables the seller to sell a product at an
early date when consumers’ valuations are relatively homogeneous. We find that the optimal pricing strategy
is discontinuous with respect to the amount of information available at preorder, and a small change in
the amount of information may cause a dramatic change in the proportion of consumers who preorder
under the optimal pricing strategy. Furthermore, the seller’s optimal information release strategy depends
on a key measure, the normalized margin, which is the ratio between the expected margin and the standard
deviation of consumer valuations. While the seller may want to release some (or zero) amount of information,
depending on the normalized margin, the seller should never release all information. Finally, under the
optimal information release strategy and pricing strategy, the benefit of preorder is most pronounced when
the seller can successfully position the product as a “mass-market” product by withholding information.
"Transfer Pricing and Offshoring in Global Supply Chains"
Talk by Srinagesh Gavirneni, Cornell
May 19th, 2010
12:00–1:00 PM, room 561
Taking advantage of lower foreign tax rates using transfer pricing and taking advantage of lower production costs using offshoring are two strategies that global companies use to increase their profitability. Evidence suggests that firms employ these strategies independently. We study how global firms can jointly leverage tax and cost differences through coordinated transfer pricing and offshoring. We derive a tradeoff curve between tax and cost differences that can be used to design sourcing and transfer pricing strategies jointly. However, in a global firm the implementation of such jointly optimal strategies is often hindered by the following incentive problem. The headquarters is more concerned about consolidated after-tax profits than the local divisions are. Local divisions, on the other hand, have a better view of the product cost structure and hence of the appropriate sourcing strategies. Hence, we need to understand how different transfer price strategies and decentralization of sourcing and/or pricing decisions can be helpful. We find that when the tax differential is large, a fully centralized strategy works best. In other settings, a decentralized sourcing strategy (enabling the global firm to take advantage of local cost information) should be considered. Finally, we show that when the cost of outsourcing increases, a decentralized company has more flexibility in transfer pricing and hence can achieve higher profits.
"A New Approach to Modeling Choice"
Talk by Vivek F. Farias, MIT
May 26th, 2010
12:00–1:00 PM, room 561
A central push in operations models over the last decade has been the incorporation of models of customer choice. Real-world implementations of many of these models face the formidable (and very basic) stumbling block of simply selecting the ‘right’ model of choice to use. Thus motivated, we visit the following problem: for a ‘generic’ model of consumer choice (namely, distributions over preference lists) and a limited amount of data on how consumers actually make decisions (such as marginal preference information), how may one predict revenues from offering a particular assortment of choices? We present a nonparametric framework to answer such questions and design a number of algorithms for the same that are tractable from a data and computational standpoint. Our approach represents a novel and substantial departure from the typical attack on such basic questions. This departure is necessitated by problem scale and data availability.
In addition to laying out the basic theory, the practical value of the work will be demonstrated with a data-driven study. We will also briefly describe a current effort to build a 'product' based on our approach at Ford Motor.
Fall 2009
"Operations Management and History"
Talk by Roger Schmenner, Indiana University
September 30th, 2009
12:00–1:00 PM, room 561
It is my contention that we students of operations management ignore what history can teach us about our discipline. Economists routinely study economic history (e.g., Ben Bernanke’s own work on the origins of the Great Depression, economic history courses), but we operations management types blithely neglect the lessons from our manufacturing, service, and supply chain past. This seminar will try to make sense of some key features of our history.
"A new riskratio procedure for estimating multinomial logit models with unobservable nopurchases"
Talk by Kalyan Talluri, UPF
October 1st, 2009
12:00–1:00 PM, room 561
Revenue management models in the literature, and in many implementations, make some important assumptions such as Poisson arrivals, independence, and multinomial logit customer purchase behavior. In this talk we describe:
1. A large-scale empirical study spanning four RM industries (traditional airline, low-cost airline, cargo and hotel) to test these assumptions (Poisson, logit, independence) on transactional data. The data however does not contain information on customers who did not purchase (no-purchases). A novel feature of the study is that we devise the tests assuming that the no-purchases are not observed.
2. We examine the standard finite-period, one-arrival-per-period dynamic program. We show that this model is essentially unestimable as the number of periods T is a design parameter. Specifically, we show that the maximum likelihood estimates are always biased for large enough T. We augment the study with simulation experiments comparing the ML estimates vs. the true parameters.
3. We propose an alternate dynamic program that operates with arbitrary variance (uncertainty) in the forecasts and is still tractable (for a single resource).
4. One of the most challenging problems in RM is the estimation of customer behavior models when one cannot observe no-purchases. We propose a new risk-ratio procedure that, under the assumption that the customers arrive over time deterministically, leads to an exact unbiased estimator. We show conditions under which this estimator can be calculated by solving a convex or quasiconvex program. We describe simulation experiments where the method in many cases recovers the true parameters to the second decimal place, without observing no-purchases.
"Innovation Tournaments"
Talk by Karl Ulrich, Wharton
October 6th, 2009
4:00–5:00 PM, Ford Motor Company Engineering Design Center (joint seminar with Segal Design Institute)
Extremely valuable innovations are usually based on statistically exceptional opportunities. In most settings, organizations use tournaments to find these exceptional opportunities, by which I mean they generate many candidate opportunities and develop and filter them until only the very best remain. Although the basic idea of a tournament is common in industrial practice, very little science has been brought to bear on the problem of generating more, better opportunities and on more accurately evaluating and selecting the exceptional few. In this talk I lay out the beginnings of a science of innovation tournaments, illustrating how the somewhat random process of identifying and selecting opportunities can be managed more deliberately. I then link the concept of innovation tournaments to the popular notion of "design thinking," arguing that design thinking works well for some types of problems but not others.
This event is part of the Segal Seminar Series.
" "
Talk by Serhan Ziya, UNC
October 21st, 2009
12:00–1:00 PM, room 561
In many service systems, customers are not served in the order they arrive, but according to a priority scheme that ranks them with respect to their relative “importance.” However, it may not be an easy task to determine the importance level of customers, especially when decisions need to be made under limited information. A typical example is from health care: when triage nurses classify patients into different priority groups, they must promptly determine each patient's criticality level with only partial information on their conditions. We consider such a service system where customers are from one of two possible types. The service time and waiting cost for a customer depend on the customer's type. Customers' type identities are not directly available to the service provider; however, each customer provides a signal, which is an imperfect indicator of the customer's identity. The service provider uses these signals to determine priority levels for the customers with the objective of minimizing the long-run average waiting cost. In most of the paper, each customer's signal equals the probability that the customer belongs to the type that should have a higher priority, and customers incur waiting costs that are linear in time. We first show that increasing the number of priority classes decreases costs, and the policy that gives the highest priority to the customer with the highest signal outperforms any finite-class priority policy. We then focus on two-class priority policies and investigate how the optimal policy changes with the system load. We also investigate the properties of “good” signals and find that signals that are larger in convex ordering are preferable. In a simulation study, we find that when the waiting cost functions are nondecreasing, quadratic, and convex, the policy that assigns the highest priority to the customer with the highest signal performs poorly, while the two-class priority policy and an extension of the generalized cµ rule perform well.
"Seasonal Storage Asset Valuation: Uncovering the Value of Limited Flexibility"
Talk by Owen Wu, University of Michigan
November 4th, 2009
12:00–1:00 PM, room 561
The value of a seasonal commodity storage asset depends not only on the seasonal price spread, but also on its operational flexibility: The maximum storing and delivering rates depend on the inventory level in the storage, and thus the firm has limited flexibility in choosing when and how much inventory to procure or sell. Using the heuristics in practice, the firm would pick the periods with the most favorable prices to procure and sell. We characterize the optimal strategy, analyze the underlying tradeoffs under limited flexibility, and decompose the value of flexibility. We show that, contrary to intuition and the heuristics, it may be suboptimal to buy or sell when all future prices are expected to be worse than the current price, because delaying operations captures the value of flexibility in the future. On the other hand, over-delaying operations would reduce flexibility and forgo the value of counter-season operations, and striking a balance is thus necessary. Also contrary to intuition, we show that even if the storage can be filled up or emptied at better prices later in the season, it may be optimal to buy or sell some inventory at the current least favorable price, because this allows the firm to buy less at the adverse price in the future. We also show that more flexibility is not necessarily beneficial when heuristic policies are used.
"Quick Response and Retailer Effort"
Talk by Harish Krishnan, UBC
November 11th, 2009
12:00–1:00 PM, room 561
The benefits of supply chain innovations such as quick response (QR) have been extensively investigated. This paper highlights a potentially damaging impact of QR on retailer effort. By lowering downstream inventories, QR may compromise retailer incentives to exert sales effort on a manufacturer's product and may lead instead to greater sales effort on a competing product. Manufacturer-initiated quick response can therefore backfire, leading to lower sales of the manufacturer's product and, in some cases, to higher sales of a competing product. Evidence from case studies and interviews confirms that some manufacturers view high retailer inventory as a means of increasing retailer commitment (“a loaded customer is a loyal customer”). By implication, manufacturers should recognize the effect we highlight in this paper: the potential of QR to lessen retailer commitment. We show that relatively simple distribution contracts such as minimum-take contracts, advance-purchase discounts, and exclusive dealing, when adopted in conjunction with QR, can remedy the distortionary impact of QR on retailers' incentives. In two recent antitrust cases we find evidence that, consistent with our theory, manufacturers adopted exclusive dealing at almost the same time that they were making QR-type supply chain improvements.
"Blind Fair Routing in LargeScale ParallelServer Systems "
Talk by Amy Ward, USC
November 18th, 2009
12:00–1:00 PM, room 561
In a call center, there is a natural tradeoff between minimizing customer delay costs and fairly dividing the workload amongst agents of different skill levels. The relevant control is the routing and scheduling policy. The routing component specifies which agent should handle an arriving call when more than one agent is available, and the scheduling component decides which class a newly idle agent should serve when there are waiting customers in more than one class.
We formulate an optimization problem whose objective is to minimize the sum of class-dependent convex delay costs subject to a constraint that requires a “fair” division of the total idle time amongst the agents. We solve this optimization problem in the Halfin–Whitt many-server heavy-traffic limit regime. However, an important objection to the resulting routing and scheduling policy arises: its implementation requires extensive system parameter information. Therefore, we relax our original objective of finding a routing and scheduling policy that is optimal as the number of servers becomes large to finding a blind policy that is close to optimal. By blind, we mean that the implementation of the policy does not require system parameter information such as arrival and service rates.
This is joint work with Mor Armony from NYU.
"Diagnostic Services Under Congestion"
Talk by Francis DeVericourt, ESMT
December 2nd, 2009
12:00–1:00 PM, room 561
In diagnostic services, agents typically need to weigh the benefit of running an additional test and improving the diagnostic accuracy against the cost of delaying the provision of service to others. Our paper analyzes how to dynamically manage this accuracy/congestion tradeoff. To that end, we study an elementary congested service system facing an arriving stream of customers. The diagnostic process consists of a search problem in which the agent conducts a sequence of imperfect tests to determine whether a customer is of a given type. Our analysis yields counterintuitive insights into managing diagnostic services. First, we find that the maximum number of customers allowed in the system should initially increase with the number of performed tests. This result is in sharp contrast with the established literature on value/congestion tradeoffs, which consistently asserts that congestion levels should decrease with service times. In our diagnosis system, only after the agent has run enough tests without identifying the customer type should the level of congestion decrease. This nonmonotonic structure disappears when the base rate of the searched type is below a simple critical fraction, which captures the value of rightly identifying the customer type. Second, we find that the agent should sometimes diagnose the customer with the searched type, even when all tests are negative. This surprising result disappears when controlling for congestion, i.e., in a single diagnostic task.
"An Empirical Test of Management Involvement in Process Improvement"
Talk by Anita Tucker, HBS
December 9th, 2009
12:00–1:00 PM, room 561
Managers play a critical role in process improvement efforts. Despite potential gains in quality and efficiency, prior research has shown that many improvement efforts fail due to insufficient senior management involvement or a weak organizational climate for improvement. Less is known, however, about mechanisms that foster managers' involvement with improvement efforts, which in turn may strengthen organizational climate. This paper addresses this gap with a field experiment on a bundle of process improvement activities suggested by “Management By Walking Around” (MBWA) (Peters and Waterman 1982) and the Toyota Production System. The three sequential activities were (1) interacting with frontline workers to learn about existing problems, (2) ensuring that action is taken to address these problems, and (3) communicating to frontline workers about actions taken. We compare before-and-after survey results from 20 hospitals randomly selected to engage in the activities for 18 months with 49 hospitals that served as controls. We found that identifying problems had a negative impact on organizational climate for improvement while taking action had a positive impact. Together these results suggest a theoretical reason for the success of Toyota's problem solving system, which solves problems as they arise rather than gathering large amounts of data about problems before solution efforts begin. Contrary to our expectations, providing feedback about actions taken negatively impacted frontline workers' perceptions. Qualitative results suggest that feedback can backfire when managers go through the motions of process improvement activities without making a sincere effort to learn about and resolve staff concerns.
Spring 2009
"Stock-Outs and Customer Purchasing Behavior when Product Quality is Uncertain"
Talk by Laurens Debo, Chicago Booth
January 14th, 2009
1:00–2:00 PM, room 561
Inventory availability can influence consumers' perceptions of product quality, especially with new, unknown or innovative products. Consumers who find a product out of stock at several retailers may infer that many other consumers bought and therefore value it; this information may induce them to buy as well. In this paper, we study this phenomenon. Even though stockouts may generate the `buzz' that the product quality is high, increasing the consumer's willingness to buy, too many retailers that are out of stock may lead to lost sales. Hence, creating buzz through stockouts is tricky. We analyze a model in which potential consumers privately observe a noisy signal of the product quality as well as the number of retailers that are out of stock within a subset of retailers. The initial inventory of each retailer, which may either be high or low, is not observable to the consumers. We study how the number of observed stockouts impacts consumer purchasing behavior and the realized sales. We find that (1) the equilibrium willingness to buy may increase when more stockouts are observed and (2) the ex ante expected realized sales may increase when the variance of the initial inventory at each retailer is high and increases (keeping the expected initial inventory constant). With our model, we explain when the buzz effect can be significant and how firms can leverage this effect.
"Approximate Dynamic Programming on High-Dimensional Continuous Spaces Using Linear Programming"
Talk by Dan Adelman, Chicago Booth
Wednesday, January 21st, 2009
1:00–2:00 PM, room 561
Using the generalized joint replenishment (GJR) problem as an example, we devise an algorithm for solving the infinite-dimensional linear programs that arise from general deterministic semi-Markov decision processes on Borel spaces. The innovative idea is to approximate the dual solution with continuous piecewise-linear ridge functions, which naturally represent functions defined on a high-dimensional domain as linear combinations of functions defined on only a single dimension. The algorithm automatically generates a value function approximation basis built upon piecewise-linear ridge functions, by developing and exploiting a theoretical connection with the problem of finding optimal cyclic schedules. We provide a variant of the algorithm that is effective in practice, and exploit the special structure of the GJR problem to provide a coherent, implementable framework. Finally, we present numerical results demonstrating the performance of the resulting policy.
"Procurement Mechanism Design in a Two-Echelon Inventory System with Price-Sensitive Demand"
Talk by Fuqiang Zhang, Washington University
Thursday, January 29th, 2009
12:00–1:00 PM, room 561
This paper studies a buyer's procurement strategies in a two-echelon inventory system with price-sensitive demand. The buyer procures a product from a supplier and then sells to the marketplace. Market demand is stochastic and depends on the buyer's selling price. The supplier's production cost is private information, and the buyer only knows the distribution of the cost. Both the buyer and the supplier can hold inventories to improve service, and a periodic-review inventory system is considered. The buyer takes two attributes into consideration when designing the procurement mechanism: a quantity attribute (i.e., the total purchase quantity) and a service-level attribute (i.e., the supplier's delivery performance). We first identify the optimal procurement mechanism for the buyer, which consists of a menu of nonlinear contracts for each of the two attributes. It can be shown that the optimal mechanism induces both a lower market demand and a lower service level compared to the supply chain optimum. In view of the complexity of the optimal mechanism, we proceed to search for simpler mechanisms that perform well for the buyer. We find that the above two attributes have different implications for procurement mechanism design: The value of using complex contract terms is generally negligible for the service-level attribute, while it can be highly valuable for the quantity attribute. In particular, we demonstrate that a fixed service-level contract, which consists of a target service level and a price-quantity menu, yields nearly optimal profit for the buyer. Additionally, the price-quantity menu is essentially a quantity discount scheme widely observed in practice.
"Level, adjustment and observation biases in the newsvendor model"
Talk by Nils Rudi, INSEAD
Wednesday, February 4th, 2009
1:00–2:00 PM, room 561
In an experimental newsvendor setting where 310 subjects make 50 repeated newsvendor decisions, we investigate three forms of bias: level bias, the average tendency of ordering away from the optimal order quantity; adjustment bias, the tendency to adjust order quantities; and observation bias, the tendency to let the degree of information available influence order quantities. We study these biases in terms of decisions (quantities) and performance (expected mismatch cost) and find evidence of all three of them as well as significant interaction between them.
We find that the portion of mismatch cost due to adjustment bias exceeds the portion of mismatch cost due to level bias in three out of four conditions, highlighting the importance of considering adjustment bias in addition to the more commonly studied level bias. Observation bias is studied through censored demands, a situation which arguably represents the majority of newsvendor settings. When demands are uncensored, subjects tend to order below the normative quantity when facing high margin and above the normative quantity when facing low margin, but in neither case beyond mean demand (a.k.a. the pull-to-center effect). Censoring in general leads to lower quantities, magnifying the downward adjustment when facing high margin but partially counterbalancing the upward adjustment when facing low margin, and in both cases actually violating the pull-to-center effect.
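The normative benchmark against which these biases are measured is the newsvendor critical fractile, q* = F^{-1}(cu/(cu+co)). A minimal sketch using only the standard library; the demand distribution and cost parameters are illustrative, not the experiment's:

```python
from statistics import NormalDist

def newsvendor_quantity(mean, sd, underage, overage):
    """Optimal order quantity q* = F^{-1}(cu / (cu + co)) for normal demand."""
    critical_ratio = underage / (underage + overage)
    return NormalDist(mean, sd).inv_cdf(critical_ratio)

# Illustrative: demand ~ N(100, 30), a high-margin and a low-margin condition.
q_high = newsvendor_quantity(100, 30, underage=3, overage=1)  # critical ratio 0.75
q_low = newsvendor_quantity(100, 30, underage=1, overage=3)   # critical ratio 0.25
```

The pull-to-center effect described above means subjects' orders fall between these normative quantities and mean demand (here 100); censoring pushes orders further down in both conditions.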
"Control of systems with flexible multi-server pools: A shadow routing approach"
Talk by Tolga Tezcan, UIUC
Wednesday, February 18th, 2009
1:00–2:00 PM, room 561
A general model with multiple input flows (classes) and several flexible multi-server pools is considered. Applications of this model arise in service systems such as call centers, health care systems and closed-loop supply chain systems. Motivated by such modern service systems that face time-dependent and random demand, we focus on systems under arrival rate uncertainty. Our goal is to construct robust control policies that require minimum information about arrival rates. We first show that commonly used control policies such as FIFO, static priority and longest-queue-first (and other queue-length-based policies) are not robust. Indeed, we show that they are unstable in certain systems under arbitrarily low loads.
In the second part of this talk we propose a robust, generic scheme for routing new arrivals, which optimally balances server pools’ loads, without knowledge of the flow input rates and without solving any optimization problem. The scheme is based on shadow routing in a virtual queueing system. We study the behavior of our scheme in the Halfin-Whitt (or QED) asymptotic regime, in which server pool sizes and the input rates are scaled up simultaneously by a factor r growing to infinity, while keeping the system load within O(√r) of its capacity.
Specifically, we first show that, in general, a system in a stationary regime has average queue lengths of at least O(√r); strategies achieving this O(√r) growth rate we call order-optimal. Next, we show that some natural algorithms, such as MaxWeight, that guarantee stability, are not order-optimal. Under the complete resource pooling condition, we show the order-optimality of the shadow routing algorithm. We present simulation results to demonstrate the good performance and robustness of our scheme.
Joint work with Alexander L. Stolyar, Bell Labs
"Strategies for a single product M/G/1 multi-class make-to-stock queue"
Talk by Opher Baron, University of Toronto
Wednesday, February 25th, 2009
1:00–2:00 PM, room 561
We consider a supplier with a centralized production facility that serves distinguishable markets for a single product. We study two continuously reviewed inventory systems controlled by a base-stock policy: centralized and decentralized. If different markets are prioritized, a product allocation problem arises: whether to make dispatching decisions at the beginning or the end of production. We provide the exact analysis of the decentralized priority policy with dispatching decisions postponed to the end of production. This yields the optimal base-stock levels and cost. For centralized systems, the inventory rationing and Strict Priority (SP) policies were previously considered. While previous work expressed the corresponding optimal rationing levels and base-stock level when service times are exponential, we express them for the much more realistic and practical case of general service times. Via an extensive numerical study, we show that assuming exponential (or Erlangian) service times can be very costly. We numerically demonstrate that the centralized inventory rationing policy minimizes costs and that the slightly costlier SP policy might still be useful due to its simplicity. Moreover, when, due to external factors, inventory pooling is not feasible, the decentralized priority policy we suggest becomes a viable option.
"Revenue Management with Partially Refundable Fares "
Talk by Ozge Sahin, University of Michigan
Thursday, March 5th, 2009
12:00–1:00 PM, room 561
In the first part of the talk, we introduce and analyze an intertemporal choice model where customer valuations are uncertain and evolve over time. The model leads directly to the study of partially refundable fares. We analyze a multiple-period fluid model, and obtain structural results on the optimal fare and revenue. We show that offering partially refundable fares can significantly improve expected revenues for a monopolist selling perishable capacity. In addition, we show that partially refundable fares are socially optimal as they maximize the sum of the expected profit for the monopolist plus the expected consumers' surplus.
In the second part of the talk we consider a duopoly Stackelberg game under proportional rationing to find out whether the benefits of partially refundable fares prevail under competition. Our results are mixed but sharp. First, competition with partially refundable fares is the only stable equilibrium. Moreover, the expected profits for both the follower and the leader Pareto dominate the expected profits under the restricted equilibrium where both players use fully refundable fares. The equilibrium under partially refundable fares also Pareto dominates the non-stable equilibrium under non-refundable fares except when the capacities are very large. On the negative side, the equilibrium under partially refundable fares is no longer socially optimal.
(Joint work with Guillermo Gallego).
"Quality-Speed Conundrum: Trade-offs in Labor-Intensive Services"
Talk by Senthil Veeraraghavan, Wharton
Wednesday, March 11th, 2009
1:00–2:00 PM, room 561
In labor-intensive services such as primary health care, hospitality and education, the quality or value provided by the service increases with the time spent with the customer (with diminishing returns). However, longer service times (i.e., slower speed of service) also result in longer waits for customers. Thus, labor-intensive services need to make a trade-off between service quality and service speed.
The interaction between quality and speed is critical for labor-intensive services. In a queueing framework, we parameterize the degree of labor-intensity of the service. The service speed chosen by the service provider affects the quality of the service through its labor-intensity. Customers queue for the service based on the quality of the service, delay costs and price. We study how a service provider can make the optimal "quality-speed trade-off" in the face of such self-interested, rational customers. Our results demonstrate that the labor-intensity of the service is a critical driver of equilibrium price, service speed, demand, congestion in the queue and service provider revenues. We also model service rate competition among multiple servers, whose effects, we find, are very different from those of price competition. For instance, as the number of servers increases, the price increases and the servers become slower.
Key Words: Service Quality, Customer Behavior, Labor-Intensive Services, Queues, Cost Disease.
(Joint work with Krishnan Anand and Fazil Pac).
"The Potential for Cannibalization of New Product Sales by Remanufactured Products"
Talk by Dan Guide, Penn State
Wednesday, March 18th, 2009
1:00–2:00 PM, room 561
The potential for the cannibalization of new product sales by remanufactured versions of the same product is a central issue in the continuing development of closed-loop supply chains. We investigate the cannibalization question via auctions of new and remanufactured consumer and commercial goods. These auctions allow us to explore the impact of offering new and remanufactured products at the same time, which provides insights into the potential for cannibalization. Our results indicate that for the consumer and commercial products auctioned, there is a clear difference in the willingness-to-pay for new and remanufactured goods. For the consumer product, there is scant overlap in bidders between the new and remanufactured products, suggesting that the risk of cannibalization in this case is minimal. The commercial product exhibits some evidence of overlap in bidding behavior, exposing a greater potential for cannibalization.
"Managing Software Operations: Productivity, Task Variety and Resource Planning"
Talk by Jayashankar Swaminathan, UNC
Wednesday, April 15th, 2009
1:00–2:00 PM, room 561
In this talk, I will first present our work that explores factors affecting the productivity of engineers in software maintenance operations, using a dataset covering 88 individuals who worked on 5711 maintenance tasks in offshore software support services. We show that task variety and turnover have novel and interesting impacts on individual productivity. Next, we model the software operations in the form of a queue. Using a combination of empirical and analytical methods, we study threshold-type policies in software maintenance and demonstrate their utility in resource planning.
Joint work with Sriram Narayanan (Michigan State University) and Sridhar Balasubramanian (University of North Carolina)
"Capacity Sharing and Cost Allocation among Independent Firms in the Presence of Congestion"
Talk by Saif Benjaafar, University of Minnesota
Wednesday, April 22nd, 2009
1:00–2:00 PM, room 561
The sharing of production/service capacity among independent firms is increasingly common in industry. Capacity sharing allows firms to hedge against demand uncertainty and to achieve economies of scale. The benefits are in the form of lower costs, improved service quality, or both. Capacity sharing among independent firms raises several important questions. Is capacity sharing always beneficial to all firms? Does it always lead to a reduction in total capacity in the system? How should capacity costs be allocated among the different firms? Is capacity sharing among all the firms the best arrangement, or would sharing among smaller subsets of the firms be more beneficial to particular firms? Can capacity sharing be beneficial when firms do not truthfully report private information? Is it possible to induce firms, via cost allocation alone, to truthfully disclose their private information? In this talk, we address these and other related questions in settings where production/service facilities can be modeled as queueing systems. Firms decide on capacity levels to minimize delay costs and capacity investment costs subject to service level constraints. We formulate the problem as a cooperative game among independent agents, in which a non-cooperative information-reporting game is embedded. We identify various settings where the core of the game is non-empty (no subset of firms prefers seceding from the grand coalition) and show that it is possible to design a cost allocation rule that is not only in the core but also guarantees truth telling, with truth telling being a dominant strategy. (This work is joint with Yimin Yu, University of Minnesota, and Yigal Gerchak, Tel Aviv University)
"Recent Developments in Multi-Sourcing Inventory Models"
Talk by Jeannette Song, Duke
Wednesday, April 29th, 2009
1:00–2:00 PM, room 561
This talk reviews some recent developments in inventory models with multiple supply sources, including multiple suppliers, multiple transportation modes, and expediting options. Multi-source inventory problems are fundamental in inventory management and have been studied since the early days of inventory theory. Unfortunately, these problems are intrinsically complex. Despite several decades of effort, the theory is still limited. In recent years, due to unprecedented forces of globalization and advancement of technology, companies are presented with tremendous opportunities to source globally. The ability to take advantage of the available resources from any location in the world has become vital for companies to stay competitive. As such, prudent decisions in supply management are of strategic importance. In response, we have seen a resurgence of research interest in these issues. The purpose of this overview is to facilitate our understanding of the state-of-the-art research in this area and shed light on a few future research directions.
"Competitors as Whistleblowers in Enforcement of Product Standards"
Talk by Erica Plambeck, Stanford GSB
Thursday, May 7th, 2009
12:00–1:00 PM, room 561
Many countries are requiring that products sold in their markets meet new safety and environmental standards. Testing products for compliance is expensive, so enforcement and compliance are necessarily imperfect. Firms have an incentive to test competitors' products, reveal violations to the regulatory authorities, and thus gain market share. This article shows that regulators should rely on competitive testing and whistleblowing (rather than test products directly) when the social disutility from sale of a non-compliant product is moderately high. Then, each firm's compliance effort increases with its product quality and with market concentration. However, a product standard, enforced through competitive testing, encourages entry by small, low-quality firms and reduces investment by high-quality incumbents, which weakens compliance with the product standard in the long run.
(joint research with Terry A. Taylor)
"Improving the numerical performance of BLP static and dynamic discrete-choice random-coefficients demand estimation"
Talk by Che-Lin Su, Chicago Booth
Wednesday, May 13th, 2009
1:00–2:00 PM, room 561
The widely used estimator of Berry, Levinsohn and Pakes (1995) produces consistent, instrumental-variables estimates of consumer preferences from a discrete-choice demand model with random coefficients, market-level demand shocks and endogenous regressors (prices). We derive numerical theory results characterizing the properties of the nested fixed point algorithm used to evaluate the objective function of BLP's estimator. We discuss several related problems with typical implementations and, in particular, cases which can lead to incorrect parameter estimates. As a solution, we recast estimation as a mathematical program with equilibrium constraints, which can be faster and which avoids the numerical issues associated with nested inner loops. The advantages are even more pronounced for forward-looking demand models where Bellman's equation must also be solved repeatedly. Several Monte Carlo and real-data experiments support our numerical concerns about the nested fixed point approach and the advantages of constrained optimization.
Joint work with J.P. Dube and J. Fox.
"Robust Revenue Management"
Talk by Guillaume Roels, UCLA
Wednesday, May 20th, 2009
1:00–2:00 PM, room 561
Revenue management models traditionally assume that future demand is unknown but can be described by a stochastic process or a probability distribution. Demand is, however, often difficult to characterize, especially in new or nonstationary markets. In this talk, I develop robust formulations for the capacity allocation problem in revenue management using the maximin and the minimax regret criteria under general polyhedral uncertainty sets. Our analysis reveals that the minimax regret controls perform very well on average despite their worst-case focus, and outperform the traditional controls when demand is correlated or censored. In particular, on real large-scale problem sets, the minimax regret approach outperforms the traditional heuristics by up to 2%. Our models are scalable to practical problems because they combine efficient (exact or heuristic) solution methods with very modest data requirements.
Joint work with Georgia Perakis.
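For intuition on the minimax regret criterion used in the talk above, consider the simplest possible case: a single order quantity with demand known only to lie in an interval [l, u], underage cost cu and overage cost co. Equating the worst-case underage regret cu·(u − q) with the worst-case overage regret co·(q − l) gives a closed-form quantity. This one-dimensional sketch is only illustrative; the talk's models handle polyhedral uncertainty sets and network capacity allocation.

```python
def minimax_regret_quantity(lo, hi, underage, overage):
    """Quantity equating the worst-case underage regret, underage*(hi - q),
    with the worst-case overage regret, overage*(q - lo)."""
    return (underage * hi + overage * lo) / (underage + overage)

# Illustrative parameters: demand in [0, 100], cu = 3, co = 1.
q = minimax_regret_quantity(0, 100, underage=3, overage=1)  # balances both regrets
worst_under = 3 * (100 - q)
worst_over = 1 * (q - 0)
```

Note that this quantity depends only on the support [l, u] and the cost ratio, not on any probability distribution, which is exactly why such controls need so little demand data.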
"Approximations for Markov Perfect Industry Dynamics"
Talk by Gabriel Y. Weintraub, Columbia
Wednesday, May 27th, 2009
1:00–2:00 PM, room 561
Dynamic oligopoly models are used in industrial organization and the management sciences to analyze diverse dynamic phenomena such as investments in R&D, advertising, or capacity, the entry and exit of firms, learning-by-doing, and dynamic pricing. The applicability of these models has been severely limited, however, by the curse of dimensionality involved in computing Markov perfect equilibrium (MPE). In previous work, we introduced oblivious equilibrium (OE), a new solution concept for approximating MPE that alleviates the curse of dimensionality. In this work we introduce several important extensions to OE. First, in order to capture short-run transitional dynamics that may result, for example, from shocks or policy changes, we develop a nonstationary version of OE. A great advantage of nonstationary OE (NOE) is that it is much easier to compute than MPE. We present an asymptotic result that provides a theoretical justification for the use of NOE as an approximation. We also present algorithms for bounding the approximation error for each problem instance. We report results from computational case studies that serve to assess the accuracy of our approximation and to illustrate applications. Our results suggest that our method greatly increases the set of dynamic oligopoly models that can be analyzed computationally. Second, we extend the definition of OE, originally proposed for models with only firm-specific idiosyncratic random shocks, to accommodate models with aggregate random shocks. This extension is important when analyzing the dynamic effects of industry-wide business cycles. We also discuss extensions of our methods to concentrated industries.
(This is joint work with C. Lanier Benkard, Przemyslaw Jeziorski, and Benjamin Van Roy)
"Inventory Rationing for a System with Heterogeneous Customer Classes"
Talk by Alan Scheller-Wolf, CMU
Wednesday, June 3rd, 2009
12:00–1:00 PM, room 561
Many retailers find it useful to partition customers into multiple classes based on certain characteristics. We consider the case in which customers are primarily distinguished by whether they are willing to wait for backordered demand. A firm that faces demand from customers that are differentiated in this way may want to adopt an inventory management policy that takes advantage of this differentiation. We propose doing so by imposing a critical level (CL) policy: when inventory is at or below the critical level, demand from those customers that are willing to wait is backordered, while demand from customers unwilling to wait will still be served as long as there is any inventory available. This policy reserves inventory for possible future demands from impatient customers by having other, patient, customers wait.
We consider a system that operates under a continuous review replenishment policy, in which a base stock policy is used for replenishments. Demands as well as lead times are stochastic. We model the system as a continuous time Markov chain. We develop an efficient solution procedure, based on decomposition and aggregation techniques, to determine the average infinite horizon performance of a given CL policy. Our procedure is precise to an arbitrary level of accuracy, and thus we use it as a basis for an efficient algorithm to determine the optimal CL policy parameters.
We use our algorithm in a numerical study to compare the cost of the optimal CL policy to the globally optimal state-dependent policy along with two alternative, more naive, policies. The CL policy is slightly over 2% from optimal, whereas the alternative policies are 7% and 27% from optimal. We also study the sensitivity of our policy to the coefficient of variation of the lead time distribution, and find that the optimal CL policy is fairly insensitive, which is not the case for the globally optimal policy.
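The critical-level rule described in this abstract is simple to state in code. A minimal sketch of the per-demand decision logic only (the abstract does not say what happens to impatient demand when stock is empty; treating it as lost is an assumption here, as are the example parameters):

```python
def cl_policy_action(inventory, critical_level, patient):
    """Critical-level (CL) rationing rule:
    - impatient demand is served from stock whenever any inventory is available
      (assumed lost when stock is empty, which the abstract leaves unspecified);
    - patient demand is served only while inventory is above the critical level,
      and is backordered once inventory is at or below it.
    """
    if not patient:
        return "serve" if inventory > 0 else "lost"
    return "serve" if inventory > critical_level else "backorder"

# Example with an illustrative critical level of 2 units:
actions = [cl_policy_action(inv, 2, patient=True) for inv in (4, 2)]
```

The optimization problem in the talk is then to choose the critical level (and base-stock level) minimizing long-run average cost, which is where the Markov chain analysis comes in.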
Fall 2008
"Progress, Perspectives, and Opportunities"
Talk by Dimitris Bertsimas, MIT
September 24th, 2008
In recent years, the availability of massive amounts of electronically available data involving millions of people and the development of new data mining algorithms present an exciting new opportunity for Operations Research to have a significant impact in health care. We discuss our research efforts in assessing the risk of patients and their quality of care, and also propose a new data-based assessment for cancer research. We conclude with further research directions.
"Let the Pirates Patch? An Economic Analysis of Software Security Patch Restrictions"
Talk by Terrence August, UCSD
September 29, 2008
We study the question of whether a software vendor should allow users of unlicensed (pirated) copies of a software product to apply security patches. We present a joint model of network software security and software piracy and contrast two policies that a software vendor can enforce: (i) restriction of security patches only to legitimate users or (ii) provision of access to security patches to all users whether their copies are licensed or not. We find that when the software security risk is high and the piracy enforcement level is low, or when tendency for piracy in the consumer population is high, it is optimal for the vendor to restrict unlicensed users from applying security patches. When piracy tendency in the consumer population is low, applying software security patch restrictions is optimal for the vendor only when the piracy enforcement level is high. If patching costs are sufficiently low, however, an unrestricted patch release policy maximizes vendor profits. We also show that the vendor can use security patch restrictions as a substitute for investment in software security, and this effect can significantly reduce welfare. Furthermore, in certain cases, increased piracy enforcement levels can actually hurt vendor profits. We also show that governments can increase social surplus and intellectual property protection simultaneously by increasing piracy enforcement and utilizing the strategic interaction of piracy patch restrictions and network security. Finally, we demonstrate that, although unrestricted patching can maximize welfare when the piracy enforcement level is low, contrary to what one might expect, when the piracy enforcement level is high, restricting security patches only to licensed users can be socially optimal.
"Revenue Management for Online Advertising"
Talk by Kristin Fridgeirsdottir, LBS
October 6th, 2008
The Internet is currently the fastest growing advertising medium. Online advertising brings new opportunities and has many characteristics, different from those of traditional media, that support more quantitative decision making. We consider an operational problem of a web publisher that generates revenues by selling advertising space on its website.
The web publisher faces the problems of pricing and managing capacity of the advertising space with the objective of maximizing the revenues generated. The advertisers approach the web publisher, request their ad to be displayed to a certain number of visitors to the website, and are charged according to the so-called pay-per-impression pricing scheme. We suggest a queueing model for the operation of the web publisher, considering the uncertainty of both the demand (the advertisers) and the supply (the visitors), with the advertising slots acting as servers. We consider two cases: i) the advertisers are willing to wait before their advertising campaign is started; ii) the advertisers are not willing to wait. For the first case we show that the resulting multi-server queueing model has the same properties as a known single-server queueing model. We derive an approximation for the waiting time that performs significantly better than existing ones. For the second case we derive a closed-form solution of the probability distribution for the number of advertisers in the system, which enables us to fully characterize the steady-state properties of the system. In both cases, the queueing models developed bring some new distinctive features and we compare them to the corresponding models in the literature.
Having characterized the operation of the web publisher, we study its revenue maximization problem and determine the optimal advertising price. We provide managerial insights; for example, from an operational point of view the optimal price should be higher when advertisers request more impressions, which goes against the quantity discounts common in practice. Finally, we extend our models to incorporate multiple types of advertisers.
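Case (ii) above, advertisers who will not wait, behaves like a loss system: an arriving advertiser who finds all slots busy is turned away. As a rough illustration (not the paper's exact model, whose service times are driven by visitor arrivals rather than being exponential), the stationary blocking probability of the classical M/M/s/s loss system is given by the Erlang B formula, computable by the standard recursion:

```python
def erlang_b(servers, offered_load):
    """Blocking probability B(s, a) of an M/M/s/s loss system via the
    recursion B(0, a) = 1, B(k, a) = a*B(k-1, a) / (k + a*B(k-1, a))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Illustrative: 2 advertising slots, offered load a = 2 Erlangs.
blocking = erlang_b(2, 2.0)  # fraction of arriving advertisers turned away
```

The revenue trade-off the talk studies then emerges directly: a higher price thins advertiser demand (lower offered load) but reduces lost sales from blocking.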
"Evidence of Biases in the Adoption of Energy Efficiency Initiatives by Small and Medium Sized Firms"
Talk by Charles Corbett, UCLA
October 20th, 2008
This study finds evidence of biases in the adoption of energy efficiency initiatives. We identify the biases using field-level data on over 100,000 recommendations made to more than 13,000 small and medium sized firms. Managers are observed to be myopic in evaluating energy efficiency initiatives: they are influenced by initial costs instead of overall returns, and they use high investment hurdle rates when evaluating such initiatives. A probit instrumental variables model is used to find that the adoption of a recommendation depends not only on the economic drivers and the characteristics of a recommendation but also on the sequence in which the recommendations are presented: adoption rates are higher for those initiatives appearing early in a list of recommendations. Further, theory predicts that adoption rates will fall when decision makers are provided a large number of recommendations; however, we find that adoption is not influenced by the number of recommendations provided. The study draws implications for enhancing adoption of energy efficiency initiatives and for other decision contexts where a collection of process improvement recommendations is made to firms. This study highlights previously unobserved decision biases in the OM literature. Additionally, the study uses field-level data to highlight behavioral issues and thus differs from the majority of the behavioral operations literature, which uses experiments.
"Capacity Planning in Service Systems with Arrival Rate Uncertainty: Safety Staffing Principles Revisited "
Talk by Ramandeep Randhawa, UT Austin
October 27th, 2008
We study a capacity sizing problem in service systems with uncertain arrival rates; telephone call centers are canonical examples of such systems. The objective is to choose a staffing level that minimizes the sum of personnel costs and abandonment/waiting time costs. We formulate a simple fluid analogue, which is in essence a newsvendor problem, and demonstrate that the solution it prescribes performs remarkably well. In particular, the gap between the performance of the optimal staffing level and that of our proposed prescription is independent of the "size" of the system, i.e., it remains bounded as the system size (demand volume) increases. This stands in contrast to the more conventional theory that applies when arrival rates are known, and commonly used rules-of-thumb predicated on it. Specifically, in that setting the difference between the optimal performance and that of the fluid solution diverges at a rate proportional to the square root of the size of the system. One manifestation of this is the celebrated square-root safety staffing principle that dates back to the work of Erlang, which augments solutions of the deterministic analysis with additional servers of order the square root of the volume of demand. In our work, we establish that this type of prescription is needed only when arrival rates are suitably "predictable."
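The square-root safety staffing principle mentioned at the end of this abstract prescribes, for known arrival rate λ and service rate μ, roughly the offered load R = λ/μ plus a safety term β√R, where β trades off staffing cost against congestion cost. A minimal sketch (β and the example rates are illustrative assumptions, not values from the talk):

```python
import math

def square_root_staffing(arrival_rate, service_rate, beta):
    """Staff N = ceil(R + beta * sqrt(R)) servers, where R = offered load.

    beta is a quality-of-service parameter: larger beta means more
    safety capacity above the deterministic (fluid) requirement R.
    """
    offered_load = arrival_rate / service_rate
    return math.ceil(offered_load + beta * math.sqrt(offered_load))

# Illustrative: 100 calls/hour, each server handles 1 call/hour, beta = 1.
staff = square_root_staffing(100, 1, 1.0)  # offered load 100 plus 10 safety servers
```

The talk's point is that under arrival rate uncertainty this √R safety term is no longer the right correction: a newsvendor-style fluid prescription already gets within a bounded gap of optimal.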
"Dynamic pricing with financial milestones: feedback pricing policies"
Talk by Costis Maglaras, Columbia
November 3rd, 2008
We study a revenue maximization problem for a seller that is subject to a set of financial and sales milestone constraints. The goal is to choose a pricing policy that satisfies these constraints over time in a revenue-maximizing manner, and the focus is on settings with limited or no market information. The motivating application is the pricing of large-scale real estate projects.
"Estimating HIV incidence in the United States"
Talk by Ed Kaplan, Yale School of Management
November 10th, 2008
Prior estimates of HIV incidence in the United States derived from assumptions that, though not unreasonable at the time first employed, have become increasingly untenable. For years, the CDC reported 40,000 new HIV infections annually in the US. We developed new probability models and statistical procedures to estimate the chance that a newly infected person will be detected as recently infected. Together with the development of an HIV test that detects recent infection, these procedures have enabled the CDC to estimate annual HIV incidence in the United States from HIV/AIDS surveillance data.
"Drivers of Finished Goods Inventory in the U.S. Automobile Industry"
Talk by Gerard Cachon, Wharton
November 17th, 2008
Automobile manufacturers in the U.S. supply chain exhibit significant differences in their days-of-supply of finished vehicles (average inventory divided by average daily sales rate). For example, from 1995 to 2004, Toyota consistently carried approximately 30 fewer days-of-supply than General Motors. This suggests that Toyota's well-documented advantage in manufacturing efficiency, product design and upstream supply chain management extends to the finished-goods inventory in their downstream supply chain, from their assembly plants to their dealerships. Our objective in this research is to measure for this industry the effect of several factors on inventory holdings. We find that two factors, the number of dealerships in a manufacturer's distribution network and a manufacturer's production flexibility, explain essentially all of the difference in finished-goods inventory between Toyota and three other makes: Chrysler, Ford and General Motors.
"Dynamic pricing to learn and earn in a logit model context"
Talk by Michael Harrison, Stanford
December 1st, 2008
Motivated by applications in financial services, we consider the following customized pricing problem. A seller of some good or service (like auto loans or small business loans) confronts a sequence of potential customers numbered 1, 2, …, T. These customers are drawn at random from a population characterized by logit parameters a and b, where b > 0: if the seller offers price p, the probability of a successful sale is
r(p) = 1/(1 + e^(a+bp)),
and if the sale is successful, the profit realized by the seller is π(p) = p − c, where c > 0 is known. If the parameters a and b were also known, then the problem of finding a price p* to maximize r(p)π(p) would be simple, and the seller would offer price p* to each of the T customers. We consider the more complicated case where a and b are fixed but initially unknown: given a prior joint distribution for a and b, the decision maker wants to choose a sequence of prices to maximize expected profit earned from the T potential customers. Of course, each price decision involves a trade-off between refined parameter estimation (learning) and expected immediate profit (earning).
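With known parameters, the price optimization the abstract calls "simple" can be sketched directly. The parameter values below are illustrative assumptions, not taken from the talk; a one-dimensional grid search stands in for any root-finding method.

```python
import math

def sale_prob(p, a, b):
    # logit response: r(p) = 1 / (1 + exp(a + b*p))
    return 1.0 / (1.0 + math.exp(a + b * p))

def expected_profit(p, a, b, c):
    # r(p) * pi(p), where pi(p) = p - c
    return sale_prob(p, a, b) * (p - c)

def best_price(a, b, c, lo=0.0, hi=50.0, steps=50000):
    # one-dimensional grid search for p* maximizing r(p) * (p - c)
    grid = (lo + i * (hi - lo) / steps for i in range(steps + 1))
    return max(grid, key=lambda p: expected_profit(p, a, b, c))

a, b, c = -2.0, 0.5, 1.0   # assumed values, for illustration only
p_star = best_price(a, b, c)
```

The hard problem in the talk is the case where a and b are unknown, so each posted price must balance this one-shot optimum against learning.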

Fall 2007  Spring 2008
"Bounded Rationality in Newsvendor Models"
Talk by Xuanming Su, UC Berkeley
September 26th, 2007
Many theoretical models adopt a normative approach and assume that decision-makers are perfect optimizers. In contrast, this paper takes a descriptive approach and considers bounded rationality, in the sense that decision-makers are prone to errors and biases. Our decision model builds upon the quantal choice model: while the best decision need not always be made, better decisions are made more often. We apply this framework to the classic newsvendor model and characterize the ordering decisions made by a boundedly rational decision-maker. We identify systematic biases and offer insight into when over-ordering and under-ordering may occur. We also investigate the impact of these biases on several other inventory settings that have traditionally been studied using the newsvendor model as a building block, such as supply chain contracting, the bullwhip effect, and inventory pooling. We find that incorporating decision noise and optimization error yields results that are consistent with some anomalies highlighted by recent experimental findings.
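A minimal sketch of the quantal-choice idea in a newsvendor setting (all numbers are invented for illustration): order quantities with higher expected profit are chosen more often via a logit rule, but the best order is not chosen always, so the average order drifts away from the optimum.

```python
import math

def newsvendor_profit(q, price, cost, demand_pmf):
    # expected profit of ordering q units under a discrete demand pmf
    return sum(pr * (price * min(q, d) - cost * q)
               for d, pr in demand_pmf.items())

def quantal_order_dist(price, cost, demand_pmf, beta):
    # logit (quantal) choice over orders: P(q) proportional to exp(profit(q)/beta)
    qs = range(max(demand_pmf) + 1)
    w = [math.exp(newsvendor_profit(q, price, cost, demand_pmf) / beta)
         for q in qs]
    total = sum(w)
    return {q: wi / total for q, wi in zip(qs, w)}

demand = {d: 1 / 11 for d in range(11)}             # uniform demand on 0..10
dist = quantal_order_dist(4.0, 1.0, demand, beta=1.0)
mean_order = sum(q * pr for q, pr in dist.items())
```

Here the critical-ratio optimum is q = 8; the quantal mean order falls below it, which is the kind of systematic under-ordering for high-margin products that the pull-to-center experiments report.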
"Inventory Assortment and Substitution Problems"
Talk by Yehuda Bassok, USC
October 10th, 2007
We consider a general substitution problem in which consumers can choose one of N variants. We start with a choice model that ranks the preferences of each consumer. The preferences of the consumers are not known to the retailer, who must therefore treat the demand for each variant as random. We are able to calculate the retailer's optimal stocking policy. We show that the role of safety stock is to hedge against uncertainty in the market size, not uncertainty in the demand for each of the variants. We then move on to describe competition between retailers. Again, we are able to characterize the equilibrium inventory levels and assortments. When competition is considered, safety stock may be carried even if the market size is known. But in most practical cases, if the market size is known and consumer choice is random, no safety stock is carried with very high probability.
"Randomized Methods for Solving Convex Problems: Some Theory and Some Computational Experience"
Talk by Robert M. Freund, MIT
October 17th, 2007
In contrast to conventional continuous optimization algorithms whose iterates are computed and analyzed deterministically, randomized methods rely on stochastic processes and random number/vector generation as part of the algorithm and/or its analysis. Whereas randomization in algorithms has been a part of research in discrete optimization for at least the last 20 years, randomization has played at most a minor role in algorithms for continuous convex optimization, at least until recently. This talk will focus on two recent randomization-based algorithms for convex optimization: a method by Bertsimas and Vempala based on cuts at the center of mass, and a new method by Belloni and Freund that "preconditions" a standard interior-point algorithm using random walks. For the latter, we report very promising computational results on medium-sized conic problems.
"Optimal Policies for the Acceptance of Living- and Cadaveric-Donor Livers"
Talk by Oguzhan Alagoz, UC Berkeley
November 14th, 2007
The talk is based on the papers "Determining the Acceptance of Cadaveric Livers Using an Implicit Model of the Waiting List" and "The Optimal Timing of Living-Donor Liver Transplantation".
Transplantation is the only viable therapy for end-stage liver diseases (ESLD) such as hepatitis B. In the United States, patients with ESLD are placed on a waiting list. When organs become available, they are offered to the patients on this waiting list. This study focuses on the decision problem faced by these patients: which offer to accept and which to refuse? A recent analysis of liver transplant data indicates that 60% of all livers offered to patients for transplantation are declined.
We formulate this problem as a discrete-time Markov decision process (MDP). We analyze three MDP models, each representing a different situation. The Living-Donor-Only Model considers the problem of optimal timing of living-donor liver transplantation, which is accomplished by removing an entire lobe of a living donor's liver and implanting it into the recipient. The Cadaveric-Donor-Only Model considers the problem of accepting/refusing a cadaveric liver offer when the patient is on the waiting list but has no available living donor. The Living-and-Cadaveric-Donor Model is the most general model, combining the first two, in that the patient is both listed on the waiting list and also has an available living donor. The patient can accept the cadaveric liver offer, decline the cadaveric liver offer and use the living-donor liver, or decline both and continue to wait.
We derive structural properties of all three models, including several sets of conditions that ensure the existence of intuitively structured policies such as control-limit policies. The computational experiments use clinical data and show that the optimal policy is typically of control-limit type.
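The control-limit structure can be illustrated with a deliberately tiny optimal-stopping sketch (offer values and discount factor are invented, not clinical data): with i.i.d. discrete offers and discounting, the optimal rule is to accept any offer at or above a single threshold.

```python
def control_limit(offers, discount, iters=500):
    """Value iteration for V = E[max(u, discount * V)] over equally
    likely offers u; the optimal policy is a control limit:
    accept any offer u >= discount * V."""
    v = 0.0
    for _ in range(iters):
        v = sum(max(u, discount * v) for u in offers) / len(offers)
    return discount * v  # the acceptance threshold

threshold = control_limit(list(range(1, 11)), 0.9)  # offers 1..10
```

With offers uniform on 1..10 and discount 0.9, the threshold settles near 6.65: decline low-quality offers, accept the rest. The models in the talk are far richer (health dynamics, waiting-list position), but yield policies of this same one-threshold-per-state type.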
"Incentives for Retailer Forecasting: Rebates versus Returns"
Talk by Terry Taylor, UC Berkeley
November 28th, 2007
This paper studies a manufacturer that sells to a newsvendor retailer who can improve the quality of her demand information by exerting costly forecasting effort. In such a setting, contracts play two roles: providing incentives to influence the retailer's forecasting decision, and eliciting information obtained by forecasting to inform production decisions. We focus on two forms of contracts that are widely used in such settings and are mirror images of one another: a rebates contract, which compensates the retailer for the units she sells to end consumers, and a returns contract, which compensates the retailer for the units that are unsold. We characterize the optimal rebates contract, the optimal returns contract, and the manufacturer's preferred contractual form. We show that the retailer, manufacturer and total system may benefit from the retailer having inferior forecasting technology. (Joint work with Wenqiang Xiao.)
"An Empirical Investigation into the Trade-offs that Impact On-Time Performance in the Airline Industry"
Talk by Kamalini Ramdas, Virginia
April 9th, 2008
We investigate the trade-off between aircraft capacity utilization and on-time performance, a key measure of airline quality. Building on prior theory and empirical work, we expect that airlines that are close to their productivity or asset frontiers face steeper trade-offs between utilization and performance than those that are further away.
We test this idea using a detailed 10-year airline industry data set, drawing on queuing theory to disentangle the confounding effects of variance in travel time and capacity flexibility along an aircraft's route. We find that greater aircraft utilization results in higher delays, with this effect being worse for airlines that are close to their asset frontiers in terms of already operating at high levels of aircraft utilization. Also, we find that the negative effect of utilization on delays is greater for aircraft that face higher variability in travel time along their routes, and is lower for aircraft on routes with higher capacity flexibility, that is, the ability to substitute a different aircraft for a particular flight than the one originally scheduled. Additionally, we examine how load factor, a measure of how full an airline's flights are and therefore a key revenue driver, affects on-time performance. Our analysis enables us to explain differences in on-time performance across airlines as a function of key operational variables, and to provide insight on how airlines can better manage their on-time performance levels and aircraft utilization.
"Profit Loss and Loss of Efficiency due to Competition"
Talk by Georgia Perakis, MIT
April 16th, 2008
We consider an oligopoly setting where more than two firms compete on products that are gross substitutes or complements. We study the profit loss due to competition (i.e., a comparison of the total profit in the industry between centralized and decentralized settings) for Bertrand (price-setting) competition and for Cournot (quantity-setting) competition. Our goal is to understand how the presence of competition affects the overall profit as well as the total surplus in the industry, and what the key drivers of the inefficiencies that arise due to competition are.
Our research to date suggests that for gross substitutes the "market power" of each firm (in terms of how much each can affect the total demand in the market with its decisions) plays an important role. On the other hand, for complement products, the number of firms competing in the market and the number of products produced by each firm also play a role. To this end, we develop bounds on how bad the total profit in the industry can become due to competition. We further discuss a setting where each firm sells several products and faces a variety of constraints on the prices or quantities of the products it offers. We provide general bounds that are independent of the constraints of the game and as a result apply to a large class of settings. Our results apply, for example, to competitive settings where firms sell various versions of the same product line and want to ensure that the prices of the different versions do not vary by much.
Furthermore, we consider more general measures of efficiency, such as the total surplus in the market. We also generalize our results to classes of nonlinear demand functions.
(joint work with A. Farahat and J. Kluberg)
"Inadvertent Disclosure—Information Risk and Governance in the Financial Supply Chain"
Talk by Eric Johnson, Dartmouth
April 30th, 2008
Firms face many different types of information security risk. Inadvertent disclosure of sensitive business information represents one of the largest classes of recent security breaches. We examine a specific instance of this problem: inadvertent disclosures through peer-to-peer file-sharing networks. We characterize the extent of the security risk for a group of large financial institutions using a direct analysis of leaked documents. We also characterize the threat of loss by examining search patterns in peer-to-peer networks. Our analysis demonstrates both a substantial threat and vulnerability for large financial firms. We find a statistically significant link between leakage and firm employment base. Further, we address information governance, an underlying factor in inadvertent disclosure. We propose a governance structure based on controls and incentives, in which employees' self-interested behavior can result in firm-optimal use of information. Using a game-theoretic approach, we show that an incentives-based policy with escalation can control both over-entitlement and under-entitlement while maintaining the flexibility needed in dynamic business environments.
"A Model of Fair Process and Its Limits"
Talk by Ludo Van der Heyden, INSEAD
May 21st, 2008
Fair process research has shown that people care not only about outcomes, but also about the process that produces these outcomes. For a decision process to be seen as fair, the people affected must have the opportunity to give input and possibly to influence the decision, and the decision process and rationale must be transparent and clear. Existing research has shown empirically that fair process enhances both employee motivation and performance in execution.
In this talk, we review the fair process literature and present a more operational definition of fair process, motivated both by the literature on fair process and by that on decision making. We present empirical evidence that supports this definition in the framework of a study of innovation practices at 15 German manufacturing sites.
We conclude by presenting an analytical model of fair process in a principal-agent (i.e., manager-employee) context, rooted in psychological preferences for autonomy and fairness. This model addresses the question of why fair process is so often violated in practice. The associated paper breaks new ground by analytically examining the subtle trade-offs involved.
Spring 2007
"Inventory Management of a Fast-Fashion Retail Network"
Talk by Jeremie Gallien, MIT
March 28, 2007, 2:15 PM, Room 561
Fast-fashion retailers (e.g. Zara, H&M) have had some success responding to volatile demand trends through frequent introductions of new garments produced in small series. An important associated operational problem is the allocation over time of a limited amount of inventory across all stores in their network. I will present stochastic and deterministic models developed in collaboration with Zara to address this challenge, then discuss the implementation and impact of this work.
"Incorporating Risk Considerations into Inventory Models and Supply Contracts"
Talk by Candace Yano, UC Berkeley
April 11, 2007, 1:30 PM, Room 561
When retailers purchase goods, they often make decisions about purchase quantities or enter into supply contracts that aim to mitigate their risk. Very little of the research literature on inventory models and supply contracts explicitly considers risk, however. In this talk, we discuss two procurement models in which the decision-makers are concerned about risk. The first model considers a buyer and supplier, both of whom are risk averse but have some tolerance for risk. We show that several popular contract forms do not take full advantage of both parties' tolerance for risk and therefore cannot optimize supply chain profits. We then propose a contract structure that overcomes these limitations.
In the second model, we consider a retailer who buys and then sells a product. The retailer is concerned about meeting corporate profit targets, and this is the cause of his risk aversion. We show that under the standard accrual method of accounting, optimal inventory policies that properly account for his risk aversion have a relatively simple structure, but under the cash-basis method of accounting, optimal inventory policies may be extremely complicated and may have unintended consequences.
(Various parts of this research were done in collaboration with Shiming Deng of Oracle, Inc., Houmin Yan of the Chinese University of Hong Kong, and Hanqin Zhang of the Chinese Academy of Sciences.)
"Coordination of Marketing and Production for Price and Lead-Time Decisions" and "Centralized vs. Decentralized Competition for Price and Lead-Time Sensitive Demand"
Talk by Pinar Keskinocak, Georgia Tech
April 25, 2007, 1:30 PM, Room 561
We study decentralized price and leadtime decisions made by the marketing and production departments, respectively, where the customers are sensitive both to the quoted price and the lead time.
First, we consider a single firm (in a monopoly setting) and analyze the inefficiencies that are due to the decentralization of price and lead-time decisions. In the decentralized setting, the total demand generated is larger, lead times are longer, quoted prices are lower, and the firm's profits are lower compared to the centralized setting. We show that coordination can be achieved using a transfer price contract with bonus payments. We also provide insights on the sensitivity of the optimal decisions with respect to market characteristics, the sequence of decisions, and the firm's capacity level.
Next, we extend our analysis to a competitive setting. We study two firms that compete based on their price and lead-time decisions in a common market. We explore the impact of decentralization under competition by comparing three scenarios: (i) both firms are centralized, (ii) only one firm is centralized, and (iii) both firms are decentralized. We find that under intense price competition, firms may suffer from a decentralized strategy, particularly under the high flexibility induced by high capacity, where revenue-based sales incentives motivate sales/marketing to make more aggressive price cuts, eroding margins. On the other hand, when price competition in the market is less intense than lead-time competition, a decentralized decision-making strategy may dominate a centralized one.
This is joint work with Pelin Pekgun and Paul Griffin.
"Complementarity in Improvement Programs"
Talk by Phil Lederer, University of Rochester
May 16th, 2007, 1:30 PM, Jacobs Room 165
During the past two decades firms have adopted many types of functional improvement programs. In operations, programs such as TQM, MRP, and JIT have been adopted to reduce cost, increase quality and improve customer service. In marketing, marketing research programs have been used to better understand customer tastes. In accounting, ABC and other programs to determine more accurate product costs have been implemented. Although many of these programs have been studied, there has been little work on the joint effects (if any) between these activities. Indeed, the broadest claim of the continuous improvement movement is that all improvement activities are complementary, meaning that there are increasing gains to improvement programs. Papers such as Milgrom and Roberts (1990) seem to imply that most modern manufacturing innovations are complementary to each other.
This research studies the impact of combinations of improvement activities on firm performance. We study three types of improvement programs: process improvement, marketing research and cost estimation. Process improvement programs lower the variable cost of production, increase product quality, or cut lead time; marketing research programs help firms identify segments and price accordingly; and cost estimation programs allow better cost estimation for pricing and control. We show that improvement programs may be complements or substitutes; that is, an improvement program can increase or decrease the desirability of the other programs. In particular, we show conditions that imply that marketing research and cost estimation programs are substitutes for each other. One of these conditions is that the production technology displays increasing returns to scale, a characteristic found in queuing- and inventory-driven production systems. A model of a production system with queuing demonstrates the results. Further, our work demonstrates that process improvement programs reduce losses due to decentralized control of firms. The implication is that process improvement programs encourage decentralized organization of other improvement efforts.
Fall 2006
"Inventory Turnover Performance in the U.S. Retail Sector"
Talk by Vishal Gaur, NYU
September 27, 1:30 PM, Room 561
We report results from published and ongoing empirical studies of inventory turnover in the U.S. retail sector, employing financial data for about 400 publicly listed retailers across ten retail segments for the years 1985-2004. In the first part of this research, we develop benchmarks for inventory turns for different types of retailers, and show the correlation of inventory turns with other performance measures such as gross margin and capital intensity. We also show the effects of demand uncertainty, sales growth rate, and size on inventory turns. In the second part of this research, we show the effect of the inventory turnover performance of retailers on their financial performance, as measured by their stock returns. Using a case study, we also discuss possible reasons why inventory turnover may be a predictor of financial performance.
"Safeguarding Strategic Supplies: Planning for Disaster"
Talk by Awi Federgruen, Columbia
October 18, 1:00 PM, Room 561
Standard supply chain management texts discuss the benefits of consolidating the set of suppliers in the chain. These benefits include economies of scale in production costs as well as statistical economies of scale due to the pooling of demand risks. Recently, many corporations and governments alike have recognized a variety of risks associated with external disruptions of the supply process. These provide a powerful argument against (maximal) consolidation. Such disruptions may arise because of "natural" disasters, e.g. fires in production plants or the need to shut down a facility because of violations of quality regulations or standards. Disruptions may also occur because of labor strikes or planned acts of sabotage, resulting from terror attacks among others. While these disruptions may be rare, their consequences can be catastrophic for an individual firm as well as for a region or a country as a whole.
We analyze a planning model for a firm or public organization which needs to cover uncertain demand for a given item by procuring supplies from multiple sources. The necessity to employ multiple suppliers arises from the fact that when an order is placed with any of the suppliers, only a random fraction of the order size is usable. The model considers a single demand season, with a given demand distribution, where all supplies need to be ordered simultaneously before the start of the season. The suppliers differ from each other in terms of their yield distributions, their procurement costs and their capacity levels.
The planning model determines which of the potential suppliers are to be retained and what size order is to be placed with each. We consider two versions of the planning model: in the first (SCM), the orders must be such that the available supply of usable units covers the random demand during the season with (at least) a given probability. In the second version of the model (TCM), the orders are determined so as to minimize the aggregate of procurement costs and end-of-season inventory and shortage costs. In the classical inventory model with a single, fully reliable supplier, these two models are known to be equivalent, but the equivalency breaks down under multiple suppliers with unreliable yields.
Determining the optimal set of suppliers, the aggregate order and its allocation among the suppliers on the basis of the exact shortfall distribution is prohibitively difficult. We have therefore developed two approximations for the shortfall distribution. While both approximations are shown to be highly accurate, the first, based on a Large Deviations Technique (LDT), has the advantage of resulting in a rigorous upper bound for the required total order. The second approximation is based on a Central Limit Theorem (CLT) and is shown to be asymptotically accurate, while the order quantities determined by this method are asymptotically optimal as the number of suppliers grows. Most importantly, this CLT-based approximation permits many important qualitative insights.
Based on the CLT approximation, we develop, for both the (SCM) and (TCM), a highly efficient procedure which generates the optimal set of suppliers as well as the optimal orders to be assigned to each. These procedures also generate a variety of important qualitative insights, for example, regarding which sets of suppliers allow for a feasible solution, both when they have ample supply and when they are capacitated.
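Under a CLT-style approximation, an (SCM)-type service constraint reduces to a one-dimensional computation. The sketch below handles only the single-supplier case, with invented parameters (the paper's models cover many unreliable suppliers): the usable supply from an order q is q*Y for a random yield fraction Y, and we search for the smallest q meeting the coverage probability.

```python
import math
from statistics import NormalDist

def required_order(mu_d, sigma_d, mu_y, sigma_y, alpha):
    """Smallest order q with P(q*Y >= D) >= alpha, treating the
    shortfall D - q*Y as normal (CLT-style approximation): we need
    q*mu_y - mu_d >= z_alpha * sqrt(q^2*sigma_y^2 + sigma_d^2)."""
    z = NormalDist().inv_cdf(alpha)

    def slack(q):  # increasing in q whenever mu_y > z * sigma_y
        return q * mu_y - mu_d - z * math.sqrt((q * sigma_y) ** 2 + sigma_d ** 2)

    lo, hi = 0.0, 10.0 * mu_d / mu_y
    for _ in range(200):  # bisection on the binding constraint
        mid = (lo + hi) / 2.0
        lo, hi = (lo, mid) if slack(mid) >= 0 else (mid, hi)
    return hi

# Illustrative numbers: demand N(100, 20^2), yield fraction mean 0.8, sd 0.1
q = required_order(mu_d=100.0, sigma_d=20.0, mu_y=0.8, sigma_y=0.1, alpha=0.95)
```

Note that q exceeds the naive mean requirement mu_d/mu_y = 125 by a safety margin driven jointly by demand and yield variability, the effect the exact shortfall distribution captures at much greater computational cost.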
"Service Level Agreements in Call Centers: Perils and Prescriptions"
Talk by Tava Olsen, Washington University in St. Louis
October 25, 1:30 PM, Room 561
A call center with both contract and non-contract customers was giving priority to the contract customers only in off-peak hours, precisely when having priority was least important. Using asymptotic analysis, we show why this is indeed rational behavior on the part of the call center and what the implications are for customers. We then suggest other contracts that do not result in this type of undesirable behavior from a contract customer's perspective. We compare the performance of the different contracts in terms of mean, variance, and outer percentiles of delay for both customer types, using both numerical and asymptotic heavy-traffic analyses.
"Investment and Market Structure in Congestible Services"
Talk by Ramesh Johari, Stanford
November 1, 1:30 PM, Room 561
We consider investment and market structure in a model of congestion-sensitive service provision. Our starting point is a simple model of network routing that has received a great deal of attention in the engineering community, the so-called "selfish routing" model. A continuum of users wish to send data from source to destination, and can choose from several parallel routes. Each route is owned by an independent network provider that sets a price per unit flow along the route. A user's overall disutility is measured as the sum of the price and the congestion experienced along the chosen route. In contrast to previous work, we consider a model where providers can invest in their routes to minimize the impact of this congestion externality.
We investigate this model through the Nash equilibria of the pricing and investment game played by providers. We find that returns to investment and the timing of strategic decisions are critical determinants of the outcome of the game. For a broad range of models in which (1) providers choose prices and investments simultaneously, and (2) the model exhibits non-increasing returns to investment, we show that if a pure strategy Nash equilibrium exists, it is unique, symmetric, and efficient; we also establish conditions for the existence of pure strategy Nash equilibria in special cases. This result does not hold if either (1) or (2) is violated, and we discuss these scenarios as well. We also investigate several extensions, including modeling the entry of providers into the market. We will emphasize the implications of our results for key issues in telecommunications, including wireless Internet service penetration and the viability of source-directed routing.
This is joint work with Gabriel Weintraub and Ben Van Roy.
"Revenue Management Models in Media Broadcasting"
Talk by Ioana Popescu, INSEAD
November 9, 1:30 PM, Room 561
An important challenge faced by media broadcasting companies is how to allocate limited advertising space across multiple clients and markets (upfront/scatter) in order to maximize profits. We develop stylized optimization models of inventory allocation under audience uncertainty. At the strategic planning level, we provide simple solutions for upfront market allocation and contracting for multiple clients. In a dynamic setting, we investigate make-goods allocation during the scatter market, under reversible and irreversible commitment regimes. Our results hold under general performance metrics, and bring out interesting parallels with standard inventory and revenue management frameworks.
"Competing on Time: A Framework for New Product Introduction Decision in the High Technology Industry"
Talk by Ozalp Ozer, NYU
January 17, 1:30 PM, Room 561
In this presentation, we will outline the challenges and uncertainties associated with bringing a new product to market. To do so, we will focus on a major global high-technology company located in the Bay Area and discuss its challenges related to new product introductions (NPI). The high-technology industry is characterized by lightning-speed technology innovation, intense competition and relentless price erosion. It is, therefore, critical to bring a new product to the market at the right time to ensure profitability.
We will present our OR-based modeling framework that is used to help a major global high-technology company make effective time-to-market decisions. Our model solves the problem in two nested phases: a design phase and a mass production phase. The design phase is modeled as an optimal stopping problem in which the decision to "enter or not" is made. The solution of the design stage affects the mass production phase. This second phase is modeled as a stochastic production control problem in which production decisions are made. We will characterize an optimal policy for market timing and an optimal policy for production decisions, and show how and why they are amenable to implementation. We will also discuss the techniques used to solve this large-scale stochastic dynamic program, including how structural results enabled us to improve computational efficiency.
Finally, we will discuss how this project and the resulting software enabled various functional areas within the firm, such as Finance, Manufacturing, Marketing and R&D, to communicate and jointly address this strategic question. If time permits, we will share our perspectives on the challenges and key success factors of working at the university/industry boundary.
Spring 2006
The following speakers visited NU in Spring 2006 for the Kellogg Operations Seminar series. Please click on a date for more information about a particular talk.
David Yao (Columbia), April 7
Bert De Reyck and Janne Gustafsson (London Business School), April 14
Marty Reiman (Bell Labs), May 3
Assaf Zeevi (Columbia), May 10
Victor DeMiguel (London Business School), June 7
Please check individual listings for locations of the talks.
"Dynamic Resource Control in a Stochastic Network: Limiting Regimes and Asymptotic Optimality"
Talk by David Yao, Columbia
April 7, 12:05 PM, Jacobs Rm. 561
We study a class of stochastic networks with concurrent occupancy of resources, which, in turn, are shared among jobs. For example, streaming a video on the Internet requires bandwidth from all the links that connect the source of the video to its destination; and the capacity of each link is shared, according to a certain protocol, among all source-destination connections that involve this link. Another example is a multi-leg flight on an airline reservation system: to book the flight, seats on all legs must be committed simultaneously. We focus on a class of dynamic resource controls for this type of network, where the link capacities are allocated among the job classes, in each state of the network, according to the solution to a utility maximization problem. We derive fluid and diffusion limits of the network under this type of (myopic) control policy. Furthermore, we identify a cost function that is minimized in the diffusion regime, thereby justifying the asymptotic optimality of the control. (Joint work with Hengqing Ye.)
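The per-state utility maximization can be illustrated on a classic toy proportional-fairness instance. The network, capacities, and log utility below are invented to show only the kind of problem being solved, not the paper's model:

```python
import numpy as np
from scipy.optimize import minimize

# Two unit-capacity links; flows 0 and 1 each use one link, flow 2 uses both.
A = np.array([[1.0, 0.0, 1.0],    # link 0 is used by flows 0 and 2
              [0.0, 1.0, 1.0]])   # link 1 is used by flows 1 and 2
c = np.array([1.0, 1.0])          # link capacities

neg_utility = lambda x: -np.sum(np.log(x))   # proportional-fair utility
res = minimize(
    neg_utility,
    x0=np.full(3, 0.1),
    bounds=[(1e-6, None)] * 3,
    constraints=[{"type": "ineq", "fun": lambda x: c - A @ x}],
    method="SLSQP",
)
print(np.round(res.x, 3))   # the two-link flow gets 1/3, the others 2/3 each
```

The known proportional-fair solution for this instance gives the long flow 1/3 of capacity and each single-link flow 2/3, which the solver recovers.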
"Valuing Risky Projects using Mixed Asset Portfolio Selection Models"
Talk by Bert De Reyck, London Business School; and Janne Gustafsson, Cheyne Capital Management Limited
April 14, 11 AM, Jacobs Rm. 561
We examine the valuation of projects in a setting where an investor can invest in a portfolio of private projects as well as in securities in financial markets, but where exact replication of project cash flows in financial markets is not necessarily possible. We consider both single-period and multi-period models, and develop an inverse optimization procedure for valuing projects in this setting. We show that the valuation procedure exhibits several important analytical properties, for example, that project values for a mean-variance investor converge towards prices given by the capital asset pricing model. We also conduct several numerical experiments.
"An Asymptotically-Optimal Dynamic Admission Policy for a Revenue Management Problem"
Talk by Marty Reiman, Bell Labs
May 3, 4:00 PM, Jacobs Rm. 166
We consider the following canonical revenue management problem, which has been analyzed in the context of airline seat inventory control and has applications to other service industries and supply chain management. There are several resource types (legs), each of which has a fixed capacity (number of seats). There are several customer classes (routes), each with an associated arrival process, price and resource consumption vector. The aim is to make dynamic accept/reject decisions at customer arrival epochs to maximize the total expected revenue obtained over the finite horizon [0,T] subject to not exceeding the capacity of any of the resources.
We introduce a control policy motivated by fluid and diffusion limits (as the resource capacities and arrival rates grow large). Our control policy makes an initial resource allocation decision based on solving a linear program (LP). The solution of the LP yields the fraction of arrivals of each class to accept. We then form a "trigger function" based on the difference between the actual and expected number of accepted customers of each class. When this trigger function exceeds a preset threshold, a re-optimization is performed: an LP involving the remaining resource capacities and remaining time is solved. The solution of this LP is then used to control admissions over the remainder of the horizon. We show that this policy is asymptotically optimal on diffusion scale, a property that is not shared by other approaches such as booking limits and bid price control.
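The initial LP of such a policy can be sketched as follows. All data (classes, fares, arrival rates, capacities) are invented, and only the static first step is shown, without the trigger-function re-optimization:

```python
import numpy as np
from scipy.optimize import linprog

# Choose the fraction f_j of class-j arrivals to accept, maximizing expected
# revenue subject to expected capacity consumption on each leg.
prices = np.array([100.0, 120.0, 180.0])   # fare of each customer class
rates  = np.array([1.0, 1.2, 0.8])         # arrival rates (per unit time)
T      = 100.0                             # length of the booking horizon
A = np.array([[1, 0, 1],                   # leg 0 is used by classes 0 and 2
              [0, 1, 1]])                  # leg 1 is used by classes 1 and 2
cap = np.array([120.0, 90.0])              # seats on each leg

expected = rates * T                       # expected arrivals per class
res = linprog(
    c=-(prices * expected),                # maximize revenue = minimize its negative
    A_ub=A * expected,                     # expected seats consumed per unit of f_j
    b_ub=cap,
    bounds=[(0.0, 1.0)] * 3,
)
f = res.x
print(np.round(f, 3))                      # acceptance fraction per class
```

In the policy described above, this LP would be re-solved with the remaining capacities and time whenever the trigger function crosses its threshold.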
"Blind Nonparametric Revenue Management"
Talk by Assaf Zeevi, Columbia
May 10, 12:00 PM, Jacobs Rm. 561
In most revenue management studies one assumes the decision maker knows the manner in which consumers react to prices. The most typical way to express this fact is to assume the demand function is known, and it is just as common to posit that it admits a simple parametric structure. So what happens if none of this holds?
To investigate this question we consider a general class of network revenue management problems, where the objective is to price multiple products so as to maximize expected revenues over a finite sales horizon. The decision maker observes realized demand over time, but is otherwise "blind" to the underlying demand function which maps prices into the instantaneous demand rate. Few structural assumptions are made with regard to the demand function; in particular, it need not admit any parametric representation. We introduce a general method for solving such blind revenue management problems which involves the classical trade-off between exploration and exploitation. To evaluate the performance of the proposed method we compare the revenues it generates to those corresponding to the optimal dynamic pricing policy that knows the demand function a priori. While that may seem a lofty benchmark, we prove that as the sales volume grows large the revenue loss is guaranteed to be small. A looser interpretation might run as follows: in problems that involve high sales volume, the value of "full information" (or the penalty for "blind" decision making) is not as significant as one might guess.
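A minimal caricature of the exploration/exploitation trade-off in blind pricing, with an invented demand curve and a naive explore-then-exploit rule (the talk's method is far more refined):

```python
import random

random.seed(0)

# The seller never sees the demand function, only the sales realized at
# whichever price it happens to post.
def demand(price):                        # hidden from the seller
    buy_prob = max(0.0, 1.0 - 0.1 * price)
    return sum(random.random() < buy_prob for _ in range(100))

prices = [2.0, 4.0, 6.0, 8.0]
revenue = {p: p * demand(p) for p in prices}   # exploration: try each price
best = max(prices, key=revenue.get)            # empirically best price
total = sum(revenue.values())
for _ in range(20):                            # exploitation: commit to it
    total += best * demand(best)
print(best, total)
```

The revenue lost while testing the wrong prices is exactly the exploration cost that the abstract's asymptotic analysis shows becomes negligible as sales volume grows.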
"Portfolio Selection with Robust Estimates of Risk"
Talk by Victor DeMiguel, London Business School
June 7, 12:00 PM, Jacobs Rm. 561
It is well-known that mean-variance portfolios constructed using the sample mean and covariance matrix of asset returns perform poorly out-of-sample due to estimation error. Moreover, it has been demonstrated that estimation error in the sample mean is, for most real-world datasets, much larger than that in the sample covariance matrix. For this reason, recent research has focused on the minimum-variance portfolio, which relies only on estimates of the covariance matrix and thus usually performs better out-of-sample than mean-variance portfolios. But even minimum-variance portfolios are still quite sensitive to estimation error and have unstable weights that fluctuate substantially over time.
Jagannathan and Ma (2003) show that imposing short-selling constraints can help to alleviate this difficulty. In this paper, we explore a different mechanism to combat estimation error. Concretely, we show how to compute the policy that minimizes a certain robust estimator of portfolio risk by solving a nonlinear program. We also give an analytical bound on the sensitivity of the resulting portfolio weights to changes in the distributional assumptions. Finally, our out-of-sample numerical results show that the portfolio weights of the proposed policies are more stable than those of the minimum-variance policy and that they usually perform better in terms of Sharpe ratio. Moreover, although the imposition of short-selling constraints does improve the performance of the minimum-variance policy, the proposed robust policies are more stable and usually perform better even in the presence of constraints.
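For reference, the (non-robust) minimum-variance portfolio that serves as the baseline above has a simple closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1); the covariance matrix here is invented, and the talk's robust policies would replace the sample Σ with a robust risk estimate:

```python
import numpy as np

# Invented 3-asset sample covariance matrix (annualized).
Sigma = np.array([[0.040, 0.006, 0.010],
                  [0.006, 0.090, 0.012],
                  [0.010, 0.012, 0.160]])
ones = np.ones(3)
w = np.linalg.solve(Sigma, ones)  # unnormalized Σ^{-1} 1
w /= w.sum()                      # minimum-variance weights, summing to one
print(np.round(w, 4))
print(float(w @ Sigma @ w))       # the minimized portfolio variance
```

Because the weights depend only on Σ, any estimation error in Σ feeds straight into them, which is the instability the proposed robust estimators are meant to dampen.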
Winter 2006
The following speakers visited NU in winter 2006 for the Kellogg Operations Seminar series. Please click on a date for more information about a particular talk.
"Resource flexibility in manufacturing systems"
Talk by Mark Lewis, Cornell
Jan. 18, 12 PM, Jacobs Rm. 561
Recent interest in an agile workforce and machine flexibility has led to a new wave of challenges in manufacturing systems. Classic questions need to be revisited, such as: where should flexible machines be allocated? How often should they be utilized? And can they be used to mitigate the challenge of machine failures or worker availability? In this talk we consider each of these problems as they relate to tandem queues. We begin by answering the question of where machines should be allocated and show that the classic cμ rule from parallel systems applies in the tandem queue setting. We then show that this extends to the case when workers might be available only temporarily. We also show that when there is complete control of the capacity decision, this control is monotone in the number of customers in each queue. We conclude (time permitting) by showing that an "almost" monotone switching curve describes the optimal policy when machine reliability is considered.
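The cμ rule mentioned above is an index policy: serve the nonempty class with the largest product of holding cost c and service rate μ. A minimal sketch on invented data:

```python
# Holding cost per job per unit time, service rate, and current queue
# lengths for three hypothetical job classes.
costs = {"A": 3.0, "B": 1.0, "C": 2.0}
rates = {"A": 0.5, "B": 2.0, "C": 1.5}
queue = {"A": 2, "B": 5, "C": 0}

def next_class(queue, costs, rates):
    """Return the waiting class with the largest cμ index."""
    waiting = [k for k, n in queue.items() if n > 0]
    return max(waiting, key=lambda k: costs[k] * rates[k])

# Indices: A = 1.5, B = 2.0, C = 3.0 but class C has no jobs waiting.
print(next_class(queue, costs, rates))
```

The talk's contribution is showing that this parallel-system index logic carries over to tandem queues, where a job's service at one station feeds the next.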
"Managing Customer Outrage: Focus Organizational Learning Efforts on Service Failure or Recovery?"
Talk by Michael Lapre, Vanderbilt
Jan. 25, 12 PM, Jacobs Rm. 561
As service failures are inevitable, firms must be prepared to recover from service failures, thereby turning angry, frustrated customers into loyal customers. Despite the compelling economics of customer loyalty, firms continue to struggle with service recovery. Should firms focus organizational learning efforts on reducing service failure or on reducing dissatisfaction with recovery? Drawing from the literatures on organizational learning, learning curves, and marketing, I hypothesize that dissatisfaction with recovery contributes more to the variation in customer outrage across firms than service failure does (H1), that a U-shaped function of operating experience explains more variation in dissatisfaction with recovery than in service failure (H2), and that heterogeneity in organizational learning curves explains more variation in dissatisfaction with recovery than in service failure (H3). The hypotheses are tested with quarterly data for nine major U.S. airlines over 11 years. All three hypotheses are supported. In the context of mishandling baggage, dissatisfaction with recovery explains 88% of the variation in customer outrage, whereas service failure explains only 12%. The empirical results suggest firms should pay more attention to organizational learning curves for service recovery.
"Excess Inventory and Long-Term Stock Price Performance"
Talk by Vinod Singhal, Georgia Tech
Feb. 8, 12 PM, Jacobs Rm. 586
This paper estimates the long-run stock price effects of excess inventory using nearly 900 excess inventory announcements made by publicly traded firms during 1990-2002. It examines the stock price effects starting one year before through two years after the excess inventory announcement date. Statistically significant abnormal returns are observed during the year before the announcement and on announcement. There is no evidence of statistically significant abnormal returns during the two years after the announcement. I estimate that the mean (median) abnormal return due to excess inventory is -37.22% (-27.03%). Negative abnormal returns are observed across industries, calendar time, firm size, and actions taken to deal with excess inventory. The evidence suggests that the stock market partially anticipates excess inventory situations, firms do not recover quickly from the negative effect of excess inventory, and the negative effect of excess inventory is economically and statistically significant.
"Pricing Dynamics of Competitors Who Ignore Competition"
Talk by Anton Kleywegt, Georgia Tech
Feb. 15, 1:30 PM, Jacobs Rm. 586
A variety of demand models are widely used in revenue management, all of which are known to be inaccurate. We are interested in the dynamic behavior of systems in which revenue managers use these inaccurate models: they make decisions based on the models, observe data, and attempt to refine the models with the observed data. Most of this talk will describe a duopoly in which each seller models demand as a function of the prices of that seller only. That is, the demand models that sellers estimate from data express the quantity demanded as a function of the seller's own prices, and these models do not include the prices of the other seller. Such simplified models are often used in revenue management practice, even when revenue managers are aware of the competition. We compare the resulting dynamical behavior with the outcomes in other settings, such as the equilibria of well-informed competitors, the outcomes of collaborators, and the outcomes when there is asymmetric information.
This is joint work with Tito Homem-de-Mello at Northwestern University and Bill Cooper at the University of Minnesota.
"Asymptotic Results in Single and Multiclass Type Queueing Networks"
Talk by David Gamarnik, MIT
Feb. 22, 1:30 PM, Jacobs Rm. 561
Stochastic queueing networks have a variety of industrial applications including services, call centers, data and communication networks, manufacturing and, more recently, business processes. We will begin with some motivating examples of business workflow processes and the underlying performance analysis issues. Then we will continue by introducing stochastic single class and multiclass queueing networks. The principal question is whether the probability distribution of the queue lengths has exponentially fast decaying tails in steady state. We establish that for single class queueing networks this is indeed the case. Moreover, we establish that the stationary distribution of the associated reflected diffusion process provides a valid heavy-traffic approximation of the underlying queueing network in steady state.
The presence of multiclass structure makes the picture rather different. We present an example of a network where the queue lengths exhibit an unexpected subexponential behavior. Thus, we show that the slow decay of the tails can be a purely network effect. We will discuss the implications of these results for control designs.
"Simple Models of Discrete Choice and Their Performance in Bandit Experiments"
Talk by Noah Gans, Wharton
March 8, 12 PM, Jacobs Rm. 561
Recent operations management papers model customers as solving multi-armed bandit problems, positing that consumers use a particular heuristic when choosing among suppliers. These papers then analyze the resulting competition among suppliers and mathematically characterize the equilibrium actions. There remains a question, however, as to whether the original customer models upon which the analyses are built are reasonable representations of actual consumer choice.
In this paper, we empirically investigate how well these choice rules match actual performance as people solve two-armed Bernoulli bandit problems. We find that some of the most analytically tractable models perform best in tests of model fit. We also find that the expected number of consecutive trials of a given supplier is increasing and convex in its expected quality level, a result that is consistent with the models' predictions, as well as with loyalty effects described in the popular management literature.
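One simple choice rule of the kind studied, win-stay/lose-shift, can be simulated directly; the success probabilities below are invented. Under this rule the expected run length on an arm with success probability p is 1/(1 - p), so runs are indeed longer, and convexly so, on higher-quality suppliers:

```python
import random

random.seed(1)

p = [0.8, 0.4]   # hypothetical success probability of each supplier/arm

def run_lengths(trials=10_000):
    """Simulate win-stay/lose-shift and record how many consecutive
    trials the player spends with arm 0 before each switch away."""
    arm, streak, runs = 0, 0, []
    for _ in range(trials):
        if random.random() < p[arm]:   # success: stay with this supplier
            streak += 1
        else:                          # failure: record run, switch
            if arm == 0:
                runs.append(streak + 1)
            arm, streak = 1 - arm, 0
    return runs

runs = run_lengths()
print(sum(runs) / len(runs))   # close to 1 / (1 - 0.8) = 5 trials per run
```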
"Operational Benefits of Subscription Services"
Talk by Sunil Kumar, Stanford
March 15, 12 PM, Jacobs Rm. 561
In this talk we study a monopolistic firm that offers reusable products, or a service, to price and quality-of-service sensitive customers: a rental firm can be thought of as the canonical example. Customers' perception of quality is determined by their likelihood of obtaining the product or service immediately upon request. We study the alternatives of offering either a subscription option or a pay-per-use option from a profit-maximizing perspective. In order to do this we propose a Markovian model of how subscribers generate requests and use a standard Poisson model for the pay-per-use option. In a large market setting, under the assumption of exponential demand, we show that using the subscription option is more profitable for the firm. Further, via a numerical study, we show that this assumption is not essential for the result to hold. However, we show that it is not necessarily true that the subscription option dominates the pay-per-use option on quality-of-service. The firm is able to manage the trade-off between price and quality-of-service better in the subscription option. Moreover, we show that the social welfare and the consumer surplus can also be higher in the subscription option, indicating that both the firm and the consumers can benefit from the subscription option.
Fall 2005 Seminars
The following speakers visited NU for the Fall 2005 Kellogg Operations Seminar series. Please click on a date for more information about a particular talk.
Randy Berry (NU Engineering), Sept. 21
Feryal Erhun (Stanford), Oct. 5
Rene Caldentey (NYU), Oct. 19
Sasha Stolyar (Bell Laboratories), Oct. 26
Don Eisenstein (University of Chicago), Nov. 9
Dave Hartvigsen (Notre Dame), Nov. 30
"Spectrum Sharing Games"
Talk by Randy Berry, NU Engineering Dept.
Sept. 21
In wireless networks a key consideration is how multiple users can share the available spectrum. This is especially true in unlicensed or open bands, where users may be deployed without any centralized frequency planning or control.
In this talk, we describe some simple models for sharing a given spectrum band. We discuss both a case where a "spectrum manager" controls access and a case where there is no manager and users implement a distributed algorithm to manage access. In the first case, we describe auction mechanisms where the users bid for spectrum access. We characterize the resulting equilibria and discuss iterative algorithms for reaching these.
In the second case, we give a distributed algorithm, in which users announce "price" signals that indicate their "cost" of interference. We relate this algorithm to a "fictitious" game, which in certain cases is supermodular. We use this relation to characterize the algorithm's convergence. Extensions to multi-channel networks may also be discussed.
"Managing Demand Uncertainty with Dual Supply Contracts"
Presentation by Feryal Erhun, Stanford
Oct. 5
We consider a single product, dual supply problem under a periodically-reviewed finite planning horizon. The downstream party, the manufacturer, receives supply from two upstream parties, local and global suppliers with consecutive lead times (i.e., the lead time of the global supplier is one period longer than that of the local supplier). The suppliers offer complementary contracts in terms of transfer prices and lead times; thus, the manufacturer faces a trade-off between the responsive local supplier and the cost-efficient global supplier. We model the manufacturer's problem in two stages: (i) she first chooses a portfolio of contracts (one from each supplier) and reserves capacity levels (at the prices specified by the contracts) for the whole planning horizon; (ii) she then orders from the suppliers according to the terms of the contracts chosen in the previous stage. In our second-stage problem, we prove that a two-level modified base-stock policy is optimal for a wide range of transfer prices. With various analytical results and numerical analysis, we illustrate how the optimal policy parameters change with respect to problem parameters. A reserve-up-to policy is shown to be optimal for the manufacturer's capacity reservation problem. We also develop a methodology that can be used to explain diverse sourcing strategies (such as in-house vs. offshore production) practiced by many companies in various industries.
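A simplified caricature of a two-level base-stock ordering rule for dual sourcing; the thresholds, capacities, and ordering logic here are invented, and the abstract's modified base-stock policy is more subtle than this sketch:

```python
def order_quantities(inventory_position, s_local, s_global, cap_local, cap_global):
    """Order up toward two base-stock levels: first from the responsive
    local supplier up to s_local, then from the cheaper global supplier
    up to the higher level s_global, each capped by reserved capacity."""
    q_local = min(max(s_local - inventory_position, 0), cap_local)
    position = inventory_position + q_local
    q_global = min(max(s_global - position, 0), cap_global)
    return q_local, q_global

# With inventory position 3, the rule tops up to 5 locally, then toward 12
# globally, subject to the reserved capacities.
print(order_quantities(3, s_local=5, s_global=12, cap_local=4, cap_global=5))
```

The first-stage capacity reservation would choose `cap_local` and `cap_global` (and the contracts behind them) anticipating how this second-stage rule uses them.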
"The Martingale Approach to Operational and Financial Hedging"
Talk by Rene Caldentey, NYU
Oct. 19
We consider the problem of maximizing the profits of a corporation when these profits depend in part on movements in the financial markets and/or economic indices. We propose a methodology for the optimal selection of dynamic operating and financial hedging strategies when the decision maker is risk averse or budget constrained.
Risk aversion is imposed through constraints on the feasible policies, such as VaR, CVaR and budget constraints, among others. We apply our methodology to some standard operations problems including the popular newsvendor model and a supply chain procurement/inventory problem. We also identify circumstances in which the risk management constraints can effectively be ignored when solving for the optimal operating policy.
"Maximizing Queueing Network Utility Subject to Stability"
Talk by Sasha Stolyar, Bell Laboratories
Oct. 26 at 11:00 AM, Jacobs Center Rm. 1246
We study a model which accommodates a wide range of seemingly very different resource allocation problems in communication networks. Some examples: utility-based congestion control of complex time-varying (wireless) networks, minimizing average power consumption in wireless networks, and scheduling in wireless systems subject to power consumption and/or traffic rate constraints.
The model is a controlled queueing network, where controls have a dual effect. In addition to determining exogenous customer arrival rates, service rates at the nodes, and (possibly random) routing of customers among the nodes, each control decision produces a certain vector of "commodities." The set of available control choices depends on the underlying random network mode. Network "utility" is a concave function of the vector of long-term average rates at which commodities are produced. The goal is to maximize utility while keeping the network queues stable. We introduce a very parsimonious dynamic control policy, called the Greedy Primal-Dual algorithm, and prove its asymptotic optimality. Although the model is formulated in terms of a queueing network, the algorithm can be viewed as a dynamic mechanism for solving rather general convex optimization problems.
"Self-Organizing Cyclic Logistics Systems"
Talk by Don Eisenstein, University of Chicago
Nov. 9 at 1:00 PM, Jacobs Center Rm. G05
A self-organizing system is one in which the actions of decentralized entities combine to elicit stable global behavior. We seek rules that make systems "gravitate" to a balance point after a shock or perturbation.
We review our ideas on self-balancing production lines, and how they can lead to newer models of self-organization of cyclic systems.
"Optimal Vote Trading"
Talk by Dave Hartvigsen, Notre Dame
Nov. 30 at 11:00 AM, Jacobs Center Rm. 561
During the 2000 U.S. Presidential race an apparently new idea, called vote trading, was introduced to help one of the two major-party candidates (Gore) win. The idea was, through an Internet mechanism, to induce voters who supported a minor-party candidate (Nader) to vote for Gore in states where this would help Gore and to induce an equal number of voters who supported Gore to vote for Nader in states where this would not hurt Gore. Thus Nader would receive the same number of popular votes as he would have received without the trading (providing an incentive for Nader voters to participate). Vote trading was implemented at a number of Web sites in 2000 (and again in 2004). In this talk, we formalize this idea, present several variations, and present an optimal way for Web sites to implement it (so as to best help the major-party candidate get elected) in both deterministic and stochastic settings.