Robert L. Bray 
Kellogg School of Management
Research Interests: Supply Chain Management and Empirical Operations Management
Curriculum Vitae
Articles 
Abstract: This work distinguishes between two related concepts: the bullwhip effect and production smoothing. These phenomena appear antithetical because they imply opposing empirical tests: production variability exceeding sales variability signals the bullwhip effect, and the reverse signals smoothing. But this is a false dichotomy. We differentiate between the two with a new production smoothing measure, which estimates how much more volatile production would be absent production volatility costs. We apply this metric to an automotive manufacturing sample comprising 162 car models. We find that 75% of our sample smooths production by at least 5%, even though 99% exhibits the bullwhip effect; indeed, we estimate both a strong bullwhip (on average, production is 220% as variable as sales) and a strong degree of smoothing (on average, production would be 22% more variable without deliberate stabilization). We find that firms smooth both production variability and production uncertainty. We measure production smoothing with a structural econometric production scheduling model based on the Generalized Order-Up-To Policy.
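The basic bullwhip test the abstract describes compares production (order) variability with sales variability. Below is a minimal synthetic sketch of that comparison; the i.i.d. demand, the smoothing constant, the lead time, and the order-up-to-with-exponentially-smoothed-forecast policy are all illustrative assumptions, not the paper's estimated model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic i.i.d. weekly sales -- illustrative numbers only.
sales = rng.normal(100, 10, size=500)

# An order-up-to policy driven by an exponentially smoothed demand
# forecast amplifies variability upstream: the classic bullwhip pattern.
alpha, lead_time = 0.3, 2
forecast = np.empty_like(sales)
forecast[0] = sales[0]
for t in range(1, len(sales)):
    forecast[t] = alpha * sales[t] + (1 - alpha) * forecast[t - 1]

# Order = current demand plus the lead-time-scaled forecast revision.
orders = sales[1:] + lead_time * (forecast[1:] - forecast[:-1])

ratio = orders.var() / sales[1:].var()
print(f"order variance / sales variance = {ratio:.2f}")  # > 1 indicates bullwhip
```

A ratio above 1 is the standard bullwhip diagnostic; the paper's point is that this ratio alone cannot reveal how much the firm is simultaneously smoothing relative to its unconstrained schedule.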
Abstract: We model how a judge schedules cases as a multi-armed bandit problem. The model indicates that a first-in-first-out (FIFO) scheduling policy is optimal when the case completion hazard rate function is monotonic. But there are two ways to implement FIFO in this context: at the hearing level or at the case level. Our model indicates that the former policy, prioritizing the oldest hearing, is optimal when the case completion hazard rate function decreases, and the latter policy, prioritizing the oldest case, is optimal when the case completion hazard rate function increases. This result convinced six judges of the Roman Labor Court of Appeals, a court that exhibits increasing hazard rates, to switch from hearing-level FIFO to case-level FIFO. Tracking these judges for eight years, we estimate that our intervention decreased the average case duration by 12% and the probability of a decision being appealed to the Italian supreme court by 3.8%, relative to a 44-judge control sample.
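The policy prescription above turns on whether the empirical case-completion hazard rate increases or decreases. A small sketch of how one might estimate a discrete-time hazard from case durations; the gamma-distributed durations are a synthetic stand-in for court data (gamma shape > 1 gives an increasing hazard, the regime in which the model favors case-level FIFO).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic case durations, measured in hearings -- illustrative only.
durations = np.ceil(rng.gamma(shape=3.0, scale=2.0, size=20_000)).astype(int)

def discrete_hazard(d, t):
    """P(case completes at hearing t | case still open at hearing t)."""
    at_risk = (d >= t).sum()
    return (d == t).sum() / at_risk if at_risk else float("nan")

hazards = [discrete_hazard(durations, t) for t in range(1, 11)]
print([f"{h:.3f}" for h in hazards])
# A rising hazard sequence would point toward prioritizing the oldest case.
```

In practice one would also handle censored cases (still open at the end of the sample), which this sketch omits.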
Abstract: I present two algorithms for solving dynamic programs with exogenous variables: endogenous value iteration and endogenous policy iteration. These algorithms are always at least as fast as relative value iteration and relative policy iteration, and are faster when the endogenous variables converge to their stationary distributions sooner than the exogenous variables do.

Abstract: We estimate the effect of supply chain proximity on product quality. Merging four automotive datasets, we create a supply chain sample that reports the failure rate of 27,807 auto components, the location of 529 upstream component factories, and the location of 275 downstream assembly plants. We find that defect rates are higher when the upstream and downstream factories are farther apart. Specifically, we estimate that increasing the distance between an upstream component factory and a downstream assembly plant by an order of magnitude increases the component's expected defect rate by 3.9%; we also find that quality improves more slowly across geographically dispersed supply chains. In addition, supply chain distance is more detrimental to quality when automakers produce early-generation models or high-end products, when they buy components with more complex configurations, or when they source from suppliers who invest relatively little in research and development.

Abstract: We model a manufacturer's and a regulator's joint recall decisions as an asymmetric dynamic discrete choice game. In each quarter, the agents observe a product's defect reports and update their beliefs about its failure rate. The agents face an optimal stopping problem: they decide whether or not to recall the product. The agents trade off current recall costs against future failure costs, and they respond to these intertemporal costs with Markov-perfect equilibrium strategies. We estimate our model with auto industry data comprising 14,124 recalls and 976,062 defect reports.
We reverse-engineer the structural primitives that underlie our model: (i) the evolution of the failure rates and (ii) the failure and recall cost parameters. Since our model is a regenerative process (a recall resets the future failure rate), we implement a myopic policy estimator to circumvent the curse of dimensionality. Our counterfactual study establishes that both agents initiate recalls to avoid future failures but not to preempt the other agent's anticipated recalls. Indeed, we find that the regulator's recalls have no significant deterrence effect on the manufacturers.
Abstract: I study how customers respond to operational transparency with parcel delivery data from Cainiao Network, the logistics arm of Alibaba. The sample describes 4.68 million deliveries. Each delivery has between four and ten track-package activities, which customers can check in real time, and a delivery service score, which customers leave after receiving the package. I show that delivery service scores are higher when the track-package activities cluster toward the end of the shipping horizon. For example, if a shipment lasts 100 hours, then delaying the time of the average activity from hour 20 to hour 80 increases the expected delivery score by approximately the same amount as expediting the package's arrival from hour 100 to hour 73.
Abstract: We study the supply chain implications of dynamic pricing. Specifically, we estimate how reducing menu costs (the operational burden of adjusting prices) would affect supply chain stability. We theorize that reducing menu costs would reduce the bullwhip effect by mitigating Lee et al.'s (1997) first bullwhip driver, demand signal processing. We test this prediction by fitting a structural econometric inventory model to data from a large Chinese supermarket. We estimate that removing menu costs would indeed stabilize the supply chain, but not by mitigating Lee et al.'s first bullwhip driver, as we had predicted; rather, it would do so by mitigating their third bullwhip driver, order batching. Specifically, we estimate that removing menu costs would cut the average batch size by 5.0%, which would decrease the average standard deviation of orders by 3.9%.
Abstract: A conditional choice probability (CCP) estimator of a dynamic empirical model solves both a dynamic programming problem and a maximum likelihood problem. The estimator can dispatch the former problem before tackling the latter when the utility function is linearly parameterized; otherwise, it must nest the former within the latter. This "nested fixed point" bogs down the estimator, requiring it to solve hundreds or thousands of difficult value function equations. I develop a method to disentangle the two problems under any utility function (and thus circumvent the onerous value function calculations). My estimator is asymptotically efficient and has a closed-form characterization when the utility function is linearly parameterized.
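The "difficult value function equations" in a nested fixed point are contraction fixed-point problems like the one below: a toy Rust-style machine-replacement problem whose logit expected-value function is solved by successive approximation. All parameters and the two-action setup are illustrative assumptions, not the paper's model; the point is that a nested estimator must repeat this inner solve at every trial parameter vector.

```python
import numpy as np

# Illustrative parameters for a keep-vs-replace problem.
beta, theta_maint, theta_replace = 0.95, 0.2, 5.0
S = 20  # mileage states 0..S-1

def ev_operator(ev):
    # Keep: pay a maintenance cost that grows with mileage, move up one state.
    u_keep = -theta_maint * np.arange(S)
    v_keep = u_keep + beta * ev[np.minimum(np.arange(S) + 1, S - 1)]
    # Replace: pay a fixed cost and reset to state 0.
    v_replace = -theta_replace + beta * ev[0]
    # Logit inclusive value (log-sum-exp over the two actions).
    return np.logaddexp(v_keep, v_replace)

# Successive approximation: iterate the contraction to a fixed point.
ev = np.zeros(S)
for _ in range(2000):
    ev_new = ev_operator(ev)
    if np.max(np.abs(ev_new - ev)) < 1e-10:
        break
    ev = ev_new
print("converged EV at state 0:", round(float(ev[0]), 4))
```

Because beta < 1 makes the operator a contraction, the iteration converges from any starting point; the cost the abstract highlights comes from repeating such solves inside the likelihood search.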
Abstract: Rust (1997) developed a randomized algorithm that can solve discrete-choice dynamic programs to any degree of accuracy in polynomial time, under some assumptions. Rust believed these assumptions to be relatively permissive, and thus suggested that randomness can be a powerful remedy against the curse of dimensionality of dynamic programs. However, I show that Rust's assumptions rule out all but a trivial class of dynamic programs. I argue, therefore, that we should not consider randomness a countermeasure against the curse of dimensionality.
Abstract: Arlotto and Gurvich (2019) showed that the regret in the multi-secretary problem is uniformly bounded in the number of job openings, n, and the number of applicants, k, provided that the applicant valuations are drawn from a distribution with finite support. I show that this result does not hold when the applicant valuations are drawn from a standard uniform distribution. In this case, the regret lies between (log(n) - 2)/32 and log(n) + 1 when k = n/2 and n > 3.
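To give a sense of the quantities involved, the sketch below compares the hindsight optimum (hire the k best applicants ex post) with a simple adaptive-threshold heuristic. The heuristic is an illustrative assumption, not the optimal policy the bounds describe, so its average regret can only overstate the optimal regret; the printed bracket shows the bounds themselves.

```python
import numpy as np

rng = np.random.default_rng(2)

def hindsight_value(vals, k):
    # Offline optimum: hire the k highest-valued applicants.
    return np.sort(vals)[-k:].sum()

def threshold_policy_value(vals, k):
    # Illustrative heuristic: accept an applicant whose U(0,1) value exceeds
    # the quantile implied by the remaining openings-to-applicants ratio.
    total, n_remaining = 0.0, len(vals)
    for v in vals:
        if k > 0 and v >= 1 - k / n_remaining:
            total += v
            k -= 1
        n_remaining -= 1
    return total

n, k, trials = 1000, 500, 2000
regrets = []
for _ in range(trials):
    v = rng.uniform(size=n)
    regrets.append(hindsight_value(v, k) - threshold_policy_value(v, k))
regret = float(np.mean(regrets))

lower, upper = (np.log(n) - 2) / 32, np.log(n) + 1
print(f"heuristic regret ~= {regret:.2f}; "
      f"optimal-regret bounds: [{lower:.2f}, {upper:.2f}]")
```

Per-trial regret is nonnegative by construction, since no online policy can beat the hindsight optimum.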
