Robert L. Bray |
Kellogg School of Management Research Interests Supply Chain Management and Empirical Operations Management Curriculum Vitae |
Articles |
Abstract: This work distinguishes between two related concepts---the bullwhip effect and production smoothing. These phenomena appear antithetical because they imply opposing empirical tests: production variability exceeding sales variability for bullwhip, and the reverse for smoothing. But this is a false dichotomy. We differentiate between the two with a new production smoothing measure, which estimates how much more volatile production would be absent production volatility costs. We apply this metric to an automotive manufacturing sample comprising 162 car models. We find that 75% of our sample smooths production by at least 5%, even though 99% exhibits the bullwhip effect; indeed, we estimate both a strong bullwhip (on average, production is 220% as variable as sales) and a strong degree of smoothing (on average, production would be 22% more variable without deliberate stabilization). We find that firms smooth both production variability and production uncertainty. We measure production smoothing with a structural econometric production scheduling model based on the Generalized Order-Up-To Policy.
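The classic empirical test the abstract mentions can be sketched as a simple variance ratio. The series below are hypothetical numbers for illustration only; the paper's structural smoothing measure is far richer than this ratio.

```python
import statistics

def bullwhip_ratio(production, sales):
    """Classic bullwhip test: variance of production over variance of sales.
    A ratio above 1 signals amplification (bullwhip); below 1 signals smoothing."""
    return statistics.pvariance(production) / statistics.pvariance(sales)

# Hypothetical monthly series for one car model (illustrative numbers only).
sales = [100, 110, 95, 105, 90, 120]
production = [100, 125, 80, 115, 75, 135]  # amplifies the sales swings

print(round(bullwhip_ratio(production, sales), 2))
```

Note that this test alone cannot detect smoothing in a series that also exhibits bullwhip, which is exactly the false dichotomy the paper addresses.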
Abstract: We model how a judge schedules cases as a multi-armed bandit problem. The model indicates that a first-in-first-out (FIFO) scheduling policy is optimal when the case completion hazard rate function is monotonic. But there are two ways to implement FIFO in this context: at the hearing level or at the case level. Our model indicates that the former policy, prioritizing the oldest hearing, is optimal when the case completion hazard rate function decreases, and the latter policy, prioritizing the oldest case, is optimal when the case completion hazard rate function increases. This result convinced six judges of the Roman Labor Court of Appeals---a court that exhibits increasing hazard rates---to switch from hearing-level FIFO to case-level FIFO. Tracking these judges for eight years, we estimate that our intervention decreased the average case duration by 12% and the probability of a decision being appealed to the Italian supreme court by 3.8%, relative to a 44-judge control sample.
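The two FIFO variants can be illustrated with a toy scheduler. The field names and docket below are hypothetical stand-ins, not the paper's data or implementation.

```python
def next_case(cases, hazard_increasing):
    """Pick the next case to hear under the two FIFO variants in the model.

    Each case is a dict with 'filed' (case age proxy) and 'last_hearing'
    (pending-hearing age proxy); smaller means earlier/older. Field names
    are illustrative assumptions.
    """
    if hazard_increasing:
        # Case-level FIFO: serve the oldest case (earliest filing).
        return min(cases, key=lambda c: c['filed'])
    # Hearing-level FIFO: serve the oldest pending hearing.
    return min(cases, key=lambda c: c['last_hearing'])

docket = [
    {'id': 'A', 'filed': 1, 'last_hearing': 9},
    {'id': 'B', 'filed': 5, 'last_hearing': 2},
]
print(next_case(docket, hazard_increasing=True)['id'])   # oldest case: 'A'
print(next_case(docket, hazard_increasing=False)['id'])  # oldest hearing: 'B'
```

The Roman Labor Court of Appeals exhibits increasing hazard rates, so the model prescribes the first branch: prioritize the oldest case.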
Abstract: We model a manufacturer's and regulator's joint recall decisions as an asymmetric dynamic discrete choice game. In each quarter, the agents observe a product's defect reports and update their beliefs about its failure rate. The agents face an optimal stopping problem: they decide whether to recall the product, trading off current recall costs against future failure costs in a Markov-perfect equilibrium. We estimate our model with auto industry data comprising 14,124 recalls and 976,062 defect reports. We reverse-engineer the structural primitives that underlie our model: (i) the evolution of the failure rates, and (ii) the failure and recall cost parameters. Since our model is a regenerative process---a recall resets the future failure rate---we implement a myopic policy estimator to circumvent the curse of dimensionality. Our counterfactual study establishes that both agents initiate recalls to avoid future failures but not to preempt the other agent's anticipated recalls. Indeed, we find that the regulator's recalls have no significant deterrence effect on manufacturers.
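One simple way such quarterly belief updates could work is a conjugate Gamma-Poisson filter; the paper's actual belief dynamics, parameters, and units may differ, so treat this as an assumption-laden sketch.

```python
def update_gamma_belief(alpha, beta, reports, exposure):
    """Conjugate Gamma-Poisson update: one way an agent's belief about a
    product's failure rate could be revised each quarter as defect reports
    arrive. Illustrative only; not the paper's specification."""
    return alpha + reports, beta + exposure

# Hypothetical prior: mean failure rate alpha/beta = 2/1000 per unit-quarter.
alpha, beta = 2.0, 1000.0
# One quarter brings 7 defect reports over 1500 units of exposure.
alpha, beta = update_gamma_belief(alpha, beta, reports=7, exposure=1500)
print(alpha / beta)  # posterior mean failure rate
```

Under such a filter, a recall that "resets the future failure rate" would correspond to restarting the belief at its regeneration prior, which is what makes the process regenerative.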
Abstract: We find that auto parts are more reliable when automakers position their factories near component suppliers. Specifically, we estimate that scaling the distance between a component factory and vehicle assembly plant by an order of magnitude increases the part's expected failure rate by 1.36%. To establish this result, we use four independent automotive data sources, gleaning (i) auto part failure rates, (ii) upstream component factory locations, (iii) downstream assembly plant locations, and (iv) product-level links connecting the upstream and downstream factories. We observe 27,807 supply chains, comprising 175 assembly plants and 529 supplier factories.

Abstract: The empirical likelihood of a Markov decision process depends only on the differenced value function. And the differenced value function depends only on the payoffs received until the underlying Markov chain reaches its stationary distribution. Thus, whereas the value function converges with Bellman contractions at the rate of cash flow discounting, the differenced value function---and hence the empirical likelihood---converges at the rate of cash flow discounting times the rate of Markov chain mixing (the spectral sub-radius of the state transition matrix). I use this strong convergence result to speed up Rust's (1987) nested fixed point (NFXP) and Aguirregabiria and Mira's (2002) nested pseudo-likelihood (NPL) estimators. The approach is especially useful when estimating high-frequency models.

Abstract: I present two new dynamic program solution methods: endogenous value iteration and endogenous policy iteration. I define the exogenous space of a Markov decision process as the vector space comprising the set of value function deviations that never influence the policy function. The exogenous space always spans at least one dimension and spans more when the dynamic program incorporates exogenous state variables (variables the decision maker cannot influence). My algorithms iteratively apply Bellman contractions, projecting the portion of the value function that lies in the exogenous space to zero at each step. When the exogenous space is one-dimensional, my algorithms are equivalent to relative value iteration and relative policy iteration; but when the exogenous space spans more dimensions, my algorithms can be faster. Their solution times are proportional to the inverse logarithm of the largest state transition matrix eigenvalue whose corresponding eigenvector does not lie in the exogenous space. |
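In the one-dimensional case the last abstract identifies with relative value iteration, the projection simply re-anchors the value function at a reference state after each Bellman update. A minimal discounted-MDP sketch of that special case (the toy transition tensor and rewards are illustrative, not from the paper):

```python
import numpy as np

def relative_value_iteration(P, r, beta, tol=1e-10, max_iter=10_000):
    """Value iteration that re-anchors V at a reference state each step.

    Subtracting a constant from V shifts every action's value equally, so
    the greedy policy is unchanged; the constant vector spans the
    one-dimensional exogenous space, and the projection zeroes it out.
    P: (A, S, S) transition tensor; r: (A, S) rewards; beta: discount factor.
    """
    v = np.zeros(P.shape[1])
    for _ in range(max_iter):
        q = r + beta * P @ v           # (A, S) action values
        v_new = q.max(axis=0)
        v_new = v_new - v_new[0]       # project out the constant direction
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    policy = (r + beta * P @ v).argmax(axis=0)
    return v, policy

# Toy two-state, two-action MDP (illustrative numbers only).
P = np.array([[[0.9, 0.1], [0.1, 0.9]],   # action 0
              [[0.5, 0.5], [0.5, 0.5]]])  # action 1
r = np.array([[1.0, 0.0],
              [0.5, 0.5]])
v, policy = relative_value_iteration(P, r, beta=0.95)
print(v, policy)
```

The anchored iterates converge at the rate governing the differenced value function rather than the raw discount factor, which is the speedup both of these abstracts exploit.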
Teaching |