Abstract
Can a principal still offer optimal dynamic contracts that are linear in end-of-period outcomes when the agent controls a process that exhibits memory? We provide a positive answer by considering a general Gaussian setting where the output dynamics are not necessarily semimartingales or Markov processes. We introduce a rich class of principal-agent models that encompasses dynamic agency models with memory. From a mathematical point of view, we develop a methodology to deal with the possible non-Markovian and non-semimartingale nature of the control problem, which can no longer be solved directly by means of the usual Hamilton-Jacobi-Bellman equation. Our main contribution is to show that, for one-dimensional models, this setting always allows for optimal contracts that are linear in the end-of-period observable outcome, with a deterministic optimal level of effort. In higher dimensions, we show that linear contracts remain optimal when the effort cost function is radial, and we quantify the gap between linear contracts and optimal contracts for more general quadratic effort costs.
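As a purely illustrative sketch (not the authors' exact specification), one may picture the Gaussian output with memory as a moving-average (Volterra-type) process and the contract as an affine function of the terminal outcome:
\[
X_t = \int_0^t K(t,s)\,\big(\alpha_s\,ds + \sigma\,dW_s\big), \qquad \xi = a + b\,X_T,
\]
where $K$ is a deterministic kernel inducing memory (so that $X$ need not be a semimartingale or a Markov process), $\alpha$ denotes the agent's effort, $W$ is a Brownian motion, and $(a,b)$ are constants; the notation here is hypothetical and only meant to convey the form "linear in the end-of-period outcome $X_T$" described in the abstract.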
Keywords
Principal-Agent models; Continuous-time control problems
Replaced by
Eduardo Abi Jaber and Stéphane Villeneuve, “Gaussian Agency problems with memory and Linear Contracts”, Finance and Stochastics, September 2024.
Reference
Eduardo Abi Jaber and Stéphane Villeneuve, “Gaussian Agency problems with memory and Linear Contracts”, TSE Working Paper, n. 22-1363, September 2022.
Published in
TSE Working Paper, n. 22-1363, September 2022