Article

Gaussian Agency problems with memory and Linear Contracts

Eduardo Abi Jaber and Stéphane Villeneuve

Abstract

Can a principal still offer optimal dynamic contracts that are linear in end-of-period outcomes when the agent controls a process that exhibits memory? We provide a positive answer by considering a general Gaussian setting in which the output dynamics are not necessarily semimartingales or Markov processes. We introduce a rich class of principal–agent models that encompasses dynamic agency models with memory. From a mathematical point of view, we show how contracting problems with Gaussian Volterra outcomes can be transformed into problems with semimartingale outcomes by a change of variables, which allows the use of the martingale optimality principle. Our main contribution is to show that, for one-dimensional models, this setting always admits optimal linear contracts in end-of-period observable outcomes with a deterministic optimal level of effort. In higher dimensions, we show that linear contracts remain optimal when the effort cost function is radial, and we quantify the gap between linear contracts and optimal contracts for more general quadratic effort costs.
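To fix ideas, the objects described above can be sketched as follows. This is an illustrative formulation only: the kernel notation $K$, the effort process $\alpha$, and the contract coefficients $a$, $b$ are generic Gaussian Volterra conventions, not taken from the paper.

```latex
% Gaussian Volterra output controlled by the agent's effort \alpha;
% when the kernel K(t,s) depends on both arguments, X exhibits memory
% and is in general neither a semimartingale nor a Markov process:
X_t = \int_0^t K(t,s)\,\alpha_s\,\mathrm{d}s + \int_0^t K(t,s)\,\mathrm{d}W_s .

% A contract that is linear in the end-of-period observable outcome X_T:
\xi = a + b\,X_T ,
% with deterministic coefficients a, b chosen by the principal.
```

The abstract's claim is that, in one dimension, a contract of this linear form attains the principal's optimum despite the memory in $X$.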

Keywords

Principal–agent models; Continuous-time control problems

JEL codes

  • C61: Optimization Techniques • Programming Models • Dynamic Analysis
  • C73: Stochastic and Dynamic Games • Evolutionary Games • Repeated Games

Replaces

Eduardo Abi Jaber and Stéphane Villeneuve, Gaussian Agency problems with memory and Linear Contracts, TSE Working Paper, n. 22-1363, September 2022.

Reference

Eduardo Abi Jaber and Stéphane Villeneuve, Gaussian Agency problems with memory and Linear Contracts, Finance and Stochastics, September 2024.

Published in

Finance and Stochastics, September 2024