Abstract: We study the optimal design of a menu of funds by a manager who is required to use linear pricing and does not observe the beliefs of investors regarding one of the risky assets. The optimal menu involves bundling of assets and can be explicitly constructed from the solution to a calculus of variations problem that optimizes over the indirect utility that each type of investor receives. We provide a complete characterization of the optimal menu and show that the need to maintain incentive compatibility leads the manager to offer funds that are inefficiently tilted towards the asset that is not subject to the information friction.
The talk is based on joint works with Julien Hugonnier.
Title: Entropy Regularization, Boltzmann Exploration, and Langevin Diffusions
Abstract: Many optimization models suffer from the same problem of getting stuck in suboptimal traps, such as over-fitted solutions in multi-armed bandit problems and local optima in nonconvex optimization. A way out is to engage in exploration, broadening the search space by randomizing the actions/controls. We provide a theoretical foundation for Boltzmann exploration, a heuristic widely employed in reinforcement learning, by solving an entropy-regularized optimal stochastic relaxed control problem. We then apply the general results to temperature control for Langevin diffusions in the context of nonconvex optimization. We derive a state-dependent, truncated exponential distribution from which temperatures can be sampled in a Langevin algorithm. Numerical experiments indicate promising performance compared with existing algorithms.
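For readers who want to experiment, here is a minimal sketch of a Langevin loop with sampled temperatures. It is not the speakers' algorithm: the objective, the temperature interval, and the state-dependent rate of the truncated exponential below are ad hoc choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Tilted double well: the global minimum sits in the left well, near x = -1."""
    return (x**2 - 1.0)**2 + 0.3 * x

def grad_f(x):
    return 4.0 * x * (x**2 - 1.0) + 0.3

def sample_trunc_exp(rate, t_min, t_max):
    """Inverse-CDF sample from an exponential law truncated to [t_min, t_max]."""
    u = rng.uniform()
    z = 1.0 - np.exp(-rate * (t_max - t_min))
    return t_min - np.log(1.0 - u * z) / rate

def langevin(x0=0.5, n_steps=20000, step=5e-3, t_min=0.05, t_max=2.0):
    x, best_x, best_val = x0, x0, f(x0)
    for _ in range(n_steps):
        # Ad hoc state-dependent rate: steep gradient -> high rate -> cool
        # temperatures (exploit); flat gradient -> hot temperatures (explore).
        temp = sample_trunc_exp(1.0 + abs(grad_f(x)), t_min, t_max)
        x = x - step * grad_f(x) + np.sqrt(2.0 * temp * step) * rng.normal()
        if f(x) < best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

best_x, best_val = langevin()
```

Started in the shallow right well, the sampled temperatures let the chain cross the barrier at x = 0 and locate the deeper left well; a constant low temperature would typically leave it trapped.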
Speaker: Martin Larsson (Carnegie Mellon University)
Title: Finance and Statistics: Trading Analogies for Sequential Learning
Abstract: The goal of sequential learning is to draw inference from data that is gathered gradually through time. This is a typical situation in many applications, including finance. A sequential inference procedure is ‘anytime-valid’ if the decision to stop or continue an experiment can depend on anything that has been observed so far, without compromising statistical error guarantees. A recent approach to anytime-valid inference views a test statistic as a bet against the null hypothesis. These bets are constrained to be supermartingales – hence unprofitable – under the null, but designed to be profitable under the relevant alternative hypotheses. This perspective opens the door to tools from financial mathematics. In this talk I will discuss how notions such as supermartingale measures, log-optimality, and the optional decomposition theorem shed new light on anytime-valid sequential learning. (This talk is based on joint work with Wouter Koolen (CWI), Aaditya Ramdas (CMU) and Johannes Ruf (LSE).)
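As a toy illustration of the betting viewpoint (a textbook likelihood-ratio test martingale, not one of the constructions from the talk), consider testing whether a coin is fair by repeatedly betting as if heads had probability q:

```python
import numpy as np

rng = np.random.default_rng(1)

def bet_against_fair_coin(flips, q=0.7, alpha=0.05):
    """Wealth of a bettor who wagers as if P(heads) = q, starting from 1.
    Under H0 (fair coin) the wealth is a nonnegative martingale, so by
    Ville's inequality P(sup_n wealth_n >= 1/alpha) <= alpha: we may stop
    and reject as soon as wealth reaches 1/alpha, at any data-dependent time."""
    wealth = 1.0
    for n, heads in enumerate(flips, start=1):
        wealth *= (q if heads else 1.0 - q) / 0.5
        if wealth >= 1.0 / alpha:
            return n, wealth  # reject H0; the guarantee is anytime-valid
    return None, wealth       # never rejected

flips = rng.uniform(size=500) < 0.7   # the coin actually has P(heads) = 0.7
n_reject, wealth = bet_against_fair_coin(flips)
```

Under this alternative the log-wealth grows at rate KL(0.7 || 0.5) per flip, so the bet typically becomes profitable, and the test rejects, within a few dozen flips.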
Speaker: Elisa Alos (Barcelona Graduate School of Economics)
Title: On the difference between volatility swaps and the ATM implied volatility
Abstract: This talk focuses on the difference between the fair strike of a volatility swap and the at-the-money implied volatility (ATMI) of a European call option. It is well known that the difference between these two quantities converges to zero as the time to maturity decreases. In this talk, we make use of a Malliavin calculus approach to derive an exact expression for this difference. This representation allows us to establish that the order of convergence is different in the correlated and uncorrelated cases, and that it depends on the behavior of the Malliavin derivative of the volatility process. In particular, we see that for volatilities driven by a fractional Brownian motion, this order depends on the corresponding Hurst parameter H. Moreover, in the case H ≥ 1/2, we develop a model-free approximation formula for the volatility swap in terms of the ATMI and its skew.
(Joint work with Kenichiro Shiraya)
Thursday, 22 October 2020, 19:00 CEST (Central European Summer Time)
Speaker: Damir Filipović (EPFL and Swiss Finance Institute)
Title: Machine Learning With Kernels for Portfolio Valuation and Risk Management
Abstract: We introduce a simulation method for dynamic portfolio valuation and risk management that builds on machine learning with kernels. We learn the dynamic value process of a portfolio from a finite sample of its cumulative cash flow. Thanks to a suitable choice of kernel, the learned value process is given in closed form. We show asymptotic consistency and derive finite-sample error bounds under conditions suitable for finance applications. Numerical experiments show good results in large dimensions for a moderate training sample size.
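The flavor of the approach can be conveyed with a one-dimensional toy example; everything below (the model, the Gaussian kernel, the regularization) is an illustrative stand-in, not the kernel construction or portfolio of the paper. We regress a single noisy cash-flow sample per state on the initial state with kernel ridge regression, which yields a value function in closed form as a kernel expansion:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy portfolio: one asset under driftless GBM, cash flow = call payoff at T.
# Target: V(x) = E[(S_T - K)^+ | S_0 = x], learned from one sample per state.
n, sigma, T, K = 400, 0.2, 1.0, 1.0
x = rng.uniform(0.5, 1.5, size=n)
s_T = x * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * rng.normal(size=n))
y = np.maximum(s_T - K, 0.0)

def kern(a, b, bw=0.2):
    """Gaussian kernel matrix between two 1-d point sets."""
    return np.exp(-((a[:, None] - b[None, :])**2) / (2.0 * bw**2))

lam = 1e-2
alpha = np.linalg.solve(kern(x, x) + lam * n * np.eye(n), y)  # ridge weights

def value(x_new):
    """Learned value function, in closed form as a kernel expansion."""
    return kern(np.atleast_1d(x_new), x) @ alpha
```

At x = 1 the learned value lands near the Black-Scholes price (about 0.08 for these parameters), and the estimate is increasing in the initial state, as a call value should be.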
Thursday, 08 October 2020, 19:00 CEST (Central European Summer Time)
Title: Backward propagation of chaos and large population games asymptotics
Abstract: In this talk we will present a generalization of the theory of propagation of chaos to backward (weakly) interacting diffusions. The focus will be on cases allowing for explicit convergence rates and concentration inequalities in Wasserstein distance for the empirical measures. As the main application, we derive results on the convergence of large population stochastic differential games to mean field games, both in the Markovian and the non-Markovian cases. The talk is based on joint works with M. Laurière and Dylan Possamaï.
Thursday, 24 September 2020, 19:00 CEST (Central European Summer Time)
Title: Universality of affine and polynomial processes
Abstract: We elaborate on universal properties of affine and polynomial processes. In several recent works we have shown that many models which are at first sight not recognized as affine or polynomial can nevertheless be embedded in this framework. For instance, essentially all examples of (rough) stochastic volatility models can be viewed as (infinite-dimensional) affine or polynomial processes. Moreover, all well-known measure-valued diffusions, such as the Fleming–Viot process, the super-Brownian motion, and the Dawson–Watanabe superprocess, are affine or polynomial. This suggests an inherent universality of these model classes. We try to make this mathematically precise by showing that generic classes of diffusion models are projections of infinite-dimensional affine processes (which in this setup coincide with polynomial processes). A key ingredient in establishing this result is the signature process, well known from rough path theory.
The talk is based on joint works with Sara Svaluto-Ferro and Josef Teichmann.
Thursday, 10 September 2020, 19:00 CEST (Central European Summer Time)
Title: The Joint S&P 500/VIX Smile Calibration Puzzle Solved
Abstract: Since VIX options started trading in 2006, many researchers have tried to build a model that jointly and exactly calibrates to the prices of S&P 500 (SPX) options, VIX futures, and VIX options. So far the best attempts, which used parametric continuous-time jump-diffusion models on the SPX, could only produce approximate fits. In this talk we solve this longstanding puzzle using a completely different approach: a nonparametric discrete-time model. The model is cast as a dispersion-constrained martingale transport problem, which is solved using the Sinkhorn algorithm. We prove by duality that the existence of such a model means that the SPX and VIX markets are jointly arbitrage-free; the algorithm identifies joint SPX/VIX arbitrages should they arise. Our numerical experiments show that the algorithm performs very well in both low and high volatility environments. Finally, we briefly discuss: (i) how our technique extends to continuous-time stochastic volatility models; (ii) a remarkable feature of the SPX and VIX markets, the inversion of convex ordering, and how classical stochastic volatility models can reproduce it; (iii) why, due to this inversion of convex ordering, and contrary to what has often been stated, the Dupire local volatility model does not maximize the price of VIX futures among the continuous stochastic volatility models calibrated to the market smile.
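The dispersion-constrained martingale transport problem of the talk carries extra (martingale and dispersion) constraints, but the engine behind it can be illustrated on plain entropic optimal transport, where Sinkhorn's algorithm alternately rescales rows and columns of a Gibbs kernel to match the two marginals:

```python
import numpy as np

def sinkhorn(mu, nu, cost, eps=0.05, n_iter=500):
    """Entropic OT between discrete marginals mu and nu via Sinkhorn scaling."""
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)           # rescale to match column marginals
        u = mu / (K @ v)             # rescale to match row marginals
    return u[:, None] * K * v[None, :]

x = np.linspace(0.0, 1.0, 5)
mu = np.full(5, 0.2)                          # uniform source distribution
nu = np.array([0.1, 0.2, 0.4, 0.2, 0.1])      # peaked target distribution
P = sinkhorn(mu, nu, (x[:, None] - x[None, :])**2)
```

The returned coupling P has row sums equal to mu (exactly, since u is updated last) and column sums converging to nu as the iterations proceed; a smaller eps gives a less diffuse plan at the cost of slower convergence.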
Short bio: Julien Guyon is a senior quantitative analyst in the Quantitative Research group at Bloomberg L.P., New York. He is also an adjunct professor in the Department of Mathematics at Columbia University and at the Courant Institute of Mathematical Sciences, NYU. Before joining Bloomberg, Julien worked in the Global Markets Quantitative Research team at Société Générale in Paris for six years (2006-2012), and was an adjunct professor at Université Paris Diderot and Ecole des Ponts ParisTech. He co-authored the book Nonlinear Option Pricing (Chapman & Hall, CRC Financial Mathematics Series, 2014) with Pierre Henry-Labordère. His main research interests include nonlinear problems, volatility and correlation modeling, and numerical probabilistic methods. Julien holds a Ph.D. in Probability Theory and Statistics from Ecole des Ponts ParisTech. He graduated from Ecole Polytechnique (Paris), Université Pierre-et-Marie-Curie (Paris), and Ecole des Ponts ParisTech. A big soccer fan, Julien has also developed a strong interest in sports analytics, and has published several articles on the FIFA World Cup, the UEFA Champions League, and the UEFA Euro in top-tier newspapers such as The New York Times, The Times, Le Monde, and El Pais. Some of his suggestions for draws and competition formats have already been implemented by FIFA and UEFA.
Thursday, 16 July 2020, 19:00 CEST (Central European Summer Time)
Speaker: Xin Guo (University of California Berkeley)
Title: Understanding GANs through MFGs and SDEs approximations
Abstract: Generative Adversarial Networks (GANs) have enjoyed great empirical success, especially in image generation and processing. There has been a recent surge of interest in applying GANs to financial problems, including asset pricing, portfolio optimization, and multi-agent market simulation. In this talk, we will discuss some recent progress in the mathematical understanding of GANs. The first topic is the theoretical connection between GANs and mean field games (MFGs): interpreting MFGs as GANs, on one hand, allows us to devise GAN-based algorithms for solving high-dimensional MFGs; interpreting GANs as MFGs, on the other hand, provides a new, probabilistic foundation for GANs. The second topic is the approximation of GAN training by SDEs. This SDE approximation provides, for the first time, an analytical tool for resolving some well-recognized issues with GAN training in the machine learning community.
Thursday, 2 July 2020, 19:00 CEST (Central European Summer Time)
Title: Environmental Impact Investing: how green-minded investors spur companies to reduce their emissions
Abstract: We develop a dynamic equilibrium model to explain how green investing spurs companies to reduce their greenhouse gas emissions by raising their cost of capital. In the model, two groups of CARA investors with different views on future environmental risks determine the cost of capital for a group of polluting companies, which then play an emission-reduction game to maximize their profits. As a result, companies’ emissions decrease when the proportion of green investors and their environmental stringency increase, as well as when the marginal abatement cost decreases. However, heightened uncertainty about future environmental risks alleviates the pressure on the cost of capital for the most carbon-intensive companies and pushes them to increase their emissions. Consistent with the nature of environmental risks, this uncertainty is modeled as non-Gaussian. We provide empirical evidence supporting our results by focusing on United States stocks and using green fund holdings between 2006 and 2018 to proxy for green investors’ beliefs. When the fraction of assets managed by green investors doubles, companies’ carbon intensity drops by 5% per year.
Joint work with Tiziano De Angelis and Olivier David Zerbib.
Thursday, 18 June 2020, 19:00 CEST (Central European Summer Time)
Title: Is there a Golden Parachute in Sannikov’s principal-agent problem?
Abstract: This paper provides a complete review of the continuous-time optimal contracting problem introduced by Sannikov (2008), in an extended context allowing for possibly different discount factors for the two parties. The agent's problem is to choose the optimal effort, given the compensation scheme proposed by the principal over a random horizon. Then, given the agent's optimal response, the principal determines the best compensation scheme in terms of running payment, retirement, and a lump-sum payment at retirement.
A Golden Parachute is a situation where the agent ceases all effort at some positive stopping time and receives a payment afterwards, possibly consisting of a lump sum and/or a continuous stream of payments. We show that a Golden Parachute only exists in certain specific circumstances. This contrasts with the results claimed by Sannikov (2008), where the only requirement is a positive marginal cost of effort for the agent at zero. Namely, we show that there is no Golden Parachute if this parameter is too large. Similarly, in the context of a concave marginal utility, there is no Golden Parachute if the curvature of the agent's utility function at zero is too negative.
In the general case, we provide a rigorous analysis of this problem and prove that an agent with positive reservation utility is either never retired by the principal, or retired above some given threshold (as in Sannikov (2008)'s solution). In particular, different discount factors naturally induce a face-lifted utility function, which allows us to reduce the whole analysis to the setting of equal discount factors. Finally, we also confirm that an agent with small reservation utility does have an informational rent, meaning that the principal optimally offers him a contract with strictly higher utility value.
Thursday, 4 June 2020, 19:00 CEST (Central European Summer Time)
Title: Data driven robustness and uncertainty sensitivity analysis
Abstract: In this talk, I will showcase how methods from optimal transport and distributionally robust optimisation allow one to capture and quantify sensitivity to model uncertainty for a myriad of problems. We consider a generic stochastic optimisation problem. This could be a mean-variance or utility-maximisation portfolio allocation problem, an optimised certainty equivalent or risk measure computation, a standard regression or a deep learning problem. At the heart of the optimisation is a probability measure, or model, which describes the system. It could come from data, simulation or a modelling effort, but there is always a degree of uncertainty about it. We take a non-parametric approach and capture model uncertainty using Wasserstein balls around the postulated measure. Our main results provide explicit formulae for the first-order correction to both the value function and the optimiser. We further extend our results to optimisation under linear constraints. Our sensitivity analysis of distributionally robust optimisation problems finds applications in statistics, machine learning, mathematical finance and uncertainty quantification. In the talk, I will discuss several financial examples anchored in a one-step financial model and compute their sensitivity to model uncertainty. These will include: option pricing, mean-variance portfolio selection, optimised certainty equivalents and similar risk assessments, as well as a robust version of Davis' marginal utility option pricing. I will also briefly discuss other applications, such as explicit formulae for first-order approximations of square-root LASSO and square-root Ridge optimisers, and measures of neural network architecture robustness with respect to adversarial data. I will also showcase the link with building data-driven estimators of risk measures. Talk based on joint works with Daniel Bartl, Samuel Drapeau and Johannes Wiesel.
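The shape of the first-order correction can be checked numerically in the simplest possible case. For V(δ) = sup over the 2-Wasserstein ball of radius δ around μ of E[f(X)], results in this line of work give (under smoothness conditions) the derivative V'(0) = E_μ[|∇f(X)|²]^(1/2); the toy instance below (f(x) = x², μ = N(0,1), where the worst case is simply the scaled law of (1+δ)X) is our own illustration, not an example from the talk:

```python
import numpy as np

rng = np.random.default_rng(3)

f = lambda x: x**2
fp = lambda x: 2.0 * x                  # gradient of f

x = rng.normal(size=200_000)            # Monte Carlo sample from mu = N(0, 1)

# Predicted first-order sensitivity: E[|f'(X)|^2]^(1/2), which equals 2 here.
upsilon = np.sqrt(np.mean(fp(x)**2))

# For f(x) = x^2 the worst case in the W2-ball is the comonotone scaling
# Y = (1 + delta) X, so V(delta) = (1 + delta)^2 E[X^2] exactly.
delta = 0.05
fd_slope = (np.mean(f((1.0 + delta) * x)) - np.mean(f(x))) / delta
```

The finite-difference slope (about 2.05 at δ = 0.05, since V is quadratic in δ) matches the predicted sensitivity of 2 to first order.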
Thursday, 21 May 2020, 19:00 CEST (Central European Summer Time)
Speaker: Paul Embrechts (Professor emeritus ETH Zurich)
Title: Operational Risk revisited: from Basel to the coronavirus
Abstract: In the company of market and credit risk, from a more mathematical point of view, operational risk has always been viewed as the “little brother or sister”. And yet, as the 2007-2009 Financial Crisis has shown, and as we will no doubt find out from the coronavirus crisis sometime in the future, operational risk is an eminently important, and surely technically demanding, risk category within the regulatory frameworks of insurance (Solvency II, say) and banking (the various Basel frameworks). In this talk I will sketch some of its history and some of the main technical modeling tools used, and comment on methodological ways forward. I will also discuss some of the early lessons (hopefully) learned from the current coronavirus pandemic.
Thursday, 7 May 2020, 19:00 CEST (Central European Summer Time)
Title: Super-Heston rough volatility, the Zumbach effect, and Guyon’s conjecture
Abstract: The rough Heston model is known to reproduce accurately the behavior of historical volatility time series as well as the dynamics of the implied volatility surface. However, some argue that actual volatility tails are even fatter than those generated by the rough Heston model. Furthermore, the model fails to reproduce a very subtle property of historical data referred to as the Zumbach effect. In this talk we address these two concerns by introducing so-called super-Heston rough volatility models. It turns out that these models enable us to jointly calibrate the SPX and VIX implied volatility surfaces, hence providing a counter-example to a long-standing conjecture by Julien Guyon. (This is joint work with Aditi Dandapani, Jim Gatheral and Paul Jusselin.)
Thursday, 23 April 2020, 19:00 CEST (Central European Summer Time)