Bill Moreland, BankRegData.com

Bill Moreland is a partner in a company/website called BankRegData.com. The Austin, Texas company BankRegData started in 2010 to provide a web interface to aggregated Federal Deposit Insurance Corporation (FDIC) insured bank Quarterly Call Report data and additionally offer a narrative/interpretation on the bank book dynamics (see 2016 Q4 Asset Review or Winners & Losers: Citigroup). Since 2012 this data has been taken from the FFIEC Central Data Repository. FFIEC is the acronym for the Federal Financial Institutions Examination Council. The FFIEC is an interagency standards committee across five banking regulators: the Federal Reserve Board of Governors, the Federal Deposit Insurance Corporation, the National Credit Union Administration, the Office of the Comptroller of the Currency, and the Consumer Financial Protection Bureau (note the Securities and Exchange Commission is not one of the FFIEC regulators). What BankRegData does is aggregate, and to some degree normalize, the Quarterly Call Report data. BankRegData will get you (to a good first approximation) the distribution of securities (deposits, mortgages, credit card accounts, loans, and tradable securities) at a quarterly aggregation level across the banking book of the major Bank Holding Companies. Jesse Eisinger outlined some of the problems with the consistency of this data relative to Wells Fargo SEC filings (for example) in a 2011 DealBook article, Tackling Reams of Bank Data Can Take Diligence, and Trust. Obvious error sources include: correct and complete linkage of banks to BHCs, correct and complete enumeration of BHC accrual portfolios, incomplete individual security information, and aggregated/individual security cash flow quantitative modeling error. On the other hand, BankRegData is placing the location of $16.779T of assets that do not move all that much and that produce cash flows, according to fairly standard approximation algorithms, off a relatively small set of market and econometric levels. Think of it like Thorp counting cards at the casino back in the day, but in this new case of "allBHC" simulation the casino has limited ability to shuffle the cards or even change the decks as play continues. BankRegData is counting the cards for you in allBHC (with the assistance of the FFIEC and each of the individual banks) and gives you a web interface to check the current and historical card counts.
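To make the aggregation step (and the first error source, bank-to-BHC linkage) concrete, here is a minimal pandas sketch. It is not BankRegData's actual pipeline, and the file names and column names (RSSD_ID, PARENT_RSSD_ID, TOTAL_ASSETS, and so on) are assumptions for illustration; the real FFIEC bulk-data field names differ.

```python
# A minimal sketch (not BankRegData's actual pipeline) of the aggregation step:
# roll individual bank Call Report rows up to their Bank Holding Companies.
# File and column names here are illustrative assumptions.
import pandas as pd

# One row per insured bank per quarter, plus a bank -> BHC linkage table.
calls = pd.read_csv("call_reports_2016Q4.csv")      # hypothetical extract
links = pd.read_csv("bank_to_bhc_linkage.csv")      # RSSD_ID -> PARENT_RSSD_ID

merged = calls.merge(links, on="RSSD_ID", how="left", validate="many_to_one")

# Banks that fail to link to a parent are exactly the first error source
# mentioned above (incomplete bank-to-BHC linkage).
unlinked = merged[merged["PARENT_RSSD_ID"].isna()]
print(f"{len(unlinked)} bank records without a BHC link")

# Aggregate the book to the BHC level.
bhc_book = (merged.dropna(subset=["PARENT_RSSD_ID"])
                  .groupby("PARENT_RSSD_ID")[["TOTAL_ASSETS", "TOTAL_DEPOSITS",
                                              "TOTAL_LOANS"]]
                  .sum()
                  .sort_values("TOTAL_ASSETS", ascending=False))
print(bhc_book.head())
```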

Understand that each BHC prepares regulatory reports by monitoring the individual accounts holding each of the bank book securities. In a bank the size of JPM there may be Retail data files disclosing the actual monthly realized cash flows, late fees accrued, and default write-downs for several hundred million loan accounts. This account (or security) level information is generally aggregated for regulatory reporting, but it is known at a very granular account level within the BHC itself. So within the BHC the error sources can be limited to the individual account quantitative modeling error. For lack of a better name let's call this the perBHC problem. All the other error sources can be eliminated (or controlled) for the BHC's accrual inventory; otherwise the BHC has big internal control issues. The BHC does not know the other BHCs' positions at the same account granularity as its own internal positions. The perBHC hypothesis is that the error in the allBHC problem can be controlled or smoothed out in computing NIM explanatories or forward simulations, so that you can see a fit with the perBHC data.

 

Blythe Masters, CEO Digital Asset Holdings

Blythe Masters is the CEO of Digital Asset Holdings, a FinTech startup. FinTech represents technical and business developments in payments, consumer loans, and automated advisory services. Over the past several years there has been significant venture capital interest in FinTech (see The Pulse of FinTech Q3 2016) and even large Bank Holding Companies are showing more and more interest (see March 2016, Digital Disruption: How FinTech is Forcing Banking to a Tipping Point). Masters is well known as an early founder of the global Credit Derivative market (or see Gillian Tett's 2009 Fool's Gold) and then moved to an even more lucrative position running Global Commodities at J.P. Morgan until 2014. When Masters starts up a FinTech company you can be reasonably sure "that's where the money is."

Wall Street has a modulated appetite for new technology in banking. Google and Bank Holding Companies harvest available tech at very different rates. Google leads technology while BHCs tend to lag technology. Sure, there are technology inflection points on Wall Street where new technology enables trading that would otherwise have been impossible. Ranieri in the 80s using desktops to build the MBS market is an example. James Simons' quantitative trading at Renaissance is an example. More recently, high frequency equity trading firms such as KCG, Jump Trading, Tower Research, Citadel, Hudson River Trading, and Virtu are other examples of automating trading (banking). Generally, Wall Street will take as much technology as needed, but not much more, to take care of shareholders and employees. Typically a modest pick-up of low hanging fruit with technology has been enough to make a solid ROI on the Street. So technology can race ahead growing performance exponentially and Wall Street typically follows at a measured pace pocketing the easy money. The friction introduced since 2008 with the Credit Crisis, Volcker, Dodd-Frank, negative interest rates, Brexit, and historically low Net Interest Margins makes it harder for Financial Services firms to pocket the easy money. Blythe Masters is the face of yet another Wall Street inflection point where Financial Services firms need substantially more from technology to generate the profit needed for shareholders and employees. The FinTech technology pick-up hits cash flow settlement time, consumer loan origination, and capital plan automation. Perfect for Net Interest Margin optimization.

Elwyn Berlekamp, UC Berkeley

Elwyn Berlekamp is a professor emeritus at Berkeley. His 1984 revision of the book Algebraic Coding Theory rests on my shelf at arm's distance from where I write, as do the two volumes of Winning Ways written with Conway and Guy and published in 1982. Two things I did not remember about Berlekamp are that he was Ken Thompson's thesis advisor back in the day and that he managed all the trading for the original 1986 Medallion Fund, later to be Jim Simons' flagship fund at Renaissance Technologies. But the reason Berlekamp's name came up here is for a book review he wrote in 2005, titled Bettor Math, for William Poundstone's book Fortune's Formula. I was compelled to read the book immediately after reading the review. It is a vivid, fast read recounting among other things Claude Shannon's and Edward Thorp's early careers at Bell Labs and MIT. Ultimately the book is about the tradeoffs between two views of optimal money management: the "Kelly Criterion" and the Efficient Market Hypothesis. Berlekamp writes in his review (recall 2005 is after the LTCM event):

No one who has made a legitimate fortune in the markets believes the efficient-market hypothesis. And conversely, no one who believes the efficient-market hypothesis has ever made a large fortune investing in the financial markets, unless she began with a moderately large fortune.

From the Net Interest Margin Optimization perspective we are somewhat indifferent to the trade-offs and arguments about Kelly and EMH, since the optimization we refer to is in the selection, timing, and placement of buy-and-hold positions in a banking book. This is not a trading book or a hedge fund portfolio; we are looking at the $15T in the aggregate US bank accrual portfolios. We are simply using trading book analytics to automate and numerically optimize the execution of the capital allocation plan in the banking book for the current market and its implied expectations. We are not suggesting any optimization relative to forward market expectations; those are unknowns to our simulations. Forward simulation with NIM Optimization automates the implementation of the capital allocation plan and provides a most efficient implementation (of an exogenously given plan) that meets regulatory targets while maximizing the Net Interest Margin. Backward simulation of NIM Optimization, explaining the actual market moves relative to the actual capital allocation plan implementation, provides a detailed measure of the overall Treasury/Transfer Pricing system performance.
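In schematic form (the symbols below are assumptions for illustration only, not any vendor's or bank's formulation), the forward problem reads something like:

```latex
% Schematic only; symbols are illustrative assumptions.
% x_{i,t} : notional placed in banking-book position i in period t (the capital plan)
% c_{i,t} : expected net interest cash flow per unit notional, from forward simulation
%           under the current market's implied expectations
% A_t     : average earning assets in period t
\begin{align*}
\max_{x \ge 0}\; & \sum_{t} \frac{\sum_{i} c_{i,t}\, x_{i,t}}{A_t}
    && \text{(Net Interest Margin)} \\
\text{s.t.}\; & x \in \mathcal{P}
    && \text{(the exogenously given capital allocation plan)} \\
  & g_k(x) \le b_k \quad \forall k
    && \text{(regulatory capital and liquidity targets)}
\end{align*}
```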

Joel Emer, NVIDIA/MIT

Joel Emer is a researcher on processor architecture from the VAX and the Alpha to the x86. Emer was the Director of Microarchitecture Research at Intel prior to joining NVIDIA and MIT. Emer is perhaps most well known for a paper he wrote with Doug Clark back in 1984, A characterization of processor performance in the VAX-11/780. The paper presented data from a hardware monitor capturing the cycle counts and frequencies of instruction execution through the VAX-11/780 pipeline. It helped establish Cycles Per Instruction (CPI) as a performance metric and the practice of reporting average time per instruction on real application code. Here is the MIT CSAIL website for Emer with his more recent publications.

Much of Wall Street analytics is basically expression evaluation inside of Monte Carlo simulation. That is where most of the compute execution cycles will go for NIM optimization, if coded efficiently. Using Emer's performance characterization we know that on a contemporary x86 we will need about 10 picoseconds on average for each IEEE 754 double add and multiply. To get a back-of-the-envelope approximation of the execution time for an expression evaluation code you simply count the number of double precision adds and multiplies required and multiply by 10 picoseconds. The dominant part of the error in this approximation is the number of processor cycles required to read the operands through the cache hierarchy. The approximation assumes all the operands are available, on average, every 1.0 processor cycles. In real executions the operand fetches might require 1.x processor cycles; it typically depends on how many non-compulsory L2 misses the code execution will suffer. The combination of multicore and low cost grids with the 10 picosecond average operation execution time brings these multi-billion security accrual portfolio simulation computations into feasible execution scope.
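Here is the arithmetic as a minimal back-of-the-envelope sketch. The 10 ps per double add/multiply figure comes from the paragraph above; the portfolio size, path count, and operations per cash flow are made-up illustrative assumptions.

```python
# Back-of-the-envelope runtime estimate in the spirit of the CPI argument above.
# The 10 ps per double add/multiply is taken from the text; everything else
# below is an illustrative assumption.
flop_time = 10e-12             # seconds per double-precision add or multiply

n_securities = 2_000_000_000   # multi-billion security accrual portfolio (assumed)
n_paths      = 1_000           # Monte Carlo paths (assumed)
n_periods    = 120             # monthly periods over a 10-year horizon (assumed)
flops_per_cashflow = 20        # adds + multiplies per cash-flow evaluation (assumed)

total_flops = n_securities * n_paths * n_periods * flops_per_cashflow
serial_seconds = total_flops * flop_time
print(f"single-core estimate: {serial_seconds / 3600:.1f} hours")

# Spread across a modest grid and the numbers become workable.
cores = 1000
print(f"on {cores} cores (perfect scaling): {serial_seconds / cores / 60:.1f} minutes")
```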

Kunle Olukotun, Stanford University

Kunle Olukotun is the Cadence Design Systems Professor of Electrical Engineering and Computer Science at Stanford University. Olukotun is relevant to NIM Optimization for systems and application analysis. First, the Stanford Pervasive Parallelism Lab runs in the systems analysis style described in Hennessy and Patterson's book, Computer Architecture: A Quantitative Approach. The systems analysis formerly applied to competing instruction set architectures, memory hierarchies, and network interconnects is now applied to parallel and distributed systems, GPUs, and even Domain Specific Languages for FPGA racks. The fundamental underlying assumption is that the primary way Moore's Law will continue to improve application performance year-over-year is through greater parallel system execution efficiency. From a NIM Optimization perspective, if you are going to run a multibillion security portfolio through a Monte Carlo code to get the expected time series of cash flows, you probably want to run in parallel so you can execute in a reasonable amount of time. An educated guess at this point is that you want a high-end cloud like Oracle or AWS, a Xeon Phi, or a GPU/CUDA system to work from. FPGA compute servers stumbled several years ago on Wall Street and I would expect they will have difficulty getting traction with banks.

Second, the publications and talks from his lab sift through the various types of parallelism in real applications and prioritize which approaches and tools work best for which codes. Map/Reduce, graph algorithms, transactional memory, and machine learning are the big money making applications. Scalability! But at what COST? highlights some of the hazards of application analysis. NIM Optimization is an old school floating-point-intensive Monte Carlo followed by an LP application. It seems like it is going to be an OpenMP and OpenMPI application on top of some highly vectorized code.
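As a rough illustration of the shape of that vectorized inner kernel (not any bank's actual code), here is a NumPy sketch. The path count, the mean-reverting rate dynamics, and the coupon are illustrative assumptions; the outer loop over security buckets is what OpenMP/MPI would distribute.

```python
# A minimal vectorized sketch of the Monte Carlo kernel discussed above:
# the path x period arrays map naturally onto SIMD lanes, and an outer loop
# over security buckets would be distributed across cores or nodes.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_periods, dt = 10_000, 120, 1.0 / 12

# Mean-reverting short-rate paths (Vasicek-style dynamics, assumed parameters).
kappa, theta, sigma, r0 = 0.5, 0.03, 0.01, 0.02
rates = np.empty((n_paths, n_periods))
r = np.full(n_paths, r0)
for t in range(n_periods):
    r = r + kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    rates[:, t] = r

# Expected discounted value of a fixed monthly cash flow per unit notional.
discount = np.exp(-np.cumsum(rates * dt, axis=1))   # path-wise discount factors
cashflow = 0.004                                     # assumed monthly coupon
value = (cashflow * discount).sum(axis=1).mean()
print(f"expected PV per unit notional: {value:.4f}")
```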

Alexander Shapiro, Georgia Tech

Alexander Shapiro is a professor at Georgia Tech teaching and researching stochastic programming. Here is the publication list from his personal website. Here is the Google Scholar citation page for Shapiro. Here is a 2013 introduction to stochastic programming series of talks delivered in Moscow by Shapiro. The main text is A. Shapiro, D. Dentcheva, A. Ruszczyński, Lectures on Stochastic Programming: Modeling and Theory, SIAM and MPS, 2009. There is also a shorter Tutorial on Stochastic Programming with Philpott.

Commercially there are several vendors offering Stochastic Programming services on Wall Street. Quantitative Risk Management (QRM), Kamakura, Oracle, and IBM come to mind. QRM appears to have originated in Chicago in 1987 and also seems to be a tool used by Capital One Financial (look at their open job reqs). Kamakura is a van Deventer and Robert Jarrow (the J in HJM) company out of Hawaii started in 1990. Oracle Financial Services provides a series of Asset Liability Management and Risk services. IBM Algo advertises managing risk and compliance.

Robert Bixby, CSO Gurobi Optimization

Robert Bixby is the Chief Strategy Officer and cofounder of Gurobi. In 1987 he co-founded CPLEX Optimization. These are two of the main commercial packages for LP and MIP. Bixby published a 2012 survey paper titled A Brief History of Linear and Mixed-Integer Programming Computation.

One finds that of the three listed algorithms, primal simplex is now rarely the winner. Dual and barrier dominate; moreover, because of current trends in computing machinery, with individual processors making relatively little progress, and most increased power coming from increasing the number of cores per CPU chip, the ability to exploit parallelism is becoming more and more important. Barrier algorithms can and have been very effectively parallelized, while there has been essentially no success in parallelizing simplex algorithms. The result is that barrier algorithms are increasingly the winning algorithm when solving large linear programs from scratch. However, since crossover to a basis is an essential part of barrier algorithms, and this step is fundamentally a simplex computation and hence sequential, the fraction of time taken by crossover is ever increasing.

Here is Bixby's dblp webpage and LinkedIn page (I always like how odd the expertise section of LinkedIn is; look at Bixby's score in Optimization, maybe it doesn't mean what I think it means). Google runs an open source LP project, GLOP, as does GNU with GLPK. Geraldo Veiga has a nice set of slides for a 2015 talk titled A Brief History of Computational Linear Programming. Hans Mittelmann at ASU maintains a spectacular LP performance benchmark site, Decision Tree for Optimization Software. Greg Glockner has a set of 2015 talk slides, Parallel and Distributed Optimization with Gurobi Optimizer, that seem useful for larger optimization problems. You can run optimization problems against the contemporary codes at NEOS; see the NEOS guide and NEOS server. Standard texts include: Nocedal and Wright, 2000, Numerical Optimization; Luenberger and Ye, Linear and Nonlinear Programming; Ruszczynski, Nonlinear Optimization; Bertsekas, Convex Optimization Algorithms; and Williams, Model Building in Mathematical Programming.
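For a sense of what these solvers consume, here is a toy balance-sheet LP solved with SciPy's open-source HiGHS backend; Gurobi, CPLEX, GLOP, or GLPK would accept the same formulation. Every number in it is a made-up illustrative assumption, not a real bank's constraint set.

```python
# A toy LP in the spirit of the capital-plan optimization discussed in this post.
# All figures are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

# Decision variables: dollars allocated to [mortgages, cards, C&I loans, treasuries].
spread   = np.array([0.018, 0.045, 0.025, 0.005])   # assumed net spreads over funding
risk_wgt = np.array([0.50, 1.00, 1.00, 0.00])        # stylized risk weights

budget        = 100.0   # total balance sheet capacity ($bn, assumed)
capital       = 8.0     # available capital ($bn, assumed)
capital_ratio = 0.10    # required capital per unit risk-weighted assets (assumed)

# linprog minimizes, so negate the spread vector to maximize net interest income.
res = linprog(
    c=-spread,
    A_ub=np.vstack([np.ones(4), capital_ratio * risk_wgt]),
    b_ub=[budget, capital],
    bounds=[(0, 60)] * 4,        # assumed single-asset-class concentration cap
    method="highs",
)
print("allocation ($bn):", np.round(res.x, 1))
print("net interest income ($bn):", round(-res.fun, 2))
```

The point of the toy is only that the capital-plan question drops naturally into the LP/MIP form these commercial and open source codes are built to solve at scale.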

How does all this optimization play on Wall Street currently? Risk Magazine ran an ALM & Balance Sheet Optimization training last Sep. 2016, predictably from a risk management perspective. Betsy Graseck, the Wholesale Banking Equity Analyst at Morgan Stanley, contributed to Morgan Stanley/Oliver Wyman, Mar. 2014, Wholesale & Investment Banking Outlook: Mis-allocated Resources: Why Banks Need to Optimize Now, and Mar. 2016, Learning to Live with Less Liquidity. We will be revisiting Ms. Graseck's publications later in the blog considering she is the senior analyst for Capital One Financial. McKinsey/Buehler et al., 2013, Between deluge and drought: The future of US bank liquidity and funding has a good discussion of transfer pricing. Deloitte, Apr. 2015, Capital efficiency and optimization spends some effort to discuss "optimizing the capital structure." In all these publications I think the usage of the word "Optimization" leans more in the direction of improvement, or "do better than you did last year," than numerical optimization relative to some objective function. Nevertheless the publications are useful in describing the infrastructure needed to run a (multi)trillion dollar accrual portfolio. JPM's collateral optimization is somewhat more numerical – see Rivett, The Optimization Imperative, JP Morgan, 2012. Lichtfous and Hefner at Deloitte in 2014 wrote Optimisation Under Regulatory Constraints, outlining an approach that sounds more rigorous and numerical. There are optimization texts online and oriented to finance, for example Cornuejols and Tutuncu, 2006, Optimization Methods in Finance. Finally, optimization in the context of innovation, FinTech, and Digital Banking seems yet again distinct from simple numerical optimization of the accrual portfolio; see for example PWC, Retail Banking 2020.

 

Lewis Ranieri, The Godfather of Securitization

Lewis Ranieri is famous for bringing securitization to the market in the 1980s at Salomon Brothers with the team of "fat guys." You can read about him in Michael Lewis' 1989 book Liar's Poker. One reason he comes up in the context of NIM Optimization is mortgage prepayment models; we will get to that. Here is Ranieri's Harvard GSD talk "Revolution in Mortgage Finance" on the Credit Crisis aftermath. Since the 1980s there has been a liquid two-way market in mortgage-backed securities, meaning that there are both buyers and sellers for MBS. Look at the 2014 FEDS Notes from Campbell, Li, and Im, Measuring Agency MBS Market Liquidity with Transaction Data; they show $7.5T outstanding MBS notional in 2013 trading at spreads between 5 and 7.5 bps between 2011 and 2013 using TRACE data. For comparison, U.S. Treasuries traded at around a 2 bps spread in the same period versus corporate bonds at 80 to 160 bps. For the MBS spreads to be that tight in that large a market, it is plausible to assume the underlying quantitative modeling, if not perfect, is at least tradable. The mortgage prepayment models are part of that story.

Here is John Geanakoplos, James Tobin Professor of Economics at Yale, discussing mortgage prepayment models in his lecture Modeling Mortgage Prepayments and Valuing Mortgages. Lakhbir Hayre is one of the original and main contributors to the development of prepayment models: see Anatomy of Prepayments (2000), the Salomon Smith Barney Guide to Mortgage-Backed and Asset-Backed Securities, and Citigroup 2004, Hayre and Young's Guide to Mortgage-Backed Securities. Why is this relevant to NIM Optimization? Prepayment models are the MBS market's quantitative method for determining the expected loss write-downs (in addition to optional accelerated pay-downs) from market data and econometric data. They are tradable expectations of mortgage loan losses that are applicable to stochastic programming approaches to NIM Optimization. YieldBook and Intex are commercial suppliers of analytics for MBS and ABS valuation and cash flow analytics for liquid two-way structured finance markets. Certainly you could argue that the Credit Crisis showed that the prepayment models are not perfect. On the other hand, the accrual portfolio loan loss provisions in the $15T of BHC assets are hardly modeled as rigorously as the $7.5T of MBS, so is it better to do nothing and hope the VaR reserve levels capture the autocorrelation of write-downs? Perhaps not.
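For readers who have not looked inside a prepayment model, here is a minimal sketch of how a prepayment assumption enters a pool's cash flows. The flat 10% CPR and the pool terms below are illustrative assumptions; production models in the Hayre tradition make the prepayment rate a function of rate incentive, burnout, seasonality, and more.

```python
# Minimal sketch of how a prepayment assumption enters pool cash flows.
# A flat 10% CPR and the pool terms are illustrative assumptions.
balance, wac, term = 100_000_000.0, 0.04, 360   # pool balance, coupon, months (assumed)
cpr = 0.10
smm = 1 - (1 - cpr) ** (1 / 12)                  # standard CPR -> single monthly mortality
monthly_rate = wac / 12

cashflows = []
for month in range(1, term + 1):
    if balance <= 0:
        break
    interest = balance * monthly_rate
    # Level-payment scheduled amortization on the remaining term.
    n = term - month + 1
    payment = balance * monthly_rate / (1 - (1 + monthly_rate) ** -n)
    scheduled_principal = payment - interest
    prepayment = (balance - scheduled_principal) * smm
    cashflows.append(interest + scheduled_principal + prepayment)
    balance -= scheduled_principal + prepayment

print(f"months of cash flow: {len(cashflows)}")
print(f"first/last cash flow: {cashflows[0]:,.0f} / {cashflows[-1]:,.0f}")
```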

 

John Birge, University of Chicago

John Birge is the Jerry W. and Carol L. Levin Distinguished Service Professor of Operations Management at the University of Chicago. He studies stochastic programming and large-scale optimization. His June 2012 paper with Pedro Judice, Long-term bank balance sheet management: estimation and simulation of risk-factors, is very relevant to Net Interest Margin Optimization and is worth discussion. Some earlier slides from a Birge talk on Stochastic Optimization in Asset Liability Management are here. He published a book titled Introduction to Stochastic Programming with Francois Louveaux in 2011. His Google Scholar page attests to a wide range of expertise and a rich publication history.

The 2012 paper proceeds from a one-period model used to determine optimal bank policy under credit risk to a multiple-period model defining stochastic processes for risk factors evolving from the interest rate and credit cycles. The idea is to enable balance sheet simulation over time and provide a capital allocation plan that achieves a predefined set of objectives. There is a good summary of the literature in this paper focusing on risk management applications and models. The authors present a Vasicek/Kupiec-like process definition of the charge-off rate, which is critical in formulating interest rate/credit dynamics suitable for optimization. The Kupiec 2009 FDIC tech report, How Well Does the Vasicek-Basel AIRB Model Fit the Data? Evidence from a Long Time Series of Corporate Credit Rating Data, deals with the autocorrelation of corporate bond default rates from 1920 to 2008 in setting regulatory capital adequacy levels in a bank risk framework. The key takeaway from Birge and Judice is that one of the principal risk factors to model stochastically in bank ALM (possibly the main one) is the interaction of the interest rate and credit cycles relative to the underlying securities. The slippery part is getting a grip on the correlation of loss write-downs on the underlying securities (see Duffie et al., 2009, Frailty Correlated Default).
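To see the shape of that charge-off dynamic, here is a small sketch (not Birge and Judice's actual specification): an autocorrelated systematic credit factor drives the conditional charge-off rate through the standard one-factor Vasicek formula. The PD, asset correlation, and persistence parameters are illustrative assumptions.

```python
# Sketch of a Vasicek-style charge-off dynamic: an AR(1) systematic credit
# factor Z_t drives the conditional default/charge-off rate through the
# one-factor Vasicek formula. All parameters are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
pd_uncond, rho, phi = 0.02, 0.10, 0.8   # through-the-cycle PD, asset correlation, persistence
n_quarters = 80

z = np.empty(n_quarters)
z[0] = rng.standard_normal()
for t in range(1, n_quarters):
    # AR(1) with unit stationary variance -- the credit-cycle autocorrelation
    # that the Kupiec report documents in the long default-rate series.
    z[t] = phi * z[t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()

# Vasicek conditional default rate given the systematic factor.
charge_off = norm.cdf((norm.ppf(pd_uncond) - np.sqrt(rho) * z) / np.sqrt(1 - rho))

print(f"mean charge-off rate: {charge_off.mean():.3%}")
print(f"worst quarter:        {charge_off.max():.3%}")
```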

James Vickery, Federal Reserve Bank of New York

James Vickery is at the NY Fed in the Research and Statistics Group and is the author or coauthor of many valuable Federal Reserve publications relevant to Net Interest Margin Optimization and generally to the domestic and global banking system. The Federal Reserve plays a significant role in regulating, defining, monitoring, and reporting domestic banking activities. Three important pieces of federal banking legislation significantly determine the structure of the current domestic banking system: the Bank Holding Company Act of 1956; the Financial Services Modernization Act of 1999, commonly called Gramm-Leach-Bliley; and the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010. The Bank Holding Company Act and Gramm-Leach-Bliley allow the aggregation of $15+T of assets and liabilities onto the balance sheets of a relatively few Bank Holding Companies (BHCs). Post the Global Credit Crisis, Dodd-Frank introduced constraints on BHC proprietary trading and mandated increased transparency on the composition of BHC banking books. Think of it like Gramm-Leach-Bliley pulled all the assets together and Dodd-Frank made the assets simple and published them. If Net Interest Margin Optimization were like a standard applied math simulation such as Taylor-Couette flow, the Federal Reserve would: regulate the speeds of the rotating cylinders to avoid turbulence, define the Reynolds number, monitor the Navier-Stokes simulation stability/convergence, and be the primary source for mandatory quarterly historical data reporting. The interesting part is that advances in technology, quantitative financial modeling, and stochastic programming have interleaved with evolving banking legislation to make NIM Optimization feasible (see Richard Fairbank) on a large scale. Vickery is the talented tour guide to the Federal Reserve Bank activities, legislation, and data infrastructure.

Through the lens of Net Interest Margin Optimization, I am particularly interested in four publications from Vickery’s NYFRB website:

  1. A Structural View of U.S. Bank Holding Companies, Jul. 2012.
  2. Do Big Banks Have Lower Operating Costs?, Dec. 2014.
  3. Available For Sale? Understanding Bank Securities Portfolios, Feb. 2015.
  4. A Look at Bank Loan Performance, Oct. 2013.

The first paper describes the organizational structure of contemporary Bank Holding Companies. Complex BHCs can hold thousands of subsidiaries, but the majority of the assets are held in the commercial banking subs. The paper points out, for example, that JPM controls 3300+ subs but only four are domestic commercial banks; those four subs hold 86% of JPM's assets. Appendix A in the paper provides an enumeration and synopsis of the BHC structure and banking book composition reports. The second paper, Do Big Banks Have Lower Operating Costs?, shows that Non-Interest Expense ratios fall as the size of the assets held by the respective banks grows. The reason to study this problem is to provide evidence of economies of scale in commercial banking. The form of the research is interesting and the references include Cornett, McNutt, and Tehranian's 2006 paper Performance Change Around Bank Mergers: Revenue Enhancement versus Cost Reductions. This reference is obviously relevant to the possibility of optimized NIM revenue scaling with the size of the bank. The third blog post, on the Available For Sale portfolio, examines the security composition of the typical banking book. Contemporary bank accrual portfolios contain more complex securities than simply deposits, Fed funds, cards, and loans. Only some of these banking book instruments are Held To Maturity (HTM); a significant fraction are Available For Sale (AFS). AFS securities add complexity and runtime to the Net Interest Margin simulation and optimization process. The last blog post takes a cross-sectional tour of the historical impairment write-offs of bank loan portfolios since the Global Credit Crisis. The aggregated data is taken from the Quarterly Trends for Consolidated U.S. Banking Organizations. For the Net Interest Margin optimization problem this draws attention for a couple of reasons. One, the Fed is almost certainly approaching all of the analysis from a risk perspective; with NIM Optimization we analyze this data from the P&L perspective, of which risk is a component. Two, the NIM P&L attribution model has to pay attention to the credit failure-to-pay model in addition to the fluctuation of interest and exchange rates. The credit spread to the funding rate has to be one of the major P&L drivers of the banking book. Similarly, the correlation of default (failure to pay) between loans must play a significant role in any simulation. Failure to account for the relevant risk factors can lead to a downward biased estimate of tail losses, similar to what happened to the polls in the recent U.S. election (at the Princeton Election Consortium or 538, see GLL) or in the Credit Crisis (see MacKenzie, MacKenzie and Spears, or Salmon). The current practice of loan loss provisioning will be a subject we come back to.
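To make the tail-bias point concrete, here is a small simulation sketch; the portfolio size, PD, and correlation are illustrative assumptions. The same 2% average default rate produces a very different 99.9th percentile loss when defaults share a common factor than when they are treated as independent.

```python
# A small simulation of the "downward biased tail" point: identical average
# default rates, very different tails once defaults share a common factor.
# Portfolio size, PD, and correlation are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_loans, pd_, rho, n_sims = 10_000, 0.02, 0.15, 20_000

# Independent defaults: binomial thinning of the portfolio.
loss_indep = rng.binomial(n_loans, pd_, size=n_sims) / n_loans

# Correlated defaults: one-factor Gaussian (Vasicek) conditional PD per scenario.
z = rng.standard_normal(n_sims)
p_cond = norm.cdf((norm.ppf(pd_) - np.sqrt(rho) * z) / np.sqrt(1 - rho))
loss_corr = rng.binomial(n_loans, p_cond) / n_loans

for name, loss in [("independent", loss_indep), ("correlated", loss_corr)]:
    print(f"{name:12s} mean={loss.mean():.2%}  99.9th pct={np.quantile(loss, 0.999):.2%}")
```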