How Wall Street Lied to Its Computers

CORRECTED 5 p.m.: Spelling of Leslie Rahl.

So where were the quants?

[Photo: the New York Stock Exchange. Credit: Fred R. Conrad/The New York Times]

That’s what has been running through my head as I watch some of the oldest and seemingly best-run firms on Wall Street implode because of what turned out to be really bad bets on mortgage securities.

Before I started covering the Internet in 1997, I spent 13 years covering trading and finance. I covered my share of trading disasters, from junk bonds to mortgage securities to the financial blank canvas known as derivatives. And I got to know a bunch of quantitative analysts (“quants”): mathematicians, computer scientists and economists who were working on Wall Street to develop the art and science of risk management.

They were developing systems that would comb through all of a firm’s positions, analyze everything that might go wrong and estimate how much it might lose on a really bad day.

We’ve had some bad days lately, and it turns out Bear Stearns, Lehman Brothers and maybe some others bet far too much. Their quants didn’t save them.

I called some old timers in the risk-management world to see what went wrong.

I fully expected them to tell me that the problem was that the alarms were blaring and red lights were flashing on the risk machines and greedy Wall Street bosses ignored the warnings to keep the profits flowing.

Ultimately, the people who ran the firms must take responsibility, but it wasn’t quite that simple.

In fact, most Wall Street computer models radically underestimated the risk of the complex mortgage securities, they said. That is partly because the level of financial distress is “the equivalent of the 100-year flood,” in the words of Leslie Rahl, the president of Capital Market Risk Advisors, a consulting firm.

But she and others say there is more to it: The people who ran the financial firms chose to program their risk-management systems with overly optimistic assumptions and to feed them oversimplified data. This kept them from sounding the alarm early enough.

Top bankers couldn’t simply ignore the computer models, because after the last round of big financial losses, regulators now require them to monitor their risk positions. Indeed, if the models say a firm’s risk has increased, the firm must either reduce its bets or set aside more capital as a cushion in case things go wrong.

In other words, the computer is supposed to monitor the temperature of the party and drain the punch bowl as things get hot. And just as drunken revelers may want to put the thermostat in the freezer, Wall Street executives had lots of incentives to make sure their risk systems didn’t see much risk.

“There was a willful designing of the systems to measure the risks in a certain way that would not necessarily pick up all the right risks,” said Gregg Berman, the co-head of the risk-management group at RiskMetrics, a software company spun out of JPMorgan. “They wanted to keep their capital base as stable as possible so that the limits they imposed on their trading desks and portfolio managers would be stable.”

One way they did this, Mr. Berman said, was to make sure the computer models looked at several years of trading history instead of just the last few months. The most important models calculate a measure known as Value at Risk — the amount of money you might lose in the worst plausible situation. They try to figure out what that worst case is by looking at how volatile markets have been in the past.
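
To make the idea concrete, here is a toy historical-simulation sketch in Python (illustrative only; the function name and numbers are mine, and real desks use far richer models): sort the past daily losses and read off the loss at the chosen confidence level.

```python
import math

def value_at_risk(daily_returns, confidence=0.99):
    """Historical-simulation VaR: the loss that was exceeded on only
    a (1 - confidence) fraction of past days."""
    losses = sorted(-r for r in daily_returns)  # gains become negative losses
    # Index of the cutoff loss at the requested confidence level
    idx = min(len(losses) - 1, math.ceil(confidence * len(losses)) - 1)
    return losses[idx]

# Mostly placid days plus one bad one -- the kind of history that
# made pre-crisis models look safe
returns = [0.001, -0.002, 0.0005, -0.001, 0.002, -0.03]
print(value_at_risk(returns))        # 99% VaR: the worst day, 0.03
print(value_at_risk(returns, 0.80))  # 80% VaR: a milder cutoff, 0.002
```

The key point is the last line of the function: everything the model "knows" about the worst case comes from the window of history it is fed.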

But since the markets were placid for several years (as mortgage bankers busily lent money to anyone with a pulse), the computers were slow to say that risk had increased as defaults started to rise.

It was like a weather forecaster in Houston last weekend talking about the onset of Hurricane Ike by giving the average wind speed for the previous month.

But many on Wall Street did even worse, as Mr. Berman describes it. They continued to trade very complex securities concocted by their most creative bankers even though their risk management systems weren’t able to understand the details of what they owned.

A lot of deals were nonstandard in many ways, “so you really had to go through the entire prospectus and read every single line to pick up all the nuances,” Mr. Berman said. “And that slows down the process when mortgage yields looked very attractive.”

So some trading desks took the most arcane security, made of slices of mortgages, and entered it into the computer as if it were a simple bond with a set interest rate and duration. This seemed like only a tiny bit of corner-cutting because the credit-rating agencies declared that some of these securities were triple-A. (20/20 hindsight: not!) But once the mortgage market started to deteriorate, the computers were not able to identify all the parts of the portfolio that might be hurt.

Lying to your risk-management computer is like lying to your doctor. You just aren’t going to get the help you really need.

All this is not to say that the models would have gotten things right if only they were fed the most accurate information. Ms. Rahl said that it was now clear that the computers needed to assume extra risk in owning a newfangled security that had never been seen before.

“New products, by definition, carry more risk,” she said. The models should penalize investments that are complex, hard to understand and infrequently traded, she said. They didn’t.

“One of the things that has caused great pain is complex products,” Ms. Rahl said.

That made me think back to some of the great trading debacles of the last century, such as the collapse of Askin Capital Management, a hedge fund that fell apart because of complex mortgage security investments gone bad. Wasn’t the moral of those stories that you shouldn’t put your money (or your client’s money) in something you didn’t understand? Furthermore, even if you are convinced you do understand it, you’re not going to be able to sell it when you need the money if no one else does.

“In some ways there is nothing new,” said Ms. Rahl, who helped investigate what went wrong at Askin. “The big deals are front-page news, then they go into the recesses of people’s memories.”

And, ultimately, the most important risk-management systems are the ones that have gray hair. “It’s not just the Ph.D.’s who must run risk management,” Ms. Rahl said. “It is the people who know the markets and have lifelong perspective.” And at too many firms it is those people who failed to make sure the quants really did their jobs.


I have personally seen CFOs stand late nights arguing with the risk analysts to make default ratio assumptions more optimistic just before a loan is securitized and sold off. Risk analytics in portfolios of loans is all about default rate assumptions. The reason Goldman Sachs escaped unscathed was that its models were not fiddled with. In other large companies, while one arm was lending to retail and securitizing it, the other arm was investing funds in other securitized assets. Eventually the music stopped. Give $80 billion in tax rebates directly to defaulting mortgages and the default rate would be zero-ish. Volatility is also an opportunity for healthy companies to make big, big money (like JPMorgan with exotic securities) by snapping up financial assets cheaply, while at the retail end the taxpayer has to lose his job or go bankrupt after losing his house. Something is missing in these speedy events, and that is data, unbiased from anyone.
If Goldman Sachs could exit early, why couldn’t the Federal Reserve or even the NSA pick up these cues? How come no investigations into governance, or lawsuits, happened to demand more data?

And wasn’t this mess also caused by the widespread delusion that American property, in aggregate, never falls in value?

Reminds me of the old adage from IT: garbage in = garbage out. Feed your systems incorrect data and you’ll get incorrect data back out. It still appalls me that large institutions aren’t more stringently audited as to their operating practices in the accounting arena. How many times have we seen news reports of businesses failing because of “creative accounting” practices? Why isn’t there a set of accounting models (they would pick one) that public companies would be forced to adhere to, which in turn independent auditors could check to determine if they are compliant with the accounting model? Seems like an idea that is overdue.

One big problem was having wealthy people build these models. Anyone down in the trenches could see that only a small percentage of the population could actually afford homes. It was obviously unsustainable, but the wealthy were insulated from this.

The good news is that the financial sector is getting a graduate course in reality. You know, Greed is not good, the dead hand isn’t going to save you, government really is not the problem, etc… Hopefully, this will be remembered for as long as some of us remember the depression.

Well written, in plain English. The quants need to be supervised by senior, honest, ethical veterans just like everyone else. Experience and common sense are very important. The risk models are just tools. Such tools in the hands of the wrong people can be very dangerous.

I do think that there was some element of greed and/or neglect involved at the senior management level. The huge salaries and bonuses that many of the managers on Wall Street enjoy are often also a huge conflict of interest. They forget who their customer is. Did the boards of directors practice due diligence? If not, they are equally guilty and should be held accountable.

all models are wrong, some are useful.

There is a book that discusses this topic directly, written by Nassim Nicholas Taleb and named “The Black Swan.”

I agree that bad data was most likely used in calculating these risk models; however, Taleb actually attacks the mathematical models themselves that are used in these financial risk assessment scenarios. He claims that the models are based on poor mathematical techniques, which he calls “The Great Intellectual Fraud.”

Rather than continue to misrepresent his ideas, I suggest anyone interested in these financial risk models check out his writings.

Larry, even the wealthy could see this was coming. I’ve been in rooms where, as an IT guy, I had some portfolio manager explain his process so we could automate it. After he left the room there was dead silence; we looked at each other in shock and bewilderment, and not a word was said. Needless to say, we are no longer working there. I agree about the willful simplicity of the models; many knew that the assumptions were crazy. There is a mathematical measure that can be used to determine the health of a mortgage borrower, called the FICO score, yet for many portfolios there was no way of incorporating it into the model. The FICO score measures the health of a borrower, and its virtue is that it is updated frequently to reflect the borrower’s financial health.

Insider has more of a realistic view than the experts in the article. What Insider does not mention is that the quant who keeps on arguing soon gets the label of being difficult, “not a team player,” or worse, “does not understand the business” — labels that are a death knell to a career. Many people in positions of authority chose not to know.

The housing market was just a big bubble that got popped. Just like the .com market, the values of homes were not realistic. It is a trickle-down effect from the economy and the amount of money people have to spend just to drive to work. I’ve got news for you as well, if you don’t already know: home values in most states are going to continue to decline for the next 2 years. After that they will stabilize and then values will go back up. If you are looking to buy a house, wait 2 years and you should be in a good position; if you own a home, wait to sell it in 4 years, when the value will be realistic and you might even make a small profit.

Newly unemployed Wall Street quants can look for jobs in other industries here: QUANTster.com.

Almost all of these risk models significantly underestimate the probability and magnitude of rare tail events. Also, they mistakenly assume that assets that have appeared to have low correlations during normal times will continue those relationships during times of crisis. The biggest difference between what is happening now and the Long-Term Capital Management collapse is the size and scope. In both cases there were fundamental assumptions which broke down leading to massive re-pricing in the market place. The price changes are magnified by panic induced flight to security.

The Computer Models you describe were obviously designed to look only at the previous Quarter. As the link you provided to the Askin debacle demonstrates there was no interest in providing even a modicum of common sense into these models. It’s not like 14 years (Askin) is a lifetime ago and there were plenty of real life examples in the previous 20 years that easily predicted the current problem.
But if you don’t want to see it; you won’t look.
The sad thing is that the people running these institutions are probably considered the best and the brightest of our times, because it certainly is not our politicians.

The whole thing can be boiled down to an old adage on the Street: Pigs Get Slaughtered.
And they were all greedy pigs.

Well.

When you KNOW that the taxpayers are going to bail you out, you take BIG RISKS.

It’s Not Your Money, they said.

WHO CARES if we don’t understand how the computers are trading !

LAX Republican REGULATION = Taxpayer pain.

Bush & Republicans: “Let the markets regulate themselves!”

How’s that working out for ya’ ?

GIGO has been around since the beginning of mathematical analysis, and of computers in particular. Any statistician will tell you, and that includes the “quants,” that you can make any model tell you what you wish!!
We are just paying the price of greed!!!

Ken B (#3) has a point. The powers-that-be wanted to keep real-estate values going up … no matter what it took. The original practice of “sub-prime lending” was intended to bring new money into real estate once the market had been saturated. This kept the real-estate bubble from bursting, for a few years anyway.

But eventually it did burst, and we are now worse off, perhaps, than we would have been had the powers-that-be not kept real-estate values artificially high.

We’ve now learned two very important lessons from this escapade: the markets are sophisticated enough to create and (for a while anyway) maintain bubbles any time they want; but also, even an artificially created bubble cannot last forever, no matter what machinations are used to keep it in place.

Look up “Coherent Measures of Risk” by Artzner, Delbaen, Eber and Heath. They showed back in 1999 that Value at Risk (VaR) is not a good measure of risk. Should I say discredited? The correct measure is CVaR, Conditional Value at Risk.

I’m not sure Taleb’s work conforms to Coherent Measures of Risk. I haven’t read it, but from the little I’ve come across it seems like Extreme Value Theory, which is difficult to test in the real world. CVaR is what I prefer, with a large dose of common sense.
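
A toy Python sketch of the distinction (names and numbers are mine, not from the Artzner et al. paper): VaR reads off a single cutoff loss, while CVaR averages everything in the tail beyond it, so a catastrophic outlier can move CVaR while leaving VaR untouched.

```python
def var_and_cvar(daily_returns, confidence=0.90):
    """VaR: the cutoff loss at the confidence level.
    CVaR (expected shortfall): the average loss in the tail beyond it."""
    losses = sorted(-r for r in daily_returns)
    cutoff = int(confidence * len(losses))  # first index of the tail
    tail = losses[cutoff:] or losses[-1:]   # guard: tail is never empty
    return losses[max(cutoff - 1, 0)], sum(tail) / len(tail)

# Eighteen quiet days, then two crashes of very different size
returns = [0.001, -0.001] * 9 + [-0.04, -0.10]
var, cvar = var_and_cvar(returns, confidence=0.90)
print(var)             # VaR sees only the quiet cutoff day
print(round(cvar, 4))  # CVaR averages the 0.04 and 0.10 tail losses
```

VaR here reports the tame 0.001 loss at the cutoff, while CVaR reports the average of the two crashes; that insensitivity of VaR to tail size is exactly the coherence failure the paper formalized.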

There is a lot of give and take when building models, and it is not easy to build these models, because on the one hand you have to use historical data to build models (to conform to reality) and on the other hand you have to forecast outlier conditions (market collapse). But outlier conditions are usually also outside the tested range of the models (insufficient data).

There are pros and cons to several years of history versus a few months. If you use too long a time series, the models get unresponsive (i.e., the market is collapsing and your model says it is not). If you use too short a time series, the models overreact (the market is collapsing every other week). Or the opposite is true, depending on how you build your models. The ideal is somewhere in between, and experienced quants/modelers, given time, can find a good solution. What a waste of talent to see quants go.
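
The window trade-off is easy to see in a small sketch (numbers invented for illustration): the same volatility estimate a VaR model would consume looks very different depending on the lookback.

```python
import statistics

def rolling_vol(returns, window):
    """The volatility input a VaR model might see: the standard
    deviation of the most recent `window` daily returns."""
    return statistics.pstdev(returns[-window:])

# Two placid years of tiny moves, then a month of turmoil
history = [0.001, -0.001] * 250 + [0.03, -0.04] * 10

print(rolling_vol(history, 20))   # short window: dominated by the turmoil
print(rolling_vol(history, 500))  # long window: calm years dilute the alarm
```

Under this toy history the 20-day estimate is several times the 500-day one, which is the "unresponsive long window" half of the trade-off described above.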

About FICO scores: in my opinion they are good predictors of success, but when one looks at the default and delinquency data, one finds that all defaulters behave the same whether they previously had 800 or 550 FICO scores. My hunch is that FICO scores are not good predictors of failure because the statistical distribution of the underlying behavior changes at default, but more work needs to be done in this area.

It is not the models or the quants, it is how they are used.

I worked in a company where some strange accounting activity was going on in a specific division. The division president was asking the head accountant to push something farther than was allowable under generally accepted accounting principles. When the head accountant would not do it, he was fired. The next guy got fired. When the division president tried to fire the third guy, there was finally some attention from higher up. At that point the division president and his cronies got fired. Meanwhile, two head accountants were fired for a reason that no one will discuss. I am sure they had a hard time getting their next jobs. Unfortunately, this is the kind of personal ethics, and risk to one’s self and family, that it takes for these systems to work properly. This is the nasty truth of our system.

If you are an incredibly wealthy investment banking executive, what do you care if the economy eventually tanks? Having everyone around you poorer only makes you relatively wealthier. What’s occurring now was by design.

Hands down, this is the smartest, most useful piece I’ve seen out there on the causes of this mess. The devil is in the details, and in this case he sure was! I hope many people read this. Thank you.

The lesson here is that the guardians of the hen house can’t be in the employment of the fox. True for risk managers, accountants, compensation committees, internal affairs committees, etc.

And now they wonder why we don’t believe Wall Street etc when they tell us all is well?

@Jack Colt #9, Beyond authoring the books “Black Swan” and “Fooled by Randomness,” Taleb, who is Distinguished Professor of Risk Engineering at New York University’s Polytechnic Institute, has a brand-new essay on the question of how to make decisions in a world we don’t understand very well:

THE FOURTH QUADRANT: A MAP OF THE LIMITS OF STATISTICS //www.edge.org/3rd_culture/taleb08/taleb08_index.html

Why isn’t the Times interviewing and quoting *this* guy!

Mr. Hansell,

Perhaps this is not so much a failure of the models but a failure only of people and their integrity.

I recall an amusing incident from my days as a Systems Programmer. I was working in the early 1970s for a large university’s data center when a good friend of my mother’s took a job as an administrator at a different division of the same university.

One day, as she was preparing a budget presentation, she showed up at my office with a folder full of data and asked me to use the computer to analyze it. I typed up all of the data on punched cards, fed it through the standard statistical software of the time (Statpack and Indiana BMD), and produced very authoritative printouts with lots of graphs and charts. My mother’s friend used these to beef up her presentation, and her budget passed handily.

But, then, the next time I saw her, she admitted that all of the data, every nibble and byte, was fabricated as her predecessor had kept miserable records.

Regards,
Jeff Broido,
Morristown, NJ