The Numbers Don’t Lie… Or Do They?


“There are three kinds of lies: lies, damned lies, and statistics”

Mark Twain

If you’ve been reading our articles for a while, you’d have noticed we try to do things that are backed by evidence, research and numbers.

We let the numbers do the talking.

After all, numbers and hard facts don’t lie… right?

Garbage In, Garbage Out


We live in an age where technology has made it so easy for us to collect and analyze large sets of data.

Unfortunately, a whole host of things can go wrong when doing so.

Data can be collected in a biased manner…

People can analyze the data incorrectly and draw the wrong conclusions…

…or worse – people can misconstrue the data to come up with blatantly false results.

What Has This Got to Do With Me?

As a retail investor, this is super important – because our investment methodologies and frameworks all come from somewhere.

Be it a trading strategy you learnt from a $4000 course…

Or Warren Buffett’s style of value investing…

Or even Dr Wealth’s way of factor investing

How do we know if we can truly trust a strategy? Just because it has earned 30% returns per year based on past track record? Just because an academic paper tested and said so?

It isn’t wrong to defer to “subject matter experts” like academics or practitioners.

Especially since most of us aren’t statisticians or full-time investors.

However, I would caution that blindly following advice “just because” it comes from experts, or because they have “results”, is extremely dangerous.

Wholeheartedly believing in a risky investment strategy “just because” it has been shown to work in the past and made lots of money for other people is the worst thing you can do for your portfolio.

We need to be skeptical about every piece of information or advice we come across – and especially in investing, we should never forget Warren Buffett’s first two investment rules,

Rule Number 1: Never lose money. 

Rule Number 2: Never forget Rule Number 1.

This comes from lots of critical thinking… and asking tons of pertinent questions – both of which the majority of us aren’t doing enough of.

Even the gahmen has to step in with a law to stop the spread of fake news, because people simply don’t know how to discern what’s legit and what’s not.

One of the more hilarious and viral fake news involved sharing a link in order to claim a free $100 NTUC FairPrice coupon. [Source]

If this has triggered you – I have done my job.

What we need to do now, as smart and “woke” investors, is to make sure the investment strategies we are exposed to – and the data we are presented with – are robust, accurate and free from bias before we consider using them.

We can never find the “golden goose” investment methodology that is sure to work 100% in the future.

But we can definitely learn how to sieve out those that might look feasible on the surface – but simply don’t work, are biased, or are outright deceptive…

…and choose the ones that have been most rigorously tested and free from bias.

Even then – they might not perform as well as you’d expect (and I’ll explain why later).

This is why we should always use lots of good (rational) judgement and risk management to protect ourselves – so that we keep to rule number one and “never lose money”.

The Multiple Stupidities of Investment Research

To learn how to discern a robust strategy from a bad one – we need to tap into some concepts in statistics.

Don’t worry – I will try to make it as simple and easy to understand as possible.

I will also use examples relating to factor investing (sometimes called quantitative investing) – because it is a relatively new investment framework that relies heavily on data and lots of fancy testing.

So – let’s get right into it!

Correlation Is Not Causation

When I was in university, this was one of the first few concepts I learned. I will illustrate with an example. 

A study found that young children with obesity problems tended to have “controlling” mothers. 


It claimed that controlling behaviors interrupted a child’s self-regulation habits and could cause overeating later on, which could result in obesity problems.

This was picked up by a San Francisco newspaper in 1994, which concluded that these parents should “lighten up” – advising them to relax and be less controlling.

It’s good advice, right?

Unfortunately, if we follow this advice expecting our kids not to overeat and get fat… some of us might be disappointed a few years down the road.

The fact that there is an “association” between mothers’ behaviors and obesity problems does NOT necessarily imply that mothers’ behaviors “cause” children’s obesity problems.

This is true in investing research too.

Look at the graph below. It comes from a 1995 study that purportedly found 3 very reliable “factors” to explain (predict) S&P 500 returns.

Together, the three of them would explain 99% of the stock market’s returns.

The study showed that when the underlying indicators were up 1%, the S&P 500 gained 2% the following year. If the indicators were down 10%, you could be almost certain the S&P 500 would be down 20% the following year.

Want to make a guess what these indicators or factors are?

Nope – it’s not GDP, interest rates or inflation rates…

…it’s butter production in Bangladesh, US cheese production, and sheep population.

Whattt…?!

Here’s the full picture uncropped…

Source: “Stupid Data Miner Tricks” (Leinweber, 1995)

Yep. Now go make lots of money.

The researcher, David Leinweber, obviously published this as a joke – and to make a point about data mining.

Just because butter production (or the sheep population) and the S&P 500 are correlated doesn’t mean that these factors predict (“cause”) future S&P 500 returns.

Rationally, you know that it is blatantly impossible.

However, if I told you the factors were “GDP, interest rates or inflation rates”… would you have believed me, then?

Leinweber concluded with this warning,

“If someone showed up in your office with a model relating stock prices to interest rates, GDP, trade, housing starts and the like, it might have statistics that looked as good as this nonsense, and it might make as much sense, even though it sounded much more plausible.”

I’ve got one more… and this time, these researchers actually believe their own bullsh*t.

The researchers claim “US population of 9 year olds is an EXACT PREDICTOR of future stock returns”. [Source]
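Both “discoveries” are what you get when you sift through enough random series against a short return history. Here is a minimal sketch in Python, using purely simulated (made-up) data rather than any of the actual series above: generate thousands of meaningless “indicators” and keep whichever one happens to correlate best with the index.

```python
# Toy sketch with purely simulated data: search thousands of random, meaningless
# "indicators" and keep whichever one correlates best with index returns.
import numpy as np

rng = np.random.default_rng(0)

n_years = 14                                      # a short sample, like many backtests
index_returns = rng.normal(0.08, 0.15, n_years)   # made-up "S&P 500" annual returns

best_corr, best_id = 0.0, None
for i in range(10_000):                           # try 10,000 junk indicators
    junk_indicator = rng.normal(0, 1, n_years)    # pure noise, no economic meaning
    corr = np.corrcoef(junk_indicator, index_returns)[0, 1]
    if abs(corr) > abs(best_corr):
        best_corr, best_id = corr, i

print(f"Best of 10,000 random indicators: #{best_id}, correlation = {best_corr:.2f}")
# With only 14 data points, the "winner" routinely shows a correlation of 0.7 or more
# despite being pure noise -- the butter-production effect in miniature.
```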

Going back to Leinweber’s graph, we can also use it to explain the concepts of overfitting and in-sample testing.

Notice that the factors – “Bangladesh butter production”, “US cheese production” and “sheep population” – have no relationship at all with each other.

In statistics-speak, they are uncorrelated.

Initially, Leinweber only managed to show a 75% degree of association using one factor – butter production in Bangladesh.

By adding an additional and uncorrelated factor, US cheese production, he managed to up it to 95%.

The final 99% was achieved when he used all 3 factors.

Leinweber essentially showed that by adding multiple uncorrelated factors to a model, we can get almost any model to “work” (i.e. explain the S&P 500’s historical returns) if we want to.

He appropriately calls this “torturing the data until it screams”.
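Here is a rough sketch of that “torture” in Python – again with simulated noise rather than the real butter, cheese and sheep data: stacking mutually uncorrelated junk “factors” onto a short return series pushes the in-sample R² steadily upwards, even though none of the factors means anything.

```python
# Toy sketch: in-sample R^2 climbs as we stack mutually uncorrelated,
# meaningless regressors onto a short return series (simulated data).
import numpy as np

rng = np.random.default_rng(1)
n = 14                                   # short annual sample
y = rng.normal(0.08, 0.15, n)            # simulated index returns

def in_sample_r2(X, y):
    """Ordinary least squares R^2, with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

factors = rng.normal(0, 1, (n, 10))      # ten independent noise "factors"
for k in (1, 3, 5, 10):
    print(f"{k:2d} junk factors -> in-sample R^2 = {in_sample_r2(factors[:, :k], y):.2f}")
# R^2 only ever goes up as factors are added, even though none of them
# has any real relationship with the returns.
```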

In real life, active fund managers might back-test different configurations of factors or indicators until one manages to consistently beat the S&P 500, while showing high return-to-risk ratios, or a low drawdown rate.

However, as good as it might look in the tests, it may not work in different time periods or using data from different stock markets.

In fact, with factor investing – this is very common…

Take a look at this chart.

Source: SocGen

This is one of Societe Generale’s alpha-generating strategies which was launched in 2008. The backtest prior to 2008 had shown a Compounded Annual Growth Rate (CAGR) of over 15%.

The sample size and time period were not small.

They used returns data from different asset classes that were not correlated with each other, and tested from 1994 onwards – which would have given them 14 years’ worth of data.

The backtests showed an outperformance of the S&P 500 and held up well in the dot-com crash of 2000.

Have we hit the jackpot? Nope.

After it was launched… you can see how the returns flatlined, delivering an annualized -1%.

The backtests had tested using only what statisticians call “in-sample” data.

This is data from within the testing period (1994-2008); the model had not been applied to any other data or any other time period.

Again, correlation doesn’t equal causation.

When used on post-2008 (or “out-of-sample”) data, the model failed to deliver.
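Here is a minimal sketch of that in-sample/out-of-sample gap, using simulated data rather than SocGen’s actual strategy: a model fitted on a short in-sample window can look impressive there, yet show no predictive power when the same fitted coefficients are applied to a later, unseen period.

```python
# Toy sketch: fit noise "factors" to returns on an in-sample window,
# then apply the same fitted model to a later out-of-sample period.
import numpy as np

rng = np.random.default_rng(2)

def r2(y, y_hat):
    return 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

n_in, n_out, k = 14, 10, 5               # 14 in-sample years, 10 out-of-sample, 5 junk factors
X_in,  y_in  = rng.normal(0, 1, (n_in,  k)), rng.normal(0.08, 0.15, n_in)
X_out, y_out = rng.normal(0, 1, (n_out, k)), rng.normal(0.08, 0.15, n_out)

# Fit on the in-sample window only (with an intercept).
A_in = np.column_stack([np.ones(n_in), X_in])
beta, *_ = np.linalg.lstsq(A_in, y_in, rcond=None)

A_out = np.column_stack([np.ones(n_out), X_out])
print(f"In-sample R^2:     {r2(y_in,  A_in  @ beta):.2f}")   # looks impressive
print(f"Out-of-sample R^2: {r2(y_out, A_out @ beta):.2f}")   # typically near zero or negative
```

The more robust practice is to hold back a chunk of history (or another market entirely) and look at it only once, after the model is frozen.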

Investor beware!

So… What Should An Investor Do?

The simple answer – always be critical of any investment strategy, advice or research you come across!

Make sure the methods are ROBUST – that means put through rigorous testing by different people, with different datasets and in different time periods.

Even if it has been rigorously tested and shown to perform fantastically, you should be aware that by using the model or factors in your investing…

…you are still relying on a bunch of assumptions that might not hold true in the future.

For instance, you implicitly assume that…

1) Market conditions and compositions stay the same

Markets are always changing.

The model you backtested might not work in future markets… even if you used 200 years of back-tested data.

See this chart created by the Visual Capitalist.

We see that in past decades, markets were made up mostly of finance and transportation firms, which are asset-heavy.

Markets have been changing, and today we have more service-oriented firms with highly valuable intangible assets.

Therefore, a fundamental indicator or criterion that used to predict the returns of the asset-heavy stock markets of the past might not work as well today or in the future.

2) Interest rates continue to be at low levels

Look at US interest rates since 1976 (43 years ago).

Investment methodologies or factors are usually tested over a period of roughly 20 years – a period during which interest rates have mostly been on the decline.

We never know for sure if rates will continue to go down – or whether they will move upwards (the US has contemplated raising rates)…

If the latter happens, those backtests may no longer hold water.

3) Factor premiums don’t erode over time

In factor investing, we fall prey to thinking that discovered factors are immutable and evergreen.

This is not true.

For instance, as more people use the “value” factor…

…more people might invest in those few “value” companies, which will drive up their prices and close the “value” gap.

Meaning that there might be fewer “value” opportunities to exploit in the future.

4) You stay equally diversified at all future rebalancing periods

As mentioned in my previous article, factor investing is typically thought of as a diversification strategy.

Any backtesting done must ensure that there are enough stocks within each factor for it to be robust.

If not, you run the risk that just one or two good stocks are responsible for the outperformance of that factor… also known as “selection bias”.
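A rough illustration with made-up numbers: in a thin five-stock “factor” portfolio, removing a single lucky stock can erase most of the apparent premium.

```python
# Made-up numbers: a thin 5-stock "value" portfolio whose backtest return is
# carried almost entirely by one lucky stock.
annual_returns = {
    "Stock A": 0.62,   # one big winner
    "Stock B": 0.03,
    "Stock C": 0.01,
    "Stock D": -0.02,
    "Stock E": 0.04,
}

equal_weight = sum(annual_returns.values()) / len(annual_returns)
without_winner = [r for name, r in annual_returns.items() if name != "Stock A"]

print(f"Factor portfolio return:        {equal_weight:.1%}")                              # ~13.6%
print(f"Same portfolio without Stock A: {sum(without_winner) / len(without_winner):.1%}") # ~1.5%
# If removing one stock erases the "premium", the backtest is telling you more
# about that stock than about the factor.
```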

5) There is no “friction” in the models

Many times, a strategy doesn’t perform as well in real life as it did in the backtest, primarily because of this thing called “friction”.

Frictions include things like brokerage fees, 30% withholding taxes, slippage (getting a worse price than expected when you trade) and delays in rebalancing.

In backtests, you have none of these things – because it is all simulated.

In real life, however, all these frictions add up and compound over time – causing your portfolio returns to be significantly lower.
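A back-of-the-envelope sketch with made-up numbers – a hypothetical 10% backtested return and an assumed 2% total annual drag – shows how a seemingly small friction compounds over 20 years.

```python
# Rough, made-up numbers: how a few percentage points of annual "friction"
# (fees, withholding tax, slippage) compound over 20 years.
gross_cagr = 0.10          # hypothetical backtested return
annual_friction = 0.02     # assumed total drag from fees, taxes and slippage
years = 20
capital = 10_000

gross_final = capital * (1 + gross_cagr) ** years
net_final   = capital * (1 + gross_cagr - annual_friction) ** years

print(f"Backtested (frictionless): ${gross_final:,.0f}")   # ~$67,000
print(f"After 2% annual friction:  ${net_final:,.0f}")     # ~$47,000
# A "small" 2% yearly drag wipes out roughly 30% of the final portfolio value.
```

Plug in your own broker’s fees, taxes and expected turnover to see the drag on your own portfolio.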

I’ve Never Seen A Bad Backtest

In conclusion, we as investors need to be skeptical of claims of outperformance…

That said, models and tests are not entirely “useless”.

My Business Analytics professor at NUS once quoted the famed statistician George Box, who said that

“All models are wrong, but some are useful”.

It is our job as smart investors to sieve out which ones to stay away from – and which we can trust with a certain degree of confidence.

This way, we can save ourselves (and our portfolios) from much heartache.

If you like this article, please share it using the buttons below!
