5 Oct 2017

Difficult opportunities

“True words are not fine sounding;
Fine sounding words are not true.” – Lao Tzu, Tao Te Ching

I’ve been sceptical for a while about whether anyone can make money from using technology to analyse sentiment. Derwent Capital Markets, the hedge fund set up in late 2010 to implement the Twitter mood strategy, failed and closed in early 2012. It just seemed such an unlikely idea – that a computer could sit there reading Twitter arguments and make money.
Instead I’ve been doing something a bit different: using the programming language R to analyse what company management say in their outlook statements. Management ought to know about the prospects for their business. I don’t understand why no one else has tried this. Actually I do… it was hard work.
So, with the help of a friend, I webscraped a random sample of 50 small companies’ results from five years ago (the first half of 2012). Thanks Tom! I wanted to use small companies because that’s where the opportunities are. If you are Bridgewater, with hundreds of billions of dollars under management, you are not interested in a company much smaller than £100m, even one that might be a multibagger. It’s like a big film studio having no interest in a low budget film like El Mariachi, which cost $7,000 to make but grossed $2m.
https://en.wikipedia.org/wiki/Robert_Rodriguez
But these are precisely the type of payoffs that I am interested in, and that I want to apply some text analysis and machine learning to.
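
As a flavour of the data gathering, here is a minimal sketch of the kind of scraping involved, using the rvest package. The URL and the selector here are hypothetical – every results page was laid out differently, which is partly why it was hard work:

    library(rvest)

    # Hypothetical example: pull the text of one results announcement.
    # In practice each site needed its own CSS selector, found by
    # inspecting the page by hand.
    get_outlook <- function(url) {
      read_html(url) %>%
        html_nodes("p") %>%   # the paragraphs of the announcement
        html_text() %>%
        paste(collapse = " ")
    }

    # statements <- sapply(urls, get_outlook)  # `urls`: the 50 announcement links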

Extreme returns

Of the sample, four out of five companies had a positive return, and the median return was 53%. But you have to be careful with medians: the distribution was heavily skewed, with wildly random outcomes. The worst company of the 50 (called Trading Emissions) was down 89% between July 2012 and July 2017 (the period I measured). The best (ironically called Best of the Best) increased in value 20x. So although the 53% median works out at just under 9% a year compounded, there were extreme outcomes on both sides of that.
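As a quick sanity check on that compounding arithmetic:

    # A 53% total return over five years compounds to just under 9% a year
    total_return <- 0.53
    annualised <- (1 + total_return)^(1 / 5) - 1
    annualised  # 0.0888, i.e. roughly 8.9% a year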
I wanted to know if R could help me identify the good companies – and steer clear of the bad ones. That seems like a valuable opportunity. But I didn’t want to look at the numbers. Everyone focuses on the numbers, so I wanted to look at the text: the outlook statements where management talk about future prospects. I really don’t think there were any numbers in Robert Rodriguez’s $7,000 film budget that would have given you a clue about the film’s prospects. But maybe, reading the script, you could have had an intuition that you were on to a winner.

Top performance

Also I didn’t want to split the sample half good / half bad. I’m interested in top decile performance – the companies that increase 20x. In a random sample of 50 small companies there was only one of those. So I decided that my criterion would be anything that had at least quadrupled over the last five years, versus the rest.
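In R the labelling is a one-liner. Here `price_multiple` is assumed to be a vector of each company’s July 2017 value divided by its July 2012 value:

    # Anything that at least quadrupled is a "bagger"; everything else is "rest"
    bagger <- factor(ifelse(price_multiple >= 4, "bagger", "rest"))
    table(bagger)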
That gave 9 multibaggers versus the rest – 18% of the sample. By coincidence that is exactly the proportion Kevin Martelli at Martek Partners found when he did his work on multibaggers. Martelli looked at 10-baggers over 15 years: he screened 21,000 listed stocks and found 18% returned 10x.
I suppose I could have split the sample the other way, to see if there were any obvious red flags among the worst performers. But I’m more interested in upside than in reducing mistakes. That is, making a 20x return seems like more fun than merely avoiding a 90% collapse in value.

Naïve Bayes

I wanted to apply this new-fangled machine learning to my sample of 50 small company outlook statements. So I split the data into a training set (37 companies) and analysed it using a “naïve Bayes” package in R. This is the way email spam filters work on text: words like “free”, “viagra” and “Payment Protection Insurance” are much more likely to appear in spam. I wanted to apply the same technique to chief executives’ statements – maybe there are equivalent words that signal a company has a lot of potential.
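
For the curious, here is a sketch of the sort of pipeline this involves, using tm for the preprocessing and e1071’s naiveBayes() (which may not be the exact package I used, but follows the standard recipe). `statements` is the character vector of the 50 outlook texts and `bagger` the labels from above:

    library(tm)     # text preprocessing and the document-term matrix
    library(e1071)  # naiveBayes()

    corpus <- VCorpus(VectorSource(statements))
    corpus <- tm_map(corpus, content_transformer(tolower))
    corpus <- tm_map(corpus, removePunctuation)
    corpus <- tm_map(corpus, removeWords, stopwords("english"))

    dtm <- DocumentTermMatrix(corpus)

    # Naive Bayes treats features as categories, so reduce counts to Yes/No
    to_yes_no <- function(x) ifelse(x > 0, "Yes", "No")

    train_idx <- 1:37
    test_idx  <- 38:50
    train_x <- apply(as.matrix(dtm[train_idx, ]), 2, to_yes_no)
    test_x  <- apply(as.matrix(dtm[test_idx, ]), 2, to_yes_no)

    model <- naiveBayes(train_x, bagger[train_idx])
    predict(model, test_x)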
But it didn’t work. When I trained the software on my sample and then got it to read the remaining 13 companies and predict which would be baggers, it didn’t think any of them would be – despite 3 of the 13 (Triad +837%, Servoca +700%, Titon +388%) actually being baggers. So back to the desktop.

Eyeballing the data

I wanted to see if there was anything that perhaps the computer couldn’t see, but which might make sense to me. So here are the results, as simple frequency graphs. The red graphs are the poorer performers and the blue are the companies that quadrupled or more.


This is a simple “bag of words” model, just looking at the most frequently occurring words. I would have preferred to look at phrases (known as n-grams), but the text mining software (the tm package) has been “updated” and its tokenizer no longer seems to work properly.
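
For what it’s worth, the counts behind charts like these are straightforward to produce from the document-term matrix built earlier – a sketch reusing `dtm` and `bagger` from above:

    # Most frequent words within each group of companies
    freq_for <- function(group) {
      rows <- which(bagger == group)
      sort(colSums(as.matrix(dtm[rows, ])), decreasing = TRUE)
    }

    barplot(head(freq_for("bagger"), 20), las = 2, col = "blue",
            main = "Baggers: most frequent words")
    barplot(head(freq_for("rest"), 20), las = 2, col = "red",
            main = "The rest: most frequent words")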

There’s quite a lot of overlap: for instance, both sets of companies used the word “new” most frequently. That’s why the naïve Bayes approach struggled – the text was too similar and the sample size too small. But there were still differences that might be significant: “opportunities” was the equal-fourth most frequently appearing word for the baggers, whereas it ranks further down the list for the poorer performers. Ironically, the poorer performers used the word “confident” a lot more than the baggers (it’s just outside the top 20). Perhaps that indicates that bad management tend to be overconfident.

Negative sentiment

But the other thing that jumped out at me was when I looked for words that only the high performers used frequently – words that the baggers used in their outlook statements but that the poorer companies didn’t. These tended to be NEGATIVE sentiment words like “difficult” and “reduce”. Presumably the bad news was already in the price. But it’s remarkable that in 2012 even the 20-bagging BOTB was very cautious, warning about the “loss of BAA contracts (which represented 48 per cent of our income from physical sites).” This also confirms the hypothesis I started out with: better management tend to be more realistic about the challenges and opportunities they face. Bad or unlucky management and their sycophantic PR advisers (I have worked with some in the past) tend to be full of fine sounding words, perennially optimistic and confident. But as the sage says: fine sounding words are rarely true.
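
The comparison that surfaced those words is simple set arithmetic on the frequency lists – a sketch reusing `freq_for` from above, with the cutoff of 50 words an arbitrary choice of mine:

    # Words high up the baggers' list that are absent from the rest's list
    top_baggers <- names(head(freq_for("bagger"), 50))
    top_rest    <- names(head(freq_for("rest"), 50))
    setdiff(top_baggers, top_rest)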

So although it didn’t work perfectly, there are quite a few different things I could try: other machine learning approaches, such as k-nearest neighbours or support vector machines; different inputs to naïve Bayes (I used single words, but what about multiple-word phrases, called n-grams? – see the sketch below). And lastly, any expert would tell me that my sample size of 50 companies was far too small for machine learning.
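On the n-gram point, one commonly suggested workaround for the tm tokenizer problem is to supply your own tokenizer built from the NLP package – a sketch I haven’t tried against this sample:

    library(tm)
    library(NLP)

    # Tokenise each document into two-word phrases (bigrams) instead of words
    BigramTokenizer <- function(x)
      unlist(lapply(ngrams(words(x), 2), paste, collapse = " "),
             use.names = FALSE)

    dtm_bigrams <- DocumentTermMatrix(corpus,
                                      control = list(tokenize = BigramTokenizer))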
So these are the difficult opportunities.