Off to the Races: A Universal Metastrategy
Leveraging a simple concept to select the best-performing strategies in real time.

We often have baskets of assets that we turn into trading strategies. But we also have baskets of trading strategies across which we need to allocate our capital. In my last post (here), I demonstrated how to use generative AI to create a theoretically limitless supply of trading strategies. But that is no good unless those strategies actually make money.
Backtests tell us what happened, but they invite data snooping. A metastrategy, an adaptive system that selects the best system(s) for us, is a much better solution. Based on deep research, I developed a bespoke algorithm that gets us very close to the performance of the best strategy, our theoretical ceiling on outperformance.
Here’s how it works:
Instead of analyzing all systems all the time, I let the good ones tell me who the winner is. Much like a horse race, I let the strategies ‘run’ on the track, and partway through I look at the ones that are pulling ahead. And while this seems surprisingly simple, it is even more surprisingly effective.
A Visual Example
Watch the video below:
What you see are 100 random strategies. Their performance is just randomly generated percentage gains and losses. Each strategy moves down the track towards the finish line when its maximum equity increases; when a line is not moving, that strategy is in drawdown. The black lines highlight the top-10 performers at each step.
Notice something peculiar: although these strategies are just random noise, by about halfway through the race the current leaders don’t change much. They continue to lead. The laggards are stuck in the back, not contributing anything.
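If you want to recreate the race yourself, here is a minimal sketch of the idea (my own illustrative code, not the animation above): random return streams, a running maximum of equity as each strategy’s position on the track, and a check of how many halfway leaders are still leading at the finish.

```python
import numpy as np

rng = np.random.default_rng(42)
n_strategies, n_steps = 100, 1000

# Random daily percentage gains and losses for each strategy
returns = rng.normal(0.0, 0.01, size=(n_steps, n_strategies))
equity = np.cumprod(1.0 + returns, axis=0)

# A strategy's position on the track is its running maximum equity:
# it only moves forward when it sets a new high (i.e., it is not in drawdown)
position = np.maximum.accumulate(equity, axis=0)

# Top-10 leaders at the halfway mark versus at the finish line
halfway_leaders = set(np.argsort(position[n_steps // 2])[-10:])
finish_leaders = set(np.argsort(position[-1])[-10:])
print("Halfway leaders still leading at the finish:",
      len(halfway_leaders & finish_leaders))
```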
This is the same concept that I used to develop my universal metastrategy. I let the race run with real strategies and, after a certain amount of time, take the top performers and bet only on them. It’s like letting the market decide for me who the winners are.
A Simple Application
In my last blog post, I generated 164 solid strategies, each with 8 configurations (4 lookback windows × 2 rebalance periods). To remove the manual curation step, I reverted to my original basket of 259 and calculated the returns for all 259 × 8 = 2,072 strategies (removing 2 for a total of 2,070).
I then defined a ‘burn’ period, which is the amount of time we let the strategies run before we select our top experts. Letting strategies run for longer gives the losers more time to wash out.
Above is that process: I select the top 20 experts after letting all 2,070 strategies run for 2,500 days. Afterwards, only those top 20 are invested in (equally weighted). You can see the performance of that metastrategy versus an equally weighted metastrategy of all systems combined.
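For concreteness, here is a minimal sketch of that burn-and-shortlist step, assuming your strategy returns sit in a days × strategies pandas DataFrame. The function names and defaults are my own illustrative choices, not the exact code from the Drive.

```python
import pandas as pd

def select_experts(returns: pd.DataFrame, burn: int, top_k: int) -> list:
    """Rank strategies by cumulative wealth over the burn period
    and return the labels of the top_k performers."""
    burn_wealth = (1.0 + returns.iloc[:burn]).prod()
    return burn_wealth.nlargest(top_k).index.tolist()

def shortlist_equal_weight(returns: pd.DataFrame, burn: int, top_k: int) -> pd.Series:
    """Hold all strategies equally during the burn period,
    then switch to an equal weight across the shortlisted experts."""
    experts = select_experts(returns, burn, top_k)
    pre = returns.iloc[:burn].mean(axis=1)             # uniform over everything
    post = returns.iloc[burn:][experts].mean(axis=1)   # uniform over the shortlist
    return pd.concat([pre, post])
```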
Now, we need to establish some baseline metrics to understand how good this approach is in general.
We can take an equally weighted portfolio of all strategies as the floor for any metastrategy, because that is the naive option. The ceiling is investing in the top strategy from day 1, as we cannot make any more money than that*.
* we can but I’m not going to go down that route this time.
Anyway, here’s how things work with a burn cutoff of 1,000 days, shortlisting the top 20 strategies and investing in all 20 equally.
Wealth multiples (t ≥ 1000)
Best single expert (hindsight) : 71.94×
Uniform across all experts : 23.34×
Uniform until t=1000, then 1/20 shortlist : 13.82×
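Here is roughly how those multiples could be computed, reusing select_experts from the sketch above. This is illustrative; the exact windowing in my code may differ.

```python
BURN, TOP_K = 1000, 20
post = returns.iloc[BURN:]            # evaluate everything from t >= BURN

# Best single expert, chosen with hindsight
best_expert = (1.0 + post).prod().max()

# Uniform across all experts
uniform = (1.0 + post.mean(axis=1)).prod()

# Uniform until t = BURN, then 1/TOP_K across the shortlist
experts = select_experts(returns, BURN, TOP_K)
shortlist = (1.0 + post[experts].mean(axis=1)).prod()

print(f"Best expert: {best_expert:.2f}x, uniform: {uniform:.2f}x, "
      f"shortlist: {shortlist:.2f}x")
```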
Wow, this strategy is terrible. It performed worse than if we had just invested equally in all 2,070. I realized at this point that maybe my cute little pony-show concept was dead in the water. Until I considered something else:
Why invest in all of the top strategies equally when we know that their relative performance drifts? Some perform better than others during certain periods.
Thus, the capstone to my strategy was envisioned.
Betting on the ‘Horses’ Live
So, sure, I had a little setback. But that is never going to stop me from achieving my goals. It makes little sense to stick with a top-k equally weighted metastrategy if we know that the strategies ebb and flow in their performance.
I devised a simple rotational system that goes something like this:
Every 75 days, calculate which strategy in our top-k shortlist did the best during the previous quarter.
Switch the allocation 100% into that system and hold it for the next quarter.
This allows for a bit of adaptability without the massive turnover that could hinder performance.
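A minimal sketch of that rotation, under the same DataFrame assumption as before and reusing the burn-period shortlist; the exact handling of the 75-day windows is my own simplification.

```python
import pandas as pd

def adaptive_shortlist(returns: pd.DataFrame, experts: list,
                       burn: int, hold: int = 75) -> pd.Series:
    """Every `hold` days, move 100% into whichever shortlisted strategy
    performed best over the previous `hold` days, and hold it."""
    legs = []
    for start in range(burn, len(returns), hold):
        lookback = returns.iloc[start - hold:start][experts]
        winner = (1.0 + lookback).prod().idxmax()   # best of the last quarter
        legs.append(returns.iloc[start:start + hold][winner])
    return pd.concat(legs)
```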
I should also remind you that these 2,070 strategies are all highly correlated and share the same 10–12 assets in their universes. This also makes it much harder to come up with a metastrategy, because there is no truly uncorrelated ‘hedge’ when all the strategies are similar. I will research diversification next.
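If you want to quantify that caveat on your own basket, one quick check is the average pairwise correlation across the strategy return columns (again assuming the same returns DataFrame):

```python
import numpy as np

corr = returns.corr().to_numpy()
off_diagonal = corr[~np.eye(corr.shape[0], dtype=bool)]
print(f"Mean pairwise correlation: {off_diagonal.mean():.2f}")
```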
The Results
Wealth multiples (t ≥ 1000)
Best single expert (hindsight) : 71.94×
Uniform across all experts : 23.34×
Uniform until t=1000, then 1/20 shortlist : 13.82×
Uniform until t=1000, then adaptive shortlist : 84.15×
Now we see spectacular performance. A little adaptability allowed our metastrategy to compete with, and even outperform, the best expert in hindsight, with no lookahead knowledge. This is great news: it means we can track the best expert and expect comparable performance through a combination of performance tracking and adaptability.
Implementing it Yourself
You can play with the algorithms yourself by simply bringing a matrix of strategy returns and dropping it into the system.
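Under the same assumptions as the sketches above (a CSV of daily returns with one column per strategy, plus the illustrative helpers select_experts and adaptive_shortlist), the whole pipeline is only a few lines:

```python
import pandas as pd

returns = pd.read_csv("strategy_returns.csv", index_col=0)  # days x strategies

BURN, TOP_K, HOLD = 1000, 20, 75
experts = select_experts(returns, BURN, TOP_K)
meta = adaptive_shortlist(returns, experts, BURN, HOLD)
print(f"Metastrategy wealth multiple (t >= {BURN}): {(1.0 + meta).prod():.2f}x")
```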
Check out the strategy code now in the private Google Drive. If you are not a paid subscriber, consider becoming one now to gain access to all of the previous strategies and code from all my other posts.
And if you enjoyed this post, remember to subscribe to get all my content straight to your inbox. I’m grateful for all of the support I’ve been shown thus far and look forward to bringing you all more high quality content.
Happy researching!
Comments

I don’t think this is a rigorous way of selecting and optimising. First, all your strategies are highly correlated, so you should really look into their relative performances, regressed. Second, the top(1) justification is based on prior performance, which is basically just overfitting; I don’t think I need to justify why in this comment. Lastly, the concept of letting the “markets decide which ones are winners” via any sort of “burn rate” is just flawed. Market regimes change, so in theory this shouldn’t even work. The assumption that strategies are stable really hampers this too. Picking the best of the strategies that work, in a brute-force way, and ignoring the ones that don’t, with no economic or statistical justification, is not the way forward. You are basically ignoring losers and focusing on winners based on past performance and hoping that continues, which is the definition of overfitting.
So the game changer is that you bet on the top(1) instead of the top(n).
Any justification for why top(1) is better than top(n)? top(n) seems to be slightly worse than averaging across all the strategies in your first try, and that makes one hesitate before jumping to any conclusion. How about top(2), top(3), and so on?