This post serves as a central reference for the first version of our backtesting framework, what we call the “really, really thorough backtest.”
This framework can and should change with community feedback and experimentation, which is why we call this a “version 1.” We want our methodology to be transparent and iterative.
In case you need a refresher on any concepts here, be sure to check out our strategy evaluation posts.
Cross Validation and IS/OOS Splits
We will use two kinds of cross validation:
Random sampling with 200 IS/OOS splits
Walk-forward optimization with a varying number of splits
The type of method used and specification will be disclosed with cross validation results.
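To make the two split schemes concrete, here is a minimal sketch of how they could be generated. The function names, the contiguous-OOS assumption for the random splits, and the anchored (expanding IS window) variant of walk-forward are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def random_is_oos_splits(n_bars, n_splits=200, oos_frac=0.25, seed=0):
    """Randomly place a contiguous OOS window; remaining bars form the IS set.
    Assumes a contiguous OOS block per split (an illustrative choice)."""
    rng = np.random.default_rng(seed)
    oos_len = int(n_bars * oos_frac)
    splits = []
    for _ in range(n_splits):
        start = int(rng.integers(0, n_bars - oos_len + 1))
        oos = np.arange(start, start + oos_len)
        is_ = np.setdiff1d(np.arange(n_bars), oos)
        splits.append((is_, oos))
    return splits

def walk_forward_splits(n_bars, n_splits=5):
    """Anchored walk-forward: each OOS fold is preceded by all earlier data as IS."""
    fold = n_bars // (n_splits + 1)
    return [(np.arange(0, k * fold), np.arange(k * fold, (k + 1) * fold))
            for k in range(1, n_splits + 1)]
```

A rolling (fixed-width IS) walk-forward is an equally valid variant; the anchored form is shown only because it is the simpler of the two.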
Optimization
To define the optimal configuration of parameters or models, we choose the one with the highest Sharpe ratio over the in-sample period. This configuration is then applied in each respective OOS period.
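The selection step above can be sketched as a simple grid search. Here `backtest` is a hypothetical callable mapping a parameter configuration to its per-bar IS returns, and the 252-period annualization is an assumption for daily bars.

```python
import numpy as np

def sharpe(returns, periods_per_year=252):
    """Annualized Sharpe ratio of per-bar returns (risk-free rate assumed 0)."""
    r = np.asarray(returns, dtype=float)
    sd = r.std(ddof=1)
    return 0.0 if sd == 0 else float(r.mean() / sd * np.sqrt(periods_per_year))

def optimize(backtest, grid):
    """Return the configuration whose IS returns give the highest Sharpe ratio."""
    best, best_sharpe = None, -np.inf
    for params in grid:
        s = sharpe(backtest(params))
        if s > best_sharpe:
            best, best_sharpe = params, s
    return best, best_sharpe
```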
Strategy Metrics
We will use a variety of performance metrics, including:
Sharpe ratio
Annualized return
Maximum drawdown
Win rate
Win/Loss ratio
Profit factor
Expectancy
Calmar ratio
Sortino ratio
CAGR/Average drawdown ratio
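For reference, a few of these metrics can be computed from a series of per-bar (or per-trade) returns as follows. These are textbook definitions written as a sketch; the 252-period annualization and zero risk-free rate are assumptions, and conventions (e.g. drawdown sign, trade- vs bar-level expectancy) may differ from our final implementation.

```python
import numpy as np

def annualized_return(returns, periods_per_year=252):
    r = np.asarray(returns, dtype=float)
    return float((1 + r).prod() ** (periods_per_year / len(r)) - 1)

def max_drawdown(returns):
    """Largest peak-to-trough decline of the equity curve (reported as <= 0)."""
    equity = np.cumprod(1 + np.asarray(returns, dtype=float))
    peak = np.maximum.accumulate(equity)
    return float(((equity - peak) / peak).min())

def win_rate(returns):
    return float((np.asarray(returns, dtype=float) > 0).mean())

def profit_factor(returns):
    """Gross profits divided by gross losses."""
    r = np.asarray(returns, dtype=float)
    gross_loss = -r[r < 0].sum()
    return float(r[r > 0].sum() / gross_loss) if gross_loss > 0 else float("inf")

def expectancy(returns):
    """Mean return per period/trade."""
    return float(np.asarray(returns, dtype=float).mean())

def calmar(returns, periods_per_year=252):
    return annualized_return(returns, periods_per_year) / abs(max_drawdown(returns))

def sortino(returns, periods_per_year=252):
    """Like Sharpe, but penalizes only downside volatility."""
    r = np.asarray(returns, dtype=float)
    downside = r[r < 0]
    return float(r.mean() / downside.std(ddof=1) * np.sqrt(periods_per_year))
```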
We then compute the following from the metric measurements across the random CV splits:
Significance: Median metric p-value from 1,000 Monte Carlo simulations on each OOS period
Stability: IQR/median ratio of the metric across OOS periods
Accuracy: IS/OOS correlation for the metric
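The three summary measures might look like the sketch below. The Monte Carlo scheme shown (shuffling and sign-flipping returns as the null) is one of several reasonable choices and is an assumption, as is the add-one smoothing on the p-value; the stability and accuracy formulas follow the definitions above.

```python
import numpy as np

def mc_pvalue(oos_returns, metric, n_sims=1000, seed=0):
    """Monte Carlo p-value: how often a randomized series (shuffled order,
    random sign flips) matches or beats the observed metric value."""
    rng = np.random.default_rng(seed)
    r = np.asarray(oos_returns, dtype=float)
    observed = metric(r)
    hits = 0
    for _ in range(n_sims):
        sim = rng.permutation(r) * rng.choice([-1.0, 1.0], size=len(r))
        if metric(sim) >= observed:
            hits += 1
    # Add-one smoothing so the p-value is never exactly zero.
    return (hits + 1) / (n_sims + 1)

def stability(oos_values):
    """IQR divided by the median of the metric across OOS periods (lower = more stable)."""
    q1, q3 = np.percentile(oos_values, [25, 75])
    return float((q3 - q1) / np.median(oos_values))

def accuracy(is_values, oos_values):
    """Pearson correlation between IS and OOS values of the metric across splits."""
    return float(np.corrcoef(is_values, oos_values)[0, 1])
```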
Trade Assumptions
Trade execution is made at the next bar's open
Fees: 0.01%
Slippage: 0.01%
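As a sketch of how these assumptions combine, the snippet below shifts each signal forward one bar so it is filled at the next open, and charges fees plus slippage whenever the position changes. Treating both costs as a flat percentage per position change is an illustrative simplification.

```python
import numpy as np

FEE = 0.0001       # 0.01% per position change
SLIPPAGE = 0.0001  # 0.01% per position change

def next_bar_open_returns(opens, signals, fee=FEE, slippage=SLIPPAGE):
    """Net strategy returns under next-bar-open execution.
    signals[t] is the desired position decided on bar t; it is filled at
    opens[t+1] and earns the open-to-open return of the following interval."""
    opens = np.asarray(opens, dtype=float)
    pos = np.roll(np.asarray(signals, dtype=float), 1)
    pos[0] = 0.0                                   # no position before the first fill
    open_to_open = np.diff(opens) / opens[:-1]     # return from opens[t] to opens[t+1]
    gross = pos[:-1] * open_to_open
    trades = np.abs(np.diff(pos, prepend=0.0))     # position changes incur costs
    costs = trades[:-1] * (fee + slippage)
    costs[-1] += trades[-1] * (fee + slippage)     # charge the final exit to the last bar
    return gross - costs
```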
Any changes or additional methodology notes will be included with results.