I want to backtest a large number of algorithms on the same dataset, a single stock ticker. The algorithms are really variations of one algorithm with different combinations of parameters, amounting to about 10,000 variations.
Is there a way to test all these algorithms truly in parallel, with a shared cache storing various indicators computed in the process?
I'm new to parallel computing, but I think GPUs and cloud computing providers might have solutions. However, the largest number of cores I've seen a provider offer is 128.
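To make the shared-cache part concrete, this is roughly what I picture for the indicator side, sketched in Python (the file name, column name, and moving-average windows are just placeholders for whatever indicators the real algorithm uses): every indicator any variation might need gets computed once up front, and all ~10,000 variations would only read from that cache.

import pandas as pd

# Load the single ticker's history once (file and column names are placeholders).
close = pd.read_csv("ticks.csv")["close"]

# Precompute every indicator any variation might ask for, keyed by
# (indicator_name, parameter). All ~10,000 variations then read from this
# cache instead of recomputing the same series over and over.
indicator_cache = {
    ("sma", window): close.rolling(window).mean().to_numpy()
    for window in range(5, 205, 5)   # candidate moving-average windows
}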
In very high-level pseudocode, I'm looking for something like:
// allocate a thread/core per algo
for (i = 0; i < algos.length; i++) {
    threads[i] = CPU.createThread(algos[i])
    threads[i].initAlgo(param1, param2, ...)
}

// Pass each tick to all algos
for (i = 0; i < ticks.length; i++) {
    for (j = 0; j < algos.length; j++) {
        signal = threads[j].algoProcessTick(ticks[i])
        // do something with signal
    }
}
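And this is roughly how I picture the parallel part, continuing from the cache snippet above: instead of one thread per variation, a pool with one worker per available core works through the ~10,000 parameter combinations. The crossover/threshold strategy below is only a stand-in for my real algorithm, and the parameter values are illustrative.

import itertools
from multiprocessing import Pool

# ~10,000 parameter combinations (20 x 20 x 25); values are illustrative.
fast_windows = range(5, 105, 5)
slow_windows = range(105, 205, 5)
thresholds = [0.2 * k for k in range(1, 26)]
param_grid = list(itertools.product(fast_windows, slow_windows, thresholds))

def run_variation(params):
    """Backtest one parameter combination against the cached indicators."""
    fast, slow, threshold = params
    fast_sma = indicator_cache[("sma", fast)]
    slow_sma = indicator_cache[("sma", slow)]
    position, entry, pnl = 0, 0.0, 0.0
    for i in range(len(close)):           # one pass over the ticks
        signal = fast_sma[i] - slow_sma[i]
        if position == 0 and signal > threshold:
            position, entry = 1, close.iloc[i]
        elif position == 1 and signal < -threshold:
            pnl += close.iloc[i] - entry
            position = 0
    return params, pnl

if __name__ == "__main__":
    # One worker per core; the 10,000 tasks are queued onto however many
    # cores exist, so a dedicated core per algorithm is not actually required.
    with Pool() as pool:
        results = pool.map(run_variation, param_grid)
    best = max(results, key=lambda r: r[1])
    print("best params:", best[0], "pnl:", best[1])

My understanding is that on Linux the workers would inherit indicator_cache for free via fork, while on Windows/macOS each worker would rebuild it (or it would have to go into shared memory). Is that the right mental model, or is there a better way to share the cache across that many variations?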