I currently have a bash script, script.sh, with two nested loops. The first enumerates possible values for a, and the second enumerates possible values for b, like this:
#!/bin/bash
for a in {1..10}
do
    for b in {1..10}
    do
        nohup python script.py "$a" "$b" &
    done
done
So this spawns off 100 Python processes running script.py, one for each (a, b) pair. However, my machine has only 5 cores, so I want to cap the number of concurrent processes at 5 to avoid thrashing and wasteful context switching. The goal is to always have 5 processes running until all 100 have finished.
xargs seems to be one way to do this, but I don't know how to pass these arguments to it. I've looked at other similar questions, but I don't understand the surrounding bash jargon well enough to follow what's happening. For example, I tried
seq 1 | xargs -i --max-procs=5 bash script.sh
but this doesn't seem to do anything: script.sh runs as before and still spawns all 100 processes at once.
I assume I'm misunderstanding how xargs works.
Thanks!


xargs can't, and doesn't, change how its subprocesses start their own children; rather, it provides control over how it starts its subprocesses.

Consider Python's multiprocessing module instead of xargs. Or alternatively the GNU parallel utility. docs.python.org/3/library/multiprocessing.html

xargs, which is a model of simplicity in comparison. Given a Pythonist's preference for simplicity, I'd consider xargs the obvious choice among the two. :)

--max-procs is a GNUism, but -P works on OS X and modern FreeBSD as well.
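Putting those comments together, one way to drive the (a, b) pairs through xargs is to print them on stdin, one pair per line, and let -L 1 -P 5 run at most five invocations at a time. This is a sketch, not the asker's exact setup: the extra echo in front of python makes it a dry run that only prints the commands, so the pairing can be checked before removing the echo.

```shell
#!/bin/bash
# Print every (a, b) pair, one pair per line, onto xargs's stdin.
for a in {1..10}; do
  for b in {1..10}; do
    echo "$a $b"
  done
done |
  # -L 1: consume one input line (one pair) per invocation
  # -P 5: run at most 5 invocations concurrently
  # The leading "echo" makes this a dry run; drop it to actually run:
  #   xargs -L 1 -P 5 python script.py
  xargs -L 1 -P 5 echo python script.py
```

Once the dry run prints the 100 expected command lines, deleting the stand-in echo runs script.py for real, five processes at a time.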
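An alternative the comments don't mention, offered here only as a sketch: bash's own job control can enforce the cap without xargs at all. This assumes bash 4.3 or newer (for wait -n, which waits for any one background job to finish), and uses echo as a stand-in for the real python script.py so the sketch runs as-is.

```shell
#!/bin/bash
# Requires bash >= 4.3 for "wait -n".
max_jobs=5
running=0

for a in {1..10}; do
  for b in {1..10}; do
    # "echo" stands in for the real command so this runs as-is;
    # replace with:  nohup python script.py "$a" "$b" &
    echo "python script.py $a $b" &
    # Once max_jobs are in flight, wait for one to finish
    # before launching the next.
    (( ++running < max_jobs )) || { wait -n; (( running-- )); }
  done
done
wait   # let the last few jobs drain
```

The counter never exceeds 5, so at most 5 background jobs run at any moment, and the final wait blocks until the remaining ones are done.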