You could potentially use DLL injection/detouring to intercept drawing commands from the game on their way to the graphics driver.
You'd detect the calls that mark the start of each frame, and insert your own command that draws a dithered pattern into the stencil buffer to seed it. Then you'd modify the game's draw calls so that, when you pass them along to the real graphics driver, they render only to the stencilled pixels.
This would get very complicated on big modern games, which might:

- render to many off-screen targets for things like shadow maps, virtual texturing, and reflections;
- be using the stencil buffer for their own purposes, which your intervention would stomp on; or
- read results from adjacent pixels mid-frame (say for ambient occlusion, screen-space reflections, or antialiasing) before your ML solution has done its work to fill them in.
So it's conceivable, but likely to be quite difficult to do well or to achieve good performance with.
If you want to test your ML techniques, I'd argue a much better solution is to get a free game engine like Unity or Unreal, and set up one of the sample game scenes available for those engines. That will give you a representative test case, with the advantage that you have access to the game's internals and the engine API, so you can add your selectively sampled rendering at the source, rather than trying to splice it into a set of rendering commands that were issued with no awareness of it.