Warning! Wall of text (see the <TL;DR> paragraphs below for the short version)
I have been noticing something in quite a few games (most recently in cutting-edge RTS games such as Uber Entertainment's Planetary Annihilation, which is amazing, by the way) that I think has room for improvement.
One approach: no multithreading. The main GL loop flushes the input queue and draws the UI, but does not necessarily update the 3D scene every frame. Instead, the 3D rendering pipeline is designed to draw the scene in fragments, one piece at a time, into an alternate render buffer or texture. This sounds a little impractical because it's not clear how the rendering should be split up. Tiles? Scanlines? How big would they be? With a super-heavy workload that normally renders at 2 fps (500 ms per frame), I'd want to split the scene into 500 ms / 16.667 ms ≈ 30 chunks, but there's definitely no guarantee that each chunk would fit in its allotted 16.67 ms. It also sounds like adjusting the chunk count would mean shuffling resources around on the GPU, which basically leads to a bunch of extra overhead.
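To make the single-threaded idea concrete, here is a minimal sketch of a time-budgeted chunked renderer. Everything here is hypothetical (the `ChunkedRenderer` type, `renderChunk`, and the budget numbers are mine, not from any real engine), and the actual GL draw calls are stubbed out:

```cpp
#include <chrono>

// Hypothetical sketch: split a heavy scene into N chunks and render at most
// one time budget's worth of them per UI frame. renderChunk() stands in for
// drawing one tile/scanline group into an offscreen buffer or texture.
struct ChunkedRenderer {
    int totalChunks;    // e.g. 500 ms frame / 16.667 ms budget ≈ 30 chunks
    int nextChunk = 0;  // resume point for the next UI frame

    void renderChunk(int) { /* placeholder for real GL draw calls */ }

    // Called once per UI frame. Returns true once the whole scene is done
    // and the offscreen result is ready to be displayed.
    bool renderSlice(double budgetMs) {
        using clock = std::chrono::steady_clock;
        auto start = clock::now();
        while (nextChunk < totalChunks) {
            renderChunk(nextChunk++);  // draw into the alternate buffer
            double elapsed = std::chrono::duration<double, std::milli>(
                                 clock::now() - start).count();
            if (elapsed >= budgetMs) break;  // yield back to the UI loop
        }
        if (nextChunk == totalChunks) { nextChunk = 0; return true; }
        return false;
    }
};
```

The catch is exactly the one above: `renderChunk` has no guaranteed duration, so one oversized chunk can still blow the frame budget.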
<TL;DR #1> Two GL contexts, two threads. Thread #1 flushes the input queue and draws the UI, periodically updates the texture into which the 3D scene is rendered, draws that texture with a full-screen quad, and handles buffer swapping to keep vsync smooth. Thread #2 renders the 3D scene into a texture shared with Thread #1. This requires a ping-pong scheme to facilitate resource sharing: Thread #2 flips a bit, which Thread #1 reads on its next cycle to determine whether it needs to switch textures.
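The ping-pong handoff could look something like the sketch below. The GL side is stubbed out (the two `frames` slots stand in for two shared textures), and the type and names are my own invention; the point is just the bit-flip that Thread #1 reads on its next cycle:

```cpp
#include <atomic>

// Hypothetical sketch of the ping-pong scheme: two shared "textures",
// indices 0 and 1. The render thread (Thread #2) draws into the back
// texture, then publishes it by flipping an atomic index that the UI
// thread (Thread #1) reads on its next cycle.
struct PingPong {
    std::atomic<int> frontIndex{0};  // texture the UI thread samples from
    int frames[2] = {0, 0};          // stand-ins for two GL textures

    // Thread #2: render the 3D scene into the back texture, then flip.
    void renderSceneToBack(int frameId) {
        int back = 1 - frontIndex.load(std::memory_order_acquire);
        frames[back] = frameId;  // "draw" the scene offscreen
        frontIndex.store(back, std::memory_order_release);  // publish
    }

    // Thread #1: draw the full-screen quad from whichever texture is front.
    int drawFullScreenQuad() {
        return frames[frontIndex.load(std::memory_order_acquire)];
    }
};
```

Note one limitation of plain double buffering: after a flip, the render thread's next write targets the texture the UI thread may still be sampling, so a real implementation would need the UI thread to acknowledge the swap, or a third buffer.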
<TL;DR #2> My question is: can this be done in a cleaner way? Is there an engine design that achieves this, perhaps without requiring two contexts? I know that on iOS, for example, you are required to set up a separate OpenGL ES context for each thread, and as far as I can tell the only sane way to handle this is with threads.