I'm almost sold on the concept of a data-oriented engine; however, one thing still eludes me. If we pack the data from a large level into huge arrays and iterate over them, any visibility system we have will flag most of those entries as not needing processing, yet the loop still has to walk through all of them.
Consider this excerpt:
void do_stuff_with_data(data* vdata, const const_data* cdata, int count)
{
    for ( int i = 0; i < count; ++i )
    {
        if ( vdata[i].active )
        {
            // ... process vdata[i] / cdata[i]
        }
    }
}
Given that most of the data is not visible at any one time, and that things move around so there is no sensible way to keep the array ordered by visibility, the `active` branch is essentially unpredictable. Doesn't that negate the cache benefits of the tight layout?
I have looked for articles dealing with this specific problem, but most sources just show the "big picture" and completely disregard this issue.
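For what it's worth, the only mitigation I can come up with myself is to keep the active entries swap-packed at the front of the arrays, so the hot loop never branches at all. Here is a rough sketch of what I mean; the types, field names, and functions are placeholders I made up for this question, not something taken from any engine or article:

#include <utility> // std::swap

// Placeholder types just for this sketch; my real data is larger.
struct data { bool active; float x, y, z; };
struct const_data { float param; };

// Deactivating an element swaps it behind the "active" prefix, so the
// front of both arrays stays densely packed with active entries.
// Element order within the prefix is not preserved.
void deactivate(data* vdata, const_data* cdata, int index, int& active_count)
{
    --active_count;
    std::swap(vdata[index], vdata[active_count]);
    std::swap(cdata[index], cdata[active_count]);
}

// The hot loop then only walks the active prefix: contiguous reads,
// no per-element branch to mispredict.
void do_stuff_with_active_data(data* vdata, const const_data* cdata, int active_count)
{
    for ( int i = 0; i < active_count; ++i )
    {
        // ... process vdata[i] / cdata[i]
    }
}

Is something like this partitioning what engines actually do in practice, or is there a better-known approach?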