
Choice of game design and other human psychological factors aside: is present hardware not capable of handling double-precision floating-point calculations with reasonable performance? That would do away with all the complicated tricks needed to make large worlds work seamlessly in the current single-precision float setup.

Correct me if I'm wrong, but shouldn't modern x64 processors be better suited to this kind of task?

As for GPU performance, it should still be easier to work in double precision and translate coordinates to float in camera space.
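
To illustrate, here is a rough sketch of what I have in mind (the types and the function are hypothetical, just to show the idea): subtract the camera position while both values are still in double, so the large magnitudes cancel before narrowing to float.

```cpp
struct Vec3d { double x, y, z; };  // world-space position, kept in double on the CPU
struct Vec3f { float  x, y, z; };  // what actually gets uploaded to the GPU

// Subtract in double first so the large world coordinates cancel,
// then narrow the small camera-relative offset to float.
Vec3f toCameraRelative(const Vec3d& world, const Vec3d& camera) {
    return Vec3f{
        static_cast<float>(world.x - camera.x),
        static_cast<float>(world.y - camera.y),
        static_cast<float>(world.z - camera.z),
    };
}
```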

  • Here's one: unigine.com/products/engine/unbounded-world (Commented Oct 20, 2016 at 21:54)
  • What do you mean by "other human psychological factors"? (Commented Oct 20, 2016 at 22:16)
  • I mean other factors, like corporate reasoning: it may be too costly, or too new and untested in the existing market. (Commented Oct 20, 2016 at 22:23)
  • Here is another: ode-wiki.org/wiki/… (Commented Oct 20, 2016 at 23:00)
  • Bullet Physics also allows you to use doubles if you want. (Commented Oct 20, 2016 at 23:12)

1 Answer


There are engines and physics libraries that support double precision, as has been pointed out to you in the comments.

Generally you don't see a lot of double-based engines in games because they aren't needed, and they might not be as fast as float-based ones.

The significant majority of games can work just fine with floats and the various world-partitioning or origin-readjustment techniques that exist. They usually gain (or can gain) other advantages from those partitioning techniques as well, so there is good synergy there.

Further, from a performance perspective, a game engine built on double-precision math (and thus double-precision matrix and vector APIs) can't take as much advantage of SIMD instruction sets, which are generally sized for vectors and matrices with 32-bit components. Those vector instructions can be a significant performance boost, and choosing geometry representations that don't fit into the instruction set's registers means you don't get to leverage them.
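
As a concrete sketch of the width argument (assuming an x86 CPU with SSE2; the intrinsics below are the standard ones from `<immintrin.h>`), one 128-bit register operation covers four float lanes but only two double lanes:

```cpp
#include <immintrin.h>

void add4_floats(const float* a, const float* b, float* out) {
    __m128 va = _mm_loadu_ps(a);              // 4 floats fit in one XMM register
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));   // one instruction, 4 lanes
}

void add2_doubles(const double* a, const double* b, double* out) {
    __m128d va = _mm_loadu_pd(a);             // only 2 doubles fit
    __m128d vb = _mm_loadu_pd(b);
    _mm_storeu_pd(out, _mm_add_pd(va, vb));   // one instruction, 2 lanes
}
```

Wider instruction sets change the absolute numbers but not the ratio: a 256-bit AVX register holds eight floats or four doubles.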

The problem of interfacing with GPUs, which also generally want 32-bit vector components, isn't quite as simple as you allude to in your question, either. Sure, you can recenter and downscale to floats relative to the render camera... but to get to view space, the graphics pipeline normally transforms your geometry through world space, which means you would already have had to submit the geometry and its coordinates as floats.

To do what you're suggesting, you would have to short-circuit that part of the pipeline by bringing all the coordinates into view space on the CPU, which is not nearly as well suited to the task... especially if it can't leverage the SIMD instruction set because the vectors don't fit.

You also double the memory cost of every coordinate component, which means a higher memory footprint overall, fewer components fitting into cache at once, et cetera.
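
As a back-of-the-envelope illustration (assuming the usual packing with no padding and 64-byte cache lines):

```cpp
struct PosF { float  x, y, z; };  // 12 bytes: five whole positions per cache line
struct PosD { double x, y, z; };  // 24 bytes: only two whole positions per cache line

static_assert(sizeof(PosF) == 12, "assumes no padding");
static_assert(sizeof(PosD) == 24, "assumes no padding");
```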

None of these problems is insurmountable. It's just that not very many games really benefit from solving them, so the demand for middleware support is proportionally low.

  • Say I keep camera visibility to the maximum a single-precision world can handle, and when sending data to the GPU I offset the origins of visible elements in that single-precision frame to the camera's location; won't that let me use the same pipeline as before? (Commented Oct 21, 2016 at 10:06)
  • Yes. That's essentially what I'm saying you'd have to do, and doing so has a cost that, most of the time, engine vendors probably wouldn't consider worth paying and thus don't directly support themselves. (Commented Oct 21, 2016 at 15:23)
  • Doesn't "world partitioning" end up being slower than just using doubles natively? You'd have to write code to convert from local to world space, etc. Also, many modern SIMD instruction sets are 256-bit or 512-bit, so they can handle even 128-bit Vector4 calculations. (Commented Jan 13, 2018 at 5:28)
  • @AaronFranke Possibly, if you're not partitioning the world for any other reason already; but often you are, so the costs are somewhat amortized. (Commented Jan 13, 2018 at 16:22)
  • Also, Vulkan can work with double-precision camera matrices and rendering. So no, you wouldn't need to short-circuit the graphics pipeline as long as you're using a cutting-edge API. (Commented Apr 10, 2018 at 9:16)
