
I created a game prototype in Python, using Panda3D. I finished the entire MVP game logic, prototype assets, and networking (client and server side), built a rudimentary UI, and everything works well. The game is fully playable at this point and is currently being playtested.

My next steps are adding polish and eye candy to the game. While Python let me prototype rapidly and build a complete game from scratch very quickly, adding more objects to the scene that serve no functional purpose is rapidly degrading performance.

I see two approaches here. I can either continue development, heavily multithreading the workload and rewriting as much performance-critical code as possible in C++, or I can switch to a new engine. With the first option, any additional game logic can be parallelized easily, but the final bottleneck is the rendering pipeline, which will be a pain to make efficient with thousands of objects in the scene, even if most of them don't really do anything.

On the other hand, rewriting most of the logic in another language could be done relatively quickly; the game logic itself is not very complicated. And of course, most game engines have built-in features for optimizing graphics workloads that I would otherwise have to write myself.

In the end, I decided to experiment a bit before making a final choice. I want to target all desktop platforms (and later, optionally, mobile), so the first engine that came to mind was Unity.

I opened a new project, set the camera to clear the previous frame with a solid color (to avoid wasting time drawing the skybox), added logic to move the camera with WASD, and created a procedurally generated mesh and added it to the scene. The mesh contains 537 vertices and 432 triangles and uses the default material with no changes. No custom textures or shaders.

For completeness' sake, here is the entirety of the code I added myself.

namespace UI {
    public class CameraHandler : MonoBehaviour {
        private const float CameraSensitivity = 10f;
        private Vector3 _position = new Vector3(0, 4, -10);

        private void Update() {
            ShiftCamera();
        }

        private void ShiftCamera() {
            var realSensitivity = Time.deltaTime * CameraSensitivity;
            
            if (Input.GetKey(KeyCode.A)) _position.x -= realSensitivity;
            if (Input.GetKey(KeyCode.D)) _position.x += realSensitivity;
            if (Input.GetKey(KeyCode.W)) _position.z += realSensitivity;
            if (Input.GetKey(KeyCode.S)) _position.z -= realSensitivity;
            
            transform.position = _position;
        }
    }
}

namespace Objects {
    public class GameTileTerrain : MonoBehaviour {
        private bool _meshStale = true;
        private Mesh _mesh;
        ...

        private void Update() {
            if (_meshStale) UpdateTerrainMesh();
        }

        private void UpdateTerrainMesh() {
            var vertices = new Vector3[_mesh.vertices.Length];
            // build vertices array

            _mesh.vertices = vertices;
            _mesh.RecalculateBounds();

            _meshStale = false;
        }
    }
}
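
Side note on the rebuild path: the _mesh.vertices getter allocates a fresh managed copy of the vertex array on every call, and Mesh.MarkDynamic() hints to Unity that the buffer will be rewritten at runtime. Here is a sketch of a cheaper variant (the _vertices field is new here; I haven't profiled this version, and the mesh rebuild isn't where my frame time goes anyway):

private Vector3[] _vertices;

private void Start() {
    _mesh.MarkDynamic();        // hint: vertex buffer is updated at runtime
    _vertices = _mesh.vertices; // one-time managed copy of the vertex array
}

private void UpdateTerrainMesh() {
    // mutate _vertices in place here, then re-upload once
    _mesh.vertices = _vertices;
    _mesh.RecalculateBounds();
    _meshStale = false;
}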

I'm seeing constant frame drops, both in a built binary and within the Unity editor. The profiler says that most of the time, ~95%, is spent in Semaphore.WaitForSignal() on every frame. From what I understand, this means the CPU is waiting for the GPU to finish, but the GPU is idle during most of that time.
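
From what I've read, Semaphore.WaitForSignal() can also show up when the main thread is blocked on vsync/present rather than on real GPU work. As a sketch of how to rule that out (these are standard Unity settings, but I haven't confirmed yet whether changing them helps on my machine):

using UnityEngine;

public class VsyncProbe : MonoBehaviour {
    private void Start() {
        // 0 = don't wait for vertical blank; -1 = no frame-rate cap.
        // If WaitForSignal disappears with these set, the stall was vsync.
        QualitySettings.vSyncCount = 0;
        Application.targetFrameRate = -1;
    }
}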

[Profiler screenshot]

These results are from a 2019 MacBook Pro with an Intel Iris GPU (yes, I know, not intended for gaming, but it is one of the targeted platforms), running at 1280x720 with Retina disabled in the Unity player settings.

For comparison, the prototype I wrote in Panda3D can run 4 instances of the game simultaneously at 4K, 60 Hz with no frame drops and plenty of CPU and GPU time to spare, with a scene containing dozens of objects with geometry identical to the one I added here, other objects, and plenty of UI elements and game logic.

I expect that adding content now will not severely impact performance further; this appears to be purely fixed overhead causing the issue. But the baseline is so bad that I already have zero headroom for my own work. I also expect that running on Windows with identical hardware would perform better, but I am not willing to drop macOS from the list of supported platforms.

Is this a bug or intended behaviour? Is there anything I can do to improve performance at this early stage? I'm inclined to believe that there are Unity games that run well on Macs, but what I'm seeing on a near-empty scene is just terrible.

  • This sounds like a bug you should be reporting to the Unity engine developers. (Commented Jul 6, 2021 at 10:51)
  • What version of Unity are you using? Keep in mind that periodic frame drops are common in the Editor. In my experience, even a simple project on a very high-end computer will stutter occasionally in the Editor. (Commented Jul 9, 2021 at 20:19)
  • The "entirety of the code" you wrote yourself is obviously not the entirety, since it refers to functions and variables that aren't present. If we only have excerpts to look at, we can't fully assess whether there's a problem with your code. My guess would be that you're calling UpdateTerrainMesh() every frame (e.g. because you forgot to set _meshStale = false), which would require re-uploading the terrain mesh to the GPU every frame, which could impact performance. (Commented Jul 9, 2021 at 20:27)
  • @Kevin I'm using 2020.3.13f1. I installed it the day I asked this question. I've expanded my source to show the logic behind _meshStale. The idea is that Start loads a flat mesh "mask" from disk whose vertices are manipulated and rewritten when game logic requires it. If you want, I can share the entire script on pastebin or somewhere, but it's definitely not the cause of the performance loss, as it's not optimized at all and takes the better part of a second to generate as currently written, I'm guessing due to a large number of calls to Perlin noise. The impact would be more significant (Commented Jul 12, 2021 at 10:02)
  • While I don't have exact metrics when built, as I'm not recording performance on my own yet, it's visible that the game frequently drops frames when built and run (development build or not) (Commented Jul 12, 2021 at 10:04)
