Questions tagged [gpu]
A GPU (graphics processing unit) is a specialized processor designed to accelerate the process of building images.
183 questions
0
votes
3
answers
242
views
Do GPUs re-draw all of a character's vertices/triangles/fragments every frame?
I'm a beginner in game programming and I want someone to confirm my understanding.
Let's say there's a 3D model for a character in the game. This character is made up of triangles. Each vertex in ...
0
votes
1
answer
151
views
How to efficiently construct model matrices in a vertex shader from a small amount of data, for instanced rendering of many moving 3D game characters
I am trying to efficiently render many 3D game characters: many instances of the same 3D model, which can change their positions and z rotations each frame. For reference I'm using OpenGL, but this ...
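The excerpt cuts off, but the usual pattern is to upload only a few floats per instance and rebuild the transform in the vertex shader. A minimal sketch, assuming OpenGL 3.3 and a hypothetical per-instance attribute aInstance packing xyz position plus a z rotation (the attribute advances once per instance via glVertexAttribDivisor(3, 1)):

    // Hypothetical per-instance layout: xyz = position, w = rotation about z.
    const char* kInstancedVS = R"(
        #version 330 core
        layout(location = 0) in vec3 aPos;       // mesh vertex
        layout(location = 3) in vec4 aInstance;  // per-instance data
        uniform mat4 uViewProj;
        void main() {
            float c = cos(aInstance.w), s = sin(aInstance.w);
            vec3 p = vec3(c * aPos.x - s * aPos.y,   // rotate about z...
                          s * aPos.x + c * aPos.y,
                          aPos.z) + aInstance.xyz;   // ...then translate
            gl_Position = uViewProj * vec4(p, 1.0);
        }
    )";

Sixteen bytes per instance replace a 64-byte mat4, and the rotate-then-translate arithmetic avoids constructing a matrix at all.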
0
votes
2
answers
320
views
How can I efficiently render lots of moving objects in a game?
I'm using OpenGL but this question should apply generally to rendering.
I understand that for efficient rendering in games, you want to minimize communication between the CPU and GPU. This means pre-...
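The excerpt breaks off mid-sentence, but the core idea is batching: update one buffer with all per-object data and issue one instanced draw, rather than one upload and one draw per object. A sketch, assuming a loaded OpenGL 3.3 context (e.g. via glad) and a hypothetical InstanceData layout:

    #include <glad/glad.h>
    #include <vector>

    struct InstanceData { float x, y, z, zRot; };  // hypothetical per-object data

    // One upload and one draw per frame for all objects, instead of per object.
    void drawAll(GLuint instanceVbo, GLsizei indexCount,
                 const std::vector<InstanceData>& instances) {
        glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        instances.size() * sizeof(InstanceData),
                        instances.data());
        glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                                nullptr, (GLsizei)instances.size());
    }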
1
vote
1
answer
112
views
Efficiently passing data to the GPU corresponding to a collection of objects
I am beginning to work on a physics system, where there are multiple objects in the world that get rendered. The current approach I am considering consists of pseudo-OOP in C where each object has ...
0
votes
1
answer
1k
views
Rendering with sdl_gfx is so slow; any alternative?
I have been using sdl_gfx (an SDL2 extension library, https://github.com/ferzkopp/SDL_gfx) to make Android games, and I have always noticed that rendering primitives is very slow.
So I was rendering it once ...
1
vote
0
answers
897
views
Completely independent dual GPU setup for VR with 100% "SLI efficiency"?
I have a simple (and maybe quite naive) question regarding dual GPU use for Virtual Reality (VR); it has nagged me for years now and I couldn't figure out why this can't work, at least in principle:
I ...
0
votes
1
answer
2k
views
Int vs. float: which one is faster on the GPU?
My game needs to loop through a massive amount of data, and the amount of data can increase by a lot depending on world settings set by the player. The data is too big for the CPU, so I need to use the GPU for it ...
1
vote
1
answer
326
views
Is DirectX 12 or lower just an API?
I am programming a game using DirectX 12. Will it support all GPUs, or just newer GPUs? What about the versions of the Windows OS supported?
What changes when a new DirectX version comes out?
7
votes
7
answers
2k
views
Increasing efficiency of N-Body gravity simulation
I'm making a space-exploration game; it will have many planets and other objects that will all have realistic gravity.
I currently have a system in place that works, but if the number of planets ...
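The naive all-pairs approach is O(n^2), which is usually the bottleneck here. Exploiting Newton's third law halves the constant factor, and a Barnes-Hut tree brings the complexity down to O(n log n). A minimal CPU sketch of the symmetric pass:

    #include <cmath>
    #include <vector>

    struct Body { double x, y, z, vx, vy, vz, mass; };

    // Naive O(n^2) gravity; each pair is visited once and applied to both bodies.
    void stepGravity(std::vector<Body>& bodies, double G, double dt) {
        size_t n = bodies.size();
        std::vector<double> ax(n, 0), ay(n, 0), az(n, 0);
        for (size_t i = 0; i < n; ++i) {
            for (size_t j = i + 1; j < n; ++j) {
                double dx = bodies[j].x - bodies[i].x;
                double dy = bodies[j].y - bodies[i].y;
                double dz = bodies[j].z - bodies[i].z;
                double r2 = dx*dx + dy*dy + dz*dz + 1e-9;  // softening avoids r = 0
                double inv = 1.0 / (r2 * std::sqrt(r2));   // 1 / r^3
                double fi = G * bodies[j].mass * inv;      // acceleration on i from j
                double fj = G * bodies[i].mass * inv;      // acceleration on j from i
                ax[i] += dx*fi; ay[i] += dy*fi; az[i] += dz*fi;
                ax[j] -= dx*fj; ay[j] -= dy*fj; az[j] -= dz*fj;
            }
        }
        for (size_t i = 0; i < n; ++i) {
            bodies[i].vx += ax[i]*dt; bodies[i].vy += ay[i]*dt; bodies[i].vz += az[i]*dt;
        }
    }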
0
votes
2
answers
10k
views
Low FPS in Unreal engine, but GPU usage is low as well
I am running an Unreal Engine 4 project which has many high quality assets. My computer is fairly strong:
CPU: AMD Ryzen 5 3600 6-Core
GPU: GeForce RTX 3060
SSD: Lexar 500GB NM610 M.2 NVMe SSD
RAM: 2 ...
1
vote
1
answer
3k
views
Why do we use GLSL (shaders) instead of CUDA?
I mean that GLSL and CUDA both utilize the GPU to its maximum power, and in some cases I've heard CUDA runs faster on Nvidia graphics cards. So my question is: why don't we use CUDA more often for GPU graphics ...
0
votes
1
answer
327
views
How to temporarily set an additional system environment variable only in 'play' mode inside the Godot editor?
I'm learning Godot with a laptop that has an AMD discrete GPU. My OS is Arch Linux, so if I want to use the discrete GPU I have to set a system environment variable ...
3
votes
1
answer
1k
views
How to render a grid of dots being exactly 1x1 pixel wide using a shader?
I would like to render a grid with many dots, all equally spaced and being exactly 1 pixel wide.
What I was able to achieve so far is this:
What I would like is (simulated using image editing ...
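One way to get exactly-one-pixel dots is to skip geometry entirely and test gl_FragCoord, which is already in pixel units, in a full-screen fragment shader. A sketch, assuming GLSL 3.30 and a hypothetical uSpacing uniform for the grid pitch:

    // gl_FragCoord is in pixels, so a dot every uSpacing pixels is a modulo test;
    // the dot can never be wider than one pixel because the test is per fragment.
    const char* kGridFS = R"(
        #version 330 core
        uniform int uSpacing;   // pixels between dots (hypothetical uniform)
        out vec4 fragColor;
        void main() {
            ivec2 p = ivec2(gl_FragCoord.xy);   // exact integer pixel coordinates
            bool dot = (p.x % uSpacing == 0) && (p.y % uSpacing == 0);
            fragColor = dot ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
        }
    )";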
0
votes
0
answers
55
views
How many divisions does the GPU's texture mapper do in parallel?
Perspective-correct texture mapping requires one division per pixel. Before the advent of GPUs this was a problem because this was quite heavy to do on the CPU (especially back in the days of non-SSE ...
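For context on where that division comes from: u and v do not vary linearly in screen space, but u/w and 1/w do, so a rasterizer steps those linearly across a span and spends one reciprocal per pixel to recover u. A schematic inner loop:

    // Per-pixel perspective correction: u/w and 1/w interpolate linearly in
    // screen space, so each pixel needs one division to recover u.
    struct Gradients { float u_over_w, one_over_w, du_over_w, d_one_over_w; };

    void drawSpan(Gradients g, int count, float* out_u) {
        for (int i = 0; i < count; ++i) {
            out_u[i] = g.u_over_w / g.one_over_w;   // the per-pixel division
            g.u_over_w   += g.du_over_w;            // linear steps in screen x
            g.one_over_w += g.d_one_over_w;
        }
    }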
1
vote
1
answer
759
views
GPU Instanced Transparent Mesh Not Rendering
I'm trying to render a bunch of clouds via gpu instancing. This works perfectly fine with the default HDRP Lit Shader, and I get the following result:
However, as soon as I change the surface type ...
1
vote
1
answer
417
views
Computations in GPU Unity
I've made a fluid simulation using particles in Unity, but now it is painfully slow because all computations are done using the CPU. In order to make it faster, I need to do computations on the GPU, ...
1
vote
1
answer
359
views
How to share constant variables between Compute Shaders?
So, I have two compute shaders A and B (using Unity and HLSL). Right now I send my mouse coordinates to both of them before every dispatch.
So, from my understanding, you can actually ...
7
votes
2
answers
866
views
Using GPU on Silverlight 5 for a Fast Fourier Transform
I've got an audio library for Silverlight that is in need of some acceleration on slower machines. Specifically, this library makes extensive and repeated use of the FFT transform as a part of its ...
0
votes
0
answers
53
views
Balance load between CPU and GPU [duplicate]
I am making a game with Unreal Engine 5, but it takes more GPU power and the CPU is used much less.
I want to optimize it to use both CPU and GPU so it can be playable on low-end PCs or laptops. Is ...
0
votes
0
answers
1k
views
AsyncGPUReadback.RequestIntoNativeArray - owner has been invalidated
I have the following C# code in Unity version 2022.2.0a12:
...
0
votes
1
answer
1k
views
Use of CPU vs. GPU on mobile devices
I was always told that if a task can be parallelized, I should put it on the GPU for better performance. Although this is definitely true for desktop GPUs, I was wondering whether mobile GPUs were so ...
0
votes
0
answers
238
views
Is it possible to use hardware acceleration in OpenCL?
I built a small game engine using OpenCL and SDL2, but it's not running very fast compared to Vulkan and OpenGL. I wrote rasterization code, but when I did some research I found that Vulkan and OpenGL use hardware ...
5
votes
1
answer
884
views
Simple square vertex lifting shader
I am trying to rebuild the fur effect in Viva Pinata.
Here each square becomes a pattern of fur
I imagine the process to be like this...
You lift one end of the triangles.
Now I need to achieve "...
0
votes
0
answers
235
views
How do lots of ComputeBuffers at once affect performance in Unity?
How will lots of ComputeBuffer instances affect performance? And why?
I know that I should call ComputeBuffer.Release() on every ...
0
votes
1
answer
434
views
Graphic render speed in libgdx using many sprite sheets
I am currently working on a customizable Player-character for my top down view 2D pixel game. I am using libgdx and aiming for android devices as well as desktop applications.
I am wondering about an ...
1
vote
1
answer
1k
views
Map() fails when reading back GPU Texture
I need to read back a GPU texture (stored in the GPU as D3D11_USAGE_DEFAULT). I am doing this via creating a staging ID3D11Texture. The whole application is running ...
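For reference, the usual staging round-trip looks like the sketch below (device, context, and gpuTexture are assumed to exist). The staging description must match the source's size, format, and mip count, and multisampled sources need ResolveSubresource first; mismatches there are common reasons Map() fails.

    // Copy a DEFAULT-usage texture into a STAGING one, then Map it for reading.
    D3D11_TEXTURE2D_DESC desc;
    gpuTexture->GetDesc(&desc);
    desc.Usage          = D3D11_USAGE_STAGING;
    desc.BindFlags      = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags      = 0;

    ID3D11Texture2D* staging = nullptr;
    HRESULT hr = device->CreateTexture2D(&desc, nullptr, &staging);
    if (SUCCEEDED(hr)) {
        context->CopyResource(staging, gpuTexture);
        D3D11_MAPPED_SUBRESOURCE mapped;
        hr = context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
        if (SUCCEEDED(hr)) {
            // mapped.pData rows are mapped.RowPitch bytes apart, not tightly packed.
            context->Unmap(staging, 0);
        }
        staging->Release();
    }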
5
votes
1
answer
3k
views
Which Terrain LOD algorithm should I use for super large terrain?
My game needs a terrain, the requirements are:
Free zoom in and out, like Google Earth: max resolution when zooming in ~100 meters; max range when zooming out ~2000 km (a whole-country scale).
...
0
votes
0
answers
40
views
Error from not clearing an FBO's texture in battery economy mode
When rendering into an FBO's texture, I'm not using glClear() but overwriting each fragment; GL_BLEND is enabled.
This works just fine, but I just realised that when my laptop switches to economy mode, ...
13
votes
1
answer
2k
views
Information about rendering, batches, the graphics card, performance etc. + XNA?
I know the title is a bit vague but it's hard to describe what I'm really looking for, but here goes.
When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but ...
3
votes
4
answers
3k
views
How can I view an R32G32B32 texture?
I have a texture with R32G32B32 floats. I create this texture in-program on D3D11, using DXGI_FORMAT_R32G32B32_FLOAT. Now I ...
0
votes
0
answers
2k
views
SDL2 for hardware accelerated graphics?
I am attempting to make a 3D game using SDL2, just to learn and have a bit of fun. I was wondering if there is any way to get SDL2 to do calculations on the GPU. I have read that SDL2 textures use the GPU for ...
0
votes
1
answer
2k
views
Unity Build GPU Performance
I have been banging my head against the wall with this for a few days now with no improvement.
The problem is that after building, my project keeps using over 30% of the GPU. Even in the editor it takes 20% ...
0
votes
1
answer
286
views
Why was 24-bit color support introduced twice in GPUs?
I was doing research, trying to answer the question "Which was the first GPU to support 24-bit color". I know all color since 1992 is 24-bit, even in games like Doom. I mean 16 million ...
0
votes
1
answer
564
views
How can I make a custom mesh class in Unity?
I'm doing something in Unity where I need to specify the position and orientation of vertices with two Vector4s, and they're not just position and normal vectors. I'...
0
votes
0
answers
49
views
How can I use the GPU to crop unnecessary pixels from an image?
I would like to keep only the color and x,y coordinates of the pixels that touch a pixel of a different color than their own (basically the edges) and remove the rest, so that the GPU can ...
0
votes
0
answers
68
views
How is data written from two different gpu cores to the same memory?
Does each core’s data get written to the shared memory one at a time or both at the same time? For instance, when two cores that are next to each other need to write to the same memory spot does the ...
0
votes
1
answer
240
views
What is the difference between these two shaders in terms of performance?
I have implemented a two pass Gaussian blur shader in GLSL like this:
...
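The asker's shaders are elided above; for comparison, a generic separable blur samples 2N+1 texels per pass across two passes, versus (2N+1)^2 for a single two-dimensional pass, which is usually where the performance gap between two such shaders lies. A horizontal-pass sketch (the vertical pass swaps the offset axis; uTexelSize and vUv are assumed names):

    // Separable Gaussian blur, horizontal pass; weights are a common 9-tap kernel.
    const char* kBlurHorizontalFS = R"(
        #version 330 core
        uniform sampler2D uImage;
        uniform vec2 uTexelSize;      // 1.0 / texture resolution
        in vec2 vUv;
        out vec4 fragColor;
        const float kWeights[5] = float[](0.227027, 0.1945946, 0.1216216,
                                          0.054054, 0.016216);
        void main() {
            vec4 sum = texture(uImage, vUv) * kWeights[0];
            for (int i = 1; i < 5; ++i) {
                vec2 off = vec2(uTexelSize.x * float(i), 0.0);
                sum += texture(uImage, vUv + off) * kWeights[i];
                sum += texture(uImage, vUv - off) * kWeights[i];
            }
            fragColor = sum;
        }
    )";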
0
votes
0
answers
2k
views
How many triangles should I expect to be able to render in a second?
Assuming that I'm doing everything right and all I'm doing when I render my scene is going through all the vertex arrays that I want to render, how many triangles should I expect to be able to render ...
1
vote
1
answer
352
views
Understanding buffer swapping in more detail
This is more a theoretical question. This is what I understand regarding buffer swapping and vsync:
I - When vsync is off, whenever the developer swaps the front/back buffers, the buffer that the GPU ...
0
votes
0
answers
994
views
gpu_rotate a texture2d or rendertexture
How does one GPU-rotate a Texture2D or RenderTexture using Unity C#? Or is that possible?
I think it might be something like this...
I'm also trying to understand how gpu_scale seems to work here? ...
3
votes
2
answers
2k
views
Why isn't more culling being done on the GPU?
I'm interested in game development and 3D graphics; however, I'm not very experienced, so I apologize in advance if this comes across as ignorant or overly general.
The impression I get is that quite ...
17
votes
4
answers
55k
views
Why would you use software rendering over hardware rendering, today?
As opposed to CPU or software rendering I assume?
Wouldn't all current rendering generally be GPU-based, seeing as you would be using OpenGL or DirectX?
Could someone give me some info here? Can't ...
4
votes
3
answers
4k
views
Should calculations be done on the CPU or GPU?
I'm currently learning OpenGL and it's become obvious that I can do most calculations on the CPU or the GPU. For example, I can do something like lightColor * objectColor on the CPU, then send it to ...
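A useful rule of thumb: values that are constant for a whole draw call belong on the CPU, computed once and uploaded as a uniform; values that vary per vertex or per fragment belong on the GPU. A sketch of the per-draw case, assuming a linked GL program with a hypothetical uLitColor uniform:

    // Constant for the whole draw: multiply once on the CPU, upload the result.
    float lightColor[3]  = {1.0f, 0.9f, 0.8f};
    float objectColor[3] = {0.2f, 0.5f, 0.3f};
    float lit[3] = { lightColor[0] * objectColor[0],
                     lightColor[1] * objectColor[1],
                     lightColor[2] * objectColor[2] };
    glUniform3fv(glGetUniformLocation(program, "uLitColor"), 1, lit);
    // Anything varying per vertex/fragment (positions, normals) stays in the shader.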
0
votes
0
answers
219
views
Why won't Java use the GPU?
I have a Java game which runs at 3200 frames per second on my PC. On my laptop it runs at 100 frames per second, despite the fact that they have very similar hardware.
I think the issue might be ...
1
vote
1
answer
600
views
Reading from a texture2d resource in DirectX 11
Hi, I am trying to read data from a resource, which I used to do without any problems, but suddenly it is not working.
First I made an immutable resource that has data in it, which is
...
0
votes
2
answers
258
views
Is it possible to create accelerated 3D graphics on Windows using one's own API?
Say I want to come up with a way to replace what OpenGL and DirectX specifications do: communicate with GPU to get some functions done that help hardware-acceleration and rapid drawing of screen data. ...
1
vote
0
answers
337
views
How to render decoded video that is on GPU memory without copying to CPU
I'm reading this example of ffmpeg hardware decoding: https://github.com/FFmpeg/FFmpeg/blob/release/4.2/doc/examples/hw_decode.c
At line 109 it does this:
...
0
votes
1
answer
201
views
Is it possible to achieve the same performance of CUDA on OpenCL?
I am planning on porting some of my CPU code to the GPU. I want my code to run on all GPUs, so OpenCL seems to be the right choice. Will I be able to achieve the same performance as CUDA in OpenCL? ...
1
vote
1
answer
3k
views
Windows 10 GPU Engine Performance Counters - Phys / Eng Meaning
For performance tracing of intermittent degradation in performance, I wanted to use the Performance Counters available in Windows 10 1809 under GPU Engine -> Utilization Percentage. This particular ...
7
votes
1
answer
5k
views
How to profile CPU and GPU performance if I have a monster PC?
I'm going to upgrade my PC soon. I'm worried that I will no longer spot performance losses in my game because of the better specs.
I can check memory usage easily, but how do I check and debug CPU ...