25 events
when what by license comment
Jan 6, 2018 at 16:34 answer added user77245 score: 1
Mar 31, 2014 at 21:50 history bounty ended jmegaffin
Mar 31, 2014 at 21:50 history notice removed jmegaffin
Mar 31, 2014 at 21:50 vote accept jmegaffin
Mar 26, 2014 at 15:32 answer added MrCranky score: 6
Mar 25, 2014 at 21:48 answer added Sean Middleditch score: 17
Mar 25, 2014 at 21:33 comment added AturSams Then read this and also this. You are basically asking why, if there are two teams writing an algorithm (memory management), where one team doesn't see, know, or care about a specific situation (they don't even get a black box) and the other has invested possibly hundreds of person-hours into that situation in detail, the second one has a better chance of writing an effective solution.
Mar 25, 2014 at 21:28 comment added AturSams But the bottom line, as you said yourself, is that no work is needed unless the game is performing poorly. Why is the OS possibly unreliable when it comes to managing your game's memory (if the game is big and demanding)? The OS designer doesn't know anything about your game. It is probably using an LRU (or similar) policy. You know (if you're smart) exactly what is going on in your game. How can someone predetermine the needs of your application (and every other program on your PC, for that matter) and write an OS that will always outperform code designed specifically for a unique case?
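(For context: a minimal sketch in C++ of the generic LRU eviction policy AturSams refers to; `Asset` and `LruAssetCache` are hypothetical names, not from the question. The comment's point is that a game-specific resource manager can replace this recency-only heuristic with actual knowledge of what the game will need next.)

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical asset handle; in a real engine this would own texture/mesh data.
struct Asset {};

// Minimal LRU cache: the OS page cache evicts with a similar policy,
// but with no knowledge of which assets the game will need next.
class LruAssetCache {
public:
    explicit LruAssetCache(std::size_t capacity) : capacity_(capacity) {}

    // Returns the asset if cached, marking it most-recently-used.
    Asset* get(const std::string& key) {
        auto it = index_.find(key);
        if (it == index_.end()) return nullptr;
        // Move the entry to the front (most recently used).
        entries_.splice(entries_.begin(), entries_, it->second);
        return &it->second->second;
    }

    // Inserts an asset, evicting the least-recently-used entry if full.
    void put(const std::string& key, Asset asset) {
        if (auto it = index_.find(key); it != index_.end()) {
            it->second->second = std::move(asset);
            entries_.splice(entries_.begin(), entries_, it->second);
            return;
        }
        if (entries_.size() == capacity_) {
            index_.erase(entries_.back().first);  // evict the LRU entry
            entries_.pop_back();
        }
        entries_.emplace_front(key, std::move(asset));
        index_[key] = entries_.begin();
    }

private:
    using Entry = std::pair<std::string, Asset>;
    std::size_t capacity_;
    std::list<Entry> entries_;  // most-recently-used at the front
    std::unordered_map<std::string, std::list<Entry>::iterator> index_;
};
```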
Mar 25, 2014 at 21:22 comment added jmegaffin I'm asking why the OS/drivers are not good enough. I don't believe this is an opinion/discussion-based question. Also, about the article: it's written about a 2002 console game - not exactly relevant to 2014 PC development.
Mar 25, 2014 at 21:22 comment added AturSams The main issue with games that lack a specifically tailored memory management system is performance, imho: they may work great 98% of the time and be terrible 2% of the time. That is not good for gaming. Nobody likes sudden spikes in memory activity. Read this for more info.
Mar 25, 2014 at 21:12 comment added AturSams The justification is a result of a situation where the performance is not satisfactory. In such a case you analyze, deduce that the memory usage could be optimized for several reasons, and then make the necessary changes. Next time, you implement the code with those changes from the start, maybe clean them up a little; but you can't justify optimizing a game before it exists. Also, if we are in the business of optimizing, unless you own a game company, wouldn't it be more effective to allocate your mental resources to making a game (not an engine)? I think this is a discussion, not a question.
Mar 25, 2014 at 21:07 comment added jmegaffin I'm talking about large assets in general: textures, geometry, and sounds. I'm not interested in the timing/prefetching logic here, only the justification for a paging resource manager. Also, I don't see how this is a "which tech to use" question.
Mar 25, 2014 at 21:05 comment added AturSams But you aren't including any information about the data structure used to contain the map data. This is dangerously close to "which tech to use".
Mar 25, 2014 at 21:03 comment added jmegaffin No, I'm trying to gather information to make architecture design decisions before the fact. I know you might scream "premature optimization", but I think the resource system is important enough to require a strong design before it is written, especially in my case, since it will make heavy use of my scripting infrastructure.
Mar 25, 2014 at 21:00 comment added AturSams Is your game suffering from performance issues, and is the profiling tool showing that they are caused by memory access? The issue with games that contain open levels is that, in technical terms, this translates to a lot of data. The more control you have over that data, the better the performance will be. Why not implement the most efficient system to manage data? Game development is very competitive, and when you have money for experts, you aren't going to leave any stone unturned while looking to squeeze out more detail and higher fps.
Mar 25, 2014 at 20:59 comment added jmegaffin I'm wondering why game engines implement complex asset caching systems when operating systems/drivers already have virtual memory systems available. It is relevant to me because I'm writing an open-world game where this information is very important.
Mar 25, 2014 at 20:55 comment added AturSams Are you sure this is relevant to your game project? I am not sure I see the technical challenge, only the solution. If you are not facing a technical challenge, it is hard to tailor a solution or understand one.
Mar 25, 2014 at 20:48 history edited jmegaffin CC BY-SA 3.0 added 465 characters in body
Mar 25, 2014 at 20:46 comment added jmegaffin From reading the answer here, I get the idea that OpenGL drivers automatically keep a copy of all buffers/textures in system memory as well. However, I can see why it might be useful to specify these transfers manually to take advantage of asynchronous uploads.
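(For context: a rough sketch of the manual asynchronous upload jmegaffin describes, assuming an OpenGL context with pixel buffer object support and GLEW for loading; the function name and parameters are illustrative, not from the question.)

```cpp
#include <GL/glew.h>  // assumption: GLEW provides the GL 2.1+ entry points

// Sketch: stream pixel data to a texture through a pixel buffer object (PBO)
// so glTexSubImage2D does not block on a synchronous CPU->GPU copy.
void uploadTextureAsync(GLuint texture, GLuint pbo,
                        const void* pixels, GLsizeiptr sizeBytes,
                        GLsizei width, GLsizei height) {
    // Copy the pixel data into the PBO; the driver can schedule the
    // actual transfer into GPU memory asynchronously.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, sizeBytes, pixels, GL_STREAM_DRAW);

    // With a PBO bound, the data argument is a byte offset into the buffer
    // (0 here), so this call can return before the transfer completes.
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```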
Mar 25, 2014 at 20:25 comment added dadoo Games You are assuming that asset content is stored in system memory. Much of your asset data will be stored in GPU memory, which is much more limited.
Mar 25, 2014 at 19:59 history bounty started jmegaffin
Mar 25, 2014 at 19:59 history notice added jmegaffin Canonical answer required
Mar 20, 2014 at 10:58 history tweeted twitter.com/#!/StackGameDev/status/446601678960459776
Mar 20, 2014 at 2:15 history edited jmegaffin CC BY-SA 3.0 added 289 characters in body
Mar 20, 2014 at 2:06 history asked jmegaffin CC BY-SA 3.0