We're running into a situation where we are unable to read bytes from a file into a heap-allocated array. Below is the (simplified) source code:
```java
FileInputStream inp = ...;
byte[] buf = new byte[1024];
inp.read(buf, 0, buf.length); // <<< this is where the OOM is reached
```

```
Caused by: java.io.IOException: Cannot allocate memory
	at java.base/java.io.FileInputStream.readBytes(Native Method)
	at java.base/java.io.FileInputStream.read(FileInputStream.java:276)
```
We run this Java process in a containerized application on Docker/AWS ECS. We create a small (1 KiB) on-heap array as a temporary staging buffer for streaming the file in; we then copy this buffer into native memory (we use the Unsafe API for latency reasons, but that is not where the OOM occurs anyway).
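For context, here is a minimal sketch of that pattern; the class name, file argument, and 1 MiB destination size are illustrative assumptions, not our production code.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class StreamToNativeSketch {
    public static void main(String[] args) throws IOException, ReflectiveOperationException {
        // Obtain sun.misc.Unsafe reflectively (still exported via jdk.unsupported).
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        byte[] buf = new byte[1024];       // small on-heap staging buffer
        long cap = 1 << 20;                // off-heap capacity (1 MiB, illustrative)
        long dest = unsafe.allocateMemory(cap);
        long written = 0;
        try (FileInputStream inp = new FileInputStream(args[0])) {
            int n;
            // Stream the file through the heap buffer, stopping at capacity.
            while ((n = inp.read(buf, 0, buf.length)) != -1 && written + n <= cap) {
                // Copy the chunk from the heap array into native memory.
                unsafe.copyMemory(buf, Unsafe.ARRAY_BYTE_BASE_OFFSET,
                                  null, dest + written, n);
                written += n;
            }
        } finally {
            unsafe.freeMemory(dest);
        }
    }
}
```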
This error occurs when we are already pretty close to our container memory limit. However, I'd expect the OOM to be triggered at array allocation (when creating buf): Java zero-initializes the array there, which commits its pages to physical memory, so that is where any OOM should be hit (the most common case I see on Stack Overflow; see this post or this post).
But instead we see the error while copying data into the already-initialized array, where there shouldn't be any additional memory allocation. Since it is thrown from readBytes (a native method that, I assumed, just performs an underlying OS memcpy), I'd think it happens during the array copy and not while creating heap objects along the way.
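To make the expected-versus-observed contrast concrete, here is a minimal sketch of the two failure channels (the file path is a placeholder): a heap allocation that cannot be satisfied fails at the new expression with java.lang.OutOfMemoryError, while an errno reported by the native read path surfaces as a checked IOException carrying the OS error string.

```java
import java.io.FileInputStream;
import java.io.IOException;

public class FailureChannels {
    public static void main(String[] args) {
        byte[] buf;
        try {
            buf = new byte[1024];            // the failure the question expected:
        } catch (OutOfMemoryError e) {       // allocation (and zero-fill) happens here
            System.err.println("failed at allocation: " + e);
            return;
        }
        try (FileInputStream inp = new FileInputStream("/some/file")) {
            inp.read(buf, 0, buf.length);    // the failure actually observed:
        } catch (IOException e) {
            // e.getMessage() is the OS error string, e.g. "Cannot allocate memory"
            System.err.println("failed inside native read: " + e.getMessage());
        }
    }
}
```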
Is there any explanation for why we see the OOM error during the memory copy and not during initialization?
The exception is raised in read()'s native part (InputStream.read0() or whatever it calls); that's what the message is referring to. And read() isn't 'an underlying OS memcpy()'. You must be seriously close to the limit. Fix that. It's been decades since I last saw this error.
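For reference, a hedged Java-language paraphrase of the control flow in OpenJDK's native readBytes (io_util.c). The real implementation is C, and the NativeIo interface below is a hypothetical stand-in for the JNI/libc calls it makes, but the shape shows why read() is more than a memcpy: data is staged through a separate native buffer before being copied into the Java array.

```java
import java.io.IOException;

final class ReadBytesShape {
    private static final int BUF_SIZE = 8192; // size of the C stack buffer in io_util.c

    static int readBytes(NativeIo io, byte[] bytes, int off, int len) throws IOException {
        boolean malloced = len > BUF_SIZE;   // large reads allocate a native buffer;
        long nativeBuf = malloced            // a failed malloc raises OutOfMemoryError
                ? io.malloc(len)
                : io.stackBuffer();          // small reads reuse a fixed stack buffer
        try {
            int nread = io.read(nativeBuf, len); // the read(2) syscall; an errno here
                                                 // becomes IOException(strerror(errno)),
                                                 // e.g. "Cannot allocate memory" for ENOMEM
            if (nread > 0) {
                io.copyToJavaArray(nativeBuf, bytes, off, nread); // SetByteArrayRegion
            }
            return nread;
        } finally {
            if (malloced) io.free(nativeBuf);
        }
    }

    // Hypothetical stand-in for the JNI/libc calls the C code makes.
    interface NativeIo {
        long malloc(int len);
        long stackBuffer();
        int read(long buf, int len) throws IOException;
        void copyToJavaArray(long src, byte[] dst, int off, int len);
        void free(long buf);
    }
}
```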