Via pg_proctab(), I can see that connections from our application can use up to 800 MB of RAM each in Postgres, and typically around 400 MB. (Update to clarify after a comment: these connections come from connection pools and sit IDLE most of the time, with queries in between. The 400 MB is measured while the connection is IDLE.)
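For reference, this is roughly how I read the numbers; a minimal sketch, assuming the pg_proctab extension is installed (`CREATE EXTENSION pg_proctab;`). Note that pg_proctab() exposes `/proc/<pid>/stat` fields, where `rss` is counted in OS pages, and that on Linux a backend's RSS also includes shared memory pages it has touched (e.g. shared_buffers), so it can overstate truly private memory:

```sql
-- Per-backend RSS, joined with session info.
-- Assumes a 4 kB Linux page size for the MB conversion.
SELECT a.pid,
       a.state,
       a.application_name,
       pt.rss * 4 / 1024 AS rss_mb
FROM pg_stat_activity a
JOIN pg_proctab() pt ON pt.pid = a.pid
ORDER BY pt.rss DESC;
```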
I would like to investigate what these 400 MB consist of. Why 400 MB, and not just 40 MB, for example? I already looked at the JDBC connection objects on the application side and found nothing out of the ordinary.
My (wild) guess is that the high RAM usage comes from the many partitions on our tables.
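One way to test that guess in a scratch session; a sketch, where `measurements` is a hypothetical stand-in for one of your partitioned tables. Each partition a backend touches leaves relation- and catalog-cache entries behind that live as long as the connection does, so RSS should grow noticeably after the full scan if this theory is right:

```sql
-- How many direct partitions does the table have?
-- (Direct children only; sub-partitions would need pg_partition_tree.)
SELECT count(*) AS partition_count
FROM pg_inherits
WHERE inhparent = 'measurements'::regclass;

-- Touch every partition in this session (no pruning possible here).
SELECT count(*) FROM measurements;

-- Re-check this backend's RSS (in pages) and compare with a fresh session.
SELECT rss FROM pg_proctab() WHERE pid = pg_backend_pid();
```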
Does Postgres provide a way to look into the memory of a connection's backend process? We are on AWS RDS, so ideally it would be a way that does not require server access.
Comment: pg_proctab returns OS, not database statistics. A better question would be: what do your queries do, and how long do connections remain active? Are you sure that 400 MB is per connection rather than, e.g., buffered data? If the queries don't take advantage of indexes, the database has to load entire tables into memory and scan them for matches.

Reply (OP): I am looking at the rss column of pg_proctab(), which to me seems to be a correct representation of the RAM the connection needs, as the process view that AWS offers shows the same RAM usage for the process. The connections come from a connection pool and are IDLE most of the time, so it's not data from running queries. I also don't believe this is buffered data, as the buffer cache is a separate set of memory as far as I understand it.

Comment: pg_proctab doesn't show connection statistics. It shows coarse-grained OS-level statistics. The article "Analyzing the Limits of Connection Scalability in Postgres" describes various techniques and extensions you can use to see what's actually going on, and links to "Measuring the Memory Overhead of a Postgres Connection".
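As one concrete technique in the spirit of those articles, assuming you are on PostgreSQL 14 or later: the pg_backend_memory_contexts view shows the memory contexts of your own session, and pg_log_backend_memory_contexts(pid) dumps another backend's contexts to the server log, which RDS exposes through the console, so no server access is needed. If the partition theory holds, contexts such as CacheMemoryContext and CatCacheMemoryContext should dominate:

```sql
-- Top memory contexts of the current session (PostgreSQL 14+).
SELECT name, parent, total_bytes, used_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;

-- Dump the contexts of another (e.g. idle pooled) backend to the server log.
-- Superuser-only by default; from PostgreSQL 15 it can be granted, so whether
-- this works on RDS depends on your privileges. 12345 is a placeholder pid.
SELECT pg_log_backend_memory_contexts(12345);
```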