path: root/src/backend/storage/ipc
2025-01-20  Fix latch event policy that hid socket events.  (Thomas Munro)

If a WaitEventSetWait() caller asks for multiple events, an already set latch would previously prevent other events from being reported at the same time. Now, we'll also poll the kernel for other events that would fit in the caller's output buffer with a zero wait time. This policy change doesn't affect callers that ask for only one event.

The main caller affected is the postmaster. If its latch is set extremely frequently by backends launching workers and workers exiting, we don't want it to handle only those jobs and ignore incoming client connections.

Back-patch to 16 where the postmaster began using the API. The fast-return policy changed here is older than that, but doesn't cause any known problems in earlier releases.

Reported-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/Z1n5UpAiGDmFcMmd%40nathan
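A minimal sketch of the calling pattern this affects, assuming a wait event set "wes" that already contains a latch and a listen socket; the buffer size and the accept handling are illustrative, not taken from the commit:

    #define MAX_EVENTS 8

    WaitEvent   events[MAX_EVENTS];
    int         nevents;

    /* Wait indefinitely; report up to MAX_EVENTS ready events per call. */
    nevents = WaitEventSetWait(wes, -1, events, MAX_EVENTS,
                               WAIT_EVENT_CLIENT_READ);

    for (int i = 0; i < nevents; i++)
    {
        if (events[i].events & WL_LATCH_SET)
            ResetLatch(MyLatch);    /* latch work, e.g. reaping workers */
        if (events[i].events & WL_SOCKET_ACCEPT)
            ;                       /* an incoming connection is reported too */
    }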
2025-01-01  Update copyright for 2025  (Bruce Momjian)
Backpatch-through: 13
2024-12-28  Replace PGPROC.isBackgroundWorker with isRegularBackend.  (Tom Lane)
Commit 34486b609 effectively redefined isBackgroundWorker as meaning "not a regular backend", whereas before it had the narrower meaning of AmBackgroundWorkerProcess(). For clarity, rename the field to isRegularBackend and invert its sense. Discussion: https://postgr.es/m/1808397.1735156190@sss.pgh.pa.us
2024-12-28  Exclude parallel workers from connection privilege/limit checks.  (Tom Lane)

Cause parallel workers to not check datallowconn, rolcanlogin, and ACL_CONNECT privileges. The leader already checked these things (except for rolcanlogin which might have been checked for a different role). Re-checking can accomplish little except to induce unexpected failures in applications that might not even be aware that their query has been parallelized. We already had the principle that parallel workers rely on their leader to pass a valid set of authorization information, so this change just extends that a bit further.

Also, modify the ReservedConnections, datconnlimit and rolconnlimit logic so that these limits are only enforced against regular backends, and only regular backends are counted while checking if the limits were already reached. Previously, background processes that had an assigned database or role were subject to these limits (with rather random exclusions for autovac workers and walsenders), and the set of existing processes that counted against each limit was quite haphazard as well. The point of these limits, AFAICS, is to ensure the availability of PGPROC slots for regular backends. Since all other types of processes have their own separate pools of PGPROC slots, it makes no sense either to enforce these limits against them or to count them while enforcing the limit.

While edge-case failures of these sorts have been possible for a long time, the problem got a good deal worse with commit 5a2fed911 (CVE-2024-10978), which caused parallel workers to make some of these checks using the leader's current role where before we had used its AuthenticatedUserId, thus allowing parallel queries to fail after SET ROLE. The previous behavior was fairly accidental and I have no desire to return to it.

This patch includes reverting 73c9f91a1, which was an emergency hack to suppress these same checks in some cases. It wasn't complete, as shown by a recent bug report from Laurenz Albe. We can also revert fd4d93d26 and 492217301, which hacked around the same problems in one regression test.

In passing, remove the special case for autovac workers in CheckMyDatabase; it seems cleaner to have AutoVacWorkerMain pass the INIT_PG_OVERRIDE_ALLOW_CONNS flag, now that that does what's needed.

Like 5a2fed911, back-patch to supported branches (which sadly no longer includes v12).

Discussion: https://postgr.es/m/1808397.1735156190@sss.pgh.pa.us
2024-12-09  Remove remnants of "snapshot too old"  (Heikki Linnakangas)

Remove the 'whenTaken' and 'lsn' fields from SnapshotData. After the removal of the "snapshot too old" feature, they were never set to a non-zero value.

This largely reverts commit 3e2f3c2e423, which added the OldestActiveSnapshot tracking, and the init_toast_snapshot() function. That was only required for setting the 'whenTaken' and 'lsn' fields. SnapshotToast is now a constant again, like SnapshotSelf and SnapshotAny. I kept a thin get_toast_snapshot() wrapper around SnapshotToast though, to check that you have a registered or active snapshot. That's still a useful sanity check.

Reviewed-by: Nathan Bossart, Andres Freund, Tom Lane
Discussion: https://www.postgresql.org/message-id/cd4b4f8c-e63a-41c0-95f6-6e6cd9b83f6d@iki.fi
2024-11-28  Remove useless casts to (void *)  (Peter Eisentraut)

Many of them just seem to have been copied around for no real reason. Their presence causes (small) risks of hiding actual type mismatches or silently discarding qualifiers.

Discussion: https://www.postgresql.org/message-id/flat/461ea37c-8b58-43b4-9736-52884e862820@eisentraut.org
2024-11-27  Look up backend type in pg_signal_backend() more cheaply.  (Nathan Bossart)
Commit ccd38024bc, which introduced the pg_signal_autovacuum_worker role, added a call to pgstat_get_beentry_by_proc_number() for the purpose of determining whether the process is an autovacuum worker. This function calls pgstat_read_current_status(), which can be fairly expensive and may return cached, out-of-date information. Since we just need to look up the target backend's BackendType, and we already know its ProcNumber, we can instead inspect the BackendStatusArray directly, which is much less expensive and possibly more up-to-date. There are some caveats with this approach (which are documented in the code), but it's still substantially better than before. Reported-by: Andres Freund Reviewed-by: Andres Freund Discussion: https://postgr.es/m/ujenaa2uabzfkwxwmfifawzdozh3ljr7geozlhftsuosgm7n7q%40g3utqqyyosb6
2024-11-26  Distinguish between AcquireExternalFD and epoll_create1 / kqueue failing  (Andres Freund)

The error messages in CreateWaitEventSet() made it hard to know whether the syscall or AcquireExternalFD() failed. This is particularly relevant because AcquireExternalFD() imposes a lower limit than what would cause syscalls to fail with EMFILE.

I did not change the message in libpqsrv_connect_prepare(), which is the one other use of AcquireExternalFD() in our codebase, as the error message there is already less ambiguous.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/xjjx7r4xa7beixuu4qtkdhnwdbchrrpo3gaeb3jsbinvvdiat5@cwjw55mna5of
2024-11-26  Improve InitShmemAccess() prototype  (Peter Eisentraut)
The code comment said, 'the argument should be declared "PGShmemHeader *seghdr", but we use void to avoid having to include ipc.h in shmem.h.' We can achieve the original goal with a struct forward declaration. (ipc.h was also not the correct header file.) Discussion: https://www.postgresql.org/message-id/flat/cnthxg2eekacrejyeonuhiaezc7vd7o2uowlsbenxqfkjwgvwj@qgzu6eoqrglb
2024-11-14  Assign a child slot to every postmaster child process  (Heikki Linnakangas)

Previously, only backends, autovacuum workers, and background workers had an entry in the PMChildFlags array. With this commit, all postmaster child processes, including all the aux processes, have an entry. Dead-end backends still don't get an entry, though, and other processes that don't touch shared memory will never mark their PMChildFlags entry as active.

We now maintain separate freelists for different kinds of child processes. That ensures that there are always slots available for autovacuum and background workers. Previously, pre-authentication backends could prevent autovacuum or background workers from starting up, by using up all the slots.

The code to manage the slots in the postmaster process is in a new pmchild.c source file, because postmaster.c is already so large.

Assigning pmsignal slot numbers is now pmchild.c's responsibility. This replaces the PMChildInUse array in pmsignal.c.

Some of the comments in postmaster.c still talked about the "stats process", but that was removed in commit 5891c7a8ed. Fix those while we're at it.

Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/a102f15f-eac4-4ff2-af02-f9ff209ec66f@iki.fi
2024-11-04  Revert pg_wal_replay_wait() stored procedure  (Alexander Korotkov)
This commit reverts 3c5db1d6b0, and subsequent improvements and fixes including 8036d73ae3, 867d396ccd, 3ac3ec580c, 0868d7ae70, 85b98b8d5a, 2520226c95, 014f9f34d2, e658038772, e1555645d7, 5035172e4a, 6cfebfe88b, 73da6b8d1b, and e546989a26. The reason for reverting is a set of remaining issues. Most notably, the stored procedure appears to need more effort than the utility statement to turn the backend into a "snapshot-less" state. This makes an approach to use stored procedures questionable. Catversion is bumped. Discussion: https://postgr.es/m/Zyhj2anOPRKtb0xW%40paquier.xyz
2024-10-27  Remove unused #include's from backend .c files  (Peter Eisentraut)

as determined by IWYU

These are mostly issues that are new since commit dbbca2cf299.

Discussion: https://www.postgresql.org/message-id/flat/0df1d5b1-8ca8-4f84-93be-121081bde049%40eisentraut.org
2024-10-24  Move LSN waiting declarations and definitions to a better place  (Alexander Korotkov)
3c5db1d6b implemented the pg_wal_replay_wait() stored procedure. Due to the patch development history, the implementation resided in src/backend/commands/waitlsn.c (src/include/commands/waitlsn.h for headers). 014f9f34d moved pg_wal_replay_wait() itself to src/backend/access/transam/xlogfuncs.c near to the WAL-manipulation functions. But most of the implementation stayed in place. The code in src/backend/commands/waitlsn.c has nothing to do with commands, but is related to WAL. So, this commit moves this code into src/backend/access/transam/xlogwait.c (src/include/access/xlogwait.h for headers). Reported-by: Peter Eisentraut Discussion: https://postgr.es/m/18c0fa64-0475-415e-a1bd-665d922c5201%40eisentraut.org Reviewed-by: Pavel Borisov
2024-10-21  Update outdated comment on WAL-logged locks with invalid XID  (Heikki Linnakangas)
We haven't generated those for a long time. Discussion: https://www.postgresql.org/message-id/b439edfc-c5e5-43a9-802d-4cb51ec20646@iki.fi
2024-10-08  Use a shmem_exit callback to remove backend from PMChildFlags on exit  (Heikki Linnakangas)
This seems nicer than having to duplicate the logic between InitProcess() and ProcKill() for which child processes have a PMChildFlags slot. Move the MarkPostmasterChildActive() call earlier in InitProcess(), out of the section protected by the spinlock. Reviewed-by: Andres Freund <andres@anarazel.de> Discussion: https://www.postgresql.org/message-id/a102f15f-eac4-4ff2-af02-f9ff209ec66f@iki.fi
2024-09-21  Increase the number of fast-path lock slots  (Tomas Vondra)

Replace the fixed-size array of fast-path locks with arrays, sized on startup based on max_locks_per_transaction. This allows using fast-path locking for workloads that need more locks.

The fast-path locking introduced in 9.2 allowed each backend to acquire a small number (16) of weak relation locks cheaply. If a backend needs to hold more locks, it has to insert them into the shared lock table. This is considerably more expensive, and may be subject to contention (especially on many-core systems).

The limit of 16 fast-path locks was always rather low, because we have to lock all relations - not just tables, but also indexes, views, etc. For planning we need to lock all relations that might be used in the plan, not just those that actually get used in the final plan. So even with rather simple queries and schemas, we often need significantly more than 16 locks. As partitioning gets used more widely, and the number of partitions increases, this limit is trivial to hit. Complex queries may easily use hundreds or even thousands of locks. For workloads doing a lot of I/O this is not noticeable, but for workloads accessing only data in RAM, the access to the shared lock table may be a serious issue.

This commit removes the hard-coded limit of the number of fast-path locks. Instead, the size of the fast-path arrays is calculated at startup, and can be set much higher than the original 16-lock limit. The overall fast-path locking protocol remains unchanged. The variable-sized fast-path arrays can no longer be part of PGPROC, but are allocated as a separate chunk of shared memory and then referenced from the PGPROC entries.

The fast-path slots are organized as a 16-way set associative cache. You can imagine it as a hash table of 16-slot "groups". Each relation is mapped to exactly one group using hash(relid), and the group is then processed using linear search, just like the original fast-path cache. With only 16 entries this is cheap, with good locality. Treating this as a simple hash table with open addressing would not be efficient, especially once the hash table gets almost full. The usual remedy is to grow the table, but we can't do that here easily. The access would also be more random, with worse locality.

The fast-path arrays are sized using the max_locks_per_transaction GUC. We try to have enough capacity for the number of locks specified in the GUC, using the traditional 2^n formula, with an upper limit of 1024 lock groups (i.e. 16k locks). The default value of max_locks_per_transaction is 64, which means those instances will have 64 fast-path slots.

The main purpose of the max_locks_per_transaction GUC is to size the shared lock table. It is often set to the "average" number of locks needed by backends, with some backends using significantly more locks. This should not be a major issue, however. Some backends may have to insert locks into the shared lock table, but there can't be too many of them, limiting the contention. If more fast-path capacity is needed, the only solution is to increase the GUC, even if the shared lock table already has sufficient capacity. That is not free, especially in terms of memory usage (the shared lock table entries are fairly large). It should only happen on machines with plenty of memory, though.

In the future we may consider a separate GUC for the number of fast-path slots, but let's try without one first.

Reviewed-by: Robert Haas, Jakub Wartak
Discussion: https://postgr.es/m/510b887e-c0ce-4a0c-a17a-2c6abb8d9a5c@enterprisedb.com
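A minimal sketch of the 16-way set-associative lookup described above; the names FP_SLOTS_PER_GROUP, FastPathLockGroups, and fpRelId are illustrative assumptions, not necessarily the committed identifiers:

    #define FP_SLOTS_PER_GROUP 16

    /* hash(relid) picks exactly one 16-slot group ... */
    uint32      group = murmurhash32((uint32) relid) % FastPathLockGroups;
    uint32      base = group * FP_SLOTS_PER_GROUP;

    /* ... which is then searched linearly, like the original 16-slot cache. */
    for (int i = 0; i < FP_SLOTS_PER_GROUP; i++)
    {
        if (proc->fpRelId[base + i] == relid)
            return true;        /* found an existing fast-path slot */
    }

With the default max_locks_per_transaction of 64, this sizing works out to 64/16 = 4 groups, i.e. the 64 fast-path slots mentioned above.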
2024-08-29  Rename some shared memory initialization routines  (Heikki Linnakangas)

To make them follow the usual naming convention where FoobarShmemSize() calculates the amount of shared memory needed by the Foobar subsystem, and FoobarShmemInit() performs the initialization.

I didn't rename CreateLWLocks() and InitShmemIndex(), because they are a little special. They need to be called before any of the other ShmemInit() functions, because they set up the shared memory bookkeeping itself. I also didn't rename InitProcGlobal(), because unlike other ShmemInit() functions, it's not called by individual backends.

Reviewed-by: Andreas Karlsson
Discussion: https://www.postgresql.org/message-id/c09694ff-2453-47e5-b26c-32a16cd75ce6@iki.fi
2024-08-29  Refactor lock manager initialization to make it a bit less special  (Heikki Linnakangas)
Split the shared and local initialization to separate functions, and follow the common naming conventions. With this, we no longer create the LockMethodLocalHash hash table in the postmaster process, which was always pointless. Reviewed-by: Andreas Karlsson Discussion: https://www.postgresql.org/message-id/c09694ff-2453-47e5-b26c-32a16cd75ce6@iki.fi
2024-08-02  Implement pg_wal_replay_wait() stored procedure  (Alexander Korotkov)

pg_wal_replay_wait() is to be used on standby and specifies waiting for the specific WAL location to be replayed. This option is useful when the user makes some data changes on primary and needs a guarantee to see these changes on standby.

The queue of waiters is stored in the shared memory as an LSN-ordered pairing heap, where the waiter with the nearest LSN stays on the top. During the replay of WAL, waiters whose LSNs have already been replayed are deleted from the shared memory pairing heap and woken up by setting their latches.

pg_wal_replay_wait() needs to wait without any snapshot held. Otherwise, the snapshot could prevent the replay of WAL records, implying a kind of self-deadlock. This is why it is only possible to implement pg_wal_replay_wait() as a procedure working without an active snapshot, not a function.

Catversion is bumped.

Discussion: https://postgr.es/m/eb12f9b03851bb2583adab5df9579b4b%40postgrespro.ru
Author: Kartyshov Ivan, Alexander Korotkov
Reviewed-by: Michael Paquier, Peter Eisentraut, Dilip Kumar, Amit Kapila
Reviewed-by: Alexander Lakhin, Bharath Rupireddy, Euler Taveira
Reviewed-by: Heikki Linnakangas, Kyotaro Horiguchi
2024-07-30  Remove --disable-spinlocks.  (Thomas Munro)
A later change will require atomic support, so it wouldn't make sense for a hypothetical new system not to be able to implement spinlocks. Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (concept, not the patch) Reviewed-by: Andres Freund <andres@anarazel.de> (concept, not the patch) Discussion: https://postgr.es/m/3351991.1697728588%40sss.pgh.pa.us
2024-07-29  Fix double-release of spinlock  (Heikki Linnakangas)

Commit 9d9b9d46f3 added spinlocks to protect the fields in ProcSignal flags, but in EmitProcSignalBarrier(), the spinlock was released twice. With most spinlock implementations, releasing a lock that's not held is not easy to notice, because most of the time it does nothing, but if the spinlock was concurrently acquired by another process, it could lead to more serious issues. Fortunately, with the --disable-spinlocks emulation implementation, it caused more visible failures.

In passing, fix a typo in a comment and add an assertion that the procNumber passed to SendProcSignal looks valid.

Discussion: https://www.postgresql.org/message-id/b8ce284c-18a2-4a79-afd3-1991a2e7d246@iki.fi
2024-07-29  Fix compiler warning/error about typedef redefinitions  (Heikki Linnakangas)

Per buildfarm member 'sifaka':

    procsignal.c:87:3: error: redefinition of typedef 'ProcSignalHeader' is a C11 feature [-Werror,-Wtypedef-redefinition]
2024-07-29  Move cancel key generation to after forking the backend  (Heikki Linnakangas)

Move responsibility of generating the cancel key to the backend process. The cancel key is now generated after forking, and the backend advertises it in the ProcSignal array. When a cancel request arrives, the backend handling it scans the ProcSignal array to find the target pid and cancel key. This is similar to how this previously worked in the EXEC_BACKEND case with the ShmemBackendArray, just reusing the ProcSignal array.

One notable change is that we no longer generate cancellation keys for non-backend processes. We generated them before just to prevent a malicious user from canceling them; the keys for non-backend processes were never actually given to anyone. There is now an explicit flag indicating whether a process has a valid key or not.

I wrote this originally in preparation for supporting longer cancel keys, but it's a nice cleanup on its own.

Reviewed-by: Jelte Fennema-Nio
Discussion: https://www.postgresql.org/message-id/508d0505-8b7a-4864-a681-e7e5edfe32aa@iki.fi
2024-07-26  Introduce num_os_semaphores GUC.  (Nathan Bossart)

The documentation for System V IPC parameters provides complicated formulas to determine the appropriate values for SEMMNI and SEMMNS. Furthermore, these formulas have often been wrong because folks forget to update them (e.g., when adding a new auxiliary process).

This commit introduces a new runtime-computed GUC named num_os_semaphores that reports the number of semaphores needed for the configured number of allowed connections, worker processes, etc. This new GUC allows us to simplify the formulas in the documentation, and it should help prevent future inaccuracies. Like the other runtime-computed GUCs, users can view it with "postgres -C" before starting the server, which is useful for preconfiguring the necessary operating system resources.

Reviewed-by: Tom Lane, Sami Imseih, Andres Freund, Robert Haas
Discussion: https://postgr.es/m/20240517164452.GA1914161%40nathanxps13
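As noted above, the value can be inspected before the server is started; a usage sketch, where the data directory path is only an example:

    postgres -C num_os_semaphores -D /path/to/datadir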
2024-07-13  Fix lost Windows socket EOF events.  (Thomas Munro)

Winsock only signals an FD_CLOSE event once if the other end of the socket shuts down gracefully. Because each WaitLatchOrSocket() call constructs and destroys a new event handle every time, with unlucky timing we can lose it and hang. We get away with this only if the other end disconnects non-gracefully, because FD_CLOSE is repeatedly signaled in that case.

To fix this design flaw in our Windows socket support fundamentally, we'd probably need to rearchitect it so that a single event handle exists for the lifetime of a socket, or switch to completely different multiplexing or async I/O APIs. That's going to be a bigger job and probably wouldn't be back-patchable. This brute force kludge closes the race by explicitly polling with MSG_PEEK before sleeping.

Back-patch to all supported releases. This should hopefully clear up some random build farm and CI hang failures reported over the years. It might also allow us to try using graceful shutdown in more places again (reverted in commit 29992a6) to fix instability in the transmission of FATAL error messages, but that isn't done by this commit.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Tested-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/176008.1715492071%40sss.pgh.pa.us
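A hedged sketch of the MSG_PEEK idea, assuming a socket "sock" and an illustrative "reported_events" bitmask; this is not the committed code:

    char        c;

    /* MSG_PEEK consumes no data; a return of 0 means the peer shut down gracefully. */
    if (recv(sock, &c, 1, MSG_PEEK) == 0)
        reported_events |= WL_SOCKET_READABLE;  /* report EOF now instead of sleeping */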
2024-07-09  Introduce pg_signal_autovacuum_worker.  (Nathan Bossart)
Since commit 3a9b18b309, roles with privileges of pg_signal_backend cannot signal autovacuum workers. Many users treated the ability to signal autovacuum workers as a feature instead of a bug, so we are reintroducing it via a new predefined role. Having privileges of this new role, named pg_signal_autovacuum_worker, only permits signaling autovacuum workers. It does not permit signaling other types of superuser backends. Bumps catversion. Author: Kirill Reshke Reviewed-by: Anthony Leung, Michael Paquier, Andrey Borodin Discussion: https://postgr.es/m/CALdSSPhC4GGmbnugHfB9G0%3DfAxjCSug_-rmL9oUh0LTxsyBfsg%40mail.gmail.com
2024-07-08  Remove an extra period in code comment  (Richard Guo)
Author: Junwang Zhao Discussion: https://postgr.es/m/CAEG8a3L9GgfKc+XT+NMHPY7atAOVYqjUqKEFQKhcPHFYRW=PuQ@mail.gmail.com
2024-07-03  Optimize memory access in GetRunningTransactionData()  (Alexander Korotkov)

e85662df44 made GetRunningTransactionData() calculate the oldest running transaction id within the current database. This commit optimizes that calculation by performing a cheap transaction id comparison before fetching the process database id, since the latter could cause extra cache misses.

Reported-by: Noah Misch
Discussion: https://postgr.es/m/20240630231816.bf.nmisch%40google.com
2024-07-03  Fix typo in GetRunningTransactionData()  (Alexander Korotkov)
e85662df44 made GetRunningTransactionData() calculate the oldest running transaction id within the current database. However, because of the typo, the new code uses oldestRunningXid instead of oldestDatabaseRunningXid in comparison before updating oldestDatabaseRunningXid. This commit fixes that issue. Reported-by: Noah Misch Discussion: https://postgr.es/m/20240630231816.bf.nmisch%40google.com Backpatch-through: 17
2024-06-28  Add wait event type "InjectionPoint", a custom type like "Extension".  (Noah Misch)
Both injection points and customization of type "Extension" are new in v17, so this just changes a detail of an unreleased feature. Reported by Robert Haas. Reviewed by Michael Paquier. Discussion: https://postgr.es/m/CA+TgmobfMU5pdXP36D5iAwxV5WKE_vuDLtp_1QyH+H5jMMt21g@mail.gmail.com
2024-06-27  Fix MVCC bug with prepared xact with subxacts on standby  (Heikki Linnakangas)

We did not recover the subtransaction IDs of prepared transactions when starting a hot standby from a shutdown checkpoint. As a result, such subtransactions were considered as aborted, rather than in-progress. That would lead to hint bits being set incorrectly, and the subtransactions suddenly becoming visible to old snapshots when the prepared transaction was committed.

To fix, update pg_subtrans with prepared transactions' subxids when starting hot standby from a shutdown checkpoint. The snapshots taken from that state need to be marked as "suboverflowed", so that we also check the pg_subtrans.

Backport to all supported versions.

Discussion: https://www.postgresql.org/message-id/6b852e98-2d49-4ca1-9e95-db419a2696e0@iki.fi
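A minimal sketch of the core of the fix, assuming an illustrative prepared-transaction record "hdr" with fields xid and nsubxacts and an array "subxids"; each subxid is recorded in pg_subtrans with the prepared transaction as its parent:

    for (int i = 0; i < hdr->nsubxacts; i++)
        SubTransSetParent(subxids[i], hdr->xid);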
2024-05-16  Fix documentation about DROP DATABASE FORCE process termination rights.  (Noah Misch)
Specifically, it terminates a background worker even if the caller couldn't terminate the background worker with pg_terminate_backend(). Commit 3a9b18b3095366cd0c4305441d426d04572d88c1 neglected to update this. Back-patch to v13, which introduced DROP DATABASE FORCE. Reviewed by Amit Kapila. Reported by Kirill Reshke. Discussion: https://postgr.es/m/20240429212756.60.nmisch@google.com
2024-04-17  Remove GlobalVisTestNonRemovable[Full]Horizon, not used anymore  (Andres Freund)
GlobalVisTestNonRemovableHorizon() was only used for the implementation of snapshot_too_old, which was removed in f691f5b80a8. As using GlobalVisTestNonRemovableHorizon() is not particularly efficient, no new uses for it should be added. Therefore remove. Discussion: https://postgr.es/m/20240415185720.q4dg4dlcyvvrabz4@awork3.anarazel.de
2024-04-11  Revert: Implement pg_wal_replay_wait() stored procedure  (Alexander Korotkov)
This commit reverts 06c418e163, e37662f221, bf1e650806, 25f42429e2, ee79928441, and 74eaf66f98 per review by Heikki Linnakangas. Discussion: https://postgr.es/m/b155606b-e744-4218-bda5-29379779da1a%40iki.fi
2024-04-03  Combine freezing and pruning steps in VACUUM  (Heikki Linnakangas)

Execute both freezing and pruning of tuples in the same heap_page_prune() function, now called heap_page_prune_and_freeze(), and emit a single WAL record containing all changes. That reduces the overall amount of WAL generated.

This moves the freezing logic from vacuumlazy.c to the heap_page_prune_and_freeze() function. The main difference in the coding is that in vacuumlazy.c, we looked at the tuples after the pruning had already happened, but in heap_page_prune_and_freeze() we operate on the tuples before pruning. The heap_prepare_freeze_tuple() function is now invoked after we have determined that a tuple is not going to be pruned away.

VACUUM no longer needs to loop through the items on the page after pruning. heap_page_prune_and_freeze() does all the work. It now returns the list of dead offsets, including existing LP_DEAD items, to the caller. Similarly it's now responsible for tracking 'all_visible', 'all_frozen', and 'hastup' on the caller's behalf.

Author: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://www.postgresql.org/message-id/20240330055710.kqg6ii2cdojsxgje@liskov
2024-04-02  Move WaitLSNShmemInit() to CreateOrAttachShmemStructs()  (Alexander Korotkov)

Thanks to Andres Freund, Thomas Munro and David Rowley for investigating this issue.

Discussion: https://postgr.es/m/CAPpHfdvap5mMLikt8CUjA0osAvCJHT0qnYeR3f84EJ_Kvse0mg%40mail.gmail.com
2024-04-02  Implement pg_wal_replay_wait() stored procedure  (Alexander Korotkov)

pg_wal_replay_wait() is to be used on standby and specifies waiting for the specific WAL location to be replayed before starting the transaction. This option is useful when the user makes some data changes on primary and needs a guarantee to see these changes on standby.

The queue of waiters is stored in the shared memory array sorted by LSN. During replay of WAL, waiters whose LSNs are already replayed are deleted from the shared memory array and woken up by setting their latches.

pg_wal_replay_wait() needs to wait without any snapshot held. Otherwise, the snapshot could prevent the replay of WAL records, implying a kind of self-deadlock. This is why it is only possible to implement pg_wal_replay_wait() as a procedure working in a non-atomic context, not a function.

Catversion is bumped.

Discussion: https://postgr.es/m/eb12f9b03851bb2583adab5df9579b4b%40postgrespro.ru
Author: Kartyshov Ivan, Alexander Korotkov
Reviewed-by: Michael Paquier, Peter Eisentraut, Dilip Kumar, Amit Kapila
Reviewed-by: Alexander Lakhin, Bharath Rupireddy, Euler Taveira
2024-03-14  Fix false reports in pg_visibility  (Alexander Korotkov)

Currently, pg_visibility computes its xid horizon using GetOldestNonRemovableTransactionId(). The problem is that this horizon can sometimes go backward. That can lead to reporting false errors.

In order to fix that, this commit implements a new function GetStrictOldestNonRemovableTransactionId(). This function computes the xid horizon, which would be guaranteed to be newer or equal to any xid horizon computed before. We have to do the following to achieve this.

1. Ignore processes' xmins, because they consider connections to other databases that were ignored before.
2. Ignore KnownAssignedXids, because they are not database-aware. At the same time, the primary could compute its horizons database-aware.
3. Ignore walsender xmin, because it could go backward if some replication connections don't use replication slots.

As a result, we're using only currently running xids to compute the horizon. Surely this significantly sacrifices accuracy, but we have to do so to avoid reporting false errors.

Inspired by an earlier patch by Daniel Shelepanov and the following discussion with Robert Haas and Tom Lane.

Discussion: https://postgr.es/m/1649062270.289865713%40f403.i.mail.ru
Reviewed-by: Alexander Lakhin, Dmitry Koval
2024-03-13  Make the order of the header file includes consistent  (Peter Eisentraut)
Similar to commit 7e735035f20. Author: Richard Guo <guofenglinux@gmail.com> Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/CAMbWs4-WhpCFMbXCjtJ%2BFzmjfPrp7Hw1pk4p%2BZpU95Kh3ofZ1A%40mail.gmail.com
2024-03-04  Remove the adminpack contrib extension  (Daniel Gustafsson)

The adminpack extension was only used to support pgAdmin III, which in turn was declared EOL many years ago. Removing the extension also allows us to remove functions from core which were only used to support old versions of adminpack.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CALj2ACUmL5TraYBUBqDZBi1C+Re8_=SekqGYqYprj_W8wygQ8w@mail.gmail.com
2024-03-04  Remove unused #include's from backend .c files  (Peter Eisentraut)

as determined by include-what-you-use (IWYU)

While IWYU also suggests to *add* a bunch of #include's (which is its main purpose), this patch does not do that. In some cases, a more specific #include replaces another less specific one.

Some manual adjustments of the automatic result:

- IWYU currently doesn't know about includes that provide global variable declarations (like -Wmissing-variable-declarations), so those includes are being kept manually.

- All includes for port(ability) headers are being kept for now, to play it safe.

- No changes of catalog/pg_foo.h to catalog/pg_foo_d.h, to keep the patch from exploding in size.

Note that this patch touches just *.c files, so nothing declared in header files changes in hidden ways.

As a small example, in src/backend/access/transam/rmgr.c, some IWYU pragma annotations are added to handle a special case there.

Discussion: https://www.postgresql.org/message-id/flat/af837490-6b2f-46df-ba05-37ea6a6653fc%40eisentraut.org
2024-03-04  Use MyBackendType in more places to check what process this is  (Heikki Linnakangas)
Remove IsBackgroundWorker, IsAutoVacuumLauncherProcess(), IsAutoVacuumWorkerProcess(), and IsLogicalSlotSyncWorker() in favor of new Am*Process() macros that use MyBackendType. For consistency with the existing Am*Process() macros. Reviewed-by: Andres Freund Discussion: https://www.postgresql.org/message-id/f3ecd4cb-85ee-4e54-8278-5fabfb3a4ed0@iki.fi
2024-03-03  Replace BackendIds with 0-based ProcNumbers  (Heikki Linnakangas)

Now that BackendId was just another index into the proc array, it was redundant with the 0-based proc numbers used in other places. Replace all usage of backend IDs with proc numbers.

The only place where the term "backend id" remains is in a few pgstat functions that expose backend IDs at the SQL level. Those IDs are now in fact 0-based ProcNumbers too, but the documentation still calls them "backend ids". That term still seems appropriate to describe what the numbers are, so I let it be.

One user-visible effect is that pg_temp_0 is now a valid temp schema name, for the backend with ProcNumber 0.

Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/8171f1aa-496f-46a6-afc3-c46fe7a9b407@iki.fi
2024-03-03  Redefine backend ID to be an index into the proc array  (Heikki Linnakangas)

Previously, backend ID was an index into the ProcState array, in the shared cache invalidation manager (sinvaladt.c). The entry in the ProcState array was reserved at backend startup by scanning the array for a free entry, and that was also when the backend got its backend ID. Things become slightly simpler if we redefine backend ID to be the index into the PGPROC array, and directly use it also as an index to the ProcState array. This uses a little more memory, as we reserve a few extra slots in the ProcState array for aux processes that don't need them, but the simplicity is worth it.

Aux processes now also have a backend ID. This simplifies the reservation of BackendStatusArray and ProcSignal slots.

You can now convert a backend ID into an index into the PGPROC array simply by subtracting 1. We still use 0-based "pgprocnos" in various places, for indexes into the PGPROC array, but the only difference now is that backend IDs start at 1 while pgprocnos start at 0. (The next commit will get rid of the term "backend ID" altogether and make everything 0-based.)

There is still a 'backendId' field in PGPROC, now part of 'vxid' which encapsulates the backend ID and local transaction ID together. It's needed for prepared xacts. For regular backends, the backendId is always equal to pgprocno + 1, but for prepared xact PGPROC entries, it's the ID of the original backend that processed the transaction.

Reviewed-by: Andres Freund, Reid Thompson
Discussion: https://www.postgresql.org/message-id/8171f1aa-496f-46a6-afc3-c46fe7a9b407@iki.fi
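A hedged illustration of the arithmetic stated above (variable names are illustrative):

    /* backend IDs start at 1; pgprocnos start at 0 */
    int         pgprocno = backendId - 1;
    PGPROC     *proc = &ProcGlobal->allProcs[pgprocno];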
2024-02-26  Add helper functions for dshash tables with string keys.  (Nathan Bossart)

Presently, string keys are not well-supported for dshash tables. The dshash code always copies key_size bytes into new entries' keys, and dshash.h only provides compare and hash functions that forward to memcmp() and tag_hash(), both of which do not stop at the first NUL. This means that callers must pad string keys so that the data beyond the first NUL does not adversely affect the results of copying, comparing, and hashing the keys.

To better support string keys in dshash tables, this commit does a couple things:

* A new copy_function field is added to the dshash_parameters struct. This function pointer specifies how the key should be copied into new table entries. For example, we only want to copy up to the first NUL byte for string keys. A dshash_memcpy() helper function is provided and used for all existing in-tree dshash tables without string keys.

* A set of helper functions for string keys are provided. These helper functions forward to strcmp(), strcpy(), and string_hash(), all of which ignore data beyond the first NUL.

This commit also adjusts the DSM registry's dshash table to use the new helper functions for string keys.

Reviewed-by: Andy Fan
Discussion: https://postgr.es/m/20240119215941.GA1322079%40nathanxps13
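A hedged sketch of declaring a string-keyed table with the new helpers; the entry layout, key length, tranche id, helper names, and the exact field order of dshash_parameters are assumptions here, not verified against the tree:

    typedef struct MyEntry
    {
        char        name[64];       /* NUL-terminated key, placed first */
        int         value;
    } MyEntry;

    static const dshash_parameters my_params = {
        offsetof(MyEntry, value),   /* key_size: the string buffer */
        sizeof(MyEntry),            /* entry_size */
        dshash_strcmp,              /* compares only up to the first NUL */
        dshash_strhash,             /* hashes only up to the first NUL */
        dshash_strcpy,              /* copies only up to the first NUL */
        LWTRANCHE_MY_DSHASH         /* illustrative tranche id */
    };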
2024-02-21  Remove superfluous 'pgprocno' field from PGPROC  (Heikki Linnakangas)
It was always just the index of the PGPROC entry from the beginning of the proc array. Introduce a macro to compute it from the pointer instead. Reviewed-by: Andres Freund Discussion: https://www.postgresql.org/message-id/8171f1aa-496f-46a6-afc3-c46fe7a9b407@iki.fi
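A hedged sketch of such a macro; the name is recalled from memory and may differ from the committed one:

    /* index of a PGPROC entry = offset of its pointer from the start of the array */
    #define GetNumberFromPGProc(proc)   ((proc) - &ProcGlobal->allProcs[0])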
2024-02-14  Centralize logic for restoring errno in signal handlers.  (Nathan Bossart)
Presently, we rely on each individual signal handler to save the initial value of errno and then restore it before returning if needed. This is easily forgotten and, if missed, often goes undetected for a long time. In commit 3b00fdba9f, we introduced a wrapper signal handler function that checks whether MyProcPid matches getpid(). This commit moves the aforementioned errno restoration code from the individual signal handlers to the new wrapper handler so that we no longer need to worry about missing it. Reviewed-by: Andres Freund, Noah Misch Discussion: https://postgr.es/m/20231121212008.GA3742740%40nathanxps13
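A hedged sketch of what the centralization looks like; the wrapper and handler-table names are illustrative, not the committed ones:

    static void
    wrapper_handler(SIGNAL_ARGS)
    {
        int         save_errno = errno;     /* saved once, for every handler */

        Assert(MyProcPid == (int) getpid());    /* the check added by 3b00fdba9f */

        (*actual_handlers[postgres_signal_arg]) (postgres_signal_arg);

        errno = save_errno;     /* individual handlers no longer restore it themselves */
    }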
2024-02-14  Add a slot synchronization function.  (Amit Kapila)

This commit introduces a new SQL function pg_sync_replication_slots() which is used to synchronize the logical replication slots from the primary server to the physical standby so that logical replication can be resumed after a failover or planned switchover.

A new 'synced' flag is introduced in pg_replication_slots view, indicating whether the slot has been synchronized from the primary server. On a standby, synced slots cannot be dropped or consumed, and any attempt to perform logical decoding on them will result in an error.

The logical replication slots on the primary can be synchronized to the hot standby by using the 'failover' parameter of pg_create_logical_replication_slot(), or by using the 'failover' option of CREATE SUBSCRIPTION during slot creation, and then calling pg_sync_replication_slots() on standby. For the synchronization to work, it is mandatory to have a physical replication slot between the primary and the standby aka 'primary_slot_name' should be configured on the standby, and 'hot_standby_feedback' must be enabled on the standby. It is also necessary to specify a valid 'dbname' in the 'primary_conninfo'.

If a logical slot is invalidated on the primary, then that slot on the standby is also invalidated.

If a logical slot on the primary is valid but is invalidated on the standby, then that slot is dropped but will be recreated on the standby in the next pg_sync_replication_slots() call provided the slot still exists on the primary server. It is okay to recreate such slots as long as these are not consumable on standby (which is the case currently). This situation may occur due to the following reasons:

- The 'max_slot_wal_keep_size' on the standby is insufficient to retain WAL records from the restart_lsn of the slot.

- 'primary_slot_name' is temporarily reset to null and the physical slot is removed.

The slot synchronization status on the standby can be monitored using the 'synced' column of pg_replication_slots view.

A functionality to automatically synchronize slots by a background worker and allow logical walsenders to wait for the physical standby will be done in subsequent commits.

Author: Hou Zhijie, Shveta Malik, Ajin Cherian based on an earlier version by Peter Eisentraut
Reviewed-by: Masahiko Sawada, Bertrand Drouvot, Peter Smith, Dilip Kumar, Nisha Moond, Kuroda Hayato, Amit Kapila
Discussion: https://postgr.es/m/514f6f2f-6833-4539-39f1-96cd1e011f23@enterprisedb.com
2024-02-13  Fix 'mmap' DSM implementation with allocations larger than 4 GB  (Heikki Linnakangas)
Fixes bug #18341. Backpatch to all supported versions. Discussion: https://www.postgresql.org/message-id/18341-ce16599e7fd6228c@postgresql.org
2024-01-23  Fix possible NULL pointer dereference in GetNamedDSMSegment().  (Nathan Bossart)
GetNamedDSMSegment() doesn't check whether dsm_attach() returns NULL, which creates the possibility of a NULL pointer dereference soon after. To fix, emit an ERROR if dsm_attach() returns NULL. This shouldn't happen, but it would be nice to avoid a segfault if it does. In passing, tidy up the surrounding code. Reported-by: Tom Lane Reviewed-by: Michael Paquier, Bharath Rupireddy Discussion: https://postgr.es/m/3348869.1705854106%40sss.pgh.pa.us
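A hedged sketch of the added guard; the error wording and the "handle" variable are illustrative, not necessarily the committed ones:

    dsm_segment *seg = dsm_attach(handle);

    if (seg == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_INTERNAL_ERROR),
                 errmsg("could not attach to dynamic shared memory segment")));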