path: root/src/bin/pg_dump
Age | Commit message | Author

9 days | Remove unnecessary casts in printf format arguments (%zu/%zd) | Peter Eisentraut

Many of these are probably left over from before use of %zu/%zd was
portable.

Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/07fa29f9-42d7-4aac-8834-197918cbbab6%40eisentraut.org

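By way of illustration only (not code from the commit), the pattern being
cleaned up looks like this; the cast restates a type the argument already
has:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        size_t len = strlen("pg_dump");

        printf("len = %zu\n", (size_t) len);    /* unnecessary: len is already size_t */
        printf("len = %zu\n", len);             /* equivalent, after the cleanup */
        return 0;
    }
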
9 days | Use palloc_object() and palloc_array() in more areas of the tree | Michael Paquier

The idea is to further encourage the use of these new routines across the
tree, as they offer stronger type-safety guarantees than palloc(). The
following paths are included in this batch, treating all the areas
proposed by the author for the most trivial changes, except src/backend
(by far the largest batch):

src/bin/
src/common/
src/fe_utils/
src/include/
src/pl/
src/test/
src/tutorial/

Similar work has been done in 31d3847a37be. The code compiles the same
before and after this commit, with the following exceptions due to changes
in line numbers because some of the new allocation formulas are shorter:

blkreftable.c
pgfnames.c
pl_exec.c

Author: David Geier <geidav.pg@gmail.com>
Discussion: https://postgr.es/m/ad0748d4-3080-436e-b0bc-ac8f86a3466a@gmail.com

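The type-safety argument can be sketched outside the tree with
malloc-based stand-ins; the names below mirror the real palloc_object()
and palloc_array() macros (which wrap palloc()), but this is an
illustrative mock-up, not PostgreSQL code:

    #include <stdlib.h>

    /* Stand-ins for PostgreSQL's palloc_object()/palloc_array(): the
     * sizeof is derived from the requested type, so the result pointer
     * and the allocation size cannot drift apart. */
    #define alloc_object(type)        ((type *) malloc(sizeof(type)))
    #define alloc_array(type, count)  ((type *) malloc(sizeof(type) * (count)))

    typedef struct TocEntry { int dumpId; } TocEntry;

    int main(void)
    {
        /* Untyped allocation: a wrong sizeof would compile silently. */
        TocEntry *a = (TocEntry *) malloc(sizeof(TocEntry));
        /* Macro form: the type appears exactly once. */
        TocEntry *b = alloc_object(TocEntry);
        TocEntry *v = alloc_array(TocEntry, 16);

        free(a); free(b); free(v);
        return 0;
    }
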
9 days | Unify error messages | Álvaro Herrera
No visible changes, just refactor how messages are constructed.

2025-12-02 | Remove useless casting to same type | Peter Eisentraut

This removes some casts where the input already has the same type as the
type specified by the cast. Their presence risks hiding actual type
mismatches in the future or silently discarding qualifiers, and removing
them improves readability. Same kind of idea as 7f798aca1d5 and
ef8fe693606. (This does not change all such instances, but only those
hand-picked by the author.)

Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://www.postgresql.org/message-id/flat/aSQy2JawavlVlEB0%40ip-10-97-1-34.eu-west-3.compute.internal

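A minimal made-up example of the pattern (illustrative only):

    #include <stddef.h>

    size_t
    copy_len(size_t nbytes)
    {
        size_t before = (size_t) nbytes;    /* useless cast: nbytes is already size_t */
        size_t after = nbytes;              /* equivalent and clearer */

        return before + after;
    }
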
2025-12-02 | Update comment related to C99 | Peter Eisentraut

One could do more work here to eliminate the Windows difference described
in the comment, but that can be a separate project. The purpose of this
change is to update comments that might confusingly indicate that C99 is
not required.

Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/170308e6-a7a3-4484-87b2-f960bb564afa%40eisentraut.org

2025-11-25 | pg_dump tests: don't put dumps in stdout | Álvaro Herrera

This bloats the regression log files for no reason.

Backpatch to 18; no further only because it fails to apply cleanly. (It's
just a whitespace change that conflicts, but I don't think this warrants
more effort than that.)

Discussion: https://postgr.es/m/202511251218.zfs4nu2qnh2m@alvherre.pgsql

2025-11-18 | Fix typo | Álvaro Herrera

2025-11-13 | Fix indentation issue | Michael Paquier

Issue introduced by 84fb27511dbe. I had missed this diff while adding
pgoff_t to the typedef list of pgindent, when addressing a separate
indentation issue.

Per buildfarm member koel.

2025-11-04 | Allow "SET list_guc TO NULL" to specify setting the GUC to empty. | Tom Lane

We have never had a SET syntax that allows setting a GUC_LIST_INPUT
parameter to be an empty list. A locution such as

    SET search_path = '';

doesn't mean that; it means setting the GUC to contain a single item that
is an empty string. (For search_path the net effect is much the same,
because search_path ignores invalid schema names and '' must be invalid.)
This is confusing, not least because configuration-file entries and the
set_config() function can easily produce empty-list values.

We considered making the empty-string syntax do this, but that would
foreclose ever allowing empty-string items to be valid in list GUCs.
While there isn't any obvious use-case for that today, it feels like the
kind of restriction that might hurt someday. Instead, let's accept the
forbidden-up-to-now value NULL and treat that as meaning an empty list.
(An objection to this could be "what if we someday want to allow NULL as
a GUC value?". That seems unlikely though, and even if we did allow it
for scalar GUCs, we could continue to treat it as meaning an empty list
for list GUCs.)

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrei Klychkov <andrew.a.klychkov@gmail.com>
Reviewed-by: Jim Jones <jim.jones@uni-muenster.de>
Discussion: https://postgr.es/m/CA+mfrmwsBmYsJayWjc8bJmicxc3phZcHHY=yW5aYe=P-1d_4bg@mail.gmail.com

2025-10-21 | Avoid short seeks in pg_restore. | Tom Lane

If a data block to be skipped over is less than 4kB, just read the data
instead of using fseeko(). Experimentation shows that this avoids useless
kernel calls --- possibly quite a lot of them, at least with current
glibc --- while not incurring any extra I/O, since libc will read 4kB at
a time anyway. (There may be platforms where the default buffer size is
different from 4kB, but this change seems unlikely to hurt in any case.)

We don't expect short data blocks to be common in the wake of 66ec01dc4
and related commits. But older pg_dump files may well contain very short
data blocks, and that will likely be a case to be concerned with for a
long time.

While here, do a little bit of other cleanup in _skipData. Make "buflen"
be size_t not int; it can't really exceed the range of int, but comparing
size_t and int variables is just asking for trouble. Also, when we
initially allocate a buffer for reading skipped data into, make sure it's
at least 4kB, to reduce the odds that we'll shortly have to realloc it
bigger.

Author: Dimitrios Apostolou <jimis@gmx.net>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2edb7a57-b225-3b23-a680-62ba90658fec@gmx.net

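A rough sketch of the idea, with hypothetical names and error handling
reduced to a return code; the real _skipData has more to it:

    #include <stdio.h>

    /* Blocks under 4kB are consumed with fread() rather than fseeko(),
     * avoiding a kernel seek that saves nothing: libc fills a 4kB stdio
     * buffer on the next read anyway. */
    static int
    skip_block(FILE *fp, size_t blklen, char *buf, size_t buflen)
    {
        if (blklen <= 4096)
        {
            while (blklen > 0)
            {
                size_t chunk = (blklen < buflen) ? blklen : buflen;

                if (fread(buf, 1, chunk, fp) != chunk)
                    return -1;      /* read error or unexpected EOF */
                blklen -= chunk;
            }
            return 0;
        }
        return (fseeko(fp, (off_t) blklen, SEEK_CUR) == 0) ? 0 : -1;
    }
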
2025-10-20 | pg_dump: Remove unnecessary code for security labels on extensions. | Fujii Masao

Commit d9572c4e3b4 added extension support and made pg_dump attempt to
dump security labels on extensions. However, security labels on
extensions are not actually supported, so this code was unnecessary.
This commit removes it.

Suggested-by: Jian He <jian.universality@gmail.com>
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF8=z0v=888NKKEoTHQ+Jc4EXutFi91BF0fFjgFsZT6JQ@mail.gmail.com

2025-10-19 | Don't rely on zlib's gzgetc() macro. | Tom Lane

It emerges that zlib's configuration logic is not robust enough to
guarantee that the macro will have the same ideas about struct field
layout as the library itself does, leading to corruption of zlib's state
struct followed by unintelligible failure messages. This hazard has
existed for a long time, but we'd not noticed for several reasons:

(1) We only use gzgetc() when trying to read a manually-compressed TOC
file within a directory-format dump, which is a rarely-used scenario that
we weren't even testing before 20ec99589.

(2) No corruption actually occurs unless sizeof(long) is different from
sizeof(off_t) and the platform is big-endian.

(3) Some platforms have already fixed the configuration instability, at
least sufficiently for their environments.

Despite (3), it seems foolish to assume that the problem isn't going to
be present in some environments for a long time to come. Hence, avoid
relying on this macro. We can just #undef it and fall back on the
underlying function of the same name.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2122679.1760846783@sss.pgh.pa.us
Backpatch-through: 13

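Sketch of the workaround, assuming zlib's usual arrangement in which
gzgetc is declared both as a macro and as an exported function:

    #include <zlib.h>

    /* Drop the macro, which peeks directly into zlib's state struct and
     * can disagree with the library about its layout; the call below
     * then resolves to the exported gzgetc() function instead. */
    #ifdef gzgetc
    #undef gzgetc
    #endif

    static int
    read_one_byte(gzFile gz)
    {
        return gzgetc(gz);
    }
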
2025-10-18 | Fix determination of not-null constraint "locality" for inherited columns | Álvaro Herrera

It is possible to have a non-inherited not-null constraint on an
inherited column, but we were failing to preserve such constraints during
pg_upgrade where the source is 17 or older, because of a bug in the
pg_dump query for it. Oversight in commit 14e87ffa5c54. Fix that query.
In passing, touch up a bogus nearby comment introduced by the same
commit.

In version 17, make the regression tests leave a table in this situation,
so that this scenario is tested in the cross-version upgrade tests of 18
and up.

Author: Dilip Kumar <dilipbalaut@gmail.com>
Reported-by: Andrew Bille <andrewbille@gmail.com>
Bug: #19074
Backpatch-through: 18
Discussion: https://postgr.es/m/19074-ae2548458cf0195c@postgresql.org

2025-10-18 | Fix pg_dump sorting of foreign key constraints | Álvaro Herrera

Apparently, commit 04bc2c42f765 failed to notice that DO_FK_CONSTRAINT
objects require identical handling to DO_CONSTRAINT ones, which causes
some pg_upgrade tests in debug builds to fail spuriously. Add that.

Author: Álvaro Herrera <alvherre@kurilemu.de>
Backpatch-through: 13
Discussion: https://postgr.es/m/202510181201.k6y75v2tpf5r@alvherre.pgsql

2025-10-17 | Improve TAP tests by replacing ok() with better Test::More functions | Tom Lane
Transpose the changes made by commit fabb33b35 in 002_pg_dump.pl into its recently-created clone 006_pg_dump_compress.pl.

2025-10-17 | Improve TAP tests by replacing ok() with better Test::More functions | Michael Paquier

The TAP tests whose ok() calls are changed in this commit were relying on
Perl operators rather than equivalents available in Test::More. For
example, rather than the following:

    ok($data =~ qr/expr/m, "expr matching");
    ok($data !~ qr/expr/m, "expr not matching");

the new test code uses this equivalent:

    like($data, qr/expr/m, "expr matching");
    unlike($data, qr/expr/m, "expr not matching");

A huge benefit of the new formulation is that when a failure happens it
is possible to see the values being checked, making debugging easier,
whether the test runs happen in the buildfarm, in the CI, or locally.

This change leads to more test code overall, as perltidy likes to make
the code pretty the way it is in this commit.

Author: Sadhuprasad Patro <b.sadhu@gmail.com>
Discussion: https://postgr.es/m/CAFF0-CHhwNx_Cv2uy7tKjODUbeOgPrJpW4Rpf1jqB16_1bU2sg@mail.gmail.com

2025-10-16 | Add more TAP test coverage for pg_dump. | Tom Lane

Add a test case to cover pg_dump with --compress=none. This brings the
coverage of compress_none.c up from about 64% to 90%, in particular
covering the new code added in a previous patch.

Include compression of toc.dat in manually-compressed test cases. We
would have found the bug fixed in commit a239c4a0c much sooner if we'd
done this. As far as I can tell, this doesn't reduce test coverage at
all, since there are other tests of directory format that still use an
uncompressed toc.dat.

Widen the wide row used to verify correct (de)compression. Commit
1a05c1d25 advises us (not without reason) to ensure that this test case
fully fills DEFAULT_IO_BUFFER_SIZE, so that loops within the compression
logic will iterate completely. To follow that advice with the proposed
DEFAULT_IO_BUFFER_SIZE of 128K, we need something close to this. This
does indeed increase the reported code coverage by a few lines.

While here, fix a glitch that I noticed in testing: the $glob_patterns
tests were incapable of failing, because glob() will return 'foo' as
'foo' whether there is a matching file or not. (Indeed, the stanza just
above that one relies on that.)

In my testing, this patch adds approximately as much runtime as was saved
by the previous patch, so that it's about a wash compared to the old
code. However, we get better test coverage.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us

2025-10-16 | Split 002_pg_dump.pl into two test files. | Tom Lane

Add a new test script 006_pg_dump_compress.pl, containing just the
pg_dump tests specifically concerned with compression, and remove those
tests from 002_pg_dump.pl. We can also drop some infrastructure in
002_pg_dump.pl that was used only for these tests.

The point of this is to avoid the cost of running these test cases over
and over in all the scenarios (runs) that 002_pg_dump.pl exercises. We
don't learn anything more about the behavior of the compression code that
way, and we expend significant amounts of time, since one of these test
cases is quite large and due to get larger.

The intent of this specific patch is to provide exactly the same coverage
as before, except that I went back to using --no-sync in all the test
runs moved over to 006_pg_dump_compress.pl. I think that avoiding that
had basically been cargo-culted into these test cases as a result of
modeling them on the defaults_custom_format test case; again, doing that
over and over isn't going to teach us anything new.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us

2025-10-16 | Align the data block sizes of pg_dump's various compression modes. | Tom Lane

After commit fe8192a95, compress_zstd.c tends to produce data block sizes
around 128K, and we don't really have any control over that unless we
want to overrule ZSTD_CStreamOutSize(), which seems like a bad idea. But
let's try to align the other compression modes to produce block sizes
roughly comparable to that, so that pg_restore's skip-data performance
isn't enormously different for different modes.

gzip compression can be brought in line simply by setting
DEFAULT_IO_BUFFER_SIZE = 128K, which this patch does. That increases some
unrelated buffer sizes, but none of them seem problematic for modern
platforms.

lz4's idea of appropriate block size is highly nonlinear: if we just
increase DEFAULT_IO_BUFFER_SIZE then the output blocks end up around
200K. I found that adjusting the slop factor in LZ4State_compression_init
was a not-too-ugly way of bringing that number roughly into line.

With compress = none you get data blocks the same sizes as the table
rows, which seems potentially problematic for narrow tables. Introduce a
layer of buffering to make that case match the others.

Comments in compress_io.h and 002_pg_dump.pl suggest that if we increase
DEFAULT_IO_BUFFER_SIZE then we need to increase the amount of data fed
through the tests in order to improve coverage. I've not done that here,
leaving it for a separate patch.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us

2025-10-16 | Refactor logical worker synchronization code into a separate file. | Amit Kapila

To support the upcoming addition of a sequence synchronization worker,
this patch extracts common synchronization logic shared by table sync
workers and the new sequence sync worker into a dedicated file. This
modularization improves code reuse, maintainability, and clarity in the
logical workers framework.

Author: vignesh C <vignesh21@gmail.com>
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com

2025-10-13 | Fix serious performance problems in LZ4Stream_read_internal. | Tom Lane

I was distressed to find that reading an LZ4-compressed toc.dat file was
hundreds of times slower than it ought to be. On investigation, the blame
mostly affixes to LZ4Stream_read_overflow's habit of memmove'ing all the
remaining buffered data after each read operation. Since reading a TOC
file tends to involve a lot of small (even one-byte) decompression calls,
that amounts to an O(N^2) cost.

This could have been fixed with a minimal patch, but to my eyes
LZ4Stream_read_internal and LZ4Stream_read_overflow are badly-written
spaghetti code; in particular the eol_flag logic is inefficient and
duplicative. I chose to throw the code away and rewrite from scratch.
This version is about sixty lines shorter as well as not having the
performance issue.

Fortunately, AFAICT the only way to get to this problem is to manually
LZ4-compress the toc.dat and/or blobs.toc files within a directory-style
archive; in the main data files, we read blocks that are large enough
that the O(N^2) behavior doesn't manifest. Few people do that, which
likely explains the lack of field complaints. Otherwise this performance
bug might be considered bad enough to warrant back-patching.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us

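The fix can be sketched as replacing a tail-shifting memmove with a read
cursor; hypothetical names, not the committed code:

    #include <string.h>

    typedef struct ReadBuf
    {
        char   *buf;
        size_t  len;        /* bytes of decompressed data buffered */
        size_t  pos;        /* read cursor into buf */
    } ReadBuf;

    static size_t
    consume(ReadBuf *rb, char *out, size_t want)
    {
        size_t avail = rb->len - rb->pos;
        size_t n = (want < avail) ? want : avail;

        memcpy(out, rb->buf + rb->pos, n);
        rb->pos += n;               /* advance a cursor: O(1) per read,
                                     * instead of memmove'ing the whole
                                     * remaining tail on every call */
        if (rb->pos == rb->len)
            rb->pos = rb->len = 0;  /* drained; reset for the next refill */
        return n;
    }
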
2025-10-13 | Fix poor buffering logic in pg_dump's lz4 and zstd compression code. | Tom Lane

Both of these modules dumped each bit of output that they got from the
underlying compression library as a separate "data block" in the emitted
archive file. In the case of zstd this'd frequently result in block sizes
well under 100 bytes; lz4 is a little better but still produces blocks
around 300 bytes, at least in the test case I tried. This bloats the
archive file a little bit compared to larger block sizes, but the real
problem is that when pg_restore has to skip each data block rather than
seeking directly to some target data, tiny block sizes are enormously
inefficient.

Fix both modules so that they fill their allocated buffer reasonably well
before dumping a data block. In the case of lz4, also delete some
redundant logic that caused the lz4 frame header to be emitted as a
separate data block. (That saves little, but I see no reason to expend
extra code to get worse results.)

I fixed the "stream API" code too. In those cases, feeding small amounts
of data to fwrite() probably doesn't have any meaningful performance
consequences. But it seems like a bad idea to leave the two sets of code
doing the same thing in two different ways.

In passing, remove unnecessary "extra paranoia" check in
_ZstdWriteCommon. _CustomWriteFunc (the only possible referent of
cs->writeF) already protects itself against zero-length writes, and it's
really a modularity violation for _ZstdWriteCommon to know that the
custom format disallows empty data blocks. Also, fix Zstd_read_internal
to do less work when passed size == 0.

Reported-by: Dimitrios Apostolou <jimis@gmx.net>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us

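The buffered-write pattern being adopted can be sketched as follows
(hypothetical names; the committed code does this per compressor):

    #include <string.h>

    typedef struct WriteBuf
    {
        char   *buf;
        size_t  len;        /* bytes currently buffered */
        size_t  size;       /* allocated buffer size, e.g. 128kB */
        void  (*flush)(const char *data, size_t len);   /* emits one data block */
    } WriteBuf;

    static void
    buffered_write(WriteBuf *wb, const char *data, size_t len)
    {
        while (len > 0)
        {
            size_t room = wb->size - wb->len;
            size_t chunk = (len < room) ? len : room;

            memcpy(wb->buf + wb->len, data, chunk);
            wb->len += chunk;
            data += chunk;
            len -= chunk;

            if (wb->len == wb->size)
            {
                wb->flush(wb->buf, wb->len);    /* full buffer -> one block */
                wb->len = 0;
            }
        }
    }
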
2025-10-13 | Fix issue with reading zero bytes in Gzip_read. | Tom Lane

pg_dump expects a read request of zero bytes to be a no-op; see for
example ReadStr(). Gzip_read got this wrong and falsely supposed that the
resulting gzret == 0 indicated an error. We could complicate that
error-checking logic some more, but it seems best to just fall out
immediately when passed size == 0.

This bug breaks the nominally-supported case of manually gzip'ing the
toc.dat file within a directory-style dump, so back-patch to v16 where
this code came in. (Prior branches already have a short-circuit for
size == 0 before their only gzread call.)

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us
Backpatch-through: 16

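A sketch of the early exit, as a hypothetical wrapper rather than the
actual Gzip_read:

    #include <zlib.h>

    /* A zero-length request must be a no-op: gzread() returning 0 is
     * ambiguous, since it can also mean EOF or (per zlib's docs) certain
     * error states. */
    static int
    gzip_read(gzFile gz, void *ptr, size_t size, size_t *nread)
    {
        int ret;

        if (size == 0)
        {
            *nread = 0;
            return 0;               /* success; don't consult gzread() */
        }
        ret = gzread(gz, ptr, (unsigned) size);
        if (ret < 0)
            return -1;              /* real error */
        *nread = (size_t) ret;
        return 0;
    }
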
2025-10-11 | Restore test coverage of LZ4Stream_gets(). | Tom Lane

In commit a45c78e32 I removed the only regression test case that reaches
this function, because it turns out that we only use it if reading an
LZ4-compressed blobs.toc file in a directory dump, and that is a state
that has to be created manually. That seems like a bad thing to not test,
not so much for LZ4Stream_gets() itself as because it means the squirrely
eol_flag logic in LZ4Stream_read_internal() is not tested.

The reason for the change was that I thought the lz4 program did not have
any way to perform compression without explicit specification of the
output file name. However, it turns out that the syntax synopsis in its
man page is a lie, and if you read enough of the man page you find out
that with "-m" it will do what's needful. So restore the manual
compression step in that test case.

Noted while testing some proposed changes in pg_dump's compression logic.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us
Backpatch-through: 17

2025-10-09 | Add "ALL SEQUENCES" support to publications. | Amit Kapila

This patch adds support for the ALL SEQUENCES clause in publications,
enabling synchronization/replication of all sequences, which is useful
for upgrades. Publications can now include all sequences via FOR ALL
SEQUENCES.

psql enhancements:
\d shows publications for a given sequence.
\dRp indicates if a publication includes all sequences.

ALL SEQUENCES can be combined with ALL TABLES, but not with other options
like TABLE or TABLES IN SCHEMA. We can extend support for more granular
clauses in future. The view pg_publication_sequences provides information
about the mapping between publications and sequences.

This patch enables publishing of sequences; subscriber-side support will
be added in upcoming patches.

Author: vignesh C <vignesh21@gmail.com>
Author: Tomas Vondra <tomas@vondra.me>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com

2025-09-18 | pg_restore: Fix security label handling with --no-publications/subscriptions. | Fujii Masao

Previously, pg_restore did not skip security labels on publications or
subscriptions even when --no-publications or --no-subscriptions was
specified. As a result, it could issue SECURITY LABEL commands for
objects that were never created, causing those commands to fail.

This commit fixes the issue by ensuring that security labels on
publications and subscriptions are also skipped when the corresponding
options are used.

Backpatch to all supported versions.

Author: Jian He <jian.universality@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CACJufxHCt00pR9h51AVu6+yPD5J7JQn=7dQXxqacj0XyDhc-fA@mail.gmail.com
Backpatch-through: 13

2025-09-16 | Fix pg_dump COMMENT dependency for separate domain constraints. | Noah Misch

The COMMENT should depend on the separately-dumped constraint, not the
domain. Sufficient restore parallelism might fail the COMMENT command by
issuing it before the constraint exists. Back-patch to v13, like commit
0858f0f96ebb891c8960994f023ed5a17b758a38.

Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/20250913020233.fa.nmisch@google.com
Backpatch-through: 13

2025-09-16 | pg_dump: Fix dumping of security labels on subscriptions and event triggers. | Fujii Masao

Previously, pg_dump incorrectly queried pg_seclabel to retrieve security
labels for subscriptions, which are stored in pg_shseclabel as they are
global objects. This could result in security labels for subscriptions
not being dumped.

This commit fixes the issue by updating pg_dump to query the pg_seclabels
view, which aggregates entries from both pg_seclabel and pg_shseclabel.
While querying pg_shseclabel directly for subscriptions was an
alternative, using pg_seclabels is simpler and sufficient.

In addition, pg_dump is updated to dump security labels on event
triggers, which were previously omitted.

Backpatch to all supported versions.

Author: Jian He <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CACJufxHCt00pR9h51AVu6+yPD5J7JQn=7dQXxqacj0XyDhc-fA@mail.gmail.com
Backpatch-through: 13

2025-09-16 | pg_restore: Fix comment handling with --no-policies. | Fujii Masao

Previously, pg_restore did not skip comments on policies even when
--no-policies was specified. As a result, it could issue COMMENT commands
for policies that were never created, causing those commands to fail.

This commit fixes the issue by ensuring that comments on policies are
also skipped when --no-policies is used.

Backpatch to v18, where --no-policies was added in pg_restore.

Author: Jian He <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CACJufxHCt00pR9h51AVu6+yPD5J7JQn=7dQXxqacj0XyDhc-fA@mail.gmail.com
Backpatch-through: 18

2025-09-16 | pg_restore: Fix comment handling with --no-publications / --no-subscriptions. | Fujii Masao

Previously, pg_restore did not skip comments on publications or
subscriptions even when --no-publications or --no-subscriptions was
specified. As a result, it could issue COMMENT commands for objects that
were never created, causing those commands to fail.

This commit fixes the issue by ensuring that comments on publications and
subscriptions are also skipped when the corresponding options are used.

Backpatch to all supported versions.

Author: Jian He <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CACJufxHCt00pR9h51AVu6+yPD5J7JQn=7dQXxqacj0XyDhc-fA@mail.gmail.com
Backpatch-through: 13

2025-09-08 | pg_upgrade: Transfer pg_largeobject_metadata's files when possible. | Nathan Bossart

Commit 161a3e8b68 taught pg_upgrade to use COPY for large object metadata
for upgrades from v12 and newer, which is much faster to restore than the
proper large object commands. For upgrades from v16 and newer, we can
take this a step further and transfer the large object metadata files as
if they were user tables. We can't transfer the files from older versions
because the aclitem data type (needed by pg_largeobject_metadata.lomacl)
changed its storage format in v16 (see commit 7b378237aa). Note that this
commit is essentially a revert of commit 12a53c732c.

There are a couple of caveats. First, we still need to COPY the
corresponding pg_shdepend rows for large objects. Second, we need to COPY
anything in pg_largeobject_metadata with a comment or security label,
else restoring those will fail. This means that an upgrade in which every
large object has a comment or security label won't gain anything from
this commit, but it should at least avoid making those unusual use-cases
any worse.

pg_upgrade must also take care to transfer the relfilenodes of
pg_largeobject_metadata and its index, as was done for pg_largeobject in
commits d498e052b4 and bbe08b8869.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aJ3_Gih_XW1_O2HF%40nathan

2025-09-03 | Generate GUC tables from .dat file | Peter Eisentraut

Store the information in guc_tables.c in a .dat file similar to the
catalog data in src/include/catalog/, and generate a part of guc_tables.c
from that. The goal is to make it easier to edit that information, and to
be able to make changes to the downstream data structures more easily.
(Essentially, those are the same reasons as for the original adoption of
the .dat format.)

Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: David E. Wheeler <david@justatheory.com>
Discussion: https://www.postgresql.org/message-id/flat/dae6fe89-1e0c-4c3f-8d92-19d23374fb10%40eisentraut.org

2025-09-02 | Add max_retention_duration option to subscriptions. | Amit Kapila

This commit introduces a new subscription parameter,
max_retention_duration, aimed at mitigating excessive accumulation of
dead tuples when retain_dead_tuples is enabled and the apply worker lags
behind the publisher.

When the time spent advancing a non-removable transaction ID exceeds the
max_retention_duration threshold, the apply worker will stop retaining
conflict detection information. In such cases, the conflict slot's xmin
will be set to InvalidTransactionId, provided that all apply workers
associated with the subscription (with retain_dead_tuples enabled)
confirm the retention duration has been exceeded.

To ensure retention status persists across server restarts, a new column
subretentionactive has been added to the pg_subscription catalog. This
prevents unnecessary reactivation of retention logic after a restart.

The conflict detection slot will not be automatically re-initialized
unless a new subscription is created with retain_dead_tuples = true, or
the user manually re-enables retain_dead_tuples.

A future patch will introduce support for automatic slot
re-initialization once at least one apply worker confirms that the
retention duration is within the configured max_retention_duration.

Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2@OS0PR01MB5716.jpnprd01.prod.outlook.com

2025-08-29 | pg_dump: Fix compression API error handling | Daniel Gustafsson

Compression in pg_dump is abstracted using an API with multiple
implementations which can be selected at runtime by the user. The API and
its implementations have evolved over time; notable commits include
bf9aa490db, e9960732a9, 84adc8e20, and 0da243fed. The error handling
defined by the API was however problematic, and the implementations had a
few bugs and/or were not following the API specification. This commit
modifies the API to ensure that callers can perform error handling
efficiently, and fixes all the implementations such that they all
implement the API in the same way. A full list of the changes can be seen
below.

* write_func:
  - Make write_func throw an error on all error conditions. All callers
    of write_func were already checking for success and calling pg_fatal
    on all errors, so we might as well make the API support that case
    directly, with simpler error handling as a result.

* open_func:
  - zstd: move stream initialization from the open function to the read
    and write functions, as they can have fatal errors. Also ensure to
    dup the file descriptor like none and gzip.
  - lz4: Ensure to dup the file descriptor like none and gzip.

* close_func:
  - zstd: Ensure to close the file descriptor even if closing down the
    compressor fails, and clean up state allocation on fclose failures.
    Make sure to capture errors set by fclose.
  - lz4: Ensure to close the file descriptor even if closing down the
    compressor fails, and instead of calling pg_fatal log the failures
    using pg_log_error. Make sure to capture errors set by fclose.
  - none: Make sure to catch errors set by fclose.

* read_func / gets_func:
  - Make read_func unconditionally return the number of read bytes
    instead of making it optional per implementation.
  - lz4: Make sure to throw an error and not return -1.
  - gzip: gzread returning zero cannot be assumed to indicate EOF, as it
    is documented to return zero for some types of errors.
  - lz4, zstd: Convert the _read_internal helper functions to not call
    pg_fatal on errors, to be able to handle gets_func returning NULL on
    error.

* getc_func:
  - zstd: Use an unsigned char rather than an int to read char into.

* LZ4Stream_init:
  - Make sure to not switch to inited state until we know that
    initialization succeeded, and reset errno just in case.

On top of these changes there are minor comment cleanups and improvements
as well as an attempt to consistently reset errno in codepaths where it
is inspected.

This work was initiated by a report of API misuse, which turned into a
larger body of work. As this is an internal API these changes can be
backpatched into all affected branches.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Author: Daniel Gustafsson <daniel@yesql.se>
Reported-by: Evgeniy Gorbanev <gorbanyoves@basealt.ru>
Discussion: https://postgr.es/m/517794.1750082166@sss.pgh.pa.us
Backpatch-through: 16

2025-08-23 | Sort DO_DEFAULT_ACL dump objects independent of OIDs. | Noah Misch

Commit 0decd5e89db9f5edb9b27351082f0d74aae7a9b6 missed DO_DEFAULT_ACL,
leading to assertion failures, potential dump order instability, and
spurious schema diffs. Back-patch to v13, like that commit.

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/d32aaa8d-df7c-4f94-bcb3-4c85f02bea21@gmail.com
Backpatch-through: 13

2025-08-11 | Restrict psql meta-commands in plain-text dumps. | Nathan Bossart

A malicious server could inject psql meta-commands into plain-text dump
output (i.e., scripts created with pg_dump --format=plain, pg_dumpall, or
pg_restore --file) that are run at restore time on the machine running
psql. To fix, introduce a new "restricted" mode in psql that blocks all
meta-commands (except for \unrestrict to exit the mode), and teach
pg_dump, pg_dumpall, and pg_restore to use this mode in plain-text dumps.

While at it, encourage users to only restore dumps generated from trusted
servers or to inspect them beforehand, since restoring causes the
destination to execute arbitrary code of the source superusers' choice.
However, the client running the dump and restore needn't trust the source
or destination superusers.

Reported-by: Martin Rakhmanov
Reported-by: Matthieu Denais <litezeraw@gmail.com>
Reported-by: RyotaK <ryotak.mail@gmail.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Security: CVE-2025-8714
Backpatch-through: 13

2025-08-11 | Convert newlines to spaces in names written in v11+ pg_dump comments. | Noah Misch

Maliciously-crafted object names could achieve SQL injection during
restore. CVE-2012-0868 fixed this class of problem at the time, but later
work reintroduced three cases. Commit
bc8cd50fefd369b217f80078585c486505aafb62 (back-patched to v11+ in 2023-05
releases) introduced the pg_dump case. Commit
6cbdbd9e8d8f2986fde44f2431ed8d0c8fce7f5d (v12+) introduced the two
pg_dumpall cases. Move sanitize_line(), unchanged, to dumputils.c so
pg_dumpall has access to it in all supported versions.

Back-patch to v13 (all supported versions).

Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Backpatch-through: 13
Security: CVE-2025-8715

2025-08-08 | pg_dump: Fix incorrect parsing of object types in pg_dump --filter. | Fujii Masao

Previously, pg_dump --filter could misinterpret invalid object types in
the filter file as valid ones. For example, the invalid object type
"table-data" (likely a typo for the valid "table_data") could be
mistakenly recognized as "table", causing pg_dump to succeed when it
should have failed.

This happened because pg_dump identified keywords as sequences of ASCII
alphabetic characters, treating non-alphabetic characters (like hyphens)
as keyword boundaries. As a result, "table-data" was parsed as "table".

To fix this, pg_dump --filter now treats keywords as strings of
non-whitespace characters, ensuring invalid types like "table-data" are
correctly rejected.

Back-patch to v17, where the --filter option was introduced.

Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Reviewed-by: Srinath Reddy <srinath2133@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAHGQGwFzPKUwiV5C-NLBqz1oK1+z9K8cgrF+LcxFem-p3_Ftug@mail.gmail.com
Backpatch-through: 17

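A sketch of the tokenizing change, using a hypothetical helper:

    #include <ctype.h>

    /* A keyword now extends to the next whitespace character, so
     * "table-data" is seen as one token and rejected, rather than being
     * truncated to "table". */
    static size_t
    keyword_length(const char *s)
    {
        size_t n = 0;

        /* Before: while (isalpha((unsigned char) s[n])) n++;
         * which stops at '-' and silently accepts "table-data" as "table". */
        while (s[n] != '\0' && !isspace((unsigned char) s[n]))
            n++;
        return n;
    }
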
2025-08-02 | Simplify options in pg_dump and pg_restore. | Jeff Davis

Remove redundant options --with-data and --with-schema, and rename
--with-statistics to just --statistics.

Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/f379d0aeefe8effe13302a436bc28f549f09e924.camel@j-davis.com
Backpatch-through: 18

2025-08-01 | pg_dump: reject combination of "only" and "with" | Jeff Davis

Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/8ce896d1a05040905cc1a3afbc04e94d8e95669a.camel@j-davis.com
Backpatch-through: 18

2025-07-31 | Sort dump objects independent of OIDs, for the 7 holdout object types. | Noah Misch

pg_dump sorts objects by their logical names, e.g. (nspname, relname,
tgname), before dependency-driven reordering. That removes one source of
logically-identical databases differing in their schema-only dumps. In
other words, it helps with schema diffing. The logical name sort ignored
essential sort keys for constraints, operators, PUBLICATION ... FOR
TABLE, PUBLICATION ... FOR TABLES IN SCHEMA, operator classes, and
operator families. pg_dump's sort then depended on object OID, yielding
spurious schema diffs. After this change, OIDs affect dump order only in
the event of catalog corruption. While pg_dump also wrongly ignored
pg_collation.collencoding, CREATE COLLATION restrictions have been
keeping that imperceptible in practical use.

Use techniques like we use for object types already having full sort key
coverage. Where the pertinent queries weren't fetching the ignored sort
keys, this adds columns to those queries and stores those keys in memory
for the long term.

The ignorance of sort keys became more problematic when commit
172259afb563d35001410dc6daad78b250924038 added a schema diff test
sensitive to it. Buildfarm member hippopotamus witnessed that. However,
dump order stability isn't a new goal, and this might avoid other dump
comparison failures. Hence, back-patch to v13 (all supported versions).

Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20250707192654.9e.nmisch@google.com
Backpatch-through: 13

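The comparator shape can be sketched like this (hypothetical struct and
fields, not pg_dump's actual types):

    #include <string.h>

    typedef struct ObjKey
    {
        const char  *nspname;
        const char  *objname;
        unsigned int oid;
    } ObjKey;

    /* Compare by the complete logical name; OID breaks ties only if two
     * objects are otherwise identical, which should happen only with
     * corrupt catalogs. */
    static int
    obj_key_cmp(const ObjKey *a, const ObjKey *b)
    {
        int cmp = strcmp(a->nspname, b->nspname);

        if (cmp == 0)
            cmp = strcmp(a->objname, b->objname);
        if (cmp == 0 && a->oid != b->oid)
            cmp = (a->oid < b->oid) ? -1 : 1;
        return cmp;
    }
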
2025-07-30 | Revert Non text modes for pg_dumpall, and pg_restore support | Andrew Dunstan

Recent discussions of the mechanisms used to manage global data have
raised concerns about their robustness and security. Rather than try to
deal with those concerns at a very late stage of the release cycle, the
conclusion is to revert these features and work on them for the next
release.

This reverts parts or all of the following commits:

1495eff7bdb Non text modes for pg_dumpall, correspondingly change pg_restore
5db3bf7391d Clean up from commit 1495eff7bdb
289f74d0cb2 Add more TAP tests for pg_dumpall
2ef57908067 Fix a couple of error messages and tests for them
b52a4a5f285 Clean up error messages from 1495eff7bdb
4170298b6ec Further cleanup for directory creation on pg_dump/pg_dumpall
22cb6d28950 Fix memory leak in pg_restore.c
928394b664b Improve various new-to-v18 appendStringInfo calls
39729ec01d2 Fix fat fingering in 22cb6d28950
5822bf21d50 Add missing space in pg_restore documentation.
f09088a01d3 Free memory properly in pg_restore.c
40b9c27014d pg_restore cleanups
4aad2cb7707 Portability fix: isdigit() must be passed an unsigned char.
88e947136b4 Fix typos and grammar in the code
f60420cff66 doc: Alphabetize long options for pg_dump[all].
bc35adee8d7 doc: Put new options in consistent order on man pages
a876464abc7 Message style improvements
dec6643487b Improve pg_dump/pg_dumpall help synopses and terminology
0ebd2425558 Run pgperltidy

Discussion: https://postgr.es/m/20250708212819.09.nmisch@google.com
Backpatch-to: 18
Reviewed-by: Noah Misch <noah@leadboat.com>

2025-07-23 | Preserve conflict-relevant data during logical replication. | Amit Kapila

Logical replication requires reliable conflict detection to maintain data
consistency across nodes. To achieve this, we must prevent premature
removal of tuples deleted by other origins and their associated commit_ts
data by VACUUM, which could otherwise lead to incorrect conflict
reporting and resolution.

This patch introduces a mechanism to retain deleted tuples on the
subscriber during the application of concurrent transactions from remote
nodes. Retaining these tuples allows us to correctly ignore concurrent
updates to the same tuple. Without this, an UPDATE might be
misinterpreted as an INSERT during resolutions due to the absence of the
original tuple. Additionally, we ensure that origin metadata is not
prematurely removed by vacuum freeze, which is essential for detecting
update_origin_differs and delete_origin_differs conflicts.

To support this, a new replication slot named pg_conflict_detection is
created and maintained by the launcher on the subscriber. Each apply
worker tracks its own non-removable transaction ID, which the launcher
aggregates to determine the appropriate xmin for the slot, thereby
retaining necessary tuples.

Conflict information retention (deleted tuples and commit_ts) can be
enabled per subscription via the retain_conflict_info option. This is
disabled by default to avoid unnecessary overhead for configurations that
do not require conflict resolution or logging.

During upgrades, if any subscription on the old cluster has
retain_conflict_info enabled, a conflict detection slot will be created
to protect relevant tuples from deletion when the new cluster starts.

This is foundational work to correctly detect the update_deleted
conflict, which will be done in a follow-up patch.

Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2@OS0PR01MB5716.jpnprd01.prod.outlook.com

2025-07-21 | pg_dump: include comments on not-null constraints on domains, too | Álvaro Herrera

Commit e5da0fe3c22b introduced catalog entries for not-null constraints
on domains; but because commit b0e96f311985 (the original work for
catalogued not-null constraints on tables) forgot to teach pg_dump to
process the comments for them, this one also forgot. Add that now.

We also need to teach repairDependencyLoop() about the new type of
constraints being possible for domains.

Backpatch-through: 17
Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Reported-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF-0bqVR=j4jonS6N2Ka6hHUpFyu3_3TWKNhOW_4yFSSg@mail.gmail.com

2025-07-18 | pg_upgrade: Use COPY for large object metadata. | Nathan Bossart

Presently, pg_dump generates commands like

    SELECT pg_catalog.lo_create('5432');
    ALTER LARGE OBJECT 5432 OWNER TO alice;
    GRANT SELECT ON LARGE OBJECT 5432 TO bob;

for each large object. This is particularly slow at restore time,
especially when there are tens or hundreds of millions of large objects.
From reports and personal experience, such slow restores seem to be most
painful when encountered during pg_upgrade. This commit teaches pg_dump
to instead dump pg_largeobject_metadata and the corresponding pg_shdepend
rows when in binary upgrade mode, i.e., pg_dump now generates commands
like

    COPY pg_catalog.pg_largeobject_metadata (oid, lomowner, lomacl) FROM stdin;
    5432 16384 {alice=rw/alice,bob=r/alice}
    \.

    COPY pg_catalog.pg_shdepend (dbid, classid, objid, objsubid, refclassid, refobjid, deptype) FROM stdin;
    5 2613 5432 0 1260 16384 o
    5 2613 5432 0 1260 16385 a
    \.

Testing indicates the COPY approach can be significantly faster. To do
any better, we'd probably need to find a way to copy/link
pg_largeobject_metadata's files during pg_upgrade, which would be limited
to upgrades from >= v16 (since commit 7b378237aa changed the storage
format for aclitem, which is used for pg_largeobject_metadata.lomacl).

Note that this change only applies to binary upgrade mode (i.e., dumps
initiated by pg_upgrade) since it inserts rows directly into catalogs.
Also, this optimization can only be used for upgrades from >= v12 because
pg_largeobject_metadata was created WITH OIDS in older versions, which
prevents pg_dump from handling pg_largeobject_metadata.oid properly. With
some extra effort, it might be possible to support upgrades from older
versions, but the added complexity didn't seem worth it to support
versions that will have been out-of-support for nearly 3 years by the
time this change is released.

Experienced hackers may remember that prior to v12, pg_upgrade
copied/linked pg_largeobject_metadata's files (see commit 12a53c732c).
Besides the aforementioned storage format issues, this approach failed to
transfer the relevant pg_shdepend rows, and pg_dump still had to generate
an lo_create() command per large object so that creating the dependent
comments and security labels worked. We could perhaps adopt a hybrid
approach for upgrades from v16 and newer (i.e., generate lo_create()
commands for each large object, copy/link pg_largeobject_metadata's
files, and COPY the relevant pg_shdepend rows), but further testing is
needed.

Reported-by: Hannu Krosing <hannuk@google.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Hannu Krosing <hannuk@google.com>
Reviewed-by: Nitin Motiani <nitinmotiani@google.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMT0RQSS-6qLH%2BzYsOeUbAYhop3wmQTkNmQpo5--QRDUR%2BqYmQ%40mail.gmail.com

2025-07-16 | Fix dumping of comments on invalid constraints on domains | Álvaro Herrera

We skip dumping constraints together with domains if they are invalid
('separate'), so that they appear after data -- but their comments were
dumped together with the domain definition, which in effect leads to the
comment being dumped when the constraint does not yet exist. Delay them
in the same way.

Oversight in 7eca575d1c28; backpatch all the way back.

Author: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF_C2pe6J_+nPr6C5jf5rQnbYP8XOKr4HM8yHZtp2aQqQ@mail.gmail.com

2025-07-16 | pg_dumpall: Skip global objects with --statistics-only or --no-schema. | Jeff Davis

Previously, pg_dumpall would still dump global objects such as roles and
tablespaces even when --statistics-only or --no-schema was specified.
Since these global objects are treated as schema-level data, they should
be skipped in these cases.

This commit fixes the issue by ensuring that global objects are not
dumped when either --statistics-only or --no-schema is used.

Author: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://postgr.es/m/08129593-6f3c-4fb9-94b7-5aa2eefb99b0@oss.nttdata.com
Backpatch-through: 18

2025-07-10 | pg_dump: Fix object-type sort priority for large objects. | Nathan Bossart

Commit a45c78e328 moved large object metadata from SECTION_PRE_DATA to
SECTION_DATA but neglected to move PRIO_LARGE_OBJECT in
dbObjectTypePriorities accordingly. While this hasn't produced any known
live bugs, it causes problems for a proposed patch that optimizes
upgrades with many large objects. Fixing the priority might also make the
topological sort step marginally faster by reducing the number of
ordering violations that have to be fixed.

Reviewed-by: Nitin Motiani <nitinmotiani@google.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aBkQLSkx1zUJ-LwJ%40nathan
Discussion: https://postgr.es/m/aG_5DBCjdDX6KAoD%40nathan
Backpatch-through: 17

2025-07-02 | meson: Increase minimum version to 0.57.2 | Peter Eisentraut

The previous minimum was to maintain support for Python 3.5, but we now
require Python 3.6 anyway (commit 45363fca637), so that reason is
obsolete. A small raise to Meson 0.57 allows getting rid of a fair amount
of version conditionals and silences some future-deprecated warnings.
With the version bump, the following deprecation warnings appeared and
are fixed:

    WARNING: Project targets '>=0.57' but uses feature deprecated since
    '0.55.0': ExternalProgram.path. use ExternalProgram.full_path() instead

    WARNING: Project targets '>=0.57' but uses feature deprecated since
    '0.56.0': meson.build_root. use meson.project_build_root() or
    meson.global_build_root() instead.

It turns out that meson 0.57.0 and 0.57.1 are buggy for our use, so the
minimum is actually set to 0.57.2. This is specific to this version
series; in the future we won't necessarily need to be this precise.

Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/42e13eb0-862a-441e-8d84-4f0fd5f6def0%40eisentraut.org

2025-06-30 | Run pgperltidy | Joe Conway

This is required before the creation of a new branch. pgindent is clean,
as is reformat-dat-files. The perltidy version is v20230309, as
documented in pgindent's README.