|
The TAP tests whose ok() calls are changed in this commit were relying
on perl operators, rather than equivalents available in Test::More. For
example, rather than the following:
ok($data =~ qr/expr/m, "expr matching");
ok($data !~ qr/expr/m, "expr not matching");
The new test code uses this equivalent:
like($data, qr/expr/m, "expr matching");
unlike($data, qr/expr/m, "expr not matching");
A big benefit of the new formulation is that the values being checked
are reported when a failure happens, making debugging easier whether
the tests run in the buildfarm, in the CI or locally.
This change leads to slightly more test code overall, as this is how
perltidy prefers to format the new calls.
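For illustration, here is a minimal standalone sketch (not part of this
commit; the log line and pattern are made up) showing what each style
reports when the pattern does not match:
  use strict;
  use warnings;
  use Test::More tests => 2;

  my $data = "connection received: host=localhost\n";

  # On failure, ok() only reports the test name; the content of $data
  # is lost unless the test adds its own diag() call.
  ok($data =~ qr/connection authorized/m, "expr matching");

  # On failure, like() also prints the value of $data and the pattern
  # it was matched against, which is much easier to debug.
  like($data, qr/connection authorized/m, "expr matching");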
Author: Sadhuprasad Patro <b.sadhu@gmail.com>
Discussion: https://postgr.es/m/CAFF0-CHhwNx_Cv2uy7tKjODUbeOgPrJpW4Rpf1jqB16_1bU2sg@mail.gmail.com
|
|
Seems Apple's version of "wc -l" puts spaces before the number.
(I wonder why the cfbot didn't find this.) While here, make
the failure case log what it got, to aid debugging future issues.
Per buildfarm.
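The adjustment is roughly of this shape (a sketch only; the variable
names here are hypothetical, not the test's actual code):
  # Some platforms (e.g. macOS) print leading whitespace before the
  # number, so extract the digits rather than comparing the raw string,
  # and log what was actually seen to help debug future failures.
  my ($lines) = $pager_output =~ /^\s*(\d+)/;
  if (defined $lines && $lines > 0)
  {
      pass("pager was invoked");
  }
  else
  {
      fail("pager was invoked");
      diag("unexpected pager output: $pager_output");
  }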
|
|
We try to use the pager only when more than a screenful's worth of
data is to be printed. However, the code in print.c that's concerned
with counting the number of lines that will be needed missed a lot of
edge cases:
* While plain aligned mode accounted for embedded newlines in column
headers and table cells, unaligned and vertical output modes did not.
* In particular, since vertical mode repeats the headers for each
record, we need to account for embedded newlines in the headers for
each record.
* Multi-line table titles were not accounted for.
* tuples_only mode (where headers aren't printed) wasn't accounted
for.
* Footers were accounted for as one line per footer, again missing
the possibility of multi-line footers. (In some cases such as
"\d+" on a view, there can be many lines in a footer.) Also,
we failed to account for the default footer.
To fix, move the entire responsibility for counting lines into
IsPagerNeeded (or actually, into a new subroutine count_table_lines),
and then expand the logic as appropriate. Also restructure to make it
perhaps a bit easier to follow. It's still only completely accurate
for ALIGNED/WRAPPED/UNALIGNED formats, but the other formats are not
typically used with interactive output.
Arrange to not run count_table_lines at all unless we will use
its result, and teach it to quit early as soon as it's proven
that the output is long enough to require use of the pager.
When dealing with large tables this should save a noticeable
amount of time, since pg_wcssize() isn't exactly cheap.
In passing, move the "flog" output step to the bottom of printTable(),
rather than running it when we've already opened the pager in some
modes. In principle it shouldn't interfere with the pager because
flog should always point to a non-interactive file; but it seems silly
to risk any interference, especially when the existing positioning
seems to have been chosen with the aid of a dartboard.
Also add a TAP test to exercise pager mode. Up to now, we have had
zero test coverage of these code paths, because they aren't reached
unless isatty(stdout). We do have the test infrastructure to improve
that situation, though. Following the lead of 010_tab_completion.pl,
set up an interactive psql and feed it some test cases. To detect
whether it really did invoke the pager, point PSQL_PAGER to "wc -l".
The test is skipped if that utility isn't available.
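In outline, the test does something like the following (a simplified
sketch using IPC::Run directly; the real test goes through the
interactive_psql() infrastructure and is more careful about timing):
  use IPC::Run qw(start timeout);

  # Point the pager at a non-interactive command whose effect is easy
  # to detect: "wc -l" prints only the number of lines piped into it.
  local $ENV{PSQL_PAGER} = 'wc -l';

  my ($in, $out) = ('', '');
  my $h = start [ 'psql', '-X', 'postgres' ],
    '<pty<', \$in, '>pty>', \$out, timeout(180);

  # A result set taller than the terminal should be sent to the pager,
  # so the visible output is a line count instead of the rows.
  $in .= "SELECT 'row ' || g FROM generate_series(1, 1000) g;\n";
  $h->pump until $out =~ /\d+/;

  $in .= "\\q\n";
  $h->finish;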
Author: Erik Wienhold <ewie@ewie.name>
Test-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2dd2430f-dd20-4c89-97fd-242616a3d768@ewie.name
|
|
A malicious server could inject psql meta-commands into plain-text
dump output (i.e., scripts created with pg_dump --format=plain,
pg_dumpall, or pg_restore --file) that are run at restore time on
the machine running psql. To fix, introduce a new "restricted"
mode in psql that blocks all meta-commands (except for \unrestrict
to exit the mode), and teach pg_dump, pg_dumpall, and pg_restore to
use this mode in plain-text dumps.
While at it, encourage users to only restore dumps generated from
trusted servers or to inspect them beforehand, since restoring causes
the destination to execute arbitrary code of the source superusers'
choice. However, the client running the dump and restore needn't
trust the source or destination superusers.
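As a rough illustration of the intended protection (the key, the
script shape and the error wording below are assumptions, not taken
from the patch; $node is a PostgreSQL::Test::Cluster instance):
  # Once \restrict has been issued, other meta-commands are expected
  # to be rejected until the matching \unrestrict is reached.
  my ($ret, $out, $err) = $node->psql(
      'postgres',
      join("\n",
          '\restrict dummykey',
          '\! echo should-not-run',
          '\unrestrict dummykey'));
  isnt($ret, 0, 'meta-command rejected in restricted mode');
  note("psql reported: $err");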
Reported-by: Martin Rakhmanov
Reported-by: Matthieu Denais <litezeraw@gmail.com>
Reported-by: RyotaK <ryotak.mail@gmail.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Security: CVE-2025-8714
Backpatch-through: 13
|
|
Running COPY within a pipeline can break protocol synchronization in
multiple ways. psql is limited in its result processing when COPY
commands are mixed with normal queries while controlling a pipeline
with the new meta-commands, for the following reasons:
- In COPY mode, the backend ignores additional Sync messages and will
not send a matching ReadyForQuery expected by the frontend. Doing a
\syncpipeline just after COPY will leave the frontend waiting for a
ReadyForQuery message that won't be sent, leaving psql out-of-sync.
- libpq automatically sends a Sync with the Copy message which is not
tracked in the command queue, creating an unexpected synchronisation
point that psql cannot really know about. While it is possible to track
such activity for a \copy, this cannot really be done sanely with plain
COPY queries. Backend failures during a COPY would leave the pipeline
in an aborted state while the backend would be in a clean state, ready
to process commands.
In the end, fixing those issues would require modifications to how
libpq handles pipelines and COPY. So, rather than implementing
workarounds in psql to shortcut the libpq internals (command queue
handling, for one), and because the pipeline meta-commands in psql are
a new feature where COPY has a limited impact compared to other
queries, this commit forbids the use of COPY within a pipeline to
avoid possible breakage of protocol synchronisation within psql. If a
use case for COPY support within libpq pipelines shows up, this could
always be added in the future.
Most of the changes in this commit affect the tests for psql
pipelines, removing the tests related to COPY. Some TAP tests still
exist for COPY TO/FROM and \copy to/from, to check that connections
are aborted when this operation is attempted.
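For reference, the rejected pattern looks roughly like this (a sketch;
the pipeline meta-command sequence is simplified and the error wording
is not quoted from the patch):
  # COPY dispatched between \startpipeline and \endpipeline is now
  # refused instead of leaving psql out-of-sync with the backend.
  my ($ret, $out, $err) = $node->psql(
      'postgres',
      join("\n",
          '\startpipeline',
          'COPY (SELECT 1) TO STDOUT \bind \sendpipeline',
          '\endpipeline'));
  isnt($ret, 0, 'COPY is rejected inside a pipeline');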
Reported-by: Nikita Kalinin <n.kalinin@postgrespro.ru>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/AC468509-06E8-4E2A-A4B1-63046A4AC6AB@postgrespro.ru
|
|
When the backend reads COPY data, it ignores all sync messages, as per
c01641f8aed0. With psql pipelines, it is possible to send sync
messages manually with \syncpipeline, which leaves the frontend in an
unrecoverable state as the backend will not send the ReadyForQuery
message expected by psql's result consumption logic.
It could be possible to artificially reduce piped_syncs and
requested_results; however, libpq's state would still have queued sync
messages in its command queue, and the only way to consume those
without directly calling pqCommandQueueAdvance() is to process
ReadyForQuery messages that will never be sent, since the backend
ignores them. Perhaps this could be improved in the future, but I am
not really excited about introducing this amount of complication in
libpq to manipulate the message queues without a better use case to
support it.
Hence, this patch aborts the connection if we detect excessive sync
messages after a COPY in a pipeline to avoid staying in an inconsistent
protocol state, which is the best thing we can do with pipelines in
psql for now. Note that this change does not prevent wrapping a set
of queries inside a block made of \startpipeline and \endpipeline, only
the use of \syncpipeline for a COPY.
Reported-by: Nikita Kalinin <n.kalinin@postgrespro.ru>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/18944-8a926c30f68387dd@postgresql.org
|
|
The right cocktail of COPY FROM, SELECT and/or DML queries and
\syncpipeline was able to break the logic in charge of discarding the
results of a pipeline, done in discardAbortedPipelineResults(). Such a
sequence makes the backend generate a FATAL error, due to a loss of
protocol synchronization.
This problem comes down to the fact that we did not consider the case
of libpq returning PGRES_FATAL_ERROR when discarding the results of an
aborted pipeline. The discarding code is changed so that this result
status is handled as a special case, with the caller of
discardAbortedPipelineResults() being responsible for consuming the
result.
A couple of tests are added to cover the problems reported, bringing an
interesting gain in coverage as there were no tests in the tree covering
the case of protocol synchronization loss.
Issue introduced by 41625ab8ea3d.
Reported-by: Alexander Kozhemyakin <a.kozhemyakin@postgrespro.ru>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/ebf6ce77-b180-4d6b-8eab-71f641499ddf@postgrespro.ru
|
|
The routine was coded so that a WAL sender was always used, a state
required only by one failure test related to START_REPLICATION. This
test is changed to request a WAL sender by passing a replication
option to psql_fails_like(), instead of forcing the use of a WAL
sender for all the tests.
This has proved useful in the context of a separate bug fix where we
are looking at extending tests for some failure scenarios. These
tests need to happen in the context of a normal backend, and not a WAL
sender, where the extended query protocol cannot be used.
Discussion: https://postgr.es/m/aAXkJIOildLUA7vQ@paquier.xyz
|
|
The test committed in 1a759c83278 was prone to failing when using
locales with a different decimal separator. Since the test value
isn't the important part, change to using an integer instead.
Author: Daniel Gustafsson <daniel@yesql.se>
Reported-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://postgr.es/m/CAFj8pRDE=7uW7QP4rg-OQLE2i-puYsUUt+eHE-L6_b_J9w=eWg@mail.gmail.com
|
|
The default interval for \watch, applied when the command is executed
without a specified interval, was hardcoded to two seconds. This adds
a new variable, WATCH_INTERVAL, which is used to set the default
interval, making it configurable for the user. This makes \watch the
first command with a user-configurable default setting.
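For illustration (not part of the patch), a session could now do:
  # Make every bare \watch in this session default to a 5-second
  # interval; an interval given on the command itself still wins.
  my $watch_demo = join("\n",
      '\set WATCH_INTERVAL 5',
      'SELECT count(*) FROM pg_stat_activity \watch');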
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/B2FD26B4-8F64-4552-A603-5CC3DF1C7103@yesql.se
|
|
This commit reshapes the grammar of some commands to apply a more
consistent style across the board, following rules similar to
ce1b0f9da03e:
- Elimination of some pointless used-once variables.
- Use of long options, to better self-document the options used.
- Use of fat commas to link option names and their assigned values,
including redirections, so that perltidy can be coaxed into keeping
them together.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://postgr.es/m/87jz8rzf3h.fsf@wibble.ilmari.org
|
|
This commit rewrites a good chunk of the command arrays in TAP tests
with a grammar based on the following rules:
- Fat commas are used between option names and their values, making it
clear to both humans and perltidy that values and names are bound
together. This is particularly useful for the readability of
multi-line command arrays, and there are plenty of them in the TAP
tests. Most of the test code is updated to use this style. Some
commands used parentheses to show the link, or attached values and
options in a single string; these are updated to use fat commas
instead (see the example below the list).
- Option names are switched to their long forms, making them more
self-documenting. Based on a suggestion by Andrew Dunstan.
- Some trailing commas are added after the last item in multi-line
arrays, which is a common Perl style.
Not all the places are taken care of, but this covers a very good chunk
of them.
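A hypothetical before/after pair (not taken from the patch; file name
invented) showing the intended shape:
  my $dumpfile = 'dump.sql';

  # Before: short options, with no visual binding between name and
  # value.
  $node->command_ok(
      [ 'pg_dump', '-f', $dumpfile, '-d', 'postgres' ],
      'dump completes');

  # After: long options, fat commas binding each value to its name,
  # and a trailing comma after the last item.
  $node->command_ok(
      [
          'pg_dump',
          '--file' => $dumpfile,
          '--dbname' => 'postgres',
      ],
      'dump completes');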
Author: Dagfinn Ilmari Mannsåker
Reviewed-by: Michael Paquier, Peter Smith, Euler Taveira
Discussion: https://postgr.es/m/87jzc46d8u.fsf@wibble.ilmari.org
|
|
Backpatch-through: 13
|
|
We now create contype='n' pg_constraint rows for not-null constraints on
user tables. Only one such constraint is allowed for a column.
We propagate these constraints to other tables during operations such as
adding inheritance relationships, creating and attaching partitions and
creating tables LIKE other tables. These related constraints mostly
follow the well-known rules of conislocal and coninhcount that we have
for CHECK constraints, with some adaptations: for example, as opposed to
CHECK constraints, we don't match not-null ones by name when descending
a hierarchy to alter or remove it, instead matching by the name of the
column that they apply to. This means we don't require the constraint
names to be identical across a hierarchy.
The inheritance status of these constraints can be controlled: now we
can be sure that if a parent table has one, then all children will have
it as well. They can optionally be marked NO INHERIT, and then children
are free not to have one. (There's currently no support for altering a
NO INHERIT constraint into inheriting down the hierarchy, but that's a
desirable future feature.)
This also opens the door for having these constraints be marked NOT
VALID, as well as allowing UNIQUE+NOT NULL to be used for functional
dependency determination, as envisioned by commit e49ae8d3bc58. It's
likely possible to allow DEFERRABLE constraints as followup work, as
well.
psql shows these constraints in \d+, though we may want to reconsider if
this turns out to be too noisy. Earlier versions of this patch hid
constraints that were on the same columns of the primary key, but I'm
not sure that that's very useful. If clutter is a problem, we might be
better off inventing a new \d++ command and not showing the constraints
in \d+.
For now, we omit these constraints on system catalog columns, because
they're unlikely to achieve anything.
The main difference from the previous attempt at this (b0e96f311985)
is that we now require such a constraint to always exist when a column
is part of a primary key; we didn't require this previously, which had
a number of unpalatable consequences. With this requirement, the code
is easier to reason about. For example:
- We no longer have "throwaway constraints" during pg_dump. We needed
those for the case where a table had a PK without a not-null
underneath, to prevent a slow scan of the data during restore of the
PK creation, which was particularly problematic for pg_upgrade.
- We no longer have to cope with attnotnull being set spuriously in
case a primary key is dropped indirectly (e.g., via DROP COLUMN).
Some bits of code in this patch were authored by Jian He.
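A quick way to look at the new catalog rows (illustration only; the
table name is invented and $node is a test cluster):
  $node->safe_psql('postgres',
      'CREATE TABLE nn_demo (a int NOT NULL, b int)');
  # Each not-null column now has a contype = 'n' row in pg_constraint.
  my $rows = $node->safe_psql('postgres',
      q{SELECT conname, contype, conkey FROM pg_constraint
         WHERE conrelid = 'nn_demo'::regclass AND contype = 'n'});
  note("not-null constraints: $rows");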
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Bernd Helmle <mailings@oopsware.de>
Reviewed-by: 何建 (jian he) <jian.universality@gmail.com>
Reviewed-by: 王刚 (Tender Wang) <tndrwang@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/202408310358.sdhumtyuy2ht@alvherre.pgsql
|
|
Attempting to use an interval of time less than 1ms would cause \watch
to hang. This was confusing, so let's change the logic so that an
interval lower than 1ms behaves the same as 0.
Comments are added to mention that the internals of do_watch() had
better rely on "sleep_ms", the interval value in milliseconds. While
at it, this commit adds a test to check the behavior of interval
values less than 1ms.
\watch hanging for interval values less than 1ms existed before
6f9ee74d45aa, which changed the code to support an interval value of
0.
Reported-by: Heikki Linnakangas
Author: Andrey M. Borodin, Michael Paquier
Discussion: https://postgr.es/m/88445e0e-3156-4b9d-afae-9a1a7b1631f6@iki.fi
Backpatch-through: 16
|
|
This reverts some hacks added in d33a81203 and cd69ec66c.
At the time the concern was the already-ancient version of
libedit shipped in Debian 10 (Buster). That platform is
now two years past EOL, so desupporting it for PG 18 seems
fine. (Also, if anyone is really hot to keep testing it,
they can use SKIP_READLINE_TESTS to skip this test.)
We might have to reconsider if any animals still in the
buildfarm don't like this, but the best way to find out
is to try it.
Anton Melnikov
Discussion: https://postgr.es/m/CAGRrpzZU48F2oV3d8eDLr=4TU9xFH5Jt9ED+qU1+X91gMH68Sw@mail.gmail.com
|
|
Run pgindent, pgperltidy, and reformat-dat-files.
The pgindent part of this is pretty small, consisting mainly of
fixing up self-inflicted formatting damage from patches that
hadn't bothered to add their new typedefs to typedefs.list.
In order to keep it from making anything worse, I manually added
a dozen or so typedefs that appeared in the existing typedefs.list
but not in the buildfarm's list. Perhaps we should formalize that,
or better find a way to get those typedefs into the automatic list.
pgperltidy is as opinionated as always, and reformat-dat-files too.
|
|
Formerly this was done with a cursor, which is problematic since
not all result-set-returning query types can be put into a cursor.
The new implementation is better integrated into other psql
features, too.
Daniel Vérité, reviewed by Laurenz Albe and myself (and whacked
around a bit by me, so any remaining bugs are my fault)
Discussion: https://postgr.es/m/CAKZiRmxsVTkO928CM+-ADvsMyePmU3L9DQCa9NwqjvLPcEe5QA@mail.gmail.com
|
|
psql has the :{?name} syntax for testing whether a psql variable
exists. This commit implements tab completion for this syntax.
Notably, in order to implement this we have to remove '{' from
WORD_BREAKS. It appears that '{' has been there from the very
beginning, coming from the default value of
rl_basic_word_break_characters. Since :{?name} is the only psql
syntax using the '{' character, removing it from WORD_BREAKS
shouldn't break anything.
Discussion: https://postgr.es/m/CAGRrpzZU48F2oV3d8eDLr%3D4TU9xFH5Jt9ED%2BqU1%2BX91gMH68Sw%40mail.gmail.com
Author: Steve Chavez
Reviewed-by: Erik Wienhold
|
|
Reported-by: Michael Paquier
Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz
Backpatch-through: 12
|
|
There are a lot of Perl scripts in the tree, mostly code generation
and TAP tests. Occasionally, these scripts produce warnings. These
are probably always mistakes on the developer side (true positives).
Typical examples are warnings from genbki.pl or related when you make
a mess in the catalog files during development, or warnings from tests
when they massage a config file that looks different on different
hosts, or mistakes during merges (e.g., duplicate subroutine
definitions), or just mistakes that weren't noticed because there is a
lot of output in a verbose build.
This changes all warnings into fatal errors, by replacing
use warnings;
by
use warnings FATAL => 'all';
in all Perl files.
Discussion: https://www.postgresql.org/message-id/flat/06f899fd-1826-05ab-42d6-adeb1fd5e200%40eisentraut.org
|
|
Commit 664d75753 pulled 010_tab_completion.pl's infrastructure for
invoking an interactive psql session out into a generally-useful test
function, but it didn't move enough stuff. We need to set up various
environment variables that readline will look at, both to ensure
stability of test results and to prevent test actions from cluttering
the calling user's ~/.psql_history. Expecting calling scripts to
remember to do that is too failure-prone: the other existing caller
001_password.pl did not do it. Hence, remove those initialization
steps from 010_tab_completion.pl and put them into interactive_psql().
Since interactive_psql was already making a local ENV hash, this has
no effect on calling scripts.
Discussion: https://postgr.es/m/794610.1703182896@sss.pgh.pa.us
|
|
Author: Daniel Vérité
Reviewed-by: Peter Eisentraut
Discussion: https://www.postgresql.org/message-id/33ce8350-8cd1-45ff-a5fe-f9be7bc70649%40manitou-mail.org
|
|
Previously, the test relied on a shell trick to retrieve the PID of
the psql session to be stopped with SIGINT, and was skipped on
Windows. This commit changes the test to use IPC::Run::signal()
instead, which still does not work on Windows, but for a different
reason: SIGINT would stop the test before it finishes.
This should allow the test to run on non-Windows platforms where PPID is
not supported (like NetBSD), spreading it a bit more across the
buildfarm. And the logic of the test is simpler.
This is the first time in the tree that IPC::Run::signal() is used,
so, as a matter of safety (or just call it me having cold feet), no
backpatch is done, at least for now.
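The gist of the new approach, as a simplified sketch (the real test
drives psql through the interactive helpers and synchronizes much more
carefully):
  use IPC::Run qw(start timeout);

  my ($in, $out) = ('', '');
  my $h = start [ 'psql', '-X', 'postgres' ], \$in, \$out, timeout(180);

  # Start a long-running query, then deliver SIGINT straight to the
  # harness instead of shelling out to find a PID to kill.
  $in .= "SELECT pg_sleep(600);\n";
  $h->pump_nb;
  $h->signal('INT');

  $in .= "\\q\n";
  $h->finish;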
Author: Yugo NAGATA
Reviewed-by: Fabien Coelho
Discussion: https://postgr.es/m/20230810125935.22c2922ea5250ba79358965b@sraoss.co.jp
|
|
This changes 020_cancel.pl so that the test is entirely skipped on
Windows. This test was already doing nothing under WIN32, except
initializing and starting a node without using it, so this shaves a
few test cycles.
Author: Yugo NAGATA
Reviewed-by: Fabien Coelho
Discussion: https://postgr.es/m/20230810125935.22c2922ea5250ba79358965b@sraoss.co.jp
Backpatch-through: 15
|
|
As cd82e5c79d14 did. Otherwise, the test fails in locales that use
decimal separators other than ".".
|
|
When running a repeated query with \watch in psql, it can be helpful
to be able to stop the watch process when the query no longer returns
the expected number of rows. An example would be to watch for the
presence of a certain event in pg_stat_activity and stop when the
event is no longer present, or to watch an index creation and stop
when the index is created.
This adds a min_rows=MIN parameter to \watch which can be set to a
non-negative integer, and the watch query will stop executing when it
returns fewer than MIN rows.
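Example usage (illustrative only; the wait event is an arbitrary
choice):
  # Poll every second and stop as soon as no matching backend remains,
  # i.e. the query returns fewer than min_rows=1 rows.
  my $watch_demo =
    q{SELECT pid FROM pg_stat_activity WHERE wait_event = 'WalSenderMain'
      \watch interval=1 min_rows=1};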
Author: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAKAnmmKStATuddYxP71L+p0DHtp9Rvjze3XRoy0Dyw67VQ45UA@mail.gmail.com
|
|
Run pgindent, pgperltidy, and reformat-dat-files.
This set of diffs is a bit larger than typical. We've updated to
pg_bsd_indent 2.1.2, which properly indents variable declarations that
have multi-line initialization expressions (the continuation lines are
now indented one tab stop). We've also updated to perltidy version
20230309 and changed some of its settings, which reduces its desire to
add whitespace to lines to make assignments etc. line up. Going
forward, that should make for fewer random-seeming changes to existing
code.
Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
|
|
This breaks out the background and interactive psql functionality into a
new class, PostgreSQL::Test::BackgroundPsql. Sessions are still initiated
via PostgreSQL::Test::Cluster, but once started they can be
manipulated by the new helper functions, which are intended to make
querying easier. A sample session for a command which can be expected
to finish at a later time can be seen below.
  my $session = $node->background_psql('postgres');
  $session->query_until(qr/start/, q(
      \echo start
      CREATE INDEX CONCURRENTLY idx ON t(a);
  ));
  $session->quit;
Patch by Andres Freund with some additional hacking by me.
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/20230130194350.zj5v467x4jgqt3d6@awork3.anarazel.de
|
|
psql parses the interval argument of \watch with locale-dependent
strtod(). In commit 00beecfe8 I added a test case that exercises
a fractional interval, but I hard-coded 0.01 which doesn't work
in locales where the radix point isn't ".". We don't want to
change this longstanding parsing behavior, so fix the test case
to generate a suitably locale-aware spelling.
Report and patch by Alexander Korotkov.
Discussion: https://postgr.es/m/CAPpHfdv+10Uk6FWjsh3+ju7kHYr76LaRXbYayXmrM7FBU-=Hgg@mail.gmail.com
|
|
\watch can now be told to stop after N executions of the query.
With the idea that we might want to add more options to \watch
in future, this patch generalizes the command's syntax to a list
of name=value options, with the interval allowed to omit the name
for backwards compatibility.
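For example, assuming the execution-count option is spelled "count"
(the message above does not name it), the two forms look like this:
  # A bare number is still read as the interval, for backwards
  # compatibility; the second form runs the query five times, two
  # seconds apart, then stops.
  my @watch_examples = (
      'SELECT now() \watch 2',
      'SELECT now() \watch interval=2 count=5',
  );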
Andrey Borodin, reviewed by Kyotaro Horiguchi, Nathan Bossart,
Michael Paquier, Yugo Nagata, and myself
Discussion: https://postgr.es/m/CAAhFRxiZ2-n_L1ErMm9AZjgmUK=qS6VHb+0SaMn8sqqbhF7How@mail.gmail.com
|
|
A failure to parse the interval value given to \watch was silently
turned into an interval of 1s between two queries, which can be
confusing. This commit improves the error handling, and a couple of
tests are added to check for:
- An incorrect value.
- An out-of-range value.
- A negative value.
A value of zero now works, meaning that there is no interval of time
between two queries in a \watch loop. No backpatch is done, as it
could break existing applications.
Author: Andrey Borodin
Reviewed-by: Kyotaro Horiguchi, Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/CAAhFRxiZ2-n_L1ErMm9AZjgmUK=qS6VHb+0SaMn8sqqbhF7How@mail.gmail.com
|
|
This allows specifying a string which, when an input field matches it,
causes the column's default value to be inserted. The advantage of
this is that the default can be inserted in some rows while others
provide non-default data.
The file_fdw extension is also modified to allow use of this option.
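A hedged illustration of the option (the marker string '\D' and the
table are arbitrary choices here, not from the patch):
  # The second data row asks for the column default (now()) instead of
  # supplying a literal timestamp.
  $node->safe_psql('postgres', join("\n",
      q{CREATE TABLE copy_default_demo (id int, ts timestamptz DEFAULT now());},
      q{COPY copy_default_demo FROM STDIN WITH (FORMAT csv, DEFAULT '\D');},
      q{1,2024-01-01 00:00:00},
      q{2,\D},
      q{\.}));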
Israel Barth Rubio
Discussion: https://postgr.es/m/CAO_rXXAcqesk6DsvioOZ5zmeEmpUN5ktZf-9=9yu+DTr0Xr8Uw@mail.gmail.com
|
|
Backpatch-through: 11
|
|
The motivation for this is twofold. For one, the meson patchset would
like to have more control over the logfiles. For another, the log
file location for TAP tests (tmp_check/log) is not symmetric to the
log location for pg_regress/isolation tests (log/).
This commit does not change the default location of log files for TAP
tests, as that would break the buildfarm log collection; it just
provides the infrastructure for doing so.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/1131990.1660661896@sss.pgh.pa.us
Discussion: https://postgr.es/m/20220828170806.GN2342@telsasoft.com
|
|
This is motivated by the meson patchset, which wants to put the log / data for
tests in a different place than the autoconf build. Right now log files for
tap tests have to be inside $TESTDIR/tmp_check, whereas log files for
pg_regress/isolationtester are outside of tmp_check. This change doesn't fix
the latter, but is a prerequisite.
The only test that needs adjustment is 010_tab_completion.pl, as it hardcoded
the tmp_check/ directory. Instead create a dedicated directory for the test
files. It's also a bit cleaner independently, because it doesn't intermingle
the test files with more important things like the log/ directory.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/1131990.1660661896@sss.pgh.pa.us
Discussion: https://postgr.es/m/d861493c-ed20-c251-7a89-7924f5197341@enterprisedb.com
|
|
pgperltidy and reformat-dat-files too. Not many changes.
|
|
This commit, completing 157f873, forces a ROLLBACK for
--single-transaction only when ON_ERROR_STOP is used and one of the
steps defined by -f/-c fails. Hence, COMMIT is always used when
ON_ERROR_STOP is not set, ignoring the status code of the last action
taken in the set of switches specified by -c/-f (previously ROLLBACK
would have been issued even without ON_ERROR_STOP if the last step
failed, while COMMIT was issued if a step in-between failed as long as
the last step succeeded, leading to more inconsistency).
While at it, this adds much more test coverage in this area when not
using ON_ERROR_STOP, with multiple switch patterns involving -c and -f
for query files, single queries and slash commands.
The behavior without ON_ERROR_STOP is arguably a bug, but there was
not much support for a backpatch to force a ROLLBACK on a step
failure, so this change is done only on HEAD for now.
Per discussion with Tom Lane and Kyotaro Horiguchi.
Discussion: https://postgr.es/m/Yqbc8bAdwnP02na4@paquier.xyz
|
|
Some locales use a comma as decimal separator (like Czech or French),
and psql's 001_basic.pl for \timing was not able to handle that
properly. This fixes the matching regexes to be able to handle both
comma and dot as possible decimal separators, as per a suggestion from
Andrew Dunstan.
psql tests were the only place with such a portability issue
(check-world passed here with a forced LANG/LANGUAGE). These tests are
new as of c0280bc, so there is no need for a backpatch.
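The kind of adjustment involved, sketched (the regex is simplified
relative to the test's actual one, and $output stands for captured
psql output):
  # Accept either radix character in psql's \timing output, which
  # looks like "Time: 12.34 ms" or "Time: 12,34 ms" depending on the
  # locale.
  like($output, qr/Time: \d+[.,]\d+ ms/, '\timing prints a duration');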
Reported-by: Pavel Stehule
Discussion: https://postgr.es/m/CAFj8pRBz8iQmd2aOaCLvO-rJY6vZr-h6Q0qvV0J+yb78J7uiaA@mail.gmail.com
|
|
psql --single-transaction is able to handle multiple -c and -f switches
in a single transaction since d5563d7d, but this had the surprising
behavior of forcing a transaction COMMIT even if psql failed with an
error in the client (for example incorrect path given to \copy), which
would generate an error, but still commit any changes that were already
applied in the backend. This commit makes the behavior more consistent,
by enforcing a transaction ROLLBACK if any command fails, whether
client-side or backend-side, so that no changes are applied if an
error happens in any of them.
Some tests are added on HEAD to provide some coverage of all that.
Backend-side errors are unreliable as IPC::Run can complain on SIGPIPE
if psql quits before reading a query result, but that should work
properly in the case where any errors come from psql itself, which is
what the original report is about.
Reported-by: Christoph Berg
Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/17504-76b68018e130415e@postgresql.org
Backpatch-through: 10
|
|
The changes to show all query results (7844c9918) broke \timing output
in case of an error; it didn't update the timing result and showed
0.000 ms.
Fix by updating the timing result also in the error case. Also, for
robustness, update the timing result any time a result is obtained,
not only for the last, so a sensible value is always available.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Richard Guo <guofenglinux@gmail.com>
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/3813350.1652111765%40sss.pgh.pa.us
|
|
Run pgindent, pgperltidy, and reformat-dat-files.
I manually fixed a couple of comments that pgindent uglified.
|
|
Get rid of the separate "FATAL" log level, as it was applied
so inconsistently as to be meaningless. This mostly involves
s/pg_log_fatal/pg_log_error/g.
Create a macro pg_fatal() to handle the common use-case of
pg_log_error() immediately followed by exit(1). Various
modules had already invented either this or equivalent macros;
standardize on pg_fatal() and apply it where possible.
Invent the ability to add "detail" and "hint" messages to a
frontend message, much as we have long had in the backend.
Except where rewording was needed to convert existing coding
to detail/hint style, I have (mostly) resisted the temptation
to change existing message wording.
Patch by me. Design and patch reviewed at various stages by
Robert Haas, Kyotaro Horiguchi, Peter Eisentraut and
Daniel Gustafsson.
Discussion: https://postgr.es/m/1363732.1636496441@sss.pgh.pa.us
|
|
Previously, psql printed only the last result if a command string
returned multiple result sets. Now it prints all of them. The
previous behavior can be obtained by setting the psql variable
SHOW_ALL_RESULTS to off.
This is a significantly enhanced version of
3a5130672296ed4e682403a77a9a3ad3d21cef75 (that was later reverted).
There is also much more test coverage for various psql features now.
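For illustration (not from the patch): with two statements in one
command string, both result sets are now printed by default, and the
old behavior remains available through the variable:
  # Default: both result sets from the single -c string are printed.
  $node->command_like(
      [ 'psql', '-XAt', '-d', 'postgres', '-c', 'SELECT 1; SELECT 2;' ],
      qr/1\n2/,
      'all results shown by default');

  # Old behavior: only the last result set is printed.
  $node->command_like(
      [
          'psql', '-XAt', '-d', 'postgres',
          '-v', 'SHOW_ALL_RESULTS=off',
          '-c', 'SELECT 1; SELECT 2;',
      ],
      qr/^2$/,
      'only last result with SHOW_ALL_RESULTS=off');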
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: "Iwata, Aya" <iwata.aya@jp.fujitsu.com> (earlier version)
Reviewed-by: Daniel Verite <daniel@manitou-mail.org> (earlier version)
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com> (earlier version)
Reviewed-by: vignesh C <vignesh21@gmail.com> (earlier version)
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
|
|
This is another piece of functionality that happens while a user query
is being sent and which did not have any test coverage.
|
|
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
|
|
Dagfinn Ilmari Mannsåker and Tom Lane
Discussion: https://postgr.es/m/87k0curq0w.fsf@wibble.ilmari.org
|
|
Slow hosts may avoid load-induced, spurious failures by setting
environment variable PG_TEST_TIMEOUT_DEFAULT to some number of seconds
greater than 180. Developers may see faster failures by setting that
environment variable to some lesser number of seconds. In tests, write
$PostgreSQL::Test::Utils::timeout_default wherever the convention has
been to write 180. This change raises the default for some briefer
timeouts. Back-patch to v10 (all supported versions).
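Typical usage in a test then becomes (sketch):
  use PostgreSQL::Test::Utils;
  use IPC::Run qw(timeout);

  # Instead of a literal 180, scale with the configured default so
  # slow hosts can raise it via PG_TEST_TIMEOUT_DEFAULT.
  my $timer = timeout($PostgreSQL::Test::Utils::timeout_default);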
Discussion: https://postgr.es/m/20220218052842.GA3627003@rfd.leadboat.com
|
|
Add a few TAP tests for things that happen while a user query is being
sent:
- \timing
- client encoding handling
- notifications
Discussion: https://www.postgresql.org/message-id/3199e176-424e-1bef-f180-c1548466c2da@enterprisedb.com
|
|
Split psql_like() into two functions psql_like() and psql_fails_like()
and make them mirror the existing command_like() and
command_fails_like() more closely. In particular, follow the
universal convention that the test name is the last argument.
Discussion: https://www.postgresql.org/message-id/3199e176-424e-1bef-f180-c1548466c2da@enterprisedb.com
|