This is experimental work. By setting pgxl_remote_fetch_size to 0, users can
fetch all rows at once from the remote side (instead of fetching a small batch
at a time, which gives the remote node time to produce more rows while the
previous batch is consumed). While fetching all rows at once is not very useful
in itself, it lets us work around PostgreSQL's limitation that parallel queries
are supported only when a Portal is run once and to completion.
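For reference, a sketch of how one might exercise this, assuming a running
Postgres-XL coordinator; the table name t1 is a placeholder, and the PGOPTIONS
settings are the ones named in this commit:

```shell
# Experimental: fetch all remote rows in one go so the Portal runs once and
# to completion, allowing PostgreSQL to consider a parallel plan.
# "t1" is a hypothetical table; adjust connection options as needed.
PGOPTIONS="-c force_parallel_mode=on -c pgxl_remote_fetch_size=0" \
    psql -c "EXPLAIN (COSTS OFF) SELECT count(*) FROM t1;"
```

Note that this is the same combination of settings under which the regression
hangs described here were observed, so instability is expected.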
We do see hangs in the regression tests when running with PGOPTIONS set to "-c
force_parallel_mode=on -c pgxl_remote_fetch_size=0", so there are issues that
still need to be addressed. It also doesn't seem practical for users to set
pgxl_remote_fetch_size dynamically just to enforce parallel query. So this is
an experimental feature that we don't expect users to use heavily, just yet.
{
{"pgxl_remote_fetch_size", PGC_USERSET, UNGROUPED,
- gettext_noop("Number of maximum tuples to fetch in one remote iteration"),
+ gettext_noop("Maximum number of tuples to fetch in one remote "
+ "iteration. 0 fetches all rows at once."),
NULL,
0
},
&PGXLRemoteFetchSize,
- 1000, 1, INT_MAX,
+ 1000, 0, INT_MAX,
NULL, NULL, NULL
},