Patch contributed by Umar Hayat.
Discussion: https://www.pgpool.net/pipermail/pgpool-hackers/2020-April/003587.html
<para>
<firstterm>Watchdog</firstterm> is a sub process of <productname>Pgpool-II</productname>
to add high availability. Watchdog is used to resolve the single
- point of failure by coordinating multiple <productname>pgpool-II</productname>
- nodes. The watchdog was first introduced in <productname>pgpool-II</productname>
+ point of failure by coordinating multiple <productname>Pgpool-II</productname>
+ nodes. The watchdog was first introduced in <productname>Pgpool-II</productname>
<emphasis>V3.2</emphasis> and is significantly enhanced in
- <productname>pgpool-II</productname> <emphasis>V3.5</emphasis>, to ensure the presence of a
+ <productname>Pgpool-II</productname> <emphasis>V3.5</emphasis>, to ensure the presence of a
quorum at all times. This new addition to watchdog makes it more fault tolerant
and robust in handling and guarding against the split-brain syndrome
and network partitioning. However, to ensure the quorum mechanism properly
- works, the number of pgpool-II nodes must be odd in number and greater than or
+ works, the number of Pgpool-II nodes must be odd and greater than or
equal to 3.
</para>
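<para>
For illustration, a minimal sketch of the remote-node settings for a
3-node watchdog cluster, using the <literal>other_pgpool_hostname0</literal>
family of parameters (host names and ports are placeholders; each server
carries the mirror-image settings for its two peers):
</para>
<programlisting>
# on server1; server2 and server3 list their own two peers
other_pgpool_hostname0 = 'server2'
other_pgpool_port0 = 9999
other_wd_port0 = 9000
other_pgpool_hostname1 = 'server3'
other_pgpool_port1 = 9999
other_wd_port1 = 9000
</programlisting>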
the "NODES LIST DATA" result packet.
</para>
<para>
- The result packet returnd by watchdog for the "GET NODES LIST"
+ The result packet returned by watchdog for the "GET NODES LIST"
will contain the list of all configured watchdog nodes to do
health check on, in the <acronym>JSON</acronym> format.
The <acronym>JSON</acronym> of the watchdog nodes contains the
status and brings the virtual IP assigned to watchdog down.
Thus clients of <productname>Pgpool-II</productname> cannot
connect to <productname>Pgpool-II</productname> using the
- virtual IP any more. This is neccessary to avoid split-brain,
+ virtual IP any more. This is necessary to avoid split-brain,
that is, situations where there are multiple active
<productname>Pgpool-II</productname>.
</para>
Since <productname>Pgpool-II</productname> is a middleware that works between
<productname>PostgreSQL</> servers and a <productname>PostgreSQL</> database client, when a client application
connects to the <productname>Pgpool-II</productname>, <productname>Pgpool-II</productname>
- inturn connects to the <productname>PostgreSQL</> servers using the same credentials to serve the incomming
+ in turn connects to the <productname>PostgreSQL</> servers using the same credentials to serve the incoming
client connection. Thus, all the access privileges and restrictions defined for the user in <productname>PostgreSQL</>
get automatically applied to all <productname>Pgpool-II</productname> clients, with the exception of
the authentication on the <productname>PostgreSQL</> side that depends on the client's IP addresses or hostnames.
This is because <productname>PostgreSQL</> sees the IP address of the <productname>Pgpool-II</productname> server and not that of the actual client.
Therefore, for client host-based authentication <productname>Pgpool-II</productname> has the
<literal>pool_hba</literal> mechanism similar to the <literal>pg_hba</literal> mechanism for
- authenticating the incomming client connections.
+ authenticating the incoming client connections.
-->
</para>
<para>
Since <productname>Pgpool-II</productname> does not know anything about
users in the <productname>PostgreSQL</> backend server, the database name is simply
- compared against the entries in the databaseE field of <filename>pool_hba.conf</filename>.
+ compared against the entries in the database field of <filename>pool_hba.conf</filename>.
</para>
</note>
</para>
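<para>
For illustration, a minimal <filename>pool_hba.conf</filename> sketch
(the addresses are placeholders); the format follows that of
<filename>pg_hba.conf</filename>:
</para>
<programlisting>
# TYPE  DATABASE  USER  CIDR-ADDRESS     METHOD
host    all       all   127.0.0.1/32     trust
host    all       all   192.168.1.0/24   md5
</programlisting>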
</para>
<para>
- But serialization has its own overheads, and it is recomended
+ But serialization has its own overheads, and it is recommended
to be used only with larger values of <xref linkend="guc-num-init-children">.
For small values of <xref linkend="guc-num-init-children">,
<varname>serialize_accept</varname> can degrade the performance because of the serialization overhead.
<note>
<para>
- It is recomended to do a benchmark before deciding wether to use
- <varname>serialize_accept</varname> or not, because the corelation
+ It is recommended to do a benchmark before deciding whether to use
+ <varname>serialize_accept</varname> or not, because the correlation
of <xref linkend="guc-num-init-children"> and <varname>serialize_accept</varname>
can be different in different environments.
</para>
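<para>
For illustration, a configuration sketch that turns serialization on
for a large number of child processes (the value 1000 is only an example):
</para>
<programlisting>
num_init_children = 1000
serialize_accept = on
</programlisting>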
<para>
<productname>Pgpool-II</productname> <emphasis>V2.2</emphasis> or later
automatically detects whether the table has a SERIAL column or not,
- so it never locks the table if it desn't have the SERIAL columns.
+ so it never locks the table if it doesn't have a SERIAL column.
</para>
<para>
$ cp /usr/local/etc/pgpool.conf.sample /usr/local/etc/pgpool.conf
</programlisting>
<productname>Pgpool-II</productname> only accepts connections from the localhost
- using port 9999. If you wish to receive conenctions from other hosts,
+ using port 9999. If you wish to receive connections from other hosts,
set <xref linkend="guc-listen-addresses"> to <literal>'*'</literal>.
<programlisting>
listen_addresses = 'localhost'
<title>Watchdog Configuration Example</title>
<para>
- This tutrial explains the simple way to try "Watchdog".
+ This tutorial explains the simple way to try "Watchdog".
What you need is 2 Linux boxes on which <productname>
Pgpool-II</productname> is installed and a <productname>PostgreSQL</>
server on the same machine or on another one. It is enough
[root@osspc20]# {installed_dir}/bin/pgpool -n -f {installed_dir}/etc/pgpool.conf > pgpool.log 2>&1
</programlisting>
Log messages will show that <productname>Pgpool-II</productname>
- has joind the watchdog cluster as standby watchdog.
+ has joined the watchdog cluster as standby watchdog.
<programlisting>
LOG: watchdog cluster configured with 1 remote nodes
LOG: watchdog remote node:0 on Linux_osspc16_9000:9000
The following parameters configure watchdog's monitoring.
Specify the interval to check <xref linkend="guc-wd-interval">,
the count to retry <xref linkend="guc-wd-life-point">,
- the qyery to check <xref linkend="guc-wd-lifecheck-query"> and
- finaly the type of lifecheck <xref linkend="guc-wd-lifecheck-method">.
- <programlisting>
+ the query to check <xref linkend="guc-wd-lifecheck-query"> and
+ finally the type of lifecheck <xref linkend="guc-wd-lifecheck-method">.
+ <programlisting>
wd_lifecheck_method = 'query'
# Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')
# (change requires restart)
</sect1>
<sect1 id="example-cluster">
- <title><productname>Pgpoo-II</productname> + Watchdog Setup Example</title>
+ <title><productname>Pgpool-II</productname> + Watchdog Setup Example</title>
<para>
This section shows an example of streaming replication configuration using <productname>Pgpool-II</productname>. In this example, we use 3 <productname>Pgpool-II</productname> servers to manage <productname>PostgreSQL</productname> servers to create a robust cluster system and avoid the single point of failure or split brain.
</para>
</para>
<table id="example-cluster-table">
- <title><productname>Pgpool-II</productname>, <productname>PostgreSQL</productname> version informations and Configuration</title>
+ <title><productname>Pgpool-II</productname>, <productname>PostgreSQL</productname> version information and Configuration</title>
<tgroup cols="5">
<thead>
<row>
<listitem>
<para>
- To use the failover and online recovery of <productname>Pgpool-II</productname>, the settings that allow SSH without passowrd to other servers (<literal>osspc16</literal> - <literal>osspc20</literal>) are necessary.
+ To use the failover and online recovery of <productname>Pgpool-II</productname>, the settings that allow SSH without password to other servers (<literal>osspc16</literal> - <literal>osspc20</literal>) are necessary.
</para>
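<para>
For illustration, a minimal sketch of the passwordless SSH setup
(run on each <productname>Pgpool-II</productname> server; the user and
host names follow this example):
</para>
<programlisting>
[osspc16]$ ssh-keygen -t rsa
[osspc16]$ ssh-copy-id postgres@osspc19
[osspc16]$ ssh-copy-id postgres@osspc20
</programlisting>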
</listitem>
<listitem>
<para>
- To allow <literal>repl</literal> user without specifying password for streaming replication and online recovery, we create the <filename>.pgpass</filename> file in <literal>postgres</literal> user's home directory and change the permisson to <literal>600</literal> on both postgreSQL servers <literal>osspc19</literal> and <literal>osspc20</literal>.
+ To allow the <literal>repl</literal> user to connect without specifying a password for streaming replication and online recovery, we create the <filename>.pgpass</filename> file in the <literal>postgres</literal> user's home directory and change the permission to <literal>600</literal> on both PostgreSQL servers <literal>osspc19</literal> and <literal>osspc20</literal>.
</para>
<programlisting>
[osspc19]$ cat /var/lib/pgsql/.pgpass
Here are the common settings on <literal>osspc16</literal>, <literal>osspc17</literal> and <literal>osspc18</literal>.
</para>
<para>
- When installing Pgpool-II from RPM, all the Pgpool-II configuration files are in <filename>/etc/pgpool-II</filename>. In this example, we copy the sample configuration file for streaming replicaton mode.
+ When installing Pgpool-II from RPM, all the Pgpool-II configuration files are in <filename>/etc/pgpool-II</filename>. In this example, we copy the sample configuration file for streaming replication mode.
</para>
<programlisting>
# cp /etc/pgpool-II/pgpool.conf.sample-stream /etc/pgpool-II/pgpool.conf
health_check_max_retries = 10
</programlisting>
<para>
- Specify the backend informations with <literal>osspc19</literal> and <literal>osspc20</literal>.
+ Specify the backend information with <literal>osspc19</literal> and <literal>osspc20</literal>.
</para>
<programlisting>
# - Backend Connection Settings -
<sect3 id="example-cluster-pgpool-config-failover">
<title>Failover configuration</title>
<para>
- Specify <varname>failover_command</varname> to execute failover.sh script. The special characters <command>%d %P %H %R</command> in failover_command are replcaed with <literal>DB node ID of the detached node</literal>, <literal>Old primary node ID</literal>, <literal>Hostname of the new master node</literal>, <literal>Database cluster directory of the new master node</literal>.
+ Specify <varname>failover_command</varname> to execute the failover.sh script. The special characters <command>%d %P %H %R</command> in failover_command are replaced with <literal>DB node ID of the detached node</literal>, <literal>Old primary node ID</literal>, <literal>Hostname of the new master node</literal>, <literal>Database cluster directory of the new master node</literal>.
</para>
<programlisting>
failover_command = '/etc/pgpool-II/failover.sh %d %P %H %R'
</programlisting>
<para>
- Create <filename>/etc/pgpool-II/failover.sh</filename>, and set the file permisson to <literal>755</literal>.
+ Create <filename>/etc/pgpool-II/failover.sh</filename>, and set the file permission to <literal>755</literal>.
</para>
<programlisting>
# vi /etc/pgpool-II/failover.sh
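#!/bin/bash
# A minimal sketch of failover.sh (hypothetical; the pg_ctl path and the
# promotion method must match your environment). The arguments map to
# %d %P %H %R in failover_command.
FAILED_NODE_ID="$1"     # %d
OLD_PRIMARY_ID="$2"     # %P
NEW_MASTER_HOST="$3"    # %H
NEW_MASTER_PGDATA="$4"  # %R

# Promote the new master only when the failed node was the primary.
if [ "$FAILED_NODE_ID" = "$OLD_PRIMARY_ID" ]; then
    ssh -T postgres@"$NEW_MASTER_HOST" \
        "pg_ctl -D $NEW_MASTER_PGDATA promote"
fi
exit 0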
host all postgres 0.0.0.0/0 md5
</programlisting>
<para>
- To use md5 authentication, we need to register the user name and password in file <filename>pool_passwd</filename>. Execute command <command>pg_md5 --md5auth --username=<user name> <password></command> to regist user name and MD5-hashed password in file <filename>pool_passwd</filename>. If <filename>pool_passwd</filename> doesn't exist yet, it will be created in the same directory as <filename>pgpool.conf</filename>.
+ To use md5 authentication, we need to register the user name and password in file <filename>pool_passwd</filename>. Execute command <command>pg_md5 --md5auth --username=<user name> <password></command> to register user name and MD5-hashed password in file <filename>pool_passwd</filename>. If <filename>pool_passwd</filename> doesn't exist yet, it will be created in the same directory as <filename>pgpool.conf</filename>.
</para>
<programlisting>
# pg_md5 --md5auth --username=pgpool <password of pgpool user>
# pcp_recovery_node -h 133.137.174.153 -p 9898 -U postgres -n 1
</programlisting>
<para>
- After executing <command>pcp_recovery_node</command> command, vertify that <literal>osspc20</literal> is started as a <productname>PostgreSQL</productname> standby server.
+ After executing <command>pcp_recovery_node</command> command, verify that <literal>osspc20</literal> is started as a <productname>PostgreSQL</productname> standby server.
</para>
<programlisting>
# psql -h 133.137.174.153 -p 9999 -U pgpool postgres
osspc18:9999 Linux osspc18 osspc18 9999 9000 7 STANDBY #osspc18 runs as STANDBY
</programlisting>
<para>
- Start <productname>Pgpool-II</productname> (<literal>osspc16</literal>) which we have stopped again, and vertify that <literal>osspc16</literal> runs as a standby.
+ Restart <productname>Pgpool-II</productname> (<literal>osspc16</literal>), which we stopped earlier, and verify that <literal>osspc16</literal> runs as a standby.
</para>
<programlisting>
[root@osspc16 ~]# systemctl start pgpool.service
<sect3 id="example-cluster-try-failover">
<title>Failover</title>
<para>
- First, use <command>psql</command> to connect to <productname>PostgreSQL</productname> via virtual IP, and verify the backend informations.
+ First, use <command>psql</command> to connect to <productname>PostgreSQL</productname> via virtual IP, and verify the backend information.
</para>
<programlisting>
# psql -h 133.137.174.153 -p 9999 -U pgpool postgres
<title>AWS Configuration Example</title>
<para>
- This tutrial explains the simple way to try "Watchdog"
+ This tutorial explains the simple way to try "Watchdog"
on <ulink url="https://aws.amazon.com/">AWS</ulink> and using
the <ulink url="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html">
Elastic IP Address</ulink> as the Virtual IP for the high availability solution.
<xref linkend="guc-delegate-ip"> which we will not set in this example instead
we will use <xref linkend="guc-wd-escalation-command"> and
<xref linkend="guc-wd-de-escalation-command"> to switch the
- Elastic IP address to the maste/Active <productname>Pgpool-II</productname> node.
+ Elastic IP address to the master/Active <productname>Pgpool-II</productname> node.
</para>
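<para>
For illustration, a minimal sketch of an escalation script that attaches
the Elastic IP to the node becoming active (the Elastic IP value is a
placeholder, and the sketch assumes the AWS CLI is installed and configured):
</para>
<programlisting>
#!/bin/bash
# escalation.sh (hypothetical sketch)
ELASTIC_IP=203.0.113.10   # placeholder Elastic IP
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 associate-address --instance-id "$INSTANCE_ID" --public-ip "$ELASTIC_IP"
</programlisting>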
<sect3 id="example-AWS-pgpool-config-instance-1">
</row>
<row>
<entry>%m</entry>
- <entry> New master node ID</entry>
+ <entry>New master node ID</entry>
</row>
<row>
<entry>%H</entry>
</row>
<row>
<entry>%P</entry>
- <entry> Old primary node ID</entry>
+ <entry>Old primary node ID</entry>
</row>
<row>
<entry>%r</entry>
Specifies a user command to run when a <productname>PostgreSQL</> backend node gets attached to
<productname>Pgpool-II</productname>. <productname>Pgpool-II</productname>
replaces the following special characters with the backend specific information
- before excuting the command.
+ before executing the command.
</para>
<table id="ffailback-command-table">
<para>
If <varname>follow_master_command</varname> is not empty, then after failover
on the primary node gets completed in Master Slave mode with streaming replication,
- <productname>Pgpool-II</productname> degenerates all nodes excepted the new primary
+ <productname>Pgpool-II</productname> degenerates all nodes except the new primary
and starts new child processes to be ready again to accept connections from the clients.
After this, <productname>Pgpool-II</productname> executes the command configured
in the <varname>follow_master_command</varname> for each degenerated backend node.
<title>Installation from RPM</title>
<para>
This chapter describes the installation
- of <productname>Pgpool-II</productname> from PRM. If you are
+ of <productname>Pgpool-II</productname> from RPM. If you are
going to install from the source code, please
check <xref linkend="install-source">.
</para>
<caution>
<para>
- <acronym>JDBC</acronym> driver postgreSQL-9.3 and earlier versions
+ <acronym>JDBC</acronym> driver PostgreSQL-9.3 and earlier versions
do not send the application name in the startup packet even if
the application name is specified using the <acronym>JDBC</acronym>
driver option <literal>"ApplicationName"</literal> and
<literal>"assumeMinServerVersion=9.0"</literal>.
So if you want to use the <xref linkend="guc-app-name-redirect-preference-list">
- feature through <acronym>JDBC</acronym>, Use postgreSQL-9.4 or later version of the driver.
+ feature through <acronym>JDBC</acronym>, use PostgreSQL-9.4 or a later version of the driver.
</para>
</caution>
</para>
<para>
- <varname>memqcache_cache_block_size</varname> must be set to atleast 512.
+ <varname>memqcache_cache_block_size</varname> must be set to at least 512.
</para>
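<para>
For example (the value is only an illustration):
</para>
<programlisting>
memqcache_cache_block_size = 1024
</programlisting>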
<para>
Specifies the relation cache expiration time in seconds.
The relation cache is used for caching the query result of
<productname>PostgreSQL</> system catalogs that is used by <productname>Pgpool-II
- </productname> to get various informations including the table
+ </productname> to get various information including the table
structures and to check table types (e.g. to check if the referred
table is a temporary table or not). The cache is maintained in
the local memory space of <productname>Pgpool-II</productname>
safely leave it as an empty string.
</para>
<para>
- Connections from cliens are not allowed only in the second stage
+ Connections from clients are not allowed only in the second stage
while the data can be updated or retrieved during the first stage.
</para>
<para>
<para>
Specifies a command to be run by the master node at the second
- stage of online recovery. This parameter only neccessary for
+ stage of online recovery. This parameter is only necessary for
<xref linkend="guc-replication-mode">. The command file must
be placed in the database cluster directory for security
reasons. For example,
if <varname>recovery_2nd_stage_command</varname> = <literal>
'sync-command'</literal>,
then <productname>Pgpool-II</productname> will look for the
- command scrit in <literal>$PGDATA</literal> directory and
+ command script in the <literal>$PGDATA</literal> directory and
will try to execute <command>$PGDATA/sync-command</command>.
</para>
<para>
<refnamediv>
<refname>pcp_detach_node</refname>
<refpurpose>
- detaches the given node from Pgpool-II. Exisiting connections to Pgpool-II are forced to be disconnected.</refpurpose>
+ detaches the given node from Pgpool-II. Existing connections to Pgpool-II are forced to be disconnected.</refpurpose>
</refnamediv>
<refsynopsisdiv>
<title>Description</title>
<para>
<command>pcp_detach_node</command>
- detaches the given node from Pgpool-II. Exisiting connections to Pgpool-II are forced to be disconnected.
+ detaches the given node from Pgpool-II. Existing connections to Pgpool-II are forced to be disconnected.
</para>
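<para>
For illustration, detaching backend node 1 (host, port and user are
placeholders):
</para>
<programlisting>
$ pcp_detach_node -h localhost -p 9898 -U postgres -n 1
</programlisting>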
</refsect1>
Index 0 gets the local watchdog node's information.
</para>
<para>
- If ommitted then gets information of all watchdog nodes.
+ If omitted, it gets information of all watchdog nodes.
</para>
</listitem>
</varlistentry>
<para>
Reload the configuration file
of <productname>Pgpool-II</productname>. No specific options
- exist for realod mode. Common options are applicable.
+ exist for reload mode. Common options are applicable.
</para>
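<para>
For example (assuming the default configuration file location):
</para>
<programlisting>
$ pgpool reload
</programlisting>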
</refsect1>
hostname, the port, the status, the weight (only meaningful if
you use the load balancing mode), the role, the SELECT query
counts issued to each backend, whether each node is the load
- bakance node or not, and the replication delay (only if in
+ balance node or not, and the replication delay (only if in
streaming replication mode). The possible values in the status
column are explained in the <xref linkend="pcp-node-info">
reference. If the hostname is something like "/tmp", that means
to <productname>PostgreSQL</productname> with the result
(3). <productname>PostgreSQL</productname> returns the result
to <productname>Pgpool-II</productname> (5)
- and <productname>Pgpool-II</productname> fowards the data to
+ and <productname>Pgpool-II</productname> forwards the data to
the user (6).
</para>
<para>
of <productname>PostgreSQL</productname>. The pcp port number
is hard coded as 9898, the pcp user name is assumed to be the same
as the caller's <productname>PostgreSQL</productname> user name.
- password is extraced from $HOME/.pcppass.
+ The password is extracted from <filename>$HOME/.pcppass</filename>.
</para>
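<para>
A <filename>.pcppass</filename> entry takes the
form <literal>hostname:port:username:password</literal>, for example:
</para>
<programlisting>
localhost:9898:postgres:secret
</programlisting>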
<sect1 id="installing-pgpool-adm">
</para>
<para>
- This status mismatch should be fixe by pcp_promote_node, but when the node
+ This status mismatch should be fixed by pcp_promote_node, but when the node
is the master node (the first alive node), it fails as mentioned above.
</para>
2016-12-28 [afebadf]
-->
<para>
- Fix authentication timeout that can occur right after client connecttions. (Yugo Nagata)
+ Fix authentication timeout that can occur right after client connections. (Yugo Nagata)
</para>
</listitem>
Change the <filename>pgpool.service</filename> and sysconfig files to output <productname>Pgpool-II</productname> log. (Bo Peng)
</para>
<para>
- Removeing "Type=forking" and add OPTS=" -n" to
+ Removing "Type=forking" and add OPTS=" -n" to
run <productname>Pgpool-II</productname> with non-daemon mode, because we need to redirect logs.
Using <command>"journalctl"</command> command to see <productname>Pgpool-II</productname> systemd log.
</para>
frontend. In most cases this is good. However if other than primary
node or master node returns an error state (this could happen if load
balance node is other than primary or master node and the query is an
- errornous SELECT), this should be returned to frontend, because the
+ erroneous SELECT), this should be returned to frontend, because the
frontend already received an error.
</para>
</listitem>
Fix bug mistakenly overriding global backend status right after failover. (Tatsuo Ishii)
</para>
<para>
- See <ulink url="http://www.sraoss.jp/pipermail/pgpool-general/2017-September/005786.html">[pgpool-general: 5728]</ulink> for mor details.
+ See <ulink url="http://www.sraoss.jp/pipermail/pgpool-general/2017-September/005786.html">[pgpool-general: 5728]</ulink> for more details.
</para>
</listitem>
2017-03-24 [dab2ff0]
-->
<para>
- Fix for 0000296: PGPool v3.6.2 terminated by systemd because the service Type has been set to 'forking'.
+ Fix for 0000296: Pgpool v3.6.2 terminated by systemd because the service Type has been set to 'forking'.
<ulink url="http://www.pgpool.net/mantisbt/view.php?id=296">(Bug 296)</ulink> (Muhammad Usama)
</para>
</listitem>
</para>
<para>
- This status mismatch should be fixe by pcp_promote_node, but when the node
+ This status mismatch should be fixed by pcp_promote_node, but when the node
is the master node (the first alive node), it fails as mentioned above.
</para>
2016-12-28 [afebadf]
-->
<para>
- Fix authentication timeout that can occur right after client connecttions. (Yugo Nagata)
+ Fix authentication timeout that can occur right after client connections. (Yugo Nagata)
</para>
</listitem>
(<ulink url="http://www.pgpool.net/mantisbt/view.php?id=487">bug 487</>) (Bo Peng)
</para>
<para>
- When all backend nodes are down, <productname>Pgpool-II</> throws an uncorrect
+ When all backend nodes are down, <productname>Pgpool-II</> throws an incorrect
error message "ERROR: connection cache is full". Change the error
message to "all backend nodes are down, pgpool requires at least one valid node".
</para>
</para>
<para>
If order of startup packet's parameters differ between cached connection
- pools and connection request, did't use connection pool ,and created new
+ pools and connection request, didn't use connection pool, and created new
connection pool.
</para>
</listitem>
</para>
<para>
In an explicit transaction, the <literal>SELECT</literal> results are cached in a temporary buffer.
- If a write <acronym>SQL</acronym> is sent which modifies the table, the temporary buffe should be reset.
+ If a write <acronym>SQL</acronym> is sent which modifies the table, the temporary buffer should be reset.
</para>
</listitem>
Change the <filename>pgpool.service</filename> and sysconfig files to output <productname>Pgpool-II</productname> log. (Bo Peng)
</para>
<para>
- Removeing "Type=forking" and add OPTS=" -n" to
+ Removing "Type=forking" and add OPTS=" -n" to
run <productname>Pgpool-II</productname> with non-daemon mode, because we need to redirect logs.
Using <command>"journalctl"</command> command to see <productname>Pgpool-II</productname> systemd log.
</para>
frontend. In most cases this is good. However if other than primary
node or master node returns an error state (this could happen if load
balance node is other than primary or master node and the query is an
- errornous SELECT), this should be returned to frontend, because the
+ erroneous SELECT), this should be returned to frontend, because the
frontend already received an error.
</para>
</listitem>
Fix bug mistakenly overriding global backend status right after failover. (Tatsuo Ishii)
</para>
<para>
- See <ulink url="http://www.sraoss.jp/pipermail/pgpool-general/2017-September/005786.html">[pgpool-general: 5728]</ulink> for mor details.
+ See <ulink url="http://www.sraoss.jp/pipermail/pgpool-general/2017-September/005786.html">[pgpool-general: 5728]</ulink> for more details.
</para>
</listitem>
2017-03-24 [1172e6c]
-->
<para>
- Fix for 0000296: PGPool v3.6.2 terminated by systemd because the service Type has been set to 'forking'.
+ Fix for 0000296: Pgpool v3.6.2 terminated by systemd because the service Type has been set to 'forking'.
<ulink url="http://www.pgpool.net/mantisbt/view.php?id=296">(Bug 296)</ulink> (Muhammad Usama)
</para>
</listitem>
</para>
<para>
- This status mismatch should be fixe by pcp_promote_node, but when the node
+ This status mismatch should be fixed by pcp_promote_node, but when the node
is the master node (the first alive node), it fails as mentioned above.
</para>
2016-12-28 [afebadf]
-->
<para>
- Fix authentication timeout that can occur right after client connecttions. (Yugo Nagata)
+ Fix authentication timeout that can occur right after client connections. (Yugo Nagata)
</para>
</listitem>
2020-01-28 [d60e6b7]
-->
<para>
- Check if socket file exists at startup and remove them if PID file doesn't exist to avoid bind() failire. (Bo Peng)
+ Check if socket files exist at startup and remove them if the PID file doesn't exist, to avoid bind() failure. (Bo Peng)
</para>
</listitem>
</itemizedlist>
2019-07-03 [380d8a5]
-->
<para>
- Fix sefault when query cache is enabled. (<ulink url="https://www.pgpool.net/mantisbt/view.php?id=525">bug 525</ulink>) (Tatsuo Ishii)
+ Fix segfault when query cache is enabled. (<ulink url="https://www.pgpool.net/mantisbt/view.php?id=525">bug 525</ulink>) (Tatsuo Ishii)
</para>
</listitem>
(<ulink url="http://www.pgpool.net/mantisbt/view.php?id=487">bug 487</>) (Bo Peng)
</para>
<para>
- When all backend nodes are down, <productname>Pgpool-II</> throws an uncorrect
+ When all backend nodes are down, <productname>Pgpool-II</> throws an incorrect
error message "ERROR: connection cache is full". Change the error
message to "all backend nodes are down, pgpool requires at least one valid node".
</para>
</para>
<para>
If order of startup packet's parameters differ between cached connection
- pools and connection request, did't use connection pool ,and created new
+ pools and connection request, didn't use connection pool, and created new
connection pool.
</para>
</listitem>
</para>
<para>
In an explicit transaction, the <literal>SELECT</literal> results are cached in a temporary buffer.
- If a write <acronym>SQL</acronym> is sent which modifies the table, the temporary buffe should be reset.
+ If a write <acronym>SQL</acronym> is sent which modifies the table, the temporary buffer should be reset.
</para>
</listitem>
Change the <filename>pgpool.service</filename> and sysconfig files to output <productname>Pgpool-II</productname> log. (Bo Peng)
</para>
<para>
- Removeing "Type=forking" and add OPTS=" -n" to
+ Removing "Type=forking" and add OPTS=" -n" to
run <productname>Pgpool-II</productname> with non-daemon mode, because we need to redirect logs.
Using <command>"journalctl"</command> command to see <productname>Pgpool-II</productname> systemd log.
</para>
frontend. In most cases this is good. However if other than primary
node or master node returns an error state (this could happen if load
balance node is other than primary or master node and the query is an
- errornous SELECT), this should be returned to frontend, because the
+ erroneous SELECT), this should be returned to frontend, because the
frontend already received an error.
</para>
</listitem>
Fix bug mistakenly overriding global backend status right after failover. (Tatsuo Ishii)
</para>
<para>
- See <ulink url="http://www.sraoss.jp/pipermail/pgpool-general/2017-September/005786.html">[pgpool-general: 5728]</ulink> for mor details.
+ See <ulink url="http://www.sraoss.jp/pipermail/pgpool-general/2017-September/005786.html">[pgpool-general: 5728]</ulink> for more details.
</para>
</listitem>
2017-09-01 [ad90886]
-->
<para>
- Fix <varname>wd_authkey</varname> bug in that a request to add new node to the clusetr is rejected by master. (Yugo Nagata)
+ Fix <varname>wd_authkey</varname> bug in that a request to add a new node to the cluster is rejected by master. (Yugo Nagata)
</para>
<para>
This is a bug due to the implementation of 3.5.6 and 3.6.3.
2017-06-09 [b86e7b7]
-->
<para>
- Fix a posible hang with streaming replication and extended protocol (Yugo Nagata)
+ Fix a possible hang with streaming replication and extended protocol (Yugo Nagata)
</para>
<para>
- This hang occured under a certain condition. The following is an example.
+ This hang occurred under a certain condition. The following is an example.
</para>
<programlisting>
</programlisting>
<para>
- Without using <productname>JDBC</productname>, we can reproduce the problem by <productname>pgproto</productname> with the followeing messages.
+ Without using <productname>JDBC</productname>, we can reproduce the problem by <productname>pgproto</productname> with the following messages.
</para>
<programlisting>
2017-03-24 [d726c3a]
-->
<para>
- Fix for 0000296: PGPool v3.6.2 terminated by systemd because the service Type has been set to 'forking'.
+ Fix for 0000296: Pgpool v3.6.2 terminated by systemd because the service Type has been set to 'forking'.
<ulink url="http://www.pgpool.net/mantisbt/view.php?id=296">(Bug 296)</ulink> (Muhammad Usama)
</para>
</listitem>
</para>
<para>
- This status mismatch should be fixe by pcp_promote_node, but when the node
+ This status mismatch should be fixed by pcp_promote_node, but when the node
is the master node (the first alive node), it fails as mentioned above.
</para>
2016-12-28 [afebadf]
-->
<para>
- Fix authentication timeout that can occur right after client connecttions. (Yugo Nagata)
+ Fix authentication timeout that can occur right after client connections. (Yugo Nagata)
</para>
</listitem>
</listitem>
<listitem>
<para>
- Pgool-II forward it to backend 1
+ Pgpool-II forwards it to backend 1
</para>
</listitem>
<listitem>
</listitem>
<listitem>
<para>
- Pgool-II forward it to backend 0 & 1
+ Pgpool-II forwards it to backend 0 & 1
</para>
</listitem>
<listitem>
</listitem>
<listitem>
<para>
- Pgool-II forward it to backend 0 & 1
+ Pgpool-II forwards it to backend 0 & 1
</para>
</listitem>
<listitem>
2020-01-28 [d60e6b7]
-->
<para>
- Check if socket file exists at startup and remove them if PID file doesn't exist to avoid bind() failire. (Bo Peng)
+ Check if socket files exist at startup and remove them if the PID file doesn't exist, to avoid bind() failure. (Bo Peng)
</para>
</listitem>
</itemizedlist>
2019-07-03 [380d8a5]
-->
<para>
- Fix sefault when query cache is enabled. (<ulink url="https://www.pgpool.net/mantisbt/view.php?id=525">bug 525</ulink>) (Tatsuo Ishii)
+ Fix segfault when query cache is enabled. (<ulink url="https://www.pgpool.net/mantisbt/view.php?id=525">bug 525</ulink>) (Tatsuo Ishii)
</para>
</listitem>
(<ulink url="http://www.pgpool.net/mantisbt/view.php?id=487">bug 487</>) (Bo Peng)
</para>
<para>
- When all backend nodes are down, <productname>Pgpool-II</> throws an uncorrect
+ When all backend nodes are down, <productname>Pgpool-II</> throws an incorrect
error message "ERROR: connection cache is full". Change the error
message to "all backend nodes are down, pgpool requires at least one valid node".
</para>
</para>
<para>
If order of startup packet's parameters differ between cached connection
- pools and connection request, did't use connection pool ,and created new
+ pools and connection request, didn't use connection pool, and created new
connection pool.
</para>
</listitem>
</para>
<para>
In an explicit transaction, the <literal>SELECT</literal> results are cached in a temporary buffer.
- If a write <acronym>SQL</acronym> is sent which modifies the table, the temporary buffe should be reset.
+ If a write <acronym>SQL</acronym> is sent which modifies the table, the temporary buffer should be reset.
</para>
</listitem>
Change the <filename>pgpool.service</filename> and sysconfig files to output <productname>Pgpool-II</productname> log. (Bo Peng)
</para>
<para>
- Removeing "Type=forking" and add OPTS=" -n" to
+ Removing "Type=forking" and add OPTS=" -n" to
run <productname>Pgpool-II</productname> with non-daemon mode, because we need to redirect logs.
Using <command>"journalctl"</command> command to see <productname>Pgpool-II</productname> systemd log.
</para>
frontend. In most cases this is good. However if other than primary
node or master node returns an error state (this could happen if load
balance node is other than primary or master node and the query is an
- errornous SELECT), this should be returned to frontend, because the
+ erroneous SELECT), this should be returned to frontend, because the
frontend already received an error.
</para>
</listitem>
Fix bug mistakenly overriding global backend status right after failover. (Tatsuo Ishii)
</para>
<para>
- See <ulink url="http://www.sraoss.jp/pipermail/pgpool-general/2017-September/005786.html">[pgpool-general: 5728]</ulink> for mor details.
+ See <ulink url="http://www.sraoss.jp/pipermail/pgpool-general/2017-September/005786.html">[pgpool-general: 5728]</ulink> for more details.
</para>
</listitem>
2017-09-01 [b661a8b]
-->
<para>
- Fix <varname>wd_authkey</varname> bug in that a request to add new node to the clusetr is rejected by master. (Yugo Nagata)
+ Fix <varname>wd_authkey</varname> bug in that a request to add a new node to the cluster is rejected by master. (Yugo Nagata)
</para>
<para>
This is a bug due to the implementation of 3.5.6 and 3.6.3.
2017-08-21 [1812a84]
-->
<para>
- Doc: Add new English and Japanese documents of <link linkend="example-cluster">Pgpoo-II + Watchdog Setup Example</link>. (Bo Peng)
+ Doc: Add new English and Japanese documents of <link linkend="example-cluster">Pgpool-II + Watchdog Setup Example</link>. (Bo Peng)
</para>
</listitem>
2017-06-09 [d9b0b83]
-->
<para>
- Fix a posible hang with streaming replication and extended protocol. (Yugo Nagata)
+ Fix a possible hang with streaming replication and extended protocol. (Yugo Nagata)
</para>
<para>
- This hang occured under a certain condition. The following is an example.
+ This hang occurred under a certain condition. The following is an example.
</para>
<programlisting>
</programlisting>
<para>
- Without using <productname>JDBC</productname>, we can reproduce the problem by <productname>pgproto</productname> with the followeing messages.
+ Without using <productname>JDBC</productname>, we can reproduce the problem by <productname>pgproto</productname> with the following messages.
</para>
<programlisting>
2017-04-14 [50fb9a4]
-->
<para>
- Removing the function defined but not used warnings from pool_config_vatiable.c (Muhammad Usama)
+ Removing the function defined but not used warnings from pool_config_variable.c (Muhammad Usama)
</para>
</listitem>
2017-03-24 [c2a0cc5]
-->
<para>
- Fix for 0000296: PGPool v3.6.2 terminated by systemd because the service Type has been set to 'forking'.
+ Fix for 0000296: Pgpool v3.6.2 terminated by systemd because the service Type has been set to 'forking'.
<ulink url="http://www.pgpool.net/mantisbt/view.php?id=296">(Bug 296)</ulink> (Muhammad Usama)
</para>
</listitem>
</para>
<para>
- This status mismatch should be fixe by pcp_promote_node, but when the node
+ This status mismatch should be fixed by pcp_promote_node, but when the node
is the master node (the first alive node), it fails as mentioned above.
</para>
2016-12-28 [afebadf]
-->
<para>
- Fix authentication timeout that can occur right after client connecttions. (Yugo Nagata)
+ Fix authentication timeout that can occur right after client connections. (Yugo Nagata)
</para>
</listitem>
</listitem>
<listitem>
<para>
- Pgool-II forward it to backend 1
+ Pgpool-II forwards it to backend 1
</para>
</listitem>
<listitem>
</listitem>
<listitem>
<para>
- Pgool-II forward it to backend 0 & 1
+ Pgpool-II forwards it to backend 0 & 1
</para>
</listitem>
<listitem>
</listitem>
<listitem>
<para>
- Pgool-II forward it to backend 0 & 1
+ Pgpool-II forwards it to backend 0 & 1
</para>
</listitem>
<listitem>
restart. Watchdog not syncing status.
-->
<para>
- Sync inconsitent status
+ Sync inconsistent status
of <productname>PostgreSQL</productname> nodes
in <productname>Pgpool-II</productname> instances after
restart. (bug 218) (Muhammad Usama)
2016-08-18 [d3211dc] Let watchdog_setup to be installed.
-->
<para>
- Add new script called "watchdog_setup". (Tatstuo Ishii)
+ Add new script called "watchdog_setup". (Tatsuo Ishii)
</para>
<para>
<xref linkend="WATCHDOG-SETUP"> is a command to create a
Do not update status file if all backend nodes are in down status. (Chris Pacejo, Tatsuo Ishii)
</para>
<para>
- This commit tries to remove the data inconsitency in
+ This commit tries to remove the data inconsistency in
replication mode found
in <ulink url="http://www.pgpool.net/pipermail/pgpool-general/2015-August/003974.html">[pgpool-general:
3918]</ulink> by not recording the status file when all
-->
<para>
Change the default value of
- <xref linkend="guc-search-primary-node-timeout"> from 10 to 300. (Tatstuo Ishii)
+ <xref linkend="guc-search-primary-node-timeout"> from 10 to 300. (Tatsuo Ishii)
</para>
<para>
The prior default value of 10 seconds is sometimes too short for a standby to
</listitem>
<listitem>
<!--
- 2016-08-01 [024eaea] Fix for 215: pgpool doesnt escalate ip in case of another node inavailability
+ 2016-08-01 [024eaea] Fix for 215: pgpool doesnt escalate ip in case of another node unavailability
-->
<para>
- Fix <productname>Pgpool-II</productname> doesn't escalate ip in case of another node inavailability. (bug 215) (Muhammad Usama)
+ Fix <productname>Pgpool-II</productname> doesn't escalate ip in case of another node unavailability. (bug 215) (Muhammad Usama)
</para>
<para>
The heartbeat receiver fails to identify the heartbeat sender watchdog node when
</listitem>
<listitem>
<!--
- 2016-06-08 [294cf4a] Fix a posible hang during health checking
+ 2016-06-08 [294cf4a] Fix a possible hang during health checking
-->
<para>
- Fix a posible hang during health checking. (bug 204) (Yugo Nagata)
+ Fix a possible hang during health checking. (bug 204) (Yugo Nagata)
</para>
<para>
- Helath checking was hang when any data wasn't sent
+ Health checking hung when no data was sent
from backend after <function>connect(2)</function> succeeded. To fix this,
<function>pool_check_fd()</function> returns 1 when <function>select(2)</function> exits with
- EINTR due to SIGALRM while health checkking is performed.
+ EINTR due to SIGALRM while health checking is performed.
</para>
</listitem>
<listitem>
</listitem>
<listitem>
<!--
- 2016-05-11 [de905f6] Fix documetation bug about raw mode
+ 2016-05-11 [de905f6] Fix documentation bug about raw mode
-->
<para>
- Fix Japanese and Chinese documetation bug about raw mode. (Yugo Nagata, Bo Peng)
+ Fix Japanese and Chinese documentation bug about raw mode. (Yugo Nagata, Bo Peng)
</para>
<para>
- Connection pool is avalilable in raw mode.
+ Connection pool is available in raw mode.
</para>
</listitem>
<listitem>
Fix pgpool hung after receiving error state from backend. (bug #169) (Tatsuo Ishii)
</para>
<para>
- This could happend if we execute an extended protocol query and it
+ This could happen if we execute an extended protocol query and it
fails.
</para>
</listitem>
2016-03-03 [bb295a2] Fix query stack problems in extended protocol case.
-->
<para>
- Fix query stack problems in extended protocol case. (bug 167, 168) (Tatstuo Ishii)
+ Fix query stack problems in extended protocol case. (bug 167, 168) (Tatsuo Ishii)
</para>
<para>
</para>
</listitem>
<listitem>
<!--
- 2016-02-03 [48e9d4b] Fix for [pgpool-II 0000166]: compile issue on freebsd
+ 2016-02-03 [48e9d4b] Fix for [pgpool-II 0000166]: compile issue on FreeBSD
-->
<para>
- Fix compile issue on freebsd. (Muhammad Usama)
+ Fix compile issue on FreeBSD. (Muhammad Usama)
</para>
<para>
Add missing include files. The patch is contributed by
</para>
<para>
Sometimes <xref linkend="guc-wd-authkey"> calculation fails for some reason other than
- authkey mismatch. The additional messages make these distingushable
+ authkey mismatch. The additional messages make these distinguishable
from each other.
</para>
</listitem>
replication mode, master slave mode, native replication mode and
raw mode. In any mode, <productname>Pgpool-II</> provides connection pooling,
automatic fail over and online recovery. The sample configuration
- files for each mode are provied. They are located
+ files for each mode are provided. They are located
under <filename>$prefix/etc</filename>. You can copy one of them
to <filename>$prefix/etc/pgpool.conf</filename>.
</para>
<para>
The master slave mode can be used with <productname>PostgreSQL</> servers
operating <productname>Slony</>. In this mode, <productname>Slony</>/<productname>PostgreSQL</> is responsible for
- synchronizing databases. Since <productname>Slony</> is being obsoletd by streaming
+ synchronizing databases. Since <productname>Slony</> is being obsoleted by streaming
replication, we do not recommend using this mode unless you have a
specific reason to use <productname>Slony</>. Load balancing is possible in the
Load balancing is possible in the mode. The sample configuration
<para>
In the raw mode, <productname>Pgpool-II</> does not care about the database
synchronization. It's the user's responsibility to make the whole
- system does a meaningfull thing. Load balancing
+ system do something meaningful. Load balancing
is <emphasis>not</emphasis> possible in the mode. The sample
configuration
file <filename>$prefix/etc/pgpool.conf.sample</filename>.
<!-- doc/src/sgml/config.sgml -->
<sect1 id="runtime-ssl">
- <title>Secure Sockect Layer (SSL)</title>
+ <title>Secure Sockets Layer (SSL)</title>
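<para>
For illustration, a minimal sketch of the SSL settings (the key and
certificate paths are placeholders):
</para>
<programlisting>
ssl = on
ssl_key = '/path/to/server.key'
ssl_cert = '/path/to/server.crt'
</programlisting>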
<sect2 id="runtime-config-ssl-settings">
</para>
<para>
<figure>
- <title>Process architecure of <productname>Pgpool-II</productname></title>
+ <title>Process architecture of <productname>Pgpool-II</productname></title>
<mediaobject>
<imageobject>
<imagedata fileref="process-diagram.gif">
</para>
<para>
- This parameter can be changed by reloading the <productname>Pgpool-II conf</>igurations.
+ This parameter can be changed by reloading the <productname>Pgpool-II</> configurations.
</para>
</listitem>
</sect2>
<sect2 id="config-watchdog-escalation-de-escalation">
- <title>Behaivor on escalation and de-escalation</title>
+ <title>Behavior on escalation and de-escalation</title>
<para>
Configuration about behavior when <productname>Pgpool-II</productname>
<listitem>
<para>
When set to on, watchdog clears all the query cache in the shared memory
- when pgpool-II escaltes to active. This prevents the new active <productname>Pgpool-II</productname>
- from using old query caches inconsistence to the old active.
+ when <productname>Pgpool-II</productname> escalates to active. This prevents the new active <productname>Pgpool-II</productname>
+ from using old query caches inconsistent with the old active.
</para>
<para>
Default is on.
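<para>
For example:
</para>
<programlisting>
clear_memqcache_on_escalation = on
</programlisting>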