
For context, this issue occurred in a Go program I am writing using the default postgres database driver.

I have been building a service to talk to a postgres database which has a table similar to the one listed below:

CREATE TABLE object (
    id SERIAL PRIMARY KEY NOT NULL,
    name VARCHAR(255) UNIQUE,
    some_other_id BIGINT UNIQUE
    ...
);

I have created some endpoints for this item, including an "Install" endpoint that effectively acts as an upsert, like so:

INSERT INTO object (name, some_other_id)
VALUES ($1, $2)
ON CONFLICT (name) DO UPDATE SET
    some_other_id = COALESCE(NULLIF($2, 0), object.some_other_id)

I also have an "Update" endpoint with an underlying query like so:

UPDATE object
SET some_other_id = COALESCE(NULLIF($2, 0), object.some_other_id)
WHERE name = $1

The problem:

Whenever I run the update query, I always run into the following error referencing the field "some_other_id":

pq: value "1010101010144" is out of range for type integer

However, this error never occurs on the "upsert" version of the query, even when the row already exists in the database (i.e. when it has to evaluate the COALESCE expression). I have been able to prevent this error by updating the COALESCE expression to the following:

COALESCE(NULLIF($2, CAST(0 AS BIGINT)), object.some_other_id)

But as it never occurs with the first query, I wondered whether this inconsistency comes from me doing something wrong or something that I don't understand. And what is the best practice here: should I be casting all values?

I am definitely passing in a 64 bit integer to the query for "some_other_id", and the first query works with the Go implementation even without the explicit type cast.

If any more information (or Go implementation) is required then please let me know, many thanks in advance! (:

Edit:

To eliminate confusion, the queries are being executed directly in Go code like so:

res, err := s.db.ExecContext(ctx, `UPDATE object SET some_other_id = COALESCE(NULLIF($2, 0), object.some_other_id) WHERE name = $1`,
    "a name",
    1010101010144,
)

Both queries are executed in exactly the same way.

Edit: Also corrected parameter (from $51 to $2) in my current workaround.

I would also like to take this opportunity to note that the query does work with my proposed fix, which suggests that the issue is me confusing postgres with types in the NULLIF expression? There is no stored procedure asking for an INTEGER arg between my code and the database, at least none that I have written.

  • Try ... NULLIF(EXCLUDED.some_other_id, 0) .... Maybe you are not allowed to place two references to $2? See the documentation of INSERT for the special "EXCLUDED" table. Commented Jan 5, 2022 at 0:24
  • "However this error never occurs on the "upsert" version of the query, even when the row already exists in the database (when it has to evaluate the COALESCE statement)." -- in those cases where the COALESCE(NULLIF is evaluated, is $2 or object.some_other_id a value greater than one can fit in int4? Commented Jan 5, 2022 at 10:32
  • 1
    In case it's not already clear, this has nothing to with Go and everything to do with how postgres infers types for the parameter placeholders. My assumption is that your INSERT query doesn't fail because it is clear from (name,some_other_id) VALUES ($1,$2) that $2 should have the same type as the target some_other_id column, which is int8. This type information is then also used in NULLIF expression of the DO UPDATE SET part of the query. You can test this assumption by using (name) VALUES ($1) in the insert. Commented Jan 6, 2022 at 4:21
  • 1
    You'll find then that the NULLIF in DO UPDATE SET will fail the same way as it does in the UPDATE query. And the UPDATE query fails because there is no direct target column that can be used to infer the type for $2. Instead the NULLIF expression is used, specifically the second argument, i.e. 0, which is of type int4, is used to infer the type of the first argument, i.e. $2. Commented Jan 6, 2022 at 4:25
  • 1
    So... no, you are not confusing postgres, but also your queries are not expressive enough about the types of the parameters. To avoid this issue, you should use an explicit type cast with the parameter placeholders where the type cannot be inferred accurately. i.e. use NULLIF($2::int8, 0) Commented Jan 6, 2022 at 4:33

2 Answers


This has to do with how the postgres parser resolves types for the parameters. I don't know how exactly it's implemented, but given the observed behaviour, I would assume that the INSERT query doesn't fail because it is clear from (name,some_other_id) VALUES ($1,$2) that the $2 parameter should have the same type as the target some_other_id column, which is of type int8. This type information is then also used in the NULLIF expression of the DO UPDATE SET part of the query.

You can also test this assumption by using (name) VALUES ($1) in the INSERT and you'll see that the NULLIF expression in DO UPDATE SET will then fail the same way as it does in the UPDATE query.

So the UPDATE query fails because there is not enough context for the parser to infer the accurate type of the $2 parameter. The "closest" thing that the parser can use to infer the type of $2 is the NULLIF call expression, specifically it uses the type of the second argument of the call expression, i.e. 0, which is of type int4, and it then uses that type information for the first argument, i.e. $2.

To avoid this issue, you should use an explicit type cast with any parameter where the type cannot be inferred accurately. i.e. use NULLIF($2::int8, 0).


COALESCE(NULLIF($51, CAST(0 AS BIGINT)), object.some_other_id)

Fifty-one? Really?

pq: value "1010101010144" is out of range for type integer

Pay attention: the data type in the error message is integer, not bigint.

I think the reason for the error lies outside the code shown. So I take out a magic crystal ball and make a pass with my hands.

an "Install" endpoint which effectively acts as an upsert function like so

I also have an "Update" endpoint

Are your "endpoints" PostgreSQL functions (stored procedures)? I think so. Also, $1 and $2 look like PostgreSQL function arguments.

The magic crystal ball says: you have two PostgreSQL functions with different argument data types:

  1. The "Install" endpoint has the $2 function argument as a bigint data type. It looks like CREATE FUNCTION Install(VARCHAR(255), bigint).

  2. The "Update" endpoint has the $2 function argument as an integer data type, not bigint. It looks like CREATE FUNCTION Update(VARCHAR(255), integer).

Finally, I would rewrite your condition in a more understandable way:

UPDATE object
SET some_other_id = CASE
        WHEN $2 = 0 THEN object.some_other_id
        ELSE $2
    END
WHERE name = $1

4 Comments

  • No, sorry for the confusion. The endpoint is part of the Go program I mentioned: a REST API endpoint that takes a JSON input and writes the object to the database. There are no SQL functions, and I am not mistaking an endpoint for a stored procedure. The queries above are exactly as they are written in my Go code; I will shortly update the question as such.
  • Maybe the Go code in the "Update" endpoint is using an integer data type, but the "Install" endpoint is using bigint?
  • I wondered if it might be something like this (if I understand correctly). I have tried explicitly typing/casting the values as they are put into the query as args as well, and double checked that the queries take the same input, but with no such luck ): Also thanks for your time on this! (:
  • "I have tried explicitly typing/casting the value": then I think you should look at the declaration of the endpoint.
