
I'm migrating my Postgres database and am attempting to update a string value to a numeric value, like this:

```sql
UPDATE table SET column = 1 WHERE LENGTH(column) = 1;
```

This table contains around 20 million rows, and the update has been taking forever to run. I have an index on LENGTH(column) as well as 4 other indexes on different columns, one of which is a UNIQUE index on 2 columns. There's also a foreign key constraint on this table.

What could I do to speed this query up? If more information is needed, I'd be happy to provide it.

Depending on the expected percentage of rows updated and the current statistics, it may even be beneficial to drop the index on the target column (i.e., the index on LENGTH(column)). However, this could only be determined with EXPLAIN ANALYZE, which would seem to make it a moot point, since EXPLAIN ANALYZE actually runs the query. Since this is a migration, it should be a one-time event anyway (at least in user acceptance testing and production environments). Commented Aug 29, 2021 at 20:17
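For reference, a sketch of checking the plan with EXPLAIN ANALYZE (table and column names are placeholders). As the comment notes, for an UPDATE this actually executes the statement, so wrapping it in a transaction and rolling back avoids the side effects:

```sql
BEGIN;
-- ANALYZE runs the statement and reports actual row counts and timings;
-- BUFFERS adds shared-buffer hit/read statistics.
EXPLAIN (ANALYZE, BUFFERS)
UPDATE some_table SET some_column = 1 WHERE LENGTH(some_column) = 1;
-- Discard the changes made while measuring.
ROLLBACK;
```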

1 Answer


Dropping the constraints that affect the column, and all indexes except the one that supports the WHERE condition, will speed up such an UPDATE.
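As a sketch, assuming placeholder names for the table, column, indexes, and constraints (all hypothetical; substitute your actual object names from \d some_table), the sequence could look like:

```sql
-- Drop the indexes and constraints that would otherwise be maintained
-- for every updated row (names here are hypothetical placeholders).
DROP INDEX some_other_index;
ALTER TABLE some_table DROP CONSTRAINT some_unique_constraint;
ALTER TABLE some_table DROP CONSTRAINT some_fkey;

-- Keep the index on LENGTH(some_column): it supports the WHERE clause.
UPDATE some_table SET some_column = 1 WHERE LENGTH(some_column) = 1;

-- Recreate what was dropped, using the original definitions.
ALTER TABLE some_table ADD CONSTRAINT some_unique_constraint UNIQUE (col_a, col_b);
ALTER TABLE some_table ADD CONSTRAINT some_fkey
    FOREIGN KEY (col_c) REFERENCES other_table (id);
CREATE INDEX some_other_index ON some_table (col_d);
```

Recreating the unique constraint and foreign key at the end will scan the table to validate them, but one bulk validation is typically far cheaper than maintaining them row by row during the UPDATE.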

You can also get a small performance gain from increasing max_wal_size.
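For example (the '4GB' value is only an illustration; tune it for your system), max_wal_size can be raised without a server restart:

```sql
-- Requires superuser; writes the setting to postgresql.auto.conf.
ALTER SYSTEM SET max_wal_size = '4GB';
-- max_wal_size only needs a config reload, not a restart.
SELECT pg_reload_conf();
```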

Other than that, you just have to wait it out.
