
Scenario: There is a table X. When a new request comes in, a SELECT checks whether the record already exists; if it is not found, the data is inserted into the table with an INSERT. Once the INSERT happens, a trigger fires on table X.

Issue: The INSERT takes 10 seconds. When the SELECT is fired (after 5 seconds), the earlier INSERT has not yet completed, so the record is not found and duplicate records get inserted. Moreover, the trigger also fires again for each duplicate.

How can this issue be resolved? Any suggestions to overcome this situation?


2 Answers


Using a SELECT statement before inserting new rows in order to prevent duplicates is never going to work properly under concurrency (at least not with acceptable performance).

Create a unique key or constraint on your table that prevents inserting duplicate values and handle any error that occurs.

If you do so, you can also use insert ... on conflict do update ..., which is safe to use with concurrent inserts.
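A minimal sketch of this approach, assuming an illustrative table definition (the column names here are hypothetical, since the original table X is not shown):

```sql
-- Hypothetical stand-in for table X: the unique key (here the
-- PRIMARY KEY on request_id) makes the database itself reject
-- duplicate rows, regardless of timing between sessions.
CREATE TABLE x (
    request_id text PRIMARY KEY,
    payload    text,
    hits       integer NOT NULL DEFAULT 1
);

-- Safe under concurrent inserts: no prior SELECT is needed.
-- If the row already exists, the update branch runs instead,
-- so only one row per request_id ever exists and the insert
-- trigger fires at most once per key.
INSERT INTO x (request_id, payload)
VALUES ('req-1', 'data')
ON CONFLICT (request_id)
DO UPDATE SET hits = x.hits + 1;
```

Whether the conflict branch should update or simply do nothing (`ON CONFLICT DO NOTHING`) depends on whether repeated requests carry new information.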


1 Comment

Using the strategy mentioned above, I created an intermediate table (with a unique key constraint and a flag); once data is inserted into the main table, the flag in the intermediate table is changed and the check is done. When the same request comes again, the record is not inserted into the intermediate table due to the unique constraint, so no insertion takes place in the main table.

I think you need to look at 13.2. Transaction Isolation in the Postgres manual. The default READ COMMITTED level lets concurrent transactions read and write independently of each other, which is exactly why the second SELECT does not see the uncommitted INSERT. You would need to change the isolation behaviour so that conflicting transactions block or fail, but the right choice depends on how the app should behave, so please read up and then decide.
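For illustration, a sketch of the check-then-insert run at a stricter isolation level (table and column names are assumptions, matching none shown in the question):

```sql
-- Under SERIALIZABLE, two concurrent transactions both doing this
-- check-then-insert cannot both succeed: one will abort with a
-- serialization failure (SQLSTATE 40001) and must be retried.
BEGIN ISOLATION LEVEL SERIALIZABLE;

INSERT INTO x (request_id, payload)
SELECT 'req-1', 'data'
WHERE NOT EXISTS (
    SELECT 1 FROM x WHERE request_id = 'req-1'
);

COMMIT;
```

Note that this trades the duplicate-row problem for retry logic in the application, whereas a unique constraint with ON CONFLICT avoids both.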

