
I'm a rookie in this topic; all I have ever done is make a connection to the database for a single user, so I'm not familiar with giving multiple users access to a database.

My case is this: 10 facilities will use my program to record when workers arrive and leave. The database will be on the main server, and I created only one database user while I was programming/testing the program. My question is: can multiple remote locations connect to the database with that one user? There should be no collisions, because they are all writing different data, but to the same tables. And if that is not the case, what should I do?

1 Answer

Good relational databases handle this quite well; it is the “I” in the so-called ACID properties of transactions in relational databases, and it stands for isolation.

Concurrent processes are protected from simultaneously writing the same table row by locks that block other transactions until one transaction is done writing.

Readers are protected from concurrent writing by means of multiversion concurrency control (MVCC), which keeps old versions of the data around to serve readers without blocking anybody.

If you have enclosed all data modifications that belong together in a transaction, so that they happen atomically (the “A” in ACID), and your transactions are simple and short, your application will probably work just fine.
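
As a minimal sketch of what that means in code, here is one way to group the modifications that belong together into a single transaction. This is written in Python with psycopg2 purely as an assumption (the question doesn't say which language or driver is used), and the connection string, table, and column names are made up for this example:

    import psycopg2

    # Hypothetical connection string and schema, purely for illustration.
    conn = psycopg2.connect("dbname=attendance user=app_user host=mainserver")
    worker_id, facility_id = 42, 7  # example values

    try:
        with conn.cursor() as cur:
            # psycopg2 opens a transaction implicitly with the first statement.
            cur.execute(
                "INSERT INTO attendance (worker_id, facility_id, arrived_at) "
                "VALUES (%s, %s, now())",
                (worker_id, facility_id),
            )
            cur.execute(
                "UPDATE workers SET on_site = TRUE WHERE worker_id = %s",
                (worker_id,),
            )
        conn.commit()    # both changes become visible together, or not at all
    except Exception:
        conn.rollback()  # undo everything if any statement failed
        raise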

Problems may arise if these conditions are not satisfied:

  • If your data modifications are not protected by transactions, a concurrent session may see intermediate, incomplete results of a different session and thus work with inconsistent data.

  • If your transactions are complicated, later statements inside a transaction may rely on the results of previous statements in indirect ways. Such assumptions can be broken by concurrent activity that modifies the data in between. There are three approaches to that (each is sketched after this list):

    • Pessimistic locking: lock all data the first time you use them with something like SELECT ... FOR UPDATE so that nobody can modify them until your transaction is done.

    • Optimistic locking: don't lock, but whenever you access the data a second time, check that nobody else has modified them in the meantime. If they have been modified, roll the transaction back and try again.

    • Use high transaction isolation levels like REPEATABLE READ and SERIALIZABLE, which give stronger guarantees that the data you are using won't be modified concurrently. You have to be prepared to receive serialization errors if the database cannot keep those guarantees, in which case you have to roll the transaction back and retry it.

    These techniques achieve the same goal in different ways. A discussion of when to use which one is beyond the scope of this answer.

  • If your transactions are complicated and/or take a long time (long transactions are to be avoided as much as possible, because they cause all kinds of problems in a database), you may encounter a deadlock, which is two transactions locking each other in a kind of “deadly embrace”. The database will detect this condition and interrupt one of the transactions with an error.

    There are two ways to deal with that (see the deadlock sketch after this list):

    • Avoid deadlocks by always locking resources in a certain order (e.g., always update the account with the lower account number first).

    • When you encounter a deadlock, your code has to retry the transaction.

    Contrary to common belief, a deadlock is not necessarily a bug.
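
To make the three approaches concrete, here are minimal sketches, again in Python with psycopg2 and with made-up table and column names. First, pessimistic locking with SELECT ... FOR UPDATE:

    import psycopg2

    conn = psycopg2.connect("dbname=attendance user=app_user host=mainserver")
    worker_id = 42  # example value

    try:
        with conn.cursor() as cur:
            # Lock the row up front; concurrent transactions that try to
            # lock or modify it will wait until we commit or roll back.
            cur.execute(
                "SELECT on_site FROM workers WHERE worker_id = %s FOR UPDATE",
                (worker_id,),
            )
            on_site = cur.fetchone()[0]
            cur.execute(
                "UPDATE workers SET on_site = %s WHERE worker_id = %s",
                (not on_site, worker_id),
            )
        conn.commit()
    except Exception:
        conn.rollback()
        raise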
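
Optimistic locking is often done with a version column; that column is an assumption here, and any other way of detecting a concurrent change works just as well:

    import psycopg2

    conn = psycopg2.connect("dbname=attendance user=app_user host=mainserver")
    worker_id = 42  # example value

    while True:
        with conn.cursor() as cur:
            # Read without locking and remember the version we saw.
            cur.execute(
                "SELECT on_site, version FROM workers WHERE worker_id = %s",
                (worker_id,),
            )
            on_site, version = cur.fetchone()

            # Write only if nobody has changed the row in the meantime.
            cur.execute(
                "UPDATE workers SET on_site = %s, version = version + 1 "
                "WHERE worker_id = %s AND version = %s",
                (not on_site, worker_id, version),
            )
            if cur.rowcount == 1:
                conn.commit()
                break

        # Somebody else modified the row; roll back and try again.
        conn.rollback()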
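
The same kind of retry loop works for the isolation-level approach. PostgreSQL reports serialization failures as SQLSTATE 40001, which psycopg2 (2.8 or later) exposes as SerializationFailure:

    import psycopg2
    from psycopg2 import errors

    conn = psycopg2.connect("dbname=attendance user=app_user host=mainserver")
    conn.set_session(isolation_level="SERIALIZABLE")
    worker_id = 42  # example value

    while True:
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT count(*) FROM attendance WHERE worker_id = %s",
                    (worker_id,),
                )
                visits = cur.fetchone()[0]
                cur.execute(
                    "UPDATE workers SET visit_count = %s WHERE worker_id = %s",
                    (visits, worker_id),
                )
            conn.commit()
            break
        except errors.SerializationFailure:
            # The database could not keep the serializable guarantee;
            # roll back and simply run the whole transaction again.
            conn.rollback()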
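
And here is a sketch of the two deadlock countermeasures combined: lock rows in a consistent order (lowest worker_id first in this example), and retry if the database still reports a deadlock (SQLSTATE 40P01, DeadlockDetected in psycopg2):

    import psycopg2
    from psycopg2 import errors

    conn = psycopg2.connect("dbname=attendance user=app_user host=mainserver")
    worker_a, worker_b = 42, 7  # example values

    while True:
        try:
            with conn.cursor() as cur:
                # Always lock in the same order (lowest id first) so that two
                # sessions updating the same pair cannot block each other.
                for wid in sorted((worker_a, worker_b)):
                    cur.execute(
                        "SELECT 1 FROM workers WHERE worker_id = %s FOR UPDATE",
                        (wid,),
                    )
                cur.execute(
                    "UPDATE workers SET on_site = TRUE "
                    "WHERE worker_id IN (%s, %s)",
                    (worker_a, worker_b),
                )
            conn.commit()
            break
        except errors.DeadlockDetected:
            # This transaction was chosen as the deadlock victim;
            # roll back and try again.
            conn.rollback()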

I recommend that you read the chapter about concurrency control in the PostgreSQL documentation.

2 Comments

Thank you so much for that reply. One more thing: basically, if I wrap everything in transactions and try to avoid deadlocks as much as possible, I should be fine? The thing about one query using the result of another does happen in my code, but it never happens inside one query, except when I insert a new row into my main table that stores data about an employee and when he came to or left work. P.S. Is it smart to wrap every query I have in a transaction?
Every query is automatically in a transaction that consists of only that single statement. You have to start a transaction explicitly if you have a group of queries or modifications that belong together. If you lock everything you use right away, you should be safe, but concurrency might suffer if many sessions are using the same data; they basically have to queue, with no parallelism.
