I have a CSV file with around 30,000 rows of data.
I need this data to be in a database for my application.
I'm not sure what approach I should take to initialize this data.
I'm using the PostgreSQL Docker image.
My thoughts are:
- make a .sql file that inserts this data, and execute it when Docker runs
- just keep the Docker volume that already has this data inserted, and mount it on every run
- some other way...?
The first approach is very versatile, since inserting rows is a very common task that doesn't break. The downside is that I need to do this on every docker run.
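For the first approach, this is roughly what I had in mind (just a sketch; I'm assuming the official postgres image here, and data.csv, the items table, and its columns are placeholder names):

```bash
# rough sketch of the first approach, assuming the official postgres image;
# file and table names are placeholders

# init.sql creates the table and bulk-loads the CSV
cat > init.sql <<'SQL'
CREATE TABLE items (
    id   integer PRIMARY KEY,
    name text
);
-- \copy is a psql client-side command, so the CSV only has to be readable
-- from inside the container where the init script runs
\copy items FROM '/docker-entrypoint-initdb.d/data.csv' WITH (FORMAT csv, HEADER true)
SQL

# the official image runs *.sql files found in /docker-entrypoint-initdb.d
# when it initializes the data directory
docker run -d \
  -e POSTGRES_PASSWORD=secret \
  -v "$PWD/init.sql:/docker-entrypoint-initdb.d/init.sql:ro" \
  -v "$PWD/data.csv:/docker-entrypoint-initdb.d/data.csv:ro" \
  postgres:16
```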
I guess the second approach is faster and more efficient...? But the volume might not be compatible if, for some reason, Postgres updates its version, or if I decide to change databases.
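And for the second approach, something like this (pgdata is just an arbitrary volume name for the example):

```bash
# rough sketch of the second approach: keep the data directory in a named volume
# so the inserted rows survive container restarts
docker volume create pgdata

docker run -d \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16
```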
Any advice?