I have a table tmp in my Postgres database that contains roughly 139 million records. I am trying to move the columns col1, col2, and col3 into the col1, col2, and col3 columns of another table named r4a. I created the table r4a with this query:
CREATE TABLE r4a(
gid serial NOT NULL,
col1 double precision,
col2 double precision,
col3 double precision,
the_geom geometry,
CONSTRAINT r4a_pkey PRIMARY KEY (gid));
I created this INSERT INTO query to populate the fields in r4a:
INSERT INTO r4a (col1,col2,col3)
SELECT col1, col2, col3
FROM tmp
limit 500;
It populates the gid [PK] serial column with numbers ranging from [14816024-14816523].
How does it determine which 500 records to limit the query to?
Is it choosing to import rows [14816024-14816523] or is it just arbitrarily assigning numbers?
Ideally I want the primary key to begin at 1 and count upwards. Being new to Postgres and working with such a large (in my opinion) table, I want to make sure I understand what is going on.
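As I understand it, the serial column gid draws its values from a sequence that Postgres creates automatically (by default named r4a_gid_seq), independent of which rows the SELECT happens to return. A minimal sketch of how I think I could inspect and reset that sequence so gid starts at 1, assuming the default sequence name:

-- Check where the sequence currently stands (default sequence name assumed)
SELECT last_value FROM r4a_gid_seq;

-- Empty the table and reset its serial sequence in one step
TRUNCATE r4a RESTART IDENTITY;

-- Or reset the sequence directly before reloading the data
ALTER SEQUENCE r4a_gid_seq RESTART WITH 1;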
LIMIT or TOP or something similar without specifying any ORDER BY clause will return an arbitrary set of rows. It might be that the rows affected are in some sort of order (usually insertion order), but there is no guarantee. If you want a specific set of rows, you have to specify it with an ORDER BY. I won't post this as an answer as I'm not familiar with the specifics of PostgreSQL, but I would bet that it applies to PG as well.
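For example, a query along these lines (using a hypothetical id column in tmp as the ordering key, which is an assumption on my part) would make the selection deterministic:

INSERT INTO r4a (col1, col2, col3)
SELECT col1, col2, col3
FROM tmp
ORDER BY id   -- any column that gives a stable ordering works here
LIMIT 500;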