Is there a simple way to select all duplicate rows from a Postgres table? No need for joins or anything fancy. Everything I find on Stack Overflow is about joins and duplicates involving multiple fields on a table.
I just need something like select * from table where the table contains duplicate entries.
Any ideas?
Table definition from Postgres:
scotchbox=# \d+ eve_online_market_groups
Table "public.eve_online_market_groups"
Column | Type | Modifiers | Storage | Stats target | Description
------------+--------------------------------+-----------------------------------------------------------------------+----------+--------------+-------------
id | integer | not null default nextval('eve_online_market_groups_id_seq'::regclass) | plain | |
name | character varying(255) | not null | extended | |
item_id | integer | not null | plain | |
slug | character varying(255) | not null | extended | |
created_at | timestamp(0) without time zone | | plain | |
updated_at | timestamp(0) without time zone | | plain | |
Indexes:
"eve_online_market_groups_pkey" PRIMARY KEY, btree (id)
Has OIDs: no
It's not fancy, but you do need to compare the table to itself (at least once) to find duplicates. Two options:
1) select field1, field2, ... from table group by field1, field2, ... having count(*) > 1
to select the duplicated combinations of values;
2) select * from (select *, count(*) over (partition by field1, field2, ...) as dup_cnt from table) t where dup_cnt > 1
to select all columns of every row that has a duplicate.
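Applied to the table in the question, the window-function variant might look like the sketch below. Which column defines a "duplicate" is an assumption here: id is the primary key and can never repeat, so the partition uses item_id; swap in whichever column(s) actually define a duplicate for you (e.g. name, slug, or a combination).

```sql
-- Sketch: find every row whose item_id occurs more than once.
-- Partitioning by item_id is an assumption; id (the PK) is always unique,
-- so duplicates must be defined on a non-key column.
SELECT *
FROM (
    SELECT *,
           count(*) OVER (PARTITION BY item_id) AS dup_cnt
    FROM eve_online_market_groups
) t
WHERE dup_cnt > 1
ORDER BY item_id, id;
```

The outer query is needed because window functions can't appear directly in a WHERE clause; the GROUP BY/HAVING form is fine if you only need the duplicated values themselves rather than the full rows.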