The "duplicate key value violates unique constraint" error

Intro

I also ran into this problem, and the solution proposed by @adamo was basically correct. However, I had to invest a lot of time in the details, which is why I am now writing a new answer to save that time for others.

Case

My case was as follows: there was a table that was filled with data by an app. Then a new entry had to be inserted manually via SQL. After that, the sequence was out of sync and no more records could be inserted through the app.

Solution

As mentioned in the answer from @adamo, the sequence must be synchronized manually. For this purpose, the name of the sequence is needed. In Postgres, the sequence name can be determined with the PG_GET_SERIAL_SEQUENCE function. Most examples use lower-case table names. In my case the tables were created by an ORM middleware (like Hibernate or Entity Framework Core etc.) and their names all started with a capital letter.

An e-mail from 2004 (link) gave me the right hint.

(Let’s assume for all examples, that Foo is the table’s name and Foo_id the related column.)

Command to get the sequence name:

SELECT PG_GET_SERIAL_SEQUENCE('"Foo"', 'Foo_id');

So, the table name must be in double quotes, surrounded by single quotes.

1. Validate, that the sequence is out-of-sync

SELECT CURRVAL(PG_GET_SERIAL_SEQUENCE('"Foo"', 'Foo_id')) AS "Current Value", MAX("Foo_id") AS "Max Value" FROM "Foo";

When the Current Value is less than Max Value, your sequence is out-of-sync.

2. Correction

SELECT SETVAL((SELECT PG_GET_SERIAL_SEQUENCE('"Foo"', 'Foo_id')), (SELECT (MAX("Foo_id") + 1) FROM "Foo"), FALSE);
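
To double-check the result, you can call NEXTVAL on the same sequence (same assumed Foo / Foo_id names as above). Because SETVAL was called with FALSE, this call should return exactly MAX("Foo_id") + 1; note that the check itself consumes that value, so the next insert from the app will receive MAX + 2.

SELECT NEXTVAL(PG_GET_SERIAL_SEQUENCE('"Foo"', 'Foo_id'));  -- should return MAX("Foo_id") + 1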

Similar issue here. It happens after upgrading from 23.3.1 to 23.4.0, which includes the PostgreSQL upgrade.
I did have an error during the installation because of duplicate entries in sentry_releaseprojectenvironment, which I had to remove manually to get past a duplicate key error during reindex.
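
For reference, a hedged sketch of the kind of query that can locate such duplicate rows before re-running the reindex (it assumes the table has a surrogate id column, as Sentry's Django-managed tables do):

SELECT project_id, release_id, environment_id, array_agg(id) AS duplicate_ids
FROM sentry_releaseprojectenvironment
GROUP BY project_id, release_id, environment_id
HAVING count(*) > 1;

Which of the duplicates to keep and which to delete still has to be decided manually.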

installation log

Re-indexing database: sentry
ERROR:  could not create unique index "sentry_releaseprojectenvironmen_project_id_d2be2f28c78caf7_uniq"
DETAIL:  Key (project_id, release_id, environment_id)=(11, 5198, 2) is duplicated.
Error in install/upgrade-postgres.sh:38.
'docker exec sentry-self-hosted-postgres-1 psql -qAt -U postgres -d ${db} -c "reindex database ${db};"' exited with status 1
-> install.sh:main:33
--> install/upgrade-postgres.sh:source:38
2023-06-06 15:53:15.444 UTC [15601] STATEMENT:  INSERT INTO "sentry_environmentproject" ("project_id", "environment_id", "is_hidden") VALUES (15, 2, NULL) RETURNING "sentry_environmentproject"."id"
2023-06-06 15:53:35.614 UTC [15705] ERROR:  duplicate key value violates unique constraint "sentry_organizationonboar_organization_id_47e98e05cae29cf3_uniq"
2023-06-06 15:53:35.614 UTC [15705] DETAIL:  Key (organization_id, task)=(2, 5) already exists.
2023-06-06 15:53:35.614 UTC [15705] STATEMENT:  INSERT INTO "sentry_organizationonboardingtask" ("organization_id", "user_id", "status", "completion_seen", "date_completed", "project_id", "data", "task") VALUES (2, NULL, 1, NULL, '2023-06-06T15:53:35.611610+00:00'::timestamptz, 13, '{}', 5) RETURNING "sentry_organizationonboardingtask"."id"
2023-06-06 15:53:35.622 UTC [15706] ERROR:  duplicate key value violates unique constraint "sentry_organizationonboar_organization_id_47e98e05cae29cf3_uniq"
2023-06-06 15:53:35.622 UTC [15706] DETAIL:  Key (organization_id, task)=(2, 5) already exists.
2023-06-06 15:53:35.622 UTC [15706] STATEMENT:  INSERT INTO "sentry_organizationonboardingtask" ("organization_id", "user_id", "status", "completion_seen", "date_completed", "project_id", "data", "task") VALUES (2, NULL, 1, NULL, '2023-06-06T15:53:35.619722+00:00'::timestamptz, 12, '{}', 5) RETURNING "sentry_organizationonboardingtask"."id"
2023-06-06 15:53:40.217 UTC [15748] ERROR:  duplicate key value violates unique constraint "sentry_environmentproject_project_id_29250c1307d3722b_uniq"
2023-06-06 15:53:40.217 UTC [15748] DETAIL:  Key (project_id, environment_id)=(11, 2) already exists.
2023-06-06 15:53:40.217 UTC [15748] STATEMENT:  INSERT INTO "sentry_environmentproject" ("project_id", "environment_id", "is_hidden") VALUES (11, 2, NULL) RETURNING "sentry_environmentproject"."id"
2023-06-06 15:53:40.225 UTC [15749] ERROR:  duplicate key value violates unique constraint "sentry_environmentproject_project_id_29250c1307d3722b_uniq"
2023-06-06 15:53:40.225 UTC [15749] DETAIL:  Key (project_id, environment_id)=(11, 2) already exists.
2023-06-06 15:53:40.225 UTC [15749] STATEMENT:  INSERT INTO "sentry_environmentproject" ("project_id", "environment_id", "is_hidden") VALUES (11, 2, NULL) RETURNING "sentry_environmentproject"."id"
2023-06-06 15:54:03.426 UTC [15898] ERROR:  duplicate key value violates unique constraint "sentry_environmentproject_project_id_29250c1307d3722b_uniq"
2023-06-06 15:54:03.426 UTC [15898] DETAIL:  Key (project_id, environment_id)=(5, 2) already exists.
2023-06-06 15:54:03.426 UTC [15898] STATEMENT:  INSERT INTO "sentry_environmentproject" ("project_id", "environment_id", "is_hidden") VALUES (5, 2, NULL) RETURNING "sentry_environmentproject"."id"
2023-06-06 15:54:37.333 UTC [16129] ERROR:  duplicate key value violates unique constraint "sentry_environmentproject_project_id_29250c1307d3722b_uniq"
2023-06-06 15:54:37.333 UTC [16129] DETAIL:  Key (project_id, environment_id)=(12, 5) already exists.
2023-06-06 15:54:37.333 UTC [16129] STATEMENT:  INSERT INTO "sentry_environmentproject" ("project_id", "environment_id", "is_hidden") VALUES (12, 5, NULL) RETURNING "sentry_environmentproject"."id"
sentry=# select max(id) from sentry_environmentproject;
  max
-------
 25791
(1 row)

sentry=# SELECT nextval('sentry_environmentproject_id_seq');
 nextval
---------
   75872
(1 row)

A common coding strategy is to have multiple application servers attempt to insert the same data into the same table at the same time and rely on the database unique constraint to prevent duplication. The “duplicate key violates unique constraint” error notifies the caller that a retry is needed. This seems like an intuitive approach, but relying on this optimistic insert can quickly have a negative performance impact on your database. In this post, we examine the performance impact, storage impact, and autovacuum considerations for both the normal INSERT and INSERT..ON CONFLICT clauses. This information applies to PostgreSQL whether self-managed or hosted in Amazon Relational Database Service (Amazon RDS) for PostgreSQL or Amazon Aurora PostgreSQL-Compatible Edition.

Understanding the differences between INSERT and INSERT..ON CONFLICT

In PostgreSQL, an INSERT statement can take several forms: a simple insert with the values to be put into a table, an insert with the ON CONFLICT DO NOTHING clause, or an insert with the ON CONFLICT DO UPDATE SET clause. Their usage and requirements differ, and each has a different impact on the performance of the database.
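
As a quick orientation, here is a minimal sketch of the three forms on a hypothetical items table with id as its primary key (the table and column names are made up for illustration; the rest of this post uses only the DO NOTHING variant):

-- plain insert: errors out if a row with id = 1 already exists
INSERT INTO items (id, name) VALUES (1, 'first');

-- skip the row silently when it conflicts with an existing one
INSERT INTO items (id, name) VALUES (1, 'first')
ON CONFLICT (id) DO NOTHING;

-- turn the conflict into an update of the existing row ("upsert")
INSERT INTO items (id, name) VALUES (1, 'renamed')
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;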

Let’s compare the case of attempting to insert a duplicate value with and without the ON CONFLICT DO NOTHING clause. The following table outlines the advantages of the ON CONFLICT DO NOTHING clause.

                              Regular INSERT    INSERT..ON CONFLICT DO NOTHING
Dead tuples generated         Yes               No
Transaction ID used up        Yes               No
Autovacuum has more cleanup   Yes               No
FreeStorageSpace used up      Yes               No

Let’s look at simple examples of each type of insert, starting with a regular INSERT:

postgres=> CREATE TABLE blog (
  id int PRIMARY KEY, 
  name varchar(20)
);
CREATE TABLE

postgres=> INSERT INTO blog
  VALUES (1,'AWS Blog1');
INSERT 0 1

The INSERT 0 1 indicates that one row was inserted successfully.

Now if we insert the same value of id again, it errors out with a duplicate key violation because of the unique primary key:

postgres=> INSERT INTO blog 
  VALUES (1, 'AWS Blog1');
ERROR:  duplicate key value violates unique constraint "blog_pkey"
DETAIL:  Key (id)=(1) already exists.

The following code shows how the INSERT..ON CONFLICT clause handles this violation when inserting data:

postgres=> INSERT INTO blog 
  VALUES (1,'AWS Blog1') 
  ON CONFLICT DO NOTHING;
INSERT 0 0

The INSERT 0 0 indicates that while nothing was inserted in the table, the query didn’t error out. Although the end results appear identical (no rows inserted), there are important differences when you use the ON CONFLICT DO NOTHING clause.

In the following sections, we examine the performance impact, bloat considerations, transaction ID acceleration and autovacuum impact, and finally storage impact of a regular INSERT vs. INSERT..ON CONFLICT.

Performance impact

The following excerpt of the commit message adding the INSERT..ON CONFLICT clause describes the improvement:

“This is implemented using a new infrastructure called ‘speculative insertion’. It is an optimistic variant of regular insertion that first does a pre-check for existing tuples and then attempts an insert. If a violating tuple was inserted concurrently, the speculatively inserted tuple is deleted and a new attempt is made. If the pre-check finds a matching tuple the alternative DO NOTHING or DO UPDATE action is taken. If the insertion succeeds without detecting a conflict, the tuple is deemed inserted.”

This pre-check avoids the overhead of inserting a tuple into the heap only to delete it later if it turns out to be a duplicate. The heap_insert() function is used to insert a tuple into a heap. In version 9.5, this code was modified (along with a lot of other code) to incorporate speculative inserts. HEAP_INSERT_IS_SPECULATIVE is used on so-called speculative insertions, which can be backed out afterwards without canceling the whole transaction. Other sessions can wait for the speculative insertion to be confirmed, turning it into a regular tuple, or canceled, in which case it is treated as if it never existed and is never made visible. This change eliminates the overhead of performing the insert, finding out that it is a duplicate, and marking it as a dead tuple.

For example, the following is a simple select query on a table against which 1 million duplicate regular inserts were attempted:

postgres=> SELECT count(*) FROM blog;
-[ RECORD 1 ]
count | 1

Time: 54.135 ms

For comparison, the following is a simple select query on a table that used the INSERT..ON CONFLICT statement:

postgres=> SELECT count(*) FROM blog;
-[ RECORD 1 ]
count | 1

Time: 0.761 ms

The difference in time is because of the 1 million dead tuples generated in the first case by the regular duplicate inserts. To count the actual visible rows, the entire table has to be scanned, and when it’s full of dead tuples, it takes considerably longer. On the other hand, in the case of INSERT..ON CONFLICT DO NOTHING, because of the pre-check, no dead tuples are generated, and the count(*) completes much faster, as expected for just one row in the table.
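
One hedged way to make the extra work visible is to compare the buffer counts reported for the count(*) on each instance; on the bloated table, the sequential scan has to read far more pages:

EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM blog;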

Bloat considerations

In PostgreSQL, when a row is updated, the actual process is to mark the original row (the old value) as deleted and then insert a new row (the new value). This generates dead tuples which, if not cleaned up by vacuum, cause bloat. This bloat leads to unnecessary space utilization and performance loss as queries scan these dead rows.

As discussed earlier, in a regular insert, there is no duplicate key pre-check before attempting to insert the tuple into the heap. Therefore, if it’s a duplicate value, it’s similar to first inserting a row and then deleting it. The result is a dead tuple, which must then be handled by vacuum.

In this example, with the pg_stat_user_tables view, we can see these dead tuples are generated when an insert fails due to a duplicate key violation:

postgres=> SELECT relname, n_dead_tup, n_live_tup 
FROM pg_stat_user_tables 
WHERE relname = 'blog';
-[ RECORD 1 ]-------+-------
relname             | blog
n_live_tup          | 7
n_dead_tup          | 4

As we can see, n_dead_tup is currently 4. Now we attempt to insert five duplicate values:

postgres=> DO $$
BEGIN
FOR r IN 1..5 LOOP
	BEGIN
		INSERT INTO blog VALUES (1, 'AWS Blog1');
	EXCEPTION
		WHEN UNIQUE_VIOLATION THEN
			RAISE NOTICE 'duplicate row';
	END;
END LOOP;
END;
$$;
NOTICE:  duplicate row
NOTICE:  duplicate row
NOTICE:  duplicate row
NOTICE:  duplicate row
NOTICE:  duplicate row
DO

We now observe five additional dead tuples generated:

postgres=> SELECT relname, n_dead_tup, n_live_tup 
FROM pg_stat_user_tables 
WHERE relname = 'blog';
-[ RECORD 1 ]-------+-------
relname             | blog
n_live_tup          | 7
n_dead_tup          | 9

This highlights that even though no rows were successfully inserted, there is an increase in dead tuples (n_dead_tup).

Now, we run the following insert with the ON CONFLICT DO NOTHING clause five times:

DO $$
BEGIN
FOR r IN 1..5 LOOP
  BEGIN
    INSERT INTO blog VALUES (1, 'AWS Blog1') ON CONFLICT DO NOTHING;
  END;
END LOOP;
END;
$$;

Again, checking pg_stat_user_tables, we observe that there is no increase in n_dead_tup and therefore no dead tuples generated:

postgres=> SELECT relname, n_dead_tup, n_live_tup 
FROM pg_stat_user_tables 
WHERE relname = 'blog';
-[ RECORD 1 ]-------+-------
relname             | blog
n_live_tup          | 7
n_dead_tup          | 9

In the normal insert, we see dead tuples increasing, whereas no dead tuples are generated when we use the ON CONFLICT DO NOTHING clause. These dead tuples result in unnecessary table bloat as well as consumption of FreeStorageSpace.

Transaction ID usage acceleration and autovacuum impact

A PostgreSQL database can have two billion “in-flight” unvacuumed transactions before PostgreSQL takes dramatic action to avoid data loss. If the number of unvacuumed transactions reaches (2^31 – 10,000,000), the log starts warning that vacuuming is needed. If the number of unvacuumed transactions reaches (2^31 – 1,000,000), PostgreSQL sets the database to read-only mode and requires an offline, single-user, standalone vacuum. This is when the database reaches a “Transaction ID wraparound”, which is described in more detail in the PostgreSQL documentation. This vacuum requires multiple hours or days of downtime (depending on database size). More details on how to avoid this situation are described in Implement an Early Warning System for Transaction ID Wraparound in Amazon RDS for PostgreSQL.
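
Independently of CloudWatch, the remaining headroom can also be checked from within PostgreSQL itself; this is a standard query against the system catalogs, not specific to this post's workload:

SELECT datname,
       age(datfrozenxid) AS xid_age,                 -- age of the database's oldest unfrozen transaction ID
       2147483648 - age(datfrozenxid) AS remaining   -- rough headroom before wraparound protection kicks in
FROM pg_database
ORDER BY xid_age DESC;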

Now let’s look at how these two different inserts impact the transaction ID usage by running some tests. All tests after this point are run on two different identical instances: one for regular INSERT testing and another one for the INSERT..ON CONFLICT clause.

Duplicate key with regular inserts

In the case of regular inserts, when an insert errors out, the transaction is canceled. This means that if the application inserts 100 duplicate key values, 100 transaction IDs are consumed. This can trigger autovacuum runs to prevent wraparound during peak hours, consuming resources that could otherwise be used for the user workload.
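
A minimal sketch of how to observe this on the blog table (note that each txid_current() call itself also consumes a transaction ID, so the difference across a single failed insert comes out to about two):

SELECT txid_current();                        -- note the value, call it N
INSERT INTO blog VALUES (1, 'AWS Blog1');     -- fails with a duplicate key error, but an XID was already assigned
SELECT txid_current();                        -- roughly N + 2: one for the failed insert, one for this call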

For testing the impact of this, we ran a script using pgbench to repeatedly insert duplicate key values into the blog table:

DO $$
BEGIN
FOR r in 1..1000000 LOOP
  BEGIN
    INSERT INTO blog VALUES (1, 'AWS Blog1');
  EXCEPTION
    WHEN UNIQUE_VIOLATION THEN
      RAISE NOTICE 'duplicate row';
  END;
END LOOP;
END;
$$;

The script runs the INSERT statement 1 million times, and if it throws an error, prints out duplicate row.

We used the following pgbench command to run it through multiple connections:

pgbench --host=iocblog.abcedfgxymxvb.eu-west-1.rds.amazonaws.com --port=5432 --username=postgres -c 1000 -f script postgres

We observed multiple notice messages, because all were duplicate values:

NOTICE:  duplicate row
NOTICE:  duplicate row
NOTICE:  duplicate row
NOTICE:  duplicate row
NOTICE:  duplicate row
NOTICE:  duplicate row
NOTICE:  duplicate row

For this test, we modified some autovacuum parameters. First, we set log_autovacuum_min_duration to 0, to log all autovacuum actions. Secondly, we set rds.force_autovacuum_logging_level to debug5 in order to log detailed information about each run.
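
For reference, on self-managed PostgreSQL the first parameter can be changed with ALTER SYSTEM and a configuration reload; on Amazon RDS, both parameters (including the RDS-only rds.force_autovacuum_logging_level) are set through the DB parameter group instead:

ALTER SYSTEM SET log_autovacuum_min_duration = 0;  -- log every autovacuum run
SELECT pg_reload_conf();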

As the script did more and more loops, the transaction ID usage increased, as shown in the following visualization.

On this test instance, only our contrived workload was being run. We used Amazon CloudWatch to observe that as soon as MaximumUsedTransactionIds hit autovacuum_freeze_max_age (default 200 million), autovacuum worker was launched to prevent wraparound. The following is a snapshot of pg_stat_activity for that time:

postgres=> select * from pg_stat_activity where query like '%autovacuum%';
-[ RECORD 1 ]----+--------------------------------------------------------------------
datid            | 14007
datname          | postgres
pid              | 3049
usesysid         | 
usename          | 
application_name | 
client_addr      | 
client_hostname  | 
client_port      | 
backend_start    | 2020-11-26 12:22:37.364145+00
xact_start       | 2020-11-26 12:22:37.403818+00
query_start      | 2020-11-26 12:22:37.403818+00
state_change     | 2020-11-26 12:22:37.403818+00
wait_event_type  | LWLock
wait_event       | ProcArrayLock
state            | active
backend_xid      | 
backend_xmin     | 205602359
query            | autovacuum: VACUUM public.blog (to prevent wraparound)
backend_type     | autovacuum worker

When we observe the query output and the graph, we can see that the autovacuum started at the time MaximumUsedTransactionIds reached 200 million (around 12:22 UTC; this was expected so as to prevent transaction ID wraparound). During the same time, we observed the following in postgresql.log:

2020-11-26 12:22:37 UTC::@:[3049]:DEBUG: blog: vac: 5971078 (threshold 50), anl: 0 (threshold 50)
.
.
2020-11-26 12:22:45 UTC::@:[3202]:WARNING: oldest xmin is far in the past
2020-11-26 12:22:45 UTC::@:[3202]:HINT: Close open transactions soon to avoid wraparound problems.
You might also need to commit or roll back old prepared transactions, or drop stale replication slots.

The preceding log shows autovacuum running and having to act on roughly 5.9 million dead tuples in the table. The warning about wraparound problems is caused by the script generating dead tuples and advancing the transaction IDs. As time passed, autovacuum finally completed cleaning up the dead tuples:

2020-11-26 13:05:28 UTC::@:[62084]:LOG:  automatic aggressive vacuum of table "postgres.public.blog": index scans: 2
    pages: 0 removed, 1409317 remain, 48218 skipped due to pins, 0 skipped frozen
    tuples: 115870210 removed, 96438 remain, 0 are dead but not yet removable, oldest xmin: 460304384
    buffer usage: 4132095 hits, 21 misses, 1732480 dirtied
    avg read rate: 0.000 MB/s, avg write rate: 37.730 MB/s
    system usage: CPU: user: 22.71 s, system: 1.23 s, elapsed: 358.73 s

Autovacuum had to act on approximately 115 million tuples. This is what we observed on an idle system with only a script running to insert duplicate values. The table here was pretty simple, with only two columns and two live rows. For bigger production tables, which realistically have more columns and more data, it could be much worse. Let’s now see how to avoid this by using INSERT..ON CONFLICT.

Duplicate key with INSERT..ON CONFLICT

We modified the same script as in the previous section to include the ON CONFLICT DO NOTHING clause:

DO $$
BEGIN
FOR r IN 1..1000000 LOOP
  BEGIN
    INSERT INTO blog VALUES (1, 'AWS Blog1') ON CONFLICT DO NOTHING;
  EXCEPTION
    WHEN UNIQUE_VIOLATION THEN
      RAISE NOTICE 'duplicate row';
  END;
END LOOP;
END;
$$;

We ran the script on the instance in the following steps:

// Check the live rows in the table

postgres=> SELECT * FROM blog;
 id |   name    
----+-----------
  1 | AWS Blog1
  2 | AWS Blog2
(2 rows)

postgres=> \timing
Timing is on.


postgres=> SELECT now();
              now              
-------------------------------
 2020-11-29 22:25:26.426271+00
(1 row)

Time: 2.689 ms

// Check the current transaction ID of the database (before running the script) :

postgres=> SELECT txid_current();
 txid_current 
--------------
     16373267
(1 row)

//Run the script in another session as follows :

pgbench --host=iocblog2.abcdefghivb.eu-west-1.rds.amazonaws.com --port=5432 --username=postgres -c 1000 -f script2 postgres
Password: 
starting vacuum...end.

//Some time later, go back to the previous session and check the transaction ID along with pg_stat_activity. It increased by only 4 transactions (the ones issued by our own checks).

postgres=> SELECT now();
-[ RECORD 1 ]----------------------
now | 2020-11-29 22:56:59.241092+00

Time: 8.692 ms
postgres=> SELECT txid_current();
-[ RECORD 1 ]+---------
txid_current | 16373271

Time: 234.526 ms
 
//The following query shows the 1001 sessions that are active because of the script, yet aren't causing a spike in transaction IDs.

postgres=> SELECT now();
-[ RECORD 1 ]----------------------
now | 2020-11-29 22:59:44.373064+00

Time: 7.297 ms

postgres=> SELECT COUNT(*) 
FROM pg_stat_activity
WHERE query like '%blog%'
AND backend_type='client backend';
-[ RECORD 1 ]
count | 1001

Time: 181.370 ms

postgres=> SELECT now();
-[ RECORD 1 ]----------------------
now | 2020-11-29 22:59:52.661102+00

Time: 6.212 ms
postgres=> SELECT txid_current();
-[ RECORD 1 ]+---------
txid_current | 16373273

Time: 741.437 ms

The preceding testing shows that if we use INSERT..ON CONFLICT, the transaction IDs aren’t consumed. The following visualization shows the MaximumUsedTransactionIds metric.

There is still one more benefit to examine by using this clause: storage space.

Storage space considerations

In the case of the normal insert, the dead tuples that are generated result in unnecessary space consumption. Even in our small test workload, we quickly ended up in a storage-full situation. If this happens in a production system, storage has to be scaled up.

Duplicate key with regular inserts

As discussed earlier, tuples are inserted into the heap and then checked against the constraint. If the check fails, the tuple is marked as deleted. We can observe the impact on the FreeStorageSpace metric.

FreeStorageSpace went down all the way to a few MBs because these dead rows consume disk space. We used the following queries to monitor the script; they show that while there are only two visible tuples, space is consumed by the dead rows.

postgres=> select count(*) from blog;
-[ RECORD 1 ]
count | 2

postgres=> select pg_size_pretty (pg_table_size('blog'));
-[ RECORD 1 ]--+------
pg_size_pretty | 5718 MB
(1 row)

In our small instance, we went into Storage-full state during our script execution.

Duplicate key with INSERT..ON CONFLICT

For comparison, when we ran the script using INSERT..ON CONFLICT, there was absolutely no drop in storage. This is because the transaction is prechecked and the tuple isn’t inserted in the table. With no dead tuples generated, no space is taken up by them, and the instance doesn’t run out of storage space due to duplicate inserts.
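
To verify, the same two checks from the regular INSERT test can be repeated on this instance; with no dead tuples generated, pg_table_size would be expected to stay at roughly its original, tiny size:

SELECT count(*) FROM blog;
SELECT pg_size_pretty(pg_table_size('blog'));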

Summary

Let’s recap each of the considerations we discussed in the previous sections.

Bloat considerations
  Regular INSERT: A dead tuple is generated for each conflicting tuple inserted into the relation.
  INSERT..ON CONFLICT DO NOTHING: The pre-check before inserting into the heap ensures no duplicates are inserted, so no dead tuples are generated.
Transaction ID considerations
  Regular INSERT: Each failed insert cancels the transaction, consuming one transaction ID. With many duplicate inserts, this can spike up quickly.
  INSERT..ON CONFLICT DO NOTHING: On a duplicate, the speculative insert is backed out instead of the transaction being canceled, so no transaction ID is consumed.
Autovacuum impact
  Regular INSERT: As transaction IDs increase, autovacuum to prevent wraparound is triggered, consuming resources to clean up the dead rows.
  INSERT..ON CONFLICT DO NOTHING: No dead tuples and no transaction ID consumption, so no anti-wraparound autovacuum is triggered.
FreeStorageSpace considerations
  Regular INSERT: The dead tuples also consume storage.
  INSERT..ON CONFLICT DO NOTHING: No dead tuples are generated, so no extra space is consumed.

In this post, we showed you some of the issues that duplicate key violations can cause in a PostgreSQL database. They can cause wasted disk storage, high transaction ID usage, and unnecessary autovacuum work.

We offered an alternative approach using the INSERT..ON CONFLICT clause, which avoids these problems. We recommend this alternative if you constantly observe high volumes of the "duplicate key violates unique constraint" error in your logs. You might need to make some application code changes to determine whether the INSERT succeeded or failed: this can no longer be determined from the error (none is generated anymore) but instead from the number of rows affected by the insert query.
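
If the application issues the SQL directly, one hedged way to distinguish the two outcomes without relying on an error is a RETURNING clause: a skipped (conflicting) insert simply returns zero rows, while a successful one returns the new row.

INSERT INTO blog VALUES (1, 'AWS Blog1')
ON CONFLICT DO NOTHING
RETURNING id;   -- one row returned = inserted; zero rows = it was a duplicate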

If you have any questions, let us know in the comments section.


About the Authors

Divya Sharma is a Database Specialist Solutions Architect at AWS, focusing on RDS/Aurora PostgreSQL. She has helped multiple enterprise customers move their databases to AWS, providing assistance on PostgreSQL performance and best practices.

Shawn McCoy is a Senior Database Engineer for RDS & Aurora PostgreSQL. After being an Oracle DBA for many years he became one of the founding engineers for the launch of RDS PostgreSQL in 2013. Since then he has been improving the service to help customers succeed and scale their applications.

I'm doing an update of the time:
update sometable set ts = ts + $Date_dif

The table sometable has a column ts:

 Column |  Type   | Modifiers | Storage | Description
--------+---------+-----------+---------+-------------
 ts     | integer | not null  | plain   |

It is part of the PRIMARY KEY (and it is not the only column in the key).

ts stores a Unix timestamp. I add a certain delta (also a Unix-time value) to every value of ts, thereby updating the time in the database as if the data had been written recently.

It worked the first time. When I tried to update the time once more, I got the error:
ERROR: duplicate key value violates unique constraint

How can I get around this? (The same problem occurs with MySQL, which reports something like: ERROR 1062 (23000) at line 1: Duplicate entry '1439933444-stat-localhost' for key 'PRIMARY')

How do I work around it? Are there options? I need to be able to update the data as many times as necessary.
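
One possible workaround in PostgreSQL is a sketch only; the real constraint name and the full key column list must be taken from \d sometable, since the primary key spans more than just ts. The idea is to make the key deferrable, so uniqueness is checked at commit rather than after every single updated row:

ALTER TABLE sometable
  DROP CONSTRAINT sometable_pkey,
  ADD CONSTRAINT sometable_pkey
    PRIMARY KEY (ts /*, other key columns */) DEFERRABLE INITIALLY DEFERRED;
    -- note: a deferrable key cannot be referenced by foreign keys

-- now the bulk shift only has to be unique at commit time
BEGIN;
UPDATE sometable SET ts = ts + 86400;   -- 86400 seconds = one day, as an example delta
COMMIT;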

In this article, we take a detailed look at inserting data, including how to avoid errors when inserting duplicate data. All operations are performed on a PostgreSQL sample database; a dump of this database can be downloaded here: https://www.postgresqltutorial.com/postgresql-sample-database/

What is the simplest way to insert data into the customer table? As a reminder, the syntax is as follows:

INSERT INTO customer (active, activebool, address_id, create_date, email, first_name, last_name, last_update, store_id) VALUES (1, default, 10, default, 'test2@test.com', 'Andrey', 'Vasilyev', default, 2)

To insert several rows at once, the syntax is as follows:

INSERT INTO customer (active, activebool, address_id, create_date, email, first_name, last_name, last_update, store_id) VALUES (1, default, 10, default, 'test2@test.com', 'Andrey', 'Vasilyev', default, 2), (1, default, 11, default, 'test4@test.com', 'Ivan', 'Ivanov', default, 2) returning customer_id;

Notice the "returning customer_id" clause at the end? It lets you get back the ids of the inserted records after the insert.

Now let's do something very common: make the email column of this table unique:

ALTER TABLE customer ADD CONSTRAINT email_unique UNIQUE (email);

And now let's try to insert an already existing record as follows:

INSERT INTO customer (active, activebool, address_id, create_date, email, first_name, last_name, last_update, store_id) SELECT active, activebool, address_id, create_date, email, first_name, last_name, last_update, store_id FROM customer WHERE customer_id = 10 returning customer_id

That is, with this query we are trying to re-insert the row of customer_id 10. When we run it, we get the error:

ERROR:  duplicate key value violates unique constraint "email_unique"

So now, to add a new user, we would have to run the following query before the insert statement:

SELECT count(1) FROM customer WHERE email = 'test2@test.com'

That is, check that no user with this email exists in the database.

However, there is a much simpler way to do this. Let's try the following query:

INSERT INTO customer (active, activebool, address_id, create_date, email, first_name, last_name, last_update, store_id) VALUES (1, default, 19, default, 'helen.harris@sakilacustomer.org', 'Helen', 'Harris', default, 1) on conflict do nothing

And note that we get no error back; the query completes successfully even though the user is already present in the database.

This approach has one more advantage. Instead of simply doing nothing when a duplicate is found, we can update the record. For example, imagine that the user Helen has moved to another city. On an attempted re-registration we need to update the address, as well as the first and last name in case they were changed too. The following query helps us do this:

INSERT INTO customer (active, activebool, address_id, create_date, email, first_name, last_name, last_update, store_id) VALUES (1, default, 20, default, 'helen.harris@sakilacustomer.org', 'Helena', 'Harry', default, 1) on conflict (email) do update set first_name = excluded.first_name, last_name = excluded.last_name, address_id = excluded.address_id returning customer_id

Note that in this query we specify the email column on which duplicates are detected, the action to take (do update), and then the mapping used to update the columns (first_name = excluded.first_name, last_name = excluded.last_name, address_id = excluded.address_id). The excluded keyword refers to the row we attempted to insert, so its values are used to update the existing row. And the "returning customer_id" clause gives us back the client's id.
