[2.x] psycopg2.IntegrityError: update or delete on table "maasserver_node" violates foreign key constraint "maasserver_event_node_id_xxx_fk_maasserver_node_id" on table "maasserver_event" DETAIL: Key (id)=(xx) is still referenced from table "maasserver_event".
Bug #1726474 reported by Jason Hobbs
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| MAAS | Fix Released | Critical | Blake Rouse | |
| 2.2 | Won't Fix | Critical | Unassigned | |
Bug Description
A "node delete" API request failed with an integrity error:

psycopg2.IntegrityError: update or delete on table "maasserver_node" violates foreign key constraint "maasserver_event_node_id_xxx_fk_maasserver_node_id" on table "maasserver_event"
DETAIL: Key (id)=(26) is still referenced from table "maasserver_event".
This is in an HA setup with 3 region and 3 rack controllers. Logs are available here:
https:/
The integrity error stack trace can be seen in the regiond.log for 10.245.208.30.
Related branches
~blake-rouse/maas:fix-1726474
- MAAS Lander: Needs Fixing
- Mike Pontillo (community): Approve

Diff: 410 lines (+238/-3), 5 files modified:
- src/maasserver/models/tests/test_node.py (+44/-0)
- src/maasserver/utils/orm.py (+58/-1)
- src/maasserver/utils/tests/test_orm.py (+93/-0)
- src/metadataserver/api_twisted.py (+13/-1)
- src/metadataserver/tests/test_api_twisted.py (+30/-1)
Changed in maas:
milestone: none → 2.3.0beta3

Changed in maas:
importance: Undecided → Critical
status: New → Triaged
summary:
- psycopg2.IntegrityError: update or delete on table "maasserver_node" violates foreign key constraint "maasserver_event_node_id_xxx_fk_maasserver_node_id" on table "maasserver_event" DETAIL: Key (id)=(xx) is still referenced from table "maasserver_event".
+ [2.x] psycopg2.IntegrityError: update or delete on table "maasserver_node" violates foreign key constraint "maasserver_event_node_id_xxx_fk_maasserver_node_id" on table "maasserver_event" DETAIL: Key (id)=(xx) is still referenced from table "maasserver_event".

Changed in maas:
status: New → In Progress
assignee: nobody → Blake Rouse (blake-rouse)

Changed in maas:
status: In Progress → Fix Committed

Changed in maas:
status: Fix Committed → Fix Released
Would it be possible that you are deleting a node while it is deploying?
I believe what is happening is that Django is deleting the node, which also involves the cascade delete of all the events for that node.
1. Region A starts a transaction.
2. Region A selects all events that are related to the node.
3. Region A deletes all selected events.
4. Region B starts a transaction.
5. Region B adds a new event.
6. Region B commits transaction.
7. Region A deletes the node.
8. Region A commits the transaction, which fails because Region B added new events.
So, back to the original question: was the node still deploying or rebooting, and were events still being created, when you chose to delete this node?
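The interleaving in the steps above can be reproduced sequentially in one process. This is a sketch for illustration only: MAAS runs PostgreSQL with genuinely concurrent transactions, while this example uses SQLite and simplified stand-in tables (`node`, `event`) for `maasserver_node` and `maasserver_event`; it shows only why the final delete violates the foreign key.

```python
import sqlite3

# In-memory database with foreign-key enforcement enabled (off by
# default in SQLite).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE event ("
    " id INTEGER PRIMARY KEY,"
    " node_id INTEGER NOT NULL REFERENCES node(id))"
)
conn.execute("INSERT INTO node (id) VALUES (26)")
conn.execute("INSERT INTO event (node_id) VALUES (26)")

# Steps 2-3: "Region A" deletes the node's existing events as part of
# the cascade.
conn.execute("DELETE FROM event WHERE node_id = 26")

# Steps 4-6: "Region B" commits a new event for the same node in the
# window between the event cascade and the node delete.
conn.execute("INSERT INTO event (node_id) VALUES (26)")

# Step 7: "Region A" deletes the node, but the new event still
# references it, so the foreign-key constraint fails.
error = None
try:
    conn.execute("DELETE FROM node WHERE id = 26")
except sqlite3.IntegrityError as exc:
    error = exc

print(error)  # FOREIGN KEY constraint failed
```

The same window exists whenever an ORM emulates a cascade as "select children, delete children, delete parent" while another transaction can still insert children that reference the parent.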