migrate does not work after deleting flavor

Bug #1674490 reported by Dmitry Sutyagin
This bug affects 1 person
Affects: Mirantis OpenStack
Status: Won't Fix
Importance: Medium
Assigned to: Alexey Stupnikov
Milestone: 7.0-updates

Bug Description

Reproduced on MOS 7.0 with the latest MU to date.

Steps:
1. Boot an instance.
2. Delete its flavor (nova flavor-delete).
3. Migrate the instance.
Resulting error: "Flavor 7 could not be found." (7 is the flavor ID in MySQL; a python-novaclient sketch of these steps follows below.)
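
For reference, a minimal python-novaclient sketch of the same steps (the Keystone session, image ID, and flavor ID are placeholders, and the failure mode is assumed to match the CLI reproduction above):

    # Hedged sketch: reproduce the failure programmatically; IDs and session are placeholders.
    from novaclient import client

    nova = client.Client('2', session=keystone_session)  # assumes an authenticated Keystone session
    server = nova.servers.create('repro-vm', image=image_id, flavor=flavor_id)
    # ... wait for the instance to reach ACTIVE ...
    nova.flavors.delete(flavor_id)   # step 2: delete the instance's flavor
    nova.servers.migrate(server.id)  # step 3: on Kilo this ends with "Flavor 7 could not be found."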

This bug is not reproducible in Juno (MOS 6.1), Liberty (MOS 8.0), or Mitaka (MOS 9.0); it is only reproducible in Kilo (MOS 7.0).

There is a matching bug upstream (https://bugs.launchpad.net/nova/+bug/1570748), but its fix is already included in the latest MU and the issue is still reproducible.

Revision history for this message
Dmitry Sutyagin (dsutyagin) wrote :

The analysis so far shows that the issue is triggered here: https://github.com/openstack/nova/blob/kilo-eol/nova/compute/manager.py#L4027

For some reason, when nova-api calls https://github.com/openstack/nova/blob/kilo-eol/nova/compute/api.py#L2743 and passes new_instance_type as an objects.Flavor, the call is sent through RabbitMQ to nova-compute, which receives this argument as a dict instead of an objects.Flavor. There appears to be a problem with the serialization/deserialization of this argument as it passes through oslo_messaging / RabbitMQ.

If this "if" clause in manager.py is commented out, migration works fine.

Changed in mos:
status: New → Confirmed
importance: High → Medium
assignee: nobody → MOS Maintenance (mos-maintenance)
milestone: none → 7.0-updates
Changed in mos:
milestone: 7.0-updates → 7.0-mu-8
Changed in mos:
assignee: MOS Maintenance (mos-maintenance) → Alexey Stupnikov (astupnikov)
Revision history for this message
Alexey Stupnikov (astupnikov) wrote :

I have checked what is going on under the hood and found that we have
the opposite situation to the one reported: it is MOS 8 that doesn't
work as it should, and it is MOS 7 that works exactly as intended. Let
me explain the problem.

All flavor-related data is stored in the instance_types table. When a
flavor is deleted, its record is only soft-deleted (its ID is written
to the 'deleted' column and a timestamp is written to the 'deleted_at'
column). As a result, nova can still retrieve the flavor's data when needed.

A context's read_deleted parameter is used to retrieve this kind of data.

The thing is that we don't use this parameter when requesting flavor
data during migration. Yet for some reason MOS 8 nova ignores the DB
information about the flavor's deletion and returns incorrect flavor
data (deleted=False, deleted_at=None) [1]

[1] https://paste.mirantis.net/show/11344/
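
To illustrate the read_deleted mechanism, here is a minimal sketch (flavor ID 7 is taken from the reproduction above; these calls illustrate the context parameter, not the actual migration code path):

    # Hedged sketch: fetching a soft-deleted flavor explicitly via read_deleted.
    from nova import context as nova_context
    from nova import objects

    ctxt = nova_context.get_admin_context(read_deleted='yes')
    flavor = objects.Flavor.get_by_id(ctxt, 7)  # succeeds even after nova flavor-delete
    print(flavor.deleted, flavor.deleted_at)    # expected: True and a deletion timestamp

    default_ctxt = nova_context.get_admin_context()  # read_deleted='no' by default
    objects.Flavor.get_by_id(default_ctxt, 7)        # raises FlavorNotFound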

Revision history for this message
Alexey Stupnikov (astupnikov) wrote :

Closing as Invalid, as there is no problem in the MOS 7 code and, IMO, this is basically a Medium-priority feature request.

Changed in mos:
status: Confirmed → Invalid
milestone: 7.0-mu-8 → 7.0-updates
Revision history for this message
Dmitry Sutyagin (dsutyagin) wrote :

This is 100% reproducible in a MOS 7 lab by following the instructions I've provided. Reopening.

Changed in mos:
status: Invalid → Confirmed
Revision history for this message
Alexey (aterekhin) wrote :

I agree with Dmitry Sutyagin's previous post; I also encountered this in MOS 7.

Revision history for this message
Alexey Stupnikov (astupnikov) wrote :

Folks, I haven't said that it is not reproducible; I have said that it works just as it should. It is not a bug if nova is configured to ignore deleted flavors during migrations and does exactly that. So again, you are basically requesting that a feature be implemented under a Medium-priority bug, which is not an option.

Revision history for this message
Alexey Stupnikov (astupnikov) wrote :

I will move it back to Invalid; let's start an email thread if we need to clarify something.

Changed in mos:
status: Confirmed → Invalid
Revision history for this message
Alexey Stupnikov (astupnikov) wrote :

Oh, I just got your point; I will move it to Won't Fix then.

Changed in mos:
status: Invalid → Won't Fix