"Table 'nova_api.cell_mappings' doesn't exist"

Bug #2061157 reported by Marian Gasparovic

Affects: OpenStack Snap
Status: Incomplete
Importance: Undecided
Assigned to: Unassigned

Bug Description

During deployment the nova pod is in an error state, reporting
`crash loop backoff: back-off 5m0s restarting failed container=nova-conductor pod=nova-0_openstack`

Pod logs show

sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) (1146, "Table 'nova_api.cell_mappings' doesn't exist")

Tags: cdo-qa
Revision history for this message
Ryan Britton (rpbritton) wrote :

This run (https://oil-jenkins.canonical.com/artifacts/d96ccecb-f7fc-42c2-a960-fc59a8359902/index.html) failed waiting for microceph, but I found matching errors and believe it is the same underlying bug:

nova:
sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) (1146, "Table 'nova_api.cell_mappings' doesn't exist")

cinder:
sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) (1146, "Table 'cinder.clusters' doesn't exist")

neutron:
sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) (1146, "Table 'neutron_server.ovn_hash_ring' doesn't exist")

Revision history for this message
James Page (james-page) wrote :

Those messages are a bit of a red herring - if you look at the complete log data for each service, the service gets restarted after these errors occur and then operates with no subsequent errors.

I think the Rock's pebble definition is being picked up - I thought that the default was not to start any services, but that might not be the case.
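
For illustration, here is a minimal sketch, assuming a container and service both named nova-conductor and using the ops framework (this is not the actual nova-k8s rock or sunbeam charm code), of how a pebble layer's `startup` field decides whether pebble starts the service on its own or waits for the charm:

```python
# Minimal sketch, assuming a container and service both named "nova-conductor";
# not the actual sunbeam/nova-k8s charm code.
import ops


class NovaConductorCharm(ops.CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(
            self.on.nova_conductor_pebble_ready, self._on_pebble_ready
        )

    def _on_pebble_ready(self, event: ops.PebbleReadyEvent) -> None:
        layer = {
            "summary": "nova-conductor layer",
            "services": {
                "nova-conductor": {
                    "override": "replace",
                    "command": "nova-conductor",
                    # With "disabled", pebble does not auto-start the service;
                    # the charm starts it explicitly once db_sync has run.
                    "startup": "disabled",
                },
            },
        }
        event.workload.add_layer("nova-conductor", layer, combine=True)
        # Deliberately no start/replan here; the start happens post-db_sync.
```

If the layer shipped in the rock itself sets `startup: enabled`, pebble would bring the service up as soon as the container starts, before the charm has had a chance to run db_sync.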

Revision history for this message
James Page (james-page) wrote :

For example - the charm does not complete bootstrap of the nova unit until 08:50:53 - you see these SQL errors up until that point.

Revision history for this message
James Page (james-page) wrote :

Do we have a link to the run for the original bug report?

The one linked in #1 does not exhibit the same problem (crash loop backoff).

Changed in snap-openstack:
status: New → Incomplete
Revision history for this message
Guillaume Boutry (gboutry) wrote (last edit ):

The services are started and complain about db tables not existing because we actually ask pebble to be up and running (with all its services up) before we run the db_sync. (cf https://opendev.org/openstack/sunbeam-charms/src/commit/3de9342ed47460df61f5f13528080fe81c24cc69/ops-sunbeam/ops_sunbeam/charm.py#L653)
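
A minimal sketch of that ordering, with hypothetical names rather than the real ops_sunbeam classes: the pebble handler only reports ready once all of its services are running, and db_sync runs after that point, so the services inevitably start against an empty database first.

```python
# Hypothetical sketch of the ordering described above; names are illustrative,
# not the real ops_sunbeam classes.
import ops


class PebbleHandlerSketch:
    def __init__(self, container: ops.Container, service_names: list[str]):
        self.container = container
        self.service_names = service_names

    def ready(self) -> bool:
        # "Ready" means pebble is reachable and every declared service is
        # running -- even though the DB schema may not exist yet.
        if not self.container.can_connect():
            return False
        services = self.container.get_services(*self.service_names)
        if len(services) < len(self.service_names):
            return False
        return all(svc.is_running() for svc in services.values())


def configure_charm(handlers: list[PebbleHandlerSketch], run_db_sync) -> None:
    # Charm-level flow: db_sync only happens after all handlers are ready,
    # so the services crash-loop on missing tables until then.
    if not all(h.ready() for h in handlers):
        return  # a real charm would defer / set a waiting status here
    run_db_sync()
```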

I think we can be smarter there and actually take into account whether a database sync has been executed or not.

I had started some implementation at some point to try skipping the service check: https://review.opendev.org/c/openstack/sunbeam-charms/+/909657 - this gives a rough idea; it's not complete.
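
For illustration only, a rough hypothetical sketch of that idea (not the code from the review above): record whether db_sync has been executed, skip the service check until it has, and enforce running services only afterwards.

```python
# Rough, hypothetical sketch of gating the service check on db_sync state;
# not the implementation from the linked review.
import ops


class ServiceCharmSketch(ops.CharmBase):
    _stored = ops.StoredState()

    def __init__(self, *args):
        super().__init__(*args)
        self._stored.set_default(db_synced=False)

    def _services_ready(self, container: ops.Container, names: list[str]) -> bool:
        if not self._stored.db_synced:
            # Schema not created yet: don't require running services, so the
            # charm can reach the db_sync step without them flapping first.
            return True
        services = container.get_services(*names)
        return all(svc.is_running() for svc in services.values())

    def _maybe_db_sync(self, container: ops.Container) -> None:
        if self._stored.db_synced:
            return
        # Run the schema migration once, then remember it so the service
        # check is enforced (and services are started) from here on.
        container.exec(["nova-manage", "api_db", "sync"]).wait()
        self._stored.db_synced = True
```

In practice the flag would likely need to live in the peer relation's application data rather than per-unit StoredState, so that all units agree on whether the sync has happened.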
