Volume based amphora

Bug #1901732 reported by Vern Hart
This bug affects 4 people
Affects                    Status       Importance  Assigned to    Milestone
OpenStack Charm Guide      New          Undecided   Unassigned
OpenStack Octavia Charm    In Progress  Wishlist    Felipe Reyes
  2023.1                   New          Undecided   Unassigned
  2023.2                   New          Undecided   Unassigned
  Ussuri                   New          Undecided   Unassigned
  Victoria                 New          Undecided   Unassigned
  Wallaby                  New          Undecided   Unassigned
  Xena                     New          Undecided   Unassigned
  Yoga                     New          Undecided   Unassigned
  Zed                      New          Undecided   Unassigned

Bug Description

As of the Train release, Octavia supports creating amphora instances with a Cinder volume as the root disk.
https://docs.openstack.org/releasenotes/octavia/train.html

Some extra values are required in the [controller_worker] section of octavia.conf.

 volume_driver: set to create volume-backed amphorae
 volume_size: size of the root volume for the amphora instance
 volume_type: volume type for the amphora root disk
 volume_create_retry_interval: wait between retries
 volume_create_timeout: timeout for volume creation
 volume_create_max_retries: maximum retries when creating the volume

The octavia charm doesn't have a config-flags option, but even if it did, these settings are not in the [DEFAULT] section, so new config options would be needed.
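
For illustration only, a minimal sketch of the octavia.conf settings involved, using the option names above (as the discussion further down works out, only volume_driver belongs in [controller_worker], while the Cinder-related keys go in a [cinder] block; the values shown are the documented defaults plus a placeholder volume type):

[controller_worker]
# enables building amphorae on a Cinder volume instead of an ephemeral disk
volume_driver = volume_cinder_driver

[cinder]
# placeholder values; everything except volume_type has an upstream default
volume_size = 16
volume_type = <cinder-volume-type>
volume_create_retry_interval = 5
volume_create_timeout = 300
volume_create_max_retries = 5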

Changed in charm-octavia:
status: New → Triaged
importance: Undecided → Wishlist
Revision history for this message
Vern Hart (vern) wrote :

I misstated the section.
These configuration options should be in the [controller_worker] section.

Vern Hart (vern)
description: updated
Revision history for this message
Andre Ruiz (andre-ruiz) wrote :

Ok, I manually added these options to the octavia.conf file (actually I forked the charm and changed the template):

[controller_worker]
<...>
# defaults are: size=16GB, type=None, create_retry_interval=5, create_timeout=300, create_max_retries=5
volume_driver = volume_cinder_driver
volume_size = 16
volume_type = usc01-az01-abs-octavia
volume_create_retry_interval = 5
volume_create_timeout = 300
volume_create_max_retries = 5

The usc01-az01-abs-octavia type exists:

ubuntu@jumphost:~/2021-04-12-TCS-Prod1PCB-215965$ openstack volume type list
+--------------------------------------+------------------------+-----------+
| ID                                   | Name                   | Is Public |
+--------------------------------------+------------------------+-----------+
| 884217ee-d402-4644-b421-931517025172 | usc01-az01-abs-octavia | True      |
| 6af193ca-970d-4b78-9b0d-b90bbceced93 | usc01-az01-abs-stable1 | True      |
| a45fe4b1-ac8b-40d8-b3c1-45b00c64b075 | usc01-az01-abs-stable2 | True      |
| 5e44595b-012c-41b3-a842-b8c1637a4000 | usc01-az01-abs-arbor   | True      |
+--------------------------------------+------------------------+-----------+

ubuntu@jumphost:~/2021-04-12-TCS-Prod1PCB-215965$ openstack volume type show usc01-az01-abs-octavia
+--------------------+------------------------------------------------------------------------------------------+
| Field              | Value                                                                                    |
+--------------------+------------------------------------------------------------------------------------------+
| access_project_ids | None                                                                                     |
| description        | None                                                                                     |
| id                 | 884217ee-d402-4644-b421-931517025172                                                     |
| is_public          | True                                                                                     |
| name               | usc01-az01-abs-octavia                                                                   |
| properties         | RESKEY:availability_zones='us-central-1-az01', volume_backend_name='cinder-ceph-stable2' |
| qos_specs_id       | None                                                                                     |
+--------------------+------------------------------------------------------------------------------------------+

But when trying to create loadbalancers, I get this error:

2021-05-25 17:29:16.550 3666784 ERROR oslo_messaging.rpc.server taskflow.exceptions.WrappedFailure: WrappedFailure: [Failure: octavia.common.exceptions.ComputeBuildException: Failed to build compute instance due to: Multiattach volumes are only supported starting with compute API version 2.60. (HTTP 400) (Request-ID: req-16ce1668-84f6-4e88-84db-724cc3bc1538), Failure: octavia.common.exceptions.ComputeBuildException: Failed to build compute instance due to: Multiattach volumes are only supported starting with compute API version 2.60. (HTTP 400) (Request-ID: req-bdae919c-19c2-4ff5-b770-00a8d54a5f18)]

As y...

Revision history for this message
Corey Bryant (corey.bryant) wrote :

@Andre, can you do an 'openstack volume show' on the created volume? I wonder if that will have any hints.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

^ That might be difficult to catch as octavia/compute/drivers/nova_driver.py deletes the volume if amphora creation fails.

Revision history for this message
Corey Bryant (corey.bryant) wrote :

If this is a test system, you could comment out the volume deletion in the exception handling code, then run 'openstack volume show' on the leftover volume to see what it looks like.
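
For example (a sketch only; the volume ID is a placeholder), with the deletion skipped you could list the leftover volume and check its type and multiattach flag:

openstack volume list --long
openstack volume show <volume-id>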

Revision history for this message
Billy Olsen (billy-olsen) wrote :

The problem is that you've specified the volume_* options in the [controller_worker] section of the config file. The following Cinder-related options can be specified in the [cinder] config section (the documentation does not clearly state this, but the code does). Specifying them in [controller_worker] will not yield the results you want. The volume is likely being created with the default volume type, which I suspect has multiattach enabled.

The correct settings would have:

[controller_worker]
...
volume_driver = volume_cinder_driver
...

[cinder]
volume_size = 16
volume_type = <volume_type>
volume_create_retry_interval = 5
volume_create_timeout = 300
volume_create_max_retries = 5

Revision history for this message
Billy Olsen (billy-olsen) wrote :

Also, since you're using the default for all values except volume_type in the [cinder] block, you really only need to specify that one.
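
Put differently, the minimal working configuration that comes out of this thread is just the following (volume type taken from the environment above):

[controller_worker]
volume_driver = volume_cinder_driver

[cinder]
volume_type = usc01-az01-abs-octavia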

Revision history for this message
Andre Ruiz (andre-ruiz) wrote :

Ok, that worked well. Thank you!
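
For anyone reproducing this, a rough way to verify the change (commands are a sketch; the load balancer name and subnet are placeholders) is to create a load balancer and confirm the new amphora volume shows up with the configured type:

openstack loadbalancer create --name test-lb --vip-subnet-id <subnet-id>
openstack volume list --long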

Changed in charm-octavia:
assignee: nobody → Nicholas Njihia (nicknjihian)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-octavia (master)
Changed in charm-octavia:
status: Triaged → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on charm-octavia (master)

Change abandoned by "Nicholas Njihia <email address hidden>" on branch: master
Review: https://review.opendev.org/c/openstack/charm-octavia/+/810567

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-octavia (master)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on charm-octavia (master)

Change abandoned by "Nicholas Njihia <email address hidden>" on branch: master
Review: https://review.opendev.org/c/openstack/charm-octavia/+/811700

Revision history for this message
Felipe Reyes (freyes) wrote :

The patch was left orphaned - https://review.opendev.org/c/openstack/charm-octavia/+/810567 - removing the assignee and setting the bug to 'triaged' to reflect the actual status.

Changed in charm-octavia:
assignee: Nicholas Njihia (nicknjihian) → nobody
status: In Progress → Triaged
Changed in charm-octavia:
status: Triaged → In Progress
Felipe Reyes (freyes)
Changed in charm-octavia:
assignee: nobody → Felipe Reyes (freyes)
Revision history for this message
Peter Matulis (petermatulis) wrote :

@Felipe - you created a bug task for the charm-guide. Can you give a summary of what's needed? From what I understand here, some options are in the process of being added to the octavia charm. Are you thinking of a special section that describes Cinder-based root volumes for amphorae instances?

Revision history for this message
Felipe Reyes (freyes) wrote : Re: [Bug 1901732] Re: Volume based amphora

On Thu, 2024-01-11 at 22:54 +0000, Peter Matulis wrote:
> @Felipe - you created a bug task for the charm-guide. Can you give a
> summary of what's needed? From what I understand here, some options are
> in the process of being added to the octavia charm. Are you thinking of
> a special section that describes Cinder-based root volumes for amphorae
> instances?
>

Just a release notes entry.
