[Backport 1470889] DVR: should not remove routers from down l3 agents on compute nodes

Bug #1485577 reported by Oleg Bondarev
Affects: Mirantis OpenStack
Status: Fix Released
Importance: High
Assigned to: Oleg Bondarev
Milestone: 7.0

Bug Description

This is a backport of https://bugs.launchpad.net/neutron/+bug/1470889

Original description
================
Currently, if the 'dvr' l3 agent on a compute node goes down, routers are removed from it; when the agent comes back up, the routers are not added back, so VMs on that compute node lose all routing.

Scheduling/unscheduling of DVR routers to the l3 agent in 'dvr' mode running on a compute node is driven by DVR-serviced ports being created/deleted on that compute node. It doesn't make sense to reschedule a router away from the l3 agent on a compute node even when that agent is down: no other l3 agent can handle the VMs running on that compute node.
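
For illustration, a minimal self-contained sketch of the decision at stake; all names are illustrative, not Neutron's actual internals. The fix amounts to treating a dead agent in 'dvr' mode as ineligible for rescheduling, so its router bindings survive the outage:

from dataclasses import dataclass


@dataclass
class L3Agent:
    host: str
    agent_mode: str  # 'legacy', 'dvr' (compute node) or 'dvr_snat'
    alive: bool


def should_reschedule_from(agent: L3Agent) -> bool:
    """Decide whether a dead agent's routers should be rescheduled.

    The pre-fix behavior rescheduled (and thus removed) routers from any
    dead agent; the fix skips agents in 'dvr' mode, so the binding
    survives the outage and routing resumes when the agent comes back.
    """
    return not agent.alive and agent.agent_mode != 'dvr'


# A dead compute-node agent keeps its routers; a dead network-node agent
# is still rescheduled as before.
assert not should_reschedule_from(L3Agent('compute-1', 'dvr', alive=False))
assert should_reschedule_from(L3Agent('net-1', 'dvr_snat', alive=False))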

Tags: neutron dvr
Changed in mos:
status: New → Confirmed
milestone: none → 7.0
Changed in mos:
status: Confirmed → In Progress
Fuel Devops McRobotson (fuel-devops-robot) wrote: Fix merged to openstack/neutron (openstack-ci/fuel-7.0/2015.1.0)

Reviewed: https://review.fuel-infra.org/10495
Submitter: mos-infra-ci <>
Branch: openstack-ci/fuel-7.0/2015.1.0

Commit: 68da5fa28e8e904a7f0e48fb6d8e9345fcb9c534
Author: Oleg Bondarev <email address hidden>
Date: Mon Aug 17 14:32:06 2015

DVR: do not reschedule router for down agents on compute nodes

Scheduling/unscheduling of DVR routers with l3 agents in 'dvr' mode
running on compute nodes is done according to DVR-serviced ports
created/deleted on those compute nodes. It doesn't make sense to
reschedule a router from an l3 agent on a compute node even if it's
down - no other l3 agent can handle VMs running on that compute node.

upstream review: https://review.openstack.org/198009

Closes-Bug: #1485577
Closes-Bug: #1470889
Change-Id: Ib998b9e459dd1a9ab740fafa5d84dc3211ca0097
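
To make the "scheduled by ports, not by liveness" point concrete, here is a self-contained sketch (illustrative names only, not the actual Neutron code paths): a DVR router is hosted on a compute node exactly while DVR-serviced ports for it exist there, so the binding's lifecycle is tied to port events rather than agent heartbeats:

from collections import defaultdict

# router_id -> hosts currently holding DVR-serviced ports for that router
dvr_hosts = defaultdict(set)

def on_dvr_port_created(router_id, host):
    # First DVR-serviced port on this host: schedule the router there.
    dvr_hosts[router_id].add(host)

def on_dvr_port_deleted(router_id, host, ports_left_on_host):
    # Last DVR-serviced port removed from the host: unschedule the router.
    if ports_left_on_host == 0:
        dvr_hosts[router_id].discard(host)

on_dvr_port_created('router-1', 'compute-1')
assert 'compute-1' in dvr_hosts['router-1']
on_dvr_port_deleted('router-1', 'compute-1', ports_left_on_host=0)
assert 'compute-1' not in dvr_hosts['router-1']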

Changed in mos:
status: In Progress → Fix Committed
tags: added: on-verification
Kristina Berezovskaia (kkuznetsova) wrote:

Verified on
VERSION:
  feature_groups:
    - mirantis
  production: "docker"
  release: "7.0"
  openstack_version: "2015.1.0-7.0"
  api: "1.0"
  build_number: "287"
  build_id: "287"
  nailgun_sha: "46a7a2177a0b7ef91422284c1c90295fee8f5c84"
  python-fuelclient_sha: "1ce8ecd8beb640f2f62f73435f4e18d1469979ac"
  fuel-agent_sha: "082a47bf014002e515001be05f99040437281a2d"
  fuel-nailgun-agent_sha: "d7027952870a35db8dc52f185bb1158cdd3d1ebd"
  astute_sha: "a717657232721a7fafc67ff5e1c696c9dbeb0b95"
  fuel-library_sha: "43224223dab8cf9627b5ecf737e60216a3fdd114"
  fuel-ostf_sha: "1f08e6e71021179b9881a824d9c999957fcc7045"
  fuelmain_sha: "6b83d6a6a75bf7bca3177fcf63b2eebbf1ad0a85"
(VXLAN, 3 controllers, 2 computes)

Steps:
1) Create a net and subnet
2) Create a DVR router with a gateway to the external net
3) Connect the router and the new net
4) Boot a VM with a floating IP
5) Find the compute node hosting the VM
6) Stop the l3 agent on this compute node: service neutron-l3-agent stop
7) Wait a few minutes
8) Start the l3 agent: service neutron-l3-agent start
9) Wait some time
10) Check ping 8.8.8.8 from the VM (a scripted variant of this check is sketched below)
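
A minimal scripted variant of the check, assuming python-neutronclient, admin credentials in the OS_* environment variables, and an illustrative compute hostname (none of these specifics come from the report): with the fix in place, the set of routers bound to the compute node's l3 agent should be identical before the agent is stopped and after it is restarted.

import os

from neutronclient.v2_0 import client

neutron = client.Client(
    username=os.environ['OS_USERNAME'],
    password=os.environ['OS_PASSWORD'],
    tenant_name=os.environ['OS_TENANT_NAME'],
    auth_url=os.environ['OS_AUTH_URL'])

def routers_on_host(host):
    """Return IDs of routers bound to the l3 agent on `host`."""
    agents = neutron.list_agents(agent_type='L3 agent', host=host)['agents']
    if not agents:
        return set()
    routers = neutron.list_routers_on_l3_agent(agents[0]['id'])['routers']
    return {r['id'] for r in routers}

before = routers_on_host('node-4.example.com')  # hypothetical compute host
# ... steps 6-9: stop the agent, wait past agent_down_time, restart it ...
after = routers_on_host('node-4.example.com')
assert before == after, 'router was unscheduled from the down dvr agent'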

Changed in mos:
status: Fix Committed → Fix Released
tags: removed: on-verification