HARestarter deletes quantum port required by instance

Bug #1196479 reported by Steven Hardy
This bug affects 4 people
Affects: OpenStack Heat
Status: Won't Fix
Importance: Low
Assigned to: Unassigned

Bug Description

Created from https://answers.launchpad.net/heat/+question/231665

--
Using the grizzly version of Heat/OpenStack.

Ports defined in the template are also removed when an instance is deleted. Is that normal? I have a HARestarter policy and a few alarms for the instances in the template, and when I issue this heat-watch command from the command line:

    heat-watch set-state teststack.Test_Instance_1_killed ALARM

The related instance is deleted OK, but the port that the instance depends on is also deleted. That causes the instance restart to fail, as the port is not found.

See my template, heat-engine.log and nova-api.log below.

Template:
{
   "AWSTemplateFormatVersion" : "2010-09-09",

   "Description" : "Test template.",

   "Parameters" : {

     "KeyName" : {
       "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instances",
       "Type" : "String"
     },

     "InstanceType" : {
       "Description" : "test instance type",
       "Type" : "String",
       "Default" : "m1.small",
       "AllowedValues" : [ "m1.small" ],
       "ConstraintDescription" : "must be a valid EC2 instance type."
     },

     "DemoDistribution": {
       "Default": "DemoV1",
       "Description" : "Demo distribution of choice",
       "Type": "String",
       "AllowedValues" : [ "DemoV1" ]
     }
   },

   "Mappings" : {
     "AWSInstanceType2Arch" : {
       "m1.small" : { "Arch" : "64" }
     },
     "DistroArch2AMI": {
       "DemoV1" : { "64" : "ubuntu1304-v5-amd64" }
     }
   },

   "Resources" : {

    "private-network": {
      "Type": "OS::Quantum::Net"
    },

    "public-network": {
      "Type": "OS::Quantum::Net",
      "Properties": {
 "value_specs": {"router:external" : true}
      }
    },

   "private-subnet": {
      "Type": "OS::Quantum::Subnet",
      "DependsOn" : "private-network",
      "Properties": {
        "network_id": { "Ref" : "private-network" },
        "ip_version": 4,
        "cidr": "10.0.0.0/24",
 "gateway_ip": "10.0.0.1",
 "allocation_pools": [{"start": "10.0.0.2", "end": "10.0.0.20"}]
      }
    },

   "public-subnet": {
      "Type": "OS::Quantum::Subnet",
      "DependsOn" : "public-network",
      "Properties": {
        "network_id": { "Ref" : "public-network" },
        "ip_version": 4,
        "cidr": "192.168.0.0/24",
 "gateway_ip": "192.168.0.1",
 "allocation_pools": [{"start": "192.168.0.2", "end": "192.168.0.20"}]
      }
    },

    "private-port": {
      "Type": "OS::Quantum::Port",
      "DependsOn" : "private-subnet",
      "Properties": {
        "network_id": { "Ref" : "private-network" }
      }
    },

    "private-port2": {
      "Type": "OS::Quantum::Port",
      "DependsOn" : "private-subnet",
      "Properties": {
        "network_id": { "Ref" : "private-network" }
      }
    },

    "router": {
 "Type": "OS::Quantum::Router",
        "DependsOn" : "public-subnet"
    },

     "router_interface": {
       "Type": "OS::Quantum::RouterInterface",
       "DependsOn" : "router",
       "Properties": {
         "router_id": { "Ref" : "router" },
         "subnet_id": { "Ref" : "private-subnet" }
       }
     },

    "router_gateway_external": {
      "Type": "OS::Quantum::RouterGateway",
      "DependsOn" : "router_interface",
      "Properties": {
        "router_id": { "Ref" : "router" },
        "network_id": { "Ref" : "public-network" }
      }
    },

     "testInst1": {
       "Type": "AWS::EC2::Instance",
       "Metadata" : {
         "AWS::CloudFormation::Init" : {
         }
       },
       "Properties": {

         "ImageId" : { "Fn::FindInMap" : [ "DistroArch2AMI", { "Ref" : "DemoDistribution" },
                           { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
         "InstanceType" : { "Ref" : "InstanceType" },
         "KeyName" : { "Ref" : "KeyName" },
  "NetworkInterfaces" : [ { "Ref" : "private-port" } ]
       }
     },

    "testInst2": {
       "Type": "AWS::EC2::Instance",
       "Metadata" : {
         "AWS::CloudFormation::Init" : {
         }
       },
       "Properties": {

         "ImageId" : { "Fn::FindInMap" : [ "DistroArch2AMI", { "Ref" : "DemoDistribution" },
                           { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
         "InstanceType" : { "Ref" : "InstanceType" },
         "KeyName" : { "Ref" : "KeyName" },
  "NetworkInterfaces" : [ { "Ref" : "private-port2" } ]
       }
     },

    "Test_Instance_1_RestartPolicy" : {
      "Type" : "OS::Heat::HARestarter",
      "Properties" : {
        "InstanceId" : { "Ref" : "testInst1" }
      }
    },
    "Test_Instance_2_RestartPolicy" : {
      "Type" : "OS::Heat::HARestarter",
      "Properties" : {
        "InstanceId" : { "Ref" : "testInst2" }
      }
    },

    "AllFailureAlarm": {
     "Type": "AWS::CloudWatch::Alarm",
     "Properties": {
        "AlarmDescription": "Restart all instances",
        "MetricName": "VmInstanceFailure",
        "Namespace": "CGC/Vm",
        "Statistic": "SampleCount",
        "Period": "300",
        "EvaluationPeriods": "1",
        "Threshold": "2",
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [ { "Ref": "Test_Instance_1_RestartPolicy" }, { "Ref": "Test_Instance_2_RestartPolicy" } ]
      }
    },

    "Test_Instance_1_killed": {
     "Type": "AWS::CloudWatch::Alarm",
     "Properties": {
        "AlarmDescription": "Restart the specific instance",
        "MetricName": "VmInstanceFailure",
        "Namespace": "CGC/Vm",
        "Statistic": "SampleCount",
        "Period": "300",
        "EvaluationPeriods": "1",
        "Threshold": "2",
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [ { "Ref": "Test_Instance_1_RestartPolicy" } ]
      }
    },

    "Test_Instance_2_killed": {
     "Type": "AWS::CloudWatch::Alarm",
     "Properties": {
        "AlarmDescription": "Restart the specific instance",
        "MetricName": "VmInstanceFailure",
        "Namespace": "CGC/Vm",
        "Statistic": "SampleCount",
        "Period": "300",
        "EvaluationPeriods": "1",
        "Threshold": "2",
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [ { "Ref": "Test_Instance_2_RestartPolicy" } ]
      }
    }
  }
}

heat-engine.log:
2013-07-01 19:57:14.279 12829 INFO heat.engine.watchrule [-] WATCH: stack:0d2f4fe7-c4f8-4c31-91d8-c92b0c83cba3, watch_name:teststack.Test_Instance_1_killed ALARM
2013-07-01 19:57:14.360 12829 DEBUG heat.engine.watchrule [-] Overriding state NODATA for watch teststack.Test_Instance_1_killed with ALARM set_watch_state /usr/local/lib/python2.7/dist-packages/heat-2013.1.1.a8.g4b48b0e-py2.7.egg/heat/engine/watchrule.py:280
2013-07-01 19:57:14.365 12829 INFO heat.engine.resources.instance [-] Test_Instance_1_RestartPolicy Alarm, restarting resource: testInst1
2013-07-01 19:57:14.365 12829 INFO heat.engine.resource [-] deleting CloudWatchAlarm "AllFailureAlarm" (inst:None db_id:624)
2013-07-01 19:57:14.701 12829 INFO heat.engine.resource [-] deleting CloudWatchAlarm "Test_Instance_1_killed" (inst:None db_id:625)
2013-07-01 19:57:15.035 12829 INFO heat.engine.resource [-] deleting Restarter "Test_Instance_1_RestartPolicy" (inst:None db_id:623)
2013-07-01 19:57:15.370 12829 INFO heat.engine.resource [-] deleting Instance "testInst1" (inst:97aa0b4e-0200-4254-8e8e-bb74e0980d2c db_id:620)
2013-07-01 19:57:17.515 12829 DEBUG heat.engine.service [-] Periodic watcher task for stack 0d2f4fe7-c4f8-4c31-91d8-c92b0c83cba3 _periodic_watcher_task /usr/local/lib/python2.7/dist-packages/heat-2013.1.1.a8.g4b48b0e-py2.7.egg/heat/engine/service.py:515
2013-07-01 19:57:20.002 12829 INFO heat.engine.resource [-] creating Instance "testInst1"
2013-07-01 19:57:20.484 12829 ERROR heat.engine.resource [-] create Instance "testInst1"
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource Traceback (most recent call last):
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource File "/usr/local/lib/python2.7/dist-packages/heat-2013.1.1.a8.g4b48b0e-py2.7.egg/heat/engine/resource.py", line 320, in create
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource self.handle_create()
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource File "/usr/local/lib/python2.7/dist-packages/heat-2013.1.1.a8.g4b48b0e-py2.7.egg/heat/engine/resources/instance.py", line 307, in handle_create
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource availability_zone=availability_zone)
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/servers.py", line 600, in create
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource **boot_kwargs)
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/base.py", line 163, in _boot
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource return_raw=return_raw, **kwargs)
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 145, in _create
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource _resp, body = self.api.client.post(url, body=body)
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 233, in post
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource return self._cs_request(url, 'POST', **kwargs)
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 217, in _cs_request
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource **kwargs)
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 199, in _time_request
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource resp, body = self.request(url, method, **kwargs)
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 193, in request
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource raise exceptions.from_response(resp, body, url, method)
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-624845c9-9d73-4800-8694-80a524065d2c)
2013-07-01 19:57:20.484 12829 TRACE heat.engine.resource
2013-07-01 19:57:20.646 12829 INFO heat.engine.resource [-] creating Restarter "Test_Instance_1_RestartPolicy"
2013-07-01 19:57:20.896 12829 INFO heat.engine.resource [-] creating CloudWatchAlarm "AllFailureAlarm"
2013-07-01 19:57:21.189 12829 INFO heat.engine.resource [-] creating CloudWatchAlarm "Test_Instance_1_killed"
2013-07-01 19:58:17.544 12829 DEBUG heat.engine.service [-] Periodic watcher task for stack 0d2f4fe7-c4f8-4c31-91d8-c92b0c83cba3 _periodic_watcher_task /usr/local/lib/python2.7/dist-packages/heat-2013.1.1.a8.g4b48b0e-py2.7.egg/heat/engine/service.py:515

nova-api.log:
2013-07-01 19:57:20.479 ERROR nova.api.openstack [req-624845c9-9d73-4800-8694-80a524065d2c 1319010bf51a454a83cb1f3eb0470a4b 31f6875a21b2408c9140e729e56334a1] Caught error: Port e4d7099c-bf9b-4611-8b0a-89f84cda4bbe could not be found on network None
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack Traceback (most recent call last):
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 81, in __call__
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack return req.get_response(self.application)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack application, catch_exc_info=False)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack app_iter = application(self.environ, start_response)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack return resp(environ, start_response)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py", line 450, in __call__
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack return self.app(env, start_response)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack return resp(environ, start_response)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack return resp(environ, start_response)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack return resp(environ, start_response)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/local/lib/python2.7/dist-packages/Routes-1.12.3-py2.7.egg/routes/middleware.py", line 131, in __call__
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack response = self.app(environ, start_response)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack return resp(environ, start_response)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack resp = self.call_func(req, *args, **self.kwargs)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack return self.func(req, *args, **kwargs)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 890, in __call__
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack content_type, body, accept)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 942, in _process_stack
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack action_result = self.dispatch(meth, request, action_args)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 1022, in dispatch
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack return method(req=request, **action_args)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 898, in create
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack scheduler_hints=scheduler_hints)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/hooks.py", line 85, in inner
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack rv = f(*args, **kwargs)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 962, in create
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack scheduler_hints=scheduler_hints)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 676, in _create_instance
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack reservation_id, scheduler_hints)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 634, in _validate_and_provision_instance
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack QUOTAS.rollback(context, quota_reservations)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack self.gen.next()
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 522, in _validate_and_provision_instance
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack self._check_requested_networks(context, requested_networks)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 358, in _check_requested_networks
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack self.network_api.validate_networks(context, requested_networks)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 447, in validate_networks
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack port = quantumv2.get_client(context).show_port(port_id).get('port')
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 107, in with_params
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack ret = self.function(instance, *args, **kwargs)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 262, in show_port
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack return self.get(self.port_path % (port), params=_params)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 982, in get
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack headers=headers, params=params)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 967, in retry_request
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack headers=headers, params=params)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 912, in do_request
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack self._handle_fault_response(status_code, replybody)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 893, in _handle_fault_response
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack exception_handler_v20(status_code, des_error_body)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 80, in exception_handler_v20
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack message=error_dict)
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack QuantumClientException: Port e4d7099c-bf9b-4611-8b0a-89f84cda4bbe could not be found on network None
2013-07-01 19:57:20.479 7044 TRACE nova.api.openstack
2013-07-01 19:57:20.482 INFO nova.api.openstack [req-624845c9-9d73-4800-8694-80a524065d2c 1319010bf51a454a83cb1f3eb0470a4b 31f6875a21b2408c9140e729e56334a1] http://192.168.40.1:8774/v2/31f6875a21b2408c9140e729e56334a1/servers returned with HTTP 500

Liang Chen (cbjchen)
Changed in heat:
assignee: nobody → Liang Chen (cbjchen)
Liang Chen (cbjchen)
Changed in heat:
assignee: Liang Chen (cbjchen) → nobody
Changed in heat:
milestone: none → havana-3
importance: Undecided → Medium
Revision history for this message
Pekka Rinne (tsierkkis) wrote :

I added some trace logging to the heat/engine/resources/quantum/port.py handle_delete method.

It seems that when the instance is deleted (and its port as a side effect), this method is not executed at all and no log is written. Could this mean that the bug is located somewhere else, e.g. on the Quantum or Nova side?
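
For illustration, a minimal sketch of the kind of trace logging described, assuming the grizzly-era layout of heat/engine/resources/quantum/port.py; the class stub, the quantum() helper and the method body here are placeholders, not the actual source:

    import logging

    LOG = logging.getLogger('heat.engine.resources.quantum.port')

    class Port(object):  # stands in for the real OS::Quantum::Port resource class
        def handle_delete(self):
            # If Heat itself were deleting the port, this line would
            # appear in heat-engine.log; its absence points at the
            # Nova/Quantum side instead.
            LOG.debug('handle_delete entered for %s (port %s)',
                      self.name, self.resource_id)
            self.quantum().delete_port(self.resource_id)

Since no such line ever showed up, Heat's port resource is apparently never asked to delete the port.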

Revision history for this message
Liang Chen (cbjchen) wrote :

This needs commit 58d88f64d79ecc336a15e5898b1de1efd605f112 back-ported to grizzly.

Revision history for this message
Liang Chen (cbjchen) wrote :

Pekka,
Nova will delete the associated port once an instance is removed. But back-porting 58d88f64d79ecc336a15e5898b1de1efd605f112 may not be enough.

Revision history for this message
Pekka Rinne (tsierkkis) wrote :

Hi,

In the case of implicit port creation, how is a floating IP then associated with the port in the same template? As far as I can tell, the port UUID is not known.

br,
Pekka
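
For context: with an explicitly declared port, the association is straightforward because the port resource can be referenced by name. A minimal sketch against the template above, with the OS::Quantum::FloatingIP property names assumed from the grizzly resources; with an implicitly created port there is no such resource to Ref, which is the problem being raised:

    "server-floating-ip": {
      "Type": "OS::Quantum::FloatingIP",
      "DependsOn" : "router_gateway_external",
      "Properties": {
        "floating_network_id": { "Ref" : "public-network" },
        "port_id": { "Ref" : "private-port" }
      }
    }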

Steven Hardy (shardy)
Changed in heat:
milestone: havana-3 → havana-rc1
Steven Hardy (shardy)
Changed in heat:
milestone: havana-rc1 → icehouse-1
Revision history for this message
Steven Hardy (shardy) wrote :

Actually it's worse than just deleting the port, HARestarter deletes *itself* when restarting an instance.

I'm attaching an example template where, if you trigger the restart by hitting the output URL, the HARestarter resource itself is deleted and recreated. This is a problem because it creates a new signal URL, and also deletes and recreates the keystone user associated with the SignalResponder.

It seems that we need a way to make HARestarter break the dependency chain, such that on recreating a resource, we don't walk the tree and delete all dependent resources too.

Revision history for this message
Steven Hardy (shardy) wrote :
Changed in heat:
status: New → Confirmed
assignee: nobody → Steven Hardy (shardy)
importance: Medium → High
tags: added: havana-rc-potential
Revision history for this message
Steve Baker (steve-stevebaker) wrote :

Attached is a YAML template which auto-fails after 60 seconds. This demonstrates the port-deletion issue.

Revision history for this message
Steve Baker (steve-stevebaker) wrote :

The attached template does not specify the port resources at all, and uses the instance property SubnetId instead of NetworkInterfaces, i.e.:

      SubnetId:
        Ref: private-subnet

New ports are created on each reboot.

Please let me know if this workaround fixes your problem. I'll continue to look into options for detaching a port when an instance is deleted, so that the port resource can be reused.
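
For reference, a minimal sketch of how testInst1 from the original template would look with this workaround, in the YAML style of the attached template (values carried over from the original; treat the exact shape as an assumption):

    testInst1:
      Type: AWS::EC2::Instance
      Properties:
        ImageId: ubuntu1304-v5-amd64
        InstanceType: m1.small
        KeyName:
          Ref: KeyName
        # No NetworkInterfaces and no explicit port resource: Nova
        # creates a fresh port on each boot, so nothing is lost when
        # the instance is deleted.
        SubnetId:
          Ref: private-subnet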

Changed in heat:
assignee: Steven Hardy (shardy) → Steve Baker (steve-stevebaker)
Revision history for this message
Steve Baker (steve-stevebaker) wrote :

Corrected version of SubnetId approach

Revision history for this message
Steve Baker (steve-stevebaker) wrote :

So it looks like neutron ports can only be used once. Manually running the following also results in the neutron port being deleted:
    nova interface-detach <server> <port-id>

I don't think the fix for this will be simple, so I would recommend using the SubnetId workaround mentioned above.

This, combined with shardy's observation about the HARestarter itself being recreated, means Stack.restart_resource might need quite a different approach: something that updates the dependencies rather than doing a destroy/create on each one (that alone still wouldn't fix the deleted-port issue, but it might be part of the solution).
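
For illustration, a rough sketch (paraphrased, not the actual Heat source) of the destroy/create walk that Stack.restart_resource effectively performs, per the comments above; the helper name is a placeholder:

    def restart_resource(stack, resource_name):
        # The resource plus everything that (transitively) requires it:
        # here testInst1, its alarms and the HARestarter itself, which
        # matches the delete/create sequence in heat-engine.log above.
        # (The port is not in this set; Nova deletes it as a side
        # effect of deleting the instance.)
        deps = list(stack.required_by_closure(resource_name))  # placeholder helper

        for res in reversed(deps):   # destroy dependents-first
            res.destroy()
        for res in deps:             # then recreate in dependency order
            res.create()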

Thierry Carrez (ttx)
Changed in heat:
milestone: icehouse-1 → havana-rc2
Revision history for this message
Steve Baker (steve-stevebaker) wrote :

I think fixing this is too high risk at this point. There is a workaround for the ports issue (use SubnetId instead).

It looks like a proper fix would mean a complete rewrite of Stack.restart_resource, plus a bit more.

tags: removed: havana-rc-potential
Changed in heat:
milestone: havana-rc2 → icehouse-1
Changed in heat:
milestone: icehouse-1 → icehouse-2
Changed in heat:
milestone: icehouse-2 → icehouse-3
Thierry Carrez (ttx)
Changed in heat:
milestone: icehouse-3 → icehouse-rc1
Revision history for this message
Steve Baker (steve-stevebaker) wrote :

I'll leave this open for now, but I think HARestarter should be deprecated.

Changed in heat:
milestone: icehouse-rc1 → next
assignee: Steve Baker (steve-stevebaker) → nobody
Revision history for this message
sireesha chintada (c-sireesha) wrote :

Sorry!
I accidentally pressed the Status button and it changed from Confirmed to New... I have reverted it.

Changed in heat:
status: Confirmed → New
status: New → Confirmed
Revision history for this message
Steve Baker (steve-stevebaker) wrote :

Related Nova bug #1158684.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to heat (master)

Fix proposed to branch: master
Review: https://review.openstack.org/121702

Changed in heat:
assignee: nobody → Steve Baker (steve-stevebaker)
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on heat (master)

Change abandoned by Steve Baker (<email address hidden>) on branch: master
Review: https://review.openstack.org/121702
Reason: Prefer https://review.openstack.org/#/c/121824/1

Revision history for this message
Angus Salkeld (asalkeld) wrote :

Moved to Low as it is now deprecated, and reset to "Confirmed" so as not to give the impression someone was working on it (Steve abandoned his change).

Changed in heat:
importance: High → Medium
assignee: Steve Baker (steve-stevebaker) → nobody
importance: Medium → Low
status: In Progress → Confirmed
Thomas Herve (therve)
Changed in heat:
status: Confirmed → Won't Fix