Zun

Docker does not allow creating two networks with the same CIDR

Bug #1690284 reported by Shunli Zhou
This bug affects 3 people
Affects: Zun · Status: Fix Released · Importance: High · Assigned to: Kien Nguyen

Bug Description

Running the Zun tempest cases in devstack with two neutron networks (public, private),

creating a container fails because a network with the same IPAM options:
(Pdb) ipam_options
{'Config': [{'Subnet': u'10.0.0.0/28', 'Gateway': u'10.0.0.1'}], 'Driver': 'kuryr', 'Options': {'neutron.pool.uuid': None}}
already exists.

[root@localhost zun]# docker network inspect 88e56a595a40
[
    {
        "Name": "1a9c8be9-897a-4415-8f25-96699a89842b-0fc39ebf282c434390199b6225697792",
        "Id": "88e56a595a4086a74b28e157d8e0b04212b9dd294c6f258f0b0dcc12337f51b8",
        "Created": "2017-05-11T15:48:48.445660824+08:00",
        "Scope": "local",
        "Driver": "kuryr",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "kuryr",
            "Options": {
                "neutron.pool.uuid": ""
            },
            "Config": [
                {
                    "Subnet": "10.0.0.0/28",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {
            "neutron.net.uuid": "1a9c8be9-897a-4415-8f25-96699a89842b",
            "neutron.pool.uuid": ""
        },
        "Labels": {}
    }
]

The Docker driver does not allow two networks with the same CIDR, but Zun decides whether to create a network based only on whether a network named after the neutron-net-id + project-id already exists. This can lead docker-py to fail when creating a network whose CIDR is already in use.
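The mismatch between the two uniqueness keys can be sketched in a few lines of Python. This is illustrative only, not Zun's or kuryr's actual code: the function names, the `existing_pools` set, and the error check are assumptions that mimic the behavior described above (Zun names Docker networks per neutron-net-id + project-id, while kuryr's IPAM keys pools on the CIDR).

```python
# Illustrative sketch (not Zun's actual code) of why two neutron networks
# whose subnets share a CIDR collide in kuryr's IPAM even though Zun
# gives them distinct Docker network names.

def docker_network_name(neutron_net_id, project_id):
    """Zun-style unique name: one Docker network per (net, project) pair."""
    return "%s-%s" % (neutron_net_id, project_id)

def build_ipam_options(subnet_cidr, gateway, pool_uuid=None):
    """IPAM config of the shape Zun passes to docker-py's create_network().

    Without a subnetpool uuid, kuryr can only key the pool on the CIDR.
    """
    return {
        "Driver": "kuryr",
        "Options": {"neutron.pool.uuid": pool_uuid},
        "Config": [{"Subnet": subnet_cidr, "Gateway": gateway}],
    }

# Pretend one pool with this CIDR was already created for another network.
existing_pools = {"10.0.0.0/28"}

def request_pool(ipam):
    """Mimic kuryr's IpamDriver.RequestPool duplicate-CIDR check."""
    cidr = ipam["Config"][0]["Subnet"]
    if cidr in existing_pools and not ipam["Options"]["neutron.pool.uuid"]:
        raise RuntimeError(
            "IpamDriver.RequestPool: Another pool with same cidr exist")
    return cidr
```

With no pool uuid, the second network sharing `10.0.0.0/28` raises the same error seen in the trace below; passing a subnetpool uuid lets the driver tell the pools apart.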

The following error shows up in docker/driver.py:

tp://192.168.2.205:9696/v2.0/subnets.json?network_id=f56dd6f5-b726-402b-9999-09f4ebdc86df used request id req-7d2f55ab-3fdf-44e6-aae0-d8ffaed86a25 from (pid=40095) _append_request_id /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:128
APIError: APIError...eate',),)
> /opt/stack/zun/zun/container/docker/driver.py(485)create_sandbox()
-> context, network_api, neutron_net['id'])
(Pdb) c
2017-05-12 10:43:39.549 ERROR zun.compute.manager [req-14f50260-b111-4e86-b8f0-834c2bd72fc3 tempest-TestContainer-1730353316 tempest-TestContainer-1730353316] Unexpected exception: Docker internal error: 500 Server Error: Internal Server Error ("IpamDriver.RequestPool: Another pool with same cidr exist. ipam and network options not used to pass pool name").
2017-05-12 10:43:39.549 TRACE zun.compute.manager Traceback (most recent call last):
2017-05-12 10:43:39.549 TRACE zun.compute.manager File "/opt/stack/zun/zun/compute/manager.py", line 84, in _do_container_create
2017-05-12 10:43:39.549 TRACE zun.compute.manager image=sandbox_image)
2017-05-12 10:43:39.549 TRACE zun.compute.manager File "/opt/stack/zun/zun/container/docker/driver.py", line 485, in create_sandbox
2017-05-12 10:43:39.549 TRACE zun.compute.manager context, network_api, neutron_net['id'])
2017-05-12 10:43:39.549 TRACE zun.compute.manager File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
2017-05-12 10:43:39.549 TRACE zun.compute.manager self.gen.throw(type, value, traceback)
2017-05-12 10:43:39.549 TRACE zun.compute.manager File "/opt/stack/zun/zun/container/docker/utils.py", line 41, in docker_client
2017-05-12 10:43:39.549 TRACE zun.compute.manager raise exception.DockerError(error_msg=six.text_type(e))
2017-05-12 10:43:39.549 TRACE zun.compute.manager DockerError: Docker internal error: 500 Server Error: Internal Server Error ("IpamDriver.RequestPool: Another pool with same cidr exist. ipam and network options not used to pass pool name").
2017-05-12 10:43:39.549 TRACE zun.compute.manager
2017-05-12 10:43:39.556 DEBUG oslo_messaging._drivers.amqpdriver [req-8d4e9f98-c3e9-46c2-af51-fdd406479f7d tempest-TestContainer-1730353316 tempest-TestContainer-1730353316] received message msg_id: bf617f68d1f145069f90b9343f37fd7c reply to reply_add568ae945e4104bd4f7189cc1139a8 from (pid=40095) __call__ /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:213

Revision history for this message
hongbin (hongbin034) wrote :

The error is raised from Kuryr: https://github.com/openstack/kuryr-libnetwork/blob/master/kuryr_libnetwork/controllers.py#L1427 . I guess the error will disappear once your kuryr installation contains this feature: https://blueprints.launchpad.net/kuryr-libnetwork/+spec/existing-subnetpool .

Changed in zun:
status: New → Won't Fix
Revision history for this message
hongbin (hongbin034) wrote :

Sorry, I found that I ran into the same issue. This is definitely a bug.

Changed in zun:
status: Won't Fix → Triaged
Revision history for this message
hongbin (hongbin034) wrote :

It looks like the issue is from tempest. Proposed a patch for that: https://review.openstack.org/#/c/466440/

hongbin (hongbin034)
Changed in zun:
importance: Undecided → High
assignee: nobody → hongbin (hongbin034)
Revision history for this message
Shunli Zhou (shunliz) wrote :

@hongbin, could you explain the problem? I do not know its root cause, so I cannot understand the fix patch or review it.

Revision history for this message
hongbin (hongbin034) wrote :

Shunli,

My understanding of the problem is that Kuryr is limited in how it can find the neutron subnet from the provided information. See here: https://github.com/openstack/kuryr-libnetwork#limitations-1 . Normally, Zun provides the CIDR of the subnet (e.g. 10.0.0.0/24) to docker (which passes it to kuryr). If there is more than one subnet with that CIDR, kuryr cannot tell which one we want. In that case, Zun provides the UUID of the subnetpool so that kuryr can find the subnet within that subnetpool. The error occurs when the subnet doesn't belong to a subnetpool; kuryr then complains that the same CIDR is taken by another subnet.

Revision history for this message
Shunli Zhou (shunliz) wrote :

Thanks, hongbin, for clarifying the problem.

hongbin (hongbin034)
Changed in zun:
assignee: hongbin (hongbin034) → nobody
hongbin (hongbin034)
Changed in zun:
assignee: nobody → hongbin (hongbin034)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to zun (master)

Fix proposed to branch: master
Review: https://review.openstack.org/502425

Changed in zun:
assignee: hongbin (hongbin034) → Kien Nguyen (kiennt26)
status: Triaged → In Progress
Revision history for this message
cooldharma06 (cooldharma06) wrote :

Hi @kien, are you still working on this bug, or has it been fixed? One of my tempest test cases is hitting the error above.

Revision history for this message
Kien Nguyen (kiennt2609) wrote :

Hi @cooldharma, I am working on this bug. I proposed a patch [1] to fix it, but it depends on another patch on the kuryr-libnetwork side [2]; that patch needs to land first.

[1] https://review.openstack.org/#/c/502425/
[2] https://review.openstack.org/#/c/499493/

Revision history for this message
cooldharma06 (cooldharma06) wrote :

@kien, thanks for the update and the patch.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to zun (master)

Reviewed: https://review.openstack.org/502425
Committed: https://git.openstack.org/cgit/openstack/zun/commit/?id=5e4676706961bef4595dd07e8505d5d6f4a91576
Submitter: Jenkins
Branch: master

commit 5e4676706961bef4595dd07e8505d5d6f4a91576
Author: Kien Nguyen <email address hidden>
Date: Mon Sep 11 15:49:50 2017 +0700

    Allow create/run container with network which has same cidr

    For example, we have two networks (network1 & network2), both of which
    have subnets with the same cidr. We can run two containers on the two
    different networks (which share a cidr).

    $ zun create --net network=network1 cirros
    $ zun create --net network=network2 cirros

    Change-Id: I1c57ad3d6d195a5f04b5206cde472298a999f2d3
    Closes-Bug: #1690284

Changed in zun:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/zun 1.0.0

This issue was fixed in the openstack/zun 1.0.0 release.
