Incorrect logic of subnet fetching in /IpamDriver.RequestAddress handling

Bug #1585572 reported by vikas choudhary
Affects: kuryr
Status: Fix Released
Importance: High
Assigned to: vikas choudhary

Bug Description

If two Docker networks are created with the same subnet CIDR, the current subnet-retrieval logic will not return the correct subnet: filtering by CIDR alone matches the subnets of both networks, and nothing in the request says which one is meant.
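
A minimal sketch of the ambiguity, assuming a plain python-neutronclient lookup (the helper name is illustrative, not the actual Kuryr code):

    # assumes an authenticated client, e.g.:
    # from neutronclient.v2_0 import client as n_client
    # neutron = n_client.Client(session=sess)

    def subnet_for_cidr(neutron, cidr):
        subnets = neutron.list_subnets(cidr=cidr)['subnets']
        if len(subnets) != 1:
            # Two docker networks created with the same --subnet each own a
            # Neutron subnet with this CIDR, so a CIDR-only lookup cannot
            # tell which subnet /IpamDriver.RequestAddress refers to.
            raise RuntimeError('%d subnets match %s' % (len(subnets), cidr))
        return subnets[0]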

Changed in kuryr:
assignee: nobody → vikas choudhary (choudharyvikas16)
status: New → In Progress
Revision history for this message
vikas choudhary (choudharyvikas16) wrote :

How about this approach:
Let's maintain a boot-persistent PoolID-to-NetworkID mapping, adding an entry at network creation time.
At port creation time, the PoolID is received from libnetwork; using the mapping, the network ID can be fetched and, from it, the appropriate subnet.
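
A rough sketch of such a mapping, assuming a node-local JSON file as the boot-persistent store (the path and helper names are hypothetical):

    import json
    import os

    MAP_FILE = '/var/lib/kuryr/pool_network_map.json'  # hypothetical location

    def record_mapping(pool_id, network_id):
        # Called from the network-create handler.
        mapping = {}
        if os.path.exists(MAP_FILE):
            with open(MAP_FILE) as f:
                mapping = json.load(f)
        mapping[pool_id] = network_id
        with open(MAP_FILE, 'w') as f:
            json.dump(mapping, f)

    def network_for_pool(pool_id):
        # Called from the RequestAddress handler; the network ID then
        # leads to the appropriate subnet.
        with open(MAP_FILE) as f:
            return json.load(f)[pool_id]

As the discussion below notes, a node-local store like this does not sync across multiple worker nodes.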

Revision history for this message
Irena Berezovsky (irenab) wrote :

Can you please add more details regarding the execution flow and the problem in the implementation that prevents it from working?

Revision history for this message
vikas choudhary (choudharyvikas16) wrote :

Sure, I have updated the details in the description. I see a problem in my approach: in the case of multiple nodes it will not work, as there will be no sync-up between them. Should we use a distributed KV store such as Redis or etcd for storing this mapping? Please suggest if there is any other approach.

Revision history for this message
Mohammad Banikazemi (mb-s) wrote :

@vikas, I don't think the KV store is an issue; we could probably store the pool ID as a tag on the corresponding network, but the problem is that I don't think we know the association.
That is, when the pool is created there is no reference to the network, and when the network is created there is no reference to the pool. As far as the remote driver/plugin is concerned, it has no idea about the pools and the IPAM stuff. Am I missing something here?

The default Docker IPAM driver uses the same pool if the same CIDR is specified during the creation of multiple networks. Furthermore, it does not currently allow overlapping CIDRs within an address space.

The OpenStack user expectation, however, is that she can specify overlapping or identical CIDRs for different networks.

Changed in kuryr:
importance: Undecided → High
Revision history for this message
Mohammad Banikazemi (mb-s) wrote :

Let us assume for the moment that we do not have concurrent requests. Then the following may be a solution, encoding the information in the names of subnets and subnetpools (a cleaner solution with tags will be possible once tags are available for these resources, but they are not right now):

When the subnet is being created (right after the pool is created, network create is called, and that is where we create the subnet), find the "unbound" pool with the same CIDR. Use its ID as the name of the subnet, and change the name of the pool to mark it as "bound".

Now, when there is a request for an address from a pool, you can easily identify the subnet associated with the pool by looking for the subnet whose name equals the pool ID, as sketched below.
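
A sketch of the scheme with python-neutronclient, assuming no concurrent creates as stated above; the "bound-" prefix and helper names are illustrative:

    def bind_subnet_to_pool(neutron, cidr, subnet_id):
        # Find a subnetpool with the same CIDR that is not yet marked bound.
        pools = [p for p in neutron.list_subnetpools()['subnetpools']
                 if cidr in p['prefixes'] and not p['name'].startswith('bound-')]
        pool = pools[0]
        # Use the pool ID as the subnet name, then mark the pool as bound.
        neutron.update_subnet(subnet_id, {'subnet': {'name': pool['id']}})
        neutron.update_subnetpool(pool['id'],
                                  {'subnetpool': {'name': 'bound-' + pool['id']}})

    def subnet_for_pool(neutron, pool_id):
        # At /IpamDriver.RequestAddress: the subnet named after the pool ID.
        return neutron.list_subnets(name=pool_id)['subnets'][0]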

This will work as long as we don't have multiple concurrent create-pool requests into the Kuryr IPAM driver, which will be a concern in the multi-threaded version we will have at some point.

thoughts?

Revision history for this message
vikas choudhary (choudharyvikas16) wrote :

Mohammad, let's say there are two Docker worker nodes and the docker network create command is executed on both of them at the same time. There will then be two "unbound" pools at the time the Kuryr remote driver handles network create. Even with a single-threaded Kuryr, there is no sync-up between the Kuryr drivers running on different worker nodes. Please correct my understanding if it is wrong.

On, "That is, when the pool is created there is no reference to the network and when network is created there is no reference to the pool":
When network is created, poolID is there is json_data. Lets store it as a tag with this network. Now when there is request for an address from IPAM, poolID is there again in json_data. Now we can filter the subnet by matching this poolID with the stored poolID(subnet->network->poolID). Thoughts?
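
A sketch of this idea, using the network description field as a stand-in for tags (which are not yet available); all helper names are illustrative, and note that the next comment questions whether the PoolID is actually present in json_data:

    def tag_network_with_pool(neutron, network_id, pool_id):
        # At network create: remember which pool this network belongs to.
        neutron.update_network(network_id,
                               {'network': {'description': pool_id}})

    def subnet_for_pool_via_network(neutron, pool_id, cidr):
        # At /IpamDriver.RequestAddress: find the network remembered for
        # this pool, then the subnet with the requested CIDR inside it.
        nets = [n for n in neutron.list_networks()['networks']
                if n.get('description') == pool_id]
        return neutron.list_subnets(network_id=nets[0]['id'],
                                    cidr=cidr)['subnets'][0]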

I was trying to reach you on IRC, but no luck.

Revision history for this message
Mohammad Banikazemi (mb-s) wrote :

> Mohammad, let's say there are two Docker worker nodes and the docker network create command is executed on both of them at the same time. There will then be two "unbound" pools at the time the Kuryr remote driver handles network create. Even with a single-threaded Kuryr, there is no sync-up between the Kuryr drivers running on different worker nodes. Please correct my understanding if it is wrong.

Vikas, you are correct.

> On, "That is, when the pool is created there is no reference to the network and when network is created there is no reference to the pool": When network is created, poolID is there is json_data.

I don't think that is the case. I hope I am wrong, but this is an example of what I see in create network:

{u'NetworkID': u'3795118feb3e91c6b0e99ab9919491aac9d2c426a4b6448443e0fe0dd863441d', u'IPv4Data': [{u'Pool': u'10.10.18.0/24', u'Gateway': u'10.10.18.1/24', u'AddressSpace': u''}], u'IPv6Data': [], u'Options': {u'com.docker.network.enable_ipv6': False, u'com.docker.network.generic': {}}}

The pool is referred to by its CIDR only.

In general:

    {
        "NetworkID": string,
        "IPv4Data": [{
            "AddressSpace": string,
            "Pool": ipv4-cidr-string,
            "Gateway": ipv4-address,
            "AuxAddresses": {
                "<identifier1>": "<ipv4-address1>",
                "<identifier2>": "<ipv4-address2>",
                ...
            }
        }, ...],
        "IPv6Data": [{
            "AddressSpace": string,
            "Pool": ipv6-cidr-string,
            "Gateway": ipv6-address,
            "AuxAddresses": {
                "<identifier1>": "<ipv6-address1>",
                "<identifier2>": "<ipv6-address2>",
                ...
            }
        }, ...],
        "Options": {
            ...
        }
    }

Revision history for this message
vikas choudhary (choudharyvikas16) wrote :

Mohammad, let me try it out a bit. I will get back to you soon.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to kuryr (master)

Fix proposed to branch: master
Review: https://review.openstack.org/322445

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/331050

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to kuryr (master)

Reviewed: https://review.openstack.org/331050
Committed: https://git.openstack.org/cgit/openstack/kuryr/commit/?id=709fede534ec15e4f260c78019083aff9cf19282
Submitter: Jenkins
Branch: master

commit 709fede534ec15e4f260c78019083aff9cf19282
Author: vikaschoudhary16 <email address hidden>
Date: Fri Jun 17 16:23:34 2016 +0530

    Short-Term fix for overlapping cidrs using docker options

    A quick alternative solution, until the PRs to upstream Docker get
    accepted, is for the Docker user to pass the pool name to both the Kuryr
    IPAM driver and the Kuryr network driver, using the corresponding network
    and ipam options respectively:

        $ sudo docker network create --driver=kuryr --ipam-driver=kuryr \
            --subnet 10.0.0.0/16 --ip-range 10.0.0.0/24 \
            -o neutron.pool.name=neutron_pool1 \
            --ipam-opt=neutron.pool.name=neutron_pool1 \
            foo
        eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364

    Now the Docker user creates another network with the same CIDR as the
    previous one, i.e. 10.0.0.0/16, but with a different pool name,
    neutron_pool2:

        $ sudo docker network create --driver=kuryr --ipam-driver=kuryr \
            --subnet 10.0.0.0/16 --ip-range 10.0.0.0/24 \
            -o neutron.pool.name=neutron_pool2 \
            --ipam-opt=neutron.pool.name=neutron_pool2 \
            bar
        397badb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d786

    At ``/IpamDriver.RequestAddress``, the correct subnet will be selected
    using the corresponding pool name received from libnetwork, as explained
    above.

    Please refer to https://review.openstack.org/#/c/326894/6

    TODO: unit test cases covering docker option scenario

    DocImpact
    Change-Id: I7090027e68e8c78219a387da66e1bd30be900ab1
    Closes-bug: #1585572
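
A minimal sketch of the subnet selection the commit message describes, assuming python-neutronclient; the helper name is illustrative, not the merged code:

    def choose_subnet(neutron, cidr, ipam_options):
        # The pool name arrives with the /IpamDriver.RequestAddress request
        # because the user passed --ipam-opt=neutron.pool.name=...; it
        # disambiguates networks that share the same CIDR.
        pool_name = ipam_options['neutron.pool.name']
        pool = neutron.list_subnetpools(name=pool_name)['subnetpools'][0]
        # Keep only the subnet allocated from that subnetpool.
        subnets = [s for s in neutron.list_subnets(cidr=cidr)['subnets']
                   if s.get('subnetpool_id') == pool['id']]
        return subnets[0]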

Changed in kuryr:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on kuryr (master)

Change abandoned by Antoni Segura Puimedon (<email address hidden>) on branch: master
Review: https://review.openstack.org/322445
Reason: Inactive
