[RFE] neutron-vpnaas OpenVPN driver

Bug #1837847 reported by Adriaan Schmidt on 2019-07-25

Bug Description

I started implementing an OpenVPN driver that allows remote client logins to Neutron networks, similar to the patches started and then abandoned by Rajesh Mohan [1].
In my specific use case this allows remote clients to join Neutron networks in a way that allows broadcast/multicast communication with the instances.
There is a PoC with code in gerrit [2].

One point of criticism of the previous implementation was the storage of VPN server secrets. I addressed this by storing them in Barbican.

There is one questionable detail in the current implementation: IP addresses of remote clients are not assigned by OpenVPN. Instead, during the connection process, a Neutron Port is created, and the IP address is assigned by the Neutron DHCP service. This is ugly, and I didn’t find a good way to clean up those ports when clients disconnect.
But doing it this way, the only neutron-vpnaas object needed is a vpnservice, which made a first implementation simpler. I expect that having OpenVPN assign the addresses would also require an endpoint group (to configure the address range for the VPN server) and a site connection (which may in turn require IKE and IPsec policies as well).

Any feedback is welcome.

[1] https://review.opendev.org/#/c/70274/
[2] https://review.opendev.org/#/c/666282

Miguel Lavalle (minsel) wrote :

Hi Adriaan,

Will you be able to carry out the implementation completely?

tags: added: rfe-confirmed
removed: rfe

Hi Miguel,
I will need some guidance, also when it comes to writing appropriate tests, but yes, I can carry out the implementation.

Nate Johnston (nate-johnston) wrote :

When you say your use case is to have broadcast/multicast communication with the instances, does that mean the vpn IP needs to be within the same L2 domain? If so, then I don't see how going through Neutron IPAM can be avoided, whether it's in a pre-reservation capacity or on-demand.

Yes, the idea is to have the VPN clients in the same L2 domain.
Currently this is done by creating a Linux bridge in the qrouter namespace that connects the qr-* port with the tap device created by the VPN server. This is possibly another point we should discuss in more detail.

Technically, I can bypass Neutron IPAM. I can have the OpenVPN server assign IP addresses from a range that is not part of a Neutron allocation pool, and Neutron will never know about these clients. But it's probably not a good idea to do this.
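For illustration, bypassing Neutron IPAM this way would amount to a bridged OpenVPN server handing out its own client pool. A config fragment sketch, with hypothetical addresses chosen to sit outside any Neutron allocation pool:

```
# OpenVPN server config fragment (hypothetical addresses)
dev tap0
# server-bridge <gateway> <netmask> <pool-start> <pool-end>
server-bridge 10.0.0.1 255.255.255.0 10.0.0.200 10.0.0.254
```

With this, clients get addresses from the 10.0.0.200-10.0.0.254 pool directly from OpenVPN, and Neutron never learns about them.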

One difficulty I faced: the VPN server calls a hook script on client connect/disconnect events. This does not run in the agent/driver context, so I'm not sure if I can make RPC calls from there (to allocate IPs, create Ports, ...).

Miguel Lavalle (minsel) wrote :

In your description you mention you create Neutron ports. In comment #4, it seems you suggest you cannot talk to Neutron. Would you clarify?

The way I create Neutron ports is a hack. It goes:

- Agent starts OpenVPN server, and is then no longer involved.

- Client initiates connection to OpenVPN, passes username and password.

- OpenVPN calls my authentication hook, where I try to get a Keystone token (via the client API) to decide if the client is allowed to connect. If I get a token, I store it temporarily in a file.

- Client connection succeeds, and OpenVPN calls my learn-address hook, passing the client's MAC address. Here I re-use the stored token to create the port (again via the client API), as the client user who is connecting. Then I forget/delete the token.

- Client disconnects, and OpenVPN could call a hook again, where I would like to remove the port. But I no longer have a valid token to call the client API.
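The sequence above could be sketched as a pair of hook entry points. The Keystone and Neutron calls are stubbed placeholders here (`keystone_get_token`, `neutron_create_port` and the token-directory layout are made up for illustration, not the actual driver code):

```python
import os

def keystone_get_token(username, password):
    # Placeholder for the real Keystone client API call; any truthy
    # string stands in for a token here.
    return "dummy-token" if password == "secret" else None

def neutron_create_port(token, mac):
    # Placeholder for the real Neutron client API call that would
    # create the port as the connecting user.
    pass

def auth_hook(credentials_file, token_dir):
    """auth-user-pass-verify: OpenVPN writes username and password,
    one per line, into a temporary file and passes its path."""
    with open(credentials_file) as f:
        username, password = f.read().splitlines()[:2]
    token = keystone_get_token(username, password)
    if token is None:
        return 1                     # non-zero exit rejects the login
    # stash the token until the learn-address hook fires
    with open(os.path.join(token_dir, username), "w") as f:
        f.write(token)
    return 0

def learn_address_hook(op, mac, common_name, token_dir):
    """learn-address: op is 'add', 'update' or 'delete'; on a tap
    server the learned address is the client's MAC."""
    token_file = os.path.join(token_dir, common_name)
    if op == "add" and os.path.exists(token_file):
        with open(token_file) as f:
            token = f.read()
        neutron_create_port(token, mac)
        os.remove(token_file)        # forget/delete the token
    elif op == "delete":
        # the token is gone by now, so there is nothing left to
        # authenticate a port-delete call with -- the cleanup gap
        pass
    return 0
```

The 'delete' branch makes the cleanup problem visible: by the time the client disconnects, the token that created the port has already been discarded.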

Miguel Lavalle (minsel) wrote :

Thanks for your responses. The Neutron drivers team is going to discuss this RFE on August 9th at 1400 UTC, in channel #openstack-meeting (freenode)

tags: added: rfe-triaged
removed: rfe-confirmed
Changed in neutron:
importance: Undecided → Wishlist
status: New → Triaged
YAMAMOTO Takashi (yamamoto) wrote :

hi Adriaan,

Can you please describe what a vpnservice object for OpenVPN looks like?
E.g., if/how is the router_id field used?
I wonder how you describe an L2-based VPN with a vpnservice.

Miguel Lavalle (minsel) wrote :

This RFE was discussed during the drivers weekly meeting on August 9th: http://eavesdrop.openstack.org/meetings/neutron_drivers/2019/neutron_drivers.2019-08-09-14.00.log.html#l-32. The result of that discussion was the question posed in #8 above. The drivers team will wait for the submitter's response.


First of all: thanks for your interest in my OpenVPN driver.
I will be on vacation for the next two weeks, but after that I'll try to join the IRC meetings.

Just to clarify: the hook-scripts run on the server's side, and are triggered when a client connects to openvpn:
- I use an authentication hook to verify the client's username and password against Keystone (if the client can get a Keystone token, they're allowed to log in to the VPN). This I actually do via the Keystone client API, and it doesn't need any interaction with Neutron or the Neutron agent.
- After the client has connected successfully, there's another hook, giving me the client's MAC address. I think that the "registration" of the client in Neutron (create a Port, assign an IP address, ...) should be done by the Neutron agent (instead of the procedure I described in #6). Can you recommend a way for the hook script to signal events to the agent (I guess I could use a local socket, or monitor a file...)?

Re the vpnservice object: I'm using the router_id in the same way the IPSEC VPN does (I think).
I start an openvpn process in the router's network namespace, and have it listen on vpnservice.external_ip (the qg-* device). From what I can tell, there is nothing special in describing an L2-based VPN, and so far I didn't feel a need to change APIs.
The difference comes when clients connect. Then instead of connecting networks via routing, I bring clients directly into the Neutron network by bridging OpenVPN's tap device with the qr-* port of the router. I'm not making any changes to routing/iptables.

The result is then:

$ ip netns exec qrouter-f4d3c149-f85a-4530-98f1-2efdbb151787 ip -4 l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qr-br-cf47a7c9 state UP mode DEFAULT group default qlen 100
    link/ether 42:66:e6:18:a0:7a brd ff:ff:ff:ff:ff:ff
3: qr-br-cf47a7c9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 42:66:e6:18:a0:7a brd ff:ff:ff:ff:ff:ff
18: qr-cf47a7c9-dc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master qr-br-cf47a7c9 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fa:16:3e:4c:82:21 brd ff:ff:ff:ff:ff:ff
19: qg-7342ede0-d1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fa:16:3e:39:b8:67 brd ff:ff:ff:ff:ff:ff

$ ip netns exec qrouter-f4d3c149-f85a-4530-98f1-2efdbb151787 ip -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet scope host lo
       valid_lft forever preferred_lft forever
3: qr-br-cf47a7c9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    inet brd scope global qr-br-cf47a7c9
       valid_lft forever preferred_lft forever
19: qg-7342ede0-d1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    inet ...


Miguel Lavalle (minsel) wrote :

Above, in #10, you state: "I think that the "registration" of the client in Neutron (create a Port, assign an IP address, ...) should be done by the Neutron agent (instead of the procedure I described in #6)."

Have you considered doing this with agent extensions: https://docs.openstack.org/neutron/latest/contributor/internals/agent_extensions.html

Miguel Lavalle (minsel) wrote :

We will revisit this RFE on September 5th, 1400UTC in the #openstack-meeting channel, freenode

Slawek Kaplonski (slaweq) wrote :


In comment #6 you wrote "- Client disconnects, and OpenVPN could call a hook again, where I would like to remove the port. But I no longer have a valid token to call the client API." - so how exactly will the cleanup of such ports be done?

YAMAMOTO Takashi (yamamoto) wrote :


I'm still not sure about the API.

1. The use of a router for an L2 VPN looks a bit awkward.
2. To which network is it connected when the router has more than one router port?

About the way to create Neutron ports: maybe you can look at how the DHCP agent creates its ports.

Nate Johnston (nate-johnston) wrote :

In neutron, port creation is initiated from the neutron api server and pushed down to the agents; there is not (that I am aware of) an RPC method whereby an agent can initiate a request for port provisioning/deprovisioning. That request would need to be filed in such a way that the resource would be created in the same location as the OpenVPN that is constructing the VPN connection; I am not sure if that semantic exists.

Akihiro Motoki (amotoki) wrote :

This is an L2 VPN, not an L3 VPN. You said there is no need to change the current VPNaaS API, but it is unclear.
For example, "VPN service" in the current API takes 'router_id' as a mandatory field, so it is not clear how the 'router' is involved in an L2 VPN. I would like to know how the current VPNaaS API can be used in your proposal.

Miguel Lavalle (minsel) wrote :

Hi Adriaan,

We discussed this RFE again during the drivers meeting. In #13, #14, #15 and #16 you will find questions that the team still has about this proposal. We also commented that a spec with diagrams might help us move forward with this proposal; it would also be easier to answer our questions in that spec. Here you will find examples of current specs: https://specs.openstack.org/openstack/neutron-specs/specs/train/index.html. You can use those as format guidelines.

Thanks for your proposal

Dear all,

thanks for your comments.
I uploaded a first draft of a spec for my proposal:

I hope this answers some questions.

re Routers and L2 vs. L3 VPNs (#14, #16):

Running the VPN server in the Router namespace seemed like a logical choice,
but I didn't really question it. I based my implementation on the existing
IPSEC driver and on the previous attempt at an OpenVPN driver.
The nice thing about the solution is that there's already a namespace, and
the Router has IPs on which I can have the VPN server listen.

I'm not sure how this could be moved to "properly live" in Neutron's L2.
Would that mean having it as an extension to l2gw? I'm not familiar with l2gw,
but it looks to me like it would be less flexible to have OpenVPN integrated
at that level.

re API changes (#16):

For my PoC, I managed to avoid creating any new API objects or fields, and I
used the fields already present in vpnservice. I'll provide details in
the spec. But depending on how the discussion on the Neutron Port creation
is resolved, some API changes may be required.

re creation and cleanup of Neutron Ports (#11, #13, #15):

I assumed the main difficulty would be technical: notification of a new client's
login happens by OpenVPN calling a separate script, and I need to find a way
to notify the agent, which can then create the Port via RPC.

Now I understand that there may be a conceptual problem as well, and port
creation initiated by an agent is not something that is currently done anywhere.

I see two alternative solutions (which I also put into the spec draft):
1. Leave it all up to the client. We would then expect the client to create
its Port in advance, providing its MAC address. Those Ports can be persistent,
or they can be removed by the client after a disconnect.
2. (briefly mentioned in #4): Don't create Neutron ports.

YAMAMOTO Takashi (yamamoto) wrote :

Maybe the script can just notify the agent via an AF_LOCAL socket or FIFO, instead of talking to the API servers by itself?
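A minimal sketch of that suggestion, using a stdlib AF_UNIX datagram socket (the socket path and JSON message format are invented for illustration; the agent side would react by creating or deleting the port over its existing RPC channel):

```python
import json
import os
import socket

# Hypothetical rendezvous path, owned by the VPN agent.
SOCK_PATH = "/var/run/neutron-vpnaas/openvpn-events.sock"

def notify_agent(event, mac, sock_path=SOCK_PATH):
    """Hook-script side: fire-and-forget datagram announcing that a
    client connected or disconnected."""
    msg = json.dumps({"event": event, "mac": mac}).encode()
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        s.sendto(msg, sock_path)
    finally:
        s.close()

def agent_bind(sock_path=SOCK_PATH):
    """Agent side: (re)bind the event socket."""
    if os.path.exists(sock_path):
        os.unlink(sock_path)
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    s.bind(sock_path)
    return s

def agent_recv_event(sock):
    """Agent side: block for one event and decode it."""
    data, _addr = sock.recvfrom(4096)
    return json.loads(data)
```

A FIFO or a monitored file would work equally well; the datagram socket just keeps message boundaries intact.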

Miguel Lavalle (minsel) wrote :

The Neutron drivers team agreed on the need for an L2 VPN to support multicast traffic to and from the remote clients. As a consequence, this RFE is approved and we will hash out the technical details in https://review.opendev.org/#/c/680990/

tags: added: rfe-approved
removed: rfe-triaged