ipv6 support

Bug #309402 reported by bouilloire on 2008-12-18
This bug affects 4 people

Bug Description

Support for ipv6 as data transport and for client <-> hub communication should be added. As it looks like the ADC protocol already has ipv6 support, that shouldn't require too much work.

Changed in dcplusplus:
importance: Undecided → Low
status: New → Confirmed

Adding this to the dcpp team since we should have support for this, given that it's in the protocol. Also changed to Wishlist since we are aware that it's not in the client yet; I think poy mumbled something about not having the testing capabilities, can't recall exactly. ^^

Changed in dcplusplus:
assignee: nobody → Dcplusplus-team (dcplusplus-team)
importance: Low → Wishlist
tags: added: ipv6 support
Tehnick (tehnick) wrote :

Hi everybody,

Your information is stale.

The ADC hub uhub has had this feature for about a year.
Testing was done with the PtokaX DC hub and even the CzDC client:

Alexander Voikov (alex-voikov) wrote :

I'm working on IPv6 support (I think only for ADC) in linuxdcpp...
Does anyone else need it? File transfers between IPv6 clients are working now, but it seems IPv4 support is broken. I'm working on it ;) I'll upload a patch later.

Alexander Voikov (alex-voikov) wrote :

Sorry for the mistake - I thought this was the linuxdcpp page.

Tehnick (tehnick) wrote :

Don't worry. This is a feature for the DC++ core, so this is the correct place...

Changed in linuxdcpp:
status: New → Confirmed
importance: Undecided → Wishlist
Big Muscle (bigmuscle) wrote :

When both I4 and I6 are available in the INF, where should the client connect to? To the address specified in I4 or in I6?

Changed in strongdc:
status: New → In Progress
assignee: nobody → Big Muscle (bigmuscle)
Tehnick (tehnick) wrote :

It could be a special option. And I think that IPv6 as the default connection is better, because it is the future.

You can also take into account the experience of torrent clients. The programs I have seen prefer an IPv6 connection when it is available, but use both of them.

Tehnick (tehnick) wrote :

And if you can combine IPv6 with your implementation of DHT, it will be a really great feature.

Big Muscle (bigmuscle) wrote :

And is there any ADC hub able to run on IPv6? I tried uhub, but I was only able to run it on IPv4.

Tehnick (tehnick) wrote :

1) Did you launch it on MS Windows or on another system?
2) What type of IPv6 connection have you got? (direct connection / tunnel broker / Teredo)
3) What value of the server_bind_addr variable (in uhub.conf) did you try? It should be something like this:
(for localhost)
Note: if server_bind_addr=any, only IPv4 interfaces are used...

Tehnick (tehnick) wrote :

Hmm, the official documentation says that the program automatically detects whether IPv6 is supported and prefers it:
"Specify the IP address the local hub should bind to. This can be an IPv4 or IPv6 address, or one of the special addresses "any" or "loopback". When "any" or "loopback" is used, the hub will automatically detect if IPv6 is supported and prefer that."


Big Muscle (bigmuscle) wrote :

I'm on Win7 Pro x64. If I run uhub, it says that it's listening on an IPv4 address. If I change the config from "any" to ::1, uhub closes immediately after startup. If I try to connect to the hub via ::1 or that long IPv6 address (which I found in the Windows Network Center), it says something like "Can't connect because the remote server actively refused the connection" (I don't know the exact English translation).

Tehnick (tehnick) wrote :

Sorry, I wrote a wrong example:
::1 is the analog of the IPv4 loopback address; the analog of the "any" address in IPv6 is ::

So you can try server_bind_addr=::

We already tried the latest version of uHub from its master branch on Debian and Arch Linux, and it works fine with the default settings:
2011-07-28 21:39:30 INFO: Starting uhub/0.4.0, listening on :::1511...
In full accordance with the official documentation.

Unfortunately, I don't have any Windows installation at my disposal. But I have put this question to our community, and when anybody launches uHub successfully, I will write about their experience here.

Did you check the IPv6 interface settings in your system?
Something like ipconfig /all in cmd.exe should show them.

Alexander Voikov (alex-voikov) wrote :


Successful launch. Tested with a patched linuxdcpp.

Alexander Voikov (alex-voikov) wrote :

in Debian unstable. uhub from git.

Tehnick (tehnick) wrote :

First results are not optimistic...

1) Unfortunately, the latest development version of uHub still has a compilation error on MS Windows.

2) The last stable release, 0.3.2, compiled normally, but it doesn't work with the IPv6 interface:
C:\uhub>uhub.exe -v
2011-07-29 03:23:08 WARN: Windows system, limited to 4096 connections.
2011-07-29 03:23:08 DEBUG: Initialized select network backend.
2011-07-29 03:23:08 DEBUG: IPv6 not supported.
2011-07-29 03:23:08 FATAL: Unable to start hub service

This is the case when server_bind_addr=:: is set in the config.

On my Debian system this version of uHub also works fine:
2011-07-28 22:50:32 INFO: Starting uhub/0.3.2, listening on :::1511...

$ netstat -a | grep 1511 | grep tcp6
tcp6 0 0 [::]:1511 [::]:* LISTEN

3) Precompiled files from http://www.extatic.org/downloads/uhub/

C:\uhub-0.2.8>uhub.exe -v
2011-07-29 03:42:10 DEBUG: IPv6 not supported.

C:\uhub-0.2.7>uhub.exe -v
2011-07-29 04:13:49 ERROR: Unknown ACL command on line 52: '.'

C:\uhub-0.2.6>uhub.exe -v
Thu, 28 Jul 2011 20:12:15 +0000 ERROR: Unknown ACL command on line 52: '.'


C:\uhub-0.2.2-3393>uhub.exe -v
Thu, 28 Jul 2011 20:20:50 +0000 DEBUG: IPv6 supported and enabled.
Thu, 28 Jul 2011 20:20:50 +0000 INFO: Starting server, listening on :::1511

C:\uhub-0.2.0-3293>uhub.exe -v
Thu, 28 Jul 2011 20:05:35 +0000 DEBUG: IPv6 supported and enabled.
Thu, 28 Jul 2011 20:05:35 +0000 INFO: Starting server, listening on :::1511

netstat says:
 TCP [::]:1511 [::]:0 LISTENING [uhub.exe]

Also I tried to launch the uhub binaries from wine.

TEMP/uhub-0.2.8$ wine uhub.exe -v
fixme:advapi:RegisterEventSourceA ((null),"mpich_mpd"): stub
fixme:advapi:RegisterEventSourceW (L"",L"mpich_mpd"): stub
fixme:advapi:ReportEventA (0xcafe4242,0x0004,0x0000,0x00000000,(nil),0x0001,0x00000000,0x85e5cc,(nil)): stub
fixme:advapi:ReportEventW (0xcafe4242,0x0004,0x0000,0x00000000,(nil),0x0001,0x00000000,0x143da8,(nil)): stub
fixme:advapi:DeregisterEventSource (0xcafe4242) stub
2011-07-28 23:26:18 DEBUG: ACL: Added user 'Dj_Offset' (admin)
2011-07-28 23:26:18 DEBUG: ACL: Added user 'janvidar' (operator)
2011-07-28 23:26:18 DEBUG: ACL: Deny access for: 'Hub-Security' (deny_nick)
2011-07-28 23:26:18 DEBUG: ACL: Deny access for: 'Administrator' (deny_nick)
2011-07-28 23:26:18 DEBUG: ACL: Deny access for: 'root' (deny_nick)
2011-07-28 23:26:18 DEBUG: ACL: Deny access for: 'admin' (deny_nick)
2011-07-28 23:26:18 DEBUG: ACL: Deny access for: 'username' (deny_nick)
2011-07-28 23:26:18 DEBUG: ACL: Deny access for: 'user' (deny_nick)
2011-07-28 23:26:18 DEBUG: ACL: Deny access for: 'guest' (deny_nick)
2011-07-28 23:26:18 DEBUG: ACL: Deny access for: 'operator' (deny_nick)
2011-07-28 23:26:18 DEBUG: IPv6 not supported.
2011-07-28 23:26:18 INFO: Starting uHub/0.2.8, listening on

TEMP/uhub-0.2.2-3393$ wine uhub.exe -v
err:menubuilder:init_xdg error looking up the desktop directory
fixme:advapi:RegisterEventSourceA ((null),"mpich_mpd"): stub
fixme:advapi:RegisterEventSourceW (L"",L"mpich_mpd"): stub
fixme:advapi:ReportEventA (0xcafe4242,0x0004,0x0000,0x00000000,(nil),0x0001,0x00000000,0x85e5cc,(nil)): stub
fixme:advapi:ReportEventW (0xcafe4242,0x0004,0x0000...


Big Muscle (bigmuscle) wrote :

I tried uhub version 0.2.0-3293 and IPv6 works there correctly. Now my client is able to connect to IPv4 hubs and IPv6 hubs according to given IP address format.

Tehnick (tehnick) wrote :

This is a rather amusing situation. I know about three different implementations of IPv6 in ADC clients, but I have not seen any of them in source form. Perhaps it's time to reinvent the wheel ourselves... =)

Big Muscle (bigmuscle) wrote :

IPv6 support in StrongDC++ is now almost ready - IPv4/IPv6 connections to hubs work, connections between users work correctly too. But there are two possible implementations and each one has its advantages and disadvantages. Now I have to choose one of them:

a) hybrid dual-stack
+ just one IPv6 socket is created and it supports both IPv4 and IPv6 connections
+ very easy to implement and code is almost clean
- it's not supported on older OS's (e.g. WinXP doesn't support it)
- IPv4 addresses are in IPv4-mapped IPv6 format, so conversion must be done

b) independent dual-stack
+ works on all OS's with IPv6 support (including WinXP)
+ IPv4 addresses are in IPv4 format
- two sockets need to be created - one for IPv4 and one for IPv6 connections (increases resource usage)
- almost everything needs to be doubled (you need to listen on both sockets, do blocking wait on two sockets etc.)
- more complicated to implement than the first method (you need to check the IP address version etc.)

StrongDC++ will use the first method. But what should happen when hybrid dual-stack is not supported (e.g. WinXP)? Should I fall back to the second method or completely disable IPv6 support in that case?
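The hybrid approach in a) boils down to a few socket calls. A minimal Python sketch (illustrative only, not StrongDC++'s actual code; the helper name is hypothetical) of a single dual-stack listening socket:

```python
import socket

def make_dual_stack_listener(port):
    # One IPv6 socket serving both address families (method a).
    # On systems without hybrid dual-stack support (e.g. WinXP),
    # socket creation or setsockopt fails - which is exactly where
    # a fallback to method b) or to IPv4-only would kick in.
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    # Clearing IPV6_V6ONLY lets the socket accept IPv4 clients too;
    # their addresses then appear in IPv4-mapped form (::ffff:a.b.c.d).
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    sock.bind(("::", port))
    sock.listen(5)
    return sock
```

The single-socket simplicity is the "+ code is almost clean" point above; the IPv4-mapped addresses it hands back are the "- conversion must be done" point.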

eMTee (realprogger) wrote :

a), with IPv6 completely disabled on XP, is the obvious choice. But what about *nix? Would this method be usable there?

Tehnick (tehnick) wrote :

From my point of view (a developer's opinion), we should use method a). In the case when the OS doesn't support hybrid dual-stack, the program should work with IPv4 only - but it should work fine. So users with old operating systems will miss one feature, but they will be able to take part in file sharing as before.

Also, converting addresses between IPv4 and IPv4-mapped IPv6 is very simple and can be done in the sources very easily, so users won't even notice the changes.

Jacek Sieka (arnetheduck) wrote :

The core will probably be opting for b) - but I'm doing it for educational reasons, to see how complicated it turns out...

In any case, it has a few advantages, such as making it easier to connect through IPv4 & IPv6 in parallel and see which one connects faster, instead of trying IPv6 first and then v4 (it stands to reason to try both if both are available)...

Big Muscle (bigmuscle) wrote :

eMTee: I think *nix systems shouldn't be any problem. But I will try it under Ubuntu to be 100% sure.

Tehnick: yes, the conversions are simple. There is also IN6ADDR_SETV4MAPPED to simplify that. But I think that adding and removing the "::ffff:" prefix from IP addresses should be fine.

arne: you can connect to IPv4 & IPv6 in parallel, but what would that be for in real usage? Hybrid dual sockets allow connecting to both IP versions without redundant code for separate sockets. But I must admit that b) would be more consistent with the ADC protocol, which provides U6, and therefore listening on a port different from U4 should be possible.

And since I mentioned the ADC protocol: there is the CTM command, but it doesn't specify whether <port> is for IPv4 or IPv6. It seems to be a little inconsistency - UDP ports can be different for each protocol, but there is only one TCP port (and nobody knows which protocol to use when I4 and I6 are both specified).

Big Muscle (bigmuscle) wrote :
Changed in strongdc:
status: In Progress → Fix Committed
Tehnick (tehnick) wrote :

I see it also works with DHT. This is great! People connected via NAT will be happy. Thanks a lot for the fast implementation.

Big Muscle (bigmuscle) wrote :

Yeah, since I touched the Socket layer only, it should work everywhere - ADC, NMDC, DHT, HTTP etc. I just need to test the behaviour under *nix systems. Also, I'm not sure about Util::getLocalIp() - but I know for certain that this function doesn't work correctly under *nix. Here, under Ubuntu, it always returns something other than my real IP.

Jacek Sieka (arnetheduck) wrote :

Well, for one, when resolving with getaddrinfo you might get more than one IP (in truth there might be more than 2, but I cheat a little), and you're supposed to connect to all of them, either in parallel or in the returned addrinfo order - so, for example, if a domain advertises both an IPv4 and an IPv6 address but the hub/server only runs on one, you're in for a long wait if you connect sequentially (and no connection at all if you keep using just the first resolved address). The second problem is that while you might have an IPv6 address, seeing that everything is quite fresh, it might not work for configuration reasons, so it stands to reason to try IPv4 as well if available.

As to the protocol, I agree that is an issue - and there are more, such as references to a long-deprecated RFC for the IPv6 address format (I haven't checked, but I hope it is in line with the latest representations, RFC 5952 and RFC 4291). In any case, until the issue is resolved, one might as well try both and see which one works (from what I can tell, most servers listen on the same port on both v4 and v6, especially the hybrid dual-stack ones - so the question is whether it makes sense to have separate U4 and U6 ports...).
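The "connect to every resolved address" advice above can be sketched in a few lines; a minimal Python illustration (the connect_any helper is hypothetical, not DC++ code) that walks the getaddrinfo results instead of stopping at the first:

```python
import socket

def connect_any(host, port, timeout=5.0):
    # Try every address getaddrinfo returns (IPv6 and IPv4) in the
    # returned order, instead of giving up after the first failure.
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(addr)
            return sock  # first address that accepts wins
        except OSError as e:
            last_err = e
    raise last_err or OSError("no addresses resolved")
```

With a sequential loop like this, a dead IPv6 address still costs one timeout before IPv4 is tried, which is exactly why the comment above suggests connecting in parallel instead.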

Jan Vidar Krey (janvidar) wrote :

Just to clarify about uhub and IPv6 support:
uhub uses Big Muscle's option a), but all binaries released in the past have been compiled on WinXP, which does not support the IPV6_V6ONLY define, and thus IPv6 is disabled.

It should work on all other OSes, and on Windows when compiled on newer versions.

Changed in adchpp:
status: New → Confirmed
importance: Undecided → Wishlist
Jacek Sieka (arnetheduck) wrote :

JVK, how do you handle I4/I6 auto-detection?

Jan Vidar Krey (janvidar) wrote :

Basically, if uhub receives a connection from an IPv4 address, it removes the I6 field, and vice versa. This means IPv4 clients cannot talk to IPv6 clients (at least not in active mode).
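That stripping behaviour can be sketched as a simple filter over the INF fields. The I4/I6 field names come from ADC; everything else here is a hypothetical illustration, not uhub's actual code:

```python
def filter_inf(inf_fields: dict, peer_ip: str) -> dict:
    # Keep only the address family the client actually connected
    # from, and overwrite it with the observed peer address - so
    # other clients never see an unverified address.
    out = dict(inf_fields)
    if ":" in peer_ip:        # connected via IPv6
        out.pop("I4", None)
        out["I6"] = peer_ip
    else:                     # connected via IPv4
        out.pop("I6", None)
        out["I4"] = peer_ip
    return out
```

The side effect Jan Vidar describes follows directly: an IPv6-connected client ends up with no I4 at all, so IPv4-only clients have no address to actively connect to.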

Changed in dcplusplus:
status: Confirmed → In Progress
Changed in adchpp:
status: Confirmed → In Progress
Tehnick (tehnick) wrote :

I just saw the commit: http://bazaar.launchpad.net/~dcplusplus-team/dcplusplus/trunk/revision/2605

So it looks like the original DC++ already has IPv6 support. Has anybody tested it?

Also, I haven't yet understood whether this is the same implementation as in BM's project or another one.

iceman50 (bdcdevel) wrote :

AFAIK it's a bit different from the one implemented in StrongDC++. As for IPv6 in DC++: if you know anyone with IPv6 addresses, it would be nice to test in pure IPv6 mode, so by all means, if you can help us with that, you can inquire in the DCDev Public hub.

Big Muscle (bigmuscle) wrote :

Implementations are different. (See above)

Pirre (pierreparys) wrote :

How is ADCH++ (or any other hubsoft) going to verify the second IP (the one it is not using for the hub connection)? If a hub is going to support both IPv4 and IPv6 for H-H, an IPv6 client can have SU TCP4 while connected via IPv6, and since that IP is not verified, it can be used for a CTM flood.

Jan Vidar Krey (janvidar) wrote :

uhub strips off all IP addresses, IPv4 or IPv6, and adds the one the client connected from.
So, if connected from an IPv6 address, then that IPv6 address is the only address other clients can see (and vice versa for IPv4).

Big Muscle (bigmuscle) wrote :

So then the question: how will hubs running on both versions handle this situation? Take a hub with 2000 users (1500 via IPv4, 500 via IPv6) - I guess the 1500 users can connect only among themselves, and likewise the 500? So those 500 users can't connect to the 1500 users and vice versa?

Pirre (pierreparys) wrote :

Maybe a possible way to deal with the IP verification process; I have put the rough idea in the attachment :)

cologic (cologic) wrote :

(1) I share Big Muscle's concern. This proposed HBRI extension addresses the client-hub aspect, but the client-client issues remain. This does, as Pirre has elsewhere pointed out, form a necessary condition for client-client connections.

In some circumstances this is an insurmountable problem - when a pair of users cannot share an IP protocol - but such cases should be rare given the coming near-ubiquity not of pure IPv6 users, but of dual-stacking. Therefore I suspect BM's worst case will not become common.

(2) It's a little odd that the client sends its own IP address at all, but that's not a quirk new to HBRI, and it does render it consistent with DC++'s current behavior. The default address a client would send to a hub is the IPv4 "any" address (or, one would suppose, :: for IPv6) regardless. I'm just unsure under what circumstances allowing meaningfully different client-sent IPs makes sense.

(3) Otherwise, HBRI seems to function reasonably and with about as few round trips as one can get away with. I like that it doesn't place additional constraints on which of the four combinations of (IPv4 yes/no) and (IPv6 yes/no) a given client might be able to listen on, due to ISP routing, firewalling, or NAT considerations.

Having a dual-stack hub (or effective hub, if one splits roles across machines or software, but that doesn't affect the protocol suggested) seems a minimal requirement for any solution to this, such that the hub can verify both IPv4 and IPv6 addresses. HBRI seems a near-minimal yet usable extension of ADC to allow that and as such it seems worthwhile.

poy (poy) wrote :

an alternative to HBRI that doesn't require a separate extension:

1) hubs send I6 and I4 fields in their hub INF to indicate the IP addresses they are listening on.
This step may not even be required if the "alternate" IP of the hub is available by other means (DNS record, hub list...).

2) upon receiving the initial hub INF, clients that wish to be available on both the IPv6 & IPv4 networks can connect to either the hub's I6 or the hub's I4 (obviously they would pick the one they are not already connected to).
The client then goes through the regular login process so that the hub can associate the client's IPv6 address with its IPv4 address. The additional connection is then cut.

3) after successful verification of a client's "alternate" IP, the hub adds that information to the client's INF and updates it so other users are notified of it.

4) when a user wants to connect to another user, it can:
- connect only to the IPv6 address if an I6 is present in that user's INF.
- connect only to the IPv4 address if an I4 is present in that user's INF.
- connect to both if both are present, then discard the one that didn't connect first.
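Step 4's "connect to both, discard the loser" idea can be sketched as a parallel race. A minimal Python illustration (the race_connect helper is hypothetical, not from any DC client):

```python
import socket
import concurrent.futures

def race_connect(addrs, port, timeout=5.0):
    # Connect to a user's I6 and I4 addresses in parallel and keep
    # whichever socket connects first; later winners are closed.
    def attempt(addr):
        fam = socket.AF_INET6 if ":" in addr else socket.AF_INET
        s = socket.socket(fam, socket.SOCK_STREAM)
        s.settimeout(timeout)
        s.connect((addr, port))
        return s

    with concurrent.futures.ThreadPoolExecutor(len(addrs)) as pool:
        futures = [pool.submit(attempt, a) for a in addrs]
        winner = None
        for fut in concurrent.futures.as_completed(futures):
            try:
                s = fut.result()
            except OSError:
                continue  # this family failed; the other may still win
            if winner is None:
                winner = s
            else:
                s.close()  # discard the one that connected later
        if winner is None:
            raise OSError("neither address accepted the connection")
        return winner
```

This is essentially the "try both and see which one connects faster" approach arne describes earlier in the thread, applied to client-client connections.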

Pirre (pierreparys) wrote :

poy: your proposal misses the possibility to link the 2 connections to 1 unique client, I think.

There can already be more than one identical IPv4 or IPv6 address in the hub.

Nor does the CID give uniqueness: at the moment of connecting to the second address, there can already be the same CID (another client) in the hub, who would then get a support flag it should not have...

That's exactly why HBRI is there: it gives the hub the original client connection's SID and token, so it can be sure it's that client that gets support for the second IP version.

poy (poy) wrote :

linking the 2 connections to a user is the point of steps 2 & 3.

when a user B connects, if that user has the same CID but a different IP type (v6 / v4) than another user (user A) who is already connected, then user B's IP is added to user A's INF and user B is then discarded. User A then has 2 IP fields in its INF (I6 and I4).

Pirre (pierreparys) wrote :

:) then user B will receive all C-C communication of user A for that protocol, without even being in the hub... not having a file, etc.

poy (poy) wrote :

sure, but by that time it would have been determined (with the PID / CID check) that user B is in fact the same as user A.

user B never actually appears in the user list; it is just a connection from user A in which a SUP and an INF are sent. The INF can be trimmed to contain only the PID and CID (and, if a custom IP is configured, an IP). Once the PID & CID have been verified, that "user B" connection is disconnected and the IP is transferred over to user A's INF.

Pirre (pierreparys) wrote :

Neither the CID nor the PID/CID combination is unique (installations sharing the same xml); if it were, we wouldn't see so many "same CID" disconnects, but rather "CID doesn't match PID" ones. Maybe I am wrong, but IMHO there is only a real implementation if a client can be sure that the INF the hub sends is correct and not a guess...

Fredrik Ullner (ullner) on 2013-11-30
tags: added: core
removed: ipv6 support
Fredrik Ullner (ullner) on 2015-07-14
Changed in dcplusplus:
status: In Progress → Confirmed
