Unable to enlist node in gMAAS

Bug #1520645 reported by Mark Shuttleworth
This bug affects 4 people
Affects: MAAS
Status: Incomplete
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Am failing to enlist maas-1-05 in the gMAAS. The BMC for that node is 192.168.1.205. I see this in the console:

grep: write error: Broken pipe
  % Total % Received % Xferd Average Speed Time Time Time Current
                                 Dload Upload Total Spent Left Speed
100 435 0 0 100 435 0 12525 --:--:-- --:--:-- --:--:-- 12794
curl: (22) The requested URL returned error: 400 BAD REQUEST
grep: write error
  % Total % Received % Xferd Average Speed Time Time Time Current
                                 Dload Upload Total Spent Left Speed
100 256 0 0 100 256 0 7379 --:--:-- --:--:-- --:--:-- 7529
curl: (22) The requested URL returned error: 400 BAD REQUEST

=============================================
failed to enlist system maas server '192-168-9-123.cluster.mallards.
maas-1-08.cluster.mallards'
sleeping 60 seconds then poweroff

login with 'ubuntu:ubuntu' to debug and disable poweroff

=============================================
ci-info: no authorized ssh keys fingerprints found for user ubuntu.
ci-info: no authorized ssh keys fingerprints found for user ubuntu.

Revision history for this message
Christian Reis (kiko) wrote :

Danilo has just seen this on a Cisco UCS system:

<danilo> hi all; we are enlisting a node in MAAS using API (nodes new ...), but for one of the nodes, commissioning fails with https://private-fileshare.canonical.com/~danilo/failed-enlist-but-already-enlisted.png: it seems to use the enlist image instead of the commissioning one (if that makes any sense); the only thing I could see in the logs is https://pastebin.canonical.com/144979/
<danilo> any ideas?
<danilo> fwiw, this is with 1.8.2+bzr4041-0ubuntu1~trusty1

Revision history for this message
Данило Шеган (danilo) wrote :

Kiko asked for POST data: since the node that exhibits this problem was in a zone that we needed for testing over the weekend, I was only able to get a run this morning. I modified the curl run inside maas-enlist.sh to include the --trace-ascii option and then catted the resulting file, with the output shown at https://private-fileshare.canonical.com/~danilo/failed-enlistment-curl-trace-ascii.png (sorry, it's only a PNG, but I don't have the time to set up better tracking).
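
(For anyone wanting to capture the same kind of trace: the change amounts to adding curl's --trace-ascii flag to the POST that maas-enlist.sh issues, roughly as sketched below. This is only a sketch; the URL and form fields are placeholders, not the script's exact invocation.)

  # Hedged sketch only -- the real call lives inside maas-enlist.sh.
  # --trace-ascii writes the full request and response (headers plus POST body) to a file.
  curl --trace-ascii /tmp/enlist-trace.txt \
       -X POST http://<maas-server>/MAAS/api/1.0/nodes/ \
       -F op=new -F architecture=amd64/generic -F mac_addresses=<pxe-mac>
  cat /tmp/enlist-trace.txt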

Note that we "enlist" this node through API (calling it with https://pastebin.canonical.com/145072/: sorry for all the private links), and it still boots the enlistment image (I would expect it to go straight to commissioning image instead).

I hope this helps.

Revision history for this message
Gavin Panella (allenap) wrote :

I've filed bug 1521204 as a proposal for making this kind of issue easier to diagnose.

Revision history for this message
Andres Rodriguez (andreserl) wrote :

Hi All,

I've tried to reproduce this in the gMAAS and have been unable to do so. I've successfully enlisted the machine multiple times without any issues.

This, however, could have been related to the regiond daemon not running, or to apache not running. Another option could have been apache2 failing to proxy the request to :5240. That being said, I'll leave this bug open, but I'll mark it Incomplete in the meantime to see if we can successfully reproduce it.
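
(If anyone hits this again, a quick way to check those theories on the region controller is something along these lines; the service names match the 1.x packaging on trusty and may differ elsewhere.)

  sudo service maas-regiond status      # is the region API daemon running?
  sudo service apache2 status           # is apache running at all?
  curl -I http://localhost:5240/MAAS/   # does the API answer directly on :5240?
  curl -I http://localhost/MAAS/        # and does apache2 proxy it correctly?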

Thanks.

=== Mon, 30 Nov 2015 15:43:42 +0000: successfully enlisted to 'http://192.168.9.2/MAAS/api/1.0/nodes/' with hostname '192-168-9-123.cluster.mallards'
{
    "hwe_kernel": "",
    "ip_addresses": [],
    "cpu_count": 0,
    "power_type": "ipmi",
    "tag_names": [],
    "swap_size": null,
    "owner": null,
    "zone": {
        "updated": "2015-09-23T10:39:32.079",
        "description": "",
        "created": "2015-09-23T10:39:32.079",
        "_state": "<django.db.models.base.ModelState object at 0x7fdc6ce4abd0>",
        "id": 1,
        "name": "default"
    "hostname": "rich-letter.cluster.mallards", [80/1226]
    "storage": 0,
    "substatus_message": null,
    "system_id": "node-2296c700-9779-11e5-8504-000e1e989682",
    "boot_type": "fastpath",
    "memory": 0,
    "substatus_action": null,
    "disable_ipv4": false,
    "min_hwe_kernel": "",
    "status": 0,
    "power_state": "unknown",
    "substatus_name": "New",
    "routers": null,
    "physicalblockdevice_set": [],
    "boot_disk": null,
    "netboot": true,
    "osystem": "",
    "virtualblockdevice_set": [],
    "architecture": "amd64/generic",
    "interface_set": [
    "interface_set": [ [61/1226]
        {
            "name": "eth0",
            "tags": [],
            "vlan": {
                "updated": "2015-09-30T21:03:19.741",
                "name": "Default VLAN",
                "vid": 0,
                "created": "2015-09-30T21:03:19.741",
                "_state": "<django.db.models.base.ModelState object at 0x7fdc6ce4a790>",
                "fabric_id": 0,
                "mtu": 1500,
                "id": 0
            },
            "enabled": true,
            "parents": [],
            "mac_address": "00:25:90:4c:e7:ac",
            "params": "",
            "type": "physical",
            "id": 2455
        },
        {
            "name": "eth1",
            "tags": [],
            "vlan": {
                "updated": "2015-09-30T21:03:19.741",
                "name": "Default VLAN",
                "vid": 0,
                "created": "2015-09-30T21:03:19.741",
                "_state": "<django.db.models.base.ModelState object at 0x7fdc6cde3ed0>",
                "fabric_id": 0,
                "mtu": 1500,
                "id": 0
            },
            "enabled": true,
            "parents": [],
            "mac_a...

Changed in maas:
status: New → Incomplete
Revision history for this message
Mark Shuttleworth (sabdfl) wrote : Re: [Bug 1520645] Re: Unable to enlist node in gMAAS

How interesting - that node stubbornly refused to enlist itself yesterday.

Mark

Revision history for this message
Christian Reis (kiko) wrote :

Mark, if you delete the node, can you reproduce now?

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for MAAS because there has been no activity for 60 days.]

Changed in maas:
status: Incomplete → Expired
Revision history for this message
Carlos Augusto Capriotti (capriotti-carlos) wrote :

Can we re-validate this issue? I have a very similar case.

Hello all. Excuse me for doing some CPR on this issue after it expired. I have a situation that feels like the one described here and I would like at least to share it with you.

This relates to MAAS Version 1.9.0+bzr4533-0ubuntu1 (trusty1).

First, a bit of background: I am deploying a 7 x Dell PE 2950 cluster using MAAS. The MAAS server and its LDS (Landscape Dedicated Server) are virtual machines hosted on separate hardware, because I wanted to use the firepower of the "big boys" for OpenStack, and to be able to make backups of my MAAS and Landscape environments as I progressed with the deployment.

What is a bit different among the physical boxes is that each of the Dell servers has a unique set of NICs. By default a PE 2900/2950 has two embedded NICs, but for my purposes I added an extra card on each server. Some of those cards are single port NICs, some are multiple ports.

A few of my nodes are showing the same behaviour described here, but I have some more information: during boot-up of the enlistment phase, when the server is being detected by MAAS (or that's what you expect, anyway), you can see the system (the kernel, I'd suppose) trying to talk to the NICs, printing the MAC addresses (yes, plural) of the NICs present on the system. The problem is that, on those occasions, the system seems to insist on using a MAC that is not the one set on the BIOS for PXE. From this point on, the system does not try the other MACs, insisting on the "wrong" one.

Despite the description above, all seems to go well until the system finishes loading, clears the screen, and shows you a login prompt. This is where you would expect the enlist scripts to kick in and register the node within MAAS. THIS is where it fails, with the same message as in the bug description here.

Now it gets worse: I have already deployed those same nodes before, using another MAAS server (now deleted), and the nodes deployed fine, BUT I had a problem with another node showing the same behaviour.

My guess here is that, upon enlisting, some script enumerates the NICs present on the system and somehow chooses which NIC to use to keep talking to MAAS, but makes a mistake, or simply is not able to refer to the same NIC that was used to load the system via PXE.

Attempting a workaround, I disconnected all unnecessary network cables, leaving only the one used by PXE, and, much to my surprise, the problem is THE SAME! The system does not seem to be checking for link presence before using the NIC, among other things. I know that in a huge datacenter you cannot expect disconnecting cables to make a deployment work, which makes the situation a bit more serious.

I know I have a non-trivial setup, but it is literally a real-life situation with MAAS having trouble with METAL boxes, not virtual ones, so, from my POV, it is a real issue.

My workaround for this is changing the NIC responsible for PXE until I have the node enlisted as "New"; after that I revert the configuration to the original setting and commission the node.

Can I provide you with further info?

Changed in maas:
status: Expired → Confirmed
Revision history for this message
Carlos Augusto Capriotti (capriotti-carlos) wrote :

Update:

MY WORKAROUND is actually a bit different: I entered the BIOS and enabled PXE boot on BOTH cards; next I removed all network cables and connected the PXE network cable to NIC 2! (This is the second NIC, as marked on the chassis of the Dell server.)

This does ring a bell: in the past, on CentOS systems, I seem to recall that the sequence of the detected NICs (eth0, eth1, etc.) would not match the physical layout as labelled on the back of the server. If memory serves me well, they were reversed, eth0 being the last NIC on the chassis, and vice versa.

With that I have successfully enlisted and commissioned nodes.

Again, this feels random and needs some looking into, IMO.

Cheers.

Revision history for this message
Andres Rodriguez (andreserl) wrote :

Hi Carlos,

Thank you for the update on the bug. The enlistment process happens as follows:

1. The machine PXE boots off MAAS, and MAAS sees that the PXE MAC is not a recognized MAC (no node has this MAC address).
2. The machine is then told to load the kernel/initramfs with a whole bunch of kernel parameters. However, as part of the PXE boot process, pxelinux appends the BOOTIF to the kernel command line.
3. MAAS loads the ephemeral image and starts the enlistment process. At this stage, cloud-init knows which interface is the BOOTIF because it has been appended by PXELINUX.
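
(For reference, the BOOTIF value appended in step 2 is the PXE NIC's MAC prefixed with the hardware type, and an enlistment-time script can map it back to an interface name roughly as below. This is a sketch, not the exact logic cloud-init or MAAS uses; the MAC shown is just the eth0 address from the JSON earlier in the thread.)

  cat /proc/cmdline
  # ... BOOTIF=01-00-25-90-4c-e7-ac ...

  # Strip the "01-" hardware-type prefix and turn dashes into colons.
  bootif_mac=$(sed -n 's/.*BOOTIF=01-\([0-9a-f-]*\).*/\1/p' /proc/cmdline | tr '-' ':')

  # Find the interface that owns that MAC.
  for dev in /sys/class/net/*; do
      [ "$(cat "$dev/address")" = "$bootif_mac" ] && echo "PXE interface: $(basename "$dev")"
  done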

So, based on your statement that "The problem is that, on those occasions, the system seems to insist on using a MAC that is not the one set on the BIOS for PXE. From this point on, the system does not try the other MACs, insisting on the "wrong" one.", I wonder if either:

A. PXELINUX is making an incorrect determination as to which interface to append to the kernel command line because, while it was supposed to PXE boot from NIC1, it reported that it had booted from NIC2?
B. There is a bug in the firmware.

So, once a machine has finished commissioning, the way MAAS detects the PXE interface is by examining the PXE request MAAS receives from the NIC; with that, we identify the MAC address, which we later use to provide configs, but never to tell the machine which NIC to use.

Either way, all of these are just theories that I can't confirm until I see some actual output.

Is it possible for you to share the output you are seeing?

Thanks!

Changed in maas:
status: Confirmed → Incomplete
Revision history for this message
Andres Rodriguez (andreserl) wrote :

Also, I'd suggest upgrading to the latest firmware to see what you come across and whether it is the same behavior.

Revision history for this message
Carlos Augusto Capriotti (capriotti-carlos) wrote :

Hello Andres, hello everyone.

@Andres, regarding a bug in the firmware, it is always possible; I still have to double-check, but I think the three servers where I noticed this behaviour are equipped with a regular Intel card, 100 Mbit (yep, oldies). On the other hand, I've just updated the firmware on all of those servers (Dell firmware) to the latest, precisely in preparation for MAAS deployments. I was getting some very inconsistent power readings from IPMI and that pointed me at the firmware.

BUT the Intel cards are rogues, not covered by Dell's firmware upgrade tool, so I'd have to look into it.

One reminder: on a previous (failed) deployment, those very same nodes worked fine, while a third one showed the issue. While the entire boot and enlistment process makes total sense, I do remember the reference to the wrong MAC address and to a boot interface other than eth0.

Regarding the output, the only way I can think of is making a video, SUPPOSING I can make the system fail again. It is bad timing, since I've just finished deploying this cluster.

I'll try to set up a spare MAAS server and try commissioning those three servers again, with other disks, and record it. Let's see how this week goes. Should it be slow, I'll get my hands dirty and deploy the cluster again. Good for my learning curve, bad for my utility bill.

I'll keep you posted.

Cheers,

Carlos

Revision history for this message
Carlos Augusto Capriotti (capriotti-carlos) wrote :

Hello everyone.

More or less as I mentioned several days ago, the week following my report was not nice at all, and neither was the one after it, so I only had time to come back to this issue now.

And I am back with meaningful news.

I had to start my MAAS deployment from scratch, from the very install of
the MAAS server, for a number of reasons, and finally had a chance to
observe the problem of nodes refusing to be enlisted.

Here is what happened: I was preparing my MAAS environment; just to refresh our memory, it is composed of a physical server running Ubuntu 14.04 (trusty). This server hosts my MAAS server in a KVM machine, and was SUPPOSED to host my Landscape server as a KVM as well. It turns out the Landscape deployment was failing over and over and over without me finding a good explanation, and I kept deleting the enlisted instance and deploying it again (yes, deleting the instance of the guest machine from the known_hosts files on MAAS).

It turns out that, trying to save some time, I was also adding, enlisting and commissioning the physical nodes as well, and that was working like a charm (well, not the Landscape deployment, but that is another story).

At a certain moment, after the third or fourth failed Landscape deployment, when deleting the Landscape VM from MAAS, I made a huge mistake and deleted ALL nodes from MAAS. No big deal, you'd say; you only have to enlist them again, right? Well, this is when it all went south.

Most of the seven nodes were deployed and commissioned correctly, but two (random ones, not even the same ones as last time) started refusing to be enlisted.

Just to make sure, I triple-checked the boot configuration and all the "moving parts". Nothing worked. Another thing that MAY be related: one or two nodes were a bit hard to check power status on. They would report being ON while they were actually OFF, and no amount of unplugging them, disconnecting and reconnecting network cables, or "forcing" them OFF or ON in MAAS would solve it. Eventually powering them on and letting MAAS turn them off would help.

Back to the main issue: my only solution was to install a fresh MAAS server. Luckily, I was prepared for this kind of situation and had made a clone of my MAAS server one step earlier, so I rolled back to that and started over.

Yes, you guessed, all worked like a charm.

I suspect that deleting a node and re-adding it is the culprit here, and I can also confirm that this deleting and re-adding of nodes happened MANY times on my previous configuration, where I was also experimenting with Landscape and found myself deleting nodes many times.

I think the deletion logic must be leaving something behind, and that something breaks re-adding a node, and potentially also has some influence on checking the power state of other properly enlisted and commissioned nodes.

With that I am ruling out firmware, at least as the main cause of the issue. Unfortunately I no longer have the copy of MAAS with the logs and everything, but if someone wants a better view of this problem, I can make the problem happen again and make the entire VM available.

Hope this sheds some light on the subject.

Cheers.

Carlos

Revision history for this message
Carlos Augusto Capriotti (capriotti-carlos) wrote :

Hello again, all.

More news regarding this issue. I was able to duplicate the behaviour in another, somewhat unexpected situation: I had just installed a fresh MAAS machine (VM), and was starting to deploy Landscape/Autopilot, when, unaware that the images were not yet downloaded, I started the other VM on which I intended to have Landscape installed.

Of course the VM stated there was no bootable media present, so I shut it down to wait until the cluster was synced.

Once the cluster was ready, I started the VM again and, much to my surprise, there was the error screen! NO physical hardware, no REAL firmware, and the very same behaviour.

This rules out a possible firmware issue on my Dell servers, and also adds to the theory that there is something left behind on the MAAS server (database) that stops the node from being enlisted after a failed attempt.

One thing caught my eye: it was referring to the "name" of the node, 10-0-0-154, which is the ephemeral IP assigned to the node at the moment the enlist/boot process took place.

I hope this helps.

Revision history for this message
Bilal Baqar (bbaqar) wrote :

I have hit this error multiple times. I kind of found a workaround for VMs: I removed all the NICs and added only one, on the MAAS network. That got the node into the New state. I am hitting the exact same problem with physical servers and haven't been able to find a workaround for it. Can someone please help me out?

I am running MAAS 1.9.

Revision history for this message
Bilal Baqar (bbaqar) wrote :

The following got logged at the time of the error:
2016-02-27 08:24:53 [-] 127.0.0.1 - - [27/Feb/2016:13:24:53 +0000] "POST /MAAS/api/1.0/nodes/ HTTP/1.1" 400 126 "-" "curl/7.35.0"
2016-02-27 08:24:54 [-] 127.0.0.1 - - [27/Feb/2016:13:24:53 +0000] "POST /MAAS/api/1.0/nodes/ HTTP/1.1" 400 126 "-" "curl/7.35.0"
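
(The 400 response body never shows up in the console because the enlistment script appears to call curl with --fail, which is what produces the bare "curl: (22)" lines in the description. Re-issuing the POST by hand without --fail shows the actual validation error. A rough sketch with placeholder values; the exact fields maas-enlist sends may differ:)

  curl -sS -X POST http://<maas-server>/MAAS/api/1.0/nodes/ \
       -F op=new \
       -F architecture=amd64/generic \
       -F hostname=<name-the-node-tried-to-register-with> \
       -F mac_addresses=<pxe-mac> \
       -F power_type=ipmi
  # The 400 body normally names the offending field (for example a duplicate
  # hostname or MAC), which is exactly what this bug needs pinned down.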

Revision history for this message
Bilal Baqar (bbaqar) wrote :
Revision history for this message
Carlos Augusto Capriotti (capriotti-carlos) wrote :

Bilal, hi.

If your physical nodes have multiple NICs, enable PXE on another NIC, switch it on, and let it be enlisted.

When it is enlisted, configure it back the way you want (theoretically, the
original configuration) and then commission it.

This worked for me, but quite frankly, I ran into other issues and ended up
deleting everything and starting from scratch.

Cheers.

On Sat, Feb 27, 2016 at 2:48 PM, Bilal Baqar <email address hidden> wrote:

> ** Attachment added: "clusterd.log at the time of error"
>
> https://bugs.launchpad.net/maas/+bug/1520645/+attachment/4582610/+files/clusterd.log.final

Revision history for this message
Bilal Baqar (bbaqar) wrote :

I will give this a try.
When you say you deleted everything, do you mean you installed and configured MAAS again?
And is there something that you did differently the second time round?

Revision history for this message
Carlos Augusto Capriotti (capriotti-carlos) wrote :

Bilal, hi.

I suppose a few more details would be interesting here.

I installed MAAS and Landscape on VMs; before each main step I stopped the VMs and created a clone, so that in case something "bad" or simply suspicious happened (for instance, a node refusing to enlist), I could roll back to the last step and resume the operation.

That worked fine for MAAS, but not at all for Landscape.

A hint if you choose to go down that path: make sure you assign disks of 20 GB or greater for MAAS and Landscape. I managed to run out of disk space during the Landscape and OpenStack deployments, and the install fails silently about it (in the GUI, anyway).

Cheers.

Revision history for this message
Bilal Baqar (bbaqar) wrote :

Hi Carlos

I had to find out the hard way about having enough disk space. I never created clones, though; maybe that is something I can do once I have all the nodes in the Ready state or a deployed state.

I wasn't able to PXE boot on any other NIC as the machine just refused to do so. I finally worked around this problem by adding the nodes manually using their MAC addresses and power settings. That skipped the enlistment process and went directly to commissioning.
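
(For the record, manually adding a node like that on MAAS 1.9 goes through the CLI's "nodes new" call, roughly as below; every value is a placeholder for whatever the node actually has.)

  maas <profile> nodes new \
      architecture=amd64/generic \
      mac_addresses=<pxe-mac> \
      hostname=<node-name> \
      power_type=ipmi \
      power_parameters_power_address=<bmc-ip> \
      power_parameters_power_user=<user> \
      power_parameters_power_pass=<password>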

Thanks for the help.

Revision history for this message
Stephen Nuchia (snuchia) wrote :

I'm seeing this too: after enlisting several machines I see the failure, and it appears the new machines have picked up DNS information for one of the already-enlisted machines. This is on a fresh install of a new controller, re-paving a test cluster that has been up and down a lot. It is a fresh install on a fresh VM from the stable repo; IPMI power type, Dell R720xd machines with iDRAC7 cards.

It looks like machine 2 got assigned a name, that name got associated with the dynamic IP address it was using, and it enrolled with it. Machine 2 then shuts down and releases the IP address. Later, machine 10 launches and is given the same IP address machine 2 had. It looks up the FQDN associated with that address and tries to enlist with it, but it is a duplicate name, so it fails.
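
(If it happens again, a quick way to test that hypothesis from the region controller would be something like the lines below; <reused-ip> stands for whatever dynamic address the failing machine booted with, and <profile> for your CLI login.)

  host <reused-ip>                                  # which FQDN does the reused lease still resolve to?
  maas <profile> nodes list | grep -i <that-fqdn>   # is a node with that name already registered?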

Revision history for this message
Stephen Nuchia (snuchia) wrote :

To test the above hypothesis I kicked off commissioning for the machines that enrolled successfully, then ran through the failed ones and kicked them to retry. Two of the three succeeded; I kicked off their commissioning, retried the remaining one, and it succeeded. Something about the "New" state is encouraging this address/name conflict.
