Unable to Deploy Machines; get() returned more than one Neighbour -- it returned 2!
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| MAAS | Fix Released | High | Jacopo Rota | |
| 3.4 | Fix Released | High | Unassigned | |
Bug Description
MAAS Snap Version: maas 3.3.3-13184-
This issue has cropped up in the past, and I know it's related to networking. However, it would be beneficial to better understand why it occurs. The MAAS server is relatively new.
I believe this occurs when a subnet IP address was previously observed on the same or a different machine.
1.) Create an LXD Virtual Pod (virsh) and deploy it.
2.) Compose a new machine; the VM will commission successfully.
3.) Attempt to deploy the virtual machine.
Error: get() returned more than one Neighbour -- it returned 2!
This appears to affect every VM that is composed.
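The error text matches Django's MultipleObjectsReturned, which get() (and therefore get_or_create()) raises when more than one row satisfies the lookup. Below is a minimal, self-contained sketch of that failure mode; the stand-in model, its fields, and the duplicate-row scenario are assumptions for illustration only, not MAAS's actual Neighbour schema or code path.

```python
# Minimal sketch (not MAAS code): why Django's get_or_create() raises
# MultipleObjectsReturned once two rows match the lookup -- the same error
# reported above. The model and its fields are illustrative stand-ins.
import django
from django.conf import settings

settings.configure(
    DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3", "NAME": ":memory:"}},
    INSTALLED_APPS=[],
    DEFAULT_AUTO_FIELD="django.db.models.AutoField",
)
django.setup()

from django.db import connection, models


class Neighbour(models.Model):
    """Stand-in for maasserver's Neighbour model; the real fields differ."""

    ip = models.GenericIPAddressField()
    mac_address = models.CharField(max_length=17)

    class Meta:
        app_label = "sketch"


with connection.schema_editor() as editor:
    editor.create_model(Neighbour)

# Two observations of the same IP end up as two rows (e.g. the address was
# seen again on the same or a different machine, as the reporter suspects).
Neighbour.objects.create(ip="10.0.0.5", mac_address="aa:bb:cc:dd:ee:01")
Neighbour.objects.create(ip="10.0.0.5", mac_address="aa:bb:cc:dd:ee:02")

try:
    # get_or_create() calls get() first; with two matching rows it raises:
    # "get() returned more than one Neighbour -- it returned 2!"
    Neighbour.objects.get_or_create(ip="10.0.0.5")
except Neighbour.MultipleObjectsReturned as exc:
    print(exc)
```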
Here are some samples from regiond.log and maas.log; these messages occur when attempting to deploy.
==> regiond.log <==
2023-06-16 15:51:57 maasserver.preseed: [warn] WARNING: '/snap/
==> maas.log <==
2023-06-
2023-06-
2023-06-
2023-06-
==> regiond.log <==
2023-06-16 15:51:58 maasserver.
Traceback (most recent call last):
File "/usr/lib/
self.
File "/snap/
return target()
File "/snap/
task()
File "/snap/
task()
--- <exception caught here> ---
File "/snap/
result = inContext.theWork() # type: ignore[
File "/snap/
inContext.
File "/snap/
return self.currentCon
File "/snap/
return func(*args, **kw)
File "/snap/
return func(*args, **kwargs)
File "/snap/
result = func(*args, **kwargs)
File "/snap/
with connected(), post_commit_hooks:
File "/snap/
self.fire()
File "/snap/
result = func(*args, **kwargs)
File "/snap/
self.
File "/snap/
result.
File "/snap/
raise self.value.
File "/snap/
result = current_
File "/snap/
return g.throw(self.type, self.value, self.tb)
File "/snap/
yield self._claim_
File "/snap/
result = current_
File "/snap/
return g.throw(self.type, self.value, self.tb)
File "/snap/
check_failed, ips_in_use = yield deferToDatabase(
File "/snap/
result = inContext.theWork() # type: ignore[
File "/snap/
inContext.
File "/snap/
return self.currentCon
File "/snap/
return func(*args, **kw)
File "/snap/
return func(*args, **kwargs)
File "/snap/
result = func(*args, **kwargs)
File "/snap/
return func_outside_
File "/snap/
return func(*args, **kwargs)
File "/usr/lib/
return func(*args, **kwds)
File "/snap/
rack_
File "/snap/
neighbour, created = Neighbour.
File "/snap/
return getattr(
File "/snap/
return self.get(**kwargs), False
File "/snap/
raise self.model.
maasserver.
2023-06-16 15:51:58 maasserver.dhcp: [info] Successfully configured DHCPv4 on rack controller 'common (m6d7bn)'.
2023-06-16 15:51:58 maasserver.dhcp: [info] Successfully configured DHCPv6 on rack controller 'common (m6d7bn)'.
2023-06-16 15:51:59 maasserver.dhcp: [info] Successfully configured DHCPv4 on rack controller 'common (m6d7bn)'.
2023-06-16 15:51:59 maasserver.dhcp: [info] Successfully configured DHCPv6 on rack controller 'common (m6d7bn)'.
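The traceback fragments suggest the deploy-time IP claim check performs a get()-style lookup on Neighbour and trips over two matching rows. A generic defensive pattern is sketched below; it is only an illustration of the idea, not the patch that actually landed in src/maasserver/models/node.py, and the "updated" ordering field is an assumption.

```python
# Hypothetical defensive lookup (illustration only, not the MAAS fix):
# instead of get_or_create(), which raises MultipleObjectsReturned when the
# discovery data already holds two matching rows, pick one deterministic
# match and only create when nothing matches at all.
def get_or_create_single(model, **lookup):
    """Return (instance, created) even if duplicates exist for the lookup."""
    # "-updated" assumes a timestamp field; order by whatever makes the
    # choice deterministic in the real schema.
    existing = model.objects.filter(**lookup).order_by("-updated").first()
    if existing is not None:
        return existing, False
    return model.objects.create(**lookup), True
```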
Known Workaround: Edit the machine networking -> Device properties -> Change the IP Allocation from "Auto Assign" to "DHCP"
I still don't have steps to reproduce this issue. I did add an extra IPv6 subnet last week, but have since deleted it, as it was not going to be used.
Related branches
- Anton Troyanov: Approve
- MAAS Lander: Approve
- Diff: 52 lines (+6/-17), 2 files modified:
  - src/maasserver/models/node.py (+3/-14)
  - src/maasserver/models/tests/test_node.py (+3/-3)
- MAAS Lander: Approve
- Alberto Donato (community): Approve
- MAAS Maintainers: Pending requested
- Diff: 52 lines (+6/-17), 2 files modified:
  - src/maasserver/models/node.py (+3/-14)
  - src/maasserver/models/tests/test_node.py (+3/-3)
Changed in maas:
- assignee: nobody → Jacopo Rota (r00ta)
- status: New → In Progress
- importance: Undecided → Medium

Changed in maas:
- status: In Progress → Triaged

Changed in maas:
- importance: Medium → High

Changed in maas:
- milestone: none → 3.5.0
- status: Triaged → Fix Committed

Changed in maas:
- milestone: 3.5.0 → 3.5.0-beta1
- status: Fix Committed → Fix Released
Hey Sean! Even if you don't have the steps to reproduce this issue, do you have a database dump?
I think it would be a possible starting point for us to triage this issue.
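If a full dump is impractical, one lighter-weight check is to look for duplicate neighbour observations from a Django shell on the region controller. The rough sketch below assumes Neighbour is exported from maasserver.models and has an ip field; the lookup that actually fails may key on more columns.

```python
# Hedged sketch: count neighbour rows that share the same IP, as a quick
# indicator of the duplicates behind the MultipleObjectsReturned error.
# Assumes Neighbour is importable from maasserver.models and has an "ip"
# field; adjust the grouping key to match the real schema if it differs.
from django.db.models import Count

from maasserver.models import Neighbour

dupes = (
    Neighbour.objects.values("ip")
    .annotate(n=Count("id"))
    .filter(n__gt=1)
    .order_by("-n")
)
for row in dupes:
    print(row["ip"], row["n"])
```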