Fixed IPs not being recorded in database

Bug #1439870 reported by smarta94
This bug affects 1 person
Affects: OpenStack Compute (nova)
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

When new VMs are spawned after deleting previous VMs, the new VMs obtain completely new IPs and the old ones are not recycled for reuse. I looked into the MySQL database to see where IPs might be stored and accessed by OpenStack to determine which address should be next in line, but didn't manage to find any IP information there. Has the location of this storage changed out of the fixed_ips table? Currently, this table is entirely empty:

MariaDB [nova]> select * from fixed_ips;
Empty set (0.00 sec)

despite many VMs running on two different networks:

 mysql -e "select uuid, deleted, power_state, vm_state, display_name, host from nova.instances;"
+--------------------------------------+---------+-------------+----------+--------------+--------------+
| uuid | deleted | power_state | vm_state | display_name | host |
+--------------------------------------+---------+-------------+----------+--------------+--------------+
| 14600536-7ce1-47bf-8f01-1a184edb5c26 | 0 | 4 | error | Ctest | r001ds02.pcs |
| abb38321-5b74-4f36-b413-a057897b8579 | 0 | 4 | stopped | cent7 | r001ds02.pcs |
| 31cbb003-42d0-468a-be4d-81f710e29aef | 0 | 1 | active | centos7T2 | r001ds02.pcs |
| 4494fd8d-8517-4f14-95e6-fe5a6a64b331 | 0 | 1 | active | selin_test | r001ds02.pcs |
| 25505dc4-2ba9-480d-ba5a-32c2e91fc3c9 | 0 | 1 | active | 2NIC | r001ds02.pcs |
| baff8cef-c925-4dfb-ae90-f5f167f32e83 | 0 | 4 | stopped | kepairtest | r001ds02.pcs |
| 317e1fbf-664d-43a8-938a-063fd53b801d | 0 | 1 | active | test | r001ds02.pcs |
| 3a8c1a2d-1a4b-4771-8e62-ab1982759ecd | 0 | 1 | active | 3 | r001ds02.pcs |
| c4b2175a-296c-400c-bd54-16df3b4ca91b | 0 | 1 | active | 344 | r001ds02.pcs |
| ac02369e-b426-424d-8762-71ca93eacd0c | 0 | 4 | stopped | 333 | r001ds02.pcs |
| 504d9412-e2a3-492a-8bc1-480ce6249f33 | 0 | 1 | active | libvirt | r001ds02.pcs |
| cc9f6f06-2ba6-4ec2-94f7-3a795aa44cc4 | 0 | 1 | active | arger | r001ds02.pcs |
| 0a247dbf-58b4-4244-87da-510184a92491 | 0 | 1 | active | arger2 | r001ds02.pcs |
| 4cb85bbb-7248-4d46-a9c2-fee312f67f96 | 0 | 1 | active | gh | r001ds02.pcs |
| adf9de81-3986-4d73-a3f1-a29d289c2fe3 | 0 | 1 | active | az | r001ds02.pcs |
| 8396eabf-d243-4424-8ec8-045c776e7719 | 0 | 1 | active | sdf | r001ds02.pcs |
| 947905b5-7a2c-4afb-9156-74df8ed699c5 | 55 | 1 | deleted | yh | r001ds02.pcs |
| f690d7ed-f8d5-45a1-b679-e79ea4d3366f | 56 | 1 | deleted | tr | r001ds02.pcs |
| dd1aa5b1-c0ac-41f6-a6de-05be8963242f | 57 | 1 | deleted | ig | r001ds02.pcs |
| 42688a7d-2ba2-4d5a-973f-e87f87c32326 | 58 | 1 | deleted | td | r001ds02.pcs |
| 7c1014d8-237d-48f0-aa77-3aa09fff9101 | 59 | 1 | deleted | td2 | r001ds02.pcs |
+--------------------------------------+---------+-------------+----------+--------------+--------------+

I am using Neutron networking with OVS. It is my understanding that SQLAlchemy is set up to leave old (soft-deleted) information accessible in MySQL, but deleting the associated information manually doesn't seem to make any difference to the fixed_ips issue I am experiencing. Are there solutions for this?

nova --version: 2.20.0 (2014.2.1-1.el7, running on CentOS 7, EPEL Juno release)

Tags: network
Revision history for this message
Sean Dague (sdague) wrote :

Please provide the Nova server version; the nova --version flag only tells us the client version.

Changed in nova:
status: New → Incomplete
Revision history for this message
smarta94 (smarta94) wrote :

Updated in original description.

description: updated
Revision history for this message
smarta94 (smarta94) wrote :

Is any work being done on this yet? I haven't seen a response since I added the Nova version to the description several days ago.

Revision history for this message
Sudipta Biswas (sbiswas7) wrote :

I guess what you are asking is: why isn't Neutron reallocating the IPs of the VMs you are deleting?
When you create a port in Neutron, the neutron.ipavailabilityranges table is updated: the range of IPs for your subnet (from which the port's address is taken) is replaced with a new, smaller range.
For example, say your initial subnet pool is 192.168.1.102 - 192.168.1.110. If you create a port out of this pool, the range is updated to 192.168.1.103 - 192.168.1.110.

The addresses of ports that have been used and then deleted only become available for reallocation once you have exhausted this pool range, that is, once the available range, which shrinks with every allocation, becomes empty. At that point the pool is reset and your previously deleted addresses are reused.
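
As a rough illustration (the table and column names below are what I would expect in a Juno-era Neutron schema; verify them against your own database), you can watch the range shrink and see the current per-port allocations with queries along these lines:

 # remaining free range(s) per allocation pool; shrinks with every allocation
 mysql -e "select allocation_pool_id, first_ip, last_ip from neutron.ipavailabilityranges;"
 # which port currently holds each address
 mysql -e "select subnet_id, ip_address, port_id from neutron.ipallocations;"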

Changed in nova:
assignee: nobody → Sudipta Biswas (sbiswas7)
Revision history for this message
smarta94 (smarta94) wrote :

Alright, thanks for that information - it clears up some of my questions.

It is still unclear to me why the table in Nova for the assigned fixed IPs is entirely empty, though.

Revision history for this message
Sudipta Biswas (sbiswas7) wrote :

That is happening because you are using Neutron networking and not nova-network. That table is only populated when nova-network is used. I am going to mark this bug as Invalid once you are satisfied with the answer.
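
To see this for yourself, a quick check along the lines below (a sketch; the -c column flags are standard python-neutronclient options, but verify against your client version) should show nova.fixed_ips staying empty while the addresses live in Neutron's database:

 mysql -e "select count(*) from nova.fixed_ips;"   # only populated by nova-network
 neutron port-list -c id -c fixed_ips              # with Neutron, per-port fixed IPs live here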

Revision history for this message
Sudipta Biswas (sbiswas7) wrote :

I think I have covered the reason for marking this invalid in my previous comments.

Changed in nova:
status: Incomplete → Invalid
assignee: Sudipta Biswas (sbiswas7) → nobody
Revision history for this message
smarta94 (smarta94) wrote :

Reopening: my observations have changed, based on the expected behavior described in the previous comments.

Changed in nova:
status: Invalid → New
affects: nova → neutron
Revision history for this message
smarta94 (smarta94) wrote :

I have observed that my ipavailabilityranges have not been reset after the pool ran out when launching from the dashboard. I have deleted several of my instances, and a previously used fixed IP is still not being assigned, so the launch fails.

Part of the neutron log:

2015-05-20 08:04:50.331 3533 INFO neutron.wsgi [req-fe796412-e796-4c08-a798-ab520dbdeb91 None] 10.173.3.13 - - [20/May/2015 08:04:50] "GET /v2.0/security-groups.json?tenant_id=8df11859bd554dc4bc3d6b7f824cf5d6 HTTP/1.1" 200 2433 0.068064
2015-05-20 08:04:50.340 3561 INFO neutron.wsgi [-] (3561) accepted ('10.173.3.13', 57650)
2015-05-20 08:04:50.341 3561 DEBUG keystonemiddleware.auth_token [-] Removing headers from request environment: X-Service-Catalog,X-Identity-Status,X-Roles,X-Service-Roles,X-Domain-Name,X-Service-Domain-Name,X-Project-Id,X-Service-Project-Id,X-Project-Domain-Name,X-Service-Project-Domain-Name,X-User-Id,X-Service-User-Id,X-User-Name,X-Service-User-Name,X-Project-Name,X-Service-Project-Name,X-User-Domain-Id,X-Service-User-Domain-Id,X-Domain-Id,X-Service-Domain-Id,X-User-Domain-Name,X-Service-User-Domain-Name,X-Project-Domain-Id,X-Service-Project-Domain-Id,X-Role,X-User,X-Tenant-Name,X-Tenant-Id,X-Tenant _remove_auth_headers /usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:780
2015-05-20 08:04:50.341 3561 DEBUG keystonemiddleware.auth_token [-] Authenticating user token __call__ /usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:708
2015-05-20 08:04:50.342 3561 DEBUG keystonemiddleware.auth_token [-] Returning cached token _cache_get /usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:1793
2015-05-20 08:04:50.343 3561 DEBUG keystonemiddleware.auth_token [-] Authenticating service token __call__ /usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:727
2015-05-20 08:04:50.343 3561 DEBUG keystonemiddleware.auth_token [-] Received request from user: user_id None, project_id None, roles None service: user_id None, project_id None, roles None __call__ /usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:746
2015-05-20 08:04:50.343 3561 DEBUG routes.middleware [req-58b7ee85-3cb6-4ae2-95e8-b4be0fcb062c ] No route matched for POST /ports.json __call__ /usr/lib/python2.7/site-packages/routes/middleware.py:97
2015-05-20 08:04:50.344 3561 DEBUG routes.middleware [req-58b7ee85-3cb6-4ae2-95e8-b4be0fcb062c ] Matched POST /ports.json __call__ /usr/lib/python2.7/site-packages/routes/middleware.py:100
2015-05-20 08:04:50.344 3561 DEBUG routes.middleware [req-58b7ee85-3cb6-4ae2-95e8-b4be0fcb062c ] Route path: '/ports{.format}', defaults: {'action': u'create', 'controller': <wsgify at 57778640 wrapping <function resource at 0x3881230>>} __call__ /usr/lib/python2.7/site-packages/routes/middleware.py:102
2015-05-20 08:04:50.344 3561 DEBUG routes.middleware [req-58b7ee85-3cb6-4ae2-95e8-b4be0fcb062c ] Match dict: {'action': u'create', 'controller': <wsgify at 57778640 wrapping <function resource at 0x3881230>>, 'format': u'json'} __call__ /usr/lib/python2.7/site-packages/routes/middleware.py:103
2015-05-20 08:04:50.348 3561 DEBUG neutron.api.v2.base [req-58b7ee85-3cb6-4ae2-95e8-b4be0fcb062c None] Request body:...


Revision history for this message
smarta94 (smarta94) wrote :

Further information on this:
I know for sure that I have deleted this instance, yet when I boot from the command line I get the following response:

ERROR (BadRequest): Fixed IP address 202.1.102.11 is already in use on instance dhcp909bcd27-70ea-505d-9ac8-bab85f84ee2f-5835a57c-f02b-414c-9a10-2250ce90a230. (HTTP 400) (Request-ID: req-523cf255-060f-4057-a55d-655b213d4f50)

When I look at my MySQL database, I see only 5 ports associated with the router for instances, while there should be 6 available in total for instances plus 1 for the router gateway address (in this case 202.1.102.11 should be free; the range created initially was .10 through .15).
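
One way to see what is actually holding 202.1.102.11 (treat these as a sketch against a Juno-era schema; the port-list filter syntax can vary between client versions):

 mysql -e "select port_id from neutron.ipallocations where ip_address='202.1.102.11';"
 neutron port-list --fixed-ips ip_address=202.1.102.11
 neutron port-show <port-id>   # device_owner should reveal whether a network:dhcp port,
                               # rather than an instance, still owns the address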

Changed in neutron:
assignee: nobody → venkata anil (anil-venkata)
Revision history for this message
Eugene Nikanorov (enikanorov) wrote :

This is not a Neutron bug; please be more attentive.

affects: neutron → nova
Revision history for this message
smarta94 (smarta94) wrote :

How is it not Neutron-related when Neutron controls the networking and IP assignment? And the ipavailabilityranges table is in the Neutron database.

tags: added: network
Revision history for this message
Markus Zoeller (markus_z) (mzoeller) wrote :

@venkata anil (anil-venkata):

You set yourself as assignee of this bug while the affected project was Neutron; after your assignment the affected project changed to Nova. Do you still intend to work on this bug?

Revision history for this message
venkata anil (anil-venkata) wrote :

Yes, I want to work on this bug. Thanks

Revision history for this message
Brent Eagles (beagles) wrote :

@smarta94: I think this bug has gotten a little confused. In comments 9 and 10 you describe what looks like it could be a real problem, but it is inconsistent with the original description. It might be best to close this as invalid and start over. You mention you are using Horizon, though; you might want to confirm that the same behavior occurs with the command-line tools. It is possible that Horizon is allocating ports, passing them to Nova, and not cleaning them up afterwards, which would leave them allocated (that is the correct behavior for ports that are passed to Nova when booting instances, by the way). If you find it works with the command-line tools, it is probably neither a Nova nor a Neutron bug but a Horizon bug. (The plot thickens!)
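
A minimal command-line round trip for that test might look like this (image, flavor, and net-id are placeholders for your environment); if the port created by nova boot disappears after the delete while dashboard-created ports linger, the leak is on the Horizon side:

 nova boot --image <image-id> --flavor m1.small --nic net-id=<net-id> cli-test
 nova delete cli-test
 neutron port-list   # the nova-created port (and its fixed IP) should be gone;
                     # ports pre-created by the dashboard would remain allocated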

Changed in nova:
status: New → Incomplete
Changed in nova:
assignee: venkata anil (anil-venkata) → nobody
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for OpenStack Compute (nova) because there has been no activity for 60 days.]

Changed in nova:
status: Incomplete → Expired