Activity log for bug #1801733

Date Who What changed Old value New value Message
2018-11-05 14:08:29 Wallace Cardoso bug added bug
2018-11-05 14:08:29 Wallace Cardoso attachment added Syslog https://bugs.launchpad.net/bugs/1801733/+attachment/5209338/+files/newbug.sys.logs
2018-11-05 14:10:01 Wallace Cardoso attachment added htop https://bugs.launchpad.net/nova/+bug/1801733/+attachment/5209339/+files/100p.png
2018-11-05 14:10:57 Wallace Cardoso description updated (added step "5) delete the instance;" to the reproduction steps). As of this revision the description read:

Description
==============
The conductor API's 'rebuild_instance' is vulnerable through the parameter 'rebuild_instance/args/instance/nova_object.data/flavor/nova_object.data/vcpus'. When the flavor's vcpus is set to an invalid value, the compute service consumes 100% CPU forever and the instance never changes state from rebuild to active. In addition, new requests to the compute service are not processed, i.e. the node is out of service until it is restarted. This bug could possibly be exploited as a denial-of-service attack.

Steps to reproduce
=====================
1) create an instance with a flavor (VCPUS: 1, MEM: 64MB, STORAGE: 0GB) and a cirros 0.3.4 image;
2) rebuild the instance with an alternative image, cirros 0.4.0;
2.1) intercept the message to the 'conductor' API (ComputeTaskAPI) for the method 'rebuild_instance' and change the parameter 'rebuild_instance/args/instance/nova_object.data/flavor/nova_object.data/vcpus' to 10000000000000000000001 (an illustrative payload edit is sketched at the end of this activity log);
3) rebuild the instance again with its original image (cirros 0.3.4);
4) shelve the instance;
5) delete the instance;

Expected result
================
Even though rebuild is not an action that takes the flavor into account, there should be some validation to ensure the correctness of the other parameters (a minimal bound-check sketch appears at the end of this activity log). The compute node should not stop working because of a single invalid parameter.

Actual result
================
The instance never changes from rebuild to active, remaining in the rebuilding state forever, and the compute node is inoperative until its services are restarted. 'nova-compute' consumes 100% of a CPU.

Environment
==============
I used devstack stable/queens on a fresh Ubuntu environment.

Logs & Configs
Logs attached. The fault is injected after 11:24:16. If you search for '10000000000000000000001', you will see this line (a simplified sketch of the enumeration it points at appears at the end of this activity log):

Nov 5 11:24:21 localhost nova-compute[14517]: DEBUG nova.virt.hardware [None req-f97def42-9630-4165-81e5-abc0cab5c02f admin admin] Build topologies for 10000000000000000000001 vcpu(s) 65536:65536:65536 {{(pid=14517) _get_possible_cpu_topologies /opt/stack/queens/dest/nova/nova/virt/hardware.py:418}}
2018-11-05 14:11:36 Wallace Cardoso description updated: added the '=================' underline to the 'Logs & Configs' heading; the rest of the text is unchanged.
2018-11-05 14:12:11 Wallace Cardoso description updated: reworded the sentence introducing the quoted log line ("you will see that line" became "you will see the line below"); the rest of the text is unchanged.
2018-11-05 14:13:06 Wallace Cardoso summary compute consuming 100% of cpu after rebuilding with invalid data parameters nova-compute consuming 100% of cpu after rebuilding with invalid data parameters
2018-11-05 14:17:46 Wallace Cardoso description updated: "without changing the state from rebuild to active" in the Description paragraph now reads "... from rebuild to active (or error)"; the rest of the text is unchanged.
2018-11-05 14:18:27 Wallace Cardoso description updated: step 1 now reads "create an instance with the flavor (VCPUS: 1, MEM: 64MB, STORAGE: 0GB) and the cirros image 0.3.4" ("a" changed to "the"); the rest of the text is unchanged.
2018-11-05 14:21:34 Wallace Cardoso tags fault-injection
2020-04-23 18:39:57 Artom Lifshitz nova: status New Won't Fix
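
The quoted DEBUG line shows _get_possible_cpu_topologies being asked to build topologies for 10000000000000000000001 vcpus with maxima 65536:65536:65536. The following is a minimal, illustrative sketch of why that kind of request pins a CPU: it assumes a brute-force search over socket/core/thread combinations, which is only a simplified stand-in for what Nova actually does, not Nova's code.

    import itertools
    from collections import namedtuple

    # Illustrative only: a simplified stand-in for the kind of search the
    # DEBUG line ("Build topologies for ... vcpu(s) 65536:65536:65536")
    # points at in nova.virt.hardware._get_possible_cpu_topologies. The real
    # Nova code differs in detail; the point is the size of the search space.
    Topology = namedtuple("Topology", ["sockets", "cores", "threads"])

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        """Yield socket/core/thread combinations whose product matches vcpus."""
        for s, c, t in itertools.product(range(1, max_sockets + 1),
                                         range(1, max_cores + 1),
                                         range(1, max_threads + 1)):
            if s * c * t == vcpus:
                yield Topology(s, c, t)

    # Small, sane input: finishes instantly.
    print(list(possible_topologies(4, 8, 8, 2)))

    # The injected value: no combination can ever match, but the loop still
    # has to visit 65536**3 (~2.8e14) candidates, which is why nova-compute
    # spins at 100% CPU instead of failing fast.
    # list(possible_topologies(10000000000000000000001, 65536, 65536, 65536))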
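Step 2.1 of the reproduction intercepts the ComputeTaskAPI 'rebuild_instance' RPC message and rewrites one nested field. The bug report does not say which tool was used to intercept the message, so the sketch below only illustrates the dictionary edit implied by the slash-separated parameter path (the leading 'rebuild_instance' segment names the RPC method itself); the payload values are invented for the example.

    # Hypothetical illustration of the fault injection in step 2.1: edit the
    # nested vcpus field named by the report's parameter path in a decoded
    # payload dict. How the message is intercepted is not covered here.

    PATH = "args/instance/nova_object.data/flavor/nova_object.data/vcpus"

    def set_by_path(payload, path, value):
        """Walk a '/'-separated key path in a nested dict and overwrite the leaf."""
        node = payload
        keys = path.split("/")
        for key in keys[:-1]:
            node = node[key]
        node[keys[-1]] = value

    # Toy payload with only the relevant nesting (field names taken from the
    # parameter path quoted in the report, values invented for the example).
    payload = {
        "args": {
            "instance": {
                "nova_object.data": {
                    "flavor": {
                        "nova_object.data": {"vcpus": 1}
                    }
                }
            }
        }
    }

    set_by_path(payload, PATH, 10000000000000000000001)
    assert (payload["args"]["instance"]["nova_object.data"]
                   ["flavor"]["nova_object.data"]["vcpus"]
            == 10000000000000000000001)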
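The "Expected result" section asks for some validation of the flavor parameters before the compute node acts on them. A minimal sketch of such a bound check follows; it is hypothetical (the bug was closed Won't Fix, so this is not Nova's actual behaviour), and MAX_REASONABLE_VCPUS is an arbitrary illustrative limit.

    # Hypothetical sanity check of the kind the reporter asks for; not Nova's
    # code. MAX_REASONABLE_VCPUS is an arbitrary illustrative bound.
    MAX_REASONABLE_VCPUS = 4096

    class InvalidFlavor(Exception):
        pass

    def validate_vcpus(vcpus):
        """Reject non-integer, non-positive or absurdly large vcpu counts."""
        if not isinstance(vcpus, int) or isinstance(vcpus, bool):
            raise InvalidFlavor("vcpus must be an integer, got %r" % (vcpus,))
        if not 1 <= vcpus <= MAX_REASONABLE_VCPUS:
            raise InvalidFlavor("vcpus %d outside sane range 1..%d"
                                % (vcpus, MAX_REASONABLE_VCPUS))

    # validate_vcpus(10000000000000000000001) raises InvalidFlavor up front,
    # instead of letting the topology search pin nova-compute at 100% CPU.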