Containers: nova.scheduler.manager got no allocation candidates from the Placement API. This could be due to insufficient resources or a temporary occurrence as compute nodes start up.

Bug #1827692 reported by Ricardo Perez
This bug affects 1 person
Affects: StarlingX
Status: Fix Released
Importance: High
Assigned to: zhipeng liu

Bug Description

Brief Description
-----------------
On a 2+2 system, using the ISO image described below, you will very often hit the following error message while trying to launch a VM:

nova.scheduler.manager: Got no allocation candidates from the Placement API. This could be due to insufficient resources or a temporary occurrence as compute nodes start up.

Severity
--------

Major: System/Feature is usable but degraded. StarlingX is up and shows no system alarms or issues in "system host-list"; however, you are not able to launch a VM.

Steps to Reproduce
------------------

1.- Follow the steps to setup a 2+2 environment listed here: https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnStandard

Except that, instead of building the image yourself, you should use the ISO described below.

2.- Use the network cards / hardware configuration listed in this document:

https://docs.google.com/spreadsheets/d/193YDIZ17blD4Sd4nafEZ5Js7kQiNP5k_PpY-bcuPW10/edit?usp=sharing

Listed as 2+2 Config 4

3.- On the active controller, after finishing the setup, create a flavor and an image, and try to launch a VM:

export OS_CLOUD=openstack_helm
openstack image create --container-format bare --disk-format qcow2 --file cirros-0.4.0-x86_64-disk.img cirros
openstack flavor create --ram 512 --disk 1 --vcpus 1 my_tiny
openstack flavor list
openstack flavor set $UUID_my_tiny --property hw:mem_page_size=large  # behavior is the same if you don't use the mem_page_size parameter in your flavor
openstack server create --image cirros --flavor my_tiny --network public-net0 vm5
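Before the server create step, it can help to confirm that both compute nodes have registered with Nova. A minimal sketch using standard openstack CLI commands (these checks are not part of the original report):

```shell
# Sanity-check the compute side before booting: both compute-0 and
# compute-1 should show a nova-compute service that is enabled/up and
# should appear in the hypervisor list.
export OS_CLOUD=openstack_helm
openstack compute service list --service nova-compute
openstack hypervisor list
```

If a compute node is missing from either list, the scheduler will find no allocation candidates for it regardless of the network configuration.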

Expected Behavior
------------------
You should see 100% completion with no errors, and the VM up and running.

Actual Behavior
----------------
After executing "openstack server create", the system starts building the VM; however, "openstack server list" shows that the VM ends up in ERROR state.

You will also see that the "Networks" field is empty, even though you indicated the network to use in your server create command.
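When an instance lands in ERROR, the scheduler's reason is usually recorded in the server's fault field. A hedged check (vm5 is the instance name used in the steps above):

```shell
# Show only the status and the fault details for the failed instance;
# for this bug the fault message would be expected to reflect the
# "Got no allocation candidates" scheduler error.
openstack server show vm5 -c status -c fault
```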

Reproducibility
---------------
Intermittent. The issue occurs 3 to 4 times in 10 fresh ISO installs. At this point it is very random: it can occur 3 times in a row, or not appear until the 10th installation.

System Configuration
--------------------
2 + 2 with the Hardware configuration described in the following document:

https://docs.google.com/spreadsheets/d/193YDIZ17blD4Sd4nafEZ5Js7kQiNP5k_PpY-bcuPW10/edit?usp=sharing

Listed as 2+2 Config 4

Compute-0 : Intel® Server System R2208WFTZS
MGMT - Baseboard X722 10G Ethernet Card (eno1)
DATA - Intel X520 DA SPF+ 10G Ethernet Card

Compute-1: Intel® Wolf Pass 1U 8x2.5in HDD Skylake SP - R1208WFTYS-IDD
MGMT - Baseboard X722 10G Ethernet Card (eno2)
DATA - Mellanox Cx4

Controller-0: Intel® Wolf Pass 1U 8x2.5in HDD Skylake SP - R1208WFTYS-IDD
OAM - Baseboard X722 10G Ethernet Card (eno1)
MGMT - Baseboard X722 10G Ethernet Card (eno2)

Controller-1: Intel® Server System R2208WFTZS
OAM - Baseboard X722 10G Ethernet Card (eno1)
MGMT - Baseboard X722 10G Ethernet Card (eno2)

Branch/Pull Time/Commit
-----------------------
SW_VERSION="19.01"
BUILD_TARGET="Unknown"
BUILD_TYPE="Informal"
BUILD_ID="n/a"

JOB="n/a"
BUILD_BY="slin14"
BUILD_NUMBER="n/a"
BUILD_HOST=""
BUILD_DATE="2019-03-22 14:55:16 +0000"

BUILD_DIR="/"
WRS_SRC_DIR="/localdisk/designer/slin14/starlingx/cgcs-root"
WRS_GIT_BRANCH="master"
CGCS_SRC_DIR="/localdisk/designer/slin14/starlingx/cgcs-root/stx"
CGCS_GIT_BRANCH="master"

Last Pass
---------
The same ISO was working well for VM creation using the same hardware but a different network configuration:

Please refer to the document listed here; Configurations 1 and 2 were working with no issues.
https://docs.google.com/spreadsheets/d/193YDIZ17blD4Sd4nafEZ5Js7kQiNP5k_PpY-bcuPW10/edit?usp=sharing

Timestamp/Logs
--------------

1.-
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
| 3 | compute-0 | worker | unlocked | enabled | available |
| 4 | compute-1 | worker | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+

2.-
controller-0:~# export OS_CLOUD=openstack_helm
controller-0:~# openstack server create --image cirros --flavor my_tiny --network public-net0 vm5
+-------------------------------------+------------------------------------------------+
| Field | Value |
+-------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | v45fxLKG9iAW |
| config_drive | |
| created | 2019-05-03T20:46:43Z |
| flavor | my_tiny (85b97244-9180-4eaa-babb-87dbda31d268) |
| hostId | |
| id | 263c81b4-f94d-46ca-9cbb-7075a3e00b40 |
| image | cirros (662fbd36-2048-4050-af48-9ffbbde8d3d1) |
| key_name | None |
| name | vm5 |
| progress | 0 |
| project_id | 80fdf56ba59f4ae48891bd6211f76d56 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2019-05-03T20:46:43Z |
| user_id | 6c1c3b373236464cb8bb2488066c9b1c |
| volumes_attached | |
+-------------------------------------+------------------------------------------------+

3.-
controller-0:~# openstack server list
+--------------------------------------+------+--------+----------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+------+--------+----------+--------+---------+
| 263c81b4-f94d-46ca-9cbb-7075a3e00b40 | vm5 | ERROR | | cirros | my_tiny |
| 043292da-c50f-4842-bb6f-37524133bb31 | vm4 | ERROR | | cirros | my_tiny |
| 2611a52c-cbb6-48ba-b257-d518c1418c07 | vm3 | ERROR | | cirros | my_tiny |
+--------------------------------------+------+--------+----------+--------+---------+

4.-
controller-0:~# kubectl get pod -n openstack | grep nova
nova-api-metadata-7ff98c5b78-x4qxs 1/1 Running 1 5h12m
nova-api-metadata-7ff98c5b78-xszzg 1/1 Running 1 5h12m
nova-api-osapi-557445949d-fckqn 1/1 Running 0 5h12m
nova-api-osapi-557445949d-q6qdh 1/1 Running 0 5h12m
nova-api-proxy-786d49cc6b-htzrl 1/1 Running 0 5h12m
nova-bootstrap-dtmwm 0/1 Completed 0 5h12m
nova-cell-setup-cf5bp 0/1 Completed 0 5h12m
nova-compute-compute-0-75ea0372-dzz4w 2/2 Running 0 5h12m
nova-compute-compute-1-3dfb81d6-hlgq9 2/2 Running 0 5h12m
nova-conductor-cd6bd89d9-jqddx 1/1 Running 0 5h12m
nova-conductor-cd6bd89d9-w6bs9 1/1 Running 0 5h12m
nova-consoleauth-898c6f66f-cv7mq 1/1 Running 0 5h12m
nova-consoleauth-898c6f66f-v4tln 1/1 Running 0 5h12m
nova-db-init-9hqtd 0/3 Completed 0 5h12m
nova-db-sync-wt25b 0/1 Completed 0 5h12m
nova-ks-endpoints-fqcs2 0/3 Completed 0 5h12m
nova-ks-service-wkhdd 0/1 Completed 0 5h12m
nova-ks-user-mh5mw 0/1 Completed 2 5h12m
nova-novncproxy-6845f57667-f2pd2 1/1 Running 0 5h12m
nova-novncproxy-6845f57667-jkznn 1/1 Running 0 5h12m
nova-placement-api-9c7cd6995-jckjw 1/1 Running 0 5h12m
nova-placement-api-9c7cd6995-xv8rc 1/1 Running 0 5h12m
nova-rabbit-init-9qcjq 0/1 Completed 0 5h12m
nova-scheduler-794ff64fcc-kc2w4 1/1 Running 0 5h12m
nova-scheduler-794ff64fcc-zvgbw 1/1 Running 0 5h12m
nova-storage-init-s98js 0/1 Completed 0 5h12m

5.-

controller-0:~# kubectl logs nova-scheduler-794ff64fcc-kc2w4 -n openstack
+ exec nova-scheduler --config-file /etc/nova/nova.conf
Deprecated: Option "idle_timeout" from group "database" is deprecated. Use option "connection_recycle_time" from group "database".
Deprecated: Option "idle_timeout" from group "api_database" is deprecated. Use option "connection_recycle_time" from group "api_database".
2019-05-03 16:03:55,279.279 1 WARNING nova.scheduler.filters.disk_filter [req-6042d528-0503-447b-b811-73564877b90a - - - - -] The DiskFilter is deprecated since the 19.0.0 Stein release. DISK_GB filtering is performed natively using the Placement service when using the filter_scheduler driver. Furthermore, enabling DiskFilter may incorrectly filter out baremetal nodes which must be scheduled using custom resource classes.
2019-05-03 16:03:55,298.298 17 INFO nova.service [-] Starting scheduler node (version 19.1.0)
2019-05-03 16:03:55,300.300 18 INFO nova.service [-] Starting scheduler node (version 19.1.0)
2019-05-03 16:03:55,301.301 19 INFO nova.service [-] Starting scheduler node (version 19.1.0)
<snip>.....
2019-05-03 16:06:09,809.809 71 INFO nova.scheduler.host_manager [req-b02bf3e6-4ea6-4bec-b76f-dea2fb73876d - - - - -] Host mapping not found for host compute-0. Not tracking instance info for this host.
2019-05-03 16:06:09,809.809 89 INFO nova.scheduler.host_manager [req-b02bf3e6-4ea6-4bec-b76f-dea2fb73876d - - - - -] Received a sync request from an unknown host 'compute-0'. Re-created its InstanceList.
2019-05-03 16:06:09,809.809 71 INFO nova.scheduler.host_manager [req-b02bf3e6-4ea6-4bec-b76f-dea2fb73876d - - - - -] Received a sync request from an unknown host 'compute-0'. Re-created its InstanceList.
2019-05-03 17:02:59,874.874 26 INFO nova.scheduler.manager [req-4774a9c3-a048-4eb3-90e7-aaf35a6beef4 6c1c3b373236464cb8bb2488066c9b1c 80fdf56ba59f4ae48891bd6211f76d56 - default default] Got no allocation candidates from the Placement API. This could be due to insufficient resources or a temporary occurrence as compute nodes start up.
<log end>...
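To confirm whether the scheduler error is really a Placement-side resource problem, the placement inventory can be inspected directly. A minimal sketch, assuming the osc-placement CLI plugin is installed (the provider UUID is a placeholder, and these commands are not from the original report):

```shell
# List resource providers; each compute node should appear here once
# its nova-compute has reported in.
openstack resource provider list

# Show the VCPU/MEMORY_MB/DISK_GB inventory of one provider
# (replace <provider-uuid> with a UUID from the list above):
openstack resource provider inventory list <provider-uuid>

# Ask placement directly whether it can satisfy the my_tiny flavor
# request (requires placement API microversion 1.10 or later):
openstack --os-placement-api-version 1.10 allocation candidate list \
    --resource VCPU=1 --resource MEMORY_MB=512 --resource DISK_GB=1
```

An empty candidate list from the last command would correspond to the scheduler's "Got no allocation candidates" message.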

Test Activity
-------------
Feature Testing OVS - DPDK

Revision history for this message
Ricardo Perez (richomx) wrote :

controller-0:~# kubectl describe node compute-0
Name: compute-0
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=compute-0
                    openstack-compute-node=enabled
                    openvswitch=enabled
                    sriov=enabled
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.206.50/24
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 03 May 2019 15:31:54 +0000
Taints: <none>
Unschedulable: false
Conditions:
  Type Status LastHeartbeatTime LastTransitionTime Reason Message
  ---- ------ ----------------- ------------------ ------ -------
  OutOfDisk False Fri, 03 May 2019 21:42:50 +0000 Fri, 03 May 2019 15:31:54 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
  MemoryPressure False Fri, 03 May 2019 21:42:50 +0000 Fri, 03 May 2019 15:31:54 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
  DiskPressure False Fri, 03 May 2019 21:42:50 +0000 Fri, 03 May 2019 15:31:54 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
  PIDPressure False Fri, 03 May 2019 21:42:50 +0000 Fri, 03 May 2019 15:31:54 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
  Ready True Fri, 03 May 2019 21:42:50 +0000 Fri, 03 May 2019 15:32:06 +0000 KubeletReady kubelet is posting ready status
Addresses:
  InternalIP: 10.10.57.59
  Hostname: compute-0
Capacity:
 cpu: 112
 ephemeral-storage: 20027216Ki
 memory: 196618812Ki
 pods: 110
Allocatable:
 cpu: 112
 ephemeral-storage: 18457082236
 memory: 196516412Ki
 pods: 110
System Info:
 Machine ID: 318e5af8729f4377ae2289ba08b7b594
 System UUID: 00D7627F-0786-E811-906E-00163566263E
 Boot ID: c57e6e3c-b277-4bd1-b256-9d537a51b674
 Kernel Version: 3.10.0-957.1.3.el7.1.tis.x86_64
 OS Image: CentOS Linux 7 (Core)
 Operating System: linux
 Architecture: amd64
 Container Runtime Version: docker://18.3.1
 Kubelet Version: v1.12.3
 Kube-Proxy Version: v1.12.3
PodCIDR: 172.16.3.0/24
Non-terminated Pods: (10 in total)
  Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
  --------- ---- ------------ ---------- --------------- -------------
  kube-system calico-node-rkdfn 250m (0%) 0 (0%) 0 (0%) 0 (0%)
  kube-system kube-...


Revision history for this message
Ricardo Perez (richomx) wrote :

controller-0:~# kubectl describe node compute-1
Name: compute-1
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=compute-1
                    openstack-compute-node=enabled
                    openvswitch=enabled
                    sriov=enabled
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.206.194/24
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 03 May 2019 15:31:21 +0000
Taints: <none>
Unschedulable: false
Conditions:
  Type Status LastHeartbeatTime LastTransitionTime Reason Message
  ---- ------ ----------------- ------------------ ------ -------
  OutOfDisk False Fri, 03 May 2019 21:43:20 +0000 Fri, 03 May 2019 15:31:21 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
  MemoryPressure False Fri, 03 May 2019 21:43:20 +0000 Fri, 03 May 2019 15:31:21 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
  DiskPressure False Fri, 03 May 2019 21:43:20 +0000 Fri, 03 May 2019 15:31:21 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
  PIDPressure False Fri, 03 May 2019 21:43:20 +0000 Fri, 03 May 2019 15:31:21 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
  Ready True Fri, 03 May 2019 21:43:20 +0000 Fri, 03 May 2019 15:31:33 +0000 KubeletReady kubelet is posting ready status
Addresses:
  InternalIP: 10.10.57.228
  Hostname: compute-1
Capacity:
 cpu: 64
 ephemeral-storage: 20027216Ki
 memory: 97534592Ki
 pods: 110
Allocatable:
 cpu: 64
 ephemeral-storage: 18457082236
 memory: 97432192Ki
 pods: 110
System Info:
 Machine ID: 32f53d4e0dab4fd3b7e27cdcfd9e8f3d
 System UUID: 8005C491-C91B-E811-906E-00163566263E
 Boot ID: d45c3e35-8252-4249-800a-a4b7a83ac266
 Kernel Version: 3.10.0-957.1.3.el7.1.tis.x86_64
 OS Image: CentOS Linux 7 (Core)
 Operating System: linux
 Architecture: amd64
 Container Runtime Version: docker://18.3.1
 Kubelet Version: v1.12.3
 Kube-Proxy Version: v1.12.3
PodCIDR: 172.16.2.0/24
Non-terminated Pods: (11 in total)
  Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
  --------- ---- ------------ ---------- --------------- -------------
  kube-system calico-node-v5kfl 250m (0%) 0 (0%) 0 (0%) 0 (0%)
  kube-system kube-pr...


Revision history for this message
Ricardo Perez (richomx) wrote :

ALL NODES Full Log Files

Revision history for this message
Ghada Khalil (gkhalil) wrote :

Is this an issue with the standard cengn loads as well? Or is it specific to the custom load used for ovs-dpdk testing?

Changed in starlingx:
status: New → Incomplete
Revision history for this message
Ricardo Perez (richomx) wrote :

Hi Ghada, this issue has been seen while testing ovs-dpdk, not sure about the cengn loads.

Revision history for this message
Cristopher Lemus (cjlemusc) wrote :

Is it possible that these issues/errors are related to what was fixed on these bugs?:

https://bugs.launchpad.net/starlingx/+bug/1825814
https://bugs.launchpad.net/starlingx/+bug/1825814

Memory exhaust?

Revision history for this message
Cristopher Lemus (cjlemusc) wrote :

Sorry, I pasted the same bug twice, this is the other one: https://bugs.launchpad.net/starlingx/+bug/1826308

Revision history for this message
Ghada Khalil (gkhalil) wrote :

I don't believe the above issues are related. As noted, there is an explicit error from nova in this case. Assigning to the distro.openstack project lead for further triage.

tags: added: stx.distro.openstack
Changed in starlingx:
assignee: nobody → Bruce Jones (brucej)
Revision history for this message
Cindy Xie (xxie1) wrote :

@Cristopher @Ricardo, because we already have the OVS/DPDK upversion patches merged, can you please re-test the issue on a master CENGN build and check whether the issue is still there?

Revision history for this message
Ricardo Perez (richomx) wrote :

@Cindy, sure, we can re-test as soon as we can get a stable ISO for the 2+2 configuration.

Cindy Xie (xxie1)
Changed in starlingx:
assignee: Bruce Jones (brucej) → Cindy Xie (xxie1)
Revision history for this message
Cindy Xie (xxie1) wrote :

@Ricardo, can you please report back with your test results using latest ISO?

Revision history for this message
Ricardo Perez (richomx) wrote :

@cindy, right now I'm trying to set up the environment for QAT. I'll let you know as soon as I have some hardware to test this again, or let me just make some progress on QAT so I can switch back to this. Thanks.

Revision history for this message
Cindy Xie (xxie1) wrote :

Not sure yet if this LP is related to https://storyboard.openstack.org/#!/story/2005750, but temporarily assigning to Zhipeng as he is working on SB2005750.

Changed in starlingx:
assignee: Cindy Xie (xxie1) → zhipeng liu (zhipengs)
Bruce Jones (brucej)
Changed in starlingx:
importance: Undecided → High
tags: added: stx.2.0
Revision history for this message
zhipeng liu (zhipengs) wrote :

Hi Ricardo,

Could you help verify this issue with the EB below?
This EB uses the latest openstack-placement, and basic tests pass.

http://dcp-dev.intel.com/pub/starlingx/stx-eng-build/68/outputs/

Thanks!
Zhipeng

Revision history for this message
zhipeng liu (zhipengs) wrote :

Dear Ricardo,

Please retest with the latest daily build below, as it integrates the latest openstack-placement.
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190621T013000Z/outputs/iso/

Thanks!
Zhipeng

Revision history for this message
Ricardo Perez (richomx) wrote :

Hi @zhipengs, sorry for my late reply; lately I have been focused on other tasks related to QAT. Currently we are a little short of hardware to perform the validation. However, let me prioritize this task for this week, and I'll let you know the results.

Thanks in advance
-Ricardo

Revision history for this message
zhipeng liu (zhipengs) wrote :

Thanks Ricardo!

Revision history for this message
yong hu (yhu6) wrote :

@zhipeng, as discussed in the OpenStack distro meeting, please make a build with the updated Nova stein branch and share the build with Ricardo.

Revision history for this message
zhipeng liu (zhipengs) wrote :

Hi Ricardo,

Please use the EB below to verify this issue.
http://dcp-dev.intel.com/pub/starlingx/stx-eng-build/118/outputs

This EB uses the new nova branch stein.2.
I have verified basic VM creation and removal in a multi-node setup.

Thanks
Zhipeng

Cindy Xie (xxie1)
tags: added: stx.retestneeded
Changed in starlingx:
status: Incomplete → Fix Released
status: Fix Released → Fix Committed
status: Fix Committed → In Progress
Revision history for this message
Ricardo Perez (richomx) wrote :

Hi, we are trying to get the proper hardware, because this issue was reported with a specific combination of network cards. However, we are currently running our regression tests, which is making it difficult to obtain the hardware.

Revision history for this message
Ricardo Perez (richomx) wrote :

Using this build:

###
### StarlingX
### Built from master
###

OS="centos"
SW_VERSION="19.01"
BUILD_TARGET="Host Installer"
BUILD_TYPE="Formal"
BUILD_ID="20190705T013000Z"

JOB="STX_build_master_master"
BUILD_BY="<email address hidden>"
BUILD_NUMBER="170"
BUILD_HOST="starlingx_mirror"
BUILD_DATE="2019-07-05 01:30:00 +0000"

The issue is not present; however, we have to confirm on this specific hardware configuration:

Listed as 2+2 Config 4

Compute-0 : Intel® Server System R2208WFTZS
MGMT - Baseboard X722 10G Ethernet Card (eno1)
DATA - Intel X520 DA SPF+ 10G Ethernet Card

Compute-1: Intel® Wolf Pass 1U 8x2.5in HDD Skylake SP - R1208WFTYS-IDD
MGMT - Baseboard X722 10G Ethernet Card (eno2)
DATA - Mellanox Cx4

Controller-0: Intel® Wolf Pass 1U 8x2.5in HDD Skylake SP - R1208WFTYS-IDD
OAM - Baseboard X722 10G Ethernet Card (eno1)
MGMT - Baseboard X722 10G Ethernet Card (eno2)

Controller-1: Intel® Server System R2208WFTZS
OAM - Baseboard X722 10G Ethernet Card (eno1)
MGMT - Baseboard X722 10G Ethernet Card (eno2)

Revision history for this message
zhipeng liu (zhipengs) wrote :

Thanks Ricardo!
I think you can test with the original build you tested before, in the same setup.
If the original build can reproduce the bug and the current EB cannot in your current setup,
that should be OK!

Zhipeng

Revision history for this message
Ricardo Perez (richomx) wrote :

We have tested with a duplex configuration and the issue is no longer seen. For now, you can mark this bug as closed. Thanks.

Revision history for this message
Ricardo Perez (richomx) wrote :

After our StarlingX Test Meeting, we agreed on the following about this specific bug: it has to be tested again with the same specific hardware on which it was originally reported.

Due to current hardware availability, this will be done once we finish the Regression Testing, so that we can allocate the proper hardware setup.

Once we perform that testing, and based on its results, we can provide the proper inputs to decide whether this bug can be closed.

Thanks.

Revision history for this message
yong hu (yhu6) wrote :

@Ricardo, could we run the testing for this LP this week?
I believe we have just completed the regression testing.

Revision history for this message
Ricardo Perez (richomx) wrote :

Hi Yong, we are setting up the specified hardware; however, we have been constantly hitting this issue:

https://bugs.launchpad.net/starlingx/+bug/1839696

We are flashing the latest green ISO to see if we can at least perform the proper system setup.

Thanks
-Ricardo

Revision history for this message
Ricardo Perez (richomx) wrote :

After setting up the proper hardware, it looks like the issue is no longer present with the following ISO image:

controller-0:/home/sysadmin# cat /etc/build.info
###
### StarlingX
### Release 19.01
###

OS="centos"
SW_VERSION="19.01"
BUILD_TARGET="Host Installer"
BUILD_TYPE="Formal"
BUILD_ID="r/stx.2.0"

JOB="STX_BUILD_2.0"
BUILD_BY="<email address hidden>"
BUILD_NUMBER="14"
BUILD_HOST="starlingx_mirror"
BUILD_DATE="2019-08-08 01:30:00 +0000"

controller-0:/home/sysadmin# openstack server create --image cirros --flavor my_tiny --network public-net0 --security-group security1 richo1
+-------------------------------------+------------------------------------------------+
| Field | Value |
+-------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | pjPqYs7jfCPJ |
| config_drive | |
| created | 2019-08-13T09:39:39Z |
| flavor | my_tiny (200d48ce-52bb-4ce5-9768-113c1ff0926a) |
| hostId | |
| id | 68a88f0c-65c7-499a-8e7e-aa16ddc47a47 |
| image | cirros (2c56b5dc-109e-44c5-b0c1-f3d611814abd) |
| key_name | None |
| name | richo1 |
| progress | 0 |
| project_id | c863ebfbbcbc4683859acbf4dc5f8d6b |
| properties | |
| security_groups | name='dd393bca-b9c7-4f86-8594-7ff80b8b40b3' |
| status ...


Revision history for this message
yong hu (yhu6) wrote :

It is now confirmed that the previous issue is no longer seen, as expected, so this LP is closed as "Fix Released".

Due to the lack of HW resources, this LP had been open for a long time since the placement containerization patches were merged.

Changed in starlingx:
status: In Progress → Fix Released
Revision history for this message
Yang Liu (yliu12) wrote :

Remove retestneeded label based on Ricardo's comments.

tags: removed: stx.retestneeded