neutron routers are down

Bug #1811941 reported by Amer Hwitat on 2019-01-16
This bug affects 1 person
Affects: neutron
Importance: Undecided
Assigned to: Unassigned

Bug Description

It seems that Red Hat OpenStack 14 has problems implementing virtual bridges. This affects the Neutron (Networking) service: network interfaces on the routers should not be in the DOWN state. I need a fix if there is one ... see attached

Amer Hwitat (amer.hwitat) wrote :

sorry I'm using Red Hat 7.0

Brian Haley (brian-haley) wrote :

First, if this is a bug specific to OSP, you should file the bug in Red Hat bugzilla.

Second, what commands did you run to create and configure the router? The jpg doesn't show anything except a router connected to the external network (maybe there is a private subnet hiding behind the pop-up?).

Changed in neutron:
status: New → Incomplete
Amer Hwitat (amer.hwitat) wrote :

I used this to install on RHEL 7
########################################################################
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl disable firewalld
systemctl stop firewalld
setenforce 0

systemctl restart network
systemctl status network
########################################################################
subscription-manager list --available

subscription-manager attach --pool=

subscription-manager repos --enable=rhel-7-server-optional-rpms \
  --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms
subscription-manager repos --enable=rhel-7-server-openstack-14-rpms
subscription-manager repos --enable=rhel-7-server-openstack-14-devtools-rpms
subscription-manager repos --enable=rhel-7-server-openstack-14-tools-rpms

yum repolist enabled #enable all

subscription-manager repos --enable=

yum install -y yum-plugin-priorities yum-utils

yum install openstack-selinux

rpm -q --whatprovides rubygem-json ###### rubygem-json-1.7.7-20.el7.x86_64

yum install -y openstack-packstack
############################################################################
I used these commands to create the external networks:
neutron net-create External1 --provider:network_type flat --provider:physical_network br-ex --router:external=true --shared
neutron net-create External2 --provider-physical-network provider --provider:physical_network eno16777736 --router:external=true --shared
openstack subnet create --network provider \
  --allocation-pool start=192.168.43.1,end=192.168.43.240 \
  --dns-nameserver 192.168.43.1 --gateway 192.168.43.1 \
  --subnet-range 192.168.43.0/24 provider
I tried the three external networks separately
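For reference, the non-deprecated openstack CLI form of a flat external network looks roughly like this (a sketch assuming a physnet label such as extnet that is mapped in the agent's bridge_mappings; the label here is illustrative):

openstack network create --provider-network-type flat \
  --provider-physical-network extnet \
  --external --share External1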
#############################################################################
I created the routers and subnets on Horizon

My external network is 192.168.43.0/24 and my IP is 192.168.43.77. I installed OpenStack with Packstack, using an answer file and with HTTPS enabled.

I also tried this solution: I configured br-ex on my network for OpenStack, and also configured the related Neutron .ini and plugin .ini files.

I also used this solution:
mysql
  > create database neutron;
  > grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'server';
  > grant all privileges on neutron.* to 'neutron'@'%' identified by 'server';
  > quit
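Creating the database alone isn't enough; neutron.conf also needs the matching connection URL. A minimal sketch, assuming the 'server' password and the 'controller' host used in the commands here:

[database]
connection = mysql+pymysql://neutron:server@controller/neutron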

export | grep OS_
declare -x OS_AUTH_URL="https://192.168.43.77:5000/v3"

source admin-openrc.sh
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cle...


Amer Hwitat (amer.hwitat) wrote :

I also followed the following links for answer:

https://www.linuxtechi.com/install-use-openvswitch-kvm-centos-7-rhel-7/

https://ask.openstack.org/en/question/109367/how-to-debug-the-routers-interface-all-the-interfaces-status-are-down/?answer=118926#post-id-118926

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/networking_guide/sec-connect-instance

https://ask.openstack.org/en/question/25234/one-router-port-is-always-down/

https://openstack-xenserver.readthedocs.io/en/latest/10-install-networking-neutron-on-controller.html

https://bugzilla.redhat.com/show_bug.cgi?id=1054857

Created a Bridge also using:

ovs-vsctl add-br br-ex
ip addr add 192.168.43.77/24 dev br-ex
ip addr flush dev eno16777736
ip addr add 192.168.43.77/24 dev br-ex
ovs-vsctl add-port br-ex eno16777736
ip link set dev br-ex up
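A quick sanity check of the bridge wiring, assuming the commands above succeeded:

ovs-vsctl show        # br-ex should list eno16777736 as a port
ip addr show br-ex    # 192.168.43.77/24 should now sit on the bridge, not the NIC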

virsh net-define /tmp/ovs-network.xml \
Network ovs-network defined from /tmp/ovs-network.xml
########################################################################

Amer Hwitat (amer.hwitat) wrote :

Oh yes, there is an internal network configured as 172.17.0.0/16. Its interface was created using Horizon and was Down; the external network was created via the CLI, as I didn't find that option in Horizon for Red Hat OpenStack 13 ... :) Excuse me, I didn't get much sleep today or yesterday trying to work around this problem. I also reported this problem to Red Hat on Bugzilla after you mentioned it, thanks to you ...

Amer Hwitat (amer.hwitat) wrote :

Both networks are Active on Horizon (External and Internal), but the interfaces are DOWN ...

Brian Haley (brian-haley) wrote :

Your instructions above do not show you starting the l3-agent, which will wire and bring the router interfaces up.

> systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
> systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service

$ systemctl enable neutron-l3-agent.service
$ systemctl start neutron-l3-agent.service

This is listed in the next block in the install guide, https://docs.openstack.org/neutron/latest/install/controller-install-rdo.html
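For instance, a quick way to confirm the agents are alive once admin credentials are sourced (a generic sketch, not specific to this deployment):

openstack network agent list   # the L3 agent row should show Alive :-) and State UP
systemctl status neutron-l3-agent.service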

As a data point, issues like this are best discussed on <email address hidden> since they aren't always bugs.

Amer Hwitat (amer.hwitat) wrote :

I'm trying to figure out where to post the snapshots of this problem for you.

Amer Hwitat (amer.hwitat) wrote :

Well, I did enable it as you said earlier, but it didn't help before ... I forgot to mention it in the answer sheet I posted first, sorry. But I want to ask: is there an option to create external networks in Horizon? It seems there isn't any, I mean in RH OpenStack, unlike RDO OpenStack. Anyway, my VM crashed and I deleted it from my HD. I'm running it on my laptop as an all-in-one, you know, with 5 GB of RAM, which slows me down; I'm doing this for testing purposes and have started another fresh installation now. For my new network: does it really need a bridge, I mean br-ex, to work properly, or can I use the eno16777736 interface for the external network directly?

Thanks Brian

Amer Hwitat (amer.hwitat) wrote :

ps -ef|grep -i neutron
root 113380 108515 9 10:54 ? 00:00:02 /usr/bin/python2 /usr/bin/neutron-db-manage upgrade heads

It's upgrading; I think it will finish, and then I will do the external and internal check with a router on Horizon again.

Thanks

Amer Hwitat (amer.hwitat) wrote :

yeah Brian I have posted on:

https://bugzilla.redhat.com/show_bug.cgi?id=1666598

it seems that there is nobody in the room :)

I have finished a new installation of OpenStack 13, RH OpenStack 13.

I will provide you with details about the neutron error if it doesn't go wrong this time. I will configure a physical external interface using:
neutron net-create External2 --provider-physical-network provider --provider:physical_network eno16777736 --router:external=true --shared

then create a subnet and an interface on it.

Then I'm going to create an internal network, create an interface for the new internal network on the same router, and see what happens; I will provide you with details ...

Best regards
Amer

Amer Hwitat (amer.hwitat) wrote :

neutron net-create External2 --provider:network_type flat --provider:physical_network eno16777736 --router:external=true --shared
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2019-01-17T17:05:57Z |
| description | |
| id | 0f9adf89-6ee2-4e68-8559-fc474dedb30d |
| ipv4_address_scope | |
| ipv6_address_scope | |
| is_default | False |
| mtu | 1500 |
| name | External2 |
| port_security_enabled | True |
| project_id | a76eea958a6e435a93e3ffc7a36c7970 |
| provider:network_type | flat |
| provider:physical_network | eno16777736 |
| provider:segmentation_id | |
| qos_policy_id | |
| revision_number | 1 |
| router:external | True |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | a76eea958a6e435a93e3ffc7a36c7970 |
| updated_at | 2019-01-17T17:05:57Z |
+---------------------------+--------------------------------------+
[root@localhost ~(keystone_admin)]# neutron router-create router1
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new router:
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2019-01-17T17:09:36Z |
| description | |
| distributed | False |
| external_gateway_info | |
| flavor_id | |
| ha | False ...


Amer Hwitat (amer.hwitat) wrote :

neutron port-list --fixed-ips ip_address=192.168.43.1 ip_address=172.17.20.5 ip_address=172.17.20.152
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+------+----------------------------------+-------------------+--------------------------------------------------------------------------------------+
| id | name | tenant_id | mac_address | fixed_ips |
+--------------------------------------+------+----------------------------------+-------------------+--------------------------------------------------------------------------------------+
| 53c7f03e-27b6-419d-a098-61823dad013b | | a76eea958a6e435a93e3ffc7a36c7970 | fa:16:3e:94:b5:b2 | {"subnet_id": "4f61307a-6f89-4edc-8c9b-f4afc9c9bde6", "ip_address": "172.17.20.152"} |
| b2639fd0-a2ff-4616-9f69-1a86240a65ce | | a76eea958a6e435a93e3ffc7a36c7970 | fa:16:3e:ff:0c:d1 | {"subnet_id": "4f61307a-6f89-4edc-8c9b-f4afc9c9bde6", "ip_address": "172.17.20.5"} |
+--------------------------------------+------+----------------------------------+-------------------+--------------------------------------------------------------------------------------+

Amer Hwitat (amer.hwitat) wrote :

I added static routes :

neutron router-update router1 --routes type=dict list=true destination=192.168.43.0/24,nexthop=172.17.20.150
neutron router-update router1 --routes type=dict list=true destination=172.17.0.0,nexthop=192.168.43.1

the interfaces are still down ...
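When a router port stays DOWN, inspecting its binding often shows why; a sketch using one of the port IDs listed above:

openstack port show 53c7f03e-27b6-419d-a098-61823dad013b \
  -c status -c binding_vif_type -c binding_host_id
# binding_vif_type = binding_failed usually means the agent on that host
# has no bridge mapping for the network's physical_network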

Amer Hwitat (amer.hwitat) wrote :

I'm not using br-ex. Do you want me to configure and use it?

best regards

Amer Hwitat (amer.hwitat) wrote :

issued the following:

systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service

I got an error on the first statement: the neutron-openvswitch-agent unit is not installed ... installing it now.

I issued the last two lines ... a gateway appeared on the router, but it's down ..

Amer Hwitat (amer.hwitat) wrote :

neutron port-list --fixed-ips ip_address=192.168.43.1 ip_address=172.17.20.5 ip_address=172.17.20.152
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+------+----------------------------------+-------------------+--------------------------------------------------------------------------------------+
| id | name | tenant_id | mac_address | fixed_ips |
+--------------------------------------+------+----------------------------------+-------------------+--------------------------------------------------------------------------------------+
| 53c7f03e-27b6-419d-a098-61823dad013b | | a76eea958a6e435a93e3ffc7a36c7970 | fa:16:3e:94:b5:b2 | {"subnet_id": "4f61307a-6f89-4edc-8c9b-f4afc9c9bde6", "ip_address": "172.17.20.152"} |
| b2639fd0-a2ff-4616-9f69-1a86240a65ce | | a76eea958a6e435a93e3ffc7a36c7970 | fa:16:3e:ff:0c:d1 | {"subnet_id": "4f61307a-6f89-4edc-8c9b-f4afc9c9bde6", "ip_address": "172.17.20.5"} |
+--------------------------------------+------+----------------------------------+-------------------+--------------------------------------------------------------------------------------+

neutron router-update router1 --routes type=dict list=true destination=192.168.43.0/24,nexthop=172.17.20.150
neutron router-update router1 --routes type=dict list=true destination=172.17.0.0,nexthop=192.168.43.1

#####################################################################################################

/etc/cinder/cinder.conf

added

[keystone_authtoken]
auth_uri = http://keystone_ip:5000
auth_url = http://keystone_ip:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = services
username = cinder
password = [ccinder password]

systemctl restart openstack-nova-api
systemctl restart openstack-nova-cert
systemctl restart openstack-nova-consoleauth
systemctl restart openstack-nova-scheduler
systemctl restart openstack-nova-conductor
systemctl restart openstack-nova-novncproxy
systemctl restart neutron-server
systemctl restart neutron-dhcp-agent
systemctl restart neutron-l3-agent
systemctl restart neutron-metadata-agent
systemctl restart neutron-openvswitch-agent
systemctl restart openstack-cinder-api
systemctl restart openstack-cinder-backup
systemctl restart openstack-cinder-scheduler
systemctl restart openstack-cinder-volume

sudo service --status-all | grep nova
sudo service --status-all | grep neutron
sudo service --status-all | grep cinder

and

systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service

just to make sure, because my system rebooted while I was asleep

Amer Hwitat (amer.hwitat) wrote :

Well, it's the same thing with or without br-ex.

I create a router with external and internal interfaces ... both are down.

I have tried static routes and ports and everything; it's just stuck here ...

I think there is something wrong with the installation again ... running packstack again ...

If you want snapshots, I will provide them ...

not solved

Amer Hwitat (amer.hwitat) wrote :

My command log, with some output, is as follows:

Create bridge

ovs-vsctl add-br br-ex
ip addr add 192.168.43.110/24 dev br-ex
ip addr flush dev eno16777736
ip addr add 192.168.43.110/24 dev br-ex
ovs-vsctl add-port br-ex eno16777736
ip link set dev br-ex up

virsh net-define /tmp/ovs-network.xml \
Network ovs-network defined from /tmp/ovs-network.xml
########################################################################
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl disable firewalld
systemctl stop firewalld
setenforce 0

systemctl restart network
systemctl status network
########################################################################
subscription-manager list --available

subscription-manager attach --pool=

subscription-manager repos --enable=rhel-7-server-optional-rpms \
  --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms
subscription-manager repos --enable=rhel-7-server-openstack-14-rpms
subscription-manager repos --enable=rhel-7-server-openstack-14-devtools-rpms
subscription-manager repos --enable=rhel-7-server-openstack-14-tools-rpms
subscription-manager repos --enable=rhel-7-server-openstack-13-rpms
subscription-manager repos --enable=rhel-7-server-openstack-13-devtools-rpms
subscription-manager repos --enable=rhel-7-server-openstack-13-tools-rpms
yum repolist enabled #enable all

subscription-manager repos --enable=

sudo yum -y install yum-plugin-priorities yum-utils

yum install openstack-selinux

rpm -q --whatprovides rubygem-json ###### rubygem-json-1.7.7-20.el7.x86_64

# do not run ### yum update ### if you want to keep RHEL 7.0 on your VM; it will upgrade RHEL 7 to 7.6 (Maipo)

########################################################################
Repo ID: rhel-7-server-openstack-8-source-rpms
Repo Name: Red Hat OpenStack Platform 8 for RHEL 7 (Source RPMs)
Repo URL: https://cdn.redhat.com/content/dist/rhel/server/7/$releasever/$basear
           ch/openstack/8/source/SRPMS
Enabled: 0

Repo ID: rhel-7-server-openstack-11-devtools-debug-rpms
Repo Name: Red Hat OpenStack Platform 11 Developer Tools for RHEL 7 (Debug RPMs)
Repo URL: https://cdn.redhat.com/content/dist/rhel/server/7/7Server/$basearch/o
           penstack-devtools/11/debug
Enabled: 0

Repo ID: rhel-7-server-rhceph-2-mon-debug-rpms
Repo Name: Red Hat Ceph Storage MON 2 for Red Hat Enterprise Linux 7 Server
           (Debug RPMs)
Repo URL: https://cdn.redhat.com/content/dist/rhel/server/7/$releasever/$basear
           ch/ceph-mon/2/debug
Enabled: 0

Repo ID: rhel-7-server-insights-3-debug-rpms
Repo Name: Red Hat Insights 3 (for RHEL 7 Server) (Debug RPMs)
Repo URL: https://cdn.redhat.com/content/dist/rhel/server/7/7Server/$basearch/i
           nsights/3/debug
Enabled: 0

Repo ID: rhel-7-server-satellite-maintenance-6-beta-debug-rpms
Repo Name: Red Hat Satellite Maintenance 6 Beta (for RHEL 7 Server) (Debug RPMs)
Repo URL: https://cdn.redhat.com/content/beta/rhel/server/7/$basearch/sat-maint
           enance/6/debug
Enabled: 0

Repo ID: rhel-7-server-satellite-tools-6.2-rpms
Repo Name: Red Hat Satellite Tools 6.2 (for RHEL 7 Server) (RPMs)
Repo URL: https://cdn.redhat.com...

Amer Hwitat (amer.hwitat) wrote :
Download full text (6.8 KiB)

I did the following to make sure that the installation is right and to test neutron:

Create bridge

ovs-vsctl add-br br-ex
ip addr add 192.168.43.110/24 dev br-ex
ip addr flush dev eno16777736
ip addr add 192.168.43.110/24 dev br-ex
ovs-vsctl add-port br-ex eno16777736
ip link set dev br-ex up

virsh net-define /tmp/ovs-network.xml \
Network ovs-network defined from /tmp/ovs-network.xml
########################################################################
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl disable firewalld
systemctl stop firewalld
setenforce 0

systemctl restart network
systemctl status network
########################################################################
subscription-manager list --available

subscription-manager attach --pool=

subscription-manager repos --enable=rhel-7-server-optional-rpms \
  --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms
subscription-manager repos --enable=rhel-7-server-openstack-14-rpms
subscription-manager repos --enable=rhel-7-server-openstack-14-devtools-rpms
subscription-manager repos --enable=rhel-7-server-openstack-14-tools-rpms
subscription-manager repos --enable=rhel-7-server-openstack-13-rpms
subscription-manager repos --enable=rhel-7-server-openstack-13-devtools-rpms
subscription-manager repos --enable=rhel-7-server-openstack-13-tools-rpms
yum repolist enabled #enable all

subscription-manager repos --enable=

sudo yum -y install yum-plugin-priorities yum-utils

yum install openstack-selinux --skip-broken
yum install python2* --skip-broken
yum install *pyparsing* --skip-broken
yum install *urllib3* --skip-broken
yum install *chardet* --skip-broken
yum install *gnocchi* --skip-broken
yum install *puppet* --skip-broken
yum install -y mariadb-server-galera --skip-broken
yum install -y *openvswitch* --skip-broken
#yum install -y openstack.* # saves time installing with packstack
yum install -y puppet hiera openssh-clients tar nc rubygem-json --skip-broken
yum install -y *rabbitmq* --skip-broken
yum install -y *ironic* --skip-broken
rpm -q --whatprovides rubygem-json ###### rubygem-json-1.7.7-20.el7.x86_64

# do not run ### yum update ### if you want to keep RHEL 7.0 on your VM; it will upgrade RHEL 7 to 7.6 (Maipo)
yum install -y openstack-packstack --skip-broken

then I installed and configured using:

packstack --answer-file=/root/answer.txt --timeout=9999999999999 --debug
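For reference, the answer file itself is usually generated first and then edited; a sketch:

packstack --gen-answer-file=/root/answer.txt
# edit /root/answer.txt, then run packstack with --answer-file as above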

and it worked; now the installation is definitely complete:

[root@localhost amer]# packstack --answer-file=/root/answer.txt --timeout=99999999999999 --debug
Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20190118-111616-VhYx_9/openstack-setup.log

Installing:
Clean Up [ DONE ]
Discovering ip protocol version [ DONE ]
Setting up ssh keys [ DONE ]
Preparing servers [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Preparing pre-install entries [ DONE ]
Setting up CA...


Amer Hwitat (amer.hwitat) wrote :

Using the Horizon dashboard, under the admin identity, I created the internal network:

Internal 172.17.0.0/16 shared

I created a Project 1 identity to create an external network in the Horizon dashboard:

External 192.168.43.0/24 shared

allocated a floating IP to Project 1:

192.168.43.156 floating IP for Project 1

Back under admin, in the network topology, I created a router called "router" associated with the external network as a gateway.

The router shows that the gateway interface is down.

I created an interface for the internal network, and it also shows that the interface is down.

I don't know what to do next ... let's try the CLI.

Wait a minute: it now shows the internal interface as Active and green, after I created a port (Port 1) on, I don't really remember now, the internal or the external network; I also created a floating IP for both.

anyway I have created the following:
[root@localhost ~]# cd /root
[root@localhost ~]# ls
anaconda-ks.cfg initial-setup-ks.cfg packstackca
answer.txt keystonerc_admin
[root@localhost ~]# source keystonerc_admin
[root@localhost ~(keystone_admin)]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
    export OS_USERNAME=admin
    export OS_PASSWORD='server'
    export OS_AUTH_URL=http://192.168.43.110:5000/v3
    export PS1='[\u@\h \W(keystone_admin)]\$ '

export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
    [root@localhost ~(keystone_admin)]# neutron net-create External1 --provider:network_type flat --provider:physical_network br-ex --router:external=true --shared
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Unable to create the flat network. Physical network br-ex is in use.
Neutron server returns request_ids: ['req-8bf4b0af-399e-46eb-abd5-326de1291bcf']
[root@localhost ~(keystone_admin)]# neutron net-create External2 --provider:network_type flat --provider:physical_network eno16777736 --router:external=true --shared
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2019-01-18T18:03:56Z |
| description | |
| id | 8fc13dae-3b11-41d2-be47-56b44aadb317 |
| ipv4_address_scope | |
| ipv6_address_scope | |
| is_default | False |
| mtu | 1500 |
| name | External2 |
| port_security_enabled | True |
| project_id | b5a20ad3401446909409bc9dc77288dd |
| provider:network_type ...

Amer Hwitat (amer.hwitat) wrote :

login as: root
root@192.168.43.110's password:
Last login: Fri Jan 18 10:31:11 2019 from desktop-dnumv0h
'abrt-cli status' timed out
[root@localhost ~]# neutron router-interface-add router1 Internal
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Auth plugin requires parameters which were not given: auth_url
[root@localhost ~]# export OS_USERNAME=admin
[root@localhost ~]# export OS_PASSWORD=server
[root@localhost ~]# export OS_TENANT_NAME=admin
[root@localhost ~]# export OS_AUTH_URL=http://localhost:5000/v3
[root@localhost ~]# neutron router-interface-add router1 Internal
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-dea0f91b-cb72-4ab3-a350-a681e39e0071)
[root@localhost ~]#
[root@localhost ~]# cd /root
[root@localhost ~]# ls
--allocation-pool answer.txt initial-setup-ks.cfg packstackca
anaconda-ks.cfg --dns-nameserver keystonerc_admin --subnet-range
[root@localhost ~]# . keystonerc_admin
[root@localhost ~(keystone_admin)]# openstack

It's very slow ... so I think the reason is that I run the VM with 16 GB from another laptop ... well, in the end, the internal interface at 172.17.20.1 gave a GREEN light, being Active ... the question now is why the external gateway interface shows RED, Down ...

This changes the case back to why the gateway interface is DOWN ... which takes me back to:
https://ask.openstack.org/en/question/25234/one-router-port-is-always-down/

Thanks a lot for the support ..

Amer Hwitat (amer.hwitat) wrote :

I have installed OpenStack completely, without errors.
I built the external/internal router using the CLI and Horizon ... there is an option to create external networks, but inside the project that you create, not under admin.
I waited, because my VM is very slow ... then the router's interface on the internal network finally went GREEN ... the interface on the external network did not; it's RED (DOWN).
Then OpenStack crashed (the Horizon dashboard) ... then my system became very, very slow ...
Then I executed some CLI commands to troubleshoot ... the Horizon httpd for the dashboard was down, and I learned that I have to install ironic baremetal to clean the node, because none of the openstack CLI commands would execute ...
Then my laptop crashed again ... this is when I learned that software really does crash hardware, and hits very hard :)
Then I rebooted and executed

packstack --answer-file=/root/answer.txt --timeout=99999999999999999 --debug #### again, with the ironic install option enabled (set to y)

then executed the following

export OS_USERNAME=admin
export OS_PASSWORD=server
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://localhost:5000/v3

neutron net-create External1 --provider:network_type flat --provider:physical_network br-ex --router:external=true --shared
neutron net-create External2 --provider:network_type flat --provider:physical_network eno16777736 --router:external=true --shared
neutron net-create External3 --provider-physical-network provider --provider:physical_network eno16777736 --router:external=true --shared
openstack subnet create --network provider \
  --allocation-pool start=192.168.43.2,end=192.168.43.240 \
  --dns-nameserver 192.168.43.1 --gateway 192.168.43.1 \
  --subnet-range 192.168.43.0/24 provider

mysql
create database neutron;
grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'server';
grant all privileges on neutron.* to 'neutron'@'%' identified by 'server';
quit

export | grep OS_
declare -x OS_AUTH_URL="https://192.168.64.128:5000/v3"

#source admin-openrc.sh
. /root/keystonerc_admin
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service

which is good and bad ... it seems my last configuration was not deleted from the system; it gave me the following:

[root@localhost ~]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
    export OS_USERNAME=admin
    export OS_PASSWORD='server'
    export OS_AUTH_URL=http://192.168.43.110:5000/v3
...

Amer Hwitat (amer.hwitat) wrote :

The ip a command gives the following:

[root@localhost ~(keystone_admin)]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000
    link/ether 00:0c:29:37:19:1f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:fe37:191f/64 scope link
       valid_lft forever preferred_lft forever
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 76:43:64:bc:be:22 brd ff:ff:ff:ff:ff:ff
4: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 9a:ed:b6:78:0c:48 brd ff:ff:ff:ff:ff:ff
5: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 56:40:b4:95:4f:4c brd ff:ff:ff:ff:ff:ff
7: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:32:ff:42 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:32:ff:42 brd ff:ff:ff:ff:ff:ff
65: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:0c:29:37:19:1f brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.110/24 brd 192.168.43.255 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 fe80::6c25:72ff:fe0e:974f/64 scope link
       valid_lft forever preferred_lft forever
[root@localhost ~(keystone_admin)]#

As you can see, br-ex is in state UNKNOWN.

I can ping from inside the VM and from outside the VM; before, I was not able to ping from outside the VM because the Windows 10 firewall had turned itself on for some reason, maybe a Windows update did it ...

I turned off the Windows firewall and shared the NIC, and it's still the same problem ...

[root@localhost network-scripts]# systemctl status network -l
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
   Active: active (exited) since Sat 2019-01-19 16:44:27 EST; 3h 39min ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 0

Jan 19 16:44:08 localhost systemd[1]: Starting LSB: Bring up/down networking...
Jan 19 16:44:14 localhost network[38911]: Bringing up loopback interface: [ OK ]
Jan 19 16:44:14 localhost ovs-vsctl[39058]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex
Jan 19 16:44:25 localhost network[38911]: Bringing up interface br-ex: [ OK ]
Jan 19 16:44:26 localhost ovs-vsctl[39250]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eno16777736 -- add-port br-ex eno16777736
Jan 19 16:44:26 localhost network[38911]: Bringing up interface eno16777736: [ OK ]
Jan 19 16:44:2...


Amer Hwitat (amer.hwitat) wrote :

OK ... finally done. The router is pingable from my Windows installation, with both interfaces up and running ... nova compute CPU on the router, and it took some time for the external gateway interface to come up and become active (GREEN) ... I did install ironic baremetal on a fresh OpenStack VM, though ... I now have three VMs: one for Linux only; the second with OpenStack installed all-in-one (one node); and the third is the VM I'm working on now (copied to an external drive for future reference). It's running well on 5 GB ... slow, but Horizon is fair for configuring networks and routers ... though I need a physical router as a gateway for my Windows machine to ping the internal network subnet; for now I'm preparing static routes to do that ..

I did the following on the CLI, and the rest in Horizon ...

systemctl restart network #### then check bridge br-ex status with 'ip a' for DOWN or UNKNOWN

grep rewrite /etc/httpd/conf.modules.d/* ## recreate modules in httpd

systemctl restart httpd
systemctl restart openstack*
systemctl status openstack*
#########################################################################################
gedit /etc/neutron/plugin.ini #### define openvswitch ovs and flat networks

File : openvswitch_agent.ini

integration_bridge=br-int
tenant_network_type = local,flat,vlan
network_vlan_ranges = physnet1
enable_tunneling = True
bridge_mappings = physnet1:br-ex
physical_interface_mappings = physnet1:br-ex

#/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
#change br-eth0 to your "data network" bridge
bridge_mappings = physnet1:br-eth0
network_vlan_ranges = physnet1

systemctl restart neutron-openvswitch-agent
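For flat provider networks, the ML2 plugin also has to allow the physnet label; a minimal sketch assuming the physnet1 label used above:

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = physnet1

systemctl restart neutron-server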

In addition, turning on verbose and debug mode in the neutron conf can help a lot in tracking down this issue.

#/etc/neutron/neutron.conf
[DEFAULT]
verbose = True
debug = True
#########################################################################################
systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
#########################################################################################
export OS_USERNAME=admin
export OS_PASSWORD=server
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.43.110:5000/v3

curl -v http://192.168.43.110:35357
  curl -v http://192.168.43.110:35357

neutron net-create External1 --provider:network_type flat --provider:physical_network br-ex --router:external=true --shared
neutron net-create External2 --provider:network_type flat --provider:physical_network eno16777736 --router:external=true --shared
neutron net-create External3 --provider-physical-network provider --provider:physical_network br-ex --router:external=true --shared

openstack subnet create --network provider \
  --allocation-pool start=192.168.43.2,end=192.168.43.240 \
  --dns-nameserver 192.168.43.1 --gateway 192.168.43.1 \
  --s...


Amer Hwitat (amer.hwitat) wrote :

systemctl restart network #### then check bridge br-ex status with 'ip a' for DOWN or UNKNOWN

grep rewrite /etc/httpd/conf.modules.d/* ## recreate modules in httpd

systemctl restart httpd
systemctl restart openstack*
systemctl status openstack*
#########################################################################################
gedit /etc/neutron/plugin.ini #### define openvswitch ovs and flat networks

File : openvswitch_agent.ini

integration_bridge=br-int
tenant_network_type = local,flat,vlan
network_vlan_ranges = physnet1
enable_tunneling = True
bridge_mappings = physnet1:br-ex
physical_interface_mappings = physnet1:br-ex

#/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
#change br-eth0 to your "data network" bridge
bridge_mappings = physnet1:br-eth0
network_vlan_ranges = physnet1

systemctl restart neutron-openvswitch-agent

In addition, turning on verbose and debug mode in the neutron conf can help a lot in tracking down this issue.

#/etc/neutron/neutron.conf
[DEFAULT]
verbose = True
debug = True
#########################################################################################
systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
#########################################################################################
export OS_USERNAME=admin
export OS_PASSWORD=server
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.43.110:5000/v3

curl -v http://192.168.43.110:35357
  curl -v http://192.168.43.110:35357

neutron net-create External1 --provider:network_type flat --provider:physical_network extnet --router:external=true --shared
neutron net-create External2 --provider:network_type flat --provider:physical_network extnet --router:external=true --shared
neutron net-create External3 --provider-physical-network provider --provider:physical_network br-ex --router:external=true --shared

openstack subnet create --network provider \
  --allocation-pool start=192.168.43.2,end=192.168.43.240 \
  --dns-nameserver 192.168.43.1 --gateway 192.168.43.1 \
  --subnet-range 192.168.43.0/24 provider

openstack network create --provider-network-type flat \
                         --provider-physical-network extnet \
                         --external \
                         --share \
                         external_network
[root@localhost ~(keystone_admin)]# openstack network create --provider-network-type flat \
> --provider-physical-network extnet \
> --external \
> --share \
> external_network
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
...
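Once the external network and subnet exist, wiring the router to them typically looks like this (a sketch; the subnet name is hypothetical):

openstack router set --external-gateway external_network router1
openstack router add subnet router1 internal_subnet
openstack router show router1 -c external_gateway_info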

Changed in neutron:
status: Incomplete → Invalid
Amer Hwitat (amer.hwitat) wrote :

https://access.redhat.com/support/cases/#/case/02293077

Red Hat opened a case for this issue; I think they are working on it, because I also had this bug:

Jan 19 03:47:01 localhost.localdomain dhclient[86963]: Please report for this software via the Red Hat Bugzilla site:
Jan 19 03:47:01 localhost.localdomain dhclient[86963]: http://bugzilla.redhat.com
Jan 19 03:47:01 localhost.localdomain dhclient[86963]: ution.
Jan 19 03:47:01 localhost.localdomain dhclient[86963]: exiting.
Jan 19 03:47:01 localhost.localdomain network[86591]: failed.
Jan 19 03:47:01 localhost.localdomain network[86591]: [FAILED]
Jan 19 03:47:01 localhost.localdomain systemd[1]: network.service: control process exited, code=exited status=1
Jan 19 03:47:01 localhost.localdomain systemd[1]: Failed to start LSB: Bring up/down networking.
Jan 19 03:47:01 localhost.localdomain systemd[1]: Unit network.service entered failed state.
Jan 19 03:47:01 localhost.localdomain systemd[1]: network.service failed.
[root@localhost network-scripts]#

The portal gives an error when uploading snapshots:

Error 500 Internal Server Error Message: {"message":"Failed to create attachment metadata","detailMessage":"Update failed. First exception on row 0 with id 5002K00000dA2nuQAC; first error: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record: []"}

Amer Hwitat (amer.hwitat) wrote :

| Case Information |
---------------------------------------
https://access.redhat.com/support/cases/#/case/02293077
Case Title : neutron routers are down
Case Number : 02293077
Case Open Date : 2019-01-16 11:19:30
Severity : 4 (Low)
Problem Type : Other

Most recent comment: On 2019-01-21 22:14:48, Hwitat, Amer commented:
"Hi ..

your portal gives the following error message when uploading snapshots
images so you don't have all the snapshots available,I have attached it to
this email message:

Error 500 Internal Server Error Message: {"message":"Failed to create
attachment metadata","detailMessage":"Update failed. First exception on row
0 with id 5002K00000dA2nuQAC; first error: UNABLE_TO_LOCK_ROW, unable to
obtain exclusive access to this record: []"}

Best regards
Amer Hwitat
00966545127195
IT Instructor
[image: launch instance successfully.JPG]
[image: launch instance successfully 3.JPG]
[image: launch instance successfully 2nd time.JPG]
[image: launch instance successfully 2.JPG]
[image: launch instance successfully 2 nova error.JPG]
[image: launch instance successfully 2 nova error details.JPG]
[image: launch instance successfully 2 nova error details 3.JPG]
[image: launch instance successfully 2 nova error details 2.JPG]
[image: is back after crash and swift server restart.JPG]
[image: crashing 1st time.JPG]
[image: associating floating Ips.JPG]
[image: associating floating Ips errors.JPG]
[image: associating floating Ips errors 2.JPG]
[image: volumes.JPG]
[image: using internet explorer is good in terms of usage (Memory).JPG]
[image: usage same overview with IE.JPG]
[image: usage nova.JPG]
[image: usage nova instances launched.JPG]
[image: usage nova Errors starts after swift is down.JPG]
[image: usage nova Errors starts after swift is down 2.JPG]
[image: usage MS inetrnet explorer usage is low (Mem) versus very rich
Firefox and chrome.JPG]
[image: usage instance allocating Floating IP.JPG]
[image: usage 3 nova the falling service.JPG]
[image: usage 3 nova the falling service novas is taking time to
respond.JPG]
[image: usage 3 nova the falling service novas is taking time to respond
2nd error.JPG]
[image: usage 3 nova the falling service nova is responding and Glance.JPG]
[image: usage 3 nova the falling service nova is responding 2nd time.JPG]
[image: usage 3 nova the falling service nova is not responding.JPG]
[image: usage 3 nova the falling service neutron is responding.JPG]
[image: usage 3 nova the falling service Glance is not responding.JPG]
[image: usage 3 nova the falling service again swift is down in less than 5
minutes.JPG]
[image: usage 3 nova the falling service 5.JPG]
[image: usage 3 nova the falling service 4.JPG]
[image: usage 3 nova the falling service 3.JPG]
[image: usage 3 nova launching instance RHEL 7.6 minimal tiny flavor.JPG]
[image: usage 3 nova launching instance RHEL 7.6 minimal tiny flavor
associating floating IPs.JPG]
[image: usage 3 at overview.JPG]
[image: usage 2 RHEL 7 minimal nova.JPG]
[image: usage 2 at overview.JPG]
[image: usage 1 at overview.JPG]
[image: windows ping with static route failure 3.JPG]
[image: windows ping with static route failure 2.JPG]
...


Amer Hwitat (amer.hwitat) wrote :

Thanks Brian for the support

Amer Hwitat (amer.hwitat) wrote :

[root@localhost network-scripts]# systemctl status network -l
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sat 2019-01-19 03:47:01 EST; 21s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 86319 ExecStop=/etc/rc.d/init.d/network stop (code=exited, status=0/SUCCESS)
  Process: 86591 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=1/FAILURE)
    Tasks: 0

Jan 19 03:47:01 localhost.localdomain dhclient[86963]: Please report for this software via the Red Hat Bugzilla site:
Jan 19 03:47:01 localhost.localdomain dhclient[86963]: http://bugzilla.redhat.com
Jan 19 03:47:01 localhost.localdomain dhclient[86963]: ution.
Jan 19 03:47:01 localhost.localdomain dhclient[86963]: exiting.
Jan 19 03:47:01 localhost.localdomain network[86591]: failed.
Jan 19 03:47:01 localhost.localdomain network[86591]: [FAILED]
Jan 19 03:47:01 localhost.localdomain systemd[1]: network.service: control process exited, code=exited status=1
Jan 19 03:47:01 localhost.localdomain systemd[1]: Failed to start LSB: Bring up/down networking.
Jan 19 03:47:01 localhost.localdomain systemd[1]: Unit network.service entered failed state.
Jan 19 03:47:01 localhost.localdomain systemd[1]: network.service failed.
[root@localhost network-scripts]#

Sorry, this is the complete command output for the NIC. It crashed because there was another ifcfg-eno16777736_1 file inside /etc/sysconfig/network-scripts/.

That caused my NIC to become inactive, but that was part of my rush to fix the problem. I sent RH the snapshots by email, and it's in process now. Bottom line: the swift server keeps going down, affecting all nodes; you know, it's the volume container. I hit that too, because I had everything installed on one slow VM.

Amer Hwitat (amer.hwitat) wrote :

I also forgot to tell you that in the last installation I had to install ironic baremetal to clean the nodes, because once they fill with requests, the neutron and openstack servers (nodes) get stuck, with commands and the Horizon interface not executing unless the devices are cleaned of requests. I also used a workaround each time to clear requests from the servers online; it's a simple command:

curl -v http://192.168.43.110:35357

It clears the controller node of requests so that neutron can serve the requests you make in the CLI and Horizon.

Besides that, I configured the ironic server to clean nodes automatically once the nodes are stopped and disabled; that's why I issued the commands below to clean the nodes on the CLI:

openstack endpoint create ironic \
  --region RegionOne internal \
  https://192.168.43.110:5000/v3

openstack baremetal node list

openstack baremetal node clean node \
    --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'

openstack baremetal node clean node \
    --clean-steps '[{"interface": "deploy", "step": "erase_devices"}]'

openstack network show external_network -f value -c id

or automated cleaning happens if you include this in /etc/ironic/ironic.conf:
[DEFAULT]

[conductor]
automated_clean=true
[deploy]
erase_devices_priority=0
[ilo]
clean_priority_erase_devices=0
[deploy]
erase_devices_iterations=1
##############################################################################
the devices will go through cleaning and come back out on the restart, or simply reboot your VM

systemctl disable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl stop neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl disable neutron-l3-agent.service
systemctl stop neutron-l3-agent.service

Best regards

Amer Hwitat (amer.hwitat) wrote :

Are you sure you are good without the snapshots? I took a snap of every serious error I had ... for me this was a good pilot project to get to know the OpenStack ecosystem ...

Amer Hwitat (amer.hwitat) wrote :

Anyway, I added the snapshots to this site (mostly gamers post on it; I used it for our purpose):

https://imgur.com/a/X6bEPop

Amer Hwitat (amer.hwitat) wrote :

swift crashes because of a network sockets error ...

This means either the openvswitch server or the neutron server; it also means there is definitely something wrong with the all-in-one installation (all the things in one node will definitely give problems, as the system gets overwhelmed) ...

Still, I'm asking: are you sure that it's not a neutron bug, or an issue? It might not be visible on a multi-node deployment under an under-cloud director, because that is spoiled with various resources to use; unlike poor people like me who play PS4 games most of the time, we tend to have one system in our private labs, you know ... :)

Anyway, I have this as the openvswitch configuration for the br-ex NIC and the eno16777736 physical card (I mean, it's the VM's physical one) ...

correct settings:

TYPE=OVSBridge
DEVICE=br-ex
DEVICETYPE=ovs
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
UUID=c0addf73-98fe-464c-804a-b40110fd6157
ONBOOT=yes
IPADDR=192.168.43.110
PREFIX=24
GATEWAY=192.168.43.1
DNS=192.168.43.1
BOOTPROTO=static
NM_CONTROLLED=no
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
PROXY_METHOD=none
BROWSER_ONLY=no

HWADDR=00:0C:29:37:19:1F
TYPE=OVSPort
DEVICE=eno16777736
DEVICETYPE=ovs
UUID=c0addf73-98fe-464c-804a-b40110fd6157
OVS_BRIDGE=br-ex
BOOTPROTO=none
ONBOOT=yes
PROMISC=yes
PROXY_METHOD=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
NM_CONTROLLED=no
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
PROXY_METHOD=none
BROWSER_ONLY=no
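With ifcfg files like these, the usual sanity test is to restart networking and inspect the bridge (a sketch, assuming the initscripts OVS support is installed):

systemctl restart network
ovs-vsctl list-ports br-ex   # should print eno16777736
ip link show br-ex           # UNKNOWN here is normal for an OVS internal port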

It gives me an issue of UNKNOWN state for br-ex when I run the start, and the network.service status shows it's an OVS issue ...

The VM is pingable and running well, though ... but I don't know if it still has issues with the openvswitch server and neutron ...

the problem now has three parts:

openvswitch server
neutron server
swift server

I also found that there was a problem on my Windows system: it had turned the firewall on, preventing me from troubleshooting properly ... but I turned it off and got back on the right track within an hour ... finding out that this was the cause ...

Also, when I finally finished the installation completely, I installed ironic to clean the nodes; I told you about that earlier. It's not installed by default in the all-in-one answer file, and you have to edit the answer file to add it ...

clear the requests from the controller ...

I also tuned my resources on Windows to give the VM more resources and more memory ... like turning off and not using rich web browsers such as Firefox and Chrome ... I used MS Internet Explorer; it's lightweight on the system ... definitely not my favorite.

The result is that something is wrong with the --allinone deployment in terms of installation, configuration, and runtime; the other thing is on my side: I should upgrade to a new laptop with more resources ... :)

thanks a lot ...

best regards

Amer Hwitat (amer.hwitat) wrote :

[root@localhost neutron]# systemctl status network
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
   Active: active (exited) since Tue 2019-01-22 09:13:04 EST; 58min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 10275 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)
    Tasks: 0

Jan 22 09:12:58 localhost.localdomain systemd[1]: Starting LSB: Bring up/down networking...
Jan 22 09:12:59 localhost.localdomain network[10275]: Bringing up loopback interface: [ OK ]
Jan 22 09:12:59 localhost.localdomain ovs-vsctl[10407]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex
Jan 22 09:13:04 localhost.localdomain network[10275]: Bringing up interface br-ex: [ OK ]
Jan 22 09:13:04 localhost.localdomain ovs-vsctl[10573]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eno16777736 -- add-port...o16777736
Jan 22 09:13:04 localhost.localdomain network[10275]: Bringing up interface eno16777736: [ OK ]
Jan 22 09:13:04 localhost.localdomain systemd[1]: Started LSB: Bring up/down networking.

Someone suggested that I edit l3_agent.ini, put some stuff in the DEFAULT section, and run the l3-agent again.

I took a snapshot of an untampered l3_agent.ini and sent it to this guy on the web ... it seems that he is old-school ops and wants to do the whole 'enter the dragon' thing ...

I just want to point out that in my case the problem seems to be with openvswitch and neutron, because swift uses network sockets and crashes if something is wrong with the connection; maybe it's the bridge's fault ... I'm not sure ...

As of the latest installation it should work out of the box ... but I'm not that sure about it ... can you please tell me what is wrong with the OVS configuration in the previous comment ...

it's the only thing I did in the last installation ...

Amer Hwitat (amer.hwitat) wrote :

And by the way ... the Red Hat case opened for me was closed with a recommendation to upgrade the resources I use ... for a new big, fat VM rather than 6 GB with a 4-core CPU ...

https://bugs.launchpad.net/neutron/+bug/1811941
https://access.redhat.com/support/cases/#/case/02293077
https://postimg.cc/gallery/29c2o4ftu/
https://imgur.com/a/X6bEPop

Amer Hwitat (amer.hwitat) wrote :

Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. <class 'keystoneauth1.exceptions.connection.ConnectTimeout'> (HTTP 500) (Request-ID: req-a0fa98c2-4707-42e6-b31d-1021c8538428)

this was caused by the default connection-timeout settings in the configuration files: all nodes (Glance, Swift, Nova, Neutron, Ceilometer ..etc) have default timeouts for binding IPs, client connections, and connections generally; most are set to 30 seconds or less, 0.1 s in some cases ...

this is proxy-server.conf at /etc/swift/; I added the lines between the hash rows:

[DEFAULT]
# functions get passed: conf, name, log_to_console, log_route, fmt, logger, adapted_logger
bind_port = 8080
workers = 4
user = swift
###################
bind_ip = 0.0.0.0
bind_timeout = 999
client_timeout = 999
conn_timeout = 999
node_timeout = 999
swift_dir = /etc/swift
############################

I also edited /usr/lib/python2.7/site-packages/swift/common/wsgi.py and added to its header:

import ConfigParser, io

because I want to make sure that conf.get reads from conf files like proxy-server.conf and object-server.conf; it's a mod_wsgi app running under httpd, which by the way I need to restart each time I reboot. I have to check this out:
https://stackoverflow.com/questions/9327554/mod-wsgi-python-conf-parser
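
a one-liner that makes the same wsgi.py edit non-interactively (a sketch; it assumes GNU sed and simply prepends the import at line 1, which is harmless even if the module already imports ConfigParser -- back up the file first):

cp /usr/lib/python2.7/site-packages/swift/common/wsgi.py{,.bak}
sed -i '1i import ConfigParser, io' /usr/lib/python2.7/site-packages/swift/common/wsgi.py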

added some lines to object-server.conf at /etc/swift

bind_timeout = 999
client_timeout = 999
conn_timeout = 999
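
the same timeouts can be applied to both files in one go with crudini (a sketch; 999 just mirrors the value I used above, not a recommendation):

for f in /etc/swift/proxy-server.conf /etc/swift/object-server.conf; do
  for k in bind_timeout client_timeout conn_timeout node_timeout; do
    crudini --set "$f" DEFAULT "$k" 999
  done
done
systemctl restart openstack-swift-proxy openstack-swift-object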

didn't really have to change anything in proxy-server.conf, even though I added the same parameters above to it; then I restarted the swift services and servers: systemctl restart openstack*swift*

bottom line: when swift (the volume node group) fails, it affects the controller nodes and the compute nodes. Physically, a PC cannot run without storage, but it can run without a network, for example; in our case, though, network sockets are what run everything inside OSP 14, OSP as a whole. So, simply, it is done once: configure the python script, configure the swift conf files, and make sure that br-ex is configured right from the beginning. If you are running the VM on a different platform, like Windows 10, make sure that your firewall is disabled, and Defender also; I don't know why, but it keeps turning real-time protection back on automatically, so check the settings... add static routes on the VM and on Windows, so you can ping your virtual router's outer and inner interfaces (a sketch below) .. and that's it, you're good to go.

just some remarks about Horizon in this slow case ... it keeps logging out and shows some other errors on the screen; be patient and wait, it will re-tune itself, but you have to log on again each time there is a lag on the system..
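
a sketch of the static route on the VM side; 10.0.0.0/24 stands in for the tenant subnet behind the router and 192.168.43.240 for the router's external gateway address -- both are assumptions, substitute the values from your own topology:

ip route add 10.0.0.0/24 via 192.168.43.240 dev eno16777736
# the Windows side is the same idea with its own route tool: point the
# tenant subnet at the router's external IP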

my bad, I installed the full rich GUI profile of RHEL 7.6 with all daemons (services), and not the Infrastructure Server profile, so it really consumes more RAM and CPU ..

during the process I had some errors, like a watchdog error on the VM's CPU, and there is a parser server that also needs its timeout configured

[root@localhost log]#
Message from syslogd@localhost at Jan 23 02:23:31 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [ovsdb-server:10088]

so this error above really affected neutron, ...

Amer Hwitat (amer.hwitat) wrote :

this scenario will not likely happen in HA datacenter deployments where the network infrastructure is super fast (fiber backbone switches and SAN storage solutions), but it can happen on congested networks with thousands of users connecting to the main cloud site ...

even with a DR site ready to take over when the main site is down, traffic on the internet is unpredictable and needs 24/7 on-call technical support staff to re-route it

back to swift, neutron and nova, the representatives of compute, controller (network) and volume: they all rely on binding IPs and opening sockets from python modules under httpd (Apache); when one fails, the others fail or lag ...

this is my point of view ..

Amer Hwitat (amer.hwitat) wrote :

not to mention if you are using SR-IOV to connect to live interconnected networks ... it would cause a problem ...

Amer Hwitat (amer.hwitat) wrote :

well, the solution from my side to guarantee that all openstack servers keep running in case of a timeout (connection drop) is to edit all the .conf files of nova, keystone, swift, cinder, glance, neutron ...etc and add timeout=999999999, or some specific reasonable amount of time (a sketch of the bulk edit is below), combined with a fix to the way openvswitch handles this, because it is not a fault-tolerant or failover component and needs a patch in your repos immediately ... I recommend making it redirect to the loopback NIC, which is running anyway and not used most of the time.
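
a rough sketch of that bulk edit; the option names vary per service, so the honest first step is to list each service's own timeout knobs and then raise them one by one (rpc_response_timeout is a real oslo option used by several of these services; 600 is just an example value, not a recommendation):

for svc in nova keystone swift cinder glance neutron; do
  grep -n "timeout" /etc/$svc/*.conf
done
crudini --set /etc/neutron/neutron.conf DEFAULT rpc_response_timeout 600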

otherwise, whoever reads this article and benefits from it illegally will find themselves in the dilemma of a possible service drop in case of a DoS attack, and might find their servers fried and down, losing a considerable amount of money in the worst case ... it is real that software can fire up CPUs in this situation, even though I don't think it is going to take your main site down; you will still have the trouble of bringing everything back, and possibly lose customers if you provide a real-time service like telecommunications. And even though I didn't test this scenario live on a multi-node environment, and I don't think it will be as extreme as this, the error I had with openvswitch that made my VM crash is CPU related...

[root@localhost log]#
Message from syslogd@localhost at Jan 23 02:23:31 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [ovsdb-server:10088]

so make sure to test it in a test environment that has Director, and to test on all node types (control, compute, volume), because it has a branching effect: it causes an underdog (undercloud) problem, an overdog (overcloud) problem, and a watchdog (OVS) problem... :)))))))

like when you have connectivity problems: re-route traffic to loopback so that you don't crash, or put a failover core switch in the local network as a backup plan for your datacenter; that is common sense.

I know what I do with this installation is not common sense, but the worst case scenario is that your main site loses connectivity and the DR site comes up and takes over; then your customers won't be interrupted and they will have a good experience with your network, but you will have a headache bringing the main hardware and OS back to the state they were in before losing power, for example (backup generators won't always save you), or before the main backbone infrastructure went down for whatever reason. You might spend hours rescuing and restoring your main site if you don't have a failover OVS, which is not present at the moment in my installation, and I don't know if it exists. Computer systems do not sweet-talk you or pay you compliments when they fail; they hit hard, and you get backlash from your management. I learned this back in 2002 when I was put in the situation of bringing back data (an OS) from a mirrored RAID HD, when a hot-spare mechanism in the BIOS should have meant I didn't need to do anything except change the HD manually and hot-plug it; and you know what, it didn't work. Luckily this was the OS, not the archived RAID 5 database...

that's also why I work on the academic side, not the technical side ..

Amer Hwitat (amer.hwitat) wrote :

I have these files that log most of what I did to make the VM work ... I have some other stuff that might also be worth sharing ... and excuse me for acting rude ... but it was important for me to finish evaluating my friend's case on the double .. I have recommended that he leave the infrastructure as-is and plan patches and upgrades to the system .. he is not going to work directly on this layer (openstack) .. he is going to work on the CBAM layer above it, but we had to make sure everything in the lower IaaS infrastructure layer is orchestrated by their team and fed back on certain things ... anyway, I'm now academic, not technical ... I teach for a living ... then I might consider going back to development, or to technical work on Oracle DB or SAP, as I'm a functional guy in this area; technical stuff is not my turf. I worked on Unix-based servers configuring Oracle EBS and SAP ERP, which is very different from pure networking ... Cisco, Huawei, Juniper, which I think are OSP contributors too ... besides some other companies that collaborate to build and maintain OSP.

though I must say that RHEL 7 reminds me of Compaq Tru64 UNIX, before Compaq broke up and was sold to HP and it became HP Compaq Tru64, the good old 2003 days; it also has some of the looks of Sun Solaris on SPARC before being sold to Oracle. I have worked on HP-UX 11 as well, configuring Oracle DB, and on IBM AIX 5 Unix in 2010; it seems that Linux flavors of Unix are dominating the world of datacenters more than RISC UNIX servers and mainframes ...

nothing, to me, compares to the Commodore 64, a super-fast home computer in the '80s with 64 KB of RAM, a built-in BASIC interpreter, fast tape and a 5.25" floppy, and a built-in sound chip; it rocked the mass market and is still the best-selling computer ever, outselling Apple and IBM. Those are days that, I want to say, are never going to be repeated ...

in the meantime, I advised my friend to keep doing his job, as he works on a layer above openstack and is not in charge of the infrastructure layer that runs openstack as the backbone of their telecom DR site, so he wouldn't be affected unless there is a real emergency that hits his site. He works on orchestration of VMs that are physically on openstack but logically managed by him; the orchestrator and the manager are not part of openstack. The bottom layer is openstack, the middle layer is the manager, and the upper layer is the dashboards and root-cause-analysis software, which is also not part of openstack. This is a good experience for me too, getting to talk to professionals like you.

in the meantime also, the VM is working fine now, no crash in swift's object-server service until now, but keystone and nova do sometimes give me a hard time, as I didn't set timeout=9999999999999999 in their .conf files ... looool

Amer Hwitat (amer.hwitat) wrote :

runtime failure log file

Amer Hwitat (amer.hwitat) wrote :

the swift.log file found in /var/log/swift/ tells nothing about the crashes, how strange

Amer Hwitat (amer.hwitat) wrote :

Ahhh, and neutron is working fine ... smooth ... I can ping the internal network from the outside world around my VM

Amer Hwitat (amer.hwitat) wrote :

now I have created an image in Glance, a container in swift, a volume in cinder, and an instance of RHEL 7 with the minimal tiny flavor, ready to launch and test connectivity. It gave some errors and adjusted itself back, and eventually it created the instance, though it gave me some 500 HTTP errors and keystone keeps logging me out, so ... now I might take the advice and upgrade the RAM, add another hard-to-find DDR3 stick to my cranky laptop .. it works. Just remember to add the imports to python2.7's wsgi.py, ConfigParser and io, in two lines or one line of your choice, so that the conf.get function actually reads from the related .conf files ... I really didn't check all 1000+ lines of code, but it goes in the headers at the beginning of the script; I'm also not sure whether they are already pulled in by another imported library in the header. Just to make sure it reads from them, add it .. it won't harm, redundancy of reference is legal ..

and set timeout=99999999 for conn and client, and set the binding IP to 0.0.0.0, not your real IP, so that it binds on any IP .. etc

adjust timeout=9999999 in all the .conf files of openstack and enjoy the solution on any slow network with a low-specs VM

Amer Hwitat (amer.hwitat) wrote :

do the following:

install ironic baremetal ### configure it for automatic node cleaning

just edit the .conf files of all services in openstack and add timeout=99999999, for conn_timeout ... all the timeout variants

edit wsgi.py

/usr/lib/python2.7/site-packages/swift/common/wsgi.py

add to the header

import ConfigParser,io

grep rewrite /etc/httpd/conf.modules.d/* ## check the rewrite module is loaded in httpd

systemctl restart httpd
systemctl restart openstack*

no longer affects: openstack (Ubuntu)
Amer Hwitat (amer.hwitat) wrote :

[root@amer ~(keystone_admin)]# openstack compute service delete 3 7 9 10 11 12 16 18 19 22 23
Failed to delete compute service with ID '3': Service 3 not found. (HTTP 404) (Request-ID: req-6aff9769-4611-48b2-95e4-796f4e88a47e)
Failed to delete compute service with ID '7': Service 7 not found. (HTTP 404) (Request-ID: req-0ea78e10-b109-4ad8-8d7b-ddd2065e8fac)
Failed to delete compute service with ID '10': Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
<class 'nova.exception.HostMappingNotFound'> (HTTP 500) (Request-ID: req-fbfcaebf-9451-44e2-96a6-ae1ae441d91e)
Failed to delete compute service with ID '18': Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
<class 'nova.exception.HostMappingNotFound'> (HTTP 500) (Request-ID: req-739e9293-9e35-45c8-84ac-c9a36bfa10a7)
4 of 11 compute services failed to delete.
[root@amer ~(keystone_admin)]#

I have changed the hosts file ... it gave the above

again

[root@amer ~(keystone_admin)]# openstack compute service list
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host                  | Zone     | Status  | State | Updated At                 |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
| 10 | nova-compute     | localhost             | nova     | enabled | down  | 2019-01-19T13:14:06.000000 |
| 18 | nova-compute     | localhost.localdomain | nova     | enabled | down  | 2019-01-25T15:34:10.000000 |
| 24 | nova-scheduler   | amer.domain.com       | internal | enabled | up    | 2019-01-25T16:29:29.000000 |
| 26 | nova-consoleauth | amer.domain.com       | internal | enabled | up    | 2019-01-25T16:29:34.000000 |
| 27 | nova-conductor   | amer.domain.com       | internal | enabled | up    | 2019-01-25T16:29:35.000000 |
| 29 | nova-compute     | amer.domain.com       | nova     | enabled | up    | 2019-01-25T16:29:33.000000 |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
[root@amer ~(keystone_admin)]# openstack compute service delete 10 18
Failed to delete compute service with ID '10': Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
<class 'nova.exception.HostMappingNotFound'> (HTTP 500) (Request-ID: req-576dc490-5ac3-4605-8c7f-c0391527842b)
Failed to delete compute service with ID '18': Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
<class 'nova.exception.HostMappingNotFound'> (HTTP 500) (Request-ID: req-544d1089-cc74-40f2-a5b4-2ff7ccd24fa5)
2 of 2 compute services failed to delete.
[root@amer ~(keystone_admin)]#

OK ... again

[root@amer ~(keystone_admin)]# systemctl restart *nova*
[root@amer ~(keystone_admin)]# openstack compute service list
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | Sta...

Amer Hwitat (amer.hwitat) wrote :

sorry, I changed /etc/hosts, not /etc/hostname, so I used hostnamectl to set the name (sketch below)
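
for the record, the sequence I'd use here (a sketch; amer.domain.com is the hostname from the service list above, and 192.168.43.77 is assumed to be the host's address from earlier in this thread):

hostnamectl set-hostname amer.domain.com
# keep /etc/hosts in sync so every service resolves the same name
echo "192.168.43.77 amer.domain.com amer" >> /etc/hosts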

Amer Hwitat (amer.hwitat) wrote :

the /etc/hostname file overrides the newer hostnamectl set-hostname command; it should be vice versa. Anyway, openstack compute service set --disable also fails on certain services whose hosts no longer exist (like localhost), and
openstack compute service delete ID doesn't delete some services, giving the error that tells you to report to launchpad.net ...
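
the HostMappingNotFound errors above suggest the stale rows live in nova's cell mapping; a sketch of cleaning them up with nova-manage (cell_v2 delete_host should exist in this release; take the cell UUID from list_cells):

nova-manage cell_v2 list_cells
nova-manage cell_v2 list_hosts
nova-manage cell_v2 delete_host --cell_uuid <cell-uuid> --host localhost
nova-manage cell_v2 delete_host --cell_uuid <cell-uuid> --host localhost.localdomain
# after that, the service rows should delete cleanly:
openstack compute service delete 10 18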

anyway I will go for fresh installation
