Fresh install - neutron processes fail on controller nodes
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack-Ansible | Fix Released | Critical | Unassigned |
Bug Description
This is a fresh install, using OVS as the Neutron backend. The controller nodes (where the network nodes are also collocated) continuously spit out:
Dec 27 22:19:24 sjc-lnxserver-122 systemd[12112]: neutron-
Dec 27 22:19:24 sjc-lnxserver-122 systemd[12112]: neutron-
Dec 27 22:19:24 sjc-lnxserver-122 systemd[12113]: neutron-
Dec 27 22:19:24 sjc-lnxserver-122 systemd[12113]: neutron-
Dec 27 22:19:24 sjc-lnxserver-122 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=unconfined msg='unit=
Dec 27 22:19:24 sjc-lnxserver-122 systemd[12114]: neutron-
Dec 27 22:19:24 sjc-lnxserver-122 systemd[12114]: neutron-
Dec 27 22:19:24 sjc-lnxserver-122 systemd[12115]: neutron-
Dec 27 22:19:24 sjc-lnxserver-122 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=unconfined msg='unit=
Dec 27 22:19:24 sjc-lnxserver-122 systemd[12115]: neutron-
An immediate effect of this is that there is no DHCP functionality for created networks.
user_variables:
neutron_
neutron_
neutron_
- router
- metering
openstack_
birbilakos (birbilis) wrote : | #1 |
Dmitriy Rabotyagov (noonedeadpunk) wrote : | #2 |
Can you kindly provide the output of "ls -l /etc/neutron/rootwrap.d"?
Also, can you check that the file /etc/sudoers.d/neutron_sudoers exists?
birbilakos (birbilis) wrote : | #3 |
root@sjc-
total 12
-rw-r----- 1 root root 503 Dec 27 18:32 ovn-plugin.filters
-rw-r----- 1 root root 2223 Dec 27 18:32 rootwrap.filters
-rw-r----- 1 root root 839 Dec 27 18:32 vpnaas.filters
root@sjc-
-r--r----- 1 root root 434 Dec 27 18:33 /etc/sudoers.
root@sjc-
# Ansible managed
Defaults:neutron !requiretty
Defaults:neutron secure_
neutron ALL = (root) NOPASSWD: /openstack/
neutron ALL = (root) NOPASSWD: /openstack/
neutron ALL = (root) NOPASSWD: /openstack/
birbilakos (birbilis) wrote : | #4 |
Things tried so far (to no avail):
- chown neutron:neutron everything under: /openstack/
- changed /etc/systemd/
[Service]
Type = simple
User = neutron
Group = neutron
to User = root
which just produced a different error:
Dec 28 11:38:21 sjc-lnxserver-122 neutron-
Dec 28 11:38:21 sjc-lnxserver-122 neutron-
birbilakos (birbilis) wrote : | #5 |
Forgot to mention that the OS is Ubuntu 22.04 LTS; neither SELinux nor ufw is enabled.
birbilakos (birbilis) wrote : | #6 |
# cat /etc/neutron/
# Command filters to allow privsep daemon to be started via rootwrap.
#
# This file should be owned by (and only-writeable by) the root user
[Filters]
# By installing the following, the local admin is asserting that:
#
# 1. The python module load path used by privsep-helper
# command as root (as started by sudo/rootwrap) is trusted.
# 2. Any oslo.config files matching the --config-file
# arguments below are trusted.
# 3. Users allowed to run sudo/rootwrap with this configuration(*) are
# also allowed to invoke python "entrypoint" functions from
# --privsep_context with the additional (possibly root) privileges
# configured for that context.
#
# (*) ie: the user is allowed by /etc/sudoers to run rootwrap as root
#
# In particular, the oslo.config and python module path must not
# be writeable by the unprivileged user.
# PRIVSEP
# oslo.privsep default neutron context
privsep: PathFilter, privsep-helper, root,
--config-file, /etc/(?!\.\.).*,
--privsep_context, neutron.
--privsep_
# NOTE: A second `--config-file` arg can also be added above. Since
# many neutron components are installed like that (eg: by devstack).
# Adjust to suit local requirements.
# DEBUG
sleep: RegExpFilter, sleep, root, sleep, \d+
# EXECUTE COMMANDS IN A NAMESPACE
ip: IpFilter, ip, root
ip_exec: IpNetnsExecFilter, ip, root
# METADATA PROXY
haproxy: RegExpFilter, haproxy, root, haproxy, -f, .*
haproxy_env: EnvFilter, env, root, PROCESS_TAG=, haproxy, -f, .*
# DHCP
dnsmasq: CommandFilter, dnsmasq, root
dnsmasq_env: EnvFilter, env, root, PROCESS_TAG=, dnsmasq
# DIBBLER
dibbler-client: CommandFilter, dibbler-client, root
dibbler-client_env: EnvFilter, env, root, PROCESS_TAG=, dibbler-client
# L3
radvd: CommandFilter, radvd, root
radvd_env: EnvFilter, env, root, PROCESS_TAG=, radvd
keepalived: CommandFilter, keepalived, root
keepalived_env: EnvFilter, env, root, PROCESS_TAG=, keepalived
keepalived_
keepalived_
# OPEN VSWITCH
ovs-ofctl: CommandFilter, ovs-ofctl, root
ovsdb-client: CommandFilter, ovsdb-client, root
Dmitriy Rabotyagov (noonedeadpunk) wrote : | #7 |
So things look pretty much correct to me from your pastes; missing/wrong sudoers or rootwrap filters would be just about the only culprit I would expect.
Running as the root user is, I think, not expected by privsep, so it can indeed crash if you try to use root without further configuration (by just replacing the username for the service).
I think the last thing worth checking would be AppArmor profiles on Debian/Ubuntu, or SELinux on CentOS/Rocky Linux. We also do not currently support installation with SELinux enabled, for instance.
With that, the AppArmor profiles for haproxy, dnsmasq and ping should be disabled so that they can run inside namespaces:
https:/
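For illustration, disabling a single profile on Ubuntu 22.04 usually looks something like the sketch below; the profile name used here (usr.sbin.dnsmasq) is only an example, so check `aa-status` for the exact names present on your hosts and repeat for haproxy and ping if they are confined:
# aa-status | grep -E 'haproxy|dnsmasq|ping'          # list currently loaded profiles
# ln -s /etc/apparmor.d/usr.sbin.dnsmasq /etc/apparmor.d/disable/   # keep it disabled across reboots
# apparmor_parser -R /etc/apparmor.d/usr.sbin.dnsmasq               # unload the profile from the kernel now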
birbilakos (birbilis) wrote : | #8 |
I have not installed/enabled SELinux, so it can't be the culprit.
I'd assume that if AppArmor is causing this, disabling it would do the trick?
# systemctl disable apparmor
birbilakos (birbilis) wrote : | #9 |
Well, this didn't seem to make any difference:
# systemctl status apparmor.service
○ apparmor.service - Load AppArmor profiles
Loaded: loaded (/lib/systemd/
Dec 28 15:36:29 sjc-lnxserver-121 systemd[4565]: neutron-
Dec 28 15:36:29 sjc-lnxserver-121 systemd[4566]: neutron-
Dec 28 15:36:29 sjc-lnxserver-121 systemd[4565]: neutron-
Dec 28 15:36:29 sjc-lnxserver-121 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=unconfined msg='unit=
Dec 28 15:36:29 sjc-lnxserver-121 systemd[4566]: neutron-
Dec 28 15:36:29 sjc-lnxserver-121 systemd[4567]: neutron-
Dec 28 15:36:29 sjc-lnxserver-121 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=unconfined msg='unit=
Dec 28 15:36:29 sjc-lnxserver-121 systemd[4567]: neutron-
Dec 28 15:36:29 sjc-lnxserver-121 systemd[4568]: neutron-
Dec 28 15:36:29 sjc-lnxserver-121 systemd[1]: neutron-
Dec 28 15:36:29 sjc-lnxserver-121 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=unconfined msg='unit=
Dec 28 15:36:29 sjc-lnxserver-121 systemd[1]: neutron-
Dec 28 15:36:29 sjc-lnxserver-121 systemd[4568]: neutron-
birbilakos (birbilis) wrote : | #10 |
I just did a complete reinstall of the environment with the same config (haven't yet logged into Horizon) and am still hitting this issue, so it appears to happen consistently, at least on Ubuntu 22.04 server.
Dmitriy Rabotyagov (noonedeadpunk) wrote : | #11 |
Can you kindly paste the output of the following resulting config files:
* /etc/neutron/
* /etc/neutron/
And also some more insight into your network configuration, like `ip a` and/or `brctl show`, since commenting out `network_interface: "br-ext"` with the note about connectivity issues afterwards smells somewhat fishy.
We don't see any issues in CI, so I'd assume that something is actually off with this specific configuration, given that the playbooks are not failing during deployment.
birbilakos (birbilis) wrote (last edit ): | #12 |
root@sjc-
[ml2]
type_drivers = flat,vlan,vxlan
tenant_
mechanism_drivers = openvswitch
extension_drivers = port_security
# ML2 flat networks
[ml2_type_flat]
flat_networks = flat
# ML2 VLAN networks
[ml2_type_vlan]
network_vlan_ranges = physnet1:40:400
# ML2 VXLAN networks
[ml2_type_vxlan]
vxlan_group = 239.1.1.1
vni_ranges = 1:1000
[ml2_type_geneve]
vni_ranges =
max_header_size = 38
# Security groups
[securitygroup]
enable_
enable_ipset = True
root@sjc-
[ovs]
local_ip = 172.29.240.121
bridge_mappings = flat:br-
[agent]
l2_population = False
tunnel_types = vxlan
enable_
extensions =
# Security groups
[securitygroup]
firewall_driver = iptables_hybrid
enable_
enable_ipset = True
Here's my netplan config:
network:
version: 2
ethernets:
eno1:
mtu: 1500
eno2:
mtu: 1500
bonds:
bond0:
interfaces:
- eno1
mtu: 1500
parameters:
lacp-rate: fast
mode: active-backup
vlans:
bond0.10:
id: 10
link: bond0
bond0.20:
id: 20
link: bond0
bond0.30:
id: 30
link: bond0
bond0.40:
id: 40
link: bond0
bridges:
br-mgmt:
addresses:
- "172.29.236.121/22"
interfaces:
- bond0.10
mtu: 1500
nameservers:
addresses:
- DNS1
- DNS2
search:
- test.net
br-storage:
addresses:
- "172.29.244.121/22"
interfaces:
- bond0.20
mtu: 1500
openvswitch: {}
br-vxlan:
addresses:
- "172.29.240.121/22"
interfaces:
- bond0.30
mtu: 1500
br-ext:
addresses:
- "10.xxx.yyy/24"
interfaces:
- bond0
routes:
- to: default
via: 10.xxx.yyy
nameservers:
addresses:
- DNS1
- DNS2
br-vlan:
interfaces: [bond0.40]
mtu: 1500
Dmitriy Rabotyagov (noonedeadpunk) wrote : | #13 |
OK, that doesn't look right to me. I'm not sure whether it's related to the errors you see or not, but it's worth fixing the network config first anyway.
Can you also provide the output of "ovs-vsctl show", just in case? I will try to come up with a proposal tomorrow morning.
birbilakos (birbilis) wrote : | #14 |
I appreciate all the help, Dmitriy :)
root@sjc-
197d0057-
Bridge br-storage
fail_mode: standalone
Port "662d3225_eth2"
Port "6134f004_eth2"
Port br-storage
Port bb08d3b9_eth2
Port bond0.20
Bridge br-provider
fail_mode: secure
Port br-vlan
Port br-provider
Bridge br-public
fail_mode: secure
Port br-public
ovs_version: "2.17.8"
Dmitriy Rabotyagov (noonedeadpunk) wrote : | #15 |
1. It's a weird thing that you don't have `container_
But at the same time, netplan contains a definition making br-storage an OVS bridge. So I'd suggest either editing netplan to make storage a regular bridge, or adding `container_
`provider_networks` for br-storage
2. br-ext cannot be the bond0 device if you want to have br-vlan. The idea of br-vlan is that Neutron will try to spawn VLANs on the interface in question, which is impossible to do from bond0.40.
So basically, br-vlan should have bond0, while br-ext should have something like bond0.40.
3. You really should not use br-ext as the default gateway, mainly because the default interface should hold different networks, while br-ext is designed to handle customer public networks (passed to the VMs). If you're limited in the number of VLANs, it's better to combine something with br-mgmt (for example, drop br-storage).
4. Then in openstack_
birbilakos (birbilis) wrote : | #16 |
Thank you, Dmitriy. I'm trying to adapt based on your advice but still need some help to understand how things would work. I only have one bond available, and I removed br-ext from the equation; instead I routed internet traffic through the VLAN mgmt bridge. I'm unsure, though, what you use as haproxy_
Here's the relevant updated netplan config. Please let me know what you think.
network:
version: 2
ethernets:
eno1:
mtu: 1500
eno2:
mtu: 1500
bonds:
bond0:
interfaces:
- eno1
mtu: 1500
parameters:
lacp-rate: fast
mode: active-backup
vlans:
bond0.10:
id: 10
link: bond0
bond0.20:
id: 20
link: bond0
bond0.30:
id: 30
link: bond0
bridges:
br-mgmt:
addresses:
- "172.29.236.{{ target_
interfaces:
- bond0.10
mtu: 1500
routes:
- to: default
via: 172.29.236.120
nameservers:
addresses:
- DNS1
- DNS
br-storage:
addresses:
- "172.29.244.{{ target_
interfaces:
- bond0.20
mtu: 1500
br-vxlan:
addresses:
- "172.29.240.{{ target_
interfaces:
- bond0.30
mtu: 1500
br-vlan:
interfaces:
- bond0
mtu: 1500
Relevant openstack_
cidr_networks: &cidr_networks
management: 172.29.236.0/22
tunnel: 172.29.240.0/22
storage: 172.29.244.0/22
used_ips:
- "172.29.
- "172.29.
- "172.29.
global_overrides:
cidr_networks: *cidr_networks
internal_
#
# The below domain name must resolve to an IP address
# in the CIDR specified in haproxy_
# If using different protocols (https/http) for the public/internal
# endpoints the two addresses must be different.
#
external_
management_
provider_
- network:
ip_from_q: "management"
type: "raw"
- all_containers
- hosts
- network:
ip_from_q: "storage"
type: "raw"
- glance_api
- cinder_api
- cinder_volume
- nova_compute
- network:
type: "vlan"
range: "40:400"
net_name: "vlan"
- neutron_
- network:
type: "flat"
net_name: "f...
Dmitriy Rabotyagov (noonedeadpunk) wrote : | #17 |
OK, let me provide you with our typical configuration then; maybe it will explain a bit of what I mean.
Essentially, we do not have a separate "flat" network: we provide the public network through a VLAN, so we don't have any FLAT network at all.
However, you can add a FLAT one to your configuration if you want to; it still has to be on a VLAN though:
- network:
type: "flat"
net_name: "flat"
- neutron_
So our typical configuration would look like this:
openstack_
management_
tunnel_bridge: br-vxlan
provider_
- network:
- all_containers
- hosts
type: raw
ip_from_q: container
- network:
- glance_api
- cinder_volume
- nova_compute
type: raw
ip_from_q: storage
- network:
- neutron_
ip_from_q: tunnel
type: vxlan
range: 65537:69999
net_name: vxlan
- network:
- neutron_
type: vlan
range: 40:400
net_name: vlan
And here is what the netplan config looks like:
network:
bonds:
bond0:
- eno1
- eno2
mtu: 9000
bridges:
br-mgmt:
- 172.29.236.X/22
- bond0.10
mtu: 1500
br-storage:
- 172.29.244.X/22
- bond0.20
mtu: 9000
br-vxlan:
- 172.29.240.X/22
- bond0.30
mtu: 9000
ethernets:
eno1:
match:
mtu: 9000
eno2:
match:
...
birbilakos (birbilis) wrote : | #18 |
I don't see br-vlan defined in the netplan config, while there is one defined in openstack_
Also, what's the difference between these two options in the mgmt network:
birbilakos (birbilis) wrote : | #19 |
I should also clarify that switch limitations only allow me to have public access via untagged traffic, i.e. I cannot use a VLAN to reach outside of my OpenStack perimeter. As such, it would need to go either through the untagged interface/bond or via a router, for which I'm using my deployment node.
Hence the question about what to use for haproxy_
Dmitriy Rabotyagov (noonedeadpunk) wrote : | #20 |
IIRC, br-vlan is an OVS bridge that's created and handled by Neutron.
But if you can't use a VLAN tag to get outside of the perimeter and have only one interface, then I think you can't have VLAN networks in Neutron.
Then I'm really not sure how to handle outgoing traffic from the VMs in a good way either.
I mean, you should probably have only VXLAN even then. Then the VMs will have only internal networks and will be able to reach out only through the net nodes (through Neutron routers and floating IPs).
And then you likely do in fact need br-ext as an OVS bridge with an IP/default route on it, used as a flat network that is not shared (so usable only for floating IPs).
Another potential way around this would be to create a veth pair, with one end added to the bridge carrying the public network and the other end added to a br-ext under Neutron.
So there are potential ways through, but you kind of need to understand what you are doing.
birbilakos (birbilis) wrote : | #21 |
Yes, the environment is quite limited for now. Given the constraints, maybe I can use VLANs only for internal traffic (basically the mgmt and storage networks) and not for getting outside the perimeter. I understand that VXLAN would still allow me to give floating IPs to VMs (?), which is good enough for me.
I'm still unsure, though, about the config for this: "And then you likely do in fact need br-ext as an OVS bridge with an IP/default route on it, used as a flat network that is not shared (so usable only for floating IPs)."
Dmitriy Rabotyagov (noonedeadpunk) wrote : | #22 |
For floating IPs to work, you still need some "public" network in Neutron. The difference here is that you don't need to have such a public network on the computes, only on the net nodes (where neutron_l3_agent runs).
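As a rough sketch, creating such a network with the openstack CLI could look like the commands below; the network/subnet/router names, the CIDR and the allocation pool are placeholders, and the physnet name "flat" has to match the bridge_mappings on the net nodes:
# openstack network create public --external --provider-network-type flat --provider-physical-network flat
# openstack subnet create public-subnet --network public --subnet-range 203.0.113.0/24 --gateway 203.0.113.1 --allocation-pool start=203.0.113.100,end=203.0.113.200 --no-dhcp
# openstack router create router1
# openstack router set router1 --external-gateway public   # floating IPs are then allocated from "public"
# openstack floating ip create public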
birbilakos (birbilis) wrote (last edit ): | #23 |
Hi Dmitriy,
Could you advise whether the configuration below makes sense:
- removed any VLAN-type networks; kept vxlan and flat in the hope that routing would be achievable via the network nodes (?)
- br-ext is part of the br-public network and used for the haproxy_
openstack_
management_
tunnel_bridge: br-vxlan
provider_
- network:
ip_from_q: "management"
type: "raw"
- all_containers
- hosts
- network:
ip_from_q: "storage"
type: "raw"
- glance_api
- cinder_api
- cinder_volume
- nova_compute
- network:
type: "flat"
net_name: "flat"
- neutron_
- network:
ip_from_q: "tunnel"
type: "vxlan"
range: "1:1000"
net_name: "vxlan"
- neutron_
netplan config (same for both controller/network as well as compute nodes):
network:
version: 2
ethernets:
eno1:
mtu: 1500
eno2:
mtu: 1500
bonds:
bond0:
interfaces:
- eno1
mtu: 1500
parameters:
lacp-rate: fast
mode: active-backup
vlans:
bond0.10:
id: 10
link: bond0
bond0.20:
id: 20
link: bond0
bond0.30:
id: 30
link: bond0
bridges:
br-mgmt:
addresses:
- "172.29.236.{{ target_
interfaces:
- bond0.10
mtu: 1500
br-storage:
addresses:
- "172.29.244.{{ target_
interfaces:
- bond0.20
mtu: 1500
br-vxlan:
addresses:
- "172.29.240.{{ target_
interfaces:
- bond0.30
mtu: 1500
br-ext:
addresses:
- "10.xxx.yyy.{{ target_
interfaces:
- bond0
mtu: 1500
routes:
- to: default
via: 10.xxx.yyy
nameservers:
addresses:
- DNS1
- DNS2
birbilakos (birbilis) wrote : | #24 |
I went ahead and reconfigured things using a veth pair (vethb1, vethb2), created via systemd-networkd configuration:
[NetDev]
Name=vethb1
Kind=veth
[Peer]
Name=vethb2
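For reference, the ad-hoc equivalent with iproute2 (useful for testing before persisting it via systemd-networkd) would be something like:
# ip link add vethb1 type veth peer name vethb2   # create the pair
# ip link set vethb1 up && ip link set vethb2 up
# ip link show vethb1                             # the vethb1@vethb2 name confirms the pairing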
netplan:
network:
version: 2
ethernets:
eno1:
mtu: 1500
eno2:
mtu: 1500
vethb1: {}
vethb2: {}
bonds:
bond0:
interfaces:
- eno1
mtu: 1500
parameters:
lacp-rate: fast
mode: active-backup
vlans:
bond0.10:
id: 10
link: bond0
bond0.20:
id: 20
link: bond0
bond0.30:
id: 30
link: bond0
bridges:
br-mgmt:
addresses:
- "172.29.236.121/22"
interfaces:
- bond0.10
mtu: 1500
br-storage:
addresses:
- "172.29.244.121/22"
interfaces:
- bond0.20
mtu: 1500
br-vxlan:
addresses:
- "172.29.240.121/22"
interfaces:
- bond0.30
mtu: 1500
br-ext:
addresses:
- "10.xxx.yyy.zzz/24"
interfaces:
- bond0
mtu: 1500
routes:
- to: default
via: 10.xxx.yyy.1
nameservers:
addresses:
- DNS1
- DNS2
search:
- test.net
openstack_
management_bridge: br-mgmt
tunnel_bridge: br-vxlan
provider_
- network:
ip_from_q: "management"
type: "raw"
- all_containers
- hosts
- network:
ip_from_q: "storage"
type: "raw"
- glance_api
- cinder_api
- cinder_volume
- nova_compute
- network:
type: "flat"
net_name: "flat"
- neutron_
- network:
ip_from_q: "tunnel"
type: "vxlan"
range: "1:1000"
net_name: "vxlan"
- neutron_
user_variables:
---
debug: false
# global_overrides:
# enable_logging: "yes"
install_method: source
haproxy_
haproxy_
haproxy_
haproxy_
neutron_
neutron_
neutron_
- router
- metering
And yet, I still get the same 'permission denied' errors on the controller/network nodes. As such, I don't see how the network configuration could play a role here...
For completeness, here's the openstack-ansible code base that I use:
commit 8bcd9198ff00363
Dmitriy Rabotyagov (noonedeadpunk) wrote : | #25 |
So what's the resulting configuration of
* /etc/neutron/
* /etc/neutron/
birbilakos (birbilis) wrote : | #26 |
root@sjc-
[ml2]
type_drivers = flat,vlan,vxlan
tenant_
mechanism_drivers = openvswitch
extension_drivers = port_security
# ML2 flat networks
[ml2_type_flat]
flat_networks = flat
# ML2 VLAN networks
[ml2_type_vlan]
network_vlan_ranges =
# ML2 VXLAN networks
[ml2_type_vxlan]
vxlan_group = 239.1.1.1
vni_ranges = 1:1000
[ml2_type_geneve]
vni_ranges =
max_header_size = 38
# Security groups
[securitygroup]
enable_
enable_ipset = True
root@sjc-
[ovs]
local_ip = 172.29.240.121
bridge_mappings = flat:br-public
[agent]
l2_population = False
tunnel_types = vxlan
enable_
extensions =
# Security groups
[securitygroup]
firewall_driver = iptables_hybrid
enable_
enable_ipset = True
birbilakos (birbilis) wrote : | #27 |
Btw, I don't see why these services would attempt to run on the host, as opposed to in a container (?)
neutron-dhcp-agent
neutron-l3-agent
neutron-
neutron-
Dmitriy Rabotyagov (noonedeadpunk) wrote : | #28 |
Yes, these services run on bare-metal hosts by default, not in containers.
birbilakos (birbilis) wrote : | #29 |
I have not found anything related to them in the AppArmor profiles, which seem to be installed during the OpenStack deployment. These processes just refuse to start with 'permission denied' for user neutron, so I doubt an interface misconfiguration is at play here.
birbilakos (birbilis) wrote : | #30 |
Well... it turns out that for some reason the /openstack folder doesn't have the right permissions to allow anyone besides root to access it (the 'x' bit is missing). Once I did chmod +x /openstack, the processes started as expected!
As such, I believe this is a legitimate bug.
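For anyone hitting the same symptom, a quick way to check and fix the directory (assuming the stock layout where /openstack sits on the root filesystem and is owned by root) is:
# stat -c '%A %a %U:%G %n' /openstack   # expected: drwxr-xr-x 755 root:root /openstack
# chmod 0755 /openstack                 # restore the missing search (x) bits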
I'm still unable though to access the haproxy_
Dmitriy Rabotyagov (noonedeadpunk) wrote : | #31 |
I'm not able to reproduce that behavior then...
In all my test deployments from 28.0.0 I see proper permissions on /openstack:
# stat /openstack/
File: /openstack/
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fc01h/64513d Inode: 3141121 Links: 4
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2023-11-17 13:55:06.942741664 +0000
Modify: 2023-11-13 12:39:44.415935380 +0000
Change: 2023-11-13 12:39:44.415935380 +0000
Birth: 2023-11-13 12:22:25.355064045 +0000
#
And I am not able to reproduce the behavior you're talking about.
Don't you accidentally have /openstack as a separate mount point, or pre-created in some way?
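A quick way to check that, assuming util-linux is installed (it is by default on Ubuntu 22.04):
# findmnt /openstack            # prints a line only if /openstack itself is a mount point
# findmnt --target /openstack   # shows which filesystem actually holds /openstack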
Dmitriy Rabotyagov (noonedeadpunk) wrote (last edit ): | #32 |
Can you kindly provide the output of `ls -l /openstack`?
I'm trying to narrow down the component that might mess up the permissions, but I have failed to find one so far.
OpenStack Infra (hudson-openstack) wrote : Fix proposed to openstack-ansible-lxc_hosts (master) | #33 |
Fix proposed to branch: master
Review: https:/
Changed in openstack-ansible: | |
status: | New → In Progress |
Dmitriy Rabotyagov (noonedeadpunk) wrote : | #34 |
OK, I finally boiled down what messes up the permissions: it is a regression caused by this patch: https:/
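If the patch indeed flipped the base directories from 0755 to 0644 (as the fix below describes), the effect is easy to reproduce in isolation; the throwaway path here is just an example:
# mkdir -p /tmp/permdemo/sub && chmod 0644 /tmp/permdemo
# sudo -u nobody ls /tmp/permdemo/sub   # Permission denied: the search (x) bit is missing on the parent
# chmod 0755 /tmp/permdemo
# sudo -u nobody ls /tmp/permdemo/sub   # now succeeds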
birbilakos (birbilis) wrote : | #35 |
Hi Dmitriy,
I'm happy you have RCAed the issue :)
I now have a working OpenStack environment. Thank you so much for all your help and commitment to this great project!
Changed in openstack-ansible: | |
importance: | Undecided → Critical |
OpenStack Infra (hudson-openstack) wrote : Fix merged to openstack-ansible-lxc_hosts (master) | #36 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: master
commit bd011b0eeef76c4
Author: Dmitriy Rabotyagov <email address hidden>
Date: Thu Jan 4 15:31:46 2024 +0100
Fix permissions for base directories
With fixing linters [1] I have accidentally set incorrect mode for base directories
to 0644 while it should be 0755.
[1] https:/
Closes-Bug: #2047593
Change-Id: Ied402f4f22ac33
Changed in openstack-ansible: | |
status: | In Progress → Fix Released |
OpenStack Infra (hudson-openstack) wrote : Fix proposed to openstack-ansible-lxc_hosts (stable/2023.2) | #37 |
Fix proposed to branch: stable/2023.2
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix merged to openstack-ansible-lxc_hosts (stable/2023.2) | #38 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/2023.2
commit b0a0a7ce82b6ff9
Author: Dmitriy Rabotyagov <email address hidden>
Date: Thu Jan 4 15:31:46 2024 +0100
Fix permissions for base directories
With fixing linters [1] I have accidentally set incorrect mode for base directories
to 0644 while it should be 0755.
[1] https:/
Closes-Bug: #2047593
Change-Id: Ied402f4f22ac33
(cherry picked from commit bd011b0eeef76c4
Can you kindly provide output of "ls -l /etc/neutron/ rootwrap. d"? d/neutron_ sudoers exists?
Also can you check that file /etc/sudoers.