Hi
I had/have similar problems.
I have two NICs:
eth0: direct connection between the two nodes, no switch (10.0.0.x); hostnames: node1-direct, node2-direct
eth1: through a switch, 192.168.x.x; hostnames: node1, node2
Using the node[12]-direct hostnames in a simple cluster.conf, I ran into the same problem as ITec.
Replacing the hostnames (-> node[12]) makes it work. Adding an altname
to the corresponding clusternode also works:
<altname name="node1-direct" port="5406" mcast="239.192.122.46" />
Running corosync-cfgtool -s shows the two rings:
Printing ring status.
Local node ID 2
RING ID 0
id = 192.168.25.52
status = ring 0 active with no faults
RING ID 1
id = 10.0.0.52
status = ring 1 active with no faults
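For context, here is a minimal sketch of how the altname lines sit in cluster.conf in my setup. The cluster name, node IDs, multicast address, and port are from my configuration and will differ in yours; treat this as an illustration of the placement, not a drop-in file:

```xml
<cluster name="mycluster" config_version="1">
  <clusternodes>
    <!-- primary name (eth1, switched network) plus altname for the
         direct link (eth0), giving corosync a second ring -->
    <clusternode name="node1" nodeid="1">
      <altname name="node1-direct" port="5406" mcast="239.192.122.46"/>
    </clusternode>
    <clusternode name="node2" nodeid="2">
      <altname name="node2-direct" port="5406" mcast="239.192.122.46"/>
    </clusternode>
  </clusternodes>
</cluster>
```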
Maybe this helps..
KLor