I verified the test case using the package available in bionic-proposed and I confirm it is working as expected. I set up a three-node cluster on AWS to test this.
Note: When installing fence-agents, also install the Suggested dependencies; otherwise the 'fence_aws' command will not work.
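For reference, a sketch of the install step the note describes (assumption: the relevant Suggests for fence_aws is python3-boto3, which the agent uses to talk to the EC2 API):

# Pull in fence-agents together with its Suggested packages in one go:
sudo apt install --install-suggests fence-agents
# Or name the AWS-specific dependency explicitly (assumed Suggests):
sudo apt install fence-agents python3-boto3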
ubuntu@node1:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.5 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

ubuntu@node1:~$ dpkg -l | grep fence-agents
ii  fence-agents  4.0.25-2ubuntu1.2  amd64  Fence Agents for Red Hat Cluster

ubuntu@node1:~$ sudo crm configure show
node 1: node1
node 2: node2
node 3: node3
primitive fence-node1 stonith:fence_aws \
        params access_key=xxxx secret_key="xxxx" region=us-east-2 plug=i-093f875f9f2ffa1db pcmk_host_map="node1:i-093f875f9f2ffa1db;node2:i-08649fdfb0a74bc9f;node3:i-0394f790feeba28b0"
primitive fence-node2 stonith:fence_aws \
        params access_key=xxxx secret_key="xxxx" region=us-east-2 plug=i-08649fdfb0a74bc9f pcmk_host_map="node1:i-093f875f9f2ffa1db;node2:i-08649fdfb0a74bc9f;node3:i-0394f790feeba28b0"
primitive fence-node3 stonith:fence_aws \
        params access_key=xxxx secret_key="xxxx" region=us-east-2 plug=i-0394f790feeba28b0 pcmk_host_map="node1:i-093f875f9f2ffa1db;node2:i-08649fdfb0a74bc9f;node3:i-0394f790feeba28b0"
location l-fence-node1 fence-node1 -inf: node1
location l-fence-node2 fence-node2 -inf: node2
location l-fence-node3 fence-node3 -inf: node3
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.18-2b07d5c5a9 \
        cluster-infrastructure=corosync \
        cluster-name=clubionic \
        stonith-enabled=on \
        stonith-action=reboot \
        no-quorum-policy=stop
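To illustrate the configuration: pcmk_host_map is what lets Pacemaker translate a cluster node name into the EC2 instance ID the agent receives as its plug. A minimal, hypothetical sketch of that lookup (instance IDs taken from the configuration above; `plug_for` is not a real tool, just an illustration):

map="node1:i-093f875f9f2ffa1db;node2:i-08649fdfb0a74bc9f;node3:i-0394f790feeba28b0"

# Print the instance ID mapped to a given node name, by splitting the
# semicolon-separated map into node:instance pairs and matching the name.
plug_for() {
  echo "$map" | tr ';' '\n' | awk -F: -v n="$1" '$1 == n { print $2 }'
}

plug_for node2   # -> i-08649fdfb0a74bc9f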
If I go to node2 and run the following command to reject connections on the network interface in use, the node is properly fenced (in this case, rebooted):
ubuntu@node2:~$ sudo iptables -A INPUT -i eth0 -j REJECT
After a few minutes, node2 comes back online.
I also tested it in standalone mode, without pacemaker, by running the following command:
ubuntu@node3:~$ sudo fence_aws --plug=<instance-id> --action=reboot --region=us-east-2 --access-key="xxx" --secret-key="xxx" --verbose
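Besides reboot, fence agents share a common set of actions that are handy for verifying credentials and connectivity before any fencing is attempted. Assuming fence_aws's standard action set, the following (placeholder kept as above) could be run the same way:

# List the instances the credentials can see (no power change):
# fence_aws --region=us-east-2 --access-key="xxx" --secret-key="xxx" --action=list
# Query the power state of a single instance:
# fence_aws --region=us-east-2 --access-key="xxx" --secret-key="xxx" \
#           --plug=<instance-id> --action=status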