The 100-node scaling blueprint for 6.0 implies there could be performance issues on the nodes, so we must enhance our diagnostic snapshot tool as soon as possible.
At a minimum, it should collect:
1) All /var/log/atop/atop* binary files from all nodes (currently collected only from the master node)
2) Output of 'pcs status' from the controllers
3) Output of 'cibadmin -Q' from the controllers (note: sensitive information, such as passwords in resource parameters, will be included as well)
4) Output of 'rabbitmqctl report' from the controllers
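A minimal shell sketch of the collection step described above, run locally for illustration only. The fan-out over all nodes/controllers is omitted, and the output directory layout and file naming are assumptions, not the snapshot tool's actual behavior:

```shell
#!/bin/sh
# Sketch: gather atop logs plus controller command outputs into one dir.
# Everything here runs on the local host; a real implementation would
# iterate over all nodes (e.g. via ssh) and sanitize sensitive fields.
set -u
OUT=$(mktemp -d)

# 1) atop binary logs (on a real deployment, gathered from every node)
cp /var/log/atop/atop* "$OUT"/ 2>/dev/null || echo "no atop logs on this host"

# 2-4) controller command outputs; missing tools are tolerated so the
# sketch degrades gracefully when not run on an actual controller
for cmd in 'pcs status' 'cibadmin -Q' 'rabbitmqctl report'; do
    fname=$(printf '%s' "$cmd" | tr ' -' '__').txt
    $cmd >"$OUT/$fname" 2>&1 || echo "skipped (not a controller?): $cmd"
done

echo "snapshot dir: $OUT"
```

Capturing stderr alongside stdout (`2>&1`) matters here: tools like cibadmin report cluster errors on stderr, and those messages are exactly what a diagnostic snapshot should preserve.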