garbd does not work when specifying cluster address without the port

Bug #1365193 reported by Jaime Sicam
Affects: Percona XtraDB Cluster (moved to https://jira.percona.com/projects/PXC)
Status tracked in 5.6
 5.5: Fix Released | Importance: Undecided | Assigned to: Unassigned
 5.6: Fix Released | Importance: Undecided | Assigned to: Unassigned

Bug Description

Tested on: Percona-XtraDB-Cluster-garbd-3.x86_64 0:3.7-1.3254.rhel6

If I try running this to join cluster without specifying port 4567:
garbd -a gcomm://192.168.1.90 -g my_centos_cluster
2014-09-03 23:45:04.669 INFO: CRC-32C: using "slicing-by-8" algorithm.
2014-09-03 23:45:04.669 INFO: Read config:
 daemon: 0
 name: garb
 address: gcomm://192.168.1.90
 group: my_centos_cluster
 sst: trivial
 donor:
 options: gcs.fc_limit=9999999; gcs.fc_factor=1.0; gcs.fc_master_slave=yes
 cfg:
 log:

2014-09-03 23:45:04.671 INFO: protonet asio version 0
2014-09-03 23:45:04.671 INFO: Using CRC-32C for message checksums.
2014-09-03 23:45:04.671 INFO: backend: asio
2014-09-03 23:45:04.672 WARN: access file(gvwstate.dat) failed(No such file or directory)
2014-09-03 23:45:04.672 INFO: restore pc from disk failed
2014-09-03 23:45:04.673 INFO: GMCast version 0
terminate called after throwing an instance of 'gu::NotSet'
Aborted

If I specify the port, it works:
 garbd -a gcomm://192.168.1.90:4567 -g my_centos_cluster
2014-09-03 23:46:21.084 INFO: CRC-32C: using "slicing-by-8" algorithm.
2014-09-03 23:46:21.084 INFO: Read config:
 daemon: 0
 name: garb
 address: gcomm://192.168.1.90:4567
 group: my_centos_cluster
 sst: trivial
 donor:
 options: gcs.fc_limit=9999999; gcs.fc_factor=1.0; gcs.fc_master_slave=yes
 cfg:
 log:

2014-09-03 23:46:21.086 INFO: protonet asio version 0
2014-09-03 23:46:21.086 INFO: Using CRC-32C for message checksums.
2014-09-03 23:46:21.086 INFO: backend: asio
2014-09-03 23:46:21.087 WARN: access file(gvwstate.dat) failed(No such file or directory)
2014-09-03 23:46:21.087 INFO: restore pc from disk failed
2014-09-03 23:46:21.088 INFO: GMCast version 0
2014-09-03 23:46:21.088 INFO: (73949133, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
2014-09-03 23:46:21.088 INFO: (73949133, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
2014-09-03 23:46:21.089 INFO: EVS version 0

However, if you check the default config in /etc/sysconfig/garb, it shows that specifying the port is optional:
cat /etc/sysconfig/garb
# Copyright (C) 2012 Codership Oy
# This config file is to be sourced by garb service script.

# REMOVE THIS AFTER CONFIGURATION

# A space-separated list of node addresses (address[:port]) in the cluster
# GALERA_NODES=""

# Galera cluster name, should be the same as on the rest of the nodes.
# GALERA_GROUP=""

# Optional Galera internal options string (e.g. SSL settings)
# see http://www.codership.com/wiki/doku.php?id=galera_parameters
# GALERA_OPTIONS=""

# Log file for garbd. Optional, by default logs to syslog
# LOG_FILE=""

But if you specify the cluster addresses without ports and start garbd, e.g.:
GALERA_NODES="192.168.1.90 192.168.1.91"

You get the error:
[root@localhost ~]# service garb start
nc: port range not valid
nc: port range not valid
nc: port range not valid
None of the nodes in 192.168.1.90 192.168.1.91 is accessibl[FAILED]
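Until the default is fixed, a workaround (an assumption based on the behaviour shown above, not an official recommendation) is simply to give every node an explicit :4567 in the sysconfig file:

```shell
# /etc/sysconfig/garb -- workaround: include the port on every node address
GALERA_NODES="192.168.1.90:4567 192.168.1.91:4567"
GALERA_GROUP="my_centos_cluster"
```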

My suggestion is to either make port 4567 the default for garbd, or change the comment "# A space-separated list of node addresses (address[:port]) in the cluster" to "# A space-separated list of node addresses (address:port) in the cluster" so the port is clearly mandatory.
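The first suggestion could be sketched as a small shell helper (hypothetical, not part of garbd; the function name and the hard-coded 4567 are assumptions) that the service script could run over GALERA_NODES, appending the default Galera port to any address that lacks an explicit one:

```shell
# Append :4567 to each space-separated address that has no explicit port.
add_default_port() {
  local node out=""
  for node in $1; do
    case "$node" in
      *:*) out="$out $node" ;;        # port already given, keep as-is
      *)   out="$out $node:4567" ;;   # no port, append the default
    esac
  done
  printf '%s\n' "${out# }"
}

add_default_port "192.168.1.90 192.168.1.91:4444"
# prints: 192.168.1.90:4567 192.168.1.91:4444
```

With this normalization in place, the nc reachability check in the service script would always receive a valid port, avoiding the "port range not valid" errors shown above.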

Tags: i45093
Revision history for this message
Przemek (pmalkowski) wrote :

I can confirm this for both Percona-XtraDB-Cluster-garbd-2-2.11-1.2675.rhel6 and Percona-XtraDB-Cluster-garbd-3-3.7-1.3254.rhel6.
I agree the port should be either mandatory or 4567 set as default.

Changed in percona-xtradb-cluster:
status: New → Confirmed
Revision history for this message
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Percona now uses JIRA for bug reports, so this bug report has been migrated to: https://jira.percona.com/browse/PXC-1729
