Server crashes after a couple of seconds

Bug #551663 reported by Anders Wallenquist
This bug affects 4 people
Affects: glusterfs (Ubuntu)
Status: Confirmed
Importance: Undecided
Assigned to: Unassigned

Bug Description

The glusterfsd crashes after a couple of seconds. The config files are built with glusterfs-volgen.

[2010-03-30 14:59:19] D [glusterfsd.c:1370:main] glusterfs: running in pid 32083
[2010-03-30 14:59:19] D [transport.c:145:transport_load] transport: attempt to load file /usr/lib/glusterfs/3.0.2/transport/socket.so
[2010-03-30 14:59:19] D [xlator.c:284:_volume_option_value_validate] server-tcp: no range check required for 'option transport.socket.listen-port 6996'
[2010-03-30 14:59:19] D [io-threads.c:2841:init] brick1: io-threads: Autoscaling: off, min_threads: 8, max_threads: 8
[2010-03-30 14:59:19] N [glusterfsd.c:1396:main] glusterfs: Successfully started
pending frames:
frame : type(2) op(SETVOLUME)

patchset: v3.0.2
signal received: 11
time of crash: 2010-03-30 14:59:25
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.2
/lib/libc.so.6(+0x33af0)[0x7fa970195af0]
/usr/lib/libglusterfs.so.0(dict_unserialize+0xff)[0x7fa970913dff]
/usr/lib/glusterfs/3.0.2/xlator/protocol/server.so(mop_setvolume+0x8f)[0x7fa96f11656f]
/usr/lib/glusterfs/3.0.2/xlator/protocol/server.so(protocol_server_pollin+0x7a)[0x7fa96f10d76a]
/usr/lib/glusterfs/3.0.2/xlator/protocol/server.so(notify+0x83)[0x7fa96f10d7f3]
/usr/lib/libglusterfs.so.0(xlator_notify+0x43)[0x7fa970918dc3]
/usr/lib/glusterfs/3.0.2/transport/socket.so(socket_event_handler+0x7a)[0x7fa96e6fe09a]
/usr/lib/libglusterfs.so.0(+0x2e31d)[0x7fa97093331d]
glusterfsd(main+0x852)[0x4044f2]
/lib/libc.so.6(__libc_start_main+0xfd)[0x7fa970180c4d]
glusterfsd[0x402ab9]
---------
Segmentation fault (core dumped)
aw@homer:/data/export$
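The dump above says the process died on signal 11 (SIGSEGV) inside `dict_unserialize`, called from `mop_setvolume` while handling the pending `SETVOLUME` frame. A small triage sketch can pull those two facts out of a dump; the inline `log` excerpt below is copied from the crash output above, and in practice you would grep the real glusterfsd log file instead:

```shell
# Extract the signal number and the first symbolized (named) frame from a
# glusterfs crash dump. Inline excerpt stands in for the real log file.
log='signal received: 11
/lib/libc.so.6(+0x33af0)[0x7fa970195af0]
/usr/lib/libglusterfs.so.0(dict_unserialize+0xff)[0x7fa970913dff]'
echo "$log" | grep '^signal received'                        # which signal killed the process
echo "$log" | grep -o '([A-Za-z_]\{1,\}+0x[0-9a-f]*)' | head -n1   # first frame with a symbol name
```

The second grep skips unnamed frames like `(+0x33af0)` and lands on `(dict_unserialize+0xff)`, which is consistent with the upstream reports of a crash while unserializing the client's SETVOLUME dictionary.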

## file auto generated by /usr/bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /usr/bin/glusterfs-volgen --name store1 homer.vertel.se:/data/export/store1 agata.vertel.se:/srv/export/store1

volume posix1
    type storage/posix
    option directory /data/export/store1
end-volume

volume locks1
    type features/locks
    subvolumes posix1
end-volume

volume brick1
    type performance/io-threads
    option thread-count 8
    subvolumes locks1
end-volume

volume server-tcp
    type protocol/server
    option transport-type tcp
    option auth.addr.brick1.allow *
    option transport.socket.listen-port 6996
    option transport.socket.nodelay on
    subvolumes brick1
end-volume

## file auto generated by /usr/bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /usr/bin/glusterfs-volgen --name store1 homer.vertel.se:/data/export/store1 agata.vertel.se:/srv/export/store1

# TRANSPORT-TYPE tcp
volume hymer.vertel.se-1
    type protocol/client
    option transport-type tcp
    option remote-host 8.8.8.6
    option transport.socket.nodelay on
    option transport.remote-port 6996
    option remote-subvolume brick1
end-volume

volume cooler.vertel.se-1
    type protocol/client
    option transport-type tcp
    option remote-host cooler.vertel.se
    option transport.socket.nodelay on
    option transport.remote-port 6996
    option remote-subvolume brick1
end-volume

volume distribute
    type cluster/distribute
    subvolumes hymer.vertel.se-1 cooler.vertel.se-1
end-volume

volume writebehind
    type performance/write-behind
    option cache-size 4MB
    subvolumes distribute
end-volume

volume readahead
    type performance/read-ahead
    option page-count 4
    subvolumes writebehind
end-volume

volume iocache
    type performance/io-cache
    option cache-size `grep 'MemTotal' /proc/meminfo | awk '{print $2 * 0.2 / 1024}' | cut -f1 -d.`MB
    option cache-timeout 1
    subvolumes readahead
end-volume
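The backtick expression volgen embeds above sizes io-cache at roughly 20% of system RAM, truncated to whole MB. A standalone sketch of the same arithmetic, where the `mem_kb` value is a stand-in (on a real host it would come from `grep MemTotal /proc/meminfo`):

```shell
# Reproduce volgen's cache-size arithmetic: 20% of MemTotal (kB) as whole MB.
mem_kb=4194304                      # stand-in for: grep MemTotal /proc/meminfo (4 GiB host)
cache_mb=$(( mem_kb / 5 / 1024 ))   # 20% of RAM, kB -> MB, integer-truncated
echo "${cache_mb}MB"                # -> 819MB
```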

volume quickread
    type performance/quick-read
    option cache-timeout 1
    option max-file-size 64kB
    subvolumes iocache
end-volume

volume statprefetch
    type performance/stat-prefetch
    subvolumes quickread
end-volume
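The mount-side graph above stacks client → distribute → write-behind → read-ahead → io-cache → quick-read → stat-prefetch, each volume naming the one below it in `subvolumes`. A small awk sketch prints each volume alongside the subvolume it stacks on; it runs here over a hypothetical two-volume excerpt written to `/tmp/chain.vol`, but pointing it at the real mount.vol would show the full chain:

```shell
# Print "volume <- subvolume" pairs from a volfile to visualize the
# translator stack. /tmp/chain.vol and its contents are an illustrative excerpt.
cat > /tmp/chain.vol <<'EOF'
volume writebehind
    type performance/write-behind
    subvolumes distribute
end-volume

volume statprefetch
    type performance/stat-prefetch
    subvolumes quickread
end-volume
EOF
awk '/^volume/ {v=$2} /subvolumes/ {print v, "<-", $2}' /tmp/chain.vol
```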

Revision history for this message
Gionn (giovanni.toraldo) wrote :

Probably fixed in 3.0.5; the package should be updated, as the current one is almost useless.

Revision history for this message
Ivan Borzenkov (ivan1986) wrote :

Works on 3.0.4, crashes on 3.0.5.

Revision history for this message
Ivan Borzenkov (ivan1986) wrote :

Updating to 3.1.3 from Debian also fixes this.

Revision history for this message
Arthur Wiebe (artooro) wrote :

I believe this bug is linked to GlusterFS upstream bug http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1188, which has been patched. I'm having the same problem on my Ubuntu Server 11.04 installation and need it fixed before I can continue deployment.

Changed in glusterfs (Ubuntu):
status: New → Confirmed
Revision history for this message
Arthur Wiebe (artooro) wrote :

================================================================================
Version : glusterfs 3.0.5 built on Oct 16 2010 09:48:24
git: v3.0.5
Starting Time: 2011-05-16 16:21:06
Command line : /usr/sbin/glusterfs --log-level=NORMAL --volfile=/etc/glusterfs/web-tcp.vol /mnt/web
PID : 7819
System name : Linux
Nodename : lb-1
Kernel Release : 2.6.38-8-virtual
Hardware Identifier: i686

Given volfile:
+------------------------------------------------------------------------------+
  1: ## file auto generated by /usr/bin/glusterfs-volgen (mount.vol)
  2: # Cmd line:
  3: # $ /usr/bin/glusterfs-volgen --name web --raid 1 lb-0:/web lb-1:/web
  4:
  5: # RAID 1
  6: # TRANSPORT-TYPE tcp
  7: volume lb-1-1
  8: type protocol/client
  9: option transport-type tcp
 10: option remote-host lb-1
 11: option transport.socket.nodelay on
 12: option transport.remote-port 6996
 13: option remote-subvolume brick1
 14: end-volume
 15:
 16: volume lb-0-1
 17: type protocol/client
 18: option transport-type tcp
 19: option remote-host lb-0
 20: option transport.socket.nodelay on
 21: option transport.remote-port 6996
 22: option remote-subvolume brick1
 23: end-volume
 24:
 25: volume mirror-0
 26: type cluster/replicate
 27: subvolumes lb-0-1 lb-1-1
 28: end-volume
 29:
 30: volume readahead
 31: type performance/read-ahead
 32: option page-count 4
 33: subvolumes mirror-0
 34: end-volume
 35:
 36: volume iocache
 37: type performance/io-cache
 38: option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
 39: option cache-timeout 1
 40: subvolumes readahead
 41: end-volume
 42:
 43: volume quickread
 44: type performance/quick-read
 45: option cache-timeout 1
 46: option max-file-size 64kB
 47: subvolumes iocache
 48: end-volume
 49:
 50: volume writebehind
 51: type performance/write-behind
 52: option cache-size 4MB
 53: subvolumes quickread
 54: end-volume
 55:
 56: volume statprefetch
 57: type performance/stat-prefetch
 58: subvolumes writebehind
 59: end-volume
 60:

+------------------------------------------------------------------------------+
[2011-05-16 16:21:06] W [xlator.c:661:validate_xlator_volume_options] lb-0-1: option 'transport.remote-port' is deprecated, preferred is 'remote-port', continuing with correction
[2011-05-16 16:21:06] W [xlator.c:661:validate_xlator_volume_options] lb-1-1: option 'transport.remote-port' is deprecated, preferred is 'remote-port', continuing with correction
[2011-05-16 16:21:06] N [glusterfsd.c:1409:main] glusterfs: Successfully started
[2011-05-16 16:21:06] N [client-protocol.c:6288:client_setvolume_cbk] lb-1-1: Connected to 127.0.0.1:6996, attached to remote volume 'brick1'.
[2011-05-16 16:21:06] N [afr.c:2648:notify] mirror-0: Subvolume 'lb-1-1' came back up; going online.
[2011-05-16 16:21:06] N [client-protocol.c:6288:client_setvolume_cbk] lb-1-1: Connected to 127.0.0.1:6996, attached to remote volume 'brick1'.
[2011-05-16 16:21:06] N [afr.c:2648:notify] mirror-0: Subvolume 'lb-1-1' came back up; going online.
[2011-05-16 1...

