Ceph cluster mds failed during cephfs usage
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ubuntu on IBM z Systems | Invalid | Undecided | Unassigned |
ceph (Ubuntu) | Invalid | Low | Skipper Bug Screeners |
Bug Description
Ceph cluster mds failed during cephfs usage
---uname output---
Linux testU 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:31:26 UTC 2016 s390x s390x s390x GNU/Linux
---Additional Hardware Info---
System Z s390x LPAR
Machine Type = Ubuntu VM on s390x LPAR
---Debugger---
A debugger is not configured
---Steps to Reproduce---
Four Ubuntu VMs on an s390x LPAR:
VM 1 - ceph monitor, ceph mds
VM 2 - ceph monitor, ceph osd
VM 3 - ceph monitor, ceph osd
VM 4 - client for using cephfs
I installed the Ceph cluster on the first 3 VMs and used the 4th VM as a CephFS client. Mount the CephFS share and try to touch a file in the mount point:
root@testU:~# ceph osd pool ls
rbd
libvirt-pool
root@testU:~# ceph osd pool create cephfs1_data 32
pool 'cephfs1_data' created
root@testU:~# ceph osd pool create cephfs1_metadata 32
pool 'cephfs1_metadata' created
root@testU:~# ceph osd pool ls
rbd
libvirt-pool
cephfs1_data
cephfs1_metadata
root@testU:~# ceph fs new cephfs1 cephfs1_metadata cephfs1_data
new fs with metadata pool 5 and data pool 4
root@testU:~# ceph fs ls
name: cephfs1, metadata pool: cephfs1_metadata, data pools: [cephfs1_data ]
root@testU:~# ceph mds stat
e37: 1/1/1 up {2:0=mon1=
root@testU:~# ceph -s
cluster 9f054e62-
health HEALTH_OK
monmap e1: 3 mons at {mon1=192.
fsmap e37: 1/1/1 up {2:0=mon1=
osdmap e62: 2 osds: 2 up, 2 in
flags sortbitwise
pgmap v2011: 256 pgs, 4 pools, 4109 MB data, 1318 objects
12371 MB used, 18326 MB / 30698 MB avail
root@testU:~# ceph auth get client.admin | grep key
exported keyring for client.admin
key = AQCepkZY5wuMOxA
root@testU:~# mount -t ceph 192.168.
root@testU:~#
root@testU:~# mount |grep ceph
192.168.
root@testU:~# ls -l /mnt/cephfs/
total 0
root@testU:~# touch /mnt/cephfs/
[ 759.865289] ceph: mds parse_reply err -5
[ 759.865293] ceph: mdsc_handle_reply got corrupt reply mds0(tid:2)
root@testU:~# ls -l /mnt/cephfs/
[ 764.600952] ceph: mds parse_reply err -5
[ 764.600955] ceph: mdsc_handle_reply got corrupt reply mds0(tid:5)
[ 764.601343] ceph: mds parse_reply err -5
[ 764.601345] ceph: mdsc_handle_reply got corrupt reply mds0(tid:6)
ls: reading directory '/mnt/cephfs/': Input/output error
total 0
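The `err -5` in the kernel log above is a negative errno value: -EIO, the same "Input/output error" that `ls` reports once the MDS reply fails to parse. A quick illustrative check (Python, not part of the original report):

```python
import errno
import os

# The kernel's ceph client returns negative errno values;
# parse_reply err -5 therefore corresponds to -EIO.
err = 5
print(errno.errorcode[err])  # EIO
print(os.strerror(err))      # Input/output error
```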
Userspace tool common name: cephfs ceph
The userspace tool has the following bit modes: 64-bit
Userspace rpm: -
Userspace tool obtained from project website: na
-Attach ltrace and strace of userspace application.
Changed in ubuntu-z-systems:
assignee: nobody → Canonical Server Team (canonical-server)
tags: added: s390x uosci
Changed in ceph (Ubuntu):
status: Incomplete → Confirmed
tags: added: openstack-ibm
Changed in ubuntu-z-systems:
assignee: Canonical Server Team (canonical-server) → nobody
assignee: nobody → Ceph OpenStack Team (ceph-openstack-team)
Changed in ubuntu-z-systems:
assignee: Ceph OpenStack Team (ceph-openstack-team) → nobody
Changed in ubuntu-z-systems:
status: New → Confirmed
Changed in ceph (Ubuntu):
status: Incomplete → Invalid
Changed in ubuntu-z-systems:
status: Incomplete → Invalid