Can't deploy Ceph with disk dedicated for journal in Fuel 6.1

Bug #1453959 reported by Claude Durocher
This bug affects 3 people
Affects: Fuel for OpenStack
Status: Fix Committed
Importance: High
Assigned to: Ryan Moe
Milestone: 6.1

Bug Description

Environment:

{"build_id": "2015-05-08_05-40-59", "build_number": "223", "release_versions": {"2014.2.2-6.1": {"VERSION": {"build_id": "2015-05-08_05-40-59", "build_number": "223", "api": "1.0", "fuel-library_sha": "53ce3081a4dfda2995232714de7d17e6edf6e97d", "nailgun_sha": "ca9c91abed5e5b0671f4c514f7efd47eb5ca501c", "feature_groups": ["experimental"], "openstack_version": "2014.2.2-6.1", "production": "docker", "python-fuelclient_sha": "af6c9c3799b9ec107bcdc6dbf035cafc034526ce", "astute_sha": "6a4dcd11c67af2917815f3678fb594c7412a4c97", "fuel-ostf_sha": "740ded337bb2a8a9b3d505026652512257375c01", "release": "6.1", "fuelmain_sha": "43b890efe560ab65dd748b8c2d7bd7d7cb0649e3"}}}, "auth_required": true, "api": "1.0", "fuel-library_sha": "53ce3081a4dfda2995232714de7d17e6edf6e97d", "nailgun_sha": "ca9c91abed5e5b0671f4c514f7efd47eb5ca501c", "feature_groups": ["experimental"], "openstack_version": "2014.2.2-6.1", "production": "docker", "python-fuelclient_sha": "af6c9c3799b9ec107bcdc6dbf035cafc034526ce", "astute_sha": "6a4dcd11c67af2917815f3678fb594c7412a4c97", "fuel-ostf_sha": "740ded337bb2a8a9b3d505026652512257375c01", "release": "6.1", "fuelmain_sha": "43b890efe560ab65dd748b8c2d7bd7d7cb0649e3"}

Steps to reproduce:

- Create an environment with Juno on Ubuntu 14.04, KVM, Neutron with VLAN segmentation, and Ceph for Cinder and Glance
- Deploy 3 controller nodes, 3 compute nodes, and 3 Ceph nodes; each Ceph node has 5 HDDs for data and 1 SSD for the journal (Ceph replication factor = 3)
- Deployment fails on all 3 Ceph nodes with the following error:

2015-05-11 19:24:39 ERR
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) change from notrun to 0 failed: ceph-deploy osd prepare node-5:/dev/sdb3: returned 1 instead of one of [0]
2015-05-11 19:24:39 ERR
 /usr/bin/puppet:4:in `<main>'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/util/command_line.rb:91:in `execute'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/util/command_line.rb:137:in `run'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/application.rb:364:in `run'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/util.rb:478:in `exit_on_fail'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/application.rb:364:in `block in run'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/application.rb:470:in `plugin_hook'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/application.rb:364:in `block (2 levels) in run'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/application/apply.rb:146:in `run_command'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/application/apply.rb:218:in `main'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/application/apply.rb:268:in `apply_catalog'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/configurer.rb:192:in `run'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/configurer.rb:124:in `apply_catalog'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/util.rb:160:in `benchmark'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/1.9.1/benchmark.rb:295:in `realtime'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/util.rb:161:in `block in benchmark'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/configurer.rb:125:in `block in apply_catalog'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:163:in `apply'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction/report.rb:108:in `as_logging_destination'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/util/log.rb:149:in `with_destination'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:164:in `block in apply'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction.rb:108:in `evaluate'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/graph/relationship_graph.rb:118:in `traverse'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction.rb:117:in `block in evaluate'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/util.rb:326:in `thinmark'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/1.9.1/benchmark.rb:295:in `realtime'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/util.rb:327:in `block in thinmark'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction.rb:117:in `block (2 levels) in evaluate'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction.rb:117:in `call'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction.rb:187:in `eval_resource'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction.rb:174:in `apply'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:20:in `evaluate'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:88:in `perform_changes'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:88:in `each'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:89:in `block in perform_changes'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:128:in `sync_if_needed'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:191:in `sync'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/type/exec.rb:142:in `sync'
2015-05-11 19:24:39 ERR
 /usr/lib/ruby/vendor_ruby/puppet/util/errors.rb:97:in `fail'
2015-05-11 19:24:39 ERR
 ceph-deploy osd prepare node-5:/dev/sdb3: returned 1 instead of one of [0]
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdb3 /dev/
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][ERROR ] RuntimeError: command returned non-zero exit status: 1
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][WARNING] ceph-disk: Error: Journal /dev/ is neither a block device nor regular file
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][INFO ] Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdb3 /dev/
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [ceph_deploy.osd][DEBUG ] Preparing host node-5 disk /dev/sdb3 journal /dev/ activate False
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [ceph_deploy.osd][DEBUG ] Deploying osd to node-5
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][DEBUG ] detect machine type
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][DEBUG ] detect platform information from remote host
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [node-5][DEBUG ] connected to host: node-5
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node-5:/dev/sdb3:/dev/
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [ceph_deploy.cli][INFO ] Invoked (1.5.20): /usr/bin/ceph-deploy osd prepare node-5:/dev/sdb3:
2015-05-11 19:24:39 NOTICE
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]/returns) [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
2015-05-11 19:24:36 INFO
 (/Stage[main]/Ceph::Osds/Ceph::Osds::Osd[/dev/sdb3:]/Exec[ceph-deploy osd prepare node-5:/dev/sdb3:]) Starting to evaluate the resource

Note:

Deployment succeeds if no separate journal disk is specified.

Revision history for this message
Mike Scherbakov (mihgen) wrote :

Thanks for the bug report! Can you please generate and attach a diagnostic snapshot as well? Some other logs might be needed here too.

Changed in fuel:
milestone: none → 6.1
assignee: nobody → MOS Linux (mos-linux)
Changed in fuel:
assignee: MOS Linux (mos-linux) → Kostiantyn Danylov (kdanylov)
Revision history for this message
Mykola Golub (mgolub) wrote :

It looks like ceph-deploy was fed an empty journal name:

  ceph-deploy osd prepare node-5:/dev/sdb3:

There should not be ':' at the end.
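
For reference, ceph-deploy's prepare argument syntax is HOST:DISK[:JOURNAL], with the journal device after the second colon:

  ceph-deploy osd prepare {host}:{data-device}:{journal-device}

so an argument ending in ':' means the journal lookup returned an empty string.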

ClaudeD, could you please show the output of this command:

  lsblk -ln

on the osd host after failed deployment?

Changed in fuel:
importance: Undecided → High
status: New → In Progress
assignee: Kostiantyn Danylov (kdanylov) → Mykola Golub (mgolub)
Changed in fuel:
assignee: Mykola Golub (mgolub) → Alyona Kiseleva (akiselyova)
Revision history for this message
Claude Durocher (claude-d) wrote: Re: [Bug 1453959] Re: Can't deploy Ceph with disk dedicated for journal in Fuel 6.1

root@node-5:~# lsblk -ln
sda 8:0 0 279.4G 0 disk
sda1 8:1 0 24M 0 part
sda2 8:2 0 200M 0 part
sda3 8:3 0 200M 0 part /boot
sda4 8:4 0 66.1G 0 part
os-root (dm-0) 252:0 0 50G 0 lvm /
os-swap (dm-1) 252:1 0 16G 0 lvm [SWAP]
sda5 8:5 0 20M 0 part
sdb 8:16 0 1.1T 0 disk
sdb1 8:17 0 24M 0 part
sdb2 8:18 0 200M 0 part
sdb3 8:19 0 1.1T 0 part
sdc 8:32 0 1.1T 0 disk
sdc1 8:33 0 24M 0 part
sdc2 8:34 0 200M 0 part
sdc3 8:35 0 1.1T 0 part
sdd 8:48 0 1.1T 0 disk
sdd1 8:49 0 24M 0 part
sdd2 8:50 0 200M 0 part
sdd3 8:51 0 1.1T 0 part
sde 8:64 0 1.1T 0 disk
sde1 8:65 0 24M 0 part
sde2 8:66 0 200M 0 part
sde3 8:67 0 1.1T 0 part
sdf 8:80 0 1.1T 0 disk
sdf1 8:81 0 24M 0 part
sdf2 8:82 0 200M 0 part
sdf3 8:83 0 1.1T 0 part
sdg 8:96 0 93.1G 0 disk
sdg1 8:97 0 24M 0 part
sdg2 8:98 0 200M 0 part
sdg3 8:99 0 10G 0 part
sdg4 8:100 0 10G 0 part
sdg5 8:101 0 10G 0 part
sdg6 8:102 0 10G 0 part
sdg7 8:103 0 10G 0 part

Claude

Revision history for this message
Claude Durocher (claude-d) wrote :

I can't generate a snapshot: I get "exit code:1 stderr:" in the web UI (and I have plenty of disk space on the Fuel server).

Are any specific logs needed?

Claude

Revision history for this message
Claude Durocher (claude-d) wrote :

This might not be directly related to the bug, but I found this warning in the logs:

2015-05-11 23:00:42 WARNING
 Unrecognised escape sequence '\ ' in file /etc/puppet/modules/ceph/manifests/osds/osd.pp at line 26

Changed in fuel:
status: In Progress → Invalid
Revision history for this message
Alyona Kiseleva (akiselyova) wrote :

This bug was reported against a custom build; why is it being tested? We cannot reproduce it on Fuel 6.1: a ceph-osd node with a separate journal disk deployed successfully.
VERSION:
  feature_groups:
    - mirantis
  production: "docker"
  release: "6.1"
  openstack_version: "2014.2-6.1"
  api: "1.0"
  build_number: "337"
  build_id: "2015-04-21_22-54-31"
  nailgun_sha: "3928c5aee6e7aabad37cf0665acc6790ef220141"
  python-fuelclient_sha: "08640730176591a3818f24e75b710f8c95846e6e"
  astute_sha: "bf1751a4fe0d912325e3b4af629126a59c1b2b51"
  fuel-library_sha: "af4eee78a4dd6e6606079f5515bac91c45b04114"
  fuel-ostf_sha: "df8db1f48f03b18126ce5ec65317a1eb83a5a95f"
  fuelmain_sha: "aad0a7ad97a5660f5d53fa830b168d36fa9694eb"

Revision history for this message
Claude Durocher (claude-d) wrote :

It's not a custom build but a nightly build from https://ci.fuel-infra.org/view/ISO/.

Claude

Revision history for this message
Mykola Golub (mgolub) wrote :

Claude,

You can try to generate snapshot from the command line on the master node:

  fuel snapshot

(note, it will take some time)

If it still fails, please provide /var/log/puppet.log from the osd node.

Also I am interested in seeing the output for this command:

udevadm info -q property -n /dev/sdg4

Revision history for this message
Mykola Golub (mgolub) wrote :

It looks like the issue is with this recent change:

https://review.openstack.org/#/c/177352/

I have added comments there describing what is actually wrong:

https://review.openstack.org/#/c/177352/1/deployment/puppet/ceph/lib/facter/ceph_osd.rb
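
For context, that fact has to map each OSD data partition to its journal partition by querying udev. A minimal sketch in Ruby of a defensive version of that lookup (the method name and the by-partuuid preference are illustrative assumptions, not the actual ceph_osd.rb code):

  # Hypothetical sketch: return a stable symlink for a journal partition,
  # or nil when udev reports no DEVLINKS for it. Scanning every line avoids
  # depending on where DEVLINKS appears in the udevadm output.
  def journal_devlink(device)
    `udevadm info -q property -n #{device}`.each_line do |line|
      key, value = line.chomp.split('=', 2)
      next unless key == 'DEVLINKS' && value
      links = value.split(' ')
      # Prefer a persistent by-partuuid name over other symlinks.
      return links.find { |l| l.include?('by-partuuid') } || links.first
    end
    nil
  end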

Revision history for this message
Claude Durocher (claude-d) wrote :

Mykola,

Thanks for looking into this. I still can't generate a dump with 'fuel snapshot' as I get this error:

File "/usr/lib64/python2.6/httplib.py", line 686, in _set_hostport raise
InvalidURL("nonnumeric port: '%s'" % host[i+1:])
httplib.InvalidURL: nonnumeric port: ''

So I'm attaching puppet.log and here's the requested info:

root@node-5:~# udevadm info -q property -n /dev/sdg4
DEVLINKS=/dev/disk/by-id/scsi-3600508b1001c4b3e5cf1aba38f1213ae-part4 /dev/disk/by-id/wwn-0x600508b1001c4b3e5cf1aba38f1213ae-part4 /dev/disk/by-partlabel/primary /dev/disk/by-parttypeuuid/45b0969e-9b03-4f30-b4c6-b4b80ceff106.37a4c27f-7dd7-4860-a229-fd1512597d02 /dev/disk/by-partuuid/37a4c27f-7dd7-4860-a229-fd1512597d02 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:0:6-part4
DEVNAME=/dev/sdg4
DEVPATH=/devices/pci0000:00/0000:00:02.2/0000:02:00.0/host2/target2:0:0/2:0:0:6/block/sdg/sdg4
DEVTYPE=partition
ID_BUS=scsi
ID_MODEL=LOGICAL_VOLUME
ID_MODEL_ENC=LOGICAL\x20VOLUME\x20\x20
ID_PART_ENTRY_DISK=8:96
ID_PART_ENTRY_NAME=primary
ID_PART_ENTRY_NUMBER=4
ID_PART_ENTRY_OFFSET=21432320
ID_PART_ENTRY_SCHEME=gpt
ID_PART_ENTRY_SIZE=20971520
ID_PART_ENTRY_TYPE=45b0969e-9b03-4f30-b4c6-b4b80ceff106
ID_PART_ENTRY_UUID=37a4c27f-7dd7-4860-a229-fd1512597d02
ID_PART_TABLE_TYPE=gpt
ID_PATH=pci-0000:02:00.0-scsi-0:0:0:6
ID_PATH_TAG=pci-0000_02_00_0-scsi-0_0_0_6
ID_REVISION=5.42
ID_SCSI=1
ID_SCSI_SERIAL=00143802965FDC0
ID_SERIAL=3600508b1001c4b3e5cf1aba38f1213ae
ID_SERIAL_SHORT=600508b1001c4b3e5cf1aba38f1213ae
ID_TYPE=disk
ID_VENDOR=HP
ID_VENDOR_ENC=HP\x20\x20\x20\x20\x20\x20
ID_WWN=0x600508b1001c4b3e
ID_WWN_VENDOR_EXTENSION=0x5cf1aba38f1213ae
ID_WWN_WITH_EXTENSION=0x600508b1001c4b3e5cf1aba38f1213ae
MAJOR=8
MINOR=100
SUBSYSTEM=block
USEC_INITIALIZED=360241
nomdmonddf=1
nomdmonisw=1

Claude

Mykola Golub (mgolub)
Changed in fuel:
status: Invalid → Confirmed
assignee: Alyona Kiseleva (akiselyova) → Fuel Library Team (fuel-library)
Changed in fuel:
assignee: Fuel Library Team (fuel-library) → Ryan Moe (rmoe)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to fuel-library (master)

Fix proposed to branch: master
Review: https://review.openstack.org/182742

Changed in fuel:
status: Confirmed → In Progress
Revision history for this message
Claude Durocher (claude-d) wrote :

I've tested the proposed patch and deployment was successful:

root@node-5:~# ceph-disk list
/dev/sda :
 /dev/sda1 other, 21686148-6449-6e6f-744e-656564454649
 /dev/sda2 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
 /dev/sda3 other, ext2, mounted on /boot
 /dev/sda4 other, LVM2_member
 /dev/sda5 other, iso9660
/dev/sdb :
 /dev/sdb1 other, 21686148-6449-6e6f-744e-656564454649
 /dev/sdb2 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
 /dev/sdb3 ceph data, active, cluster ceph, osd.13
/dev/sdc :
 /dev/sdc1 other, 21686148-6449-6e6f-744e-656564454649
 /dev/sdc2 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
 /dev/sdc3 ceph data, active, cluster ceph, osd.4
/dev/sdd :
 /dev/sdd1 other, 21686148-6449-6e6f-744e-656564454649
 /dev/sdd2 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
 /dev/sdd3 ceph data, active, cluster ceph, osd.6
/dev/sde :
 /dev/sde1 other, 21686148-6449-6e6f-744e-656564454649
 /dev/sde2 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
 /dev/sde3 ceph data, active, cluster ceph, osd.11
/dev/sdf :
 /dev/sdf1 other, 21686148-6449-6e6f-744e-656564454649
 /dev/sdf2 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
 /dev/sdf3 ceph data, active, cluster ceph, osd.0
/dev/sdg :
 /dev/sdg1 other, 21686148-6449-6e6f-744e-656564454649
 /dev/sdg2 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
 /dev/sdg3 ceph journal
 /dev/sdg4 ceph journal
 /dev/sdg5 ceph journal
 /dev/sdg6 ceph journal
 /dev/sdg7 ceph journal

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to fuel-library (master)

Reviewed: https://review.openstack.org/182742
Committed: https://git.openstack.org/cgit/stackforge/fuel-library/commit/?id=801cf96c8e08d87683919e0ae5b83d93e0f81bad
Submitter: Jenkins
Branch: master

commit 801cf96c8e08d87683919e0ae5b83d93e0f81bad
Author: Ryan Moe <email address hidden>
Date: Wed May 13 09:13:24 2015 -0700

    Fix incorrect shell command in ceph OSD Facter function

    When DEVLINKS was on the first line of udevadm output
    this function would fail to correctly find a journal
    device name. This would result in an invalid ceph-deploy
    command line. An empty string as the journal device name
    would cause the same issue and this case has been fixed
    as well.

    Change-Id: I785fd463fab781ac1b769fb2eee5c6abf0ef6261
    Co-Authored-By: Mykola Golub <email address hidden>
    Closes-bug: #1453959
    Related-bug: #1441434
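
The commit message describes two failure modes; the second one implies a guard like this when assembling the ceph-deploy argument (a hedged sketch with a hypothetical helper name, not the merged code):

  # Hypothetical sketch: build the HOST:DISK[:JOURNAL] argument so that an
  # empty journal name never leaves a dangling ':' like "node-5:/dev/sdb3:".
  def osd_argument(host, data_dev, journal_dev)
    arg = "#{host}:#{data_dev}"
    arg << ":#{journal_dev}" unless journal_dev.nil? || journal_dev.empty?
    arg
  end

With this shape, osd_argument('node-5', '/dev/sdb3', nil) yields "node-5:/dev/sdb3" with no trailing colon.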

Changed in fuel:
status: In Progress → Fix Committed