[Spark] Spark plugin doesn't work with Cinder volumes

Bug #1375920 reported by Yaroslav Lobankov
Affects: Sahara
Status: Fix Released
Importance: High
Assigned to: Andrew Lazarev
Milestone: 2014.2

Bug Description

How to reproduce:

Create a cluster with the Spark plugin and attach Cinder volumes to the cluster nodes.

Expected result:
The cluster is built successfully.

Observed result:
The cluster fails to build.
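
For context, a rough reproduction sketch in Python follows. The payloads are illustrative only: the field names match Sahara's node group template API (volumes_per_node, volumes_size), but the concrete values (plugin version string, flavor, Spark process names) are assumptions, not taken from this report. As noted in the comments further down, the node groups must request different numbers of volumes to hit the bug.

# Illustrative node group definitions (plain dicts mirroring the REST payload).
# Values such as hadoop_version, flavor_id and the process names are
# assumptions for this sketch.
master_ng = {
    "name": "spark-master",
    "plugin_name": "spark",
    "hadoop_version": "1.0.0",            # assumed Spark plugin version string
    "flavor_id": "2",
    "node_processes": ["namenode", "master"],
    "volumes_per_node": 1,                # one Cinder volume on the master...
    "volumes_size": 10,
}

worker_ng = {
    "name": "spark-worker",
    "plugin_name": "spark",
    "hadoop_version": "1.0.0",
    "flavor_id": "2",
    "node_processes": ["datanode", "slave"],
    "volumes_per_node": 2,                # ...and a different count on the workers
    "volumes_size": 10,
}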

2014-09-30 13:35:53.100 DEBUG sahara.utils.ssh_remote [-] [sahara-cluster-491955859-mas-nn-001] Executing "sudo -u hdfs hadoop namenode -format" from (pid=18270) _log_command /opt/stack/sahara/sahara/utils/ssh_remote.py:459
2014-09-30 13:35:56.536 DEBUG sahara.utils.ssh_remote [-] [sahara-cluster-491955859-mas-nn-001] _execute_command took 3.4 seconds to complete from (pid=18270) _log_command /opt/stack/sahara/sahara/utils/ssh_remote.py:459
2014-09-30 13:35:56.738 ERROR sahara.service.ops [-] Error during operating cluster 'sahara-cluster-491955859' (reason: RemoteCommandException: Error during command execution: "sudo -u hdfs hadoop namenode -format"
Return code: 1
STDERR:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

14/09/30 17:35:53 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = sahara-cluster-491955859-mas-nn-001.novalocal/10.0.0.120
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.0.0-cdh4.7.0
STARTUP_MSG: classpath = /etc/hadoop/conf:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh4.7.0.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/jersey-server
-1.8.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/li
b/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/
lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib
/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/h
adoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoop/lib/commons-net-3.
1.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hado
op/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/jline-0.9.94.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop/lib/com
mons-digester-1.8.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoo
p/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoo
p/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/cloudera-jets3t-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/
lib/hadoop/.//parquet-cascading-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-thrift-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-scrooge-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parqu
et-hive-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-hadoop-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-common-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-pig-1.2.5-cdh4.7.0-sources.jar
:/usr/lib/hadoop/.//hadoop-auth-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-hadoop-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-pig-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-encoding-
1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-hadoop-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.7.0-tests.jar:/usr/lib/hadoop/.//hadoop-annotations-2.0.0-cdh4.7.0.jar:/usr/lib/ha
doop/.//parquet-encoding-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-column-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//parquet-encoding-1.2.5-cdh4.7.0-javadoc.jar:/
usr/lib/hadoop/.//parquet-scrooge-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-scrooge-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-generator-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-format-1
.0.0-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-pig-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-hive-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-column-1.2.5-cdh4.7.0-sources.jar:/usr/lib/h
adoop/.//parquet-hive-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-cascading-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-avro-1.2.5-cdh4.7.0.ja
r:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//parquet-pig-bundle-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-thrift-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-common-1.2.5-cdh
4.7.0.jar:/usr/lib/hadoop/.//parquet-format-1.0.0-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-avro-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//parquet-common-
1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-format-1.0.0-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-column-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-avro-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoo
p/.//parquet-thrift-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-generator-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-generator-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-test-hadoop2
-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-pig-bundle-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-cascading-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/zookeeper-3.4.5-
cdh4.7.0.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.8.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.clo
udera.2.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/lib/jline-0.9.94.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.8.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.1.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.7.0-tests.jar:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-0.20-mapreduce/.//*
STARTUP_MSG: build = git://ubuntu64-12-04-mk1/var/lib/jenkins/workspace/generic-package-ubuntu64-12-04/CDH4.7.0-Packaging-Hadoop-2014-05-28_09-36-51/hadoop-2.0.0+1604-1.cdh4.7.0.p0.17~precise/src/hadoop-common-project/hadoop-common -r 8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on Wed May 28 10:12:30 PDT 2014
STARTUP_MSG: java = 1.7.0_51
************************************************************/
14/09/30 17:35:53 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/09/30 17:35:54 WARN common.Util: Path /mnt/dfs/nn should be specified as a URI in configuration files. Please update hdfs configuration.
14/09/30 17:35:54 WARN common.Util: Path /mnt/dfs/nn should be specified as a URI in configuration files. Please update hdfs configuration.
14/09/30 17:35:54 INFO namenode.FSNamesystem: fsLock is fair:true
14/09/30 17:35:54 INFO blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
14/09/30 17:35:55 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/09/30 17:35:55 INFO util.GSet: Computing capacity for map BlocksMap
14/09/30 17:35:55 INFO util.GSet: VM type = 64-bit
14/09/30 17:35:55 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
14/09/30 17:35:55 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/09/30 17:35:55 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/09/30 17:35:55 INFO blockmanagement.BlockManager: defaultReplication = 1
14/09/30 17:35:55 INFO blockmanagement.BlockManager: maxReplication = 512
14/09/30 17:35:55 INFO blockmanagement.BlockManager: minReplication = 1
14/09/30 17:35:55 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/09/30 17:35:55 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/09/30 17:35:55 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/09/30 17:35:55 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/09/30 17:35:55 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
14/09/30 17:35:55 INFO namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
14/09/30 17:35:55 INFO namenode.FSNamesystem: supergroup = supergroup
14/09/30 17:35:55 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/09/30 17:35:55 INFO namenode.FSNamesystem: HA Enabled: false
14/09/30 17:35:55 INFO namenode.FSNamesystem: Append Enabled: true
14/09/30 17:35:55 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/09/30 17:35:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/09/30 17:35:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/09/30 17:35:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/09/30 17:35:55 FATAL namenode.NameNode: Exception in namenode join
java.io.IOException: Cannot create directory /mnt/dfs/nn/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:298)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:528)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:549)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:152)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:768)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1136)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
14/09/30 17:35:56 INFO util.ExitUtil: Exiting with status 1
14/09/30 17:35:56 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sahara-cluster-491955859-mas-nn-001.novalocal/10.0.0.120
************************************************************/

STDOUT:
Formatting using clusterid: CID-1bf08fab-d9cf-4987-a0d9-13a01e6e2fe4
)
2014-09-30 13:35:56.738 TRACE sahara.service.ops Traceback (most recent call last):
2014-09-30 13:35:56.738 TRACE sahara.service.ops File "/opt/stack/sahara/sahara/service/ops.py", line 113, in wrapper
2014-09-30 13:35:56.738 TRACE sahara.service.ops f(cluster_id, *args, **kwds)
2014-09-30 13:35:56.738 TRACE sahara.service.ops File "/opt/stack/sahara/sahara/service/ops.py", line 206, in _provision_cluster
2014-09-30 13:35:56.738 TRACE sahara.service.ops plugin.start_cluster(cluster)
2014-09-30 13:35:56.738 TRACE sahara.service.ops File "/opt/stack/sahara/sahara/plugins/spark/plugin.py", line 106, in start_cluster
2014-09-30 13:35:56.738 TRACE sahara.service.ops run.format_namenode(r)
2014-09-30 13:35:56.738 TRACE sahara.service.ops File "/opt/stack/sahara/sahara/plugins/spark/run_scripts.py", line 40, in format_namenode
2014-09-30 13:35:56.738 TRACE sahara.service.ops nn_remote.execute_command("sudo -u hdfs hadoop namenode -format")
2014-09-30 13:35:56.738 TRACE sahara.service.ops File "/opt/stack/sahara/sahara/utils/ssh_remote.py", line 411, in execute_command
2014-09-30 13:35:56.738 TRACE sahara.service.ops get_stderr, raise_when_error)
2014-09-30 13:35:56.738 TRACE sahara.service.ops File "/opt/stack/sahara/sahara/utils/ssh_remote.py", line 480, in _run_s
2014-09-30 13:35:56.738 TRACE sahara.service.ops return self._run_with_log(func, timeout, *args, **kwargs)
2014-09-30 13:35:56.738 TRACE sahara.service.ops File "/opt/stack/sahara/sahara/utils/ssh_remote.py", line 368, in _run_with_log
2014-09-30 13:35:56.738 TRACE sahara.service.ops return self._run(func, *args, **kwargs)
2014-09-30 13:35:56.738 TRACE sahara.service.ops File "/opt/stack/sahara/sahara/utils/ssh_remote.py", line 477, in _run
2014-09-30 13:35:56.738 TRACE sahara.service.ops return procutils.run_in_subprocess(self.proc, func, args, kwargs)
2014-09-30 13:35:56.738 TRACE sahara.service.ops File "/opt/stack/sahara/sahara/utils/procutils.py", line 52, in run_in_subprocess
2014-09-30 13:35:56.738 TRACE sahara.service.ops raise SubprocessException(result['exception'])
2014-09-30 13:35:56.738 TRACE sahara.service.ops SubprocessException: RemoteCommandException: Error during command execution: "sudo -u hdfs hadoop namenode -format"
2014-09-30 13:35:56.738 TRACE sahara.service.ops Return code: 1
[STDERR and STDOUT of the TRACE output omitted; identical to the command output quoted above]
2014-09-30 13:35:56.889 INFO sahara.utils.general [-] Cluster status has been changed: id=5771683a-6e2a-40fc-9ff7-880c18d4fc49, New status=Error

Changed in sahara:
assignee: nobody → Andrew Lazarev (alazarev)
OpenStack Infra (hudson-openstack) wrote : Fix proposed to sahara (master)

Fix proposed to branch: master
Review: https://review.openstack.org/125199

Changed in sahara:
status: New → In Progress
Andrew Lazarev (alazarev) wrote :

To reproduce the issue, the master and worker node groups must have a different number of volumes. The root of the issue is that the Spark plugin uses one node group's configuration for all nodes.
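
A minimal sketch of that pattern, assuming a storage_paths() helper on node groups that returns the mount points of that group's attached volumes; this is not the actual plugin code, only an illustration of the fix described in the commit below.

def hdfs_dirs_buggy(cluster, instance):
    # Buggy pattern: one node group's configuration is applied to every node,
    # so a namenode whose group has a different volume layout is pointed at
    # directories that do not exist on it (hence "Cannot create directory
    # /mnt/dfs/nn/current" in the log above).
    paths = cluster.node_groups[0].storage_paths()
    return [p + "/dfs" for p in paths]

def hdfs_dirs_fixed(cluster, instance):
    # Fix: only the current instance's own node group influences data location.
    paths = instance.node_group.storage_paths()
    return [p + "/dfs" for p in paths]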

Changed in sahara:
importance: Undecided → High
tags: added: juno-rc-potential
Changed in sahara:
milestone: none → kilo-1
OpenStack Infra (hudson-openstack) wrote : Fix merged to sahara (master)

Reviewed: https://review.openstack.org/125199
Committed: https://git.openstack.org/cgit/openstack/sahara/commit/?id=b3223ad8928fca39488e4b249146d108c1e15b8f
Submitter: Jenkins
Branch: master

commit b3223ad8928fca39488e4b249146d108c1e15b8f
Author: Andrew Lazarev <email address hidden>
Date: Tue Sep 30 14:02:28 2014 -0700

    Fixed volumes configuration in spark plugin

    Only current node group config should influence on data location.

    Change-Id: Id1d6f7bf29fd5b8d7734d3358b6e34f06bf084da
    Closes-Bug: #1375920

Changed in sahara:
status: In Progress → Fix Committed
Thierry Carrez (ttx)
Changed in sahara:
milestone: kilo-1 → juno-rc2
tags: removed: juno-rc-potential
OpenStack Infra (hudson-openstack) wrote : Fix proposed to sahara (proposed/juno)

Fix proposed to branch: proposed/juno
Review: https://review.openstack.org/126385

OpenStack Infra (hudson-openstack) wrote : Fix merged to sahara (proposed/juno)

Reviewed: https://review.openstack.org/126385
Committed: https://git.openstack.org/cgit/openstack/sahara/commit/?id=a0ba4d5c373cdbf859ffe0c3faba2ef132077d84
Submitter: Jenkins
Branch: proposed/juno

commit a0ba4d5c373cdbf859ffe0c3faba2ef132077d84
Author: Andrew Lazarev <email address hidden>
Date: Tue Sep 30 14:02:28 2014 -0700

    Fixed volumes configuration in spark plugin

    Only current node group config should influence on data location.

    Change-Id: Id1d6f7bf29fd5b8d7734d3358b6e34f06bf084da
    Closes-Bug: #1375920
    (cherry picked from commit b3223ad8928fca39488e4b249146d108c1e15b8f)

Thierry Carrez (ttx)
Changed in sahara:
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in sahara:
milestone: juno-rc2 → 2014.2
OpenStack Infra (hudson-openstack) wrote : Fix proposed to sahara (master)

Fix proposed to branch: master
Review: https://review.openstack.org/128889

OpenStack Infra (hudson-openstack) wrote : Fix merged to sahara (master)

Reviewed: https://review.openstack.org/128889
Committed: https://git.openstack.org/cgit/openstack/sahara/commit/?id=2217fb27ecf8c5b4e4c4673b5f22f0f16016b677
Submitter: Jenkins
Branch: master

commit 3630ccffb25f66e2efc9297b0ecb852f8d932363
Author: Trevor McKay <email address hidden>
Date: Wed Oct 1 17:23:29 2014 -0400

    Fix HDFS url description, and other various edits

    HDFS url description is wrong as a result of code changes. This was
    the major motivation for this CR.

    Additional changes

    * formatted for 80 characters
    * consistent use of '.' at the end of bullets
    * added mention of Spark
    * adding '.sahara' suffix is no longer necessary
    * some other minor changes

    Closes-Bug: 1376457
    Change-Id: I72134bcdf6c42911d07e65952a9a56331d896699
    (cherry picked from commit a718ec7ddf85ef2e1e17868f6e2cd05b1c2762cd)

commit ff3bf76318821336810709eb1ff4b88cf94b67c7
Author: Trevor McKay <email address hidden>
Date: Wed Oct 1 13:16:57 2014 -0400

    Remove line saying that scaling and EDP are not supported for Spark

    Closes-Bug: 1376364
    Change-Id: I82249f8b9fb932c206876c2f6652c0a0b9e0650b
    (cherry picked from commit e385e3ed02bddf4db3f0b82c800b2cc0e2c056ba)

commit 4f23cfefa18332274d88475984491facd79b85f3
Author: Trevor McKay <email address hidden>
Date: Wed Oct 1 12:34:14 2014 -0400

    Description of job config hints in new doc page is wrong

    The 'configs' field is not a dictionary, it is actually
    a list of dictionaries. Update the description.

    Closes-Bug: #1357615
    Change-Id: I540abe050f1d81e36f4b5dcca547a7e5c3514c84
    (cherry picked from commit 61be4ece04d6370086d8b5b9bea4224010ec0d15)

commit 0d94b67fca6b0c5776ddcfe0f3e5b489afe376ea
Author: Michael McCune <email address hidden>
Date: Wed Oct 1 11:25:41 2014 -0400

    Removing extraneous Swift information from Features

    Changes
    * removing repeated information from Features page for Swift integration
    * refactoring features.rst to 80 columns

    Change-Id: Ib37e4476258cc4547d4a27847c89a9611bff05bc
    Closes-Bug: #1376309
    (cherry picked from commit eb529ca4f2dd153d494c4e02dd302998b3d6f43b)

commit 9e3fbb654d3530b11d3e6c1fb652028e631e5859
Author: Trevor McKay <email address hidden>
Date: Tue Sep 30 16:08:15 2014 -0400

    Update the Elastic Data Processing (EDP) documentation page

    * Add description of MapReduce.Streaming job type
    * Add description of Spark job type
    * Add reference to advanced configuration for Swfit proxy
    * Note that .sahara suffix is added to Swift URLs automatically
    * A few minor changes

    Closes-Bug: 1374574
    Closes-Bug: 1374606
    Change-Id: Ie53888975ce436439cc808b2fdc45dff66bae1a9
    (cherry picked from commit 7973db35e61b0c2d686798cb2de50d281713b03b)

commit 360aedfb323fb888acc4745b262eb7746d14ef27
Author: Trevor McKay <email address hidden>
Date: Tue Sep 30 12:51:08 2014 -0400

    Add documentation on the EDP job engine SPI

    Closes-Bug: 1357615
    Change-Id: I57dae10da9460deb2a332025cc3a0ea37ae233ee
    (cherry picked from commit 62ba37a8c415f1c422f010c96c0d553ff788d343)

commit 9fa0c5473d29c5eeeef3a23e7...

