Hi Jeff, I'm using 9.4 M2 and I have the same problem even with the sticky bit set to off (or after changing the HDFS_TEMP path). Opening the VM's log file, I found these messages:

2015-03-26 02:21:50,793 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2015-03-26 02:21:59,029 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2015-03-26 02:21:59,029 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2015-03-26 02:22:29,029 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2015-03-26 02:22:29,029 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2015-03-26 02:22:50,818 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2015-03-26 02:22:51,058 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 97 Total time for transactions(ms): 61 Number of transactions batched in Syncs: 7 Number of syncs: 84 SyncTimes(ms): 289
2015-03-26 02:22:51,472 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/sasdata-2015-03-26-07-20-34-420-e-00003.dlv. BP-150411824-127.0.0.1-1418915217884 blk_1073742312_1498{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-58c87cab-9390-42e3-ba20-58d96ba4696e:NORMAL|RBW]]}
2015-03-26 02:22:52,376 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1. For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2015-03-26 02:22:52,376 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:java.io.IOException: File /tmp/sasdata-2015-03-26-07-20-34-420-e-00003.dlv could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
2015-03-26 02:22:52,376 INFO org.apache.hadoop.ipc.Server: IPC Server handler 16 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 192.168.100.1:49699 Call#30 Retry#0
java.io.IOException: File /tmp/sasdata-2015-03-26-07-20-34-420-e-00003.dlv could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1504)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3065)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-03-26 02:22:52,629 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: User does not belong to hive
2015-03-26 02:22:52,630 INFO org.apache.hadoop.ipc.Server: IPC Server handler 16 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setOwner from 127.0.0.1:41381 Call#191 Retry#0: org.apache.hadoop.security.AccessControlException: User does not belong to hive
2015-03-26 02:22:59,035 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30006 milliseconds
2015-03-26 02:22:59,035 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2015-03-26 02:23:29,039 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30004 milliseconds
2015-03-26 02:23:29,039 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2015-03-26 02:23:50,792 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
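In case it helps anyone reproducing this, the lines that actually explain the write failure are the two WARN entries (the BlockPlacementPolicy one and the "could only be replicated to 0 nodes" one) rather than the surrounding INFO noise. A rough sketch of pulling just those out of a saved copy of the log (the excerpt file below is a stand-in I made from the lines above, not a real NameNode log path):

```shell
#!/bin/sh
# Stand-in excerpt copied from the NameNode log above; in practice you
# would point grep at the real log file on the VM instead.
cat > namenode-excerpt.log <<'EOF'
2015-03-26 02:21:50,793 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2015-03-26 02:22:52,376 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1.
2015-03-26 02:22:52,376 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:java.io.IOException: File /tmp/sasdata-2015-03-26-07-20-34-420-e-00003.dlv could only be replicated to 0 nodes instead of minReplication (=1).
EOF

# Keep only the WARN entries about replica placement -- these are the
# lines that identify why the block write failed.
grep -E 'WARN.*(BlockPlacementPolicy|could only be replicated)' namenode-excerpt.log
```

This should print the two placement-related WARN lines and skip the INFO line, which makes it quicker to compare logs across runs when toggling the sticky bit or changing HDFS_TEMP.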