DistCp not executing

1mrurvl1 posted on 2021-05-27 in Hadoop
Follow (0) | Answers (1) | Views (426)

I am trying to copy data from one HDFS cluster to another using the distcp command:
hadoop distcp hdfs://sourcenamenodehostname:50070/var/lib/hadoop-hdfs/distcptest.txt hdfs://destinationnamenodehostname:50070/var/lib/hadoop-hdfs
When I submit this I get the error message below. Please go through the error message and guide me on the right way to do this.
19/02/27 04:28:19 INFO tools.OptionsParser: parseChunkSize: blocksperchunk false
19/02/27 04:28:20 ERROR tools.DistCp: Invalid arguments:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1835)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1515)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4448)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:912)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:533)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:862)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275)

at org.apache.hadoop.ipc.Client.call(Client.java:1504)
    at org.apache.hadoop.ipc.Client.call(Client.java:1441)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:788)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2168)
    at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1266)
    at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1262)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1262)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1418)
    at org.apache.hadoop.tools.DistCp.setTargetPathExists(DistCp.java:208)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:133)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:493)

Invalid arguments: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1835)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1515)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4448)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:912)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:533)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:862)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275)
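
The last client-side frames in the trace (DistCp.setTargetPathExists calling FileSystem.exists) show that the job fails while DistCp checks whether the destination path exists, before any copy tasks are launched. The same kind of read-only metadata lookup can be reproduced without DistCp; a minimal sketch, reusing the placeholder destination URI from the command above:

    # Hypothetical probe: a plain listing issues the same kind of read request
    # to the NameNode, so it should hit the same StandbyException if that
    # NameNode is the one sitting in standby
    hadoop fs -ls hdfs://destinationnamenodehostname:50070/var/lib/hadoop-hdfs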

zed5wv10 1#

It looks to me like one of the NameNodes (the source) is not in a healthy state and is not accepting inbound connections to modify state or take new writes. Look at this:

Invalid arguments: Operation category READ is not supported in state standby.
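
One way to act on this is to confirm which NameNode is actually active and then point DistCp at the active side, or at the HA nameservice so the client resolves the active NameNode itself. A minimal sketch, assuming a standard HA pair; nn1/nn2 are placeholder service IDs and the nameservice names are hypothetical (both nameservices must be defined in the client's hdfs-site.xml for the second form to work):

    # Ask each NameNode for its HA state; exactly one should report "active"
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2

    # Re-run the copy against the HA nameservice names (or against the active
    # NameNodes' RPC addresses, commonly port 8020) rather than a host that may
    # currently be in standby
    hadoop distcp hdfs://sourcenameservice/var/lib/hadoop-hdfs/distcptest.txt \
                  hdfs://destinationnameservice/var/lib/hadoop-hdfs

If a NameNode that should be active reports standby instead, hdfs haadmin -failover (or restarting the unhealthy NameNode) is the usual way to bring the pair back to an active/standby state.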
