Sqoop data import fails with a replication problem even after changing hdfs-site.xml properties

eqqqjvef · asked on 2021-06-04 · in Sqoop

Sqoop imports data, but a replication error still occurs even after I changed properties in hdfs-site.xml. Can anyone tell me which property file needs to be changed?

Command:

C:\hadoop\hdp\sqoop-1.4.6.2.4.0.0-169>sqoop import --connect jdbc:oracle:thin:@nile:1527:huiprd --username hud_reader --password hud_reader_n1le --table PWRLINE_COPY.DATAAGGRUN --m 1

Error message:

> Warning: HBASE_HOME and HBASE_VERSION not set.
> Warning: HCATALOG_HOME does not exist HCatalog imports will fail. Please set HCATALOG_HOME to the root of your HCatalog installation.
> Warning: ACCUMULO_HOME not set.
> Warning: HBASE_HOME does not exist HBase imports will fail. Please set HBASE_HOME to the root of your HBase installation.
> Warning: ACCUMULO_HOME does not exist Accumulo imports will fail. Please set ACCUMULO_HOME to the root of your Accumulo installation.
> 16/04/22 08:50:40 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.4.0.0-169
> 16/04/22 08:50:40 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
> 16/04/22 08:50:40 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
> 16/04/22 08:50:40 INFO manager.SqlManager: Using default fetchSize of 1000
> 16/04/22 08:50:40 INFO tool.CodeGenTool: Beginning code generation
> 16/04/22 08:50:41 INFO manager.OracleManager: Time zone has been set to GMT
> 16/04/22 08:50:42 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM PWRLINE_COPY.DATAAGGRUN t WHERE 1=0
> 16/04/22 08:50:42 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is c:\hadoop\hdp\hadoop-2.7.1.2.4.0.0-169
> Note: \tmp\sqoop-sahus\compile\f1f5245c3a8fbf8c7782e696f3662575\PWRLINE_COPY_DATAAGGRUN.java uses or overrides a deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
> 16/04/22 08:50:45 INFO orm.CompilationManager: Writing jar file: \tmp\sqoop-sahus\compile\f1f5245c3a8fbf8c7782e696f3662575\PWRLINE_COPY.DATAAGGRUN.jar
> 16/04/22 08:50:45 INFO manager.OracleManager: Time zone has been set to GMT
> 16/04/22 08:50:46 INFO manager.OracleManager: Time zone has been set to GMT
> 16/04/22 08:50:46 INFO mapreduce.ImportJobBase: Beginning import of PWRLINE_COPY.DATAAGGRUN
> 16/04/22 08:50:46 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
> 16/04/22 08:50:46 INFO manager.OracleManager: Time zone has been set to GMT
> 16/04/22 08:50:47 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
> 16/04/22 08:50:48 INFO impl.TimelineClientImpl: Timeline service address: http://cc-wvd-ap161.pepcoholdings.biz:8188/ws/v1/timeline/
> 16/04/22 08:50:49 INFO client.RMProxy: Connecting to ResourceManager at cc-wvd-ap161.pepcoholdings.biz/161.186.159.156:8032
> 16/04/22 08:50:50 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/sahus/.staging/job_1461298205218_0003
> 16/04/22 08:50:50 ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.ipc.RemoteException(java.io.IOException): file /user/sahus/.staging/job_1461298205218_0003/libjars/xz-1.0.jar. Requested replication 10 exceeds maximum 3
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.verifyReplication(BlockManager.java:988)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setReplication(FSDirAttrOp.java:138)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setReplication(FSNamesystem.java:1968)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setReplication(NameNodeRpcServer.java:740)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setReplication(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
>   at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1427)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1358)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.setReplication(Unknown Source)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setReplication(ClientNamenodeProtocolTranslatorPB.java:349)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>   at com.sun.proxy.$Proxy15.setReplication(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.setReplication(DFSClient.java:1902)
>   at org.apache.hadoop.hdfs.DistributedFileSystem$9.doCall(DistributedFileSystem.java:517)
>   at org.apache.hadoop.hdfs.DistributedFileSystem$9.doCall(DistributedFileSystem.java:513)
>   at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at org.apache.hadoop.hdfs.DistributedFileSystem.setReplication(DistributedFileSystem.java:513)
>   at org.apache.hadoop.mapreduce.JobResourceUploader.copyRemoteFiles(JobResourceUploader.java:204)
>   at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:128)
>   at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:95)
>   at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:190)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
>   at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196)
>   at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169)
>   at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266)
>   at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
>   at org.apache.sqoop.manager.OracleManager.importTable(OracleManager.java:444)
>   at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
>   at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
>   at org.apache.sqoop.Sqoop.run(Sqoop.java:148)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226)
>   at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235)
>   at org.apache.sqoop.Sqoop.main(Sqoop.java:244)

nbewdwxp1#

The statement in the error log,

> Requested replication 10 exceeds maximum 3

refers to the property mapreduce.client.submit.file.replication, which sets the replication factor for job submission files (the job jar and libjars) and defaults to 10. It is a client-side MapReduce setting, so it lives in mapred-site.xml rather than hdfs-site.xml; this is why editing hdfs-site.xml alone did not make the error go away. Modify it in mapred-site.xml instead.
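A minimal sketch of the change, assuming the cluster's maximum allowed replication is 3 (the value 3 is illustrative; use a value at or below your own cluster's maximum):

```xml
<!-- mapred-site.xml on the client machine that runs Sqoop -->
<property>
  <name>mapreduce.client.submit.file.replication</name>
  <!-- default is 10; keep this at or below the cluster's maximum replication -->
  <value>3</value>
</property>
```

For a one-off test, the same property can also be passed as a generic Hadoop argument; Sqoop accepts -D options immediately after the tool name, before any tool-specific options:

```
sqoop import -D mapreduce.client.submit.file.replication=3 --connect jdbc:oracle:thin:@nile:1527:huiprd --username hud_reader --password hud_reader_n1le --table PWRLINE_COPY.DATAAGGRUN --m 1
```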
