Sqoop import from Couchbase to Hadoop

zi8p0yeb posted on 2021-06-03 in Hadoop

On Ubuntu, using Couchbase 2.5.1, Cloudera CDH4, the Couchbase Hadoop plugin, and Oracle JDK 6. Everything installed fine (as far as I can tell), and I can use Hadoop and Couchbase independently without any problems, but when I try to use the plugin with

sqoop import --connect http://127.0.0.1:8091/ --table DUMP

I get the following error:

Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/04/11 11:44:08 INFO sqoop.Sqoop: Running Sqoop version: 1.4.3-cdh4.6.0
14/04/11 11:44:08 INFO tool.CodeGenTool: Beginning code generation
14/04/11 11:44:08 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
Note: /tmp/sqoop-vagrant/compile/30e6774902d338663db059706cde5b12/DUMP.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/04/11 11:44:09 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-vagrant/compile/30e6774902d338663db059706cde5b12/DUMP.jar
14/04/11 11:44:09 INFO mapreduce.ImportJobBase: Beginning import of DUMP
14/04/11 11:44:09 WARN util.Jars: No such class couchbase doesn't use a jdbc driver available.
14/04/11 11:44:11 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/04/11 11:44:12 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/04/11 11:44:13 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

Any idea where I went wrong, or what I can do to track it down?

imzjd6km #1

I don't think you'll be able to connect to a password-protected Couchbase bucket with the Couchbase Hadoop plugin. I ran into an authentication exception and was never able to resolve it; I edited the source code, and only then could I get it to work.

kse8i1jr #2

It turns out the syntax I was using was wrong. Say we want to import the beer-sample bucket from Couchbase into HDFS; the correct syntax is shown below, with the bucket name actually passed as the username.

sqoop import --connect http://localhost:8091/pools --password password --username beer-sample --table DUMP
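If the import succeeds, Sqoop should write the records to HDFS under a directory named after the --table value (DUMP here), by default inside the running user's HDFS home directory. A quick sanity check, assuming that default output location and the standard part-m-* file naming:

# List the import directory, then peek at the first mapper's output (paths assume Sqoop defaults)
hadoop fs -ls DUMP
hadoop fs -cat DUMP/part-m-00000 | head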
