I imported data with sqoop into sequence files and am loading that data in spark-shell. The sqoop-generated record code references classes from the com.cloudera.sqoop.lib package. Running this command in spark-shell produces the following errors:
val ordersRDD = sc.sequenceFile("/user/pawinder/problem1-seq/orders",classOf[org.apache.hadoop.io.IntWritable],classOf[com.problem1.retaildb.orders])
warning: Class com.cloudera.sqoop.lib.SqoopRecord not found - continuing with a stub.
warning: Class com.cloudera.sqoop.lib.LargeObjectLoader not found - continuing with a stub.
warning: Class com.cloudera.sqoop.lib.LargeObjectLoader not found - continuing with a stub.
warning: Class com.cloudera.sqoop.lib.DelimiterSet not found - continuing with a stub.
warning: Class com.cloudera.sqoop.lib.DelimiterSet not found - continuing with a stub.
warning: Class com.cloudera.sqoop.lib.DelimiterSet not found - continuing with a stub.
warning: Class com.cloudera.sqoop.lib.RecordParser not found - continuing with a stub.
error: Class com.cloudera.sqoop.lib.SqoopRecord not found - continuing with a stub.
Can I instruct sqoop to generate code that does not depend on the cloudera packages? Do I need to add a jar containing the com.cloudera.sqoop.lib package when starting spark-shell, and where would I find that jar? Or should I write my own value class so that it does not depend on com.cloudera.sqoop.lib at all?
I am using the Cloudera QuickStart VM. Any help is much appreciated.
Edit: the problem was resolved by adding sqoop-1.4.6.2.6.5.0-292.jar to the spark2 shell:
spark-shell --jars problem1/bin/orders.jar,/usr/hdp/2.6.5.0-292/sqoop/sqoop-1.4.6.2.6.5.0-292.jar
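One gotcha once the load works: Hadoop's record readers reuse the same Writable instance for every row, so it is usual to map the pairs to plain Scala values (e.g. `ordersRDD.map { case (k, v) => (k.get, v.toString) }`) before calling collect or cache. A local sketch of why the reuse bites, using a hypothetical mutable `Holder` as a stand-in for a reused Writable (no Spark needed to see the effect):

```scala
// Holder stands in for a Writable that Hadoop mutates in place for each record.
class Holder(var value: Int)

val reused = new Holder(0)

// Keeping references to the reused object: every element points at the
// same instance, so after the loop they all show the last value written.
val rows = (1 to 3).map { i => reused.value = i; reused }

// Copying the primitive out (like k.get / v.toString) preserves each row.
val copied = (1 to 3).map { i => reused.value = i; reused.value }

println(rows.map(_.value)) // Vector(3, 3, 3) -- all aliases of one object
println(copied)            // Vector(1, 2, 3)
```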
I had also tried to work around it by defining a case class for orders, but that did not work: the MapReduce job still referenced the com.cloudera.sqoop package classes.
scala> case class Orders(order_id:Int,order_date:java.sql.Timestamp,customer_id:Int,status:String)
defined class Orders
scala> val ordersRDD = sc.sequenceFile("/user/pawinder/problem1-seq/orders",classOf[org.apache.hadoop.io.IntWritable],classOf[Orders])
ordersRDD: org.apache.spark.rdd.RDD[(org.apache.hadoop.io.IntWritable, Orders)] = /user/pawinder/problem1-seq/orders HadoopRDD[0] at sequenceFile at <console>:26
scala> ordersRDD.count
19/05/14 14:29:21 ERROR Executor: Exception in task 1.0 in stage 0.0 (TID 1)
java.lang.NoClassDefFoundError: com/cloudera/sqoop/lib/SqoopRecord
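The case class fails because sc.sequenceFile deserializes values with the class that actually wrote them (the sqoop-generated orders class, which extends SqoopRecord), so that class and its dependencies must be on the executor classpath regardless of what type you declare. A plain case class also does not implement org.apache.hadoop.io.Writable. Below is a minimal sketch of the write/readFields pattern such a value class would need; it uses plain DataStreams so it runs without Hadoop on the classpath (a real class would implement org.apache.hadoop.io.Writable), and the field layout is an assumption that would have to match what sqoop actually wrote:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, DataInputStream, DataOutputStream}

// Writable-style value class for the orders table (sketch, not Hadoop-backed).
// Fields are vars with defaults because Hadoop instantiates Writables with a
// no-arg constructor and then fills them via readFields.
case class OrdersRecord(var orderId: Int = 0,
                        var orderDate: Long = 0L, // timestamp as epoch millis
                        var customerId: Int = 0,
                        var status: String = "") {
  // Serialize fields in a fixed order (mirrors Writable.write).
  def write(out: DataOutputStream): Unit = {
    out.writeInt(orderId)
    out.writeLong(orderDate)
    out.writeInt(customerId)
    out.writeUTF(status)
  }
  // Read fields back in the same order (mirrors Writable.readFields).
  def readFields(in: DataInputStream): Unit = {
    orderId = in.readInt()
    orderDate = in.readLong()
    customerId = in.readInt()
    status = in.readUTF()
  }
}

// Round-trip demo: write one record to bytes and read it back.
val buf = new ByteArrayOutputStream()
val out = new DataOutputStream(buf)
OrdersRecord(1, 1374735600000L, 11599, "CLOSED").write(out)
out.flush()

val read = OrdersRecord()
read.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray)))
println(read) // the deserialized copy matches the original
```

Even with such a class, it would only help for files you write yourself; files already written by sqoop still need the sqoop jar, as in the edit above.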