native snappy library not available: this version of libhadoop was built without snappy support

m1m5dgzv  posted on 2021-05-29  in  Hadoop

I hit the above error when calling MLUtils.saveAsLibSVMFile. I tried various workarounds, such as the ones below, but none of them worked.

    /*
    conf.set("spark.io.compression.codec", "org.apache.spark.io.LZFCompressionCodec")

    conf.set("spark.executor.extraClassPath", "/usr/hdp/current/hadoop-client/lib/snappy-java-*.jar")
    conf.set("spark.driver.extraClassPath", "/usr/hdp/current/hadoop-client/lib/snappy-java-*.jar")

    conf.set("spark.executor.extraLibraryPath", "/usr/hdp/2.3.4.0-3485/hadoop/lib/native")
    conf.set("spark.driver.extraLibraryPath", "/usr/hdp/2.3.4.0-3485/hadoop/lib/native")
    */

I also read the following thread: https://community.hortonworks.com/questions/18903/this-version-of-libhadoop-was-built-without-snappy.html
In the end there were only two ways to solve it. The answer is below.


e4eetjau1#

One approach is to switch to a different Hadoop compression codec, for example:

    sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress", "true")
    sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.type", CompressionType.BLOCK.toString)
    sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")
    sc.hadoopConfiguration.set("mapreduce.map.output.compress", "true")
    sc.hadoopConfiguration.set("mapreduce.map.output.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")

The second approach is to pass the native library path /usr/hdp/<whatever is your current version>/hadoop/lib/native/ to the spark-submit job as a command-line argument.
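For the second approach, a minimal sketch of the spark-submit invocation might look like the following. The HDP version placeholder is kept as in the original answer, and the application class and jar name (com.example.MyApp, myapp.jar) are hypothetical stand-ins, not from the original post:

    # --driver-library-path covers the driver; spark.executor.extraLibraryPath
    # propagates the same native library directory to the executors
    spark-submit \
      --driver-library-path /usr/hdp/<whatever is your current version>/hadoop/lib/native/ \
      --conf spark.executor.extraLibraryPath=/usr/hdp/<whatever is your current version>/hadoop/lib/native/ \
      --class com.example.MyApp \
      myapp.jar

Setting the path at submit time (rather than via conf.set inside the application, as tried above) matters because the driver JVM's library path must be set before the JVM starts, so it cannot be changed from within the already-running program.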
