Spark performance is extremely slow

rn0zuynd · posted 2021-06-26 in Hive

I'm new to Spark and need some help debugging very slow performance. I'm running the transformations below, and the final query has been running for more than 2 hours.

scala> val hiveContext = new org.apache.spark.sql.hive.HiveContext( sc )
hiveContext: org.apache.spark.sql.hive.HiveContext =      org.apache.spark.sql.hive.HiveContext@2b33f7a0
scala> val t1_df = hiveContext.sql("select * from T1" )

scala> t1_df.registerTempTable( "T1" )
warning: there was one deprecation warning; re-run with -deprecation for details

scala> t1_df.count
17/06/07 07:26:51 WARN util.Utils: Truncated the string representation of a    plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
res3: Long = 1732831

scala> val t1_df1 = t1_df.dropDuplicates( Array("c1","c2","c3", "c4" ))

scala> t1_df1.registerTempTable( "ABC" )
warning: there was one deprecation warning; re-run with -deprecation for details

scala> hiveContext.sql( "select * from T1 where c1 not in ( select c1 from ABC )" ).count
[Stage 4:====================================================>    (89 + 8) / 97]

I'm using Spark 2.1.0 and reading the data from Hive 2.1.1, on a cluster of Amazon VMs with 7 nodes, each with 250 GB RAM and 64 virtual cores. With resources like these I expected this simple query over ~1.7 million records to fly, but it is extremely slow. Any pointers would be much appreciated.
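Before going to the plan: 1.7 million rows is tiny for a cluster this size, so one quick thing to check is how the work is actually split up. A minimal diagnostic sketch, using the t1_df from the session above (the config key and its default are standard Spark SQL):

// Input parallelism of the scan, and the post-shuffle parallelism
// (spark.sql.shuffle.partitions defaults to 200).
println(t1_df.rdd.getNumPartitions)
println(hiveContext.getConf("spark.sql.shuffle.partitions"))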
Update: adding the plan:

scala> hiveContext.sql( "select * from T1 where c1 not in ( select c1 from ABC )" ).explain
== Physical Plan ==
BroadcastNestedLoopJoin BuildRight, LeftAnti, (isnull((c1#26 = c1#26#1398)) || (c1#26 = c1#26#1398))
:- FileScan parquet default.t1_pq[cols ... more fields] Batched: false, Format: Parquet, Location: InMemoryFileIndex[hdfs://<hostname>/user/hive/warehouse/atn_load_pq], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<hdr_msg_src:string,hdr_recv_tsmp:timestamp,hdr_desk_id:string,execprc:string,dreg:string,c...
+- BroadcastExchange IdentityBroadcastMode
   +- *HashAggregate(keys=[c1#26, c2#59, c3#60L, c4#82], functions=[])
      +- Exchange hashpartitioning(c1#26, c2#59, c3#60L, c4#82, 200)
         +- *HashAggregate(keys=[c1#26, c2#59, c3#60L, c4#82], functions=[])
            +- *FileScan parquet default.atn_load_pq[c1#26,c2#59,c3#60L,c4#82] Batched: true, Format: Parquet, Location: InMemoryFileIndex[hdfs://<hostname>/user/hive/warehouse/atn_load_pq], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<c1:string,c2:string,c3:bigint,c4:string>
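For reference, the BroadcastNestedLoopJoin in this plan is the expensive part. NOT IN has null-aware semantics (note the isnull(...) term in the join condition), so Spark cannot plan a hash-based anti join and instead compares every T1 row against the broadcast duplicate-free set. A hedged sketch of a rewrite, assuming c1 is never null (so the semantics match): NOT EXISTS with an equality predicate, which Spark 2.x can plan as a regular left anti join, using the T1 and ABC temp tables registered above.

// NOT EXISTS with an equi-predicate avoids the null-aware
// BroadcastNestedLoopJoin fallback.
hiveContext.sql(
  """select *
    |from T1 t
    |where not exists (select 1 from ABC a where a.c1 = t.c1)
  """.stripMargin
).count()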

1cklez4t 1#

Although I think the count in your query will always be 0 (after dropDuplicates, every c1 in T1 still appears in ABC), you can try a left anti join instead, and don't forget to cache t1_df to avoid recomputing it several times:

// Cache T1 so it is not re-read from Parquet for both sides of the join.
val t1_df = hiveContext.sql("select * from T1").cache

// The left anti join keeps only T1 rows whose c1 has no match on the right.
t1_df
  .join(
    t1_df.dropDuplicates(Array("c1", "c2", "c3", "c4")),
    Seq("c1"),
    "leftanti"
  )
  .count()
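One caveat: a left anti join on c1 is not exactly equivalent to NOT IN when c1 can be null (NOT IN yields no rows at all if ABC.c1 contains a null, while the anti join keeps T1 rows whose c1 is null). To confirm the rewrite actually changed the plan, a quick check:

// The physical plan should now show a hash- or sort-merge-based LeftAnti
// join instead of BroadcastNestedLoopJoin.
t1_df
  .join(t1_df.dropDuplicates(Array("c1", "c2", "c3", "c4")), Seq("c1"), "leftanti")
  .explain()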
