SQL queries on DataFrames with Spark/Java

hfyxw5xn · posted 2021-05-27 in Spark

I'm a beginner with Spark and I'm stuck on how to issue a SQL request using DataFrames.
I have the following two DataFrames.

df_zones
+-----------------+-----------------+----------------------+---------------------+
|id               |geomType         |geom                  |rayon                |
+-----------------+-----------------+----------------------+---------------------+
|30               |Polygon          |[00 00 00 00 01 0...] |200                  |
|32               |Point            |[00 00 00 00 01 0...] |320179               |
+-----------------+-----------------+----------------------+---------------------+
df_tracking
+-----------------+-----------------+----------------------+
|idZones          |Longitude        |Latitude              |
+-----------------+-----------------+----------------------+
|[30,50,100,]     |-7.6198783       |33.5942549            |
|[20,140,39,]     |-7.6198783       |33.5942549            |
+-----------------+-----------------+----------------------+

I want to execute the following request.

"SELECT zones.* FROM zones WHERE zones.id IN ("
                            + idZones
                            + ") AND ((zones.geomType='Polygon' AND (ST_WITHIN(ST_GeomFromText(CONCAT('POINT(',"
                            + longitude
                            + ",' ',"
                            + latitude
                            + ",')'),4326),zones.geom))) OR (   (zones.geomType='LineString' OR zones.geomType='Point') AND  ST_Intersects(ST_buffer(zones.geom,(zones.rayon/100000)),ST_GeomFromText(CONCAT('POINT(',"
                            + longitude
                            + ",' ',"
                            + latitude
                            + ",')'),4326)))) "

I'm really stuck. Should I join the two DataFrames, or do something else? I tried joining them on id and idZones, as follows:

// Explode the id array first, then join each exploded id against zones.id.
df_tracking.select(explode(col("idZones")).as("idZones")).join(df_zones, col("idZones").equalTo(df_zones.col("id")));

But it seems to me that a join is not the right approach here.
I need your help.
Thank you.


y53ybaqx1#

You can treat df_tracking.idZones, e.g. [20, 140, 39], as an Array() column and use array_contains() as the join condition; that joins against the whole set of elements and keeps things simple.

val joinDF = df_zones.join(df_tracking, array_contains($"id_Zones",$"id"))

Example code:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object JoinExample extends App {

  val spark = SparkSession.builder()
    .master("local[8]")
    .appName("Example")
    .getOrCreate()

  import spark.implicits._

  val df_zones = Seq(
    (30, "Polygon", "[00 00 00 00 01]", 200),
    (32, "Point", "[00 00 00 00 01]", 320179),
    (39, "Point", "[00 00 00 00 01]", 320179)
  ).toDF("id", "geomType", "geom", "rayon")

  val df_tracking = Seq(
    (Array(30, 50, 100), "-7.6198783", "33.5942549"),
    (Array(20, 140, 39), "-7.6198783", "33.5942549")
  ).toDF("id_Zones", "Longitude", "Latitude")

  df_zones.show()
  df_tracking.show()

  // Keep each (zone, tracking) pair where the zone id occurs in the id_Zones array.
  val joinDF = df_zones.join(df_tracking, array_contains($"id_Zones", $"id"))
  joinDF.show()
}

Output:

+---+--------+----------------+------+
| id|geomType|            geom| rayon|
+---+--------+----------------+------+
| 30| Polygon|[00 00 00 00 01]|   200|
| 32|   Point|[00 00 00 00 01]|320179|
| 39|   Point|[00 00 00 00 01]|320179|
+---+--------+----------------+------+

+-------------+----------+----------+
|     id_Zones| Longitude|  Latitude|
+-------------+----------+----------+
|[30, 50, 100]|-7.6198783|33.5942549|
|[20, 140, 39]|-7.6198783|33.5942549|
+-------------+----------+----------+

+---+--------+----------------+------+-------------+----------+----------+
| id|geomType|            geom| rayon|     id_Zones| Longitude|  Latitude|
+---+--------+----------------+------+-------------+----------+----------+
| 30| Polygon|[00 00 00 00 01]|   200|[30, 50, 100]|-7.6198783|33.5942549|
| 39|   Point|[00 00 00 00 01]|320179|[20, 140, 39]|-7.6198783|33.5942549|
+---+--------+----------------+------+-------------+----------+----------+

edit-1: Continuing from the above, the geometry conditions can be implemented as Spark UDFs. The snippet below gives you a simple idea (both UDFs are stubs that always return 1):

// UDF creation

  // Placeholder for the logic of ST_WITHIN(ST_GeomFromText(CONCAT('POINT(',
  // longitude, ' ', latitude, ')'), 4326), zones.geom)
  val condition1 = (x: String) => 1

  // Placeholder for the logic of ST_Intersects(ST_buffer(zones.geom,
  // zones.rayon / 100000), ST_GeomFromText(CONCAT('POINT(', longitude, ' ', latitude, ')'), 4326))
  val condition2 = (y: String) => 1

  val condition1UDF = udf(condition1)
  val condition2UDF = udf(condition2)

  val joinDF = df_zones.join(df_tracking, array_contains($"id_Zones", $"id"))

  // Both stubs always return 1 here, so the filter reduces to the geomType checks;
  // the real spatial logic would go inside condition1/condition2.
  val finalDF = joinDF
    .withColumn("Condition1DerivedValue", condition1UDF(lit("000")))
    .withColumn("Condition2DerivedValue", condition2UDF(lit("000")))
    .filter(
      (col("geomType") === "Polygon" and col("Condition1DerivedValue") === 1)
        or ((col("geomType") === "LineString" or col("geomType") === "Point")
          and $"Condition2DerivedValue" === 1)
    )
    .select("id", "geomType", "geom", "rayon")

  finalDF.show()

Output:

+---+--------+----------------+------+
| id|geomType|            geom| rayon|
+---+--------+----------------+------+
| 30| Polygon|[00 00 00 00 01]|   200|
| 39|   Point|[00 00 00 00 01]|320179|
+---+--------+----------------+------+
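
If you need the real spatial predicates instead of the stub UDFs, one option (an assumption on my part, not something the original post specifies) is Apache Sedona (formerly GeoSpark), which registers the ST_ functions in Spark SQL so a query very close to the original one can run directly on temp views. A rough sketch, assuming zones.geom holds an actual geometry (e.g. built with ST_GeomFromWKB) rather than the toy strings above:

import org.apache.sedona.sql.utils.SedonaSQLRegistrator

// Makes ST_Within, ST_Buffer, ST_Intersects, ST_GeomFromText, ... available in SQL.
SedonaSQLRegistrator.registerAll(spark)

df_zones.createOrReplaceTempView("zones")
df_tracking.createOrReplaceTempView("tracking")

// array_contains replaces the dynamic "zones.id IN (...)" clause, and the point
// is built from the tracking columns inside the query itself.
val resultDF = spark.sql("""
  SELECT z.*
  FROM zones z JOIN tracking t ON array_contains(t.id_Zones, z.id)
  WHERE (z.geomType = 'Polygon'
         AND ST_Within(ST_GeomFromText(CONCAT('POINT(', t.Longitude, ' ', t.Latitude, ')')), z.geom))
     OR (z.geomType IN ('LineString', 'Point')
         AND ST_Intersects(ST_Buffer(z.geom, z.rayon / 100000),
                           ST_GeomFromText(CONCAT('POINT(', t.Longitude, ' ', t.Latitude, ')'))))
""")
resultDF.show()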
