Usage of the org.apache.spark.sql.DataFrame.sqlContext() method, with code examples


This article collects Java code examples for the org.apache.spark.sql.DataFrame.sqlContext() method and shows how it is used in practice. The examples were extracted from selected open-source projects hosted on GitHub, Stack Overflow, Maven, and similar platforms, so they reflect real-world usage and should serve as a useful reference. Details of the method:

Package: org.apache.spark.sql
Class: DataFrame
Method: sqlContext

About DataFrame.sqlContext

sqlContext() returns the SQLContext that the DataFrame is associated with. It lets code that only holds a DataFrame run SQL queries or build further DataFrames without being handed the context separately. (DataFrame is the Spark 1.x API; in Spark 2.x it was merged into Dataset<Row>.)
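Before the project examples, here is a minimal self-contained sketch of the method, assuming a Spark 1.x classpath; the people.json input path and the people table name are placeholders:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class SqlContextDemo {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("sqlContext-demo").setMaster("local[*]");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(jsc);

        // people.json is a placeholder input path.
        DataFrame df = sqlContext.read().json("people.json");
        df.registerTempTable("people");

        // sqlContext() hands back the SQLContext that created the DataFrame,
        // so SQL can be run without threading the context through every method.
        DataFrame adults = df.sqlContext().sql("SELECT name FROM people WHERE age >= 18");
        adults.show();

        jsc.stop();
    }
}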

Code examples

Code example source: Impetus/Kundera

@Override
public void saveDataFrame(DataFrame dataFrame, Class<?> entityClazz, Map<String, Object> properties)
{
    // Switch to the target keyspace through the DataFrame's own SQLContext,
    // then append the rows into the configured table.
    dataFrame.sqlContext().sql("use " + (String) properties.get(KEYSPACE));
    dataFrame.write().insertInto((String) properties.get(TABLE));
}
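For context, a hypothetical call site for the method above: KEYSPACE and TABLE are the property keys the snippet reads, while the keyspace and table names, the client reference, and MyEntity are placeholders.

Map<String, Object> properties = new HashMap<>();
properties.put(KEYSPACE, "my_keyspace");  // placeholder keyspace name
properties.put(TABLE, "my_table");        // placeholder table name
client.saveDataFrame(dataFrame, MyEntity.class, properties);  // hypothetical caller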

Code example source: flipkart-incubator/spark-transformers

private void preConditions(DataFrame df) {
    if (null != df) {
        // StringUtils.startsWith (Apache Commons Lang) is null-safe; reject any
        // Spark runtime whose version does not match the supported prefix.
        if (!StringUtils.startsWith(df.sqlContext().sparkContext().version(), Constants.SUPPORTED_SPARK_VERSION_PREFIX)) {
            throw new UnsupportedOperationException("Only spark version " + Constants.SUPPORTED_SPARK_VERSION_PREFIX
                    + " is supported by this version of the library");
        }
    }
}
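The Constants class is not shown in the snippet; a plausible sketch is below. The "1.6" prefix is an assumption for illustration; the actual value in spark-transformers may differ.

// Hypothetical sketch of the Constants class referenced above.
public final class Constants {
    // Assumed value; the library pins whichever Spark 1.x line it was built against.
    public static final String SUPPORTED_SPARK_VERSION_PREFIX = "1.6";

    private Constants() { }
}

Because StringUtils.startsWith is null-safe, a null version string simply fails the check rather than throwing a NullPointerException.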

Code example source: phuonglh/vn.vitk

@Override
public DataFrame transform(DataFrame dataset) {
    // Decode every row, then rebuild a DataFrame with an explicit two-column
    // schema using the SQLContext that produced the input dataset.
    JavaRDD<Row> output = dataset.javaRDD().map(new DecodeFunction());
    StructType schema = new StructType(new StructField[]{
        new StructField("sentence", DataTypes.StringType, false, Metadata.empty()),
        new StructField("prediction", DataTypes.StringType, false, Metadata.empty())
    });
    return dataset.sqlContext().createDataFrame(output, schema);
}
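DecodeFunction is project code that is not shown here; based on the schema above, it must map each input Row to a pair of non-null strings (sentence, prediction). A hypothetical shape, with the real vn.vitk decoding logic replaced by a stub:

import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;

// Hypothetical sketch; the actual decoding logic is omitted.
class DecodeFunction implements Function<Row, Row> {
    @Override
    public Row call(Row input) {
        String sentence = input.getString(0);
        String prediction = decode(sentence);  // placeholder for the real decoder
        return RowFactory.create(sentence, prediction);
    }

    private String decode(String sentence) {
        return sentence;  // stub: echo the input
    }
}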

Code example source: phuonglh/vn.vitk

// Build a DataFrame of labeled contexts from the input dataset, register it
// as a temporary table, and query it through the same SQLContext.
ContextExtractor contextExtractor = new ContextExtractor(order, Constants.REGEXP_FILE);
JavaRDD<LabeledContext> contexts = contextExtractor.extract(dataset.javaRDD());
DataFrame df = dataset.sqlContext().createDataFrame(contexts, LabeledContext.class);
df.registerTempTable("dft");
Row row = df.sqlContext().sql("SELECT MAX(label) AS maxValue FROM dft").first();
this.numLabels = (int) row.getDouble(0) + 1;
JavaRDD<Row> rows = df.sqlContext().sql("SELECT label, features FROM dft").toJavaRDD();
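createDataFrame(JavaRDD, Class) infers the schema from JavaBean properties, so LabeledContext must expose label and features as bean getters and setters. A hypothetical sketch consistent with the queries above (the real vn.vitk class may carry more fields, and the features type is an assumption):

// Hypothetical bean; label is a double because the query reads it with getDouble(0).
public class LabeledContext implements java.io.Serializable {
    private double label;
    private String features;

    public double getLabel() { return label; }
    public void setLabel(double label) { this.label = label; }
    public String getFeatures() { return features; }
    public void setFeatures(String features) { this.features = features; }
}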

Code example source: org.wso2.carbon.analytics/org.wso2.carbon.analytics.spark.core

private void writeDataFrameToDAL(DataFrame data) {
    if (this.preserveOrder) {
        logDebug("Inserting data with order preserved! Each partition will be written using separate jobs.");
        // Run one Spark job per partition (getNumberSeq(i, i + 1) selects just
        // partition i), so partitions are written strictly in order.
        for (int i = 0; i < data.rdd().partitions().length; i++) {
            data.sqlContext().sparkContext().runJob(data.rdd(),
                    new AnalyticsWritingFunction(this.tenantId, this.tableName, data.schema(),
                            this.globalTenantAccess, this.schemaString, this.primaryKeys, this.mergeFlag,
                            this.recordStore, this.recordBatchSize),
                    CarbonScalaUtils.getNumberSeq(i, i + 1), false, ClassTag$.MODULE$.Unit());
        }
    } else {
        // Order does not matter: write all partitions concurrently in a single job.
        data.foreachPartition(new AnalyticsWritingFunction(this.tenantId, this.tableName, data.schema(),
                this.globalTenantAccess, this.schemaString, this.primaryKeys, this.mergeFlag,
                this.recordStore, this.recordBatchSize));
    }
}
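The order-preserving branch works because each runJob call is restricted to a single partition, so partitions are flushed one after another instead of concurrently. As a much-simplified, hypothetical stand-in for AnalyticsWritingFunction, a batching partition writer for the concurrent case could look like the sketch below; the batch size and the store calls are placeholders, and it would be driven via data.javaRDD().foreachPartition(new RowBatchWriter(1000)).

import java.util.Iterator;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.sql.Row;

// Hypothetical, simplified partition writer.
class RowBatchWriter implements VoidFunction<Iterator<Row>> {
    private final int batchSize;

    RowBatchWriter(int batchSize) {
        this.batchSize = batchSize;
    }

    @Override
    public void call(Iterator<Row> rows) {
        int buffered = 0;
        while (rows.hasNext()) {
            Row row = rows.next();
            // Stage the row for writing (placeholder for the real store call).
            if (++buffered == batchSize) {
                // Flush the batch to the record store (placeholder).
                buffered = 0;
            }
        }
        // Flush any remaining buffered rows (placeholder).
    }
}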
