Usage of the org.apache.spark.sql.DataFrameWriter.json() method, with code examples


This article collects Java code examples for the org.apache.spark.sql.DataFrameWriter.json() method and shows how DataFrameWriter.json() is used in practice. The examples come from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they carry real-world reference value. Details of DataFrameWriter.json() are as follows:
Package: org.apache.spark.sql
Class: DataFrameWriter
Method: json

About DataFrameWriter.json

DataFrameWriter.json(path) saves the content of the Dataset as JSON files (one JSON object per line, also known as JSON Lines or newline-delimited JSON) at the given path. It is shorthand for format("json").save(path), and writer settings such as mode(...) can be chained before calling it.
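
A minimal, self-contained sketch of the typical write path is shown below. It is illustrative only: the local master, the input file /tmp/people.jsonl, and the output directory /tmp/people-json are assumptions, not taken from any of the projects in the examples that follow.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class JsonWriteSketch {
 public static void main(String[] args) {
  // Local session for illustration only.
  SparkSession spark = SparkSession.builder()
    .appName("json-write-sketch")
    .master("local[*]")
    .getOrCreate();

  // Assumed input: a JSON Lines file; the schema is inferred on read.
  Dataset<Row> df = spark.read().json("/tmp/people.jsonl");

  // DataFrameWriter.json(path) writes one JSON object per line.
  // Equivalent to df.write().format("json").save("/tmp/people-json").
  df.write()
    .mode(SaveMode.Overwrite) // replace the target directory if it already exists
    .json("/tmp/people-json");

  spark.stop();
 }
}

As the collected examples show, mode(...), coalesce(...), and repartition(...) are commonly chained before json(path) to control overwrite behavior and the number of output files.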

Code Examples

Code example source: org.apache.spark/spark-sql_2.10

@Test
public void testJsonAPI() {
 // DataFrameReader.json() accepts no path, a single path, varargs paths, or a String[]
 spark.read().schema(schema).json();
 spark.read().schema(schema).json(input);
 spark.read().schema(schema).json(input, input, input);
 // Read two inputs, then write the result back out with DataFrameWriter.json(path)
 spark.read().schema(schema).json(new String[]{input, input})
   .write().json(output);
}

Code example source: org.apache.spark/spark-sql and org.apache.spark/spark-sql_2.11 (the same testJsonAPI example as above)

Code example source: uber/uberscriptquery

@Override
public Object execute(SparkSession sparkSession, ActionStatement actionStatement, CredentialProvider credentialManager) {
  String filePath = actionStatement.getParamValues().get(0).getValue().toString();
  String saveModeStr = actionStatement.getParamValues().get(1).getValue().toString();
  String dfTableName = actionStatement.getParamValues().get(2).getValue().toString();

  SaveMode saveMode = SaveMode.valueOf(saveModeStr);

  String sql = String.format("select * from %s", dfTableName);
  logger.info(String.format("Running sql [%s] to get data and then save it", sql));
  Dataset<Row> df = sparkSession.sql(sql);

  logger.info(String.format("Saving to json %s, saveMode: %s", filePath, saveMode));
  // coalesce(1) produces a single output file; json(path) saves it as JSON Lines
  df.coalesce(1).write().mode(saveMode).json(filePath);
  logger.info(String.format("Saved to json %s, saveMode: %s", filePath, saveMode));
  return null;
}

Code example source: cloudera-labs/envelope

case JSON_FORMAT:
 LOG.debug("Writing JSON: {}", path);
 writer.json(path);
 break;
default:

Code example source: phuonglh/vn.vitk

switch (outputFormat) {
case JSON:
  output.write().json(outputFileName);
  break;
case PARQUET:

Code example source: uber/hudi

Dataset<Row> inputDF1,
        Dataset<Row> inputDF2) throws Exception {
inputDF1.write().mode(SaveMode.Append).json(streamingSourcePath);
logger.info("First commit at instant time :" + commitInstantTime1);
inputDF2.write().mode(SaveMode.Append).json(streamingSourcePath);

Code example source: phuonglh/vn.vitk

df.select("dependency").write().text(outputFileName);
else 
  df.repartition(1).write().json(outputFileName);

Code example source: jgperrin/net.jgp.labs.spark

libraryDf.printSchema();
libraryDf.write().json("data/library.json");
