Usage of org.apache.spark.SparkContext.applicationId() with code examples

x33g5p2x · reposted 2022-01-30 · category: Other

This article collects Java code examples of the org.apache.spark.SparkContext.applicationId() method and shows how it is used in practice. The examples are drawn from selected open-source projects hosted on GitHub/Stack Overflow/Maven and should serve as useful references. Details of SparkContext.applicationId():
Package: org.apache.spark
Class: SparkContext
Method: applicationId

About SparkContext.applicationId

Returns a unique identifier for the Spark application. Its format depends on the scheduler implementation: in local mode it looks like "local-1433865536131", while on YARN it looks like "application_1433865536131_34483".
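A common use of the application ID, as several of the examples below suggest, is tagging per-application resources such as metric names or file paths. A minimal sketch of that pattern; the ID string here is a hypothetical placeholder standing in for a live sparkContext.applicationId() call:

```java
public class AppIdTagExample {
    // Build a per-application metric prefix from an application ID.
    // Dashes are replaced so the result is safe for dotted metric names.
    static String metricPrefix(String appId) {
        return "spark.jobs." + appId.replace('-', '_');
    }

    public static void main(String[] args) {
        // Placeholder in the style of a local-mode application ID;
        // in a real job this would come from sparkContext.applicationId().
        String appId = "local-1433865536131";
        System.out.println(metricPrefix(appId));
    }
}
```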

Code examples

Example source: apache/hive

@Override
public String getAppID() {
 return sparkContext.sc().applicationId();
}

Example source: apache/drill

@Override
public String getAppID() {
 return sparkContext.sc().applicationId();
}

Example source: apache/hive

 @Override
 public String call(JobContext jc) throws Exception {
  return jc.sc().sc().applicationId();
 }
}

Example source: apache/drill

 @Override
 public String call(JobContext jc) throws Exception {
  return jc.sc().sc().applicationId();
 }
}

Example source: cloudera-labs/envelope

public static String getTokenStoreFilePath(Config config, boolean onDriver) throws IOException {
 String tokenFilePrefix;
 if (config.hasPath(TOKENS_FILE)) {
  tokenFilePrefix = config.getString(TOKENS_FILE);
 } else {
  String userName = UserGroupInformation.getCurrentUser().getShortUserName();
  String appId;
  if (onDriver) {
   appId = Contexts.getSparkSession().sparkContext().applicationId();
  } else {
   appId = SparkEnv.get().conf().getAppId();
  }
  tokenFilePrefix = String.format("/user/%s/.sparkStaging/%s/envelope_tokens", userName, appId);
 }
 return tokenFilePrefix;
}
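The interesting part of the envelope example is the fallback path: on the driver it reads the ID from the SparkContext, on executors from the SparkConf, and either way embeds it in a per-user, per-application staging path. The path construction itself can be sketched without Spark on the classpath; the user name and application ID below are illustrative values:

```java
public class TokenPathExample {
    // Mirrors the String.format call above: a staging location scoped
    // to both the current user and the current Spark application.
    static String tokenStorePath(String userName, String appId) {
        return String.format("/user/%s/.sparkStaging/%s/envelope_tokens", userName, appId);
    }

    public static void main(String[] args) {
        // Sample values; in the real code these come from
        // UserGroupInformation and applicationId()/getAppId().
        System.out.println(tokenStorePath("alice", "application_1433865536131_34483"));
    }
}
```

Because the application ID is unique per run, tokens from different jobs (or different users) can never collide under this scheme.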

Example source: uber/marmaray

/**
 * Creates the JavaSparkContext if it hasn't been created yet, or returns the existing instance. {@link #addSchema(Schema)} and
 * {@link #addSchemas(Collection)} must not be called once the JavaSparkContext has been created.
 * @return the JavaSparkContext that will be used to execute the JobDags
 */
public JavaSparkContext getOrCreateSparkContext() {
  if (!this.sparkContext.isPresent()) {
    this.sparkContext = Optional.of(new JavaSparkContext(
        SparkUtil.getSparkConf(
          this.appName, Optional.of(this.schemas), this.serializationClasses, this.conf)));
    this.sparkContext.get().sc().addSparkListener(new SparkEventListener());
    // Adding hadoop configuration to default
    this.sparkContext.get().sc().hadoopConfiguration().addResource(
      new HadoopConfiguration(conf).getHadoopConf());
    this.appId = this.sparkContext.get().sc().applicationId();
  }
  return this.sparkContext.get();
}
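The marmaray example wraps context creation in an Optional-based get-or-create (memoization) pattern and captures applicationId() once at creation time. The same shape can be shown with a plain object in place of JavaSparkContext, so this sketch runs without Spark on the classpath:

```java
import java.util.Optional;

public class LazyContextExample {
    private Optional<StringBuilder> context = Optional.empty();
    private String appId;

    // Get-or-create: build the object on first call, cache it,
    // and record its ID; later calls return the cached instance.
    public StringBuilder getOrCreate() {
        if (!this.context.isPresent()) {
            this.context = Optional.of(new StringBuilder("ctx"));
            this.appId = "app-" + System.nanoTime(); // stand-in for applicationId()
        }
        return this.context.get();
    }

    public String getAppId() {
        return this.appId;
    }

    public static void main(String[] args) {
        LazyContextExample holder = new LazyContextExample();
        // Repeated calls return the exact same instance.
        System.out.println(holder.getOrCreate() == holder.getOrCreate());
    }
}
```

Caching the ID this way matters in the real code because listeners and Hadoop configuration are attached exactly once, at the same point the context is created.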
