This article collects code examples for the Java method org.apache.spark.SparkContext.applicationId() and shows how SparkContext.applicationId() is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they should serve as useful references. Details of SparkContext.applicationId():
Package: org.apache.spark
Class: SparkContext
Method: applicationId
Description: none available
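SparkContext.applicationId() returns the unique identifier that the cluster manager assigned to the running application; its format depends on the deployment mode (e.g. application_<clusterTimestamp>_<sequence> on YARN, or local-<timestamp> in local mode). The values below are illustrative samples checked against those patterns, not output from a real cluster:

```java
public class AppIdFormatDemo {
    public static void main(String[] args) {
        // Sample IDs in the two most common formats; in real code these
        // would come from sparkContext.applicationId().
        String yarnStyle = "application_1688000000000_0042";  // YARN cluster manager
        String localStyle = "local-1688000000000";            // local[*] mode

        System.out.println(yarnStyle.matches("application_\\d+_\\d+"));  // true
        System.out.println(localStyle.matches("local-\\d+"));            // true
    }
}
```

Because the ID is opaque and manager-specific, callers should treat it as a plain string (for logging, staging paths, metrics tags) rather than parse meaning out of it.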
Code example source: apache/hive

@Override
public String getAppID() {
  return sparkContext.sc().applicationId();
}
Code example source: apache/drill

@Override
public String getAppID() {
  return sparkContext.sc().applicationId();
}
Code example source: apache/hive

@Override
public String call(JobContext jc) throws Exception {
  return jc.sc().sc().applicationId();
}
Code example source: apache/drill

@Override
public String call(JobContext jc) throws Exception {
  return jc.sc().sc().applicationId();
}
Code example source: cloudera-labs/envelope

public static String getTokenStoreFilePath(Config config, boolean onDriver) throws IOException {
  String tokenFilePrefix;
  if (config.hasPath(TOKENS_FILE)) {
    tokenFilePrefix = config.getString(TOKENS_FILE);
  } else {
    String userName = UserGroupInformation.getCurrentUser().getShortUserName();
    String appId;
    if (onDriver) {
      appId = Contexts.getSparkSession().sparkContext().applicationId();
    } else {
      appId = SparkEnv.get().conf().getAppId();
    }
    tokenFilePrefix = String.format("/user/%s/.sparkStaging/%s/envelope_tokens", userName, appId);
  }
  return tokenFilePrefix;
}
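The envelope example above embeds the application ID into an HDFS staging path. The path construction itself is plain String.format and can be sketched with stdlib only; the user name and application ID below are hypothetical stand-ins for what UserGroupInformation and applicationId() would return:

```java
public class TokenPathDemo {
    public static void main(String[] args) {
        // Hypothetical values; the real code derives these from
        // UserGroupInformation.getCurrentUser() and SparkContext.applicationId().
        String userName = "etl_user";
        String appId = "application_1688000000000_0042";

        // Same format string as the envelope snippet above
        String tokenFilePrefix = String.format(
            "/user/%s/.sparkStaging/%s/envelope_tokens", userName, appId);

        System.out.println(tokenFilePrefix);
        // → /user/etl_user/.sparkStaging/application_1688000000000_0042/envelope_tokens
    }
}
```

Keying the path on the application ID gives each Spark application its own token directory, so concurrent runs do not overwrite each other's delegation tokens.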
Code example source: uber/marmaray

/**
 * Creates the JavaSparkContext if it hasn't been created yet, or returns the existing instance. {@link #addSchema(Schema)}
 * and {@link #addSchemas(Collection)} must not be called once the JavaSparkContext has been created.
 * @return the JavaSparkContext that will be used to execute the JobDags
 */
public JavaSparkContext getOrCreateSparkContext() {
  if (!this.sparkContext.isPresent()) {
    this.sparkContext = Optional.of(new JavaSparkContext(
        SparkUtil.getSparkConf(
            this.appName, Optional.of(this.schemas), this.serializationClasses, this.conf)));
    this.sparkContext.get().sc().addSparkListener(new SparkEventListener());
    // Add the Hadoop configuration to the defaults
    this.sparkContext.get().sc().hadoopConfiguration().addResource(
        new HadoopConfiguration(conf).getHadoopConf());
    this.appId = this.sparkContext.get().sc().applicationId();
  }
  return this.sparkContext.get();
}
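The marmaray example uses a get-or-create pattern: an Optional field caches the JavaSparkContext so construction (and the applicationId() lookup) happens exactly once. The pattern itself can be sketched with the stdlib; buildContext below is a placeholder standing in for the expensive new JavaSparkContext(...) call:

```java
import java.util.Optional;

public class GetOrCreateDemo {
    // Empty until the first getOrCreate() call, then holds the single instance
    private Optional<String> context = Optional.empty();

    // Placeholder for the expensive construction (new JavaSparkContext(...))
    private String buildContext() {
        return new String("context-instance");
    }

    public String getOrCreate() {
        if (!this.context.isPresent()) {
            this.context = Optional.of(buildContext());
        }
        return this.context.get();
    }

    public static void main(String[] args) {
        GetOrCreateDemo demo = new GetOrCreateDemo();
        String first = demo.getOrCreate();
        String second = demo.getOrCreate();
        System.out.println(first == second);  // true: the cached instance is reused
    }
}
```

Note the original is not thread-safe as written; if multiple threads may call getOrCreateSparkContext(), the check-then-create step needs synchronization.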