Usage of the org.apache.spark.SparkContext.defaultParallelism() method, with code examples

This article collects Java code examples for the org.apache.spark.SparkContext.defaultParallelism() method and shows how it is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they carry real reference value. Details of the method are as follows:
Package: org.apache.spark
Class: SparkContext
Method: defaultParallelism

About SparkContext.defaultParallelism

defaultParallelism() returns the default level of parallelism Spark uses when the caller does not supply one (for example, for SparkContext.parallelize). The value is determined by the active scheduler backend and can be overridden with the spark.default.parallelism configuration property; the Beam examples below list the backend-specific defaults.
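
Before the collected excerpts, here is a minimal, self-contained sketch showing how to read the value through a JavaSparkContext. The class name, app name, and master URL are illustrative assumptions, not taken from any of the projects below:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class DefaultParallelismDemo {
  public static void main(String[] args) {
    // local[4] pins the scheduler-derived default parallelism at 4.
    SparkConf conf = new SparkConf()
        .setAppName("DefaultParallelismDemo")
        .setMaster("local[4]");
    // JavaSparkContext implements Closeable, so try-with-resources applies.
    try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
      // jsc.sc() exposes the underlying Scala SparkContext, the same
      // sc.sc().defaultParallelism() pattern used throughout the excerpts.
      int parallelism = jsc.sc().defaultParallelism();
      System.out.println("defaultParallelism = " + parallelism); // prints 4
    }
  }
}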

Code examples

Code example source: origin: apache/hive

@Override
public int getDefaultParallelism() throws Exception {
  // sc is the client's JavaSparkContext; sc.sc() unwraps the Scala SparkContext.
  return sc.sc().defaultParallelism();
}

Code example source: origin: apache/drill

@Override
public int getDefaultParallelism() throws Exception {
  return sc.sc().defaultParallelism();
}

Code example source: origin: apache/hive

@Override
public Integer call(JobContext jc) throws Exception {
  // jc.sc() returns the JavaSparkContext; the second sc() unwraps the Scala SparkContext.
  return jc.sc().sc().defaultParallelism();
}

Code example source: origin: com.facebook.presto.hive/hive-apache

@Override
public int getDefaultParallelism() throws Exception {
  return sc.sc().defaultParallelism();
}

Code example source: origin: com.github.hyukjinkwon/spark-client

@Override
public Integer call(JobContext jc) throws Exception {
  return jc.sc().sc().defaultParallelism();
}

Code example source: origin: org.spark-project.hive/spark-client

@Override
public Integer call(JobContext jc) throws Exception {
  return jc.sc().sc().defaultParallelism();
}

Code example source: origin: org.apache.beam/beam-runners-spark

public Bounded(
    SparkContext sc,
    BoundedSource<T> source,
    SerializablePipelineOptions options,
    String stepName) {
  super(sc, NIL, JavaSparkContext$.MODULE$.fakeClassTag());
  this.source = source;
  this.options = options;
  // The input parallelism is determined by Spark's scheduler backend.
  // When running on YARN/SparkDeploy it's the result of max(totalCores, 2).
  // When running on Mesos it's 8.
  // When running local it's the total number of cores (local = 1, local[N] = N,
  // local[*] = estimation of the machine's cores).
  // ** The configuration "spark.default.parallelism" takes precedence over all of the above. **
  this.numPartitions = sc.defaultParallelism();
  checkArgument(this.numPartitions > 0, "Number of partitions must be greater than zero.");
  this.bundleSize = options.get().as(SparkPipelineOptions.class).getBundleSize();
  this.stepName = stepName;
  this.metricsAccum = MetricsAccumulator.getInstance();
}
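
The precedence note in the comment above is easy to verify. In this hedged sketch (class and app names are illustrative, not from the Beam code), local[8] would normally yield a default of 8, but the explicit spark.default.parallelism setting wins:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ParallelismPrecedenceDemo {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setAppName("ParallelismPrecedenceDemo")
        .setMaster("local[8]")                    // scheduler default would be 8
        .set("spark.default.parallelism", "16");  // but the explicit setting wins
    try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
      System.out.println(jsc.sc().defaultParallelism()); // prints 16
    }
  }
}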

Code example source: origin: org.apache.beam/beam-runners-spark

SourceDStream(
    StreamingContext ssc,
    UnboundedSource<T, CheckpointMarkT> unboundedSource,
    SerializablePipelineOptions options,
    Long boundMaxRecords) {
  super(ssc, JavaSparkContext$.MODULE$.fakeClassTag());
  this.unboundedSource = unboundedSource;
  this.options = options;
  SparkPipelineOptions sparkOptions = options.get().as(SparkPipelineOptions.class);
  // Reader cache expiration interval: 50% of the batch interval is added to accommodate latency.
  this.readerCacheInterval = 1.5 * sparkOptions.getBatchIntervalMillis();
  this.boundReadDuration =
      boundReadDuration(
          sparkOptions.getReadTimePercentage(), sparkOptions.getMinReadTimeMillis());
  // Set the initial parallelism once.
  this.initialParallelism = ssc().sparkContext().defaultParallelism();
  checkArgument(this.initialParallelism > 0, "Number of partitions must be greater than zero.");
  this.boundMaxRecords = boundMaxRecords;
  try {
    this.numPartitions = createMicrobatchSource().split(sparkOptions).size();
  } catch (Exception e) {
    throw new RuntimeException(e);
  }
}
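
In the streaming case, the same value can be read off the StreamingContext, mirroring the ssc().sparkContext().defaultParallelism() chain above. A minimal sketch; the class name, app name, master URL, and batch interval are illustrative assumptions:

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingParallelismDemo {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setAppName("StreamingParallelismDemo")
        .setMaster("local[2]");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));
    // jssc.ssc() is the Scala StreamingContext; sparkContext() reaches the
    // underlying SparkContext, just as SourceDStream does when it fixes its
    // initial parallelism once at construction time.
    int initialParallelism = jssc.ssc().sparkContext().defaultParallelism();
    System.out.println("initialParallelism = " + initialParallelism); // 2 for local[2]
    jssc.stop();
  }
}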
