Usage and code examples for the org.apache.spark.api.java.JavaSparkContext.defaultParallelism() method


This article collects Java code examples for the org.apache.spark.api.java.JavaSparkContext.defaultParallelism() method, showing how it is used in practice. The examples were extracted from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as a useful reference. Details of the method:
Package: org.apache.spark.api.java.JavaSparkContext
Class: JavaSparkContext
Method: defaultParallelism

About JavaSparkContext.defaultParallelism

Returns the default level of parallelism to use when one is not given by the user, e.g. as the fallback partition count for parallelize().
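As a quick illustration before the project excerpts below, here is a minimal standalone sketch that simply prints the value; the class name DefaultParallelismDemo and the local[4] master are our own choices for the example, not taken from any of the source projects:

import org.apache.spark.api.java.JavaSparkContext;

public class DefaultParallelismDemo {
    public static void main(String[] args) {
        // local[4] runs Spark with 4 worker threads; in this mode
        // defaultParallelism() is 4 unless spark.default.parallelism is set.
        JavaSparkContext sc = new JavaSparkContext("local[4]", "DefaultParallelismDemo");
        try {
            // This is the partition count parallelize() falls back to
            // when no explicit numSlices argument is given.
            System.out.println("defaultParallelism = " + sc.defaultParallelism());
        } finally {
            sc.stop();
        }
    }
}

In local[N] mode the value is N unless spark.default.parallelism overrides it; on a cluster it is derived from the total number of cores available to the scheduler.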

Code examples

Code example source: origin: deeplearning4j/dl4j-examples

private void writeConfig(JavaSparkContext sc) throws Exception {
  long time = System.currentTimeMillis();
  StringBuilder sb = new StringBuilder();
  sb.append("Output Path: ").append(outputPath).append("\n")
      .append("Time: ").append(time).append("\n")
      .append("numEpoch: ").append(numEpochs).append("\n")
      .append("minibatch: ").append(minibatch).append("\n")
      .append("numNodes: ").append(numNodes).append("\n")
      .append("numWorkpersPerNode: ").append(numWorkersPerNode).append("\n")
      .append("Listener Frequency: ").append(listenerFrequency).append("\n")
      .append("Azure Storage Account: ").append(azureStorageAcct).append("\n")
      .append("Gradient threshold: ").append(gradientThreshold).append("\n")
      .append("Controller: ").append(masterIP).append("\n")
      .append("Port: ").append(port).append("\n")
      .append("Network Mask: ").append(networkMask).append("\n")
      .append("Word vectors path: ").append(wordVectorsPath).append("\n")
      .append("Continue training: ").append(continueTraining).append("\n")
      .append("saveFreqSec: ").append(saveFreqSec).append("\n")
      .append("\n");
  sb.append("\n\n")
      .append("Spark Default Parallelism: ").append(sc.defaultParallelism()).append("\n");
  String str = sb.toString();
  log.info(str);
  String path = FilenameUtils.concat(outputPath, "experimentConfig.txt");
  log.info("Writing experiment config and info to file: {}", path);
  SparkUtils.writeStringToFile(path, str, sc);
}

Code example source: origin: nielsutrecht/spark-of-life

public static void main(String... argv) {
    System.out.println("Solving Pi");

    JavaSparkContext sc = new JavaSparkContext("local[4]", "PiSample");

    int slices = sc.defaultParallelism();
    int n = 100000 * slices;
    List<Integer> l = new ArrayList<>(n);
    for (int i = 0; i < n; i++) {
        l.add(i);
    }

    JavaRDD<Integer> dataSet = sc.parallelize(l, slices);

    // Monte Carlo estimate: the fraction of random points in the unit
    // square that fall inside the unit circle approaches pi / 4.
    int count = dataSet.map(i -> {
        double x = Math.random() * 2 - 1;
        double y = Math.random() * 2 - 1;
        return (x * x + y * y < 1) ? 1 : 0;
    }).reduce((i1, i2) -> i1 + i2);

    double pi = 4.0 * (double) count / (double) n;
    System.out.println("Pi is roughly " + pi);
    sc.stop();
}
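Note how the input size (100000 * slices) is derived from defaultParallelism(), so each of the slices partitions receives roughly the same number of samples no matter how many cores the context actually has.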

Code example source: origin: nielsutrecht/spark-of-life

LOG.info("Solving for {} cities in {} iterations ({} ms max iteration duration)", cities.size(), iterations, maxDuration);
LOG.info("{}", cities.stream().map(City::getName).sorted().collect(Collectors.toList()));
LOG.info("Parallelism: {}", sc.defaultParallelism());
  JavaRDD<Work> dataSet = sc.parallelize(work.fork(sc.defaultParallelism()));
