Usage and code examples for org.apache.spark.util.Utils.memoryStringToMb()

x33g5p2x · Reposted 2022-02-01 in: Other

This article collects code examples of the Java method org.apache.spark.util.Utils.memoryStringToMb() and shows how it is used in practice. The examples are drawn from selected open-source projects hosted on GitHub/Stack Overflow/Maven, so they should serve as useful references. Details of the method:

Package: org.apache.spark.util
Class: Utils
Method: memoryStringToMb

About Utils.memoryStringToMb

Converts a JVM-style memory size string such as "512m" or "1g" into the corresponding number of mebibytes, returned as an int. A string with no unit suffix is interpreted as a byte count.
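To make the conversion semantics concrete before the real-world examples, here is a simplified, self-contained re-implementation sketch. It is hypothetical and for illustration only; the real method lives in org.apache.spark.util.Utils and handles more edge cases.

```java
// Simplified sketch of memoryStringToMb's behavior (hypothetical re-implementation).
public class MemoryString {

    // Converts a memory string such as "512m", "1g" or "2t" to mebibytes.
    // A suffix-less string is treated as a plain byte count.
    public static int toMb(String str) {
        String s = str.toLowerCase().trim();
        long bytes;
        if (s.endsWith("k")) {
            bytes = Long.parseLong(s.substring(0, s.length() - 1)) * 1024L;
        } else if (s.endsWith("m")) {
            bytes = Long.parseLong(s.substring(0, s.length() - 1)) * 1024L * 1024L;
        } else if (s.endsWith("g")) {
            bytes = Long.parseLong(s.substring(0, s.length() - 1)) * 1024L * 1024L * 1024L;
        } else if (s.endsWith("t")) {
            bytes = Long.parseLong(s.substring(0, s.length() - 1)) * 1024L * 1024L * 1024L * 1024L;
        } else {
            bytes = Long.parseLong(s); // no suffix: raw byte count
        }
        return (int) (bytes / (1024L * 1024L));
    }

    public static void main(String[] args) {
        System.out.println(toMb("512m")); // 512
        System.out.println(toMb("1g"));   // 1024
    }
}
```

This mirrors the typical call pattern in the examples below, where a configuration value like "spark.executor.memory" is parsed into an integer MB count for sizing arithmetic.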

Code examples

Example source: apache/hive

// The early return is the enclosing method's error path (context elided).
return new ObjectPair<Long, Integer>(-1L, -1);
// Parse "spark.executor.memory" (default "512m") into megabytes:
int executorMemoryInMB = Utils.memoryStringToMb(
    sparkConf.get("spark.executor.memory", "512m"));
// Share of executor memory not reserved for storage:
double memoryFraction = 1.0 - sparkConf.getDouble("spark.storage.memoryFraction", 0.6);
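To put numbers on the fragment above: with the defaults shown ("512m" executor memory and a 0.6 storage fraction), the non-storage share works out as follows. This is a hypothetical standalone sketch with the defaults hard-coded; the class name is illustrative and there is no Spark dependency.

```java
// Standalone illustration of the sizing arithmetic in the apache/hive fragment.
public class SizingDemo {
    public static void main(String[] args) {
        int executorMemoryInMB = 512;                   // memoryStringToMb("512m")
        double storageFraction = 0.6;                   // spark.storage.memoryFraction default
        double memoryFraction = 1.0 - storageFraction;  // share NOT reserved for storage
        System.out.println(executorMemoryInMB * memoryFraction); // about 204.8 MB
    }
}
```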

Example source: apache/drill

// Same memory-sizing fragment as in apache/hive; the early return is the
// enclosing method's error path (context elided).
return new ObjectPair<Long, Integer>(-1L, -1);
int executorMemoryInMB = Utils.memoryStringToMb(
  sparkConf.get("spark.executor.memory", "512m"));
double memoryFraction = 1.0 - sparkConf.getDouble("spark.storage.memoryFraction", 0.6);

Example source: com.uber.hoodie/hoodie-client

// Expression fragment: executor memory in MB, scaled by 1024. The property-name
// constants are defined elsewhere in hoodie-client (context elided).
Utils.memoryStringToMb(SparkEnv.get().conf().get(SPARK_EXECUTOR_MEMORY_PROP,
  DEFAULT_SPARK_EXECUTOR_MEMORY_MB)) * 1024

Example source: uber/hudi

// Note: memoryStringToMb returns MB, so MB * 1024 yields kilobytes rather than
// the bytes the variable name suggests; the naming is as in the upstream code.
long executorMemoryInBytes = Utils.memoryStringToMb(SparkEnv.get().conf().get(SPARK_EXECUTOR_MEMORY_PROP,
    DEFAULT_SPARK_EXECUTOR_MEMORY_MB)) * 1024;

Example source: org.wso2.carbon.analytics/org.wso2.carbon.analytics.spark.core

/**
 * Starts a worker with the given parameters. It reads the Spark defaults from
 * the given properties file and overrides parameters accordingly. It also adds
 * the port offset to all port configurations.
 */
public synchronized void startWorker() {
  if (!this.workerActive) {
    String workerHost = this.myHost;
    int workerPort = this.sparkConf.getInt(AnalyticsConstants.SPARK_WORKER_PORT, 10000 + this.portOffset);
    int workerUiPort = this.sparkConf.getInt(AnalyticsConstants.SPARK_WORKER_WEBUI_PORT, 10500 + this.portOffset);
    int workerCores = this.sparkConf.getInt(AnalyticsConstants.SPARK_WORKER_CORES, 1);
    String workerMemory = getStringFromSparkConf(AnalyticsConstants.SPARK_WORKER_MEMORY, "1g");
    String[] masters = this.getSparkMastersFromCluster();
    String workerDir = getStringFromSparkConf(AnalyticsConstants.SPARK_WORKER_DIR, "work");
    Worker.startRpcEnvAndEndpoint(workerHost, workerPort, workerUiPort, workerCores,
        Utils.memoryStringToMb(workerMemory), masters, workerDir,
        Option.empty(), this.sparkConf);
    log.info("[Spark init - worker] Started SPARK WORKER in " + workerHost + ":" + workerPort + " with webUI port "
        + workerUiPort + " with Masters " + Arrays.toString(masters));
    this.workerActive = true;
  } else {
    logDebug("Worker is already active in this node, therefore ignoring worker startup");
  }
}

Example source: com.facebook.presto.hive/hive-apache

// Same memory-sizing fragment as in apache/hive; the early return is the
// enclosing method's error path (context elided).
return new ObjectPair<Long, Integer>(-1L, -1);
int executorMemoryInMB = Utils.memoryStringToMb(
  sparkConf.get("spark.executor.memory", "512m"));
double memoryFraction = 1.0 - sparkConf.getDouble("spark.storage.memoryFraction", 0.6);
