org.apache.spark.executor.TaskMetrics.incMemoryBytesSpilled(): usage and code examples


This article collects a number of Java code examples of the org.apache.spark.executor.TaskMetrics.incMemoryBytesSpilled() method and shows how it is used in practice. The examples are drawn from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and are intended as a practical reference. Details of TaskMetrics.incMemoryBytesSpilled() are as follows:
Package: org.apache.spark.executor
Class: TaskMetrics
Method: incMemoryBytesSpilled

About TaskMetrics.incMemoryBytesSpilled

The upstream documentation provides no description. In the Spark source, incMemoryBytesSpilled(v) adds v to the task's memoryBytesSpilled metric, i.e. the cumulative number of in-memory bytes released when the task spills data to disk. The method is intended for Spark-internal code and is usually paired with incDiskBytesSpilled, which records the bytes actually written to disk.
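
Before looking at the real examples, here is a minimal sketch of the calling pattern they all share. The class SpillMetricsSketch and the method recordSpill are invented for illustration and are not part of Spark; note that incMemoryBytesSpilled and incDiskBytesSpilled are package-private to Spark (private[spark]), so this pattern only applies to code that is built as part of Spark itself, which is exactly where the examples below come from.

import org.apache.spark.TaskContext;

// Hypothetical helper, for illustration only; not part of Spark.
final class SpillMetricsSketch {

  /**
   * Record that a spill released freedMemoryBytes of in-memory data and
   * wrote diskBytesWritten bytes to disk, mirroring what Spark's own
   * spill code does in the examples below.
   */
  static void recordSpill(long freedMemoryBytes, long diskBytesWritten) {
    TaskContext ctx = TaskContext.get();  // context of the currently running task
    if (ctx == null) {
      return;                             // not running inside a Spark task
    }
    ctx.taskMetrics().incMemoryBytesSpilled(freedMemoryBytes);
    ctx.taskMetrics().incDiskBytesSpilled(diskBytesWritten);
  }
}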

Code examples

Code example source: org.apache.spark/spark-core (the identical code also appears in the spark-core_2.10 and spark-core_2.11 artifacts)

/**
 * Sort and spill the current records in response to memory pressure.
 */
@Override
public long spill(long size, MemoryConsumer trigger) throws IOException {
 if (trigger != this || inMemSorter == null || inMemSorter.numRecords() == 0) {
  return 0L;
 }
 logger.info("Thread {} spilling sort data of {} to disk ({} {} so far)",
  Thread.currentThread().getId(),
  Utils.bytesToString(getMemoryUsage()),
  spills.size(),
  spills.size() > 1 ? " times" : " time");
 writeSortedFile(false);
 final long spillSize = freeMemory();
 inMemSorter.reset();
 // Reset the in-memory sorter's pointer array only after freeing up the memory pages holding the
 // records. Otherwise, if the task is over allocated memory, then without freeing the memory
 // pages, we might not be able to get memory for the pointer array.
 taskContext.taskMetrics().incMemoryBytesSpilled(spillSize);
 return spillSize;
}
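
The value handed to incMemoryBytesSpilled in this spill() implementation is spillSize, the number of in-memory bytes released by freeMemory(), not the number of bytes written to disk. The shorter excerpts further down show the complementary call, incDiskBytesSpilled(writeMetrics.bytesWritten()), which records the on-disk size of the same spill; the two numbers normally differ because the spilled records are serialized (and possibly compressed) before they reach disk.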

Code example source: org.apache.spark/spark-core_2.11 (the identical snippet also appears in the spark-core artifact)

taskContext.taskMetrics().incMemoryBytesSpilled(spillSize);
taskContext.taskMetrics().incDiskBytesSpilled(writeMetrics.bytesWritten());
totalSpillBytes += spillSize;

Code example source: org.apache.spark/spark-core_2.10

taskContext.taskMetrics().incMemoryBytesSpilled(spillSize);
totalSpillBytes += spillSize;
return spillSize;

Code example source: org.apache.spark/spark-core_2.11 (the identical snippet also appears in the spark-core artifact)

inMemSorter.free();
inMemSorter = null;
taskContext.taskMetrics().incMemoryBytesSpilled(released);
taskContext.taskMetrics().incDiskBytesSpilled(writeMetrics.bytesWritten());
totalSpillBytes += released;

Code example source: org.apache.spark/spark-core_2.10

inMemSorter.free();
inMemSorter = null;
taskContext.taskMetrics().incMemoryBytesSpilled(released);
totalSpillBytes += released;
return released;
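
Once incremented, these values surface as the task-level memoryBytesSpilled and diskBytesSpilled metrics, which are aggregated in the Spark UI and exposed through the listener API. The sketch below (an illustration added here, not taken from the article's sources) reads them back from a SparkListener; register it with sparkContext.addSparkListener(new SpillLoggingListener()).

import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.SparkListenerTaskEnd;

// Illustrative listener that logs how much each finished task spilled.
final class SpillLoggingListener extends SparkListener {
  @Override
  public void onTaskEnd(SparkListenerTaskEnd taskEnd) {
    if (taskEnd.taskMetrics() == null) {
      return; // metrics may be absent, e.g. for some failed tasks
    }
    long memorySpilled = taskEnd.taskMetrics().memoryBytesSpilled();
    long diskSpilled = taskEnd.taskMetrics().diskBytesSpilled();
    System.out.println("Task " + taskEnd.taskInfo().taskId()
        + " spilled " + memorySpilled + " bytes from memory ("
        + diskSpilled + " bytes written to disk)");
  }
}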
