Usage of org.apache.hadoop.mapred.Task.getFsStatistics() with code examples


This article collects a number of code examples for the Java method org.apache.hadoop.mapred.Task.getFsStatistics() and shows how Task.getFsStatistics() is used in practice. The examples were extracted from selected open-source projects published on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of Task.getFsStatistics() are as follows:

Package: org.apache.hadoop.mapred
Class: Task
Method: getFsStatistics

Task.getFsStatistics overview

Gets a handle to the Statistics instances based on the scheme associated with the path.
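
For orientation: the method qualifies the path against its FileSystem, takes the URI scheme (for example hdfs or file), and returns every FileSystem.Statistics entry registered for that scheme. The sketch below paraphrases the helper in Hadoop's Task.java; treat it as an illustration rather than the exact source of any particular artifact version.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileSystem.Statistics;
import org.apache.hadoop.fs.Path;

// Paraphrase of org.apache.hadoop.mapred.Task.getFsStatistics(); shown only
// to illustrate the scheme matching that the examples below rely on.
static List<Statistics> getFsStatistics(Path path, Configuration conf)
    throws IOException {
  List<Statistics> matchedStats = new ArrayList<Statistics>();
  // Qualify the path so a relative path picks up the default filesystem scheme.
  path = path.getFileSystem(conf).makeQualified(path);
  String scheme = path.toUri().getScheme();
  // One JVM-wide Statistics object exists per filesystem scheme; collect
  // every entry whose scheme matches the path's scheme.
  for (Statistics stats : FileSystem.getAllStatistics()) {
    if (stats.getScheme().equals(scheme)) {
      matchedStats.add(stats);
    }
  }
  return matchedStats;
}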

Code examples

Code example source: com.github.jiayuhan-it/hadoop-mapreduce-client-core (identical code also ships in io.hops/hadoop-mapreduce-client-core, ch.cern.hadoop/hadoop-mapreduce-client-core, and io.prestosql.hadoop/hadoop-apache)

@SuppressWarnings("unchecked")
NewTrackingRecordWriter(ReduceTask reduce,
  org.apache.hadoop.mapreduce.TaskAttemptContext taskContext)
  throws InterruptedException, IOException {
 this.outputRecordCounter = reduce.reduceOutputCounter;
 this.fileOutputByteCounter = reduce.fileOutputByteCounter;
 // Look up the scheme-matched Statistics only when the output is file-based.
 List<Statistics> matchedStats = null;
 if (reduce.outputFormat instanceof org.apache.hadoop.mapreduce.lib.output.FileOutputFormat) {
  matchedStats = getFsStatistics(org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
    .getOutputPath(taskContext), taskContext.getConfiguration());
 }
 fsStats = matchedStats;
 // Snapshot bytes written before and after creating the real RecordWriter,
 // so bytes written during creation are credited to the output counter.
 long bytesOutPrev = getOutputBytes(fsStats);
 this.real = (org.apache.hadoop.mapreduce.RecordWriter<K, V>) reduce.outputFormat
   .getRecordWriter(taskContext);
 long bytesOutCurr = getOutputBytes(fsStats);
 fileOutputByteCounter.increment(bytesOutCurr - bytesOutPrev);
}
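
All writer-side examples in this article follow the same pattern: match Statistics for the output path's scheme, snapshot the bytes-written total, create the real RecordWriter, snapshot again, and credit the difference to FileOutputFormatCounter.BYTES_WRITTEN. The getOutputBytes helper they call is not shown in the snippets; it is essentially a sum over the matched Statistics. A paraphrased sketch, not necessarily the verbatim source of every artifact version:

// Sums bytes written across all Statistics matched for the output scheme.
// Returns 0 when the output is not file-based (fsStats stayed null).
private long getOutputBytes(List<Statistics> stats) {
  if (stats == null) {
    return 0;
  }
  long bytesWritten = 0;
  for (Statistics stat : stats) {
    bytesWritten += stat.getBytesWritten();
  }
  return bytesWritten;
}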

Code example source: com.github.jiayuhan-it/hadoop-mapreduce-client-core (identical code also ships in io.hops/hadoop-mapreduce-client-core, ch.cern.hadoop/hadoop-mapreduce-client-core, and io.prestosql.hadoop/hadoop-apache)

@SuppressWarnings({ "deprecation", "unchecked" })
public OldTrackingRecordWriter(ReduceTask reduce, JobConf job,
  TaskReporter reporter, String finalName) throws IOException {
 this.reduceOutputCounter = reduce.reduceOutputCounter;
 this.fileOutputByteCounter = reduce.fileOutputByteCounter;
 // Same scheme-matched Statistics lookup as the new-API writer, but driven
 // by the old mapred.FileOutputFormat and JobConf.
 List<Statistics> matchedStats = null;
 if (job.getOutputFormat() instanceof FileOutputFormat) {
  matchedStats = getFsStatistics(FileOutputFormat.getOutputPath(job), job);
 }
 fsStats = matchedStats;
 FileSystem fs = FileSystem.get(job);
 long bytesOutPrev = getOutputBytes(fsStats);
 this.real = job.getOutputFormat().getRecordWriter(fs, job, finalName,
   reporter);
 long bytesOutCurr = getOutputBytes(fsStats);
 fileOutputByteCounter.increment(bytesOutCurr - bytesOutPrev);
}

Code example source: io.hops/hadoop-mapreduce-client-core (identical code also ships in io.prestosql.hadoop/hadoop-apache)

NewTrackingRecordReader(org.apache.hadoop.mapreduce.InputSplit split,
  org.apache.hadoop.mapreduce.InputFormat<K, V> inputFormat,
  TaskReporter reporter,
  org.apache.hadoop.mapreduce.TaskAttemptContext taskContext)
  throws InterruptedException, IOException {
 this.reporter = reporter;
 this.inputRecordCounter = reporter
   .getCounter(TaskCounter.MAP_INPUT_RECORDS);
 this.fileInputByteCounter = reporter
   .getCounter(FileInputFormatCounter.BYTES_READ);
 // Input-side counterpart: match Statistics against the split's path when
 // the input is file-based.
 List<Statistics> matchedStats = null;
 if (split instanceof org.apache.hadoop.mapreduce.lib.input.FileSplit) {
  matchedStats = getFsStatistics(((org.apache.hadoop.mapreduce.lib.input.FileSplit) split)
    .getPath(), taskContext.getConfiguration());
 }
 fsStats = matchedStats;
 // Attribute bytes read while constructing the RecordReader to BYTES_READ.
 long bytesInPrev = getInputBytes(fsStats);
 this.real = inputFormat.createRecordReader(split, taskContext);
 long bytesInCurr = getInputBytes(fsStats);
 fileInputByteCounter.increment(bytesInCurr - bytesInPrev);
}
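
The reader-side examples mirror the writer-side pattern with getInputBytes, which sums Statistics.getBytesRead() instead; again a paraphrase of the helper in Hadoop's task classes:

// Input-side counterpart of getOutputBytes.
private long getInputBytes(List<Statistics> stats) {
  if (stats == null) {
    return 0;
  }
  long bytesRead = 0;
  for (Statistics stat : stats) {
    bytesRead += stat.getBytesRead();
  }
  return bytesRead;
}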

Code example source: ch.cern.hadoop/hadoop-mapreduce-client-core (identical code also ships in io.hops/hadoop-mapreduce-client-core and io.prestosql.hadoop/hadoop-apache)

@SuppressWarnings("unchecked")
NewDirectOutputCollector(MRJobConfig jobContext,
  JobConf job, TaskUmbilicalProtocol umbilical, TaskReporter reporter)
throws IOException, ClassNotFoundException, InterruptedException {
 this.reporter = reporter;
 mapOutputRecordCounter = reporter
   .getCounter(TaskCounter.MAP_OUTPUT_RECORDS);
 fileOutputByteCounter = reporter
   .getCounter(FileOutputFormatCounter.BYTES_WRITTEN);
 // Map-only path: the collector writes straight to the job's OutputFormat.
 // (outputFormat and taskContext are fields of the enclosing task, not
 // shown in this snippet.)
 List<Statistics> matchedStats = null;
 if (outputFormat instanceof org.apache.hadoop.mapreduce.lib.output.FileOutputFormat) {
  matchedStats = getFsStatistics(org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
    .getOutputPath(taskContext), taskContext.getConfiguration());
 }
 fsStats = matchedStats;
 long bytesOutPrev = getOutputBytes(fsStats);
 out = outputFormat.getRecordWriter(taskContext);
 long bytesOutCurr = getOutputBytes(fsStats);
 fileOutputByteCounter.increment(bytesOutCurr - bytesOutPrev);
}

Code example source: io.prestosql.hadoop/hadoop-apache (identical code also ships in io.hops/hadoop-mapreduce-client-core and com.github.jiayuhan-it/hadoop-mapreduce-client-core)

@SuppressWarnings("unchecked")
public void init(MapOutputCollector.Context context
        ) throws IOException, ClassNotFoundException {
 this.reporter = context.getReporter();
 JobConf job = context.getJobConf();
 String finalName = getOutputName(getPartition());
 FileSystem fs = FileSystem.get(job);
 OutputFormat<K, V> outputFormat = job.getOutputFormat();
 mapOutputRecordCounter = reporter.getCounter(TaskCounter.MAP_OUTPUT_RECORDS);
 fileOutputByteCounter = reporter
   .getCounter(FileOutputFormatCounter.BYTES_WRITTEN);
 // Old-API map-only collector: same Statistics lookup and before/after
 // byte snapshot around RecordWriter creation.
 List<Statistics> matchedStats = null;
 if (outputFormat instanceof FileOutputFormat) {
  matchedStats = getFsStatistics(FileOutputFormat.getOutputPath(job), job);
 }
 fsStats = matchedStats;
 long bytesOutPrev = getOutputBytes(fsStats);
 out = job.getOutputFormat().getRecordWriter(fs, job, finalName, reporter);
 long bytesOutCurr = getOutputBytes(fsStats);
 fileOutputByteCounter.increment(bytesOutCurr - bytesOutPrev);
}

Code example source: io.prestosql.hadoop/hadoop-apache (identical code also ships in io.hops/hadoop-mapreduce-client-core, ch.cern.hadoop/hadoop-mapreduce-client-core, and com.github.jiayuhan-it/hadoop-mapreduce-client-core)

TrackedRecordReader(TaskReporter reporter, JobConf job)
 throws IOException {
 inputRecordCounter = reporter.getCounter(TaskCounter.MAP_INPUT_RECORDS);
 fileInputByteCounter = reporter.getCounter(FileInputFormatCounter.BYTES_READ);
 this.reporter = reporter;
 // Old-API input side: the split comes from the reporter rather than
 // being passed in directly.
 List<Statistics> matchedStats = null;
 if (this.reporter.getInputSplit() instanceof FileSplit) {
  matchedStats = getFsStatistics(((FileSplit) this.reporter
    .getInputSplit()).getPath(), job);
 }
 fsStats = matchedStats;
 // bytesInPrev, bytesInCurr, and rawIn are fields of the enclosing class.
 bytesInPrev = getInputBytes(fsStats);
 rawIn = job.getInputFormat().getRecordReader(reporter.getInputSplit(),
   job, reporter);
 bytesInCurr = getInputBytes(fsStats);
 fileInputByteCounter.increment(bytesInCurr - bytesInPrev);
}
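
The examples above all feed the standard FileInputFormatCounter.BYTES_READ and FileOutputFormatCounter.BYTES_WRITTEN task counters. For completeness, a job client can read the aggregated values once a job finishes. A minimal sketch; the job name and configuration steps are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter;

public class CounterReadout {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "example-job"); // placeholder name
    // ... configure input/output paths, formats, mapper and reducer here ...
    job.waitForCompletion(true);
    Counters counters = job.getCounters();
    // These counters are the ones incremented by the constructors above.
    long bytesRead = counters
        .findCounter(FileInputFormatCounter.BYTES_READ).getValue();
    long bytesWritten = counters
        .findCounter(FileOutputFormatCounter.BYTES_WRITTEN).getValue();
    System.out.println("BYTES_READ=" + bytesRead
        + ", BYTES_WRITTEN=" + bytesWritten);
  }
}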

