Usage of org.apache.hadoop.mapred.Task.updateResourceCounters() with code examples

Reposted by x33g5p2x on 2022-01-30, category: Other

This article collects Java code examples for the org.apache.hadoop.mapred.Task.updateResourceCounters() method, showing how Task.updateResourceCounters() is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as a useful reference. Details of Task.updateResourceCounters():
Package: org.apache.hadoop.mapred
Class: Task
Method: updateResourceCounters

About Task.updateResourceCounters

Update resource information counters

Code examples

Example from: com.facebook.hadoop/hadoop-core

private synchronized void updateCounters() {
  for (Statistics stat : FileSystem.getAllStatistics()) {
    String uriScheme = stat.getScheme();
    FileSystemStatisticUpdater updater = statisticUpdaters.get(uriScheme);
    if (updater == null) { // a new FileSystem has appeared in the cache
      updater = new FileSystemStatisticUpdater(uriScheme, stat);
      statisticUpdaters.put(uriScheme, updater);
    }
    updater.updateCounters();
  }
  updateResourceCounters();
}
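The core pattern above is a lazily populated cache: one updater object per URI scheme, created the first time a scheme is seen and reused afterwards. The following standalone sketch isolates that pattern; the names SchemeUpdater, UpdaterCache, and forScheme are illustrative stand-ins, not Hadoop APIs.

```java
import java.util.HashMap;
import java.util.Map;

public class UpdaterCache {
    // Stand-in for Hadoop's FileSystemStatisticUpdater.
    static class SchemeUpdater {
        final String scheme;
        int updates = 0;
        SchemeUpdater(String scheme) { this.scheme = scheme; }
        void updateCounters() { updates++; }
    }

    private final Map<String, SchemeUpdater> updaters = new HashMap<>();

    // Mirrors the get-or-create logic in updateCounters(): a scheme seen
    // for the first time gets a fresh updater; later calls reuse it.
    public synchronized SchemeUpdater forScheme(String scheme) {
        SchemeUpdater u = updaters.get(scheme);
        if (u == null) { // new scheme found
            u = new SchemeUpdater(scheme);
            updaters.put(scheme, u);
        }
        return u;
    }

    public static void main(String[] args) {
        UpdaterCache cache = new UpdaterCache();
        cache.forScheme("hdfs").updateCounters();
        cache.forScheme("hdfs").updateCounters();
        cache.forScheme("file").updateCounters();
        System.out.println(cache.forScheme("hdfs").updates); // 2
        System.out.println(cache.forScheme("file").updates); // 1
    }
}
```

Because the real method is synchronized on the Task instance, concurrent progress reports cannot create duplicate updaters for the same scheme.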

Example from: io.hops/hadoop-mapreduce-client-core

private synchronized void updateCounters() {
  // Group statistics by URI scheme: several FileSystem instances can
  // share one scheme, so each scheme maps to a list of Statistics.
  Map<String, List<FileSystem.Statistics>> map =
      new HashMap<String, List<FileSystem.Statistics>>();
  for (Statistics stat : FileSystem.getAllStatistics()) {
    String uriScheme = stat.getScheme();
    if (map.containsKey(uriScheme)) {
      map.get(uriScheme).add(stat);
    } else {
      List<FileSystem.Statistics> list = new ArrayList<FileSystem.Statistics>();
      list.add(stat);
      map.put(uriScheme, list);
    }
  }
  for (Map.Entry<String, List<FileSystem.Statistics>> entry : map.entrySet()) {
    FileSystemStatisticUpdater updater = statisticUpdaters.get(entry.getKey());
    if (updater == null) { // a new FileSystem has appeared in the cache
      updater = new FileSystemStatisticUpdater(entry.getValue(), entry.getKey());
      statisticUpdaters.put(entry.getKey(), updater);
    }
    updater.updateCounters();
  }

  gcUpdater.incrementGcCounter();
  updateResourceCounters();
}
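The newer variant differs from the facebook/hadoop-core one in its first loop: it buckets all Statistics objects by scheme before building updaters, so one updater can aggregate several FileSystem instances. That grouping step can be sketched on its own; SchemeGrouper and the Integer stand-in for FileSystem.Statistics are hypothetical, and computeIfAbsent is used as the more compact modern form of the containsKey/put branch above.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SchemeGrouper {
    // Buckets (scheme, value) pairs into scheme -> list of values,
    // preserving encounter order within each bucket, just as the
    // Hadoop loop preserves the order of getAllStatistics().
    public static Map<String, List<Integer>> groupByScheme(
            List<Map.Entry<String, Integer>> stats) {
        Map<String, List<Integer>> map = new HashMap<>();
        for (Map.Entry<String, Integer> stat : stats) {
            map.computeIfAbsent(stat.getKey(), k -> new ArrayList<>())
               .add(stat.getValue());
        }
        return map;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> stats = List.of(
            Map.entry("hdfs", 10), Map.entry("file", 5), Map.entry("hdfs", 7));
        Map<String, List<Integer>> grouped = groupByScheme(stats);
        System.out.println(grouped.get("hdfs")); // [10, 7]
        System.out.println(grouped.get("file")); // [5]
    }
}
```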

The identical implementation also appears in ch.cern.hadoop/hadoop-mapreduce-client-core, com.github.jiayuhan-it/hadoop-mapreduce-client-core, and io.prestosql.hadoop/hadoop-apache.

Example from: org.apache.hadoop/hadoop-mapred

private synchronized void updateCounters() {
  for (Statistics stat : FileSystem.getAllStatistics()) {
    String uriScheme = stat.getScheme();
    FileSystemStatisticUpdater updater = statisticUpdaters.get(uriScheme);
    if (updater == null) { // a new FileSystem has appeared in the cache
      updater = new FileSystemStatisticUpdater(uriScheme, stat);
      statisticUpdaters.put(uriScheme, updater);
    }
    updater.updateCounters();
  }
  gcUpdater.incrementGcCounter();
  updateResourceCounters();
}
