Usage of the org.apache.hadoop.mapred.Task.setPhase() method, with code examples


This article collects a number of Java code examples for the org.apache.hadoop.mapred.Task.setPhase() method and shows how it is used in practice. The examples were extracted from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of Task.setPhase() are as follows:
Package: org.apache.hadoop.mapred
Class: Task
Method: setPhase

Task.setPhase overview

Set the current phase of the task.
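
Before the excerpts, a minimal sketch of the pattern they all share may help: record the new phase, update the human-readable status string, then push the change to the parent process over the umbilical protocol. The class below is purely illustrative (it is not taken from the projects quoted here) and assumes the MRv2 hadoop-mapreduce-client-core API, where Task's abstract methods are run() and isMapTask(); note that setPhase() is protected, which is why every real call site in the excerpts is inside Task itself.

import java.io.IOException;

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Task;
import org.apache.hadoop.mapred.TaskStatus;
import org.apache.hadoop.mapred.TaskUmbilicalProtocol;

// Illustrative only: a skeletal Task subclass demonstrating the
// setPhase -> setStatus -> statusUpdate sequence used throughout Task.
public class PhaseSketchTask extends Task {

  @Override
  public void run(JobConf job, TaskUmbilicalProtocol umbilical)
      throws IOException, ClassNotFoundException, InterruptedException {
    setPhase(TaskStatus.Phase.MAP);      // 1. record the new phase
    getProgress().setStatus("map");      // 2. status string shown in UIs
    statusUpdate(umbilical);             // 3. report the change to the parent

    // ... the actual work of the task would go here ...

    setPhase(TaskStatus.Phase.CLEANUP);  // same sequence when entering cleanup
    getProgress().setStatus("cleanup");
    statusUpdate(umbilical);
  }

  @Override
  public boolean isMapTask() {
    return true;
  }
}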

Code examples

Code example from: io.hops/hadoop-mapreduce-client-core (the identical method also appears in ch.cern.hadoop, com.facebook.hadoop/hadoop-core, com.github.jiayuhan-it, org.apache.hadoop/hadoop-mapred, and io.prestosql.hadoop/hadoop-apache, so it is shown only once here)

void taskCleanup(TaskUmbilicalProtocol umbilical)
    throws IOException {
  // set phase for this task
  setPhase(TaskStatus.Phase.CLEANUP);
  getProgress().setStatus("cleanup");
  statusUpdate(umbilical);
  LOG.info("Running cleanup for the task");
  // do the cleanup
  committer.abortTask(taskContext);
}
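
Task-level cleanup runs for an attempt that did not succeed, so the method simply asks the OutputCommitter to abort the attempt, discarding whatever output it produced.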

Code example from: org.jvnet.hudson.hadoop/hadoop-core

public void readFields(DataInput in) throws IOException {
 jobFile = Text.readString(in);
 taskId = TaskAttemptID.read(in);
 partition = in.readInt();
 taskStatus.readFields(in);
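 // older variants also wire the map-output file manager to the job ID when deserializing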
 this.mapOutputFile.setJobId(taskId.getJobID()); 
 skipRanges.readFields(in);
 currentRecIndexIterator = skipRanges.skipRangeIterator();
 currentRecStartIndex = currentRecIndexIterator.next();
 skipping = in.readBoolean();
 jobCleanup = in.readBoolean();
 jobSetup = in.readBoolean();
 writeSkipRecs = in.readBoolean();
 taskCleanup = in.readBoolean();
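 // an attempt deserialized with the taskCleanup flag set runs task cleanup, so it starts directly in the CLEANUP phase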
 if (taskCleanup) {
  setPhase(TaskStatus.Phase.CLEANUP);
 }
 pidFile = Text.readString(in);
}

Code example from: org.apache.hadoop/hadoop-mapred

public void readFields(DataInput in) throws IOException {
 jobFile = Text.readString(in);
 taskId = TaskAttemptID.read(in);
 partition = in.readInt();
 numSlotsRequired = in.readInt();
 taskStatus.readFields(in);
 skipRanges.readFields(in);
 currentRecIndexIterator = skipRanges.skipRangeIterator();
 currentRecStartIndex = currentRecIndexIterator.next();
 skipping = in.readBoolean();
 jobCleanup = in.readBoolean();
 if (jobCleanup) {
  jobRunStateForCleanup = 
   WritableUtils.readEnum(in, JobStatus.State.class);
 }
 jobSetup = in.readBoolean();
 writeSkipRecs = in.readBoolean();
 taskCleanup = in.readBoolean();
 if (taskCleanup) {
  setPhase(TaskStatus.Phase.CLEANUP);
 }
 user = Text.readString(in);
 extraData.readFields(in);
}

Code example from: ch.cern.hadoop/hadoop-mapreduce-client-core (identical copies in io.hops, com.github.jiayuhan-it, and io.prestosql.hadoop/hadoop-apache are omitted)

public void readFields(DataInput in) throws IOException {
 jobFile = StringInterner.weakIntern(Text.readString(in));
 taskId = TaskAttemptID.read(in);
 partition = in.readInt();
 numSlotsRequired = in.readInt();
 taskStatus.readFields(in);
 skipRanges.readFields(in);
 currentRecIndexIterator = skipRanges.skipRangeIterator();
 currentRecStartIndex = currentRecIndexIterator.next();
 skipping = in.readBoolean();
 jobCleanup = in.readBoolean();
 if (jobCleanup) {
  jobRunStateForCleanup = 
   WritableUtils.readEnum(in, JobStatus.State.class);
 }
 jobSetup = in.readBoolean();
 writeSkipRecs = in.readBoolean();
 taskCleanup = in.readBoolean();
 if (taskCleanup) {
  setPhase(TaskStatus.Phase.CLEANUP);
 }
 user = StringInterner.weakIntern(Text.readString(in));
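 // newer versions also carry the encrypted spill key: its length, then extraData, then the key bytes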
 int len = in.readInt();
 encryptedSpillKey = new byte[len];
 extraData.readFields(in);
 in.readFully(encryptedSpillKey);
}

Code example from: org.jvnet.hudson.hadoop/hadoop-core

void taskCleanup(TaskUmbilicalProtocol umbilical) 
throws IOException {
 // set phase for this task
 setPhase(TaskStatus.Phase.CLEANUP);
 getProgress().setStatus("cleanup");
 statusUpdate(umbilical);
 LOG.info("Runnning cleanup for the task");
 // do the cleanup
 discardOutput(taskContext, conf.getOutputCommitter());
}

Code example from: com.facebook.hadoop/hadoop-core

protected void runJobCleanupTask(TaskUmbilicalProtocol umbilical,
               TaskReporter reporter
              ) throws IOException, InterruptedException {
 // set phase for this task
 setPhase(TaskStatus.Phase.CLEANUP);
 getProgress().setStatus("cleanup");
 statusUpdate(umbilical);
 // do the cleanup
 LOG.info("Cleaning up job");
 if (jobRunStateForCleanup == JobStatus.State.FAILED 
   || jobRunStateForCleanup == JobStatus.State.KILLED) {
  LOG.info("Aborting job with runstate : " + jobRunStateForCleanup);
  committer.abortJob(jobContext, jobRunStateForCleanup);
 } else if (jobRunStateForCleanup == JobStatus.State.SUCCEEDED){
  LOG.info("Committing job");
  committer.commitJob(jobContext);
 } else {
  throw new IOException("Invalid state of the job for cleanup. State found "
             + jobRunStateForCleanup + " expecting "
             + JobStatus.State.SUCCEEDED + ", " 
             + JobStatus.State.FAILED + " or "
             + JobStatus.State.KILLED);
 }
 done(umbilical, reporter);
}
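
Note that job-level cleanup is terminal: depending on the recorded run state, the committer either commits the job (SUCCEEDED) or aborts it (FAILED or KILLED); any other state is rejected as invalid, and done() then reports completion back over the umbilical.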

Code example from: com.facebook.hadoop/hadoop-core

public void readFields(DataInput in) throws IOException {
 jobFile = Text.readString(in);
 taskId = TaskAttemptID.read(in);
 partition = in.readInt();
 numSlotsRequired = in.readInt();
 taskStatus.readFields(in);
 this.mapOutputFile.setJobId(taskId.getJobID()); 
 skipRanges.readFields(in);
 currentRecIndexIterator = skipRanges.skipRangeIterator();
 currentRecStartIndex = currentRecIndexIterator.next();
 skipping = in.readBoolean();
 jobCleanup = in.readBoolean();
 if (jobCleanup) {
  jobRunStateForCleanup = 
   WritableUtils.readEnum(in, JobStatus.State.class);
 }
 jobSetup = in.readBoolean();
 username = Text.readString(in);
 writeSkipRecs = in.readBoolean();
 taskCleanup = in.readBoolean();
 if (taskCleanup) {
  setPhase(TaskStatus.Phase.CLEANUP);
 }
 extraData.readFields(in);
}

Code example from: com.github.jiayuhan-it/hadoop-mapreduce-client-core (a truncated excerpt of runJobCleanupTask; identical fragments from io.hops, org.apache.hadoop/hadoop-mapred, and ch.cern.hadoop are omitted, and the enclosing signature below is reconstructed from the full com.facebook.hadoop example above)

protected void runJobCleanupTask(TaskUmbilicalProtocol umbilical,
                                 TaskReporter reporter
                                ) throws IOException, InterruptedException {
  setPhase(TaskStatus.Phase.CLEANUP);
  getProgress().setStatus("cleanup");
  statusUpdate(umbilical);
  // ... the remainder of the method is truncated in the source excerpt ...
}

Code example from: org.jvnet.hudson.hadoop/hadoop-core

protected void runJobCleanupTask(TaskUmbilicalProtocol umbilical) 
throws IOException {
 // set phase for this task
 setPhase(TaskStatus.Phase.CLEANUP);
 getProgress().setStatus("cleanup");
 // do the cleanup
 conf.getOutputCommitter().cleanupJob(jobContext);
 done(umbilical);
}
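
This older API performs job cleanup through OutputCommitter.cleanupJob(); later releases deprecated cleanupJob() in favor of the separate commitJob()/abortJob() calls used in the com.facebook.hadoop example above.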
