Usage of org.apache.hadoop.yarn.api.records.Container.getContainerToken(), with code examples

x33g5p2x, reposted 2022-01-18, filed under Other

This article collects Java code examples of org.apache.hadoop.yarn.api.records.Container.getContainerToken() that show how the method is used. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of Container.getContainerToken() are as follows:
Package: org.apache.hadoop.yarn.api.records
Class: Container
Method: getContainerToken

About Container.getContainerToken

Get the ContainerToken for the container.

ContainerToken is the security token used by the framework to verify the authenticity of any Container.

On container allocation, the ResourceManager provides a secure token, which the NodeManager verifies on container launch.

Applications do not need to care about the ContainerToken; the framework handles it transparently, and the allocated Container includes the ContainerToken.
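The flow described above can be sketched in a few lines. The following is a minimal, hypothetical sketch assuming an AMRMClient-based ApplicationMaster with hadoop-yarn-client on the classpath; the class and method names here (TokenInspection, inspectAllocations) are illustrative only, and the code is not runnable outside a YARN cluster.

```java
import java.util.List;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.Token;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class TokenInspection {
  // Hypothetical helper: inspect the token that came with each allocated container.
  static void inspectAllocations(AMRMClient<ContainerRequest> amRmClient)
      throws Exception {
    // allocate() is the AM heartbeat; its response carries newly allocated containers.
    List<Container> allocated =
        amRmClient.allocate(0.1f).getAllocatedContainers();
    for (Container container : allocated) {
      // The ResourceManager attaches the token at allocation time; the
      // NodeManager verifies it at launch. Application code normally just
      // passes it along when asking the NodeManager to start the container.
      Token token = container.getContainerToken();
      System.out.println("Container " + container.getId()
          + " on " + container.getNodeId()
          + " token kind: " + (token != null ? token.getKind() : "none"));
    }
  }
}
```

As the examples below show, real callers rarely inspect the token; they simply thread it from the allocated Container into launch, cleanup, or resource-update events.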

Code examples

Code example source: org.apache.hadoop/hadoop-yarn-client

public StartContainerEvent(Container container,
  ContainerLaunchContext containerLaunchContext) {
 super(container.getId(), container.getNodeId(),
   container.getContainerToken(), ContainerEventType.START_CONTAINER);
 this.container = container;
 this.containerLaunchContext = containerLaunchContext;
}

Code example source: org.apache.hadoop/hadoop-yarn-client

public UpdateContainerResourceEvent(Container container,
  boolean isIncreaseEvent) {
 super(container.getId(), container.getNodeId(),
   container.getContainerToken(),
   ContainerEventType.UPDATE_CONTAINER_RESOURCE);
 this.container = container;
 this.isIncreaseEvent = isIncreaseEvent;
}

Code example source: io.hops/hadoop-yarn-client

public IncreaseContainerResourceEvent(Container container) {
 super(container.getId(), container.getNodeId(),
   container.getContainerToken(),
     ContainerEventType.INCREASE_CONTAINER_RESOURCE);
 this.container = container;
}

Code example source: org.apache.tez/tez-dag

public ContainerLauncherLaunchRequestEvent(ContainerLaunchContext clc,
                      Container container, int launcherId, int schedulerId,
                      int taskCommId) {
 super(container.getId(), container.getNodeId(), container
   .getContainerToken(), ContainerLauncherEventType.CONTAINER_LAUNCH_REQUEST,
   launcherId, schedulerId, taskCommId);
 this.clc = clc;
 this.container = container;
}

Code example source: org.springframework.data/spring-yarn-core

@Override
public String toString() {
  return "ContainerAllocationEvent [containerId=" + container.getId()
      + ", nodeId=" + container.getNodeId()
      + ", containerToken=" + container.getContainerToken()
      + "]";
}

Code example source: org.apache.tez/tez-dag

protected void sendStopRequestToNM() {
 sendEvent(new ContainerLauncherStopRequestEvent(containerId,
   container.getNodeId(), container.getContainerToken(), launcherId, schedulerId, taskCommId));
}

Code example source: org.apache.hadoop/hadoop-mapreduce-client-app

public ContainerRemoteLaunchEvent(TaskAttemptId taskAttemptID,
  ContainerLaunchContext containerLaunchContext,
  Container allocatedContainer, Task remoteTask) {
 super(taskAttemptID, allocatedContainer.getId(), StringInterner
  .weakIntern(allocatedContainer.getNodeId().toString()),
  allocatedContainer.getContainerToken(),
  ContainerLauncher.EventType.CONTAINER_REMOTE_LAUNCH);
 this.allocatedContainer = allocatedContainer;
 this.containerLaunchContext = containerLaunchContext;
 this.task = remoteTask;
}

Code example source: io.hops/hadoop-mapreduce-client-app

@SuppressWarnings("unchecked")
private static void sendContainerCompleted(TaskAttemptImpl taskAttempt) {
 taskAttempt.eventHandler.handle(new ContainerLauncherEvent(
   taskAttempt.attemptId,
   taskAttempt.container.getId(), StringInterner
   .weakIntern(taskAttempt.container.getNodeId().toString()),
   taskAttempt.container.getContainerToken(),
   ContainerLauncher.EventType.CONTAINER_COMPLETED));
}

Code example source: org.apache.hadoop/hadoop-mapreduce-client-app

@SuppressWarnings("unchecked")
 @Override
 public void transition(TaskAttemptImpl taskAttempt,
   TaskAttemptEvent event) {
  taskAttempt.setFinishTime();
  taskAttempt.taskAttemptListener.unregister(
    taskAttempt.attemptId, taskAttempt.jvmID);
  taskAttempt.eventHandler.handle(new ContainerLauncherEvent(
    taskAttempt.attemptId,
    taskAttempt.getAssignedContainerID(), taskAttempt.getAssignedContainerMgrAddress(),
    taskAttempt.container.getContainerToken(),
    ContainerLauncher.EventType.CONTAINER_REMOTE_CLEANUP, false));
  taskAttempt.eventHandler.handle(new TaskTAttemptKilledEvent(
    taskAttempt.attemptId, false));
 }
}

Code example source: io.hops/hadoop-mapreduce-client-app

@SuppressWarnings("unchecked")
private static void sendContainerCleanup(TaskAttemptImpl taskAttempt,
  TaskAttemptEvent event) {
 if (event instanceof TaskAttemptKillEvent) {
  taskAttempt.addDiagnosticInfo(
    ((TaskAttemptKillEvent) event).getMessage());
 }
 //send the cleanup event to containerLauncher
 taskAttempt.eventHandler.handle(new ContainerLauncherEvent(
   taskAttempt.attemptId,
   taskAttempt.container.getId(), StringInterner
   .weakIntern(taskAttempt.container.getNodeId().toString()),
   taskAttempt.container.getContainerToken(),
   ContainerLauncher.EventType.CONTAINER_REMOTE_CLEANUP,
   event.getType() == TaskAttemptEventType.TA_TIMED_OUT));
}

Code example source: com.github.jiayuhan-it/hadoop-mapreduce-client-app

@SuppressWarnings("unchecked")
private static void sendContainerCleanup(TaskAttemptImpl taskAttempt,
  TaskAttemptEvent event) {
 if (event instanceof TaskAttemptKillEvent) {
  taskAttempt.addDiagnosticInfo(
    ((TaskAttemptKillEvent) event).getMessage());
 }
 taskAttempt.reportedStatus.progress = 1.0f;
 taskAttempt.updateProgressSplits();
 //send the cleanup event to containerLauncher
 taskAttempt.eventHandler.handle(
   new ContainerLauncherEvent(taskAttempt.attemptId,
     taskAttempt.container.getId(), StringInterner
     .weakIntern(taskAttempt.container.getNodeId().toString()),
     taskAttempt.container.getContainerToken(),
     ContainerLauncher.EventType.CONTAINER_REMOTE_CLEANUP));
}

Code example source: org.apache.slider/slider-core

/**
 * This code is from the dist shell examples; it has been moved here
 * so that it is still available if it is needed.
 * @return a remote user with a token to access the container.
 */
public UserGroupInformation setupUGI() {
 UserGroupInformation user =
  UserGroupInformation.createRemoteUser(container.getId().toString());
 String cmIpPortStr = container.getNodeId().getHost() + ":" + container.getNodeId().getPort();
 final InetSocketAddress cmAddress = NetUtils.createSocketAddr(cmIpPortStr);
 org.apache.hadoop.yarn.api.records.Token containerToken = container.getContainerToken();
 if (containerToken != null) {
  Token<ContainerTokenIdentifier> token =
   ConverterUtils.convertFromYarn(containerToken, cmAddress);
  user.addToken(token);
 }
 return user;
}
