Usage and code examples for the org.apache.hadoop.mapred.QueueManager class

x33g5p2x, reposted 2022-01-28 under: Other

This article collects code examples for the Java class org.apache.hadoop.mapred.QueueManager and shows how the class is used. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and can serve as practical references. Details of the QueueManager class:
Package: org.apache.hadoop.mapred
Class name: QueueManager

About QueueManager

Class that exposes information about queues maintained by the Hadoop Map/Reduce framework.

The Map/Reduce framework can be configured with one or more queues, depending on the scheduler it is configured with. While some schedulers work only with one queue, some schedulers support multiple queues. Some schedulers also support the notion of queues within queues - a feature called hierarchical queues.

Queue names are unique, and used as a key to lookup queues. Hierarchical queues are named by a 'fully qualified name' such as q1:q2:q3, where q2 is a child queue of q1 and q3 is a child queue of q2.

Leaf level queues are queues that contain no queues within them. Jobs can be submitted only to leaf level queues.

Queues can be configured with various properties. Some of these properties are common to all schedulers, and those are handled by this class. Schedulers might also associate several custom properties with queues. These properties are parsed and maintained per queue by the framework. If schedulers need more complicated structure to maintain configuration per queue, they are free to not use the facilities provided by the framework, but define their own mechanisms. In such cases, it is likely that the name of the queue will be used to relate the common properties of a queue with scheduler specific properties.

Information related to a queue, such as its name, properties, scheduling information and children are exposed by this class via a serializable class called JobQueueInfo.

Queues are configured in the configuration file mapred-queues.xml. To support backwards compatibility, queues can also be configured in mapred-site.xml. However, when configured in the latter, there is no support for hierarchical queues.
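As an illustration of the hierarchical configuration described above, a mapred-queues.xml file might look roughly like the following. This is a hedged sketch based on the Hadoop 0.21/1.x format, where nested `<queue>` elements express the hierarchy; the queue names and states here are invented for the example:

```xml
<?xml version="1.0"?>
<queues aclsEnabled="true">
  <queue>
    <name>q1</name>
    <queue>
      <name>q2</name>
      <!-- q1:q2:q3 is a leaf queue; only leaf queues accept job submissions -->
      <queue>
        <name>q3</name>
        <state>running</state>
      </queue>
    </queue>
  </queue>
</queues>
```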

Code examples

Example source: org.apache.hadoop/hadoop-mapreduce-client-core

QueueManager manager = new QueueManager(f.getCanonicalPath(), true);
manager.setSchedulerInfo("first", "queueInfo");
manager.setSchedulerInfo("second", "queueInfoqueueInfo");
Queue root = manager.getRoot();
assertTrue(root.getChildren().size() == 2);
Iterator<Queue> iterator = root.getChildren().iterator();
Set<String> template = new HashSet<String>(); // expected leaf-queue names
template.add("first");
template.add("second");
assertEquals(manager.getLeafQueueNames(), template);
String[] groups = { "group1" };
when(mockUGI.getGroupNames()).thenReturn(groups);
assertTrue(manager.hasAccess("first", QueueACL.SUBMIT_JOB, mockUGI));
assertFalse(manager.hasAccess("second", QueueACL.SUBMIT_JOB, mockUGI));
assertFalse(manager.hasAccess("first", QueueACL.ADMINISTER_JOBS, mockUGI));
when(mockUGI.getShortUserName()).thenReturn("user3");
assertTrue(manager.hasAccess("first", QueueACL.ADMINISTER_JOBS, mockUGI));
QueueAclsInfo[] qai = manager.getQueueAcls(mockUGI);
assertEquals(qai.length, 1);
manager.refreshQueues(getConfiguration(), null);
assertEquals(manager.getSchedulerInfo("first"), "queueInfo");
Set<String> queueJobQueueInfos = new HashSet<String>();
for (JobQueueInfo jobInfo : manager.getJobQueueInfos()) {
  queueJobQueueInfos.add(jobInfo.getQueueName());
}

Example source: com.facebook.hadoop/hadoop-core

@Override
public JobQueueInfo getQueueInfo(String queue) throws IOException {
 return queueManager.getJobQueueInfo(queue);
}

Example source: io.hops/hadoop-mapreduce-client-core

QueueManager(boolean areAclsEnabled) {
 this.areAclsEnabled = areAclsEnabled;
 initialize(getQueueConfigurationParser(null, false, areAclsEnabled));
}

Example source: org.apache.hadoop/hadoop-mapreduce-client-core

@Test (timeout=5000)
public void testDefaultConfig() {
 QueueManager manager = new QueueManager(true);
 assertEquals(manager.getRoot().getChildren().size(), 2);
}

Example source: org.apache.hadoop/hadoop-mapreduce-client-core

Configuration conf = getConfiguration();
QueueManager manager = new QueueManager(conf);
manager.setSchedulerInfo("first", "queueInfo");
manager.setSchedulerInfo("second", "queueInfoqueueInfo");
Queue root = manager.getRoot();
assertTrue(manager.isRunning("first"));
assertFalse(manager.isRunning("second"));
Set<String> template = new HashSet<String>(); // expected leaf-queue names
template.add("first");
template.add("second");
assertEquals(manager.getLeafQueueNames(), template);

Example source: io.prestosql.hadoop/hadoop-apache

parser = getQueueConfigurationParser(null, false, aclsEnabled);
dumpConfiguration(dumpGenerator,parser.getRoot().getChildren());
dumpGenerator.writeEndArray();
dumpGenerator.writeEndObject();

Example source: io.hops/hadoop-mapreduce-client-core

if (!areAclsEnabled()) {
 return true;
}
if (LOG.isDebugEnabled()) {
 LOG.debug("Checking access for the acl " + toFullPropertyName(queueName,
  qACL.getAclName()) + " for user " + ugi.getShortUserName());
}
AccessControlList acl = queue.getAcls().get(
  toFullPropertyName(queueName, qACL.getAclName()));
if (acl == null) {
 return false;
}

Example source: io.prestosql.hadoop/hadoop-apache

/**
 * Only applicable to leaf-level queues:
 * parse the queue's state from the configuration.
 */
private QueueState getQueueState(String name, Configuration conf) {
 String stateVal = conf.get(
   toFullPropertyName(name, "state"),
   QueueState.RUNNING.getStateName());
 return QueueState.getState(stateVal);
}
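The getQueueState snippet above resolves the queue state from a configuration key and falls back to RUNNING when the key is absent. Below is a self-contained sketch of that lookup, using a plain Map in place of Hadoop's Configuration and assuming the key prefix is `mapred.queue.`; the class and helper names are illustrative, not Hadoop's exact code.

```java
import java.util.HashMap;
import java.util.Map;

public class QueueStateSketch {
  // Assumed prefix used by toFullPropertyName for queue properties.
  static final String PREFIX = "mapred.queue.";

  public static String toFullPropertyName(String queue, String property) {
    return PREFIX + queue + "." + property;
  }

  // Mimics getQueueState's default-to-running fallback, with a plain Map
  // standing in for Hadoop's Configuration.
  public static String getQueueState(String name, Map<String, String> conf) {
    return conf.getOrDefault(toFullPropertyName(name, "state"), "running");
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    conf.put("mapred.queue.q1.state", "stopped");
    System.out.println(getQueueState("q1", conf));       // stopped
    System.out.println(getQueueState("default", conf));  // running (fallback)
  }
}
```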

Example source: org.jvnet.hudson.hadoop/hadoop-core

/**
 * Return true if the given {@link QueueManager.QueueOperation} can be 
 * performed by the specified user on the given queue.
 * 
 * An operation is allowed if all users are provided access for this
 * operation, or if either the user or any of the groups specified is
 * provided access.
 * 
 * @param queueName Queue on which the operation needs to be performed. 
 * @param oper The operation to perform
 * @param ugi The user and groups who wish to perform the operation.
 * 
 * @return true if the operation is allowed, false otherwise.
 */
public synchronized boolean hasAccess(String queueName, QueueOperation oper,
               UserGroupInformation ugi) {
 return hasAccess(queueName, null, oper, ugi);
}
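The Javadoc above defines the access rule: an operation is allowed when the ACL grants access to all users, or when the requesting user or any of their groups is listed. A simplified, self-contained sketch of that rule follows; the class and method names are hypothetical, and the "users groups" string format mirrors Hadoop-style ACL strings. This is not QueueManager's actual implementation.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class QueueAclSketch {
  /**
   * @param acl    ACL string in the "user1,user2 group1,group2" form;
   *               "*" means everyone has access.
   * @param user   short user name of the caller
   * @param groups groups the caller belongs to
   */
  public static boolean isAllowed(String acl, String user, Set<String> groups) {
    acl = acl.trim();
    if (acl.equals("*")) {
      return true;                       // all users are granted access
    }
    // Split into the user list and the (optional) group list.
    String[] parts = acl.split("\\s+", 2);
    Set<String> users = new HashSet<>(Arrays.asList(parts[0].split(",")));
    if (users.contains(user)) {
      return true;                       // the user is listed explicitly
    }
    if (parts.length > 1) {
      for (String g : parts[1].split(",")) {
        if (groups.contains(g)) {
          return true;                   // one of the user's groups is listed
        }
      }
    }
    return false;
  }

  public static void main(String[] args) {
    Set<String> groups = new HashSet<>(Arrays.asList("group1"));
    System.out.println(isAllowed("u1,u2 admins", "u1", groups)); // true
    System.out.println(isAllowed("u3 admins", "u1", groups));    // false
    System.out.println(isAllowed("*", "anyone", groups));        // true
  }
}
```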

Example source: org.apache.hadoop/hadoop-mapred-test

hadoopConfProps.put(toFullPropertyName(
  "default", submitAcl), ugi.getUserName());
hadoopConfProps.put(toFullPropertyName("q1", submitAcl), "u1");
hadoopConfProps.put(toFullPropertyName("q2", submitAcl), "*");
hadoopConfProps.put(toFullPropertyName(
  "default", adminAcl), ugi.getUserName());
hadoopConfProps.put(toFullPropertyName("q1", adminAcl), "u2");
hadoopConfProps.put(toFullPropertyName("q2", adminAcl), "*");
QueueManager queueManager = new QueueManager(conf);
assertTrue(queueManager.hasAccess("default", QueueACL.SUBMIT_JOB, ugi));
assertFalse("User Job Submission failed.",
  queueManager.hasAccess("q1", QueueACL.SUBMIT_JOB, ugi));
assertTrue("User Job Submission failed.",
  queueManager.hasAccess("q2", QueueACL.SUBMIT_JOB, ugi));
assertTrue(queueManager.hasAccess("default", QueueACL.ADMINISTER_JOBS, ugi));
assertFalse("User Job Submission failed.",
  queueManager.hasAccess("q1", QueueACL.ADMINISTER_JOBS, ugi));

Example source: org.apache.hadoop/hadoop-mapred

/**
 * Dumps the configuration of the queue hierarchy.
 * @param out the writer object to which the dump is written
 * @throws IOException
 */
static void dumpConfiguration(Writer out, Configuration conf) throws IOException {
 dumpConfiguration(out, null, conf);
}

Example source: org.apache.hadoop/hadoop-mapred-test

/**
 * Test to verify that the scheduling information per queue in the
 * {@link QueueManager} is retained across queue-refresh.
 * 
 * @throws Exception
 */
@Test
public void testSchedulingInfoAfterRefresh()
  throws Exception {
 JobQueueInfo[] queues = getSimpleQueueHierarchy();
 // write the configuration file
 writeQueueConfigurationFile(
   QUEUES_CONFIG_FILE_PATH, new JobQueueInfo[] { queues[0] });
 QueueManager qManager = new QueueManager();
 // Set some scheduling information for the queues in the QueueManager.
 for (String qName : qManager.getLeafQueueNames()) {
  qManager.setSchedulerInfo(qName, new String(
    "scheduling-information-for-queue-" + qName));
 }
 qManager.refreshQueues(null, null);
 // Verify that the scheduling information is retained across refresh.
 for (String qName : qManager.getLeafQueueNames()) {
  assertEquals("scheduling-information-for-queue-" + qName,
    qManager.getSchedulerInfo(qName));
 }
}

Example source: org.apache.hadoop/hadoop-mapred-test

createSimpleDocumentWithAcls(doc);
writeToFile(doc, QUEUES_CONFIG_FILE_PATH);
QueueManager qm = new QueueManager(QUEUES_CONFIG_FILE_PATH, true);
assertTrue(qm.hasAccess("p1" + NAME_SEPARATOR + "p12",
  QueueACL.SUBMIT_JOB, ugi));
ugi = createUGI("u2");
assertTrue(qm.hasAccess("p1" + NAME_SEPARATOR + "p12",
  QueueACL.ADMINISTER_JOBS, ugi));
assertTrue(qm.hasAccess("p1" + NAME_SEPARATOR + "p11",
  QueueACL.SUBMIT_JOB, ugi));
ugi = createUGI("u2");
assertTrue(qm.hasAccess("p1" + NAME_SEPARATOR + "p11",
  QueueACL.ADMINISTER_JOBS, ugi));
assertFalse(qm.hasAccess("p1" + NAME_SEPARATOR + "p13",
  QueueACL.SUBMIT_JOB, ugi));
ugi = createUGI("u2");
assertFalse(qm.hasAccess("p1" + NAME_SEPARATOR + "p13",
  QueueACL.ADMINISTER_JOBS, ugi));
assertTrue(qm.isRunning("p1" + NAME_SEPARATOR + "p13"));

Example source: io.hops/hadoop-mapreduce-client-core

/**
 * Initialize the queue-manager with the queue hierarchy specified by the
 * given {@link QueueConfigurationParser}.
 * 
 * @param cp
 */
private void initialize(QueueConfigurationParser cp) {
 this.root = cp.getRoot();
 leafQueues.clear();
 allQueues.clear();
 //At this point we have root populated
 //update data structures leafNodes.
 leafQueues = getRoot().getLeafQueues();
 allQueues.putAll(getRoot().getInnerQueues());
 allQueues.putAll(leafQueues);
 LOG.info("AllQueues : " + allQueues + "; LeafQueues : " + leafQueues);
}
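initialize() above splits the parsed queue tree into leaf queues (the only ones that accept jobs) and inner queues, with leaf names fully qualified using ':' as described in the introduction. The traversal can be sketched stand-alone as follows, with a hypothetical Node class standing in for Hadoop's Queue:

```java
import java.util.ArrayList;
import java.util.List;

public class QueueTreeSketch {
  // Minimal stand-in for Hadoop's Queue node: a name plus child queues.
  static class Node {
    final String name;
    final List<Node> children = new ArrayList<>();
    Node(String name) { this.name = name; }
    Node add(Node child) { children.add(child); return this; }
  }

  /** Collects fully-qualified leaf names such as "q1:q2:q3". */
  static void collectLeaves(Node node, String prefix, List<String> out) {
    String qualified = prefix.isEmpty() ? node.name : prefix + ":" + node.name;
    if (node.children.isEmpty()) {
      out.add(qualified);              // leaf queue: jobs may be submitted here
    } else {
      for (Node c : node.children) {   // inner queue: recurse into children
        collectLeaves(c, qualified, out);
      }
    }
  }

  public static void main(String[] args) {
    Node q1 = new Node("q1")
        .add(new Node("q2").add(new Node("q3")))
        .add(new Node("q4"));
    List<String> leaves = new ArrayList<>();
    collectLeaves(q1, "", leaves);
    System.out.println(leaves);  // [q1:q2:q3, q1:q4]
  }
}
```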

Example source: org.apache.hadoop/hadoop-mapred-test

/**
 * @param originalQManager
 * @throws Exception
 */
private void testRefreshFailureWithChangeOfHierarchy(
  QueueManager originalQManager)
  throws Exception {
 // Make sure that isHierarchySame returns false.
 QueueManager modifiedQueueManager = new QueueManager();
 assertFalse("Hierarchy changed after refresh!",
   originalQManager.getRoot().isHierarchySameAs(
     modifiedQueueManager.getRoot()));
 // Refresh the QueueManager and make sure it fails.
 try {
  originalQManager.refreshQueues(null, null);
  fail("Queue-refresh should have failed!");
 } catch (Exception e) {
  // Refresh failed as expected. Check the error message.
  assertTrue(
    "Exception message should point to a change in queue hierarchy!",
    e.getMessage().contains(
      QueueManager.MSG_REFRESH_FAILURE_WITH_CHANGE_OF_HIERARCHY));
 }
 // Make sure that the old configuration is retained.
 List<JobQueueInfo> rootQueues =
   originalQManager.getRoot().getJobQueueInfo().getChildren();
 assertTrue(rootQueues.size() == 1);
}

Example source: io.hops/hadoop-mapreduce-client-core

if (queue.getAcls() != null) {
 submitJobList =
  queue.getAcls().get(toFullPropertyName(queue.getName(),
    QueueACL.SUBMIT_JOB.getAclName()));
 administerJobsList =
  queue.getAcls().get(toFullPropertyName(queue.getName(),
    QueueACL.ADMINISTER_JOBS.getAclName()));
}
dumpGenerator.writeStartArray();
if (childQueues != null && childQueues.size() > 0) {
 dumpConfiguration(dumpGenerator, childQueues);
}

Example source: org.jvnet.hudson.hadoop/hadoop-core

/**
 * Construct a new QueueManager using configuration specified in the passed
 * in {@link org.apache.hadoop.conf.Configuration} object.
 * 
 * @param conf Configuration object where queue configuration is specified.
 */
public QueueManager(Configuration conf) {
 queueNames = new TreeSet<String>();
 aclsMap = new HashMap<String, ACL>();
 schedulerInfoObjects = new HashMap<String, Object>();
 initialize(conf);
}

Example source: org.apache.hadoop/hadoop-mapred

@Override
public synchronized void start() throws IOException {
 super.start();
 QueueManager queueManager = taskTrackerManager.getQueueManager();
 String queueName = queueManager.getJobQueueInfos()[0].getQueueName();
 queueManager.setSchedulerInfo(queueName,
   "Maximum Tasks Per Job :: " + String.valueOf(maxTasksPerJob));
}

Example source: org.apache.hadoop/hadoop-mapred-test

public void testSchedulerInfo() {
 JobConf conf = new JobConf();
 conf.set(DeprecatedQueueConfigurationParser.MAPRED_QUEUE_NAMES_KEY,
   "qq1,qq2");
 QueueManager qMgr = new QueueManager(conf);
 qMgr.setSchedulerInfo("qq1", "queueInfoForqq1");
 qMgr.setSchedulerInfo("qq2", "queueInfoForqq2");
 assertEquals(qMgr.getSchedulerInfo("qq2"), "queueInfoForqq2");
 assertEquals(qMgr.getSchedulerInfo("qq1"), "queueInfoForqq1");
}

Example source: org.apache.hadoop/hadoop-mapred-test

@Test
public void testhasAccessForParent() throws Exception {
 deleteQueuesConfigFile();
 Document doc = createDocument();
 createSimpleDocument(doc);
 writeToFile(doc, QUEUES_CONFIG_FILE_PATH);
 QueueManager qm = new QueueManager(QUEUES_CONFIG_FILE_PATH, true);
 UserGroupInformation ugi = createUGI("u1");
 assertFalse(qm.hasAccess("p1", QueueACL.SUBMIT_JOB, ugi));
}
