Usage of the java.util.concurrent.LinkedBlockingDeque.put() method, with code examples

x33g5p2x · reposted 2022-01-23 in: Other

This article collects code examples of the java.util.concurrent.LinkedBlockingDeque.put() method in Java, showing how LinkedBlockingDeque.put() is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of the LinkedBlockingDeque.put() method are as follows:
Package path: java.util.concurrent.LinkedBlockingDeque
Class name: LinkedBlockingDeque
Method name: put

About LinkedBlockingDeque.put

put(E e) inserts the specified element at the end (tail) of the deque, waiting if necessary for space to become available when the deque is capacity-bounded; it is equivalent to putLast(E). It throws InterruptedException if the thread is interrupted while waiting, and NullPointerException if the element is null.
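Before the collected snippets, here is a minimal, self-contained sketch (class and variable names are my own, not from any of the projects below) illustrating the blocking behavior: with a capacity-1 deque, the second put() waits until a consumer thread frees a slot.

```java
import java.util.concurrent.LinkedBlockingDeque;

public class PutDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded deque with capacity 1, so the second put() must wait.
        LinkedBlockingDeque<String> deque = new LinkedBlockingDeque<>(1);
        deque.put("first");            // inserts at the tail, same as putLast()

        // A consumer thread frees a slot after a short delay,
        // which unblocks the producer's second put().
        Thread consumer = new Thread(() -> {
            try {
                Thread.sleep(100);
                System.out.println("took: " + deque.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        deque.put("second");           // blocks until the consumer takes "first"
        consumer.join();
        System.out.println("head now: " + deque.peek());
    }
}
```

Running this prints "took: first" followed by "head now: second": the main thread cannot complete its second put() until the consumer's take() has made room.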

Code examples

Code example source: origin: apache/incubator-gobblin

private void onSuccess(ProducerJob job, List<String[]> results, LinkedBlockingDeque<String[]> responseQueue,
   ConcurrentLinkedDeque<ProducerJob> pagesToRetry) {
  int size = results.size();
  if (size == GoogleWebmasterClient.API_ROW_LIMIT) {
   List<? extends ProducerJob> granularJobs = job.partitionJobs();
   if (granularJobs.isEmpty()) {
    //The job is not divisible
    //TODO: 99.99% cases we are good. But what if it happens, what can we do?
    log.warn(String.format(
      "There might be more query data for your job %s. Currently, downloading more than the Google API limit '%d' is not supported.",
      job, GoogleWebmasterClient.API_ROW_LIMIT));
   } else {
    log.info(String.format("Partition current job %s", job));
    pagesToRetry.addAll(granularJobs);
    return;
   }
  }
  log.debug(String.format("Finished %s. Current Queue size: %d. Record size: %d.", job, responseQueue.size(), size));
  try {
   for (String[] r : results) {
    responseQueue.put(r);
   }
  } catch (InterruptedException e) {
   log.error(e.getMessage());
   throw new RuntimeException(e);
  }
}

Code example source: origin: loklak/loklak_server

public static void addScheduler(final TwitterTweet t, final UserEntry u, final boolean dump, boolean priority) {
  try {
    if (priority) {
      try {
        messageQueue.addFirst(new DAO.MessageWrapper(t, u, dump));
      } catch (IllegalStateException ee) {
        // case where the queue is full
        messageQueue.put(new DAO.MessageWrapper(t, u, dump));
      }
    } else {
      messageQueue.put(new DAO.MessageWrapper(t, u, dump));
    }
  } catch (InterruptedException e) {
    DAO.severe(e);
  }
}
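The loklak snippet above captures a useful pattern: addFirst() throws IllegalStateException when a bounded deque is full, so priority messages fall back to a blocking put() at the tail. A standalone sketch of the same idea (the bounded queue and String payload here are stand-ins for loklak's messageQueue and MessageWrapper):

```java
import java.util.concurrent.LinkedBlockingDeque;

public class PriorityEnqueue {
    // Stand-in for loklak's bounded messageQueue.
    static final LinkedBlockingDeque<String> queue = new LinkedBlockingDeque<>(2);

    static void enqueue(String msg, boolean priority) throws InterruptedException {
        if (priority) {
            try {
                queue.addFirst(msg);      // jump the line; throws if the deque is full
            } catch (IllegalStateException full) {
                queue.put(msg);           // fall back to a blocking insert at the tail
            }
        } else {
            queue.put(msg);               // normal messages always go to the tail
        }
    }

    public static void main(String[] args) throws InterruptedException {
        enqueue("a", false);
        enqueue("urgent", true);          // lands at the head of the deque
        System.out.println(queue.take()); // prints "urgent"
        System.out.println(queue.take()); // prints "a"
    }
}
```

Note the trade-off: when the deque is full, the "priority" message loses its priority and is appended at the tail rather than dropped, which matches loklak's choice of preferring delivery over ordering.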

Code example source: origin: apache/flume

@Override
protected Event doTake() throws InterruptedException {
 channelCounter.incrementEventTakeAttemptCount();
 if (takeList.remainingCapacity() == 0) {
  throw new ChannelException("Take list for MemoryTransaction, capacity " +
    takeList.size() + " full, consider committing more frequently, " +
    "increasing capacity, or increasing thread count");
 }
 if (!queueStored.tryAcquire(keepAlive, TimeUnit.SECONDS)) {
  return null;
 }
 Event event;
 synchronized (queueLock) {
  event = queue.poll();
 }
 Preconditions.checkNotNull(event, "Queue.poll returned NULL despite semaphore " +
   "signalling existence of entry");
 takeList.put(event);
 int eventByteSize = (int) Math.ceil(estimateEventSize(event) / byteCapacitySlotSize);
 takeByteCounter += eventByteSize;
 return event;
}

Code example source: origin: blazegraph/database

/**
   * Accept an event. Events are generated by the methods below which update
   * our internal state. Events are ONLY generated if the internal state is
   * changed by the request, and that state change is made atomically while
   * holding the {@link #lock}. This guarantees that we will not see duplicate
   * events arising from duplicate requests.
   * 
   * @param e
   *            The event.
   */
  private void accept(final QuorumEvent e) {
    lock.lock();
    try {
      assertRunning();
//            if (false) {
//                // stack trace so we can see who generated this event.
//                log.warn("event=" + e, new RuntimeException("stack trace"));
//            }
      deque.put(e);
      dequeNotEmpty.signalAll();
    } catch (InterruptedException ex) {
      throw new RuntimeException(ex);
    } finally {
      lock.unlock();
    }
  }

Code example source: origin: com.blazegraph/bigdata-core-test

/**
   * Accept an event. Events are generated by the methods below which update
   * our internal state. Events are ONLY generated if the internal state is
   * changed by the request, and that state change is made atomically while
   * holding the {@link #lock}. This guarantees that we will not see duplicate
   * events arising from duplicate requests.
   * 
   * @param e
   *            The event.
   */
  private void accept(final QuorumEvent e) {
    lock.lock();
    try {
      assertRunning();
//            if (false) {
//                // stack trace so we can see who generated this event.
//                log.warn("event=" + e, new RuntimeException("stack trace"));
//            }
      deque.put(e);
      dequeNotEmpty.signalAll();
    } catch (InterruptedException ex) {
      throw new RuntimeException(ex);
    } finally {
      lock.unlock();
    }
  }

Code example source: origin: org.apache.gobblin/google-ingestion

private void onSuccess(ProducerJob job, List<String[]> results, LinkedBlockingDeque<String[]> responseQueue,
   ConcurrentLinkedDeque<ProducerJob> pagesToRetry) {
  int size = results.size();
  if (size == GoogleWebmasterClient.API_ROW_LIMIT) {
   List<? extends ProducerJob> granularJobs = job.partitionJobs();
   if (granularJobs.isEmpty()) {
    //The job is not divisible
    //TODO: 99.99% cases we are good. But what if it happens, what can we do?
    log.warn(String.format(
      "There might be more query data for your job %s. Currently, downloading more than the Google API limit '%d' is not supported.",
      job, GoogleWebmasterClient.API_ROW_LIMIT));
   } else {
    log.info(String.format("Partition current job %s", job));
    pagesToRetry.addAll(granularJobs);
    return;
   }
  }
  log.debug(String.format("Finished %s. Current Queue size: %d. Record size: %d.", job, responseQueue.size(), size));
  try {
   for (String[] r : results) {
    responseQueue.put(r);
   }
  } catch (InterruptedException e) {
   log.error(e.getMessage());
   throw new RuntimeException(e);
  }
}

Code example source: origin: ch.cern.hadoop/hadoop-common

@Override
 public Boolean call() throws Exception {
  Compressor c = CodecPool.getCompressor(codec);
  queue.put(c);
  return c != null;
 }
};

Code example source: origin: ch.cern.hadoop/hadoop-common

@Override
 public Boolean call() throws Exception {
  Decompressor c = CodecPool.getDecompressor(codec);
  queue.put(c);
  return c != null;
 }
};

Code example source: origin: com.github.jiayuhan-it/hadoop-common

@Override
 public Boolean call() throws Exception {
  Compressor c = CodecPool.getCompressor(codec);
  queue.put(c);
  return c != null;
 }
};

Code example source: origin: com.github.jiayuhan-it/hadoop-common

@Override
 public Boolean call() throws Exception {
  Decompressor c = CodecPool.getDecompressor(codec);
  queue.put(c);
  return c != null;
 }
};

Code example source: origin: io.syndesis/db-logging

json.remove("step");
logsController.eventQueue.put(batch -> {

Code example source: origin: org.apache.flume/flume-ng-core

@Override
protected Event doTake() throws InterruptedException {
 channelCounter.incrementEventTakeAttemptCount();
 if (takeList.remainingCapacity() == 0) {
  throw new ChannelException("Take list for MemoryTransaction, capacity " +
    takeList.size() + " full, consider committing more frequently, " +
    "increasing capacity, or increasing thread count");
 }
 if (!queueStored.tryAcquire(keepAlive, TimeUnit.SECONDS)) {
  return null;
 }
 Event event;
 synchronized (queueLock) {
  event = queue.poll();
 }
 Preconditions.checkNotNull(event, "Queue.poll returned NULL despite semaphore " +
   "signalling existence of entry");
 takeList.put(event);
 int eventByteSize = (int) Math.ceil(estimateEventSize(event) / byteCapacitySlotSize);
 takeByteCounter += eventByteSize;
 return event;
}

Code example source: origin: apache/asterixdb

@org.junit.Test
public void testFixedSizeSubscribtion() {
  try {
    ActiveProperties afp = Mockito.mock(ActiveProperties.class);
    Mockito.when(afp.getMemoryComponentGlobalBudget()).thenReturn(FEED_MEM_BUDGET);
    ConcurrentFramePool fmm =
        new ConcurrentFramePool("TestNode", afp.getMemoryComponentGlobalBudget(), DEFAULT_FRAME_SIZE);
    int i = 0;
    ByteBuffer buffer = ByteBuffer.allocate(DEFAULT_FRAME_SIZE);
    LinkedBlockingDeque<ByteBuffer> buffers = new LinkedBlockingDeque<>();
    FrameAction frameAction = new FrameAction();
    frameAction.setFrame(buffer);
    while (!fmm.subscribe(frameAction)) {
      buffers.put(frameAction.retrieve());
      i++;
    }
    // One subscriber.
    // Check that all frames have been consumed
    Assert.assertEquals(i, NUM_FRAMES);
    // Release a frame (That will be handed out to the subscriber)
    fmm.release(buffers.take());
    // Check that all frames have been consumed (since the released frame have been handed to the consumer)
    Assert.assertEquals(0, fmm.remaining());
  } catch (Throwable th) {
    th.printStackTrace();
    Assert.fail(th.getMessage());
  } finally {
    Assert.assertNull(cause);
  }
}

Code example source: origin: apache/activemq-artemis

failureQueue.put(false);

Code example source: origin: apache/asterixdb

frameAction.setFrame(buffer);
while (!fmm.subscribe(frameAction)) {
  buffers.put(frameAction.retrieve());
  i++;
}

Code example source: origin: stackoverflow.com

stack.put(e);
