Usage of the org.elasticsearch.action.bulk.BulkRequest.requests() method, with code examples

x33g5p2x · reposted 2022-01-16

This article collects Java code examples of the org.elasticsearch.action.bulk.BulkRequest.requests() method, showing how BulkRequest.requests() is used in practice. The examples are drawn from selected open-source projects on GitHub, Stack Overflow, Maven, and similar platforms, so they should be useful references. Details of BulkRequest.requests() are as follows:

Package path: org.elasticsearch.action.bulk.BulkRequest
Class name: BulkRequest
Method name: requests

About BulkRequest.requests

Requests that are part of this request. It is only possible to add things that are both ActionRequests and WriteRequests to this, but Java doesn't support syntax to declare that everything in the array has both types, so we declare the one that requires the fewest casts.
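Note that `requests()` returns the request's live backing list, which is why several of the examples below mutate it directly with `clear()` or `subList(...).clear()`. The behavior they rely on is ordinary `java.util.List` view semantics; a minimal stdlib sketch (a plain `ArrayList` standing in for the BulkRequest internals):

```java
import java.util.ArrayList;
import java.util.List;

public class LiveListDemo {
    // Drops the first (dropIndex + 1) entries from the backing list, the way
    // request.requests().subList(0, dropIndex + 1).clear() drops already-handled items.
    static List<String> dropThrough(List<String> requests, int dropIndex) {
        // Clearing a subList view removes that range from the backing list.
        requests.subList(0, dropIndex + 1).clear();
        return requests;
    }

    public static void main(String[] args) {
        List<String> requests = new ArrayList<>(List.of("index#1", "index#2", "delete#3", "index#4"));
        System.out.println(dropThrough(requests, 1)); // [delete#3, index#4]

        // Mirrors request.requests().clear(): empties the bulk in place.
        requests.clear();
        System.out.println(requests.isEmpty()); // true
    }
}
```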

Code examples

Code example from: SonarSource/sonarqube

message.append("Bulk[");
Multiset<BulkRequestKey> groupedRequests = LinkedHashMultiset.create();
for (int i = 0; i < bulkRequest.requests().size(); i++) {
 DocWriteRequest item = bulkRequest.requests().get(i);
 String requestType;
 if (item instanceof IndexRequest) {

Code example from: richardwilly98/elasticsearch-river-mongodb

@Override
public void beforeBulk(long executionId, BulkRequest request) {
  checkBulkProcessorAvailability();
  logger.trace("beforeBulk - new bulk [{}] of items [{}]", executionId, request.numberOfActions());
  if (flushBulkProcessor.get()) {
    logger.trace("About to flush bulk request index[{}] - type[{}]", index, type);
    int dropCollectionIndex = findLastDropCollection(request.requests());
    request.requests().subList(0, dropCollectionIndex + 1).clear();
    try {
      dropRecreateMapping();
      deletedDocuments.set(0);
      updatedDocuments.set(0);
      insertedDocuments.set(0);
      flushBulkProcessor.set(false);
    } catch (Throwable t) {
      logger.error("Drop collection operation failed", t);
      MongoDBRiverHelper.setRiverStatus(client, definition.getRiverName(), Status.IMPORT_FAILED);
      request.requests().clear();
      bulkProcessor.close();
      river.close();
    }
  }
}

Code example from: richardwilly98/elasticsearch-river-mongodb

@Override
public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
  if (failure.getClass().equals(ActionRequestValidationException.class)) {
    if (logger.isTraceEnabled()) {
      logger.trace("Ignore ActionRequestValidationException : {}", failure);
    }
  } else {
    logger.error("afterBulk - Bulk request failed: {} - {} - {}", executionId, request, failure);
    MongoDBRiverHelper.setRiverStatus(client, definition.getRiverName(), Status.IMPORT_FAILED);
    request.requests().clear();
    bulkProcessor.close();
    river.close();
  }
}

Code example from: richardwilly98/elasticsearch-river-mongodb

@Override
  public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
    if (response.hasFailures()) {
      logger.error("Bulk processor failed. {}", response.buildFailureMessage());
      MongoDBRiverHelper.setRiverStatus(client, definition.getRiverName(), Status.IMPORT_FAILED);
      request.requests().clear();
      bulkProcessor.close();
      river.close();
    } else {
      documentCount.addAndGet(response.getItems().length);
      logStatistics(response.getTookInMillis());
      deletedDocuments.set(0);
      updatedDocuments.set(0);
      insertedDocuments.set(0);
      if (logger.isTraceEnabled()) {
        logger.trace("afterBulk - bulk [{}] success [{} items] [{} ms] total [{}]", executionId, response.getItems().length,
            response.getTookInMillis(), documentCount.get());
      }
    }
  }
};

Code example from: apache/flink

@Override
  public Object answer(InvocationOnMock invocationOnMock) throws Throwable {
    while (nextBulkRequest.numberOfActions() > 0) {
      // wait until we are allowed to continue with the flushing
      flushLatch.await();
      // create a copy of the accumulated mock requests, so that
      // re-added requests from the failure handler are included in the next bulk
      BulkRequest currentBulkRequest = nextBulkRequest;
      nextBulkRequest = new BulkRequest();
      listener.beforeBulk(123L, currentBulkRequest);
      if (nextBulkFailure == null) {
        BulkItemResponse[] mockResponses = new BulkItemResponse[currentBulkRequest.requests().size()];
        for (int i = 0; i < currentBulkRequest.requests().size(); i++) {
          Throwable mockItemFailure = mockItemFailuresList.get(i);
          if (mockItemFailure == null) {
            // the mock response for the item is success
            mockResponses[i] = new BulkItemResponse(i, "opType", mock(ActionResponse.class));
          } else {
            // the mock response for the item is failure
            mockResponses[i] = new BulkItemResponse(i, "opType", new BulkItemResponse.Failure("index", "type", "id", mockItemFailure));
          }
        }
        listener.afterBulk(123L, currentBulkRequest, new BulkResponse(mockResponses, 1000L));
      } else {
        listener.afterBulk(123L, currentBulkRequest, nextBulkFailure);
      }
    }
    return null;
  }
}).when(mockBulkProcessor).flush();

Code example from: floragunncom/search-guard

count = ""+((BulkRequest) request).requests().size();

Code example from: org.elasticsearch/elasticsearch

@Override
public DocWriteRequest next() {
  return bulkRequest.requests().get(++currentSlot);
}

Code example from: org.elasticsearch/elasticsearch

@Override
public boolean hasNext() {
  return (currentSlot + 1) < bulkRequest.requests().size();
}

Code example from: floragunncom/search-guard

for (DocWriteRequest ar : ((BulkRequest) request).requests()) {
  result = getOrReplaceAllIndices(ar, provider, false) && result;
}

Code example from: org.elasticsearch/elasticsearch

BulkRequestModifier(BulkRequest bulkRequest) {
  this.bulkRequest = bulkRequest;
  this.failedSlots = new SparseFixedBitSet(bulkRequest.requests().size());
  this.itemResponses = new ArrayList<>(bulkRequest.requests().size());
}

Code example from: spring-projects/spring-data-elasticsearch

for (DocWriteRequest<?> action : bulkRequest.requests()) {
  DocWriteRequest.OpType opType = action.opType();

Code example from: org.elasticsearch/elasticsearch

private BulkRequest createBulkRequestForRetry(BulkResponse bulkItemResponses) {
  BulkRequest requestToReissue = new BulkRequest();
  int index = 0;
  for (BulkItemResponse bulkItemResponse : bulkItemResponses.getItems()) {
    if (bulkItemResponse.isFailed()) {
      requestToReissue.add(currentBulkRequest.requests().get(index));
    }
    index++;
  }
  return requestToReissue;
}
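The retry pattern above reduces to: walk the per-item responses in order, and re-add the request at the same index for every failure. A minimal stdlib sketch of that selection step (plain lists and a boolean flag standing in for `BulkItemResponse.isFailed()`; the names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class RetrySelection {
    // Keeps requests whose response at the same index failed, preserving order,
    // like createBulkRequestForRetry() does with BulkItemResponse.isFailed().
    static <T> List<T> failedOnly(List<T> requests, List<Boolean> failed) {
        List<T> toReissue = new ArrayList<>();
        for (int i = 0; i < requests.size(); i++) {
            if (failed.get(i)) {
                toReissue.add(requests.get(i));
            }
        }
        return toReissue;
    }

    public static void main(String[] args) {
        List<String> requests = List.of("r0", "r1", "r2", "r3");
        List<Boolean> failed = List.of(false, true, false, true);
        System.out.println(failedOnly(requests, failed)); // [r1, r3]
    }
}
```

The index-by-index walk matters: the Bulk API guarantees that `getItems()` is in the same order as the submitted requests, which is what makes the parallel iteration valid.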

Code example from: tomoya92/pybbs

public void bulkDocument(String type, Map<String, Map<String, Object>> sources) {
 try {
  if (this.instance() == null) return;
  BulkRequest requests = new BulkRequest();
  Iterator<String> it = sources.keySet().iterator();
  int count = 0;
  while(it.hasNext()) {
   count++;
   String next = it.next();
   IndexRequest request = new IndexRequest(name, type, next);
   request.source(sources.get(next));
   requests.add(request);
   if (count % 1000 == 0) {
    client.bulk(requests, RequestOptions.DEFAULT);
    requests.requests().clear();
    count = 0;
   }
  }
  if (requests.numberOfActions() > 0) client.bulk(requests, RequestOptions.DEFAULT);
 } catch (IOException e) {
  log.error(e.getMessage());
 }
}
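The pybbs examples flush every 1000 actions and then send whatever remains, which keeps each bulk request bounded in size. The partitioning logic can be sketched with the stdlib alone (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchFlushDemo {
    // Splits items into batches of at most batchSize, the way bulkDocument()
    // sends a bulk every 1000 actions and then flushes the remainder.
    static <T> List<List<T>> batches(List<T> items, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        List<T> current = new ArrayList<>();
        for (T item : items) {
            current.add(item);
            if (current.size() == batchSize) { // mirrors "count % 1000 == 0" -> client.bulk(...)
                out.add(current);
                current = new ArrayList<>();   // mirrors requests.requests().clear()
            }
        }
        if (!current.isEmpty()) {              // mirrors the trailing numberOfActions() > 0 flush
            out.add(current);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 2500; i++) ids.add(i);
        List<List<Integer>> flushes = batches(ids, 1000);
        System.out.println(flushes.size());        // 3
        System.out.println(flushes.get(2).size()); // 500
    }
}
```

One caveat on the original pattern: clearing `requests()` empties the action list but may not reset other internal accounting on the reused `BulkRequest` in every client version, so allocating a fresh `BulkRequest` per batch is arguably the safer variant.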

Code example from: tomoya92/pybbs

public void bulkDeleteDocument(String type, List<Integer> ids) {
 try {
  if (this.instance() == null) return;
  BulkRequest requests = new BulkRequest();
  int count = 0;
  for (Integer id: ids) {
   count++;
   DeleteRequest request = new DeleteRequest(name, type, String.valueOf(id));
   requests.add(request);
   if (count % 1000 == 0) {
    client.bulk(requests, RequestOptions.DEFAULT);
    requests.requests().clear();
    count = 0;
   }
  }
  if (requests.numberOfActions() > 0) client.bulk(requests, RequestOptions.DEFAULT);
 } catch (IOException e) {
  log.error(e.getMessage());
 }
}

Code example from: org.elasticsearch/elasticsearch

void markCurrentItemAsFailed(Exception e) {
  IndexRequest indexRequest = getIndexWriteRequest(bulkRequest.requests().get(currentSlot));
  // We hit a error during preprocessing a request, so we:
  // 1) Remember the request item slot from the bulk, so that we're done processing all requests we know what failed
  // 2) Add a bulk item failure for this request
  // 3) Continue with the next request in the bulk.
  failedSlots.set(currentSlot);
  BulkItemResponse.Failure failure = new BulkItemResponse.Failure(indexRequest.index(), indexRequest.type(),
    indexRequest.id(), e);
  itemResponses.add(new BulkItemResponse(currentSlot, indexRequest.opType(), failure));
}

Code example from: org.elasticsearch/elasticsearch

BulkRequest getBulkRequest() {
  if (itemResponses.isEmpty()) {
    return bulkRequest;
  } else {
    BulkRequest modifiedBulkRequest = new BulkRequest();
    modifiedBulkRequest.setRefreshPolicy(bulkRequest.getRefreshPolicy());
    modifiedBulkRequest.waitForActiveShards(bulkRequest.waitForActiveShards());
    modifiedBulkRequest.timeout(bulkRequest.timeout());
    int slot = 0;
    List<DocWriteRequest<?>> requests = bulkRequest.requests();
    originalSlots = new int[requests.size()]; // oversize, but that's ok
    for (int i = 0; i < requests.size(); i++) {
      DocWriteRequest request = requests.get(i);
      if (failedSlots.get(i) == false) {
        modifiedBulkRequest.add(request);
        originalSlots[slot++] = i;
      }
    }
    return modifiedBulkRequest;
  }
}
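The `originalSlots` bookkeeping above lets the modifier map each surviving request in the smaller bulk back to its position in the original one, so item responses can later be merged into the right slots. The core of that mapping can be sketched with a `BitSet` of failed slots (stdlib only; names are illustrative):

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

public class SlotMapping {
    // For each surviving (non-failed) slot, records its original position -
    // the originalSlots array built inside getBulkRequest().
    static int[] survivorSlots(int total, BitSet failedSlots) {
        List<Integer> slots = new ArrayList<>();
        for (int i = 0; i < total; i++) {
            if (!failedSlots.get(i)) {
                slots.add(i);
            }
        }
        return slots.stream().mapToInt(Integer::intValue).toArray();
    }

    public static void main(String[] args) {
        BitSet failed = new BitSet();
        failed.set(1);
        failed.set(3);
        int[] mapping = survivorSlots(5, failed);
        // Surviving requests 0, 1, 2 in the modified bulk came from original slots 0, 2, 4.
        System.out.println(java.util.Arrays.toString(mapping)); // [0, 2, 4]
    }
}
```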

Code example from: org.elasticsearch/elasticsearch

void processBulkIndexIngestRequest(Task task, BulkRequest original, ActionListener<BulkResponse> listener) {
  long ingestStartTimeInNanos = System.nanoTime();
  BulkRequestModifier bulkRequestModifier = new BulkRequestModifier(original);
  ingestService.executeBulkRequest(() -> bulkRequestModifier,
    (indexRequest, exception) -> {
      logger.debug(() -> new ParameterizedMessage("failed to execute pipeline [{}] for document [{}/{}/{}]",
        indexRequest.getPipeline(), indexRequest.index(), indexRequest.type(), indexRequest.id()), exception);
      bulkRequestModifier.markCurrentItemAsFailed(exception);
    }, (exception) -> {
      if (exception != null) {
        logger.error("failed to execute pipeline for a bulk request", exception);
        listener.onFailure(exception);
      } else {
        long ingestTookInMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - ingestStartTimeInNanos);
        BulkRequest bulkRequest = bulkRequestModifier.getBulkRequest();
        ActionListener<BulkResponse> actionListener = bulkRequestModifier.wrapActionListenerIfNeeded(ingestTookInMillis,
          listener);
        if (bulkRequest.requests().isEmpty()) {
          // at this stage, the transport bulk action can't deal with a bulk request with no requests,
          // so we stop and send an empty response back to the client.
          // (this will happen if pre-processing all items in the bulk failed)
          actionListener.onResponse(new BulkResponse(new BulkItemResponse[0], 0));
        } else {
          doExecute(task, bulkRequest, actionListener);
        }
      }
    },
    indexRequest -> bulkRequestModifier.markCurrentItemAsDropped());
}

Code example from: org.elasticsearch/elasticsearch

void markCurrentItemAsDropped() {
  IndexRequest indexRequest = getIndexWriteRequest(bulkRequest.requests().get(currentSlot));
  failedSlots.set(currentSlot);
  itemResponses.add(
    new BulkItemResponse(currentSlot, indexRequest.opType(),
      new UpdateResponse(
        new ShardId(indexRequest.index(), IndexMetaData.INDEX_UUID_NA_VALUE, 0),
        indexRequest.type(), indexRequest.id(), indexRequest.version(), DocWriteResponse.Result.NOOP
      )
    )
  );
}

Code example from: com.strapdata.elasticsearch/elasticsearch

private BulkRequest createBulkRequestForRetry(BulkResponse bulkItemResponses) {
  BulkRequest requestToReissue = new BulkRequest();
  int index = 0;
  for (BulkItemResponse bulkItemResponse : bulkItemResponses.getItems()) {
    if (bulkItemResponse.isFailed()) {
      requestToReissue.add(currentBulkRequest.requests().get(index));
    }
    index++;
  }
  return requestToReissue;
}

Code example from: org.apache.servicemix.bundles/org.apache.servicemix.bundles.elasticsearch

private BulkRequest createBulkRequestForRetry(BulkResponse bulkItemResponses) {
  BulkRequest requestToReissue = new BulkRequest();
  int index = 0;
  for (BulkItemResponse bulkItemResponse : bulkItemResponses.getItems()) {
    if (bulkItemResponse.isFailed()) {
      requestToReissue.add(currentBulkRequest.requests().get(index));
    }
    index++;
  }
  return requestToReissue;
}
