Usage of the java.util.stream.Stream.forEachOrdered() method, with code examples

Reposted by x33g5p2x on 2022-01-16, filed under Other

This article collects code examples of the Java method java.util.stream.Stream.forEachOrdered(), demonstrating how Stream.forEachOrdered() is used in practice. The examples come mainly from platforms such as GitHub, Stack Overflow, and Maven, extracted from selected projects, so they are fairly representative and should serve as useful references. Details of Stream.forEachOrdered() are as follows:
Package: java.util.stream
Class name: Stream
Method name: forEachOrdered

Introduction to Stream.forEachOrdered

Performs an action for each element of this stream, in the encounter order of the stream if the stream has a defined encounter order.

This is a terminal operation.

This operation processes the elements one at a time, in encounter order if one exists. Performing the action for one element happens-before performing the action for subsequent elements, but for any given element, the action may be performed in whatever thread the library chooses.
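To make the ordering guarantee above concrete, here is a minimal sketch (the class name `ForEachOrderedDemo` is our own, not taken from any of the quoted projects) contrasting forEachOrdered with forEach on a parallel stream:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.stream.IntStream;

public class ForEachOrderedDemo {
    public static void main(String[] args) {
        // forEachOrdered on a parallel stream still visits elements in
        // encounter order, at the cost of reduced parallelism.
        List<Integer> ordered = new CopyOnWriteArrayList<>();
        IntStream.rangeClosed(1, 5).parallel().boxed()
                .forEachOrdered(ordered::add);
        System.out.println(ordered); // always [1, 2, 3, 4, 5]

        // forEach on the same parallel stream gives no ordering guarantee;
        // the element order here may vary between runs.
        List<Integer> unordered = new CopyOnWriteArrayList<>();
        IntStream.rangeClosed(1, 5).parallel().boxed()
                .forEach(unordered::add);
        System.out.println(unordered);
    }
}
```

On a sequential stream the two methods behave identically; the difference only becomes visible once the stream is parallel.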

Code examples

Code example source: neo4j/neo4j

private void processElements( RoundEnvironment roundEnv )
{
  Set<? extends Element> functions = roundEnv.getElementsAnnotatedWith( supportedAnnotationType );
  visitedElements.addAll( functions );
  functions.stream().flatMap( this::validate ).forEachOrdered( messagePrinter::print );
}

Code example source: Vedenin/useful-java-links

private static void testForEach() {
  System.out.println();
  System.out.println("For each start");
  Collection<String> collection = Arrays.asList("a1", "a2", "a3", "a1");
  // Print debug information for every element
  System.out.print("forEach = ");
  collection.stream().map(String::toUpperCase).forEach((e) -> System.out.print(e + ",")); // print  forEach = A1,A2,A3,A1,
  System.out.println();
  Collection<StringBuilder> list = Arrays.asList(new StringBuilder("a1"), new StringBuilder("a2"), new StringBuilder("a3"));
  list.stream().forEachOrdered((p) -> p.append("_new"));
  System.out.println("forEachOrdered = " + list); // print  forEachOrdered = [a1_new, a2_new, a3_new]
}

Code example source: apache/flink

private static Collection<OptionWithMetaInfo> findCommonOptions(String rootDir, String module, String packageName, String pathPrefix) throws IOException, ClassNotFoundException {
  Collection<OptionWithMetaInfo> commonOptions = new ArrayList<>(32);
  processConfigOptions(rootDir, module, packageName, pathPrefix, optionsClass -> extractConfigOptions(optionsClass).stream()
    .filter(optionWithMetaInfo -> optionWithMetaInfo.field.getAnnotation(Documentation.CommonOption.class) != null)
    .forEachOrdered(commonOptions::add));
  return commonOptions;
}

Code example source: apache/hbase

/**
 * Populate the Queue with given HFiles
 */
private void populateLoadQueue(Deque<LoadQueueItem> ret, Map<byte[], List<Path>> map) {
 map.forEach((k, v) -> v.stream().map(p -> new LoadQueueItem(k, p)).forEachOrdered(ret::add));
}

Code example source: google/guava

/**
 * Reads all lines of text from this source, running the given {@code action} for each line as it
 * is read.
 *
 * <p>Like {@link BufferedReader#readLine()}, this method considers a line to be a sequence of
 * text that is terminated by (but does not include) one of {@code \r\n}, {@code \r} or {@code
 * \n}. If the source's content does not end in a line termination sequence, it is treated as if
 * it does.
 *
 * @throws IOException if an I/O error occurs while reading from this source or if {@code action}
 *     throws an {@code UncheckedIOException}
 * @since 22.0
 */
@Beta
public void forEachLine(Consumer<? super String> action) throws IOException {
 try (Stream<String> lines = lines()) {
  // The lines should be ordered regardless in most cases, but use forEachOrdered to be sure
  lines.forEachOrdered(action);
 } catch (UncheckedIOException e) {
  throw e.getCause();
 }
}

Code example source: apache/hbase

private <T> void addToLockedResources(List<LockedResource> lockedResources,
  Map<T, LockAndQueue> locks, Function<T, String> keyTransformer,
  LockedResourceType resourcesType) {
 locks.entrySet().stream().filter(e -> e.getValue().isLocked())
  .map(e -> createLockedResource(resourcesType, keyTransformer.apply(e.getKey()), e.getValue()))
  .forEachOrdered(lockedResources::add);
}

Code example source: google/error-prone

@Override
 public Void visitArray(List<? extends AnnotationValue> vals, Void aVoid) {
  vals.stream().forEachOrdered(v -> v.accept(this, null));
  return super.visitArray(vals, aVoid);
 }
},

Code example source: apache/flink

private static void printJobStatusMessages(List<JobStatusMessage> jobs) {
  SimpleDateFormat dateFormat = new SimpleDateFormat("dd.MM.yyyy HH:mm:ss");
  Comparator<JobStatusMessage> startTimeComparator = (o1, o2) -> (int) (o1.getStartTime() - o2.getStartTime());
  Comparator<Map.Entry<JobStatus, List<JobStatusMessage>>> statusComparator =
    (o1, o2) -> String.CASE_INSENSITIVE_ORDER.compare(o1.getKey().toString(), o2.getKey().toString());
  Map<JobStatus, List<JobStatusMessage>> jobsByState = jobs.stream().collect(Collectors.groupingBy(JobStatusMessage::getJobState));
  jobsByState.entrySet().stream()
    .sorted(statusComparator)
    .map(Map.Entry::getValue).flatMap(List::stream).sorted(startTimeComparator)
    .forEachOrdered(job ->
      System.out.println(dateFormat.format(new Date(job.getStartTime()))
        + " : " + job.getJobId() + " : " + job.getJobName()
        + " (" + job.getJobState() + ")"));
}

Code example source: apache/hbase

public HDFSBlocksDistribution getHDFSBlocksDistribution() {
 HDFSBlocksDistribution hdfsBlocksDistribution = new HDFSBlocksDistribution();
 stores.values().stream().filter(s -> s.getStorefiles() != null)
   .flatMap(s -> s.getStorefiles().stream()).map(HStoreFile::getHDFSBlockDistribution)
   .forEachOrdered(hdfsBlocksDistribution::add);
 return hdfsBlocksDistribution;
}

Code example source: google/error-prone

/** Returns a string descriptor of a method's reference type. */
 private String methodReferenceDescriptor(Types types, MethodSymbol sym) {
  StringBuilder sb = new StringBuilder();
  sb.append(sym.getSimpleName()).append('(');
  if (!sym.isStatic()) {
   sb.append(Signatures.descriptor(sym.owner.type, types));
  }
  sym.params().stream().map(p -> Signatures.descriptor(p.type, types)).forEachOrdered(sb::append);
  sb.append(")");
  return sb.toString();
 }
}

Code example source: apache/hbase

private void logRegionFiles() {
 if (LOG.isTraceEnabled()) {
  LOG.trace(getRegionInfo().getEncodedName() + " : Store files for region: ");
  stores.values().stream().filter(s -> s.getStorefiles() != null)
    .flatMap(s -> s.getStorefiles().stream())
    .forEachOrdered(sf -> LOG.trace(getRegionInfo().getEncodedName() + " : " + sf));
 }
}

Code example source: SonarSource/sonarqube

private void assertLogLevel(LoggerLevel level, String... expected) {
 List<String> logs = logTester.logs(level);
 assertThat(logs).hasSize(expected.length);
 Iterator<String> iterator = logs.iterator();
 Arrays.stream(expected).forEachOrdered(log -> {
  if (log.endsWith(" | time=")) {
   assertThat(iterator.next()).startsWith(log);
  } else {
   assertThat(iterator.next()).isEqualTo(log);
  }
 });
}

Code example source: apache/hbase

@Override
protected void serializeStateData(ProcedureStateSerializer serializer) throws IOException {
 super.serializeStateData(serializer);
 ReopenTableRegionsStateData.Builder builder = ReopenTableRegionsStateData.newBuilder()
  .setTableName(ProtobufUtil.toProtoTableName(tableName));
 regions.stream().map(ProtobufUtil::toRegionLocation).forEachOrdered(builder::addRegion);
 serializer.serialize(builder.build());
}

Code example source: apache/hbase

private TableDescriptor getBasic3FamilyHTableDescriptor(TableName tableName) {
 TableDescriptorBuilder builder = TableDescriptorBuilder.newBuilder(tableName);
 Arrays.stream(TEST_FAMILY).map(ColumnFamilyDescriptorBuilder::of)
   .forEachOrdered(builder::setColumnFamily);
 return builder.build();
}

Code example source: apache/hbase

public static TableDescriptor createTableDescriptor(TableName tableName, byte[]... family) {
 TableDescriptorBuilder builder = TableDescriptorBuilder.newBuilder(tableName);
 Stream.of(family).map(ColumnFamilyDescriptorBuilder::of)
  .forEachOrdered(builder::setColumnFamily);
 return builder.build();
}

Code example source: apache/hbase

private static TableDescriptor createHTableDescriptor(final int regionNum,
  final int numFamilies) {
 TableDescriptorBuilder builder =
   TableDescriptorBuilder.newBuilder(TableName.valueOf(TABLE_NAME + ":" + regionNum));
 IntStream.range(0, numFamilies)
   .mapToObj(i -> ColumnFamilyDescriptorBuilder.of(FAMILY_PREFIX + i))
   .forEachOrdered(builder::setColumnFamily);
 return builder.build();
}

Code example source: SonarSource/sonarqube

private static void validateFilterKeys(String... metricKeys) {
  ProjectMeasuresQuery query = new ProjectMeasuresQuery();
  Arrays.stream(metricKeys).forEachOrdered(metricKey -> query.addMetricCriterion(create(metricKey, ProjectMeasuresQuery.Operator.LT, 80d)));
  ProjectMeasuresQueryValidator.validate(query);
 }
}

Code example source: apache/hbase

private static WAL.Entry generateEdit(int i, RegionInfo hri, TableName tableName, byte[] row,
  int columnCount, long timestamp, MultiVersionConcurrencyControl mvcc) {
 WALKeyImpl key = new WALKeyImpl(hri.getEncodedNameAsBytes(), tableName, i, timestamp,
   HConstants.DEFAULT_CLUSTER_ID, mvcc);
 WALEdit edit = new WALEdit();
 int prefix = i;
 IntStream.range(0, columnCount).mapToObj(j -> toValue(prefix, j))
   .map(value -> new KeyValue(row, row, row, timestamp, value)).forEachOrdered(edit::add);
 return new WAL.Entry(key, edit);
}

Code example source: apache/hbase

private TableDescriptor getTableDesc(TableName tableName, byte[]... families) {
 TableDescriptorBuilder builder = TableDescriptorBuilder.newBuilder(tableName);
 Arrays.stream(families).map(
  f -> ColumnFamilyDescriptorBuilder.newBuilder(f).setMaxVersions(Integer.MAX_VALUE).build())
   .forEachOrdered(builder::setColumnFamily);
 return builder.build();
}

Code example source: apache/hbase

private TableDescriptor getTableDesc(TableName tableName, int regionReplication,
  byte[]... families) {
 TableDescriptorBuilder builder =
   TableDescriptorBuilder.newBuilder(tableName).setRegionReplication(regionReplication);
 Arrays.stream(families).map(family -> ColumnFamilyDescriptorBuilder.newBuilder(family)
   .setMaxVersions(Integer.MAX_VALUE).build()).forEachOrdered(builder::setColumnFamily);
 return builder.build();
}
