org.apache.flink.api.java.operators.MapOperator.partitionByHash(): usage and code examples

x33g5p2x · reposted 2022-01-25

This article collects Java code examples for org.apache.flink.api.java.operators.MapOperator.partitionByHash(), showing how the method is used in practice. The examples were extracted from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and are intended as a practical reference. Details of MapOperator.partitionByHash() are as follows:

Package path: org.apache.flink.api.java.operators.MapOperator
Class name: MapOperator
Method name: partitionByHash

About MapOperator.partitionByHash

The original page provides no description. In Flink's DataSet API, partitionByHash(int... fields) (inherited from DataSet) hash-partitions the data set on the specified tuple fields and returns a PartitionOperator, so that records with equal key values end up in the same parallel partition. Overloads taking field names or a KeySelector also exist.
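To illustrate the contract behind partitionByHash (equal keys always land in the same parallel partition), here is a plain-Java sketch of the usual hash-modulo assignment scheme. This is illustrative only: the class and method names are hypothetical, and Flink's actual implementation uses its own hashing internally, not Object.hashCode().

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class HashPartitionSketch {

    // Assign a record to a partition by hashing its key field.
    // Math.floorMod keeps the result non-negative even when the
    // hash code is negative.
    static int partitionFor(Object key, int numPartitions) {
        return Math.floorMod(Objects.hashCode(key), numPartitions);
    }

    public static void main(String[] args) {
        int parallelism = 4;
        List<String> keys = Arrays.asList("Hi", "Hello", "Hello", "Comme");

        // Group the keys by their assigned partition.
        Map<Integer, List<String>> partitions = new HashMap<>();
        for (String k : keys) {
            partitions.computeIfAbsent(partitionFor(k, parallelism),
                                       p -> new ArrayList<>()).add(k);
        }

        // Equal keys are deterministically mapped to the same partition,
        // which is what makes a subsequent groupBy on the same fields cheap.
        System.out.println(partitionFor("Hello", parallelism)
                == partitionFor("Hello", parallelism)); // true
        System.out.println(partitions);
    }
}
```

This determinism is exactly why the examples below chain partitionByHash(...) with groupBy(...) on the same fields: after the shuffle, each group is fully local to one parallel instance.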

Code examples

Code example origin: apache/flink

@Test
public void testHashPartitionByKeyField2() throws Exception {
  /*
   * Test hash partition by key field
   */
  final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
  DataSet<Tuple3<Integer, Long, String>> ds = CollectionDataSets.get3TupleDataSet(env);
  AggregateOperator<Tuple3<Integer, Long, String>> sum = ds
    .map(new PrefixMapper())
    .partitionByHash(1, 2)
    .groupBy(1, 2)
    .sum(0);
  List<Tuple3<Integer, Long, String>> result = sum.collect();
  String expected = "(1,1,Hi)\n" +
    "(5,2,Hello)\n" +
    "(4,3,Hello)\n" +
    "(5,3,I am )\n" +
    "(6,3,Luke )\n" +
    "(34,4,Comme)\n" +
    "(65,5,Comme)\n" +
    "(111,6,Comme)";
  compareResultAsText(result, expected);
}

Code example origin: dbs-leipzig/gradoop

/**
 * creates the initial working set from the edge candidates
 *
 * @return initial working set with the expand embeddings
 */
private DataSet<ExpandEmbedding> preProcess() {
 if (direction == ExpandDirection.IN) {
  candidateEdges = candidateEdges
   .map(new ReverseEdgeEmbedding())
   .name(getName() + " - Reverse Edges");
 }
 this.candidateEdgeTuples = candidateEdges
  .map(new ExtractKeyedCandidateEdges())
  .name(getName() + " - Create candidate edge tuples")
  .partitionByHash(0)
  .name(getName() + " - Partition edge tuples");
 return input.join(candidateEdgeTuples, joinHint)
  .where(new ExtractExpandColumn(expandColumn)).equalTo(0)
  .with(new CreateExpandEmbedding(
   distinctVertexColumns,
   distinctEdgeColumns,
   closingColumn
  ))
  .name(getName() + " - Initial expansion");
}

Code example origin: org.gradoop/gradoop-flink (identical to the dbs-leipzig/gradoop example above, not repeated here)
