Usage and code examples of the org.apache.calcite.rel.RelNode class


This article collects a number of Java code examples for the org.apache.calcite.rel.RelNode class and shows how the class is used in practice. The examples come from selected projects on GitHub/Stack Overflow/Maven and should serve as a useful reference. Details of the RelNode class:
Package: org.apache.calcite.rel
Class name: RelNode

Introduction to RelNode

A RelNode is a relational expression.

Relational expressions process data, so their names are typically verbs: Sort, Join, Project, Filter, Scan, Sample.

A relational expression is not a scalar expression; see org.apache.calcite.sql.SqlNode and RexNode.

If this type of relational expression has some particular planner rules, it should implement the public static method AbstractRelNode#register.

When a relational expression comes to be implemented, the system allocates a org.apache.calcite.plan.RelImplementor to manage the process. Every implementable relational expression has a RelTraitSet describing its physical attributes. The RelTraitSet always contains a Convention describing how the expression passes data to its consuming relational expression, but may contain other traits, including some applied externally. Because traits can be applied externally, implementations of RelNode should never assume the size or contents of their trait set (beyond those traits configured by the RelNode itself).

For each calling-convention, there is a corresponding sub-interface of RelNode. For example, org.apache.calcite.adapter.enumerable.EnumerableRel has operations to manage the conversion to a graph of org.apache.calcite.adapter.enumerable.EnumerableConvention calling-convention, and it interacts with an EnumerableRelImplementor.

A relational expression is only required to implement its calling-convention's interface when it is actually implemented, that is, converted into a plan/program. This means that relational expressions which cannot be implemented, such as converters, are not required to implement their convention's interface.

Every relational expression must derive from AbstractRelNode. (Why have the RelNode interface, then? We need a root interface, because an interface can only derive from an interface.)
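In practice, a RelNode tree is usually constructed through org.apache.calcite.tools.RelBuilder rather than by instantiating the classes directly. Below is a minimal sketch that builds and prints a Scan-Filter-Project tree; the rootSchema variable and the EMP table with its DEPTNO/ENAME columns are assumptions made for illustration:

import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.RelBuilder;

// Build EMP -> Filter(DEPTNO = 10) -> Project(ENAME) and dump the operator tree.
// `rootSchema` is a SchemaPlus that is assumed to contain an EMP table.
FrameworkConfig config = Frameworks.newConfigBuilder()
    .defaultSchema(rootSchema)
    .build();
RelBuilder builder = RelBuilder.create(config);
RelNode rel = builder
    .scan("EMP")
    .filter(builder.equals(builder.field("DEPTNO"), builder.literal(10)))
    .project(builder.field("ENAME"))
    .build();
System.out.println(RelOptUtil.toString(rel));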

Code examples

Code example source: apache/hive

public static ExprNodeDesc getExprNode(Integer inputRefIndx, RelNode inputRel,
  ExprNodeConverter exprConv) {
 ExprNodeDesc exprNode = null;
 RexNode rexInputRef = new RexInputRef(inputRefIndx, inputRel.getRowType()
   .getFieldList().get(inputRefIndx).getType());
 exprNode = rexInputRef.accept(exprConv);
 return exprNode;
}

Code example source: apache/hive

private static ExprNodeDesc convertToExprNode(RexNode rn, RelNode inputRel, String tabAlias,
    Set<Integer> vcolsInCalcite) {
 return rn.accept(new ExprNodeConverter(tabAlias, inputRel.getRowType(), vcolsInCalcite,
   inputRel.getCluster().getTypeFactory(), true));
}

Code example source: apache/hive

public RelNode align(RelNode rel, List<RelFieldCollation> collations) {
 ImmutableList.Builder<RelNode> newInputs = new ImmutableList.Builder<>();
 for (RelNode input : rel.getInputs()) {
  newInputs.add(dispatchAlign(input, ImmutableList.<RelFieldCollation>of()));
 }
 return rel.copy(rel.getTraitSet(), newInputs.build());
}

Code example source: apache/kylin

public void fixSharedOlapTableScanAt(RelNode parent, int ordinalInParent) {
  OLAPTableScan copy = copyTableScanIfNeeded(parent.getInputs().get(ordinalInParent));
  if (copy != null)
    parent.replaceInput(ordinalInParent, copy);
}

Code example source: apache/hive

private static void replaceEmptyGroupAggr(final RelNode rel, RelNode parent) {
  // If this function is called, the parent should only include constant
  List<RexNode> exps = parent.getChildExps();
  for (RexNode rexNode : exps) {
   if (!rexNode.accept(new HiveCalciteUtil.ConstantFinder())) {
    throw new RuntimeException("We expect " + parent.toString()
      + " to contain only constants. However, " + rexNode.toString() + " is "
      + rexNode.getKind());
   }
  }
  HiveAggregate oldAggRel = (HiveAggregate) rel;
  RelDataTypeFactory typeFactory = oldAggRel.getCluster().getTypeFactory();
  RelDataType longType = TypeConverter.convert(TypeInfoFactory.longTypeInfo, typeFactory);
  RelDataType intType = TypeConverter.convert(TypeInfoFactory.intTypeInfo, typeFactory);
  // Create the dummy aggregation.
  SqlAggFunction countFn = SqlFunctionConverter.getCalciteAggFn("count", false,
    ImmutableList.of(intType), longType);
  // TODO: Using 0 might be wrong; might need to walk down to find the
  // proper index of a dummy.
  List<Integer> argList = ImmutableList.of(0);
  AggregateCall dummyCall = new AggregateCall(countFn, false, argList, longType, null);
  Aggregate newAggRel = oldAggRel.copy(oldAggRel.getTraitSet(), oldAggRel.getInput(),
    oldAggRel.indicator, oldAggRel.getGroupSet(), oldAggRel.getGroupSets(),
    ImmutableList.of(dummyCall));
  RelNode select = introduceDerivedTable(newAggRel);
  parent.replaceInput(0, select);
}

Code example source: apache/hive

/**
 * Creates a LogicalAggregate that removes all duplicates from the result of
 * an underlying relational expression.
 *
 * @param rel underlying rel
 * @return rel implementing SingleValueAgg
 */
public static RelNode createSingleValueAggRel(
  RelOptCluster cluster,
  RelNode rel,
  RelFactories.AggregateFactory aggregateFactory) {
 // assert (rel.getRowType().getFieldCount() == 1);
 final int aggCallCnt = rel.getRowType().getFieldCount();
 final List<AggregateCall> aggCalls = new ArrayList<>();
 for (int i = 0; i < aggCallCnt; i++) {
  aggCalls.add(
    AggregateCall.create(
      SqlStdOperatorTable.SINGLE_VALUE, false, false,
      ImmutableList.of(i), -1, 0, rel, null, null));
 }
 return aggregateFactory.createAggregate(rel, false, ImmutableBitSet.of(), null, aggCalls);
}
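For comparison, a similar single-value aggregate can be produced with stock Calcite's RelBuilder. A hedged sketch, assuming builder is a RelBuilder that already has the input relation pushed on its stack:

// Wrap every field of the input in SINGLE_VALUE, with no grouping columns.
List<RelBuilder.AggCall> calls = new ArrayList<>();
for (int i = 0; i < builder.peek().getRowType().getFieldCount(); i++) {
 calls.add(builder.aggregateCall(SqlStdOperatorTable.SINGLE_VALUE, builder.field(i)));
}
RelNode agg = builder.aggregate(builder.groupKey(), calls).build();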

Code example source: apache/hive

private Frame(RelNode rel) {
 this(rel, ImmutableList.of(Pair.of(deriveAlias(rel), rel.getRowType())));
}

Code example source: apache/hive

private RelNode copyNodeScan(RelNode scan) {
  final RelNode newScan;
  if (scan instanceof DruidQuery) {
   final DruidQuery dq = (DruidQuery) scan;
   // Ideally we should use HiveRelNode convention. However, since Volcano planner
   // throws in that case because DruidQuery does not implement the interface,
   // we set it as Bindable. Currently, we do not use convention in Hive, hence that
   // should be fine.
   // TODO: If we want to make use of convention (e.g., while directly generating operator
   // tree instead of AST), this should be changed.
   newScan = DruidQuery.create(optCluster, optCluster.traitSetOf(BindableConvention.INSTANCE),
     scan.getTable(), dq.getDruidTable(),
     ImmutableList.<RelNode>of(dq.getTableScan()));
  } else {
   newScan = new HiveTableScan(optCluster, optCluster.traitSetOf(HiveRelNode.CONVENTION),
     (RelOptHiveTable) scan.getTable(), ((RelOptHiveTable) scan.getTable()).getName(),
     null, false, false);
  }
  return newScan;
}

Code example source: apache/hive

/** Creates a group key with grouping sets, both identified by field positions
 * in the underlying relational expression.
 *
 * <p>This method of creating a group key does not allow you to group on new
 * expressions, only column projections, but is efficient, especially when you
 * are coming from an existing {@link Aggregate}. */
public GroupKey groupKey(ImmutableBitSet groupSet, boolean indicator,
             ImmutableList<ImmutableBitSet> groupSets) {
 if (groupSet.length() > peek().getRowType().getFieldCount()) {
  throw new IllegalArgumentException("out of bounds: " + groupSet);
 }
 if (groupSets == null) {
  groupSets = ImmutableList.of(groupSet);
 }
 final ImmutableList<RexNode> nodes =
   fields(ImmutableIntList.of(groupSet.toArray()));
 final List<ImmutableList<RexNode>> nodeLists =
   Lists.transform(groupSets,
     new Function<ImmutableBitSet, ImmutableList<RexNode>>() {
      public ImmutableList<RexNode> apply(ImmutableBitSet input) {
       return fields(ImmutableIntList.of(input.toArray()));
      }
     });
 return groupKey(nodes, indicator, nodeLists);
}
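From the caller's side, the resulting group key is fed straight into aggregate. A minimal sketch (the EMP table and DEPTNO column are hypothetical):

// GROUP BY DEPTNO with a COUNT(*) aliased as C.
RelNode agg = builder
  .scan("EMP")
  .aggregate(builder.groupKey("DEPTNO"),
    builder.count(false, "C"))
  .build();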

Code example source: apache/incubator-druid

private PlannerResult planExplanation(
   final RelNode rel,
   final SqlExplain explain,
   final Set<String> datasourceNames
 )
 {
  final String explanation = RelOptUtil.dumpPlan("", rel, explain.getFormat(), explain.getDetailLevel());
  final Supplier<Sequence<Object[]>> resultsSupplier = Suppliers.ofInstance(
    Sequences.simple(ImmutableList.of(new Object[]{explanation})));
  final RelDataTypeFactory typeFactory = rel.getCluster().getTypeFactory();
  return new PlannerResult(
    resultsSupplier,
    typeFactory.createStructType(
      ImmutableList.of(Calcites.createSqlType(typeFactory, SqlTypeName.VARCHAR)),
      ImmutableList.of("PLAN")
    ),
    datasourceNames
  );
}

Code example source: apache/hive

// Excerpt from HiveSemiJoinRule#onMatch; join, left, aggregate, topOperator, topRefs,
// joinInfo and newCondition come from the surrounding context, which the original
// excerpt truncates.
LOG.debug("Matched HiveSemiJoinRule");
final RelOptCluster cluster = join.getCluster();
final RexBuilder rexBuilder = cluster.getRexBuilder();
final ImmutableBitSet rightBits =
  ImmutableBitSet.range(left.getRowType().getFieldCount(),
             join.getRowType().getFieldCount());
if (topRefs.intersects(rightBits)) {
 return;
}
if (!joinInfo.rightSet().equals(
  ImmutableBitSet.range(aggregate.getGroupCount()))) {
 call.transformTo(topOperator.copy(topOperator.getTraitSet(), ImmutableList.of(left)));
 return;
}
Join rightJoin = (Join) ((HepRelVertex) aggregate.getInput()).getCurrentRel();
List<RexNode> projects = new ArrayList<>();
for (int i = 0; i < rightJoin.getRowType().getFieldCount(); i++) {
 projects.add(rexBuilder.makeInputRef(rightJoin, i));
}
RelNode topProject = call.builder().push(rightJoin)
  .project(projects, rightJoin.getRowType().getFieldNames(), true) // trailing args truncated in the original excerpt; `true` (force) assumed
  .build();
RelNode semi = call.builder().push(left).push(aggregate.getInput()).semiJoin(newCondition).build();
call.transformTo(topOperator.copy(topOperator.getTraitSet(), ImmutableList.of(semi)));

Code example source: apache/hive

public void onMatch(RelOptRuleCall call) {
 final HiveFilter filter = call.rel(0);
 final HiveSortLimit sort = call.rel(1);
 final RelNode newFilter = filter.copy(sort.getInput().getTraitSet(),
     ImmutableList.<RelNode>of(sort.getInput()));
 final HiveSortLimit newSort = sort.copy(sort.getTraitSet(),
     newFilter, sort.collation, sort.offset, sort.fetch);
 call.transformTo(newSort);
}

Code example source: apache/hive

// Excerpt from Hive's subquery validation; subQueryDesc and rexNodeLhs come from the
// surrounding context, which the original excerpt truncates.
if (subQueryDesc.getRexSubQuery().getRowType().getFieldCount() > 1) {
 throw new CalciteSubquerySemanticException(ErrorMsg.INVALID_SUBQUERY_EXPRESSION.getMsg(
     "SubQuery can contain only 1 item in Select List."));
}
// ... construction of rexSubQuery elided in the original excerpt; it wraps the
// left-hand operand, e.g. ImmutableList.<RexNode>of(rexNodeLhs)
return rexSubQuery;

Code example source: apache/hive

// Excerpt from a sort/limit "pull up constants" rule; rexBuilder, constants, parent
// and inputs come from the surrounding context, which the original excerpt truncates.
final int count = sort.getInput().getRowType().getFieldCount();
if (count == 1) {
 // No room for optimization: the rule bails out here (body truncated in the excerpt).
 return;
}
List<RelDataTypeField> fields = sort.getInput().getRowType().getFieldList();
List<Pair<RexNode, String>> newChildExprs = new ArrayList<>();
List<RexNode> topChildExprs = new ArrayList<>();
List<String> topChildExprsFields = new ArrayList<>();
for (int i = 0; i < count; i++) {
 RexNode expr = rexBuilder.makeInputRef(sort.getInput(), i);
 RelDataTypeField field = fields.get(i);
 if (constants.containsKey(expr)) {
  // Constant field: materialize it only in the top-level project.
  topChildExprs.add(constants.get(expr));
  topChildExprsFields.add(field.getName());
 } else {
  // Non-constant field: keep it below the sort and reference it on top.
  newChildExprs.add(Pair.<RexNode, String>of(expr, field.getName()));
  topChildExprs.add(expr);
  topChildExprsFields.add(field.getName());
 }
}
// The original excerpt elides building the new sort; the surviving fields are
// remapped through a permutation (assignment target reconstructed):
Mappings.TargetMapping mapping =
  RelOptUtil.permutation(Pair.left(newChildExprs), sort.getInput().getRowType()).inverse();
List<RelFieldCollation> fieldCollations = new ArrayList<>();
for (RelFieldCollation fc : sort.getCollation().getFieldCollations()) {
 // ... collation remapping elided in the original excerpt
}
// Swap the rewritten input into the parent's inputs and fire the transformed parent.
for (RelNode child : parent.getInputs()) {
 if (!((HepRelVertex) child).getCurrentRel().equals(sort)) {
  inputs.add(child);
 }
}
call.transformTo(parent.copy(parent.getTraitSet(), inputs));

Code example source: apache/hive

public static HiveTableFunctionScan createUDTFForSetOp(RelOptCluster cluster, RelNode input)
  throws SemanticException {
 RelTraitSet traitSet = TraitsUtil.getDefaultTraitSet(cluster);
 List<RexNode> originalInputRefs = Lists.transform(input.getRowType().getFieldList(),
   new Function<RelDataTypeField, RexNode>() {
    @Override
    public RexNode apply(RelDataTypeField input) {
     return new RexInputRef(input.getIndex(), input.getType());
    }
   });
 ImmutableList.Builder<RelDataType> argTypeBldr = ImmutableList.<RelDataType> builder();
 for (int i = 0; i < originalInputRefs.size(); i++) {
  argTypeBldr.add(originalInputRefs.get(i).getType());
 }
 RelDataType retType = input.getRowType();
 String funcName = "replicate_rows";
 FunctionInfo fi = FunctionRegistry.getFunctionInfo(funcName);
 SqlOperator calciteOp = SqlFunctionConverter.getCalciteOperator(funcName, fi.getGenericUDTF(),
   argTypeBldr.build(), retType);
 // Hive UDTF only has a single input
 List<RelNode> list = new ArrayList<>();
 list.add(input);
 RexNode rexNode = cluster.getRexBuilder().makeCall(calciteOp, originalInputRefs);
 return HiveTableFunctionScan.create(cluster, traitSet, list, rexNode, null, retType, null);
}

Code example source: apache/hive

/** Returns references to the fields of a given input. */
public ImmutableList<RexNode> fields(int inputCount, int inputOrdinal) {
 final RelNode input = peek(inputCount, inputOrdinal);
 final RelDataType rowType = input.getRowType();
 final ImmutableList.Builder<RexNode> nodes = ImmutableList.builder();
 for (int fieldOrdinal : Util.range(rowType.getFieldCount())) {
  nodes.add(field(inputCount, inputOrdinal, fieldOrdinal));
 }
 return nodes.build();
}
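The two-argument field addressing used above matters when more than one input sits on the builder's stack, e.g. while forming a join condition. A hedged sketch with hypothetical EMP and DEPT tables:

// Two inputs on the stack: ordinal 0 = EMP (left), ordinal 1 = DEPT (right).
RelNode join = builder
  .scan("EMP")
  .scan("DEPT")
  .join(JoinRelType.INNER,
    builder.equals(builder.field(2, 0, "DEPTNO"),
      builder.field(2, 1, "DEPTNO")))
  .build();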

Code example source: apache/hive

/**
 * TODO: 1) isSamplingPred 2) sampleDesc 3) isSortedFilter
 */
OpAttr visit(HiveFilter filterRel) throws SemanticException {
 OpAttr inputOpAf = dispatch(filterRel.getInput());
 if (LOG.isDebugEnabled()) {
  LOG.debug("Translating operator rel#" + filterRel.getId() + ":" + filterRel.getRelTypeName()
    + " with row type: [" + filterRel.getRowType() + "]");
 }
 ExprNodeDesc filCondExpr = filterRel.getCondition().accept(
   new ExprNodeConverter(inputOpAf.tabAlias, filterRel.getInput().getRowType(), inputOpAf.vcolsInCalcite,
     filterRel.getCluster().getTypeFactory(), true));
 FilterDesc filDesc = new FilterDesc(filCondExpr, false);
 ArrayList<ColumnInfo> cinfoLst = createColInfos(inputOpAf.inputs.get(0));
 FilterOperator filOp = (FilterOperator) OperatorFactory.getAndMakeChild(filDesc,
   new RowSchema(cinfoLst), inputOpAf.inputs.get(0));
 if (LOG.isDebugEnabled()) {
  LOG.debug("Generated " + filOp + " with row schema: [" + filOp.getSchema() + "]");
 }
 return inputOpAf.clone(filOp);
}

Code example source: apache/hive

private RelNode projectLeftOuterSide(RelNode srcRel, int numColumns) throws SemanticException {
 RowResolver iRR = relToHiveRR.get(srcRel);
 RowResolver oRR = new RowResolver();
 RowResolver.add(oRR, iRR, numColumns);
 List<RexNode> calciteColLst = new ArrayList<RexNode>();
 List<String> oFieldNames = new ArrayList<String>();
 RelDataType iType = srcRel.getRowType();
 for (int i = 0; i < iType.getFieldCount(); i++) {
  RelDataTypeField fType = iType.getFieldList().get(i);
  String fName = iType.getFieldNames().get(i);
  calciteColLst.add(cluster.getRexBuilder().makeInputRef(fType.getType(), i));
  oFieldNames.add(fName);
 }
 HiveRelNode selRel = HiveProject.create(srcRel, calciteColLst, oFieldNames);
 this.relToHiveColNameCalcitePosMap.put(selRel, buildHiveToCalciteColumnMap(oRR, selRel));
 this.relToHiveRR.put(selRel, oRR);
 return selRel;
}

Code example source: apache/hive

public String generateSql() {
 SqlDialect dialect = getJdbcDialect();
 final HiveJdbcImplementor jdbcImplementor =
   new HiveJdbcImplementor(dialect,
     (JavaTypeFactory) getCluster().getTypeFactory());
 Project topProject;
 if (getInput() instanceof Project) {
  topProject = (Project) getInput();
 } else {
  // If it is not a project operator, we add it on top of the input
  // to force generating the column names instead of * while
  // translating to SQL
  RelNode nodeToTranslate = getInput();
  RexBuilder builder = getCluster().getRexBuilder();
  List<RexNode> projects = new ArrayList<>(
    nodeToTranslate.getRowType().getFieldList().size());
  for (int i = 0; i < nodeToTranslate.getRowType().getFieldCount(); i++) {
   projects.add(builder.makeInputRef(nodeToTranslate, i));
  }
  topProject = new JdbcProject(nodeToTranslate.getCluster(),
    nodeToTranslate.getTraitSet(), nodeToTranslate,
    projects, nodeToTranslate.getRowType());
 }
 final HiveJdbcImplementor.Result result =
   jdbcImplementor.translate(topProject);
 return result.asStatement().toSqlString(dialect).getSql();
}

Code example source: apache/hive

private RelNode createFirstGB(RelNode input, boolean left, RelOptCluster cluster,
  RexBuilder rexBuilder) throws CalciteSemanticException {
 final List<RexNode> gbChildProjLst = Lists.newArrayList();
 final List<Integer> groupSetPositions = Lists.newArrayList();
 for (int cInd = 0; cInd < input.getRowType().getFieldList().size(); cInd++) {
  gbChildProjLst.add(rexBuilder.makeInputRef(input, cInd));
  groupSetPositions.add(cInd);
 }
 if (left) {
  gbChildProjLst.add(rexBuilder.makeBigintLiteral(new BigDecimal(2)));
 } else {
  gbChildProjLst.add(rexBuilder.makeBigintLiteral(new BigDecimal(1)));
 }
 // also add the last VCol
 groupSetPositions.add(input.getRowType().getFieldList().size());
 // create the project before GB
 RelNode gbInputRel = HiveProject.create(input, gbChildProjLst, null);
 // groupSetPosition includes all the positions
 final ImmutableBitSet groupSet = ImmutableBitSet.of(groupSetPositions);
 List<AggregateCall> aggregateCalls = Lists.newArrayList();
 RelDataType aggFnRetType = TypeConverter.convert(TypeInfoFactory.longTypeInfo,
   cluster.getTypeFactory());
 AggregateCall aggregateCall = HiveCalciteUtil.createSingleArgAggCall("count", cluster,
   TypeInfoFactory.longTypeInfo, input.getRowType().getFieldList().size(), aggFnRetType);
 aggregateCalls.add(aggregateCall);
 return new HiveAggregate(cluster, cluster.traitSetOf(HiveRelNode.CONVENTION), gbInputRel,
   groupSet, null, aggregateCalls);
}
