Usage and code examples of the org.apache.calcite.rel.type.RelDataTypeFactory class


This article collects code examples for the Java class org.apache.calcite.rel.type.RelDataTypeFactory and shows how the class is used in practice. The examples are extracted from selected open-source projects found on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the RelDataTypeFactory class are as follows:
Package: org.apache.calcite.rel.type
Class name: RelDataTypeFactory

About RelDataTypeFactory

RelDataTypeFactory is a factory for datatype descriptors. It defines methods for instantiating and combining SQL, Java, and collection types. The factory also provides methods for return type inference for arithmetic in cases where SQL 2003 is implementation-defined or impractical.

This interface is an example of the org.apache.calcite.util.Glossary#ABSTRACT_FACTORY_PATTERN. Any implementation of RelDataTypeFactory must ensure that type objects are canonical: two types are equal if and only if they are represented by the same Java object. This reduces memory consumption and comparison cost.
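Before turning to the project excerpts below, here is a minimal, self-contained sketch of the factory API. It is not taken from any of the projects cited in this article; it assumes a recent Calcite release in which SqlTypeFactoryImpl and RelDataTypeSystem.DEFAULT are available:

import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.rel.type.RelDataTypeSystem;
import org.apache.calcite.sql.type.SqlTypeFactoryImpl;
import org.apache.calcite.sql.type.SqlTypeName;

public class RelDataTypeFactoryDemo {
 public static void main(String[] args) {
  // SqlTypeFactoryImpl is the standard factory implementation shipped with Calcite.
  RelDataTypeFactory typeFactory = new SqlTypeFactoryImpl(RelDataTypeSystem.DEFAULT);

  // Basic SQL types; by default they are NOT NULL.
  RelDataType bigint = typeFactory.createSqlType(SqlTypeName.BIGINT);
  RelDataType varchar20 = typeFactory.createSqlType(SqlTypeName.VARCHAR, 20);

  // Derive a nullable variant of an existing type.
  RelDataType nullableBigint = typeFactory.createTypeWithNullability(bigint, true);

  // Compose a struct (row) type with the builder.
  RelDataType rowType = typeFactory.builder()
    .add("id", bigint)
    .add("name", varchar20).nullable(true)
    .build();

  System.out.println(rowType.getFullTypeString());
  // Types are canonical: equal types are represented by the same object.
  System.out.println(bigint == typeFactory.createSqlType(SqlTypeName.BIGINT)); // true
  System.out.println(nullableBigint.isNullable());                             // true
 }
}

The createStructType(fieldTypes, fieldNames) calls that appear throughout the excerpts below are the non-builder equivalent of the builder shown here.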

Code examples

Code example source: apache/hive

public List<RelDataType> getParameterTypes(RelDataTypeFactory typeFactory) {
 return ImmutableList.of(
   typeFactory.createTypeWithNullability(
     typeFactory.createSqlType(SqlTypeName.ANY), true));
}

Code example source: apache/incubator-druid

private PlannerResult planExplanation(
   final RelNode rel,
   final SqlExplain explain,
   final Set<String> datasourceNames
 )
 {
  final String explanation = RelOptUtil.dumpPlan("", rel, explain.getFormat(), explain.getDetailLevel());
  final Supplier<Sequence<Object[]>> resultsSupplier = Suppliers.ofInstance(
    Sequences.simple(ImmutableList.of(new Object[]{explanation})));
  final RelDataTypeFactory typeFactory = rel.getCluster().getTypeFactory();
  return new PlannerResult(
    resultsSupplier,
    typeFactory.createStructType(
      ImmutableList.of(Calcites.createSqlType(typeFactory, SqlTypeName.VARCHAR)),
      ImmutableList.of("PLAN")
    ),
    datasourceNames
  );
 }
}

Code example source: apache/kylin

@Override
public RelDataType deriveRowType() {
  final List<RelDataTypeField> fieldList = table.getRowType().getFieldList();
  final RelDataTypeFactory.FieldInfoBuilder builder = getCluster().getTypeFactory().builder();
  for (int field : fields) {
    builder.add(fieldList.get(field));
  }
  return getCluster().getTypeFactory().createStructType(builder);
}

Code example source: apache/hive

public RelDataType getReturnType(RelDataTypeFactory typeFactory) {
 return typeFactory.createTypeWithNullability(
   typeFactory.createSqlType(SqlTypeName.ANY), true);
}

Code example source: apache/hive

private RexNode makeCast(SqlTypeName typeName, final RexNode child) {
 RelDataType sqlType = cluster.getTypeFactory().createSqlType(typeName);
 RelDataType nullableType = cluster.getTypeFactory().createTypeWithNullability(sqlType, true);
 return cluster.getRexBuilder().makeCast(nullableType, child);
}

Code example source: Qihoo360/Quicksql

@Override public List<RelProtoDataType> getParams() {
  return ImmutableList.of(
    typeFactory -> typeFactory.createArrayType(
      typeFactory.createSqlType(SqlTypeName.INTEGER), -1),
    typeFactory -> typeFactory.createSqlType(SqlTypeName.INTEGER));
 }
}

Code example source: apache/hive

// Adds a "nullIndicator" literal to the join output and wraps output types with
// createTypeWithNullability(..., true) before building casts and input refs.
int nFields = left.getRowType().getFieldCount();
 ImmutableBitSet allCols = ImmutableBitSet.range(nFields);
        ImmutableList.of(
            Pair.<RexNode, String>of(rexBuilder.makeLiteral(true),
                "nullIndicator")));
int nullIndicatorPos = join.getRowType().getFieldCount() - 1;
        cluster.getTypeFactory().createTypeWithNullability(
            join.getRowType().getFieldList()
                .get(nullIndicatorPos).getType(),
            true));
 joinOutputProjects.add(
     rexBuilder.makeInputRef(
         leftInputFieldType.getFieldList().get(i).getType(), i));
newAggOutputProjectList.add(
    rexBuilder.makeCast(
        cluster.getTypeFactory().createTypeWithNullability(
            newAggOutputProjects.getType(),
            true),
        newAggOutputProjects));

Code example source: apache/hive

// Casts the first projection expression to a nullable version of its own type.
ImmutableList.of(rexBuilder.makeCast(
    cluster.getTypeFactory().createTypeWithNullability(projExprs.get(0).getType(), true),
    projExprs.get(0))),
null, false, relBuilder);

Code example source: apache/incubator-druid

@Test
public void testTimeMinusDayTimeInterval()
{
 final Period period = new Period("P1DT1H1M");
 testExpression(
   rexBuilder.makeCall(
     typeFactory.createSqlType(SqlTypeName.TIMESTAMP),
     SqlStdOperatorTable.MINUS_DATE,
     ImmutableList.of(
       inputRef("t"),
       rexBuilder.makeIntervalLiteral(
         new BigDecimal(period.toStandardDuration().getMillis()), // DAY-TIME literals value is millis
         new SqlIntervalQualifier(TimeUnit.DAY, TimeUnit.MINUTE, SqlParserPos.ZERO)
       )
     )
   ),
   DruidExpression.of(
     null,
     "(\"t\" - 90060000)"
   ),
   DateTimes.of("2000-02-03T04:05:06").minus(period).getMillis()
 );
}

Code example source: apache/hive

// Builds a Druid-backed materialized view scan; the default timestamp column type is
// made NOT NULL via createTypeWithNullability(field.getType(), false).
private static RelNode createMaterializedViewScan(HiveConf conf, Table viewTable) {
 final RexBuilder rexBuilder = new RexBuilder(
   new JavaTypeFactoryImpl(
     new HiveTypeSystemImpl()));
 final RelOptCluster cluster = RelOptCluster.create(planner, rexBuilder);
  RelDataTypeFactory dtFactory = cluster.getRexBuilder().getTypeFactory();
  for (RelDataTypeField field : rowType.getFieldList()) {
   if (DruidTable.DEFAULT_TIMESTAMP_COLUMN.equals(field.getName())) {
    druidColTypes.add(dtFactory.createTypeWithNullability(field.getType(), false));
   } else {
    druidColTypes.add(field.getType());
   druidColNames.add(field.getName());
   if (field.getType().getSqlTypeName() == SqlTypeName.VARCHAR) {
  rowType = dtFactory.createStructType(druidColTypes, druidColNames);
  RelOptHiveTable optTable = new RelOptHiveTable(null, cluster.getTypeFactory(), fullyQualifiedTabName,
    rowType, viewTable, nonPartitionColumns, partitionColumns, new ArrayList<>(),
    conf, new HashMap<>(), new HashMap<>(), new AtomicInteger());
    optTable, viewTable.getTableName(), null, false, false);
  tableRel = DruidQuery.create(cluster, cluster.traitSetOf(BindableConvention.INSTANCE),
    optTable, druidTable, ImmutableList.<RelNode>of(scan), ImmutableMap.of());
 } else {

Code example source: apache/hive

// Builds ROW__ID-based filter conditions against the materialization's write-id list;
// the BIGINT literal type comes from createSqlType(SqlTypeName.BIGINT).
int rowIDPos = tableScan.getTable().getRowType().getField(
  VirtualColumn.ROWID.getName(), false, false).getIndex();
RexNode rowIDFieldAccess = rexBuilder.makeFieldAccess(
  rexBuilder.makeInputRef(tableScan.getTable().getRowType().getFieldList().get(rowIDPos).getType(), rowIDPos),
  0);
relBuilder.push(tableScan);
List<RexNode> conds = new ArrayList<>();
RelDataType bigIntType = relBuilder.getTypeFactory().createSqlType(SqlTypeName.BIGINT);
final RexNode literalHighWatermark = rexBuilder.makeLiteral(
  tableMaterializationTxnList.getHighWatermark(), bigIntType, false);
conds.add(
  rexBuilder.makeCall(
    SqlStdOperatorTable.LESS_THAN_OR_EQUAL,
    ImmutableList.of(rowIDFieldAccess, literalHighWatermark)));
for (long invalidTxn : tableMaterializationTxnList.getInvalidWriteIds()) {
 final RexNode literalInvalidTxn = rexBuilder.makeLiteral(
   rexBuilder.makeCall(
     SqlStdOperatorTable.NOT_EQUALS,
     ImmutableList.of(rowIDFieldAccess, literalInvalidTxn)));

Code example source: apache/hive

// Aggregate rewrite excerpt: nullable call and BIGINT count types come from
// createTypeWithNullability before the SUM(x*x), SUM(x) and COUNT(x) calls are registered.
final RexBuilder rexBuilder = cluster.getRexBuilder();
final RelDataTypeFactory typeFactory = cluster.getTypeFactory();
final RelDataType argOrdinalType = getFieldType(oldAggRel.getInput(), argOrdinal);
final RelDataType oldCallType =
  typeFactory.createTypeWithNullability(oldCall.getType(), true);
      oldCall.isDistinct(),
      ReturnTypes.explicit(sumSquaredReturnType),
      InferTypes.explicit(Collections.singletonList(argSquared.getType())),
    newCalls,
    aggCallMapping,
    ImmutableList.of(sumArgSquaredAggCall.getType()));
    newCalls,
    aggCallMapping,
    ImmutableList.of(sumArgAggCall.getType()));
    SqlStdOperatorTable.MULTIPLY, sumArgCast, sumArgCast);
RelDataType countRetType = typeFactory.createTypeWithNullability(typeFactory.createSqlType(SqlTypeName.BIGINT), true);
final AggregateCall countArgAggCall =
  AggregateCall.create(
    newCalls,
    aggCallMapping,
    ImmutableList.of(argOrdinalType));

Code example source: apache/hive

// Excerpt: the nullable input type and SUM return type come from the type factory;
// the SUM result is COALESCEd with a zero literal and cast back to the original call type.
List<RexNode> inputExprs) {
final int nGroups = oldAggRel.getGroupCount();
final RexBuilder rexBuilder = oldAggRel.getCluster().getRexBuilder();
final RelDataTypeFactory typeFactory = oldAggRel.getCluster().getTypeFactory();
final int iAvgInput = oldCall.getArgList().get(0);
final RelDataType sum0InputType = typeFactory.createTypeWithNullability(
  getFieldType(oldAggRel.getInput(), iAvgInput), true);
final RelDataType sumReturnType = getSumReturnType(
  rexBuilder.getTypeFactory(), sum0InputType, oldCall.getType());
final AggregateCall sumCall =
  AggregateCall.create(
  rexBuilder.addAggCall(sumCall,
    nGroups,
    oldAggRel.indicator,
    newCalls,
    aggCallMapping,
    ImmutableList.of(sum0InputType));
refSum = rexBuilder.ensureType(oldCall.getType(), refSum, true);
final RexNode coalesce = rexBuilder.makeCall(
  SqlStdOperatorTable.COALESCE, refSum, rexBuilder.makeZeroLiteral(refSum.getType()));
return rexBuilder.makeCast(oldCall.getType(), coalesce);

Code example source: apache/incubator-druid

@Test
public void testConcat()
{
 testExpression(
   rexBuilder.makeCall(
     typeFactory.createSqlType(SqlTypeName.VARCHAR),
     SqlStdOperatorTable.CONCAT,
     ImmutableList.of(
       inputRef("s"),
       rexBuilder.makeLiteral("bar")
     )
   ),
   DruidExpression.fromExpression("concat(\"s\",'bar')"),
   "foobar"
 );
}

Code example source: apache/hive

// Excerpt: splits AVG into SUM and COUNT calls whose types (nullable input type,
// nullable BIGINT count type) are derived from the type factory.
List<RexNode> inputExprs) {
final int nGroups = oldAggRel.getGroupCount();
final RexBuilder rexBuilder = oldAggRel.getCluster().getRexBuilder();
final RelDataTypeFactory typeFactory = oldAggRel.getCluster().getTypeFactory();
final int iAvgInput = oldCall.getArgList().get(0);
final RelDataType avgInputType = typeFactory.createTypeWithNullability(
  getFieldType(oldAggRel.getInput(), iAvgInput), true);
final RelDataType sumReturnType = getSumReturnType(
  rexBuilder.getTypeFactory(), avgInputType, oldCall.getType());
final AggregateCall sumCall =
  AggregateCall.create(
    null,
    null);
RelDataType countRetType = typeFactory.createTypeWithNullability(
  typeFactory.createSqlType(SqlTypeName.BIGINT), true);
final AggregateCall countCall =
  AggregateCall.create(

Code example source: apache/storm

// Creates a JavaTypeFactoryImpl that reuses the type system of the existing type factory.
final BlockBuilder builder = new BlockBuilder();
final JavaTypeFactoryImpl javaTypeFactory =
  new JavaTypeFactoryImpl(rexBuilder.getTypeFactory().getTypeSystem());
    ImmutableList.of(
      Pair.<Expression, PhysType>of(
        Expressions.field(context,

Code example source: apache/flink

// Builds the row type of a NATURAL/USING join with RelDataTypeFactory.Builder; a common
// column is nullable only when both sides can produce nulls for it.
final Permute left = new Permute(join.getLeft(), offset);
final int fieldCount =
  getValidatedNodeType(join.getLeft()).getFieldList().size();
final Permute right =
  new Permute(join.getRight(), offset + fieldCount);
final List<ImmutableIntList> sources = new ArrayList<>();
final Set<ImmutableIntList> sourceSet = new HashSet<>();
final RelDataTypeFactory.Builder b = typeFactory.builder();
if (names != null) {
  for (String name : names) {
    final RelDataTypeField f = left.field(name);
    final ImmutableIntList source = left.sources.get(f.getIndex());
    sourceSet.add(source);
    final RelDataTypeField f2 = right.field(name);
    final ImmutableIntList source2 = right.sources.get(f2.getIndex());
    sourceSet.add(source2);
    sources.add(source.appendAll(source2));
    final boolean nullable =
      (f.getType().isNullable()
         || join.getJoinType().generatesNullsOnLeft())
        && (f2.getType().isNullable()
            || join.getJoinType().generatesNullsOnRight());
    b.add(f).nullable(nullable);
this.sources = ImmutableList.copyOf(sources);
this.trivial = left.trivial
  && right.trivial

Code example source: apache/flink

// Rewrites common join columns to their least-restrictive comparison type and
// re-applies nullability via createTypeWithNullability.
final List<SqlNode> oldSelectItems = ImmutableList.copyOf(selectItems);
selectItems.clear();
final List<Map.Entry<String, RelDataType>> oldFields =
  ImmutableList.copyOf(fields);
fields.clear();
for (ImmutableIntList source : sources) {
    final RelDataType type1 = field1.getValue();
    final boolean nullable = type.isNullable() && type1.isNullable();
    final RelDataType type2 =
      SqlTypeUtil.leastRestrictiveForComparison(typeFactory, type,
          maybeCast(selectItem1, type1, type2)),
        new SqlIdentifier(name, SqlParserPos.ZERO));
    type = typeFactory.createTypeWithNullability(type2, nullable);

Code example source: apache/hive

// Trims unused fields: the retained field types and names are turned into a new row type
// with createStructType(fieldTypes, fieldNames).
final int fieldCount = getRowType().getFieldCount();
if (fieldsUsed.equals(ImmutableBitSet.range(fieldCount)) && extraFields.isEmpty()) {
 return this;
final List<RelDataTypeField> fields = getRowType().getFieldList();
List<RelDataType> fieldTypes = new LinkedList<RelDataType>();
List<String> fieldNames = new LinkedList<String>();
List<RexNode> exprList = new ArrayList<RexNode>();
RexBuilder rexBuilder = getCluster().getRexBuilder();
for (int i : fieldsUsed) {
 RelDataTypeField field = fields.get(i);
 fieldTypes.add(field.getType());
 fieldNames.add(field.getName());
 exprList.add(rexBuilder.makeInputRef(this, i));
HiveTableScan newHT = copy(getCluster().getTypeFactory().createStructType(fieldTypes,
  fieldNames));

Code example source: apache/drill

// Converts group-by and grouping-set columns into AST nodes; each RexInputRef is typed
// as ANY via createSqlType(SqlTypeName.ANY).
ASTNode cond = where.getCondition().accept(new RexVisitor(schema));
hiveAST.where = ASTBuilder.where(cond);
for (int pos : hiveAgg.getAggregateColumnsOrder()) {
 RexInputRef iRef = new RexInputRef(groupBy.getGroupSet().nth(pos),
   groupBy.getCluster().getTypeFactory().createSqlType(SqlTypeName.ANY));
 b.add(iRef.accept(new RexVisitor(schema)));
 if (!hiveAgg.getAggregateColumnsOrder().contains(pos)) {
  RexInputRef iRef = new RexInputRef(groupBy.getGroupSet().nth(pos),
    groupBy.getCluster().getTypeFactory().createSqlType(SqlTypeName.ANY));
  b.add(iRef.accept(new RexVisitor(schema)));
      HiveParser.TOK_GROUPING_SETS_EXPRESSION, "TOK_GROUPING_SETS_EXPRESSION");
  for (int i : groupSet) {
   RexInputRef iRef = new RexInputRef(i, groupBy.getCluster().getTypeFactory()
     .createSqlType(SqlTypeName.ANY));
   expression.add(iRef.accept(new RexVisitor(schema)));
ASTNode cond = having.getCondition().accept(new RexVisitor(schema));
hiveAST.having = ASTBuilder.having(cond);
 int i = 0;
   r = select.getCluster().getRexBuilder().makeAbstractCast(r.getType(), r);
  r = select.getCluster().getRexBuilder().makeAbstractCast(r.getType(), r);
