Usage of the org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.getDataColumnNums() method, with code examples


This article collects Java code examples of the org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.getDataColumnNums() method and shows how it is used in practice. The examples were extracted from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as a useful reference. Details of VectorizedRowBatchCtx.getDataColumnNums() follow:
Package: org.apache.hadoop.hive.ql.exec.vector
Class: VectorizedRowBatchCtx
Method: getDataColumnNums

About VectorizedRowBatchCtx.getDataColumnNums

The upstream source provides no Javadoc summary for this method. Judging from the call sites collected below, it returns the indices of the data columns included in the read (the pushed-down projection) as an int[], and returns null when all data columns are selected.
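
To make that contract concrete, here is a minimal sketch of the null-check idiom that recurs throughout the examples below. The class and helper name (ProjectionUtil, resolveProjectedColumns) are illustrative only, not part of Hive's API; only getDataColumnNums() and getDataColumnCount() come from VectorizedRowBatchCtx itself.

import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx;

public final class ProjectionUtil {
 // Hypothetical helper (not part of Hive): resolve the projected column
 // indices, falling back to the identity projection when
 // getDataColumnNums() returns null, i.e. all data columns are selected.
 static int[] resolveProjectedColumns(VectorizedRowBatchCtx rbCtx) {
  int[] dataColumnNums = rbCtx.getDataColumnNums();
  if (dataColumnNums != null) {
   return dataColumnNums; // an explicit projection was pushed down
  }
  int[] all = new int[rbCtx.getDataColumnCount()];
  for (int i = 0; i < all.length; i++) {
   all[i] = i; // identity projection: read every data column
  }
  return all;
 }
}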

Code examples

Code example from: apache/drill

@Explain(vectorization = Vectorization.DETAIL, displayName = "includeColumns", explainLevels = { Level.DEFAULT, Level.EXTENDED })
public String getDataColumnNums() {
 int[] dataColumnNums = vectorizedRowBatchCtx.getDataColumnNums();
 if (dataColumnNums == null) {
  return null;
 }
 // note: this version calls getDataColumnNums() a second time instead of
 // reusing the local variable; the apache/hive variant below reuses it
 return Arrays.toString(vectorizedRowBatchCtx.getDataColumnNums());
}

Code example from: apache/hive

@Explain(vectorization = Vectorization.DETAIL, displayName = "includeColumns",
  explainLevels = { Level.DEFAULT, Level.EXTENDED })
public String getDataColumnNums() {
 int[] dataColumnNums = vectorizedRowBatchCtx.getDataColumnNums();
 if (dataColumnNums == null) {
  return null;
 }
 return Arrays.toString(dataColumnNums);
}
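
Apart from reusing the local variable, the two getters are identical. The @Explain annotation means that when EXPLAIN VECTORIZATION DETAIL output is generated, this value is rendered under the display name "includeColumns" at the DEFAULT and EXTENDED explain levels.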

Code example from: apache/hive

public DruidVectorizedWrapper(DruidQueryRecordReader reader, Configuration jobConf) {
 this.rbCtx = Utilities.getVectorizedRowBatchCtx(jobConf);
 if (rbCtx.getDataColumnNums() != null) {
  projectedColumns = rbCtx.getDataColumnNums();
 } else {
  // getDataColumnNums() returned null: all columns are selected, so build
  // the identity projection over every data column
  projectedColumns = new int[rbCtx.getRowColumnTypeInfos().length];
  for (int i = 0; i < projectedColumns.length; i++) {
   projectedColumns[i] = i;
  }
 }
 this.serDe = createAndInitializeSerde(jobConf);
 this.baseReader = Preconditions.checkNotNull(reader);
 // initialize the row parser/assigner from the serde's object inspector
 try {
  vectorAssignRow.init((StructObjectInspector) serDe.getObjectInspector());
 } catch (HiveException e) {
  throw new RuntimeException(e);
 }
 druidWritable = baseReader.createValue();
 // scratch row sized to the full data column count, not just the projection
 rowBoat = new Object[rbCtx.getDataColumnCount()];
}
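
Once projectedColumns has been resolved, a reader typically touches only those vectors in each batch. The following is a minimal sketch of that consumption pattern; the method name visitProjected is illustrative, and the sketch relies only on the public cols and size fields of VectorizedRowBatch.

import org.apache.hadoop.hive.ql.exec.vector.ColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;

// Hypothetical sketch: visit only the projected columns of a filled batch.
static void visitProjected(VectorizedRowBatch batch, int[] projectedColumns) {
 for (int colNum : projectedColumns) {
  ColumnVector col = batch.cols[colNum];
  // a real reader would decode up to batch.size values from col here
 }
}

Iterating by projected index this way keeps unselected columns untouched, which is the point of pushing the projection down in the first place.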

Code example from: apache/hive

VectorizedRowBatchCtx batchContext = baseWork.getVectorizedRowBatchCtx();
LOG.debug(name + " dataColumnCount: " + batchContext.getDataColumnCount());
int[] dataColumnNums = batchContext.getDataColumnNums();
if (dataColumnNums != null) {
 LOG.debug(name + " includeColumns: " + Arrays.toString(dataColumnNums));
}

Code example from: apache/hive

VectorizedKafkaRecordReader(KafkaInputSplit inputSplit, Configuration jobConf) {
 if (rbCtx.getDataColumnNums() != null) {
  projectedColumns = rbCtx.getDataColumnNums();
 } else {
  // null: all columns are selected, so the else branch builds the identity
  // projection, mirroring the Druid wrapper above (remainder omitted)

Code example from: apache/drill

partitionValues = new Object[partitionColumnCount];
dataColumnNums = batchContext.getDataColumnNums();
// this call site does not tolerate null: an explicit projection must exist
Preconditions.checkState(dataColumnNums != null);

Code example from: apache/hive

hasRowIdentifier = (rowIdentifierColumnNum != -1);
dataColumnNums = batchContext.getDataColumnNums();
Preconditions.checkState(dataColumnNums != null);
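
Note the two conventions side by side: the reader constructors above treat a null result as "all columns selected" and synthesize an identity projection, whereas these last two call sites assert with Preconditions.checkState that an explicit projection has already been computed before they run.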
