Usage of org.apache.hadoop.hive.ql.exec.Utilities.getIsVectorized() with code examples

Reposted by x33g5p2x on 2022-02-01

This article collects Java code examples for the org.apache.hadoop.hive.ql.exec.Utilities.getIsVectorized() method and shows how it is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as a useful reference. Details of Utilities.getIsVectorized() are as follows:
Package: org.apache.hadoop.hive.ql.exec.Utilities
Class: Utilities
Method: getIsVectorized

About Utilities.getIsVectorized

Returns true if a plan is both configured for vectorized execution and the node is vectorized. The plan may be configured for vectorization but vectorization disallowed, e.g. for FetchOperator execution.
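Before the real-world examples below, here is a minimal, self-contained sketch of the dispatch pattern they all share: check the vectorization flag, then hand back either a vectorized or a row-mode reader. Note that `DemoConf`, `demo.node.vectorized`, and the reader names are hypothetical stand-ins invented for this illustration so it runs without a Hadoop classpath; only `hive.vectorized.execution.enabled` is a real Hive property, and the actual check lives inside `Utilities.getIsVectorized(conf)`.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for JobConf, so the branching pattern is
// runnable without Hadoop/Hive on the classpath.
class DemoConf {
    private final Map<String, String> props = new HashMap<>();
    void set(String k, String v) { props.put(k, v); }
    boolean getBoolean(String k, boolean dflt) {
        String v = props.get(k);
        return v == null ? dflt : Boolean.parseBoolean(v);
    }
}

public class VectorizedDispatchDemo {
    // Mirrors the documented semantics: the plan must be configured for
    // vectorized execution AND the current node must actually be
    // vectorized. "demo.node.vectorized" is a made-up flag standing in
    // for Hive's internal per-node state.
    static boolean getIsVectorized(DemoConf conf) {
        return conf.getBoolean("hive.vectorized.execution.enabled", false)
            && conf.getBoolean("demo.node.vectorized", false);
    }

    // The pattern every example below follows in its getRecordReader().
    static String pickReader(DemoConf conf) {
        if (getIsVectorized(conf)) {
            return "VectorizedRecordReader";
        }
        return "RowModeRecordReader";
    }

    public static void main(String[] args) {
        DemoConf conf = new DemoConf();
        conf.set("hive.vectorized.execution.enabled", "true");
        // Plan configured for vectorization, but the node is not
        // vectorized (e.g. FetchOperator execution): row mode wins.
        System.out.println(pickReader(conf)); // prints "RowModeRecordReader"
        conf.set("demo.node.vectorized", "true");
        System.out.println(pickReader(conf)); // prints "VectorizedRecordReader"
    }
}
```

The two-condition check is why callers cannot simply read the config flag themselves: a plan-level setting alone does not guarantee the operator tree at this node was vectorized.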

Code examples

Code example source: apache/hive

@Override public RecordReader<NullWritable, KafkaWritable> getRecordReader(InputSplit inputSplit,
  JobConf jobConf,
  Reporter reporter) {
 if (Utilities.getIsVectorized(jobConf)) {
  //noinspection unchecked
  return (RecordReader) new VectorizedKafkaRecordReader((KafkaInputSplit) inputSplit, jobConf);
 }
 return new KafkaRecordReader((KafkaInputSplit) inputSplit, jobConf);
}

Code example source: apache/hive

@SuppressWarnings({ "unchecked", "rawtypes" })
@Override
public org.apache.hadoop.mapred.RecordReader<NullWritable, ArrayWritable> getRecordReader(
  final org.apache.hadoop.mapred.InputSplit split,
  final org.apache.hadoop.mapred.JobConf job,
  final org.apache.hadoop.mapred.Reporter reporter
  ) throws IOException {
 try {
  if (Utilities.getIsVectorized(job)) {
   if (LOG.isDebugEnabled()) {
    LOG.debug("Using vectorized record reader");
   }
   return (RecordReader) vectorizedSelf.getRecordReader(split, job, reporter);
  }
  else {
   if (LOG.isDebugEnabled()) {
    LOG.debug("Using row-mode record reader");
   }
   return new ParquetRecordReaderWrapper(realInput, split, job, reporter);
  }
 } catch (final InterruptedException e) {
  throw new RuntimeException("Cannot create a RecordReaderWrapper", e);
 }
}

Code example source: apache/hive

public NullRowsRecordReader(Configuration conf, InputSplit split) throws IOException {
 boolean isVectorMode = Utilities.getIsVectorized(conf);
 if (LOG.isDebugEnabled()) {
  LOG.debug(getClass().getSimpleName() + " in "
    + (isVectorMode ? "" : "non-") + "vector mode");
 }
 if (isVectorMode) {
  rbCtx = Utilities.getVectorizedRowBatchCtx(conf);
  int partitionColumnCount = rbCtx.getPartitionColumnCount();
  if (partitionColumnCount > 0) {
   partitionValues = new Object[partitionColumnCount];
   VectorizedRowBatchCtx.getPartitionValues(rbCtx, conf, (FileSplit)split, partitionValues);
  } else {
   partitionValues = null;
  }
 } else {
  rbCtx = null;
  partitionValues = null;
 }
}

Code example source: apache/hive

@Override public org.apache.hadoop.mapred.RecordReader<NullWritable, DruidWritable> getRecordReader(
  org.apache.hadoop.mapred.InputSplit split,
  JobConf job,
  Reporter reporter) throws IOException {
 // We need to provide a different record reader for every type of Druid query.
 // The reason is that Druid results format is different for each type.
 final DruidQueryRecordReader<?> reader;
 // By default, we use druid scan query as fallback.
 final String druidQueryType = job.get(Constants.DRUID_QUERY_TYPE, Query.SCAN);
 reader = getDruidQueryReader(druidQueryType);
 reader.initialize((HiveDruidSplit) split, job);
 if (Utilities.getIsVectorized(job)) {
  //noinspection unchecked
  return (org.apache.hadoop.mapred.RecordReader) new DruidVectorizedWrapper(reader, job);
 }
 return reader;
}

Code example source: apache/hive

private List<OrcSplit> callInternal() throws IOException {
 boolean isAcid = AcidUtils.isFullAcidScan(context.conf);
 boolean vectorMode = Utilities.getIsVectorized(context.conf);

Code example source: apache/hive

@Override
public boolean validateInput(FileSystem fs, HiveConf conf,
               List<FileStatus> files
              ) throws IOException {
 if (Utilities.getIsVectorized(conf)) {
  return new VectorizedOrcInputFormat().validateInput(fs, conf, files);
 }
 if (files.size() <= 0) {
  return false;
 }
 for (FileStatus file : files) {
  if (!HiveConf.getVar(conf, ConfVars.HIVE_EXECUTION_ENGINE).equals("mr")) {
   // 0 length files cannot be ORC files, not valid for MR.
   if (file.getLen() == 0) {
    return false;
   }
  }
  try {
   OrcFile.createReader(file.getPath(),
     OrcFile.readerOptions(conf).filesystem(fs).maxLength(file.getLen()));
  } catch (IOException e) {
   return false;
  }
 }
 return true;
}

Code example source: apache/hive

List<OrcSplit> splits = Lists.newArrayList();
boolean isAcid = AcidUtils.isFullAcidScan(conf);
boolean vectorMode = Utilities.getIsVectorized(conf);
OrcSplit.OffsetAndBucketProperty offsetAndBucket = null;
for (HdfsFileStatusWithId file : fileStatuses) {

Code example source: apache/hive

final boolean hasDelta = deltas != null && !deltas.isEmpty();
final boolean isAcidRead = AcidUtils.isFullAcidScan(conf);
final boolean isVectorized = Utilities.getIsVectorized(conf);
Boolean isSplitUpdate = null;
if (isAcidRead) {

Code example source: apache/hive

boolean isSupported = inputFormat instanceof LlapWrappableInputFormatInterface;
boolean isCacheOnly = inputFormat instanceof LlapCacheOnlyInputFormatInterface;
boolean isVectorized = Utilities.getIsVectorized(conf);
if (!isVectorized) {

Code example source: apache/hive

if (isLlapOn) {
 canWrapAny = Utilities.getIsVectorized(conf, this);

Code example source: apache/hive

if (!Utilities.getIsVectorized(job)) {
 result = null;
 if (HiveConf.getBoolVar(job, ConfVars.LLAP_IO_ROW_WRAPPER_ENABLED)) {

Code example source: apache/hive

Reporter reporter) throws IOException {
boolean vectorMode = Utilities.getIsVectorized(conf);
boolean isAcidRead = isFullAcidRead(conf, inputSplit);
if (!isAcidRead) {

