Usage and code examples for org.apache.hadoop.hive.ql.metadata.Hive.validatePartitionNameCharacters()

Reposted by x33g5p2x on 2022-01-20

This article collects Java code examples of the org.apache.hadoop.hive.ql.metadata.Hive.validatePartitionNameCharacters() method and shows how Hive.validatePartitionNameCharacters() is used in practice. The examples are taken from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the Hive.validatePartitionNameCharacters() method follow:
Package: org.apache.hadoop.hive.ql.metadata
Class: Hive
Method: validatePartitionNameCharacters

Hive.validatePartitionNameCharacters overview

No description is available.
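
Although the source ships no Javadoc for this method, the examples below all follow the same pattern: the caller builds the list of partition values (in partition-column order) and passes it to validatePartitionNameCharacters(), which asks the metastore to check the values against its partition-name validation rules (typically governed by hive.metastore.partition.name.whitelist.pattern) and throws a HiveException when a value contains a disallowed character. A minimal sketch of such a call, not taken from the projects below and using a default HiveConf and hypothetical partition values, might look like this:

import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.metadata.Hive;
import org.apache.hadoop.hive.ql.metadata.HiveException;

public class ValidatePartitionNameExample {
  public static void main(String[] args) throws HiveException {
    // Obtain the thread-local Hive handle for this configuration.
    Hive db = Hive.get(new HiveConf());

    // Hypothetical partition values, ordered like the table's partition columns,
    // e.g. for a table partitioned by (ds string, country string).
    List<String> partVals = Arrays.asList("2022-01-20", "us");

    // Throws HiveException if any value contains characters rejected by the
    // metastore's partition-name validation.
    db.validatePartitionNameCharacters(partVals);
  }
}

When validation fails, callers typically abort the load before any data is moved, as in the handleStaticParts() example below.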

Code examples

Code example source: apache/drill

// Build the partition values in partition-column order from the static partition spec.
List<String> partVals = MetaStoreUtils.getPvals(table.getPartCols(),
    tbd.getPartitionSpec());
// Reject partition values that contain characters the metastore does not allow.
db.validatePartitionNameCharacters(partVals);
db.loadPartition(tbd.getSourcePath(), tbd.getTable().getTableName(),
    tbd.getPartitionSpec(), tbd.getReplace(),

Code example source: apache/hive

private DataContainer handleStaticParts(Hive db, Table table, LoadTableDesc tbd,
    TaskInformation ti) throws HiveException, IOException, InvalidOperationException {
  // Compute the partition values from the static partition spec and make sure
  // they only contain characters the metastore accepts.
  List<String> partVals = MetaStoreUtils.getPvals(table.getPartCols(), tbd.getPartitionSpec());
  db.validatePartitionNameCharacters(partVals);
  if (Utilities.FILE_OP_LOGGER.isTraceEnabled()) {
    Utilities.FILE_OP_LOGGER.trace("loadPartition called from " + tbd.getSourcePath()
        + " into " + tbd.getTable().getTableName());
  }
  db.loadPartition(tbd.getSourcePath(), db.getTable(tbd.getTable().getTableName()),
      tbd.getPartitionSpec(), tbd.getLoadFileType(), tbd.getInheritTableSpecs(),
      tbd.getInheritLocation(), isSkewedStoredAsDirs(tbd), work.isSrcLocal(),
      work.getLoadTableWork().getWriteType() != AcidUtils.Operation.NOT_ACID &&
          !tbd.isMmTable(),
      resetStatisticsProps(table), tbd.getWriteId(), tbd.getStmtId(),
      tbd.isInsertOverwrite());
  Partition partn = db.getPartition(table, tbd.getPartitionSpec(), false);
  // See the comment inside updatePartitionBucketSortColumns.
  if (!tbd.isMmTable() && (ti.bucketCols != null || ti.sortCols != null)) {
    updatePartitionBucketSortColumns(db, table, partn, ti.bucketCols,
        ti.numBuckets, ti.sortCols);
  }
  DataContainer dc = new DataContainer(table.getTTable(), partn.getTPartition());
  // Add this partition to the post-execution hook outputs.
  if (work.getOutputs() != null) {
    DDLTask.addIfAbsentByName(new WriteEntity(partn,
        getWriteType(tbd, work.getLoadTableWork().getWriteType())), work.getOutputs());
  }
  return dc;
}

Code example source: com.facebook.presto.hive/hive-apache

} else {
  try {
    // Recover the partition values from the parent directory name of this file
    // and validate them before the directory is treated as a partition.
    validatePartitionNameCharacters(
        Warehouse.getPartValuesFromPartName(s.getPath().getParent().toString()));
  } catch (MetaException e) {

Code example source: com.facebook.presto.hive/hive-apache

// Same pattern: validate the static partition values before calling loadPartition.
List<String> partVals = MetaStoreUtils.getPvals(table.getPartCols(),
    tbd.getPartitionSpec());
db.validatePartitionNameCharacters(partVals);
db.loadPartition(tbd.getSourcePath(), tbd.getTable().getTableName(),
    tbd.getPartitionSpec(), tbd.getReplace(), tbd.getHoldDDLTime(),
