Usage of org.apache.hadoop.hive.ql.metadata.Partition.getPartitionPath() with code examples

Reposted by x33g5p2x on 2022-01-26

This article collects a number of Java code examples for the org.apache.hadoop.hive.ql.metadata.Partition.getPartitionPath() method and shows how it is used in practice. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of Partition.getPartitionPath() are as follows:
Package path: org.apache.hadoop.hive.ql.metadata.Partition
Class name: Partition
Method name: getPartitionPath

About Partition.getPartitionPath

No description available.

Code examples

Code example from: apache/hive

@Override
public Path getPath() {
 return partition.getPartitionPath();
}

Code example from: apache/hive

try {
 for (Partition partition : partitions) {
  final FileSystem newPathFileSystem = partition.getPartitionPath().getFileSystem(this.getConf());
  boolean isAutoPurge = "true".equalsIgnoreCase(tbl.getProperty("auto.purge"));
  final FileStatus status = newPathFileSystem.getFileStatus(partition.getPartitionPath());
  Hive.trashFiles(newPathFileSystem, new FileStatus[]{status}, this.getConf(), isAutoPurge);
 }
} catch (IOException e) {
 // error handling omitted in the original excerpt
}
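The snippet above treats the table property "auto.purge" as a case-insensitive boolean string. A minimal standalone sketch of that parsing in plain Java; the `AutoPurgeDemo` class and its `isAutoPurge` helper are illustrative names, not Hive API:

```java
import java.util.Map;

public class AutoPurgeDemo {
    // Mirrors Hive's "true".equalsIgnoreCase(...) check: only the literal
    // string "true" (in any case) enables purge; null or any other value
    // disables it, and equalsIgnoreCase is null-safe on its argument.
    static boolean isAutoPurge(Map<String, String> tableProps) {
        return "true".equalsIgnoreCase(tableProps.get("auto.purge"));
    }

    public static void main(String[] args) {
        System.out.println(isAutoPurge(Map.of("auto.purge", "TRUE"))); // true
        System.out.println(isAutoPurge(Map.of("auto.purge", "yes")));  // false
        System.out.println(isAutoPurge(Map.of()));                     // false
    }
}
```

Calling the constant first ("true".equalsIgnoreCase(...)) rather than the property value avoids a NullPointerException when the property is unset.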

Code example from: apache/hive

try {
 final FileSystem newPathFileSystem = newTPart.getPartitionPath().getFileSystem(this.getConf());
 boolean isAutoPurge = "true".equalsIgnoreCase(tbl.getProperty("auto.purge"));
 final FileStatus status = newPathFileSystem.getFileStatus(newTPart.getPartitionPath());
 Hive.trashFiles(newPathFileSystem, new FileStatus[]{status}, this.getConf(), isAutoPurge);
} catch (IOException io) {
 // error handling omitted in the original excerpt
}

Code example from: org.apache.hadoop.hive/hive-exec

public Path[] getPath() {
 Path[] ret = new Path[]{getPartitionPath()};
 return ret;
}

Code example from: org.apache.hadoop.hive/hive-exec

final public URI getDataLocation() {
 if (uri == null) {
  uri = getPartitionPath().toUri();
 }
 return uri;
}
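getDataLocation() above converts the partition path to a URI on first access and caches the result in a field. A standalone sketch of the same lazy-caching pattern using java.net.URI instead of Hadoop's Path; the `DataLocationDemo` class is made up for illustration:

```java
import java.net.URI;

public class DataLocationDemo {
    private final String path;
    private URI uri; // computed on first access, then reused

    DataLocationDemo(String path) {
        this.path = path;
    }

    // Same shape as Partition.getDataLocation(): convert once, cache,
    // and return the cached instance on every later call.
    public final URI getDataLocation() {
        if (uri == null) {
            uri = URI.create(path);
        }
        return uri;
    }

    public static void main(String[] args) {
        DataLocationDemo d = new DataLocationDemo("hdfs://nn:8020/warehouse/t/ds=2022-01-26");
        // Both calls return the identical cached object.
        System.out.println(d.getDataLocation() == d.getDataLocation()); // true
    }
}
```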

Code example from: org.apache.hadoop.hive/hive-exec

/**
 * mapping from bucket number to bucket path
 */
// TODO: add test case and clean it up
@SuppressWarnings("nls")
public Path getBucketPath(int bucketNum) {
 try {
  // Previously, this got the filesystem of the Table, which could be
  // different from the filesystem of the partition.
  FileSystem fs = FileSystem.get(getPartitionPath().toUri(), Hive.get()
    .getConf());
  String pathPattern = getPartitionPath().toString();
  if (getBucketCount() > 0) {
   pathPattern = pathPattern + "/*";
  }
  LOG.info("Path pattern = " + pathPattern);
  FileStatus srcs[] = fs.globStatus(new Path(pathPattern));
  Arrays.sort(srcs);
  for (FileStatus src : srcs) {
   LOG.info("Got file: " + src.getPath());
  }
  if (srcs.length == 0) {
   return null;
  }
  return srcs[bucketNum].getPath();
 } catch (Exception e) {
  throw new RuntimeException("Cannot get bucket path for bucket "
    + bucketNum, e);
 }
}
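getBucketPath globs the partition directory, sorts the matches, and indexes the sorted array by bucket number. A rough standalone analogue of that list-sort-index step with java.nio.file instead of Hadoop's FileSystem; the `BucketPathDemo` class and `listSorted` helper are illustrative:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BucketPathDemo {
    // List a directory's entries sorted by path -- the same ordering trick
    // getBucketPath relies on so that index i maps to bucket file i.
    static List<Path> listSorted(Path dir) throws IOException {
        List<Path> files = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path p : ds) {
                files.add(p);
            }
        }
        Collections.sort(files); // Path is Comparable; sorts lexicographically
        return files;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("part");
        Files.createFile(dir.resolve("000001_0"));
        Files.createFile(dir.resolve("000000_0"));
        // Bucket 0 resolves to the lexicographically first file.
        System.out.println(listSorted(dir).get(0).getFileName()); // 000000_0
    }
}
```

Note that, like the original, this assumes bucket files sort in bucket-number order by name; an out-of-range bucket number would throw IndexOutOfBoundsException.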

Code example from: org.apache.hadoop.hive/hive-exec

Path partPath = partition.getPartitionPath();
fs = partPath.getFileSystem(conf);
if (!fs.exists(partPath)) {
 // handling of a missing partition directory omitted in the original excerpt
}

Code example from: org.apache.hadoop.hive/hive-exec

Path oldPartPath = null;
if (oldPart != null) {
 oldPartPath = oldPart.getPartitionPath();
}

Code example from: org.apache.hadoop.hive/hive-exec

Partition part = iterParts.next();
listP.add(part.getPartitionPath().toString());
try {
 partP.add(Utilities.getPartitionDesc(part));
} catch (HiveException e) {
 // error handling omitted in the original excerpt
}

Code example from: org.apache.hadoop.hive/hive-exec

fileSys = partn.getPartitionPath().getFileSystem(conf);
fileStatus = Utilities.getFileStatusRecurse(partn.getPartitionPath(), 1, fileSys);
newPartStats.setNumFiles(fileStatus.length);

Code example from: org.apache.hadoop.hive/hive-exec

Path originalDir = p.getPartitionPath();
Path intermediateArchivedDir = new Path(originalDir.getParent(),
  originalDir.getName() + INTERMEDIATE_ARCHIVED_DIR_SUFFIX);
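The archive path above is built as a sibling of the original directory: its parent, plus the original name with a suffix appended. A standalone sketch of that construction with java.nio.file; the suffix value and the `ArchiveDirDemo` class are assumptions for illustration (Hive defines its own suffix constant):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ArchiveDirDemo {
    // Illustrative value; the real constant lives in Hive's archiving code.
    static final String INTERMEDIATE_ARCHIVED_DIR_SUFFIX = "_INTERMEDIATE_ARCHIVED";

    // Same construction as the snippet: resolve against the parent so the
    // archived directory sits next to the original one.
    static Path intermediateArchivedDir(Path originalDir) {
        return originalDir.getParent()
                .resolve(originalDir.getFileName() + INTERMEDIATE_ARCHIVED_DIR_SUFFIX);
    }

    public static void main(String[] args) {
        Path p = Paths.get("/warehouse/t/ds=2022-01-26");
        System.out.println(intermediateArchivedDir(p));
        // /warehouse/t/ds=2022-01-26_INTERMEDIATE_ARCHIVED
    }
}
```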

Code example from: org.apache.hadoop.hive/hive-exec

dest_tab = dest_part.getTable();
Path tabPath = dest_tab.getPath();
Path partPath = dest_part.getPartitionPath();
