Usage of org.apache.hadoop.fs.Path.mergePaths() with code examples

This article collects a number of Java code examples for the org.apache.hadoop.fs.Path.mergePaths() method and shows how Path.mergePaths() is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven and similar platforms, so they should serve as useful references. Details of the Path.mergePaths() method:
Package: org.apache.hadoop.fs
Class: Path
Method: mergePaths

Path.mergePaths overview

Merges two paths such that the second path is appended relative to the first. The returned path has the scheme and authority of the first path. On Windows, the drive specification in the second path is discarded.
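
The following is a minimal, self-contained sketch (it is not taken from any of the projects below, and the paths in it are purely illustrative) showing the behaviour described above; the expected results agree with the unit test reproduced near the end of this article.

import org.apache.hadoop.fs.Path;

public class MergePathsDemo {
  public static void main(String[] args) {
    // The second path is appended relative to the first.
    System.out.println(Path.mergePaths(new Path("/foo"), new Path("/bar/baz")));
    // prints: /foo/bar/baz

    // The scheme and authority of the result come from the first path;
    // those of the second path are dropped.
    System.out.println(Path.mergePaths(
        new Path("viewfs://nsA/data"), new Path("hdfs://nsB/logs/app.log")));
    // prints: viewfs://nsA/data/logs/app.log
  }
}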

Code examples

Code example source: org.apache.hadoop/hadoop-common

private Path makeTrashRelativePath(Path basePath, Path rmFilePath) {
 return Path.mergePaths(basePath, rmFilePath);
}
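
A brief usage note on the snippet above: because mergePaths appends the second path as-is, a removed file keeps its full original directory structure underneath the trash directory. The sketch below uses made-up paths (they are not taken from the Hadoop trash implementation).

import org.apache.hadoop.fs.Path;

public class TrashPathDemo {
  public static void main(String[] args) {
    Path trashCurrent = new Path("/user/alice/.Trash/Current"); // hypothetical trash directory
    Path removedFile = new Path("/data/logs/app.log");          // hypothetical deleted file
    System.out.println(Path.mergePaths(trashCurrent, removedFile));
    // prints: /user/alice/.Trash/Current/data/logs/app.log
  }
}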

Code example source: org.apache.hadoop/hadoop-common

private Path createCollectorPath(Path filePath) {
 return mergePaths(filePath.getParent(),
   mergePaths(new Path(filePath.getName().split("\\.")[0]),
     mergePaths(new Path("_multipart"),
       new Path(Path.SEPARATOR))));
}

Code example source: org.apache.hadoop/hadoop-common

@Override
public PartHandle putPart(Path filePath, InputStream inputStream,
  int partNumber, UploadHandle uploadId, long lengthInBytes)
  throws IOException {
 byte[] uploadIdByteArray = uploadId.toByteArray();
 checkUploadId(uploadIdByteArray);
 Path collectorPath = new Path(new String(uploadIdByteArray, 0,
   uploadIdByteArray.length, Charsets.UTF_8));
 Path partPath =
   mergePaths(collectorPath, mergePaths(new Path(Path.SEPARATOR),
     new Path(Integer.toString(partNumber) + ".part")));
 try(FSDataOutputStream fsDataOutputStream =
     fs.createFile(partPath).build()) {
  IOUtils.copy(inputStream, fsDataOutputStream, 4096);
 } finally {
  org.apache.hadoop.io.IOUtils.cleanupWithLogger(LOG, inputStream);
 }
 return BBPartHandle.from(ByteBuffer.wrap(
   partPath.toString().getBytes(Charsets.UTF_8)));
}
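
To make the path arithmetic in putPart easier to follow, here is a small sketch of how the part path composes (the collector path below is invented for illustration): the inner mergePaths prefixes the part file name with a separator, and the outer call appends the result to the collector path.

import org.apache.hadoop.fs.Path;

public class PartPathDemo {
  public static void main(String[] args) {
    Path collectorPath = new Path("/tmp/uploads/file_multipart"); // hypothetical collector path
    int partNumber = 3;
    Path partPath = Path.mergePaths(collectorPath,
        Path.mergePaths(new Path(Path.SEPARATOR),
            new Path(Integer.toString(partNumber) + ".part")));
    System.out.println(partPath); // prints: /tmp/uploads/file_multipart/3.part
  }
}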

Code example source: org.apache.hadoop/hadoop-common

fs.create(filePath).close();
} else {
 Path filePathInsideCollector = mergePaths(collectorPath,
   new Path(Path.SEPARATOR + filePath.getName()));
 fs.create(filePathInsideCollector).close();
}

Code example source: apache/hive

backupDataPath = new Path(scheme, authority, DEFAULT_BACKUP_PATH + filePath);
} else {
 backupDataPath = Path.mergePaths(new Path(backup), inputPath);
}

Code example source: org.apache.hadoop/hadoop-yarn-registry

private Path formatDataPath(String basePath) {
 return Path.mergePaths(new Path(basePath), new Path("/_record"));
}

Code example source: dremio/dremio-oss

public static FileSelection createWithFullSchema(final FileSystemWrapper fs, final String parent, final String fullSchemaPath) throws IOException {
 final Path combined = Path.mergePaths(new Path(parent), PathUtils.toFSPath(fullSchemaPath));
 return create(fs, combined);
}

Code example source: dremio/dremio-oss

/**
 * Temporary location for file upload.
 * Add uuid so that this location remains unique even across file renames.
 * @param filePath file path in under home space
 * @param extension file extension
 * @return location of staging dir where user file is uploaded.
 */
private Path getStagingLocation(FilePath filePath, String extension) {
 FilePath uniquePath = filePath.rename(format("%s_%s-%s", filePath.getFileName().toString(), extension, UUID.randomUUID().toString()));
 return Path.mergePaths(config.getStagingPath(hostname), PathUtils.toFSPath(uniquePath.toPathList()));
}

Code example source: ch.cern.hadoop/hadoop-common

protected static void checkTrash(FileSystem trashFs, Path trashRoot,
  Path path) throws IOException {
 Path p = Path.mergePaths(trashRoot, path);
 assertTrue("Could not find file in trash: "+ p , trashFs.exists(p));
}

Code example source: kite-sdk/kite

public static void assertDirectoriesInTrash(FileSystem fs, TrashPolicy trashPolicy, Path... dirs)
     throws IOException {
  Path trashDir = trashPolicy.getCurrentTrashDir();

  for (Path path : dirs) {
   Path trashPath = Path.mergePaths(trashDir, fs.makeQualified(path));
   assertTrue("Directory should exist in trash: " + trashPath, fs.exists(trashPath));
  }
 }
}

Code example source: org.apache.hive/hive-orc

backupDataPath = new Path(scheme, authority, DEFAULT_BACKUP_PATH + filePath);
} else {
 backupDataPath = Path.mergePaths(new Path(backup), corruptPath);
}

Code example source: dremio/dremio-oss

private Path getUploadLocation(FilePath filePath, String extension) {
 FilePath filePathWithExtension = filePath.rename(format("%s_%s", filePath.getFileName().getName(), extension));
 return Path.mergePaths(config.getInnerUploads(), PathUtils.toFSPath(filePathWithExtension.toPathList()));
}

Code example source: ch.cern.hadoop/hadoop-common

@Test (timeout = 30000)
public void testMergePaths() {
 assertEquals(new Path("/foo/bar"),
  Path.mergePaths(new Path("/foo"),
   new Path("/bar")));
 assertEquals(new Path("/foo/bar/baz"),
  Path.mergePaths(new Path("/foo/bar"),
   new Path("/baz")));
 assertEquals(new Path("/foo/bar/baz"),
  Path.mergePaths(new Path("/foo"),
   new Path("/bar/baz")));
 assertEquals(new Path(Shell.WINDOWS ? "/C:/foo/bar" : "/C:/foo/C:/bar"),
  Path.mergePaths(new Path("/C:/foo"),
   new Path("/C:/bar")));
 assertEquals(new Path(Shell.WINDOWS ? "/C:/bar" : "/C:/C:/bar"),
   Path.mergePaths(new Path("/C:/"),
    new Path("/C:/bar")));
 assertEquals(new Path("/bar"),
   Path.mergePaths(new Path("/"), new Path("/bar")));
 assertEquals(new Path("viewfs:///foo/bar"),
  Path.mergePaths(new Path("viewfs:///foo"),
   new Path("file:///bar")));
 assertEquals(new Path("viewfs://vfsauthority/foo/bar"),
  Path.mergePaths(new Path("viewfs://vfsauthority/foo"),
   new Path("file://fileauthority/bar")));
}

Code example source: ch.cern.hadoop/hadoop-common

Path trashDir = Path.mergePaths(new Path(trashRoot.toUri().getPath()),
 new Path(myFile.getParent().toUri().getPath()));
