Usage and code examples of the org.apache.hadoop.fs.FileUtil.compareFs() method

x33g5p2x · 2022-01-19 · reposted in: Other

This article collects a number of Java code examples of the org.apache.hadoop.fs.FileUtil.compareFs() method and shows how FileUtil.compareFs() is used in practice. The examples are drawn from selected open-source projects published on platforms such as GitHub, Stack Overflow, and Maven, so they are reasonably representative and should serve as useful references. Details of the FileUtil.compareFs() method are as follows:
Package: org.apache.hadoop.fs.FileUtil
Class: FileUtil
Method: compareFs

About FileUtil.compareFs

No description is provided. Broadly, compareFs(FileSystem srcFs, FileSystem destFs) returns true when the two FileSystem instances refer to the same underlying file system (matching URI scheme and authority), which lets callers skip a copy when source and destination already live on the same file system.
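As a quick, self-contained sketch (the paths, class name, and printed message here are made up for illustration and are not part of the original article), the method can be called directly on two FileSystem instances obtained from Path objects:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class CompareFsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Two local paths so the sketch runs without a cluster; in a real job you
    // would typically compare, e.g., an HDFS input path against the staging
    // file system of the cluster the job is submitted to.
    Path a = new Path("file:///tmp/compareFs-demo/input");
    Path b = new Path("file:///tmp/compareFs-demo/output");

    FileSystem fsA = a.getFileSystem(conf);
    FileSystem fsB = b.getFileSystem(conf);

    // compareFs returns true when both FileSystem instances refer to the same
    // underlying file system; here both resolve to the local file system.
    boolean same = FileUtil.compareFs(fsA, fsB);
    System.out.println("Same file system? " + same);
  }
}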

Code examples

Code example origin: io.hops/hadoop-mapreduce-client-core

private Path copyRemoteFiles(Path parentDir, Path originalPath,
  Configuration conf, short replication) throws IOException {
 // check if we do not need to copy the files
 // is jt using the same file system.
 // just checking for uri strings... doing no dns lookups
 // to see if the filesystems are the same. This is not optimal.
 // but avoids name resolution.
 FileSystem remoteFs = null;
 remoteFs = originalPath.getFileSystem(conf);
 if (FileUtil.compareFs(remoteFs, jtFs)) {
  return originalPath;
 }
 // this might have name collisions. copy will throw an exception
 // parse the original path to create new path
 Path newPath = new Path(parentDir, originalPath.getName());
 FileUtil.copy(remoteFs, originalPath, jtFs, newPath, false, conf);
 jtFs.setReplication(newPath, replication);
 return newPath;
}
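This helper stages a job resource onto the submission file system only when it is not already there; the compareFs check is what lets it return the original path untouched in the common same-cluster case. A hypothetical call might look like the following (submitJobDir, conf, and the jar path are assumptions for the sketch, not values from the project above):

// Hypothetical usage of copyRemoteFiles(): stage a job jar into the staging
// directory unless it already lives on the submission file system (jtFs).
Path stagedJar = copyRemoteFiles(submitJobDir,
    new Path("hdfs://remote-cluster:8020/user/alice/app.jar"), conf, (short) 10);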

Code example origin: org.apache.hadoop/hadoop-distcp

// When DistCp is asked for an atomic commit, it builds a randomized work
// directory next to the target and requires it to be on the same file system.
workDir = new Path(workDir, WIP_PREFIX + targetPath.getName()
    + rand.nextInt());
FileSystem workFS = workDir.getFileSystem(configuration);
if (!FileUtil.compareFs(targetFS, workFS)) {
  throw new IllegalArgumentException("Work path " + workDir +
      " and target path " + targetPath + " are in different file system");
}

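The check exists because this work directory is later promoted to the final target by a rename, and a rename cannot cross file systems. The same guard generalizes to any move operation; a minimal sketch under assumed names (src, dst, and conf are not from the DistCp code above):

// Rename only works within a single file system, so guard it with compareFs
// and fall back to an explicit copy when the two paths live on different ones.
FileSystem srcFs = src.getFileSystem(conf);
FileSystem dstFs = dst.getFileSystem(conf);
if (FileUtil.compareFs(srcFs, dstFs)) {
  srcFs.rename(src, dst);                              // cheap in-FS move
} else {
  FileUtil.copy(srcFs, src, dstFs, dst, false, conf);  // cross-FS copy
}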