Permission denied error when running embedded Pig from Java on Hadoop

ogq8wdun · posted 2021-06-21 in Pig

Last week I started Hadoop's DFS and MapReduce as the user "root" and ran my embedded Pig Java code, and everything worked. This week I am performing the same task as the non-root user "charlie". After changing the ownership of a few directories, I can now start Hadoop's DFS and MapReduce as "charlie" without any errors. However, when I run the embedded Pig Java code as "charlie", it always complains about the permissions of hadoop.tmp.dir, which I set to /opt/hdfs/tmp in core-site.xml:

java.io.FileNotFoundException: /opt/hdfs/tmp/mapred/local/localRunner/job_local_0001.xml (Permission denied)
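For reference, the relevant property lives in core-site.xml; a minimal sketch of what such an entry looks like, with the path taken from the question (the surrounding file contents are assumed):

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hdfs/tmp</value>
    <description>Base for temporary directories; the local job runner
    writes job_local_*.xml files under this tree.</description>
  </property>
</configuration>
```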
I have checked the permissions of the following directories, and they all look fine:

bash-3.2$ ls -lt /opt/hdfs/tmp
    total 4
    drwxr-xr-x 3 charlie comusers 4096 Apr 16 19:30 mapred
    bash-3.2$ ls -lt /opt/hdfs/tmp/mapred
    total 4
    drwxr-xr-x 2 charlie comusers 4096 Apr 16 19:30 local
    bash-3.2$ ls -lt /opt/hdfs/tmp/mapred/local
    total 0

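The listing above only covers the leaf directories, but "Permission denied" can come from any component of the path: every ancestor directory must grant the user execute (traverse) permission. A small sketch, assuming GNU stat, that prints the owner and mode of a directory and each of its ancestors (on the cluster, substitute /opt/hdfs/tmp/mapred/local for the target):

```shell
# Print owner and mode for a directory and every one of its ancestors.
# On the cluster, set target=/opt/hdfs/tmp/mapred/local; the current
# directory is used here only so the sketch runs anywhere.
target=$PWD
path=$target
while [ "$path" != "/" ]; do
    stat -c '%A %U:%G %n' "$path"   # mode, owner:group, path
    path=$(dirname "$path")         # step up to the parent
done
stat -c '%A %U:%G %n' /
```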
I need some guidance on what I am doing wrong. I googled these keywords but found nothing. Any help would be greatly appreciated!
I have attached the Pig output below; hopefully this information helps.

12/04/16 19:31:28 INFO executionengine.HExecutionEngine: Connecting to hadoop file system at: hdfs://hadoop-namenode:9000
12/04/16 19:31:29 INFO pigstats.ScriptState: Pig features used in the script: HASH_JOIN,GROUP_BY,FILTER,CROSS
12/04/16 19:31:29 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/04/16 19:31:29 INFO mapReduceLayer.MRCompiler: File concatenation threshold: 100 optimistic? false
12/04/16 19:31:30 INFO mapReduceLayer.CombinerOptimizer: Choosing to move algebraic foreach to combiner
12/04/16 19:31:30 INFO mapReduceLayer.MRCompiler$LastInputStreamingOptimizer: Rewrite: POPackage->POForEach to POJoinPackage
12/04/16 19:31:30 INFO mapReduceLayer.MRCompiler$LastInputStreamingOptimizer: Rewrite: POPackage->POForEach to POJoinPackage
12/04/16 19:31:30 INFO mapReduceLayer.MRCompiler$LastInputStreamingOptimizer: Rewrite: POPackage->POForEach to POJoinPackage
12/04/16 19:31:30 INFO mapReduceLayer.MRCompiler$LastInputStreamingOptimizer: Rewrite: POPackage->POForEach to POJoinPackage
12/04/16 19:31:30 INFO mapReduceLayer.MRCompiler$LastInputStreamingOptimizer: Rewrite: POPackage->POForEach to POJoinPackage
12/04/16 19:31:30 INFO mapReduceLayer.MRCompiler$LastInputStreamingOptimizer: Rewrite: POPackage->POForEach to POJoinPackage
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: MR plan size before optimization: 11
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 0 out of total 3 MR operators.
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 0 out of total 3 MR operators.
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 0 map-reduce splittees.
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 0 out of total 3 MR operators.
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 0 out of total 2 MR operators.
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 0 out of total 2 MR operators.
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 1 map-reduce splittees.
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 1 out of total 3 MR operators.
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: MR plan size after optimization: 10
12/04/16 19:31:30 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
12/04/16 19:31:30 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
12/04/16 19:31:30 INFO pigstats.ScriptState: Pig script settings are added to the job
12/04/16 19:31:30 WARN pigstats.ScriptState: unable to read pigs manifest file
12/04/16 19:31:30 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
12/04/16 19:31:35 INFO mapReduceLayer.JobControlCompiler: Setting up multi store job
12/04/16 19:31:35 INFO mapReduceLayer.JobControlCompiler: BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=957600
12/04/16 19:31:35 INFO mapReduceLayer.JobControlCompiler: Neither PARALLEL nor default parallelism is set for this job. Setting number of reducers to 1
12/04/16 19:31:35 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
12/04/16 19:31:35 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission.
12/04/16 19:31:35 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/04/16 19:31:35 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
12/04/16 19:31:35 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
12/04/16 19:31:35 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
12/04/16 19:31:35 INFO input.FileInputFormat: Total input paths to process : 1
12/04/16 19:31:35 INFO util.MapRedUtil: Total input paths to process : 1
12/04/16 19:31:35 INFO util.MapRedUtil: Total input paths (combined) to process : 1
12/04/16 19:31:36 INFO mapReduceLayer.MapReduceLauncher: 0% complete
12/04/16 19:31:36 INFO mapReduceLayer.MapReduceLauncher: job null has failed! Stop running all dependent jobs
12/04/16 19:31:36 INFO mapReduceLayer.MapReduceLauncher: 100% complete
12/04/16 19:31:36 WARN mapReduceLayer.Launcher: There is no log file to write to.
12/04/16 19:31:36 ERROR mapReduceLayer.Launcher: Backend error message during job submission
java.io.FileNotFoundException: /opt/hdfs/tmp/mapred/local/localRunner/job_local_0001.xml (Permission denied)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:180)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:176)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:234)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:335)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:368)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:484)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:465)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:372)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:208)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:142)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1216)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1197)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:92)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:373)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:800)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
    at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378)
    at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247)
    at org.apache.hadoop.mapred.jobcontrol.JobControl.run(JobControl.java:279)
    at java.lang.Thread.run(Thread.java:662)

12/04/16 19:31:36 ERROR pigstats.SimplePigStats: ERROR 2997: Unable to recreate exception from backend error: java.io.FileNotFoundException: /opt/hdfs/tmp/mapred/local/localRunner/job_local_0001.xml (Permission denied)
12/04/16 19:31:36 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed!
12/04/16 19:31:36 WARN pigstats.ScriptState: unable to read pigs manifest file
12/04/16 19:31:36 INFO pigstats.SimplePigStats: Script Statistics: 

HadoopVersion   PigVersion  UserId  StartedAt   FinishedAt  Features
0.20.2      charlie 2012-04-16 19:31:30 2012-04-16 19:31:36 HASH_JOIN,GROUP_BY,FILTER,CROSS

Failed!

Failed Jobs:
JobId   Alias   Feature Message Outputs
N/A events,events1,grouped  MULTI_QUERY Message: java.io.FileNotFoundException: /opt/hdfs/tmp/mapred/local/localRunner/job_local_0001.xml (Permission denied)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:180)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:176)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:234)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:335)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:368)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:484)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:465)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:372)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:208)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:142)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1216)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1197)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:92)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:373)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:800)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
    at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378)
    at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247)
    at org.apache.hadoop.mapred.jobcontrol.JobControl.run(JobControl.java:279)
    at java.lang.Thread.run(Thread.java:662)

Input(s):
Failed to read data from "/grapevine/analysis/recommendation/input/article_based/all_grapevine_events.txt"

Output(s):

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
null    ->  null,null,
null    ->  null,
null    ->  null,
null    ->  null,null,
null    ->  null,
null    ->  null,null,
null    ->  null,
null    ->  null,
null    ->  null,
null

12/04/16 19:31:36 INFO mapReduceLayer.MapReduceLauncher: Failed!
qcuzuvrc's answer:

The answer was already posted in the comments:
Use `chown -R youruser foldername` to recursively change the owner of the folder; after that it no longer gives the error message.
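A concrete sketch of that fix (the user charlie, group comusers, and path /opt/hdfs/tmp come from the question; the demonstration below runs against a throwaway directory owned by the current user so it works without root):

```shell
# On the cluster, as root:
#   chown -R charlie:comusers /opt/hdfs/tmp
# Demonstrated here on a temporary directory instead.
dir=$(mktemp -d)
chown -R "$(id -un)" "$dir"   # recursively hand the tree to the target user
chmod -R u+rwX "$dir"         # owner can read/write; directories stay traversable
touch "$dir/probe" && echo writable
rm -rf "$dir"
```

The capital X in u+rwX adds execute only on directories (and files already executable), which is exactly what path traversal needs.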
