Hadoop: java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException

kg7wmglp · posted 2021-06-04 in Hadoop

I am new to Hadoop and only started using it today. I want to write a file to HDFS; the server is running Hadoop 1.2.1. When I run the jps command in the CLI, I can see the nodes that are running:

31895 Jps
29419 SecondaryNameNode
29745 TaskTracker
29257 DataNode

Here is my sample client code for writing a file to HDFS:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public static void main(String[] args) {
    try {
        // 1. Get an instance of Configuration
        Configuration configuration = new Configuration();
        configuration.addResource(new Path("/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/conf/core-site.xml"));
        configuration.addResource(new Path("/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/conf/hdfs-site.xml"));
        // 2. Create an InputStream to read the data from a local file
        InputStream inputStream = new BufferedInputStream(
                new FileInputStream("/home/local/PAYODA/hariprasanth.l/Desktop/ProjectionTest"));
        // 3. Get the HDFS instance
        FileSystem hdfs = FileSystem.get(new URI("hdfs://localhost:54310"), configuration);
        // 4. Open an OutputStream to write the data; this can be obtained from the FileSystem
        OutputStream outputStream = hdfs.create(new Path("hdfs://localhost:54310/user/hadoop/Hadoop_File.txt"),
                new Progressable() {
                    @Override
                    public void progress() {
                        System.out.println("....");
                    }
                });
        try {
            IOUtils.copyBytes(inputStream, outputStream, 4096, false);
        } finally {
            IOUtils.closeStream(inputStream);
            IOUtils.closeStream(outputStream);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

When I run the code, I get this exception:

java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1063)
at org.apache.hadoop.ipc.Client.call(Client.java:1031)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at com.sun.proxy.$Proxy0.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:235)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:275)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:249)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:163)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:283)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:247)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:109)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1792)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:76)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1826)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1808)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:265)
at com.test.hadoop.writefiles.FileWriter.main(FileWriter.java:27)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:760)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:698)

When I debug it, the error occurs on the line where I try to connect to the local HDFS server:

FileSystem hdfs = FileSystem.get(new URI("hdfs://localhost:54310"), configuration);

From what I found on Google, this indicates a version mismatch.
The server's Hadoop version is 1.2.1, and the client jars I am using are:

hadoop-common-0.22.0.jar
hadoop-hdfs-0.22.0.jar

Please let me know the cause of the problem as soon as possible.
If possible, also suggest a place where I can find the Hadoop client jars, and name the jars too... please...
Regards, Hari

s2j5cfk01#

No NameNode is running; there is a problem with your NameNode. Did you format the NameNode before starting it?

hadoop namenode -format
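To expand on that answer: on a fresh single-node 1.x setup the usual sequence is to format the NameNode once and then start the HDFS daemons. This is a sketch assuming the Hadoop 1.x scripts are on the PATH; note that formatting erases any existing HDFS metadata, so it should only be done on a new or disposable installation:

# Format the NameNode's metadata directory (destroys any existing HDFS data)
hadoop namenode -format

# Start the HDFS daemons (NameNode, DataNode, SecondaryNameNode)
start-dfs.sh

# Verify that a NameNode process now shows up alongside the others
jps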
1l5u6lss2#

This is because the same classes are present in different jars (i.e. hadoop-commons and hadoop-core contain the same classes). I was actually confused about which jars to use.
In the end I used the Apache hadoop-core jar. It worked like a charm.
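In concrete terms, the fix is to make the client jar match the 1.2.1 server: drop the split 0.22 hadoop-common/hadoop-hdfs jars and depend on the single hadoop-core artifact for that release instead. A sketch of the dependency, assuming the project is built with Maven:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.2.1</version>
</dependency>

The key point is that the client-side RPC version must match the server's, which is why a 0.22 client jar fails against a 1.2.1 NameNode with an EOFException.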
