Problem running an MR program on Hadoop from Eclipse

798qvoo8  posted 2021-06-03 in Hadoop

I'm having trouble executing a MapReduce program from Eclipse on my Windows machine. A Hadoop pseudo-distributed cluster is running on an Ubuntu VM on the same machine. I'm running Hadoop 1.0.3. I tried Eclipse Juno and Ganymede with the Eclipse plugin 1.0.3, but without success whenever I add the Linux VM as a MapReduce location. I also tried setting the properties directly via the configuration below, with no further luck. I run it as a Java application.

Configuration conf = new Configuration();
// NameNode RPC address (fs.default.name is the Hadoop 1.x property name)
conf.set("fs.default.name", "hdfs://192.168.65.132:8020/");
// JobTracker RPC address
conf.set("mapred.job.tracker", "192.168.65.132:8021");
// submit the job as user "hadoop"
conf.set("hadoop.job.ugi", "hadoop");

I get the following error:

13/01/08 09:59:44 INFO ipc.Client: Retrying connect to server: 192.168.65.132/192.168.65.132:8020. Already tried 7 time(s).
13/01/08 09:59:46 INFO ipc.Client: Retrying connect to server: 192.168.65.132/192.168.65.132:8020. Already tried 8 time(s).
13/01/08 09:59:48 INFO ipc.Client: Retrying connect to server: 192.168.65.132/192.168.65.132:8020. Already tried 9 time(s).
Exception in thread "main" java.net.ConnectException: Call to 192.168.65.132/192.168.65.132:8020 failed on connection exception: java.net.ConnectException: Connection refused: no further information

at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
at org.apache.hadoop.ipc.Client.call(Client.java:1075)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:238)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(FileInputFormat.java:372)
at inverika.training.examples.WordCount.main(WordCount.java:37)
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1206)
at org.apache.hadoop.ipc.Client.call(Client.java:1050)
... 17 more

Has anyone tried a similar setup? Please share your experience with it.
netstat dump:

Proto Recv-Q Send-Q Local Address   Foreign Address State 
tcp6  0      0      127.0.0.1:8020  :::*            LISTEN 
tcp6  0      0      127.0.0.1:8021  :::*            LISTEN 
tcp6  0      0      127.0.0.1:58399 127.0.0.1:8020  ESTABLISHED 
tcp6  0      0      127.0.0.1:8020  127.0.0.1:58399 ESTABLISHED 
tcp6  0      0      127.0.0.1:45042 127.0.0.1:8021  ESTABLISHED 
tcp6  0      0      127.0.0.1:8021  127.0.0.1:45042 ESTABLISHED
t8e9dugd #1

For Ubuntu, I followed Michael Noll's tutorial and it worked. Maybe you should try this: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

ykejflvf #2

From your netstat dump, it looks like your pseudo-distributed VM only accepts connections from itself (the VM). I would expect the netstat entries for the NameNode (NN) and JobTracker (JT) to look more like this:

Proto Recv-Q Send-Q Local Address   Foreign Address State 
tcp   0      0      0.0.0.0:8020    :::*            LISTEN 
tcp   0      0      0.0.0.0:8021    :::*            LISTEN

I'm not sure whether it matters that the proto is tcp6, but if you want to connect to them from outside, the Local Address should show 0.0.0.0:[port]. 127.0.0.1 is a loopback address, so it only accepts connections from the VM itself.
You also need to look at how your VM's networking is configured (NAT, bridged, etc.): if you can't ping the VM address from the Windows command line (ping 192.168.65.132), then you won't be able to get Eclipse to talk to the VM either. Which address is 192.168.65.132 - is it your Windows IP address, or the VM's?
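
If ping succeeds but you still get "Connection refused", a quick TCP probe from the Windows side (plain Java; the address and port are taken from your post) will show whether the NameNode port is reachable at all:

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) throws Exception {
        // Attempt a plain TCP connect to the NameNode RPC port, 3s timeout.
        // "Connection refused" here means nothing is listening on that
        // address:port from outside - consistent with a 127.0.0.1 binding.
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("192.168.65.132", 8020), 3000);
            System.out.println("NameNode port is reachable");
        }
    }
}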
So to resolve this, you either need to reconfigure the VM networking and amend the Hadoop configuration to use the VM hostname/IP address (not 127.0.0.1), or you could use a tool like PuTTY to create some dynamic IPv4 SOCKS tunnels.
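
Concretely, that would mean something like the following on the VM (Hadoop 1.x properties; the daemons bind to the host named in these values, so pointing them at the VM's address instead of localhost makes them listen on the external interface):

<!-- core-site.xml on the VM -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.65.132:8020</value>
</property>

<!-- mapred-site.xml on the VM -->
<property>
  <name>mapred.job.tracker</name>
  <value>192.168.65.132:8021</value>
</property>

After changing these, restart the daemons and re-check netstat to confirm the NN and JT are no longer bound to 127.0.0.1.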
