I want to build a homemade Spark cluster with two computers on the same network. The setup is as follows:
a) 192.168.1.9: Spark master with Hadoop HDFS installed
Hadoop's core-site.xml is:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://0.0.0.0:9000</value>
  </property>
</configuration>
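A side note on the configuration above: with fs.defaultFS set to hdfs://0.0.0.0:9000 the NameNode listens on all interfaces, but 0.0.0.0 is a wildcard listen address, not an address other machines can connect to, and Hadoop generally expects fs.defaultFS to name the address (or hostname) that clients and DataNodes should use. A minimal sketch of a corrected core-site.xml, assuming 192.168.1.9 is the master's LAN address:

```xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>
  <property>
    <!-- Use the address remote clients can actually reach,
         not the wildcard 0.0.0.0 -->
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.1.9:9000</value>
  </property>
</configuration>
```

The NameNode needs a restart after this change; whether it alone fixes the DataNode address reported to clients also depends on how the master's /etc/hosts resolves its own hostname.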
b) 192.168.1.6: Spark only (slave)
From b, I want to access a file stored in a's Hadoop HDFS using Spark commands:
...
# Load files (sc is the SparkContext created earlier)
from pyspark import SparkFiles
file_1 = "input_1.pkl"
file_2 = "input_2.pkl"
hdfs_base_path = "hdfs://192.168.1.9:9000/folderx/"
sc.addFile(hdfs_base_path + file_1)
sc.addFile(hdfs_base_path + file_2)
# Get files back
with open(SparkFiles.get(file_1), 'rb') as fw:
    ...  # use fw
However, when I test the program on b by executing:
./spark-submit --master local program.py
the output is:
17/07/25 19:02:51 INFO SparkContext: Added file hdfs://192.168.1.9:9000/bigdata/input_1_new_grid.pkl at hdfs://192.168.1.9:9000/bigdata/input_1_new_grid.pkl with timestamp 1501002171301
17/07/25 19:02:51 INFO Utils: Fetching hdfs://192.168.1.9:9000/bigdata/input_1_new_grid.pkl to /tmp/spark-838c3774-36ec-4db1-ab01-a8a8c627b100/userFiles-b4973f80-be6e-4f2e-8ba1-cd64ddca369a/fetchFileTemp1979399086141127743.tmp
17/07/25 19:02:51 WARN BlockReaderFactory: I/O error constructing remote block reader.
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
and later:
17/07/25 19:02:51 WARN DFSClient: Failed to connect to /127.0.0.1:50010 for block, add to deadNodes and continue. java.net.ConnectException: Connection refused
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
The program tries to reach 127.0.0.1:50010, which is wrong. Should I install Hadoop on b as well? If that is not necessary, what is the correct configuration? Thank you!
1 Answer
By the way, in case anyone comes here looking for a solution: I changed quickstart.cloudera to the real IP address instead of 127.0.0.1. The default /etc/hosts is:
127.0.0.1 quickstart.cloudera quickstart localhost localhost.domain
What you need instead is:
127.0.0.1 localhost localhost.domain
ip_address_of_your_vm quickstart.cloudera quickstart
You may also need to modify /usr/bin/cloudera-quickstart-ip, because the hosts file may be reset again every time the VM is restarted.
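To expand on why this works: the NameNode hands clients the address each DataNode registered with, and with the default hosts file above that is the loopback address, hence the failed connection to 127.0.0.1:50010. Besides fixing /etc/hosts, a client-side workaround sometimes used (assuming Hadoop 2.x or later) is to make the HDFS client connect to DataNodes by hostname instead of the reported IP, via hdfs-site.xml on the machine running Spark:

```xml
<configuration>
  <property>
    <!-- Resolve DataNode addresses via their hostnames on the client side,
         instead of the (possibly loopback) IPs the NameNode reports -->
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
  </property>
</configuration>
```

This only helps if the DataNode hostnames resolve to reachable addresses from the client, so an entry on b mapping the master's hostname to 192.168.1.9 may still be needed.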