hadoop - Using HDFS with Spark on a local machine with Docker

ruarlubt · posted 2023-03-01 in Hadoop

I am trying to use Docker, more specifically Docker Compose, to run a Hadoop environment in which I can store files in HDFS, while writing Spark applications on my local machine that access that HDFS:

services:
  namenode:
    image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
    container_name: namenode
    restart: always
    ports:
      - 9870:9870
      - 9000:9000
    volumes:
      - hadoop_namenode:/hadoop/dfs/name
      - ./hadoop_namenode:/hadoop/dfs/data
    environment:
      - CLUSTER_NAME=test
    env_file:
      - ./hadoop.env

  datanode:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode
    restart: always
    volumes:
      - hadoop_datanode:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env

  resourcemanager:
    image: bde2020/hadoop-resourcemanager:2.0.0-hadoop3.2.1-java8
    container_name: resourcemanager
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864"
    env_file:
      - ./hadoop.env

  nodemanager1:
    image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8
    container_name: nodemanager
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864 resourcemanager:8088"
    env_file:
      - ./hadoop.env

  historyserver:
    image: bde2020/hadoop-historyserver:2.0.0-hadoop3.2.1-java8
    container_name: historyserver
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864 resourcemanager:8088"
    volumes:
      - hadoop_historyserver:/hadoop/yarn/timeline
    env_file:
      - ./hadoop.env

volumes:
  hadoop_namenode:
  hadoop_datanode:
  hadoop_historyserver:

Then I try to read the file "/app/sports/nba/games/sample_game.json" from HDFS in a Spark Scala program:

package com.example.spark

import org.apache.spark.sql._

object Test {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder
      .appName("Test")
      .config("fs.defaultFS", "hdfs://localhost:9000")
      .master("local[*]")
      .getOrCreate()

    // val df = spark.read.json("file:///C:/Users/user/Documents/dev/docker/hadoop_namenode") // NOTE: it reads successfully from the filesystem, just not HDFS.
    val df = spark.read.json("hdfs://localhost:9000/app/sports/nba/games")
    df.show(false)
  }
}

I run into error messages along the following lines:

java.net.ConnectException: Connection timed out: no further information
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)...

[Executor task launch worker for task 0] WARN org.apache.hadoop.hdfs.DFSClient - Failed to connect to /172.20.0.3:9866 for file /app/sports/nba/games/sample_game.json for block BP-2085377089-172.20.0.7-1676688130925:blk_1073741882_1058, add to deadNodes and continue.
java.net.ConnectException: Connection timed out: no further information
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)...

[Executor task launch worker for task 0] WARN org.apache.hadoop.hdfs.DFSClient - No live nodes contain block BP-2085377089-172.20.0.7-1676688130925:blk_1073741882_1058 after checking nodes = [DatanodeInfoWithStorage[172.20.0.3:9866,DS-81d2fe5a-74e5-43cc-a2c6-1d56e5f89821,DISK]], ignoredNodes = null
[Executor task launch worker for task 0] INFO org.apache.hadoop.hdfs.DFSClient - Could not obtain BP-2085377089-172.20.0.7-1676688130925:blk_1073741882_1058 from any node:  No live nodes contain current block Block locations: DatanodeInfoWithStorage[172.20.0.3:9866,DS-81d2fe5a-74e5-43cc-a2c6-1d56e5f89821,DISK] Dead nodes:  DatanodeInfoWithStorage[172.20.0.3:9866,DS-81d2fe5a-74e5-43cc-a2c6-1d56e5f89821,DISK]. Will get new block locations from namenode and retry...
[Executor task launch worker for task 0] WARN org.apache.hadoop.hdfs.DFSClient - DFS chooseDataNode: got # 1 IOException, will wait for 1444.1894602927216 msec.
[Executor task launch worker for task 0] WARN org.apache.hadoop.hdfs.client.impl.BlockReaderFactory - I/O error constructing remote block reader.
java.net.ConnectException: Connection timed out: no further information
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)
...

What am I doing wrong, and how can I fix it?

6l7fqoea 1#

localhost will refer to the Spark container, not the namenode.
You need to use hdfs://namenode:9000/app/sports/nba/games/ instead; a sketch of the adjusted code is shown below.
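A minimal sketch of the adjusted Spark program. The object name TestFixed is hypothetical, and it assumes the hostname "namenode" resolves from the machine running Spark (for example via a hosts-file entry), which is an assumption beyond what the answer states:

package com.example.spark

import org.apache.spark.sql._

object TestFixed {
  def main(args: Array[String]): Unit = {
    // Point the default filesystem and the read path at the namenode service
    // name instead of localhost. Assumes "namenode" is resolvable from the
    // machine running Spark (e.g. a hosts-file entry) -- an assumption not
    // stated in the answer.
    val spark = SparkSession
      .builder
      .appName("Test")
      .config("fs.defaultFS", "hdfs://namenode:9000")
      .master("local[*]")
      .getOrCreate()

    val df = spark.read.json("hdfs://namenode:9000/app/sports/nba/games/")
    df.show(false)

    spark.stop()
  }
}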
You'll also need to use hdfs dfs -put to load the file, if you haven't already done so; a Docker volume mount does not put files into HDFS (example commands below).
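One way to do that, as a sketch: copy the file into the namenode container and put it into HDFS from there, assuming the container name namenode from the compose file above and that the hdfs command is on the container's PATH:

# Copy the JSON file from the host into the namenode container,
# then create the target directory and load the file into HDFS.
# Paths are taken from the question; adjust as needed.
docker cp sample_game.json namenode:/tmp/sample_game.json
docker exec namenode hdfs dfs -mkdir -p /app/sports/nba/games
docker exec namenode hdfs dfs -put /tmp/sample_game.json /app/sports/nba/games/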
Unrelated, but you could remove the YARN containers, since you aren't using YARN; a trimmed-down compose sketch follows.
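A sketch of what that trimmed compose file could look like, keeping only the HDFS services and volumes from the question's file:

services:
  namenode:
    image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
    container_name: namenode
    restart: always
    ports:
      - 9870:9870
      - 9000:9000
    volumes:
      - hadoop_namenode:/hadoop/dfs/name
    environment:
      - CLUSTER_NAME=test
    env_file:
      - ./hadoop.env

  datanode:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode
    restart: always
    volumes:
      - hadoop_datanode:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env

volumes:
  hadoop_namenode:
  hadoop_datanode: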
Example - https://github.com/OneCricketeer/docker-stacks/blob/master/hadoop-spark/spark-notebooks/hdfs-reader.ipynb
