How do I configure YARN in Cloudera so that Spark executors run on all cluster nodes?

Asked by e5nszbig on 2021-06-04 in Hadoop

We have a Cloudera cluster with three data nodes and corresponding YARN NodeManagers. However, when we submit a Spark script to the cluster in yarn-client mode, one of the nodes hosts the Application Master and only the other two nodes run Spark executors. This can be seen in the following Spark log, where only node2 and node3 register as executors:

Using properties file: /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/conf/spark-defaults.conf
Adding default property: spark.eventLog.enabled=true 
Adding default property: spark.driver.extraLibraryPath=/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/lib/native
Adding default property: spark.executor.extraLibraryPath=/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/lib/native
Adding default property: spark.eventLog.dir=hdfs://node0.server.ads:8020/user/spark/applicationHistory
Parsed arguments:
  master                  yarn
  deployMode              client
  executorMemory          4500M
  executorCores           4
  totalExecutorCores      null
  propertiesFile          /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/conf/spark-defaults.conf
  driverMemory            2G
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  null
  supervise               false
  queue                   null
  numExecutors            10
  files                   null
  pyFiles                 null
  archives                null
  jars                    null
  verbose                 true

Spark properties used, including those specified through
 --conf and those from the properties file /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/conf/spark-defaults.conf:
  spark.executor.extraLibraryPath -> /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/lib/native
  spark.driver.extraLibraryPath -> /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/lib/native
  spark.eventLog.enabled -> true
  spark.yarn.historyServer.address -> http://node0.server.ads:18088
  spark.kryoserializer.buffer.mb -> 500
  spark.eventLog.dir -> hdfs://node0.server.ads:8020/user/spark/applicationHistory

System properties:
spark.executor.extraLibraryPath -> /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/lib/native
spark.driver.memory -> 2G
spark.executor.memory -> 4500M
spark.executor.instances -> 10
spark.driver.extraLibraryPath -> /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/lib/native
spark.eventLog.enabled -> true
SPARK_SUBMIT -> true
spark.kryoserializer.buffer.mb -> 500
spark.master -> yarn-client
spark.executor.cores -> 4

15/01/20 15:10:56 INFO SecurityManager: Changing view acls to: ads
15/01/20 15:10:56 INFO SecurityManager: Changing modify acls to: ads
15/01/20 15:10:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ads); users with modify permissions: Set(ads)
15/01/20 15:10:56 INFO Slf4jLogger: Slf4jLogger started
15/01/20 15:10:56 INFO Remoting: Starting remoting
15/01/20 15:10:57 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@node2.server.ads:39325]
15/01/20 15:10:57 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@node2.server.ads:39325]
15/01/20 15:10:57 INFO Utils: Successfully started service 'sparkDriver' on port 39325.
15/01/20 15:10:57 INFO SparkEnv: Registering MapOutputTracker
15/01/20 15:10:57 INFO SparkEnv: Registering BlockManagerMaster
15/01/20 15:10:57 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20150120151057-4b21
15/01/20 15:10:57 INFO MemoryStore: MemoryStore started with capacity 1060.0 MB
15/01/20 15:10:57 INFO HttpFileServer: HTTP File server directory is /tmp/spark-7468fc6f-a7e9-45ce-acf3-003052cdb39e
15/01/20 15:10:57 INFO HttpServer: Starting HTTP Server
15/01/20 15:10:57 INFO Utils: Successfully started service 'HTTP file server' on port 35521.
15/01/20 15:10:57 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/01/20 15:10:57 INFO SparkUI: Started SparkUI at http://node2.server.ads:4040
15/01/20 15:10:57 INFO SparkContext: Added JAR file:/home/...
15/01/20 15:10:57 INFO RMProxy: Connecting to ResourceManager at node0.server.ads/10.10.10.24:8032
15/01/20 15:10:57 INFO Client: Requesting a new application from cluster with 3 NodeManagers
15/01/20 15:10:57 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (5421 MB per container)
15/01/20 15:10:57 INFO Client: Will allocate AM container, with 2432 MB memory including 384 MB overhead
15/01/20 15:10:57 INFO Client: Setting up container launch context for our AM
15/01/20 15:10:57 INFO Client: Preparing resources for our AM container
15/01/20 15:10:58 INFO Client: Uploading resource file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/spark-assembly-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar -> hdfs://node0.server.ads:8020/user/ads/.sparkStaging/application_1420724223178_0048/spark-assembly-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar
15/01/20 15:10:59 INFO Client: Setting up the launch environment for our AM container
15/01/20 15:10:59 INFO SecurityManager: Changing view acls to: ads
15/01/20 15:10:59 INFO SecurityManager: Changing modify acls to: ads
15/01/20 15:10:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ads); users with modify permissions: Set(ads)
15/01/20 15:10:59 INFO Client: Submitting application 48 to ResourceManager
15/01/20 15:10:59 INFO YarnClientImpl: Submitted application application_1420724223178_0048
15/01/20 15:11:00 INFO Client: Application report for application_1420724223178_0048 (state: ACCEPTED)
15/01/20 15:11:00 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: root.ads
     start time: 1421763059484
     final status: UNDEFINED
     tracking URL: http://node0.server.ads:8088/proxy/application_1420724223178_0048/
     user: ads
15/01/20 15:11:01 INFO Client: Application report for application_1420724223178_0048 (state: ACCEPTED)
15/01/20 15:11:02 INFO Client: Application report for application_1420724223178_0048 (state: ACCEPTED)
15/01/20 15:11:03 INFO Client: Application report for application_1420724223178_0048 (state: ACCEPTED)
15/01/20 15:11:04 INFO Client: Application report for application_1420724223178_0048 (state: ACCEPTED)
15/01/20 15:11:05 INFO Client: Application report for application_1420724223178_0048 (state: ACCEPTED)
15/01/20 15:11:06 INFO Client: Application report for application_1420724223178_0048 (state: ACCEPTED)
15/01/20 15:11:07 INFO Client: Application report for application_1420724223178_0048 (state: ACCEPTED)
15/01/20 15:11:07 INFO YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@node1.server.ads:34994/user/YarnAM#1676089466]
15/01/20 15:11:07 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> node0.server.ads, PROXY_URI_BASES -> http://node0.server.ads:8088/proxy/application_1420724223178_0048), /proxy/application_1420724223178_0048
15/01/20 15:11:07 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/01/20 15:11:08 INFO Client: Application report for application_1420724223178_0048 (state: RUNNING)
15/01/20 15:11:08 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: node1.server.ads
     ApplicationMaster RPC port: 0
     queue: root.ads
     start time: 1421763059484
     final status: UNDEFINED
     tracking URL: http://node0.server.ads:8088/proxy/application_1420724223178_0048/
     user: ads
15/01/20 15:11:08 INFO YarnClientSchedulerBackend: Application application_1420724223178_0048 has started running.
15/01/20 15:11:08 INFO NettyBlockTransferService: Server created on 47989
15/01/20 15:11:08 INFO BlockManagerMaster: Trying to register BlockManager
15/01/20 15:11:08 INFO BlockManagerMasterActor: Registering block manager node2.server.ads:47989 with 1060.0 MB RAM, BlockManagerId(<driver>, node2.server.ads, 47989)
15/01/20 15:11:08 INFO BlockManagerMaster: Registered BlockManager
15/01/20 15:11:08 INFO EventLoggingListener: Logging events to hdfs://node0.server.ads:8020/user/spark/applicationHistory/application_1420724223178_0048
15/01/20 15:11:14 INFO YarnClientSchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@node2.server.ads:50627/user/Executor#1287021831] with ID 1
15/01/20 15:11:14 INFO RackResolver: Resolved node2.server.ads to /default
15/01/20 15:11:14 INFO YarnClientSchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@node3.server.ads:55969/user/Executor#156589962] with ID 2
15/01/20 15:11:14 INFO RackResolver: Resolved node3.server.ads to /default
15/01/20 15:11:15 INFO BlockManagerMasterActor: Registering block manager node2.server.ads:36251 with 2.3 GB RAM, BlockManagerId(1, node2.server.ads, 36251)
15/01/20 15:11:15 INFO BlockManagerMasterActor: Registering block manager node3.server.ads:37204 with 2.3 GB RAM, BlockManagerId(2, node3.server.ads, 37204)
15/01/20 15:11:27 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
15/01/20 15:11:27 INFO UserIDSelector: starting...
15/01/20 15:11:27 INFO MemoryStore: ensureFreeSpace(256525) called with curMem=0, maxMem=1111511531
15/01/20 15:11:27 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 250.5 KB, free 1059.8 MB)
15/01/20 15:11:28 INFO MemoryStore: ensureFreeSpace(21336) called with curMem=256525, maxMem=1111511531
15/01/20 15:11:28 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 20.8 KB, free 1059.8 MB)
15/01/20 15:11:28 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on node2.server.ads:47989 (size: 20.8 KB, free: 1060.0 MB)
15/01/20 15:11:28 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
15/01/20 15:11:28 INFO SparkContext: Created broadcast 0 from newAPIHadoopFile at ApiParquetSparkConnector.java:43
15/01/20 15:11:28 INFO MemoryStore: ensureFreeSpace(256117) called with curMem=277861, maxMem=1111511531
15/01/20 15:11:28 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 250.1 KB, free 1059.5 MB)
15/01/20 15:11:28 INFO MemoryStore: ensureFreeSpace(21247) called with curMem=533978, maxMem=1111511531
15/01/20 15:11:28 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 20.7 KB, free 1059.5 MB)
15/01/20 15:11:28 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on node2.server.ads:47989 (size: 20.7 KB, free: 1060.0 MB)
15/01/20 15:11:28 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
15/01/20 15:11:28 INFO SparkContext: Created broadcast 1 from newAPIHadoopFile at ApiParquetSparkConnector.java:43
15/01/20 15:11:28 INFO FileInputFormat: Total input paths to process : 110
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
15/01/20 15:11:28 INFO deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
15/01/20 15:11:28 INFO deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
15/01/20 15:11:28 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
15/01/20 15:11:28 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
15/01/20 15:11:28 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
15/01/20 15:11:28 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
15/01/20 15:11:28 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
15/01/20 15:11:28 INFO SparkContext: Starting job: saveAsTextFile at UserIDSelector.java:74
15/01/20 15:11:28 INFO DAGScheduler: Registering RDD 7 (groupBy at UserIDSelector.java:72)
15/01/20 15:11:28 INFO DAGScheduler: Got job 0 (saveAsTextFile at UserIDSelector.java:74) with 110 output partitions (allowLocal=false)
15/01/20 15:11:28 INFO DAGScheduler: Final stage: Stage 1(saveAsTextFile at UserIDSelector.java:74)
15/01/20 15:11:28 INFO DAGScheduler: Parents of final stage: List(Stage 0)
15/01/20 15:11:28 INFO DAGScheduler: Missing parents: List(Stage 0)
15/01/20 15:11:28 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[7] at groupBy at UserIDSelector.java:72), which has no missing parents
15/01/20 15:11:28 INFO MemoryStore: ensureFreeSpace(3936) called with curMem=555225, maxMem=1111511531
15/01/20 15:11:28 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.8 KB, free 1059.5 MB)
15/01/20 15:11:28 INFO MemoryStore: ensureFreeSpace(2194) called with curMem=559161, maxMem=1111511531
15/01/20 15:11:28 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.1 KB, free 1059.5 MB)
15/01/20 15:11:28 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on node2.server.ads:47989 (size: 2.1 KB, free: 1060.0 MB)
15/01/20 15:11:28 INFO BlockManagerMaster: Updated info of block broadcast_2_piece0
15/01/20 15:11:28 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:838
15/01/20 15:11:28 INFO DAGScheduler: Submitting 110 missing tasks from Stage 0 (MappedRDD[7] at groupBy at UserIDSelector.java:72)
15/01/20 15:11:28 INFO YarnClientClusterScheduler: Adding task set 0.0 with 110 tasks
15/01/20 15:11:28 INFO RackResolver: Resolved node1.server.ads to /default
15/01/20 15:11:28 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, node3.server.ads, NODE_LOCAL, 5553 bytes)
15/01/20 15:11:28 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, node2.server.ads, NODE_LOCAL, 5553 bytes)
15/01/20 15:11:28 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, node3.server.ads, NODE_LOCAL, 5553 bytes)
15/01/20 15:11:28 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, node2.server.ads, NODE_LOCAL, 5553 bytes)
15/01/20 15:11:28 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, node3.server.ads, NODE_LOCAL, 5553 bytes)

The question is: how can we configure YARN so that all three nodes are used as Spark executors?
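
For reference, the spark-submit invocation behind the log above would have looked roughly like the following sketch, reconstructed from the "Parsed arguments" section of the log; the application JAR path is a placeholder, since the real path is truncated in the log (file:/home/...):

spark-submit \
  --master yarn-client \
  --driver-memory 2G \
  --executor-memory 4500M \
  --executor-cores 4 \
  --num-executors 10 \
  --conf spark.kryoserializer.buffer.mb=500 \
  --verbose \
  /home/.../app.jar   # placeholder for the truncated application JAR path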

fsi0uk1n #1

Having three data nodes alone is not enough. You must also install a YARN NodeManager on every node that is supposed to do YARN processing. Being a data node only means that HDFS stores data on that node.
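
A quick way to verify this (a sketch, not from the original post) is to ask the ResourceManager which NodeManagers it actually knows about:

yarn node -list

If all three hosts run a healthy NodeManager, the command should print one RUNNING entry each for node1.server.ads, node2.server.ads and node3.server.ads. In CDH, a NodeManager role can be added to a host from Cloudera Manager (YARN service > Instances > Add Role Instances); afterwards, the client log line "Requesting a new application from cluster with N NodeManagers" should report the expected node count.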
