Flume 1.4 with Hadoop 2.2.0: problem with the hdfs sink type

3okqufwl · posted 2021-06-04 in Hadoop

Below is my Flume configuration file:


# example.conf: A single-node Flume configuration

# Name the components on this agent

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Spooling-directory source: reads files dropped into the logs directory

a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /data/apache-flume-1.4.0-bin/logs

# Use a channel which buffers events in memory

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://PPWFMD509:9160/flume-test
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute

This is running against Hadoop 2.2.0 (I added the hadoop-core-1.2.1.jar file to the Flume lib directory).
In the Maven repository I cannot find a hadoop-core jar for 2.2.x. Why not? And what is the hadoop-core-0.20 version?
When I run the agent and drop files into the spool directory, I get the exception below:

2014-02-26 14:51:30,865 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:219)] Creating hdfs://PPWFMD509:9160/flume-test/events-.1393406490812.tmp
2014-02-26 14:51:31,079 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:418)] HDFS IO error
org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
        at org.apache.hadoop.ipc.Client.call(Client.java:1113)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)

Answer 1 (k5hmc34c):

The message "Server IPC version 9 cannot communicate with client version 4" is the crux: you are facing a compatibility problem. Flume is trying to use a Hadoop client version that is incompatible with your Hadoop cluster (the 1.2.1 client cannot be used against a 2.x+ cluster).
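A minimal sketch of the usual fix, assuming the stock tarball layouts with Hadoop 2.2.0 under HADOOP_HOME and Flume 1.4.0 under FLUME_HOME (exact jar names and versions may differ in your install): remove the 1.x client jar from Flume's lib directory and copy in the matching 2.2.0 client jars instead.

# Remove the old 1.x client jar that causes the IPC version mismatch
rm $FLUME_HOME/lib/hadoop-core-1.2.1.jar

# Copy in the 2.2.0 client jars (paths follow the stock Hadoop 2.2.0 tarball)
cp $HADOOP_HOME/share/hadoop/common/hadoop-common-2.2.0.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/common/lib/hadoop-auth-2.2.0.jar $FLUME_HOME/lib/

# Plus transitive dependencies, e.g. (if Flume already ships older
# copies of any of these, replace them rather than adding duplicates)
cp $HADOOP_HOME/share/hadoop/common/lib/protobuf-java-2.5.0.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/common/lib/commons-configuration-1.6.jar $FLUME_HOME/lib/

After restarting the agent, the HDFS sink should be able to talk IPC version 9 to the 2.2.0 NameNode.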
As for the library versions, this is quoted from the Hadoop releases page:

1.2.X - current stable version, 1.2 release
2.5.X - current stable 2.x version
0.23.X - similar to 2.X.X but missing NN HA.
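On the Maven question: as far as I know, the hadoop-core artifact only exists for the 0.20/1.x line. In Hadoop 2.x it was split into hadoop-common, hadoop-hdfs, and so on, with hadoop-client as the aggregator artifact, which is why no hadoop-core 2.2.x exists in the repository. A sketch of the dependency you would declare instead:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.2.0</version>
</dependency>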
