Hadoop: writing data into Flume and then into HDFS

Asked by cclgggtu on 2021-05-30 in Hadoop

I am using Flume 1.5.0.1 with Hadoop 2.4.1 and trying to send a string into Flume so that it gets saved to HDFS. The Flume configuration file is as follows:

agentMe.channels = memory-channel
agentMe.sources = my-source AvroSource
agentMe.sinks = log-sink hdfs-sink

agentMe.sources.AvroSource.channels = memory-channel
agentMe.sources.AvroSource.type = avro
# I tried the client IP as well
agentMe.sources.AvroSource.bind = 0.0.0.0
agentMe.sources.AvroSource.port = 41414

agentMe.channels.memory-channel.type = memory
agentMe.channels.memory-channel.capacity = 1000
agentMe.channels.memory-channel.transactionCapacity = 100

agentMe.sources.my-source.type = netcat
# If I use any other IP, such as the client the string will come from, I get an "unable to bind" exception
agentMe.sources.my-source.bind = 127.0.0.1
agentMe.sources.my-source.port = 9876
agentMe.sources.my-source.channels = memory-channel

# Define a sink that outputs to HDFS.
agentMe.sinks.hdfs-sink.channel = memory-channel
agentMe.sinks.hdfs-sink.type = hdfs
agentMe.sinks.hdfs-sink.hdfs.path = hdfs://localhost:54310/user/netlog/flume.txt
agentMe.sinks.hdfs-sink.hdfs.fileType = DataStream
agentMe.sinks.hdfs-sink.hdfs.batchSize = 2
agentMe.sinks.hdfs-sink.hdfs.rollCount = 0
agentMe.sinks.hdfs-sink.hdfs.inUsePrefix = tcptest-
agentMe.sinks.hdfs-sink.hdfs.inUseSuffix = .txt
agentMe.sinks.hdfs-sink.hdfs.rollSize = 0
agentMe.sinks.hdfs-sink.hdfs.rollInterval = 3
agentMe.sinks.hdfs-sink.hdfs.writeFormat = Text
agentMe.sinks.hdfs-sink.hdfs.path = /user/name/%y-%m-%d/%H%M/%S

I have already asked the same question here. From the client I send the data like this:

client.sendDataToFlume("hello world")

What I see is that the NettyAvroRpcClient cannot connect to the server running Flume. But I am only sending a simple string, so what am I missing?
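For context, the question does not show how sendDataToFlume is implemented. Based on the Flume developer guide's RPC client API, a minimal sketch would look roughly like this (the FlumeClient class and the sendDataToFlume wrapper are assumptions, not code from the question):

import java.nio.charset.StandardCharsets;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class FlumeClient {
    // getDefaultInstance returns a NettyAvroRpcClient under the hood
    private final RpcClient client;

    public FlumeClient(String hostname, int port) {
        this.client = RpcClientFactory.getDefaultInstance(hostname, port);
    }

    // Hypothetical wrapper matching the call in the question
    public void sendDataToFlume(String data) throws EventDeliveryException {
        // Wrap the string in a Flume event and append it to the Avro source
        Event event = EventBuilder.withBody(data, StandardCharsets.UTF_8);
        client.append(event);
    }

    public void close() {
        client.close();
    }
}

Note that the client must point at the Avro source's host and port (41414 here), not at the netcat source on 9876.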
Answer 1 (e4yzc0pl)

It seems to me that if you want to connect with a NettyAvroRpcClient, you actually need to set up an Avro RPC source. An example configuration follows:


# Define an Avro source called AvroSource on agentMe and tell it
# to bind to 0.0.0.0:41414. Connect it to channel MemChannel.

agentMe.sources.AvroSource.channels = MemChannel
agentMe.sources.AvroSource.type = avro
agentMe.sources.AvroSource.bind = 0.0.0.0
agentMe.sources.AvroSource.port = 41414

This creates an Avro RPC source listening on port 41414.
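Once the Avro source is defined, the agent is started with the standard flume-ng launcher; the config file name agentMe.conf below is just a placeholder:

flume-ng agent --conf conf --conf-file agentMe.conf --name agentMe -Dflume.root.logger=INFO,console

The --name argument must match the agent name used in the properties file (agentMe here), otherwise the source, channel, and sink definitions are silently ignored.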

Answer 2 (mepcadol)

The configuration has to be right, otherwise things will not work, so here is a configuration that gets data into Flume and then into HDFS. Note the timestamp interceptor on the source: it is required because the HDFS sink path uses time escapes such as %y-%m-%d.

a1.sources = r1
a1.sinks =  k2
a1.channels = c1

a1.sources.r1.channels = c1
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 41414
a1.sources.r1.interceptors = a
a1.sources.r1.interceptors.a.type = org.apache.flume.interceptor.TimestampInterceptor$Builder

a1.sinks.k2.type = hdfs
a1.sinks.k2.channel = c1
a1.sinks.k2.hdfs.fileType = DataStream
a1.sinks.k2.hdfs.batchSize = 10
a1.sinks.k2.hdfs.rollCount = 10
a1.sinks.k2.hdfs.rollSize = 10
a1.sinks.k2.hdfs.rollInterval = 10
a1.sinks.k2.hdfs.writeFormat = Text
a1.sinks.k2.hdfs.path = /user/flume/%y-%m-%d/%H%M/

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

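To sanity-check the whole pipeline, the avro-client tool that ships with Flume can replay a file of test lines into the Avro source (the file path is only an example):

flume-ng avro-client --host localhost --port 41414 --filename /tmp/test.log

Each line of the file becomes one Flume event, so after a few seconds the rolled files should appear under /user/flume/ in HDFS.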

Hope this helps :)
