Custom RecordReader for single-line and multi-line logs

b5buobof asked on 2021-06-04 in Hadoop

I'm trying to create an MR job that changes the format of log files loaded into HDFS via Flume. I want to convert the logs into a format where the fields are delimited by ":::", e.g.:

date/timestamp:::log-level:::rest-of-log
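For instance, a single-line event such as the first INFO line in the sample logs at the bottom of this post would come out roughly as (illustrative, not verified output):

21 July 2013 17:35:51,334:::INFO:::[conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:173)  - Starting Sink k1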

The problem I'm running into is that some logs are single-line and some are multi-line, and I need the multi-line logs to stay intact inside the rest-of-log field. I wrote a custom InputFormat and RecordReader to try to do this (essentially NLineRecordReader, modified to append lines until it reaches a date stamp rather than appending a fixed number of lines). The MR job I use to format the logs looks fine, but the RecordReader doesn't seem to get through multi-line records correctly, and I can't figure out why.
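The whole approach hinges on one boundary test: a line starts a new record exactly when it begins with a date/time stamp. Here is a minimal standalone sketch of that test (the BoundaryCheck class exists only for illustration; the regex is the same one the RecordReader below uses):

import java.util.regex.Pattern;

public class BoundaryCheck {

    // date/time-stamp prefix, e.g. "21 July 2013 17:35:51,334 "
    private static final Pattern DATE_TIME =
            Pattern.compile("^\\d{2}\\s\\S+\\s\\d{4}\\s\\d{2}:\\d{2}:\\d{2},\\d{3}\\s");

    public static void main(String[] args) {
        String[] lines = {
            "21 July 2013 17:35:51,334 INFO  [conf-file-poller-0] - Starting Sink k1",
            "java.lang.IllegalStateException: Directory does not exist: /root/FlumeTest",
            "        at org.apache.flume.source.SpoolDirectorySource.start(SpoolDirectorySource.java:75)"
        };
        for (String l : lines) {
            // true -> the line opens a new record; false -> it continues the previous record
            System.out.println(DATE_TIME.matcher(l).find() + "\t" + l);
        }
    }
}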
Here is my RecordReader class:

import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.util.LineReader;

public class LogRecordReader extends RecordReader<LongWritable, Text> {

private LineReader in;
private LongWritable key;
private Text value = new Text();
private long start = 0;
private long end = 0;
private long pos = 0;
private int maxLineLength;
private Text line = new Text(); // working line
private Text lineHasDate = new Text(); // if line encounters a date stamp, hold it here

public void close() throws IOException {
    if (in != null) {
        in.close();
    }
}

public LongWritable getCurrentKey() throws IOException,InterruptedException {
    return key;
}

public Text getCurrentValue() throws IOException, InterruptedException {
    return value;
}

public float getProgress() throws IOException, InterruptedException {
    if (start == end) {
        return 0.0f;
    }
    else {
        return Math.min(1.0f, (pos - start) / (float)(end - start));
    }
}

public void initialize(InputSplit genericSplit, TaskAttemptContext context) throws IOException, InterruptedException {

    FileSplit split = (FileSplit) genericSplit;
    final Path file = split.getPath();
    Configuration conf = context.getConfiguration();
    this.maxLineLength = conf.getInt("mapred.linerecordreader.maxlength",Integer.MAX_VALUE);
    FileSystem fs = file.getFileSystem(conf);
    start = split.getStart();
    end = start + split.getLength();
    boolean skipFirstLine = false;
    FSDataInputStream filein = fs.open(split.getPath());

    // if we're not starting at the beginning, we should skip the first line
    if (start != 0){
        skipFirstLine = true;
        --start;
        filein.seek(start);
    }

    in = new LineReader(filein, conf);

    // if we should skip the first line
    if(skipFirstLine){
        start += in.readLine(new Text(), 0, (int)Math.min((long)Integer.MAX_VALUE, end - start));
    }

    this.pos = start;
}

/**
 * create a complete log message from individual lines using date/time stamp as a breakpoint
 */
public boolean nextKeyValue() throws IOException, InterruptedException {

    // if key has not yet been initialized
    if (key == null) { 
        key = new LongWritable();
    }

    key.set(pos);

    // if value has not yet been initialized
    if (value == null) { 
        value = new Text();
    }

    value.clear();

    final Text endline = new Text("\n");
    int newSize = 0;

    // if a line with a date was encountered on the previous call
    if (lineHasDate.getLength() > 0) { 
        while (pos < end) {
            value.append(lineHasDate.getBytes(), 0, lineHasDate.getLength()); // append the line
            value.append(endline.getBytes(), 0, endline.getLength()); // append a line break
            pos += newSize;
            if (newSize == 0) break;
        }
        lineHasDate.clear(); // clean up
    }

    // to check buffer 'line' for date/time stamp
    Pattern regexDateTime = Pattern.compile("^\\d{2}\\s\\S+\\s\\d{4}\\s\\d{2}:\\d{2}:\\d{2},\\d{3}\\s");
    Matcher matcherDateTime = regexDateTime.matcher(line.toString());

    // read in a new line to the buffer 'line'
    newSize = in.readLine(line, maxLineLength, Math.max((int)Math.min(Integer.MAX_VALUE, end-pos), maxLineLength));

    // if the line in the buffer contains a date/time stamp, append it
    if (matcherDateTime.find()) {
        while (pos < end) {
            newSize = in.readLine(line, maxLineLength, Math.max((int)Math.min(Integer.MAX_VALUE, end-pos), maxLineLength));
            value.append(line.getBytes(), 0, line.getLength()); // append the line
            value.append(endline.getBytes(), 0, endline.getLength()); // append a line break
            if (newSize == 0) break;
            pos += newSize;
            if (newSize < maxLineLength) break;
        }
        // read in the next line to the buffer 'line'
        newSize = in.readLine(line, maxLineLength, Math.max((int)Math.min(Integer.MAX_VALUE, end-pos), maxLineLength));
    }

    // while lines in the buffer do not contain date/time stamps, append them
    while (!matcherDateTime.find()) {
        newSize = in.readLine(line, maxLineLength, Math.max((int)Math.min(Integer.MAX_VALUE, end-pos), maxLineLength));
        value.append(line.getBytes(), 0, line.getLength()); // append the line
        value.append(endline.getBytes(), 0, endline.getLength()); // append a line break
        if (newSize == 0) break;
        pos += newSize;
        if (newSize < maxLineLength) break;
        // read in the next line to the buffer 'line', and continue looping
        newSize = in.readLine(line, maxLineLength, Math.max((int)Math.min(Integer.MAX_VALUE, end-pos), maxLineLength));
    }

    // if the line in the buffer contains a date/time stamp (which it should since the loop broke) save it for next call
    if (matcherDateTime.find()) lineHasDate = line;

    // if there is no new line
    if (newSize == 0) {
        // TODO: if lineHasDate is the last line in the file, it must be appended (?)
        key = null;
        value = null;
        return false;
    } 

    return true;
}
}
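The LogInputFormat wired in by the driver below isn't shown; as a thin wrapper around this reader it would look roughly like the following sketch, written against the new mapreduce API that the RecordReader above uses:

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class LogInputFormat extends FileInputFormat<LongWritable, Text> {

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(InputSplit split, TaskAttemptContext context) {
        // every split is read with the custom multi-line-aware reader
        return new LogRecordReader();
    }
}

Note, though, that the driver below uses the old mapred API (JobConf.setInputFormat expects an org.apache.hadoop.mapred.InputFormat), so the actual class would need to be written against that API, or the driver ported to the new one.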

Here is the MR job that formats the logs:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class FlumeLogFormat extends Configured implements Tool {

/**
 * Map class
 */
public static class Map extends MapReduceBase 
    implements Mapper<LongWritable, Text, Text, Text> {

    private Text formattedLog = new Text();
    private Text keyDateTime = new Text(); // no value

    public void map(LongWritable key, Text value, 
        OutputCollector<Text, Text> output, Reporter reporter) throws IOException {

        String log = value.toString();
        StringBuffer buffer = new StringBuffer();

        Pattern regex = Pattern.compile("^(\\d{2}\\s\\S+\\s\\d{4}\\s\\d{2}:\\d{2}:\\d{2},\\d{3})\\s([A-Z]{4,5})\\s([\\s\\S]+)");
        Matcher matcher = regex.matcher(log);
        if (matcher.find()) {
            buffer.append(matcher.group(1)+":::"+matcher.group(2)+":::"+matcher.group(3)); // insert ":::" between fields to serve as a delimiter

            formattedLog.set(buffer.toString());
            keyDateTime.set(matcher.group(1));
            output.collect(keyDateTime, formattedLog);
        }
    }
}

/**
 * run method
 * @param args
 * @return int
 * @throws Exception
 */
public int run(String[] args) throws Exception {

    JobConf conf = new JobConf(getConf(), FlumeLogFormat.class); // class is LogFormat
    conf.setJobName("FlumeLogFormat");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);

    conf.setMapperClass(Map.class);

    List<String> other_args = new ArrayList<String>();
    for(int i=0; i < args.length; ++i) {
      try {
        if ("-m".equals(args[i])) {
          conf.setNumMapTasks(Integer.parseInt(args[++i]));
        } else if ("-r".equals(args[i])) {
          conf.setNumReduceTasks(Integer.parseInt(args[++i]));
        } else {
          other_args.add(args[i]);
        }
      } catch (NumberFormatException exception) {
        System.out.println("Give int value instead of " + args[i]);
        //return printUsage();
      } catch (ArrayIndexOutOfBoundsException exception) {
        System.out.println("Parameter missing " +  args[i-1]);
        //return printUsage();
      }
    }

    if (other_args.size() != 2) {

      //return printUsage();
    }

    FileInputFormat.setInputPaths(conf, new Path(other_args.get(0)));
    FileOutputFormat.setOutputPath(conf, new Path(other_args.get(1)));

    conf.setInputFormat(LogInputFormat.class);
    conf.setOutputFormat(SequenceFileOutputFormat.class);

    JobClient.runJob(conf);
    return 0;
}

/**
 * Main method
 * @param args
 * @throws Exception
 */
public static void main(String[] args) throws Exception {

    int res = ToolRunner.run(new Configuration(), new FlumeLogFormat(), args);
    System.exit(res);
}
}
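I launch it via ToolRunner along these lines (jar name and paths here are placeholders; -m and -r set the map/reduce task counts as parsed above):

hadoop jar flumelogformat.jar FlumeLogFormat -m 4 -r 1 /flume/logs /flume/formatted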

Here are the sample logs (note that the second event, the ERROR plus its stack trace, spans many lines and should end up as a single record):

21 July 2013 17:35:51,334 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:173)  - Starting Sink k1

25 May 2013 06:33:36,795 ERROR [lifecycleSupervisor-1-7] (org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run:253)  - Unable to start EventDrivenSourceRunner: { source:org.apache.flume.source.SpoolDirectorySource{name:r1,state:IDLE} } - Exception follows.
java.lang.IllegalStateException: Directory does not exist: /root/FlumeTest
        at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
        at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.<init>(ReliableSpoolingFileEventReader.java:129)
        at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.<init>(ReliableSpoolingFileEventReader.java:72)
        at org.apache.flume.client.avro.ReliableSpoolingFileEventReader$Builder.build(ReliableSpoolingFileEventReader.java:556)
        at org.apache.flume.source.SpoolDirectorySource.start(SpoolDirectorySource.java:75)
        at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
        at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:679)

01 June 2012 12:35:22,222 INFO noiweoqierwnvoirenvoiernv iorenvoiernve irnvoirenv

q9rjltbz (answer 1):

FWIW, I see you're dealing with multi-line stack traces.
At the moment I handle these with a custom Flume build that uses log4j2, and I use the {separator(|)} syntax on the PatternLayout's %ex converter so that the newlines in the exception output are replaced with |.
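For illustration, the pattern ends up looking something like this (the exact conversion pattern here is only a guess matching the timestamp format in your logs; %ex{separator(|)} is what folds the stack trace onto one line):

%d{dd MMMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) - %m%ex{separator(|)}%n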
