Usage of org.apache.hadoop.mapred.Mapper.map() with code examples

Reposted by x33g5p2x, 2022-01-25

This article collects Java code examples for the org.apache.hadoop.mapred.Mapper.map() method, showing how it is used in practice. The examples were extracted from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as a useful reference. Details of Mapper.map() are as follows:
Package: org.apache.hadoop.mapred
Class: Mapper
Method: map

About Mapper.map

Maps a single input key/value pair into an intermediate key/value pair.

Output pairs need not be of the same types as input pairs. A given input pair may map to zero or many output pairs. Output pairs are collected with calls to OutputCollector#collect(Object, Object).

Applications can use the provided Reporter to report progress, or simply to indicate that they are alive. In scenarios where the application takes a significant amount of time to process individual key/value pairs, this is crucial, since the framework might otherwise assume that the task has timed out and kill it. The other way to avoid this is to set mapred.task.timeout to a high enough value (or even to zero, for no time-outs).
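The contract above can be sketched with a minimal, self-contained example. Note that OutputCollector here is a stand-in for org.apache.hadoop.mapred.OutputCollector (so the sketch compiles without Hadoop on the classpath), the Reporter parameter is omitted, and WordCountMapper is a hypothetical mapper, not a class from the examples below.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for org.apache.hadoop.mapred.OutputCollector; the real interface's
// collect() also declares IOException.
interface OutputCollector<K, V> {
    void collect(K key, V value);
}

// Word-count style: one input pair (offset, line) maps to zero or many
// output pairs (word, 1), illustrating the zero-or-many rule above.
class WordCountMapper {
    public void map(Long offset, String line, OutputCollector<String, Integer> output) {
        for (String word : line.trim().split("\\s+")) {
            if (!word.isEmpty()) {
                output.collect(word, 1);
            }
        }
    }
}
```

An empty input line produces no output pairs at all, which is a perfectly valid outcome for a map() call.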

Code examples

Example source: apache/flink

@Override
public void flatMap(final Tuple2<KEYIN, VALUEIN> value, final Collector<Tuple2<KEYOUT, VALUEOUT>> out)
    throws Exception {
  outputCollector.setFlinkCollector(out);
  mapper.map(value.f0, value.f1, outputCollector, reporter);
}
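The Flink example above relies on an adapter: a Hadoop-style collector that forwards each collected pair into the downstream collector set for the current record. A hypothetical stand-in (ForwardingCollector is not the real Flink class) might look like this:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical stand-in for the output-collector wrapper used above:
// collect() forwards each pair to whatever downstream consumer was set for
// the current record, just as setFlinkCollector() points the wrapper at
// Flink's Collector before mapper.map() is invoked.
class ForwardingCollector<K, V> {
    private Consumer<Map.Entry<K, V>> downstream;

    // Called once per record, analogous to outputCollector.setFlinkCollector(out).
    void setDownstream(Consumer<Map.Entry<K, V>> downstream) {
        this.downstream = downstream;
    }

    // The Hadoop-facing side: the wrapped Mapper calls this.
    void collect(K key, V value) {
        downstream.accept(new SimpleEntry<>(key, value));
    }
}
```

This design lets an unmodified old-API Mapper emit into any engine's sink, since the mapper only ever sees the OutputCollector interface.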

Example source: apache/ignite

// Fragment from Ignite's map-task loop; the cancellation guard around the
// throw is reconstructed here for context.
if (cancelled)
  throw new HadoopTaskCancelledException("Map task cancelled.");
mapper.map(key, val, collector, reporter);

Example sources: org.apache.flink/flink-hadoop-compatibility_2.11, org.apache.flink/flink-hadoop-compatibility, and com.alibaba.blink/flink-hadoop-compatibility each contain a verbatim copy of the apache/flink flatMap example above.

Example source: io.hops/hadoop-mapreduce-client-core

@SuppressWarnings("unchecked")
public void map(K1 key, V1 value, OutputCollector<K2, V2> outputCollector,
  Reporter reporter) throws IOException {
 if (mapper == null) {
  // Find the Mapper from the TaggedInputSplit.
  TaggedInputSplit inputSplit = (TaggedInputSplit) reporter.getInputSplit();
  mapper = (Mapper<K1, V1, K2, V2>) ReflectionUtils.newInstance(inputSplit
    .getMapperClass(), conf);
 }
 mapper.map(key, value, outputCollector, reporter);
}
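The delegating mapper above instantiates the concrete Mapper lazily and reflectively, because the class is only known at run time (from TaggedInputSplit.getMapperClass()). The core of that pattern can be sketched with a hypothetical helper (LazyDelegate is not a Hadoop class; the class is passed directly here rather than read from an input split):

```java
// Lazy, reflective instantiation, mirroring the "if (mapper == null)" block
// above: the delegate is created on first use and then reused for every
// subsequent record.
class LazyDelegate<T> {
    private final Class<? extends T> cls;
    private T instance;

    LazyDelegate(Class<? extends T> cls) {
        this.cls = cls;
    }

    T get() {
        if (instance == null) { // create on first call only
            try {
                instance = cls.getDeclaredConstructor().newInstance();
            } catch (ReflectiveOperationException e) {
                throw new RuntimeException(e);
            }
        }
        return instance;
    }
}
```

In Hadoop the same idea lets one job mix different Mapper classes per input path, with each map task resolving its own class from the split it was handed.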

Example sources: ch.cern.hadoop/hadoop-mapreduce-client-core, org.jvnet.hudson.hadoop/hadoop-core, com.github.jiayuhan-it/hadoop-mapreduce-client-core, com.facebook.hadoop/hadoop-core, and org.apache.hadoop/hadoop-mapred each contain a verbatim copy of the delegating map example above.

Example source: ch.cern.hadoop/hadoop-mapreduce-client-core

/**
 * Chains the <code>map(...)</code> methods of the Mappers in the chain.
 */
@SuppressWarnings({"unchecked"})
public void map(Object key, Object value, OutputCollector output,
        Reporter reporter) throws IOException {
 Mapper mapper = chain.getFirstMap();
 if (mapper != null) {
  mapper.map(key, value, chain.getMapperCollector(0, output, reporter),
        reporter);
 }
}
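The chain example above only invokes the first mapper; the collector returned by chain.getMapperCollector(0, ...) is what feeds each subsequent mapper. That wiring can be sketched with a hypothetical Chain class (not the Hadoop implementation; a "stage" here is simplified to (value, emitter), whereas real mappers take key, value, OutputCollector, and Reporter):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;
import java.util.function.Consumer;

// Sketch of the ChainMapper idea: each stage emits into a collector that
// invokes the next stage, so a record flows through every mapper inside a
// single map task without touching disk between stages.
class Chain<T> {
    private final List<BiConsumer<T, Consumer<T>>> stages = new ArrayList<>();

    Chain<T> add(BiConsumer<T, Consumer<T>> stage) {
        stages.add(stage);
        return this;
    }

    void run(T input, Consumer<T> finalOutput) {
        Consumer<T> next = finalOutput;
        // Wire the pipeline back to front: stage i emits into stage i + 1,
        // mirroring chain.getMapperCollector(i, output, reporter) above.
        for (int i = stages.size() - 1; i >= 0; i--) {
            BiConsumer<T, Consumer<T>> stage = stages.get(i);
            Consumer<T> downstream = next;
            next = v -> stage.accept(v, downstream);
        }
        next.accept(input);
    }
}
```

Because each stage can emit zero or many values, the chain composes the zero-or-many semantics of map() rather than a simple one-to-one function pipeline.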

Example sources: com.facebook.hadoop/hadoop-core, org.apache.hadoop/hadoop-mapred, io.prestosql.hadoop/hadoop-apache, org.jvnet.hudson.hadoop/hadoop-core, io.hops/hadoop-mapreduce-client-core, and com.github.jiayuhan-it/hadoop-mapreduce-client-core each contain a verbatim copy of the chained map example above.

代码示例来源:origin: com.github.jiayuhan-it/hadoop-mapreduce-client-core

/**
 * Chains the <code>map(...)</code> methods of the Mappers in the chain.
 */
@SuppressWarnings({"unchecked"})
public void map(Object key, Object value, OutputCollector output,
        Reporter reporter) throws IOException {
 Mapper mapper = chain.getFirstMap();
 if (mapper != null) {
  mapper.map(key, value, chain.getMapperCollector(0, output, reporter),
        reporter);
 }
}

Example source: org.apache.crunch/crunch-core

@Override
public void process(Pair<K1, V1> input, Emitter<Pair<K2, V2>> emitter) {
 outputCollector.set(emitter);
 try {
  instance.map(input.first(), input.second(), outputCollector, this);
 } catch (IOException e) {
  throw new CrunchRuntimeException(e);
 }
}

Example source: org.apache.hadoop/hadoop-mrunit

@Override
public List<Pair<K2, V2>> run() throws IOException {
 MockOutputCollector<K2, V2> outputCollector =
  new MockOutputCollector<K2, V2>();
 MockReporter reporter = new MockReporter(MockReporter.ReporterType.Mapper, getCounters());
 myMapper.map(inputKey, inputVal, outputCollector, reporter);
 return outputCollector.getOutputs();
}
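The MRUnit runner above works by handing the mapper a mock collector that records every emitted pair, then returning the recorded list for assertions. A minimal stand-in for that pattern (RecordingCollector is hypothetical, not the real MockOutputCollector from hadoop-mrunit) looks like this:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Records every pair passed to collect(), so a test can run a mapper once
// and then assert on exactly what it emitted, in order.
class RecordingCollector<K, V> {
    private final List<Map.Entry<K, V>> outputs = new ArrayList<>();

    void collect(K key, V value) {
        outputs.add(new SimpleEntry<>(key, value));
    }

    List<Map.Entry<K, V>> getOutputs() {
        return outputs;
    }
}
```

This is the simplest way to unit-test map() logic: no cluster, no input format, just one call and a list of outputs to inspect.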
