Usage of the scala.collection.JavaConversions.asScalaIterator() method, with code examples


This article collects Java code examples of the scala.collection.JavaConversions.asScalaIterator() method, showing how it is called in practice. The examples are drawn from selected open-source projects found on GitHub, Stack Overflow, Maven, and similar platforms, and should make a useful reference. Details of JavaConversions.asScalaIterator():
Package: scala.collection
Class: JavaConversions
Method: asScalaIterator

About JavaConversions.asScalaIterator

asScalaIterator wraps a java.util.Iterator<A> as a scala.collection.Iterator<A>. The result is a view, not a copy: advancing the Scala iterator advances the underlying Java one. Note that scala.collection.JavaConversions has been deprecated since Scala 2.12; scala.collection.JavaConverters (or scala.jdk.CollectionConverters from 2.13 on) is the recommended replacement, which is why some of the examples below carry @SuppressWarnings("deprecation").
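As a minimal, self-contained sketch (the class and variable names here are illustrative, not taken from any of the projects below), calling the method from Java looks like this:

import java.util.Arrays;
import java.util.Iterator;
import scala.collection.JavaConversions;

public class AsScalaIteratorDemo {
 public static void main(String[] args) {
  // A plain java.util.Iterator over three strings.
  Iterator<String> javaIt = Arrays.asList("a", "b", "c").iterator();
  // Wrap it as a scala.collection.Iterator; this is a view, not a copy,
  // so advancing the Scala iterator advances the underlying Java one.
  scala.collection.Iterator<String> scalaIt = JavaConversions.asScalaIterator(javaIt);
  while (scalaIt.hasNext()) {
   System.out.println(scalaIt.next());
  }
 }
}

On Scala 2.12 the non-deprecated equivalent from Java is JavaConverters.asScalaIteratorConverter(javaIt).asScala(); from Scala 2.13 on it is scala.jdk.javaapi.CollectionConverters.asScala(javaIt).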

Code Examples

Code example source: origin: OryxProject/oryx (also published as com.cloudera.oryx/oryx-lambda)

@Override
public void consume(Iterator<KeyMessage<String, U>> updateIterator, Configuration hadoopConf) {
 // Bridge the Java update iterator to the Scala-side manager's consume().
 scalaManager.consume(JavaConversions.asScalaIterator(updateIterator), hadoopConf);
}

Code example source: origin: org.apache.beam/beam-runners-spark

@Override
public scala.collection.Iterator<scala.Tuple2<Source<T>, CheckpointMarkT>> compute(
  Partition split, TaskContext context) {
 @SuppressWarnings("unchecked")
 CheckpointableSourcePartition<T, CheckpointMarkT> partition =
   (CheckpointableSourcePartition<T, CheckpointMarkT>) split;
 scala.Tuple2<Source<T>, CheckpointMarkT> tuple2 =
   new scala.Tuple2<>(partition.getSource(), partition.checkpointMark);
 // A one-element Java iterator, converted to satisfy the Scala RDD contract.
 return JavaConversions.asScalaIterator(Collections.singleton(tuple2).iterator());
}

Code example source: origin: uk.gov.gchq.gaffer/spark-accumulo-library

@Override
public scala.collection.Iterator<Map.Entry<Key, Value>> compute(final Partition split, final TaskContext context) {
  // Rehydrate the Hadoop Configuration that was shipped to this executor as bytes.
  final ByteArrayInputStream bais = new ByteArrayInputStream(serialisedConfiguration);
  final Configuration configuration = new Configuration();
  try {
    configuration.readFields(new DataInputStream(bais));
    bais.close();
  } catch (final IOException e) {
    throw new RuntimeException("IOException deserialising Configuration from byte array", e);
  }
  return new InterruptibleIterator<>(context,
      JavaConversions.asScalaIterator(new RFileReaderIterator(split, context, configuration, auths)));
}

Code example source: origin: com.github.pinterest/kafkastats (also pinterest/doctorkafka)

public void startAdminHttpService() {
 try {
  Properties properties = new Properties();
  properties.load(this.getClass().getResource("build.properties").openStream());
  LOG.info("build.properties build_revision: {}",
    properties.getProperty("build_revision", "unknown"));
 } catch (Throwable t) {
  LOG.warn("Failed to load properties from build.properties", t);
 }
 Duration[] defaultLatchIntervals = {Duration.apply(1, TimeUnit.MINUTES)};
 Iterator<Duration> durationIterator = Arrays.asList(defaultLatchIntervals).iterator();
 @SuppressWarnings("deprecation")
 AdminServiceFactory adminServiceFactory = new AdminServiceFactory(
   this.port,
   20,
   List$.MODULE$.empty(),
   Option.empty(),
   List$.MODULE$.empty(),
   Map$.MODULE$.empty(),
   // Convert the Java iterator, then materialise it as an immutable scala.List.
   JavaConversions.asScalaIterator(durationIterator).toList());
 RuntimeEnvironment runtimeEnvironment = new RuntimeEnvironment(this);
 AdminHttpService service = adminServiceFactory.apply(runtimeEnvironment);
 for (Map.Entry<String, CustomHttpHandler> entry : this.customHttpHandlerMap.entrySet()) {
  service.httpServer().createContext(entry.getKey(), entry.getValue());
 }
}

Code example source: origin: org.wso2.carbon.analytics/org.wso2.carbon.analytics.spark.core

@SuppressWarnings({"rawtypes", "unchecked"})
@Override
public scala.collection.Iterator<Row> compute(Partition split, TaskContext context) {
  AnalyticsPartition partition = (AnalyticsPartition) split;
  try {
    Iterator<Record> recordsItr = ServiceHolder.getAnalyticsDataService().readRecords(
        partition.getRecordStoreName(), partition.getRecordGroup());
    // Adapt the Records to Spark Rows, then convert the Java iterator for the RDD.
    return new InterruptibleIterator(context, asScalaIterator(getRowRecordIteratorAdaptor(recordsItr, this.tenantId, this.incEnable, this.incID)));
  } catch (AnalyticsException e) {
    throw new RuntimeException(e.getMessage(), e);
  }
}

Code example source: origin: org.apache.beam/beam-runners-spark

@Override
public scala.collection.Iterator<WindowedValue<T>> compute(
  final Partition split, final TaskContext context) {
 final MetricsContainer metricsContainer = metricsAccum.localValue().getContainer(stepName);
 @SuppressWarnings("unchecked")
 final BoundedSource.BoundedReader<T> reader = createReader((SourcePartition<T>) split);
 final Iterator<WindowedValue<T>> readerIterator =
   new ReaderToIteratorAdapter<>(metricsContainer, reader);
 return new InterruptibleIterator<>(context, JavaConversions.asScalaIterator(readerIterator));
}

Code example source: origin: com.stratio.deep/deep-core (also Stratio/deep-spark)

@Override
public Iterator<T> compute(Partition split, TaskContext context) {
  initExtractorClient();
  extractorClient.initIterator(split, config.getValue());
  context.addTaskCompletionListener(new AbstractFunction1<TaskContext, BoxedUnit>() {
    @Override
    public BoxedUnit apply(TaskContext v1) {
      extractorClient.close();
      return null;
    }
  });
    // Expose the extractor client as a plain java.util.Iterator so it can be
    // converted with asScalaIterator below.
    java.util.Iterator<T> iterator = new java.util.Iterator<T>() {
    @Override
    public boolean hasNext() {
      return extractorClient.hasNext();
    }
    @Override
    public T next() {
      return extractorClient.next();
    }
    @Override
    public void remove() {
      throw new DeepIOException(
          "Method not implemented (and won't be implemented anytime soon!!!)");
    }
  };
  return new InterruptibleIterator<>(context, asScalaIterator(iterator));
}

Code example source: origin: org.apache.beam/beam-runners-spark

return scala.collection.JavaConversions.asScalaIterator(
  new UpdateStateByKeyOutputIterator(input, reduceFn, droppedDueToLateness));
