Usage of the org.apache.spark.streaming.Durations.milliseconds() method, with code examples

x33g5p2x · reposted 2022-01-18 · category: Other

This article collects Java code examples of the org.apache.spark.streaming.Durations.milliseconds() method and shows how it is used in practice. The examples are drawn from selected open-source projects hosted on GitHub, Stack Overflow, Maven, and similar platforms, so they should serve as useful references. Details of Durations.milliseconds() are as follows:
Package: org.apache.spark.streaming
Class: Durations
Method: milliseconds

About Durations.milliseconds

Durations.milliseconds(long) is a static factory method that returns a Duration representing the given number of milliseconds (it is equivalent to new Duration(long)). Its most common use is to specify the batch interval when constructing a StreamingContext or JavaStreamingContext.
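The typical pattern can be sketched as follows. This is a minimal sketch assuming Spark Streaming is on the classpath; the class name, application name, and the 500 ms interval are illustrative choices, not taken from the projects below:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class MillisecondsDemo {
  public static void main(String[] args) {
    // Durations.milliseconds(long) is equivalent to new Duration(long).
    Duration batchInterval = Durations.milliseconds(500);

    // The Duration is typically passed as the batch interval of a streaming context.
    SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("MillisecondsDemo");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, batchInterval);

    // ... define DStreams here, then jssc.start() and jssc.awaitTermination() ...
    jssc.stop();
  }
}
```

The batch interval determines how often Spark Streaming groups incoming records into a micro-batch, so it is usually the first tuning knob for latency versus throughput.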

Code examples

Example source: cloudera-labs/envelope

private static void initializeStreamingJob() {
 // Read the batch interval (in milliseconds) from configuration and turn it into a Duration.
 int batchMilliseconds = INSTANCE.config.getInt(BATCH_MILLISECONDS_PROPERTY);
 final Duration batchDuration = Durations.milliseconds(batchMilliseconds);
 // Build the streaming context on top of the existing SparkContext.
 JavaStreamingContext jsc = new JavaStreamingContext(new JavaSparkContext(getSparkSession().sparkContext()),
   batchDuration);
 INSTANCE.jsc = jsc;
}

Example source: org.apache.spark/spark-streaming-kafka_2.10 (the same setUp() appears verbatim across the spark-streaming-kafka, spark-streaming-kafka-0-8, and spark-streaming-kafka-0-10 artifacts)

@Before
public void setUp() {
 kafkaTestUtils = new KafkaTestUtils();
 kafkaTestUtils.setup();
 SparkConf sparkConf = new SparkConf()
  .setMaster("local[4]").setAppName(this.getClass().getSimpleName());
 ssc = new JavaStreamingContext(sparkConf, Durations.milliseconds(200));
}

Example source: org.apache.spark/spark-streaming_2.10 (the same test appears verbatim in spark-streaming_2.11)

@Test
public void testMilliseconds() {
 // Durations.milliseconds(n) yields a Duration equal to new Duration(n).
 Assert.assertEquals(new Duration(100), Durations.milliseconds(100));
}
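Beyond milliseconds(), the Durations class also provides seconds() and minutes() factories, and the resulting Duration objects compare equal when they represent the same length of time. A minimal sketch, assuming Spark Streaming is on the classpath (the class name is an illustrative choice):

```java
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.Durations;

public class DurationsEquivalence {
  public static void main(String[] args) {
    // All three factories produce Duration values measured in milliseconds internally.
    Duration fromMillis = Durations.milliseconds(60_000);
    Duration fromSeconds = Durations.seconds(60);
    Duration fromMinutes = Durations.minutes(1);

    // Durations of the same length compare equal regardless of the factory used.
    System.out.println(fromMillis.equals(fromSeconds) && fromSeconds.equals(fromMinutes));
  }
}
```

Using seconds() or minutes() where they fit tends to read better than hand-converted millisecond counts.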

Example source: co.cask.cdap/cdap-data-streams

@Override
public JavaStreamingContext call() throws Exception {
 // The pipeline's batch interval (in milliseconds) comes from the pipeline spec.
 JavaStreamingContext jssc = new JavaStreamingContext(
   new JavaSparkContext(), Durations.milliseconds(pipelineSpec.getBatchIntervalMillis()));
 SparkStreamingPipelineRunner runner = new SparkStreamingPipelineRunner(
   sec, jssc, pipelineSpec, pipelineSpec.isCheckpointsDisabled());
 PipelinePluginContext pluginContext = new PipelinePluginContext(
   sec.getPluginContext(), sec.getMetrics(),
   pipelineSpec.isStageLoggingEnabled(), pipelineSpec.isProcessTimingEnabled());
 // TODO: figure out how to get partitions to use for aggregators and joiners.
 // Seems like they should be set at configure time instead of runtime, but that requires an API change.
 try {
  runner.runPipeline(pipelinePhase, StreamingSource.PLUGIN_TYPE,
    sec, new HashMap<String, Integer>(), pluginContext,
    new HashMap<String, StageStatisticsCollector>());
 } catch (Exception e) {
  throw new RuntimeException(e);
 }
 if (checkpointDir != null) {
  jssc.checkpoint(checkpointDir);
 }
 return jssc;
}
}; // closes the enclosing anonymous class, whose declaration is not shown in this excerpt
