scala.Option.isEmpty() method: usage and code examples

This article collects code examples of the scala.Option.isEmpty() method as called from Java, showing how Option.isEmpty() is used in practice. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the Option.isEmpty() method:
Package path: scala.Option
Class name: Option
Method name: isEmpty

About Option.isEmpty

isEmpty() returns true when the Option contains no value (i.e., it is None) and false when it is a Some; it is the negation of isDefined().
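
As a minimal sketch (not drawn from the projects below), calling isEmpty() from Java through the Scala standard library looks roughly like this; the class name OptionIsEmptyDemo is made up for illustration:

import scala.Option;

public class OptionIsEmptyDemo {
  public static void main(String[] args) {
    Option<String> some = Option.apply("value"); // Some("value")
    Option<String> none = Option.empty();        // None

    System.out.println(some.isEmpty()); // false: the Option holds a value
    System.out.println(none.isEmpty()); // true: the Option is None
  }
}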

Code examples

Example source: apache/flink

/**
 * Build a list of URIs for providing custom artifacts to Mesos tasks.
 * @param uris a comma delimited optional string listing artifact URIs
 */
public static List<String> buildUris(Option<String> uris) {
  if (uris.isEmpty()) {
    return Collections.emptyList();
  } else {
    List<String> urisList = new ArrayList<>();
    for (String uri : uris.get().split(",")) {
      urisList.add(uri.trim());
    }
    return urisList;
  }
}
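
A hypothetical call of the buildUris helper above (reusing the names from the Flink snippet; the URIs are made-up values) would behave as follows:

// Hypothetical usage of buildUris
List<String> uris = buildUris(Option.apply("hdfs:///a.jar, hdfs:///b.jar"));
// -> ["hdfs:///a.jar", "hdfs:///b.jar"]  (each entry is trimmed)
List<String> none = buildUris(Option.empty());
// -> an empty list, because the Option holds no value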

Example source: apache/flink

public static List<Protos.Parameter> buildDockerParameters(Option<String> dockerParameters) {
  if (dockerParameters.isEmpty()) {
    return Collections.emptyList();
  } else {
    String[] dockerParameterSpecifications = dockerParameters.get().split(",");
    List<Protos.Parameter> parameters = new ArrayList<>(dockerParameterSpecifications.length);
    for (String dockerParameterSpecification : dockerParameterSpecifications) {
      if (!dockerParameterSpecification.trim().isEmpty()) {
        // split with the limit of 2 in case the value includes '='
        String[] match = dockerParameterSpecification.split("=", 2);
        if (match.length != 2) {
          throw new IllegalArgumentException("Docker parameter specification is invalid, given: "
            + dockerParameterSpecification);
        }
        Protos.Parameter.Builder parameter = Protos.Parameter.newBuilder();
        parameter.setKey(match[0]);
        parameter.setValue(match[1]);
        parameters.add(parameter.build());
      }
    }
    return parameters;
  }
}

Example source: apache/flink

Option messageOption = (Option) result;
LOG.debug("Received message option {}", messageOption);
if (messageOption.isEmpty()) {
  break;
} else {

Example source: apache/flink

if (containerVolumes.isEmpty()) {
  return Collections.emptyList();
} else {

Example source: OryxProject/oryx

protected final JavaStreamingContext buildStreamingContext() {
 log.info("Starting SparkContext with interval {} seconds", generationIntervalSec);
 SparkConf sparkConf = new SparkConf();
 // Only for tests, really
 if (sparkConf.getOption("spark.master").isEmpty()) {
  log.info("Overriding master to {} for tests", streamingMaster);
  sparkConf.setMaster(streamingMaster);
 }
 // Only for tests, really
 if (sparkConf.getOption("spark.app.name").isEmpty()) {
  String appName = "Oryx" + getLayerName();
  if (id != null) {
   appName = appName + "-" + id;
  }
  log.info("Overriding app name to {} for tests", appName);
  sparkConf.setAppName(appName);
 }
 extraSparkConfig.forEach((key, value) -> sparkConf.setIfMissing(key, value.toString()));
 // Turn this down to prevent long blocking at shutdown
 sparkConf.setIfMissing(
   "spark.streaming.gracefulStopTimeout",
   Long.toString(TimeUnit.MILLISECONDS.convert(generationIntervalSec, TimeUnit.SECONDS)));
 sparkConf.setIfMissing("spark.cleaner.ttl", Integer.toString(20 * generationIntervalSec));
 long generationIntervalMS =
   TimeUnit.MILLISECONDS.convert(generationIntervalSec, TimeUnit.SECONDS);
 JavaSparkContext jsc = JavaSparkContext.fromSparkContext(SparkContext.getOrCreate(sparkConf));
 return new JavaStreamingContext(jsc, new Duration(generationIntervalMS));
}
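
The two getOption checks above work because SparkConf.getOption returns a scala.Option<String> that is empty when the key was never set. A stripped-down sketch of that pattern, with a hypothetical local-mode fallback value, is:

SparkConf conf = new SparkConf();
// getOption returns scala.Option<String>; isEmpty() is true when the key is unset
if (conf.getOption("spark.master").isEmpty()) {
  conf.setMaster("local[2]"); // hypothetical fallback, e.g. for local tests
}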

Example source: apache/flink

/**
 * Construct a Mesos URI.
 */
public static Protos.CommandInfo.URI uri(MesosArtifactResolver resolver, ContainerSpecification.Artifact artifact) {
  checkNotNull(resolver);
  checkNotNull(artifact);
  Option<URL> url = resolver.resolve(artifact.dest);
  if (url.isEmpty()) {
    throw new IllegalArgumentException("Unresolvable artifact: " + artifact.dest);
  }
  return Protos.CommandInfo.URI.newBuilder()
    .setValue(url.get().toExternalForm())
    .setOutputFile(artifact.dest.toString())
    .setExtract(artifact.extract)
    .setCache(artifact.cachable)
    .setExecutable(artifact.executable)
    .build();
}

Example source: linkedin/cruise-control

JavaConversions.seqAsJavaList(AdminUtils.getBrokerMetadatas(zkUtils, RackAwareMode.Enforced$.MODULE$, Option.empty()))) {
String rack = bm.rack().isEmpty() ? String.valueOf(bm.id()) : bm.rack().get();
brokersByRack.putIfAbsent(rack, new ArrayList<>());
brokersByRack.get(rack).add(bm.id());

Example source: Graylog2/graylog2-server

if (messageLog.isEmpty()) {
  kafkaLog = logManager.createLog(topicAndPartition, logManager.defaultConfig());
} else {

Example source: org.apache.spark/spark-core_2.11

@Test
public void testInProcessLauncher() throws Exception {
 // Because this test runs SparkLauncher in process and in client mode, it pollutes the system
 // properties, and that can cause test failures down the test pipeline. So restore the original
 // system properties after this test runs.
 Map<Object, Object> properties = new HashMap<>(System.getProperties());
 try {
  inProcessLauncherTestImpl();
 } finally {
  Properties p = new Properties();
  for (Map.Entry<Object, Object> e : properties.entrySet()) {
   p.put(e.getKey(), e.getValue());
  }
  System.setProperties(p);
  // Here DAGScheduler is stopped, while SparkContext.clearActiveContext may not be called yet.
  // Wait for a reasonable amount of time to avoid creating two active SparkContext in JVM.
  // See SPARK-23019 and SparkContext.stop() for details.
  eventually(Duration.ofSeconds(5), Duration.ofMillis(10), () -> {
   assertTrue("SparkContext is still alive.", SparkContext$.MODULE$.getActive().isEmpty());
  });
 }
}

Example source: linkedin/kafka-monitor

private static List<PartitionInfo> getPartitionInfo(KafkaZkClient zkClient, String topic) {
 scala.collection.immutable.Set<String> topicList = new scala.collection.immutable.Set.Set1<>(topic);
 scala.collection.Map<Object, scala.collection.Seq<Object>> partitionAssignments =
   zkClient.getPartitionAssignmentForTopics(topicList).apply(topic);
 List<PartitionInfo> partitionInfoList = new ArrayList<>();
 scala.collection.Iterator<scala.Tuple2<Object, scala.collection.Seq<Object>>> it = partitionAssignments.iterator();
 while (it.hasNext()) {
  scala.Tuple2<Object, scala.collection.Seq<Object>> scalaTuple = it.next();
  Integer partition = (Integer) scalaTuple._1();
  scala.Option<Object> leaderOption = zkClient.getLeaderForPartition(new TopicPartition(topic, partition));
  Node leader = leaderOption.isEmpty() ?  null : new Node((Integer) leaderOption.get(), "", -1);
  Node[] replicas = new Node[scalaTuple._2().size()];
  for (int i = 0; i < replicas.length; i++) {
   Integer brokerId = (Integer) scalaTuple._2().apply(i);
   replicas[i] = new Node(brokerId, "", -1);
  }
  partitionInfoList.add(new PartitionInfo(topic, partition, leader, replicas, null));
 }
 return partitionInfoList;
}

Example source: apache/flink

if (frameworkID.isEmpty()) {
  LOG.info("Registering as new framework.");
} else {

Example source: apache/flink

@Override
  public JobManagerMessages.TriggerSavepointSuccess process(JobManagerMessages.TriggerSavepoint message) {
    Assert.assertEquals(expectedJobID, message.jobId());
    if (expectedTargetDirectory == null) {
      Assert.assertTrue(message.savepointDirectory().isEmpty());
    } else {
      Assert.assertEquals(expectedTargetDirectory, message.savepointDirectory().get());
    }
    return new JobManagerMessages.TriggerSavepointSuccess(message.jobId(), 0, savepointPathToReturn, 0);
  }
}

Example source: dibbhatt/kafka-spark-consumer

@Override
public void onOutputOperationCompleted(StreamingListenerOutputOperationCompleted outPutOpsComplete) {
 Option<String> reason = outPutOpsComplete.outputOperationInfo().failureReason();
 if (!reason.isEmpty()) {
   String failure = reason.get();
   if (failure != null) {
     LOG.error("Output Operation failed due to {}", failure);
   }
 }
}

Example source: apache/samza

public String getStreamSerdeName(String systemName, String streamName) {
  StreamConfig streamConfig =  new StreamConfig(this);
  scala.Option<String> option = streamConfig.getStreamMsgSerde(new SystemStream(systemName, streamName));
  return option.isEmpty() ? null : option.get();
 }
}

Example source: apache/samza

public static Pair<String, String> getJobNameAndId(Config config) {
 JobConfig jobConfig = new JobConfig(config);
 Option jobNameOption = jobConfig.getName();
 if (jobNameOption.isEmpty()) {
  throw new ConfigException("Missing job name");
 }
 String jobName = (String) jobNameOption.get();
 return new ImmutablePair<>(jobName, jobConfig.getJobId());
}

Example source: twitter/GraphJet

@Override
public EdgeIterator getOutEdges(long node) {
 Option<Node> opt = graph.getNodeById((int) node);
 if (opt.isEmpty()) {
  return new EmptyEdgeIterator();
 }
 // Note that outboundNodes returns a CSeq, whereas randomOutboundNodeSet returns a Seq, so we
 // need different wrapper classes.
 return new CSeqEdgeIteratorWrapper(opt.get().outboundNodes());
}

Example source: twitter/GraphJet

@Override
public EdgeIterator getRandomOutEdges(long node, int numSamples, Random random) {
 Option<Node> opt = graph.getNodeById((int) node);
 if (opt.isEmpty()) {
  return new EmptyEdgeIterator();
 }
 // Note that randomOutboundNodeSet returns a Seq, whereas outboundNodes returns a CSeq, so we
 // need different wrapper classes.
 return new SeqEdgeIteratorWrapper(opt.get().randomOutboundNodeSet(
   numSamples, scala.util.Random.javaRandomToRandom(random)));
}

Example source: org.apache.flink/flink-runtime_2.11

private Tuple2<String, String> extractAddressHostname(ActorRef actorRef) {
  final String actorAddress = AkkaUtils.getAkkaURL(actorSystem, actorRef);
  final String hostname;
  Option<String> host = actorRef.path().address().host();
  if (host.isEmpty()) {
    hostname = "localhost";
  } else {
    hostname = host.get();
  }
  return Tuple2.of(actorAddress, hostname);
}
