Usage of org.apache.spark.api.java.JavaRDD.classTag() with Code Examples

x33g5p2x, reposted on 2022-01-21

This article collects Java code examples for the org.apache.spark.api.java.JavaRDD.classTag() method, showing how JavaRDD.classTag() is used in practice. The examples were extracted from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the method:

Package: org.apache.spark.api.java
Class: JavaRDD
Method: classTag

About JavaRDD.classTag

JavaRDD.classTag() returns the scala.reflect.ClassTag&lt;T&gt; for the RDD's element type T. Because Java generics are erased at runtime, Spark's Java API carries this ClassTag alongside every JavaRDD so the underlying Scala RDD machinery can create typed arrays and perform runtime type operations. When wrapping or rebuilding a JavaRDD, the source's ClassTag is normally passed along unchanged, as the examples below show.
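Since the original article leaves the description empty, here is a short plain-Java sketch (deliberately not using Spark, so it runs standalone) of the problem classTag() solves: generics are erased at runtime, so a type token must travel with the container. The TypedBox class and its names below are hypothetical illustrations, not Spark API; Spark uses a Scala ClassTag where this sketch uses Class&lt;T&gt;.

```java
import java.lang.reflect.Array;

// Hypothetical stand-in for JavaRDD<T>: because generics are erased,
// the container carries an explicit runtime type token (Spark stores a
// scala.reflect.ClassTag; plain Java can use Class<T>) so it can build
// typed arrays later.
class TypedBox<T> {
    private final T value;
    private final Class<T> tag;   // plays the role of classTag()

    TypedBox(T value, Class<T> tag) {
        this.value = value;
        this.tag = tag;
    }

    Class<T> classTag() {         // analogous to JavaRDD.classTag()
        return tag;
    }

    @SuppressWarnings("unchecked")
    T[] toTypedArray() {          // impossible without the token
        T[] arr = (T[]) Array.newInstance(tag, 1);
        arr[0] = value;
        return arr;
    }
}

public class ClassTagSketch {
    public static void main(String[] args) {
        TypedBox<String> box = new TypedBox<>("doc", String.class);
        String[] arr = box.toTypedArray();
        System.out.println(box.classTag().getSimpleName() + ":" + arr[0]);
    }
}
```

Without the stored token, `toTypedArray` could only produce an `Object[]`; this is exactly why Spark's Java API threads a ClassTag through every wrapper, as the connector examples below do.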

Code Examples

Example source: com.couchbase.client/spark-connector

@Override
public ClassTag<T> classTag() {
  return source.classTag();
}

Example source: com.couchbase.client/spark-connector

// Wrap the source JavaRDD, reusing both its underlying RDD and its ClassTag.
protected CouchbaseDocumentRDD(JavaRDD<T> source) {
  super(source.rdd(), source.classTag());
  this.source = source;
}

Example source: com.couchbase.client/spark-connector

// Factory method: the new CouchbaseRDD inherits the source's ClassTag.
public static <T> CouchbaseRDD<T> couchbaseRDD(JavaRDD<T> source) {
  return new CouchbaseRDD<T>(source, source.classTag());
}
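The Couchbase snippets above all follow one pattern: a wrapper never invents its own type token, it forwards the one its source already carries. A minimal plain-Java sketch of that delegation pattern (class names hypothetical, no Spark dependency):

```java
// Hypothetical sketch of the wrapper-delegation pattern seen in
// CouchbaseDocumentRDD: the wrapper forwards the source's type token
// so wrapper and source always agree on the element type.
class Source<T> {
    private final Class<T> tag;
    Source(Class<T> tag) { this.tag = tag; }
    Class<T> classTag() { return tag; }
}

class Wrapper<T> {
    private final Source<T> source;
    Wrapper(Source<T> source) { this.source = source; }
    // Delegation: no separate token to fall out of sync.
    Class<T> classTag() { return source.classTag(); }
}

public class DelegationSketch {
    public static void main(String[] args) {
        Source<Integer> src = new Source<>(Integer.class);
        Wrapper<Integer> w = new Wrapper<>(src);
        System.out.println(w.classTag() == src.classTag()); // same token
    }
}
```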

Example source: com.datastax.spark/spark-cassandra-connector-java_2.10

/**
 * Repartitions the data (via a shuffle) based upon the replication of the given {@code keyspaceName}
 * and {@code tableName}. Calling this method before using joinWithCassandraTable will ensure that
 * requests will be coordinator local. {@code partitionsPerHost} Controls the number of Spark
 * Partitions that will be created in this repartitioning event. The calling RDD must have rows that
 * can be converted into the partition key of the given Cassandra Table.
 */
public JavaRDD<T> repartitionByCassandraReplica(
    String keyspaceName,
    String tableName,
    int partitionsPerHost,
    ColumnSelector partitionkeyMapper,
    RowWriterFactory<T> rowWriterFactory
) {
  CassandraConnector connector = defaultConnector();
  ClassTag<T> ctT = rdd.toJavaRDD().classTag();
  CassandraPartitionedRDD<T> newRDD = rddFunctions.repartitionByCassandraReplica(
      keyspaceName,
      tableName,
      partitionsPerHost,
      partitionkeyMapper,
      connector,
      ctT,
      rowWriterFactory);
  return new JavaRDD<>(newRDD, ctT);
}

The identical example also appears in com.datastax.spark/spark-cassandra-connector_2.10, spark-cassandra-connector, spark-cassandra-connector-java, and spark-cassandra-connector-unshaded.

Example source: com.datastax.spark/spark-cassandra-connector

// (excerpt) a method that needs two type tokens: one for the input
// element type T, one for the result type R derived from a target class
    RowWriterFactory<T> rowWriterFactory
) {
  ClassTag<T> classTagT = rdd.toJavaRDD().classTag();
  ClassTag<R> classTagR = JavaApiHelper.getClassTag(rowReaderFactory.targetClass());

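The excerpt above obtains one type token from the calling RDD (classTagT) and derives a second from a plain Class object (classTagR). A hedged plain-Java sketch of that two-token shape, without Spark or the connector libraries; TaggedList and mapTo below are hypothetical names:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical container that, like JavaRDD, carries its element type token.
class TaggedList<T> {
    final List<T> data = new ArrayList<>();
    private final Class<T> tag;
    TaggedList(Class<T> tag) { this.tag = tag; }
    Class<T> classTag() { return tag; }
}

public class TwoTagsSketch {
    // Mirrors the classTagT / classTagR pair in the excerpt: the input
    // token comes from the source container, the output token from the
    // caller-supplied target class.
    static <T, R> TaggedList<R> mapTo(TaggedList<T> in, Class<R> targetClass) {
        Class<T> classTagT = in.classTag();  // like rdd.toJavaRDD().classTag()
        Class<R> classTagR = targetClass;    // like JavaApiHelper.getClassTag(...)
        TaggedList<R> out = new TaggedList<>(classTagR);
        // (conversion of elements from T to R would go here)
        return out;
    }

    public static void main(String[] args) {
        TaggedList<String> in = new TaggedList<>(String.class);
        TaggedList<Integer> out = mapTo(in, Integer.class);
        System.out.println(out.classTag().getSimpleName());
    }
}
```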
The identical excerpt also appears in com.datastax.spark/spark-cassandra-connector-java_2.10, spark-cassandra-connector-java, spark-cassandra-connector_2.10, and spark-cassandra-connector-unshaded.