Usage of org.apache.tephra.Transaction.getWritePointer() with code examples

This article collects some Java code examples of the org.apache.tephra.Transaction.getWritePointer() method and shows how it is used in practice. The examples were extracted from selected open-source projects hosted on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of Transaction.getWritePointer() are as follows:
Package path: org.apache.tephra.Transaction
Class name: Transaction
Method name: getWritePointer

About Transaction.getWritePointer

Returns the write pointer to be used in persisting any changes. After a checkpoint is performed, this will differ from #getTransactionId(). This method should always be used when setting the timestamp for writes in order to ensure that the correct value is used.
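
As a minimal illustration of this guidance, the sketch below (written for this article, not taken from any of the cited projects) uses getWritePointer() rather than getTransactionId() as the HBase cell timestamp when persisting a change; the table, row, and column names are placeholders.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.tephra.Transaction;

public class WritePointerExample {
  void writeWithTransaction(Table table, Transaction tx) throws IOException {
    Put put = new Put(Bytes.toBytes("row1"));
    // Use the write pointer, not the transaction id, as the cell timestamp;
    // the two differ once the transaction has been checkpointed.
    put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), tx.getWritePointer(), Bytes.toBytes("value"));
    table.put(put);
  }
}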

Code examples

Code example source: apache/phoenix (the same override also appears in org.apache.phoenix/phoenix-core and com.aliyun.phoenix/ali-phoenix-core)

@Override
public long getWritePointer() {
  Transaction tx = getCurrentTransaction();
  return tx == null ? HConstants.LATEST_TIMESTAMP : tx.getWritePointer();
}

Code example source: org.apache.tephra/tephra-core

/**
 * Returns the maximum timestamp to use for time-range operations, based on the given transaction.
 * @param tx The current transaction
 * @return The maximum timestamp (exclusive) to use for time-range operations
 */
public static long getMaxVisibleTimestamp(Transaction tx) {
 // NOTE: +1 here because we want read up to writepointer inclusive, but timerange's end is exclusive
 // however, we also need to guard against overflow in the case write pointer is set to MAX_VALUE
 return tx.getWritePointer() < Long.MAX_VALUE ?
   tx.getWritePointer() + 1 : tx.getWritePointer();
}
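
A hypothetical use of the helper above (not part of the tephra-core snippet, and assuming this method sits alongside getMaxVisibleTimestamp): because Scan.setTimeRange treats its upper bound as exclusive, passing getMaxVisibleTimestamp(tx) lets the scan see data up to and including the transaction's write pointer.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.tephra.Transaction;

public static Scan buildVisibleScan(Transaction tx) throws IOException {
  Scan scan = new Scan();
  // The end timestamp is exclusive, so the +1 applied inside getMaxVisibleTimestamp
  // makes the write pointer itself visible to this scan.
  scan.setTimeRange(0, getMaxVisibleTimestamp(tx));
  return scan;
}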

Code example source: co.cask.cdap/cdap-spark-core2 (the same method also appears in cdapio/cdap and caskdata/cdap)

/**
 * Block until the transaction used by this context is completed (either commit or invalidate).
 *
 * @throws InterruptedException if current thread is interrupted while waiting
 */
private void awaitCompletion() throws InterruptedException {
 LOG.debug("Awaiting completion for {}", transaction.getWritePointer());
 if (completion != null) {
  completion.await();
 }
 discardDatasets();
}

Code example source: cdapio/cdap (the same override also appears in co.cask.cdap/cdap-spark-core2 and caskdata/cdap)

@Override
public void set(TransactionalDatasetContext value) {
 String txKey = Long.toString(value.getTransaction().getWritePointer());
 if (SparkRuntimeEnv.setLocalProperty(ACTIVE_TRANSACTION_KEY, txKey)) {
  transactionInfos.put(txKey, value);
 }
 super.set(value);
}

Code example source: cdapio/cdap (the same method also appears in caskdata/cdap and co.cask.cdap/cdap-spark-core2)

@Override
public void onTransactionCompleted(boolean jobSucceeded, @Nullable TransactionFailureException failureCause) {
 // Shouldn't happen
 Preconditions.checkState(commitOnJobEnded(), "Not expecting transaction to be completed");
 transactionInfos.remove(Long.toString(transaction.getWritePointer()));
 if (jobSucceeded && failureCause == null) {
  postCommit();
 } else {
  rollbackWithoutFailure();
 }
 completion.countDown();
}

Code example source: org.apache.tephra/tephra-core

@Override
 public Void execute(TransactionServiceThriftClient client) throws Exception {
  // Commit over Thrift using the transaction id and its current write pointer.
  client.commit(tx.getTransactionId(), tx.getWritePointer());
  return null;
 }
}); // fragment: closes the enclosing anonymous callback; the surrounding call is not shown

Code example source: co.cask.cdap/cdap-data-fabric

@Override
 protected void doRollback() {
  if (commitTransaction != null) {
   InMemoryQueue queue = getQueue();
   for (int seqId = 0; seqId < lastEnqueueCount; seqId++) {
    queue.undoEnqueue(commitTransaction.getWritePointer(), seqId);
   }
  }
 }
}

Code example source: cdapio/cdap (the same method also appears in co.cask.cdap/cdap-data-fabric)

@WriteOnly
private void undoPersisted(NavigableMap<byte[], NavigableMap<byte[], Update>> persisted) {
 // NOTE: we could just use merge and pass the changes with all values = null, but separate method is more efficient
 InMemoryTableService.undo(getTableName(), persisted, tx.getWritePointer());
}

Code example source: co.cask.cdap/cdap-data-fabric

public static void write(DataOutput dataOutput, Transaction tx) throws IOException {
 dataOutput.writeLong(tx.getReadPointer());
 dataOutput.writeLong(tx.getWritePointer());
 dataOutput.writeLong(tx.getFirstShortInProgress());
 write(dataOutput, tx.getInProgress());
 write(dataOutput, tx.getInvalids());
}
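
A hypothetical read-side counterpart (not part of the cdap-data-fabric snippet) would consume the fields in the same order; the in-progress and invalid arrays would be read back by a matching helper whose wire format is not shown above.

import java.io.DataInput;
import java.io.IOException;

public static void read(DataInput dataInput) throws IOException {
  long readPointer = dataInput.readLong();          // written first by write()
  long writePointer = dataInput.readLong();         // tx.getWritePointer()
  long firstShortInProgress = dataInput.readLong();
  // the in-progress and invalid write pointers follow, in the order they were written
}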

Code example source: co.cask.cdap/cdap-data-fabric

private byte[] encodeStateColumn(ConsumerEntryState state) {
 // State column content is encoded as (writePointer) + (instanceId) + (state)
 byte[] stateContent = new byte[Longs.BYTES + Ints.BYTES + 1];
 Bytes.putLong(stateContent, 0, transaction.getWritePointer());
 Bytes.putInt(stateContent, Longs.BYTES, getConfig().getInstanceId());
 Bytes.putByte(stateContent, Longs.BYTES + Ints.BYTES, state.getState());
 return stateContent;
}
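
For clarity, a matching decoder (not shown in the source project) could look like the sketch below, assuming the same Bytes, Longs, and Ints utilities as above and that Bytes exposes the usual toLong/toInt readers.

private long decodeWritePointer(byte[] stateContent) {
  // Mirror of the layout built in encodeStateColumn: long write pointer, int instance id, one state byte.
  long writePointer = Bytes.toLong(stateContent, 0);
  int instanceId = Bytes.toInt(stateContent, Longs.BYTES);
  byte state = stateContent[Longs.BYTES + Ints.BYTES];
  return writePointer;
}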

Code example source: org.apache.tephra/tephra-core

public static TTransaction wrap(Transaction tx) {
 return new TTransaction(tx.getTransactionId(), tx.getReadPointer(),
             Longs.asList(tx.getInvalids()), Longs.asList(tx.getInProgress()),
             tx.getFirstShortInProgress(), getTTransactionType(tx.getType()),
             tx.getWritePointer(), Longs.asList(tx.getCheckpointWritePointers()),
             getTVisibilityLevel(tx.getVisibilityLevel()));
}
