Usage and code examples of the com.google.protobuf.ByteString.asReadOnlyByteBuffer() method


This article collects Java code examples for the com.google.protobuf.ByteString.asReadOnlyByteBuffer() method and shows how asReadOnlyByteBuffer() is used in practice. The examples were extracted from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the method:
Package path: com.google.protobuf.ByteString
Class: ByteString
Method: asReadOnlyByteBuffer

About ByteString.asReadOnlyByteBuffer

Constructs a read-only java.nio.ByteBuffer whose content is equal to the contents of this byte string. The result uses the same backing array as the byte string, if possible.
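
Before the collected examples, here is a minimal standalone sketch (written for this article, not taken from any of the projects cited below) that demonstrates the documented behavior: the returned buffer is read-only, its position is independent of the ByteString, and writes through it are rejected.

import com.google.protobuf.ByteString;
import java.nio.ByteBuffer;
import java.nio.ReadOnlyBufferException;

public class AsReadOnlyByteBufferDemo {
 public static void main(String[] args) {
  ByteString bytes = ByteString.copyFromUtf8("hello");

  ByteBuffer buf = bytes.asReadOnlyByteBuffer();
  System.out.println(buf.isReadOnly()); // true
  System.out.println(buf.remaining());  // 5, the full content

  // Reading advances only this buffer; the ByteString is immutable and
  // every call to asReadOnlyByteBuffer() returns a fresh, rewound view.
  buf.get();
  System.out.println(buf.remaining());                          // 4
  System.out.println(bytes.asReadOnlyByteBuffer().remaining()); // 5

  // Writes are rejected, so the view can be handed out without copying.
  try {
   buf.put((byte) 1);
  } catch (ReadOnlyBufferException expected) {
   System.out.println("buffer is read-only");
  }
 }
}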

Code examples

Example source: googleapis/google-cloud-java

/** Returns the content of this {@code ByteArray} as a read-only {@link ByteBuffer}. */
public final ByteBuffer asReadOnlyByteBuffer() {
 return byteString.asReadOnlyByteBuffer();
}

Example source: googleapis/google-cloud-java

/** Returns a read-only {@link ByteBuffer} for this blob content. */
public ByteBuffer asReadOnlyByteBuffer() {
 return byteString.asReadOnlyByteBuffer();
}

Example source: apache/hbase

byte[] getBytesFromResponse(ByteString response) {
  ByteBuffer bb = response.asReadOnlyByteBuffer();
  bb.rewind();
  byte[] bytes;
  if (bb.hasArray()) {
    bytes = bb.array();
  } else {
    // A read-only buffer reports hasArray() == false, so in practice this
    // branch runs and toByteArray() copies the content into a new array.
    bytes = response.toByteArray();
  }
  return bytes;
}

Example source: apache/hbase

@Override
protected void aggregate(RegionInfo region, AggregateResponse resp) throws IOException {
 count += resp.getFirstPart(0).asReadOnlyByteBuffer().getLong();
}
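
The first part of the response here is an 8-byte value, so getLong() on the read-only view decodes it directly (ByteBuffer reads big-endian by default). The round trip below is a sketch written for this article to show the matching producer side; it is not HBase's actual server code.

import com.google.protobuf.ByteString;
import java.nio.ByteBuffer;

public class LongRoundTrip {
 public static void main(String[] args) {
  // Producer side: pack a long into 8 big-endian bytes and copy them
  // into a ByteString, the form it would travel in inside a response.
  ByteBuffer out = ByteBuffer.allocate(Long.BYTES);
  out.putLong(42L);
  out.flip();
  ByteString encoded = ByteString.copyFrom(out);

  // Consumer side: the same pattern as the aggregate() override above.
  long decoded = encoded.asReadOnlyByteBuffer().getLong();
  System.out.println(decoded); // 42
 }
}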

Example source: apache/hbase

public static TableName toTableName(HBaseProtos.TableName tableNamePB) {
 return TableName.valueOf(tableNamePB.getNamespace().asReadOnlyByteBuffer(),
   tableNamePB.getQualifier().asReadOnlyByteBuffer());
}

Example source: apache/hbase

@Override
protected void aggregate(RegionInfo region, AggregateResponse resp) throws IOException {
 if (resp.getFirstPartCount() > 0) {
  sum = ci.add(sum, getPromotedValueFromProto(ci, resp, 0));
  count += resp.getSecondPart().asReadOnlyByteBuffer().getLong();
 }
}

Example source: com.google.protobuf/protobuf-java

@Override
public boolean equals(Object other) {
 if (other == this) {
  return true;
 }
 if (!(other instanceof ByteString)) {
  return false;
 }
 ByteString otherString = ((ByteString) other);
 if (size() != otherString.size()) {
  return false;
 }
 if (size() == 0) {
  return true;
 }
 if (other instanceof NioByteString) {
  return buffer.equals(((NioByteString) other).buffer);
 }
 if (other instanceof RopeByteString) {
  return other.equals(this);
 }
 return buffer.equals(otherString.asReadOnlyByteBuffer());
}
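
The point of this equals() implementation is that comparison is by content, not by internal representation: heap-backed, NIO-buffer-backed, and rope-backed strings all compare equal when their bytes match, and asReadOnlyByteBuffer() gives the fallback branch a uniform view to compare against. A small illustration, written for this article:

import com.google.protobuf.ByteString;

public class ByteStringEquality {
 public static void main(String[] args) {
  // The same five bytes, built two different ways.
  ByteString flat = ByteString.copyFromUtf8("hello");
  ByteString pieced = ByteString.copyFromUtf8("he")
    .concat(ByteString.copyFromUtf8("llo")); // may be a rope internally

  // equals() compares content regardless of representation.
  System.out.println(flat.equals(pieced)); // true
 }
}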

Example source: apache/hive

for (InputSplit split : entry.getValue()) {
 MRSplitProto serializedSplit = MRInputHelpers.createSplitProto(split);
 ByteBuffer bs = serializedSplit.toByteString().asReadOnlyByteBuffer();
 bucketToSerializedSplitMap.put(entry.getKey(), bs);
}

Example source: apache/hbase

@Override
protected void aggregate(RegionInfo region, AggregateResponse resp) throws IOException {
 if (resp.getFirstPartCount() > 0) {
  sum = ci.add(sum, getPromotedValueFromProto(ci, resp, 0));
  sumSq = ci.add(sumSq, getPromotedValueFromProto(ci, resp, 1));
  count += resp.getSecondPart().asReadOnlyByteBuffer().getLong();
 }
}

Example source: MovingBlocks/Terasology

@Override
public ByteBuffer getAsByteBuffer() {
  if (data.hasBytes()) {
    return data.getBytes().asReadOnlyByteBuffer();
  } else if (!isNull()) {
    throw new DeserializationException("Data is not bytes: " + data.toString());
  } else {
    return ByteBuffer.wrap(new byte[0]);
  }
}

Example source: googleapis/google-cloud-java

void ensureTxn() {
 if (transactionId == null) {
  span.addAnnotation("Creating Transaction");
  try {
   transactionId = session.beginTransaction();
   span.addAnnotation(
     "Transaction Creation Done",
     ImmutableMap.of(
       "Id", AttributeValue.stringAttributeValue(transactionId.toStringUtf8())));
   txnLogger.log(
     Level.FINER,
     "Started transaction {0}",
     txnLogger.isLoggable(Level.FINER) ? transactionId.asReadOnlyByteBuffer() : null);
  } catch (SpannerException e) {
   span.addAnnotation("Transaction Creation Failed", TraceUtil.getExceptionAnnotations(e));
   throw e;
  }
 } else {
  span.addAnnotation(
    "Transaction Initialized",
    ImmutableMap.of(
      "Id", AttributeValue.stringAttributeValue(transactionId.toStringUtf8())));
  txnLogger.log(
    Level.FINER,
    "Using prepared transaction {0}",
    txnLogger.isLoggable(Level.FINER) ? transactionId.asReadOnlyByteBuffer() : null);
 }
}

Example source: apache/hive

private Token<LlapTokenIdentifier> extractToken(ByteString tokenBytes) throws IOException {
 Token<LlapTokenIdentifier> token = new Token<>();
 DataInputByteBuffer in = new DataInputByteBuffer();
 in.reset(tokenBytes.asReadOnlyByteBuffer());
 token.readFields(in);
 return token;
}
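
The sketch below shows the inverse direction, producing a ByteString in the format that extractToken() reads back. TokenCodec and tokenToByteString are names made up for this article; Token.write, DataOutputBuffer, and ByteString.copyFrom are the real Hadoop and protobuf APIs.

import com.google.protobuf.ByteString;
import java.io.IOException;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.security.token.Token;

final class TokenCodec {
 // Hypothetical helper: serializes a token with its Writable encoding,
 // the same format extractToken() above reads via DataInputByteBuffer.
 static ByteString tokenToByteString(Token<?> token) throws IOException {
  DataOutputBuffer out = new DataOutputBuffer();
  token.write(out);
  // getData() returns the internal array, which may be longer than the
  // written content, so bound the copy by getLength().
  return ByteString.copyFrom(out.getData(), 0, out.getLength());
 }
}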

Example source: Alluxio/alluxio

@Override
public DataBuffer readChunk() throws IOException {
 Preconditions.checkState(!mClient.isShutdown(),
   "Data reader is closed while reading data chunks.");
 ByteString buf;
 ReadResponse response = mStream.receive(mDataTimeoutMs);
 if (response == null) {
  return null;
 }
 Preconditions.checkState(response.hasChunk(), "response should always contain chunk");
 buf = response.getChunk().getData();
 mPosToRead += buf.size();
 Preconditions.checkState(mPosToRead - mReadRequest.getOffset() <= mReadRequest.getLength());
 return new NioDataBuffer(buf.asReadOnlyByteBuffer(), buf.size());
}

Example source: org.apache.hadoop/hadoop-common

@Override
public ByteBuffer bytes() {
 LocalFileSystemPathHandleProto.Builder b =
   LocalFileSystemPathHandleProto.newBuilder();
 b.setPath(path);
 if (mtime != null) {
  b.setMtime(mtime);
 }
 return b.build().toByteString().asReadOnlyByteBuffer();
}
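
Since bytes() just serializes the proto, a consumer can rebuild the message from that buffer. The fragment below is a hypothetical illustration of the reverse step, not Hadoop source: parseHandle is an invented name, the import path for the generated class is assumed from hadoop-common's FSProtos, while ByteString.copyFrom(ByteBuffer) and the generated parseFrom(ByteString) are real protobuf APIs.

import com.google.protobuf.ByteString;
import com.google.protobuf.InvalidProtocolBufferException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSProtos.LocalFileSystemPathHandleProto;

// Hypothetical reverse of bytes() above: copy the buffer back into a
// ByteString and parse the proto out of it.
static LocalFileSystemPathHandleProto parseHandle(ByteBuffer bytes)
  throws InvalidProtocolBufferException {
 return LocalFileSystemPathHandleProto.parseFrom(ByteString.copyFrom(bytes));
}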

Example source: apache/hive

private List<Event> createEventList(boolean sendSerializedEvents, InputSplitInfoMem inputSplitInfo) {
 List<Event> events = Lists.newArrayListWithCapacity(inputSplitInfo.getNumTasks() + 1);
 InputConfigureVertexTasksEvent configureVertexEvent =
   InputConfigureVertexTasksEvent.create(inputSplitInfo.getNumTasks(),
   VertexLocationHint.create(inputSplitInfo.getTaskLocationHints()),
   InputSpecUpdate.getDefaultSinglePhysicalInputSpecUpdate());
 events.add(configureVertexEvent);
 if (sendSerializedEvents) {
  MRSplitsProto splitsProto = inputSplitInfo.getSplitsProto();
  int count = 0;
  for (MRSplitProto mrSplit : splitsProto.getSplitsList()) {
   InputDataInformationEvent diEvent = InputDataInformationEvent.createWithSerializedPayload(
     count++, mrSplit.toByteString().asReadOnlyByteBuffer());
   events.add(diEvent);
  }
 } else {
  int count = 0;
  for (org.apache.hadoop.mapred.InputSplit split : inputSplitInfo.getOldFormatSplits()) {
   InputDataInformationEvent diEvent = InputDataInformationEvent.createWithObjectPayload(
     count++, split);
   events.add(diEvent);
  }
 }
 return events;
}

Example source: Alluxio/alluxio

@Override
 protected void writeBuf(BlockWriteRequestContext context,
   StreamObserver<WriteResponse> observer, ByteString buf, long pos) throws Exception {
  Preconditions.checkState(context != null);
  WriteRequest request = context.getRequest();
  long bytesReserved = context.getBytesReserved();
  if (bytesReserved < pos) {
   long bytesToReserve = Math.max(FILE_BUFFER_SIZE, pos - bytesReserved);
   // Allocate enough space in the existing temporary block for the write.
   mWorker.requestSpace(request.getSessionId(), request.getId(), bytesToReserve);
   context.setBytesReserved(bytesReserved + bytesToReserve);
  }
  if (context.getBlockWriter() == null) {
   String metricName = WorkerMetrics.BYTES_WRITTEN_ALLUXIO;
   context.setBlockWriter(
     mWorker.getTempBlockWriterRemote(request.getSessionId(), request.getId()));
   context.setCounter(MetricsSystem.counter(metricName));
   context.setMeter(MetricsSystem.meter(WorkerMetrics.BYTES_WRITTEN_ALLUXIO_THROUGHPUT));
  }
  Preconditions.checkState(context.getBlockWriter() != null);
  int sz = buf.size();
  Preconditions.checkState(
    context.getBlockWriter().append(buf.asReadOnlyByteBuffer())  == sz);
 }

Example source: apache/hive

private static UserPayload convertPayloadFromProto(
  EntityDescriptorProto proto) {
 UserPayload userPayload = null;
 if (proto.hasUserPayload()) {
  if (proto.getUserPayload().hasUserPayload()) {
   userPayload =
     UserPayload.create(proto.getUserPayload().getUserPayload().asReadOnlyByteBuffer(), proto.getUserPayload().getVersion());
  } else {
   userPayload = UserPayload.create(null);
  }
 }
 return userPayload;
}

Example source: apache/hive

private void verifyOutputSpecAndProto(OutputSpec outputSpec,
                    IOSpecProto outputSpecProto) {
  assertEquals(outputSpec.getPhysicalEdgeCount(), outputSpecProto.getPhysicalEdgeCount());
  assertEquals(outputSpec.getDestinationVertexName(), outputSpecProto.getConnectedVertexName());
  assertEquals(outputSpec.getOutputDescriptor().getClassName(),
    outputSpecProto.getIoDescriptor().getClassName());
  assertEquals(outputSpec.getOutputDescriptor().getUserPayload().getPayload(),
    outputSpecProto.getIoDescriptor().getUserPayload().getUserPayload().asReadOnlyByteBuffer());
 }

Example source: apache/hive

private void verifyInputSpecAndProto(InputSpec inputSpec,
                  IOSpecProto inputSpecProto) {
 assertEquals(inputSpec.getPhysicalEdgeCount(), inputSpecProto.getPhysicalEdgeCount());
 assertEquals(inputSpec.getSourceVertexName(), inputSpecProto.getConnectedVertexName());
 assertEquals(inputSpec.getInputDescriptor().getClassName(),
   inputSpecProto.getIoDescriptor().getClassName());
 assertEquals(inputSpec.getInputDescriptor().getUserPayload().getPayload(),
   inputSpecProto.getIoDescriptor().getUserPayload().getUserPayload().asReadOnlyByteBuffer());
}

Example source: apache/hive

assertEquals(processorDescriptor.getUserPayload().getPayload(),
  vertexProto.getProcessorDescriptor().getUserPayload().getUserPayload()
    .asReadOnlyByteBuffer());
assertEquals(2, vertexProto.getInputSpecsCount());
assertEquals(2, vertexProto.getOutputSpecsCount());
