The org.apache.hadoop.io.file.tfile.Utils.writeVInt() method: usage and code examples

x33g5p2x · Reposted 2022-02-01 in Other

This article collects Java code examples of the org.apache.hadoop.io.file.tfile.Utils.writeVInt() method, showing how Utils.writeVInt() is used in practice. The examples are drawn from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the method:
Package: org.apache.hadoop.io.file.tfile
Class: Utils
Method: writeVInt

About Utils.writeVInt

Encodes an integer into a variable-length encoding format. Synonymous with Utils#writeVLong(out, n).
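To illustrate the general idea of variable-length integer encoding (a LEB128-style sketch using only the JDK, NOT Hadoop's exact VInt byte layout — the names and format here are illustrative assumptions):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of the *concept* behind variable-length integer encoding:
// small magnitudes take fewer bytes. This is not byte-compatible with
// Hadoop's Utils.writeVInt wire format.
public class VIntSketch {

    // Write the low 7 bits per byte; the high bit flags "more bytes follow".
    static void writeVInt(DataOutputStream out, int n) throws IOException {
        long v = n & 0xFFFFFFFFL; // treat as unsigned for this sketch
        while ((v & ~0x7FL) != 0) {
            out.writeByte((int) ((v & 0x7F) | 0x80));
            v >>>= 7;
        }
        out.writeByte((int) v);
    }

    static int readVInt(DataInputStream in) throws IOException {
        int result = 0, shift = 0, b;
        do {
            b = in.readByte() & 0xFF;
            result |= (b & 0x7F) << shift;
            shift += 7;
        } while ((b & 0x80) != 0);
        return result;
    }

    // Encode then decode a value, returning what was read back.
    static int roundTrip(int n) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        writeVInt(new DataOutputStream(bos), n);
        return readVInt(new DataInputStream(new ByteArrayInputStream(bos.toByteArray())));
    }

    public static void main(String[] args) throws IOException {
        for (int n : new int[]{0, 1, 127, 128, 300, Integer.MAX_VALUE}) {
            if (roundTrip(n) != n) throw new AssertionError("mismatch at " + n);
        }
        System.out.println("round-trip ok");
    }
}
```

With this scheme, 127 fits in one byte while 300 takes two — the space saving for small values is what motivates VInt prefixes throughout TFile.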

Code examples

Code example source: org.apache.hadoop/hadoop-common

/**
 * Constructor.
 * 
 * @param out
 *          the underlying output stream.
 * @param size
 *          The total # of bytes to be written as a single chunk.
 * @throws java.io.IOException
 *           if an I/O error occurs.
 */
public SingleChunkEncoder(DataOutputStream out, int size)
  throws IOException {
 this.out = out;
 this.remain = size;
 Utils.writeVInt(out, size);
}

Code example source: org.apache.hadoop/hadoop-common

/**
 * Write out a chunk.
 * 
 * @param chunk
 *          The chunk buffer.
 * @param offset
 *          Offset to chunk buffer for the beginning of chunk.
 * @param len
 *          The length of the chunk in bytes.
 * @param last
 *          Is this the last call to flushBuffer?
 */
private void writeChunk(byte[] chunk, int offset, int len, boolean last)
  throws IOException {
 if (last) { // always write out the length for the last chunk.
  Utils.writeVInt(out, len);
  if (len > 0) {
   out.write(chunk, offset, len);
  }
 } else {
  if (len > 0) {
   Utils.writeVInt(out, -len);
   out.write(chunk, offset, len);
  }
 }
}
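The sign convention above (a negative VInt length marks an intermediate chunk, a non-negative length marks the last one) can be sketched end-to-end with plain JDK streams. For brevity this sketch uses fixed-width ints in place of VInts, so it is not byte-compatible with the real format:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Sketch of the chunk framing: each chunk is length-prefixed; a NEGATIVE
// length marks an intermediate chunk (more follow), a NON-NEGATIVE length
// marks the final chunk. Fixed-width ints stand in for Hadoop's VInts.
public class ChunkFraming {

    static void writeChunk(DataOutputStream out, byte[] chunk, boolean last)
            throws IOException {
        if (last) {
            out.writeInt(chunk.length);   // final chunk: length >= 0
            out.write(chunk);
        } else if (chunk.length > 0) {
            out.writeInt(-chunk.length);  // intermediate chunk: negative length
            out.write(chunk);
        }
    }

    // Read chunks until one with a non-negative length prefix is seen.
    static byte[] readAll(DataInputStream in) throws IOException {
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        while (true) {
            int len = in.readInt();
            boolean last = len >= 0;
            byte[] buf = new byte[Math.abs(len)];
            in.readFully(buf);
            result.write(buf);
            if (last) break;
        }
        return result.toByteArray();
    }

    // Write the parts as chunks (only the final one flagged last), then read back.
    static String roundTrip(String... parts) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        for (int i = 0; i < parts.length; i++) {
            writeChunk(out, parts[i].getBytes(StandardCharsets.UTF_8),
                       i == parts.length - 1);
        }
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        return new String(readAll(in), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("hello ", "chunked ", "world"));
    }
}
```

Encoding "last or not" in the sign of the length lets the writer stream chunks without knowing the total size up front — only the final chunk needs to be flagged.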

Code example source: org.apache.hadoop/hadoop-common

/**
 * Write a String as a VInt n, followed by n Bytes as in Text format.
 * 
 * @param out
 *          the output stream.
 * @param s
 *          the string to write; may be null.
 * @throws IOException
 */
public static void writeString(DataOutput out, String s) throws IOException {
 if (s != null) {
  Text text = new Text(s);
  byte[] buffer = text.getBytes();
  int len = text.getLength();
  writeVInt(out, len);
  out.write(buffer, 0, len);
 } else {
  writeVInt(out, -1);
 }
}
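The matching read side of this convention (a length of -1 encodes null, otherwise the length is followed by that many UTF-8 bytes) might look like the following sketch. The readString helper here is illustrative, not the Utils API shown above, and fixed-width ints again stand in for VInts:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Sketch of the null-vs-empty string convention: -1 marks null, any
// length >= 0 (including 0, the empty string) is followed by the bytes.
public class VStringSketch {

    static void writeString(DataOutputStream out, String s) throws IOException {
        if (s == null) {
            out.writeInt(-1);             // -1 encodes null
            return;
        }
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        out.writeInt(bytes.length);
        out.write(bytes);
    }

    static String readString(DataInputStream in) throws IOException {
        int len = in.readInt();
        if (len < 0) return null;         // recover the null marker
        byte[] buf = new byte[len];
        in.readFully(buf);
        return new String(buf, StandardCharsets.UTF_8);
    }

    // Serialize and deserialize a string, preserving null.
    static String roundTrip(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        writeString(new DataOutputStream(bos), s);
        return readString(new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray())));
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("hello"));
        System.out.println(roundTrip(null));
    }
}
```

Using -1 rather than 0 as the null marker keeps the empty string distinct from null after a round trip.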

Code example source: org.apache.hadoop/hadoop-common

/**
 * Write out a chunk that is a concatenation of the internal buffer plus
 * user supplied data. This will never be the last block.
 * 
 * @param data
 *          User supplied data buffer.
 * @param offset
 *          Offset to user data buffer.
 * @param len
 *          User data buffer size.
 */
private void writeBufData(byte[] data, int offset, int len)
  throws IOException {
 if (count + len > 0) {
  Utils.writeVInt(out, -(count + len));
  out.write(buf, 0, count);
  count = 0;
  out.write(data, offset, len);
 }
}

Code example source: org.apache.hadoop/hadoop-common

public void write(DataOutput out) throws IOException {
  Utils.writeVInt(out, key.length);
  out.write(key, 0, key.length);
  Utils.writeVLong(out, kvEntries);
 }
}

Code example source: org.apache.hadoop/hadoop-common

public void write(DataOutput out) throws IOException {
  Utils.writeVInt(out, index.size());
  for (MetaIndexEntry indexEntry : index.values()) {
   indexEntry.write(out);
  }
 }
}

Code example source: org.apache.hadoop/hadoop-common

public void write(DataOutput out) throws IOException {
  Utils.writeString(out, defaultCompressionAlgorithm.getName());
  Utils.writeVInt(out, listRegions.size());
  for (BlockRegion region : listRegions) {
   region.write(out);
  }
 }
}

Code example source: org.apache.hadoop/hadoop-common

public void write(DataOutput out) throws IOException {
  if (firstKey == null) {
   Utils.writeVInt(out, 0);
   return;
  }
  DataOutputBuffer dob = new DataOutputBuffer();
  Utils.writeVInt(dob, firstKey.size());
  dob.write(firstKey.buffer());
  Utils.writeVInt(out, dob.size());
  out.write(dob.getData(), 0, dob.getLength());
  for (TFileIndexEntry entry : index) {
   dob.reset();
   entry.write(dob);
   Utils.writeVInt(out, dob.getLength());
   out.write(dob.getData(), 0, dob.getLength());
  }
 }
}
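The write method above serializes each index entry into a temporary DataOutputBuffer first, so the entry's length is known before the VInt prefix is written. That buffer-then-prefix pattern, with fixed-width ints standing in for VInts, can be sketched as:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of length-prefixed records: each record is serialized into a
// temporary buffer so its length can be written before its bytes.
public class LengthPrefixed {

    // Frame a sequence of already-serialized records with length prefixes.
    static byte[] frame(byte[][] records) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        for (byte[] r : records) {
            out.writeInt(r.length); // prefix written only once length is known
            out.write(r);
        }
        return bos.toByteArray();
    }

    // The prefix lets a reader skip over any record without parsing it.
    static int countRecords(byte[] framed) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(framed));
        int count = 0;
        while (in.available() > 0) {
            int len = in.readInt();
            in.skipBytes(len);
            count++;
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        byte[] framed = frame(new byte[][]{{1, 2, 3}, {4, 5}});
        System.out.println(countRecords(framed));
    }
}
```

Paying the cost of a temporary buffer on the write path is what makes cheap skipping possible on the read path.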

Code example source: org.apache.hadoop/hadoop-common

// Truncated excerpt: a key is written as a VInt length prefix followed by the key bytes.
Utils.writeVInt(blkAppender, len);
blkAppender.write(key, 0, len);
if (tfileIndex.getFirstKey() == null) {
 // ...

The same write(DataOutput) snippets shown above also appear verbatim in the repackaged artifacts ch.cern.hadoop/hadoop-common, io.hops/hadoop-common, com.github.jiayuhan-it/hadoop-common, org.apache.apex/malhar-library, com.facebook.hadoop/hadoop-core, and io.prestosql.hadoop/hadoop-apache.