This article collects code examples of the Java method org.apache.hadoop.hbase.util.Bytes.iterateOnSplits(), showing how Bytes.iterateOnSplits() is used in practice. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of Bytes.iterateOnSplits() are as follows:
Package: org.apache.hadoop.hbase.util
Class: Bytes
Method: iterateOnSplits
Description: Iterate over keys within the passed range, splitting at an [a,b) boundary.
Code example source: apache/hbase
/**
 * Iterate over keys within the passed range, splitting at an [a,b) boundary.
 */
public static Iterable<byte[]> iterateOnSplits(final byte[] a, final byte[] b, final int num) {
  return iterateOnSplits(a, b, false, num);
}
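To make the splitting semantics concrete, here is a small self-contained sketch of the arithmetic behind iterateOnSplits. It is an assumption based on the num + 2 array sizing in the split method shown in these examples: the keys are treated as unsigned big-endian integers, and the sequence yields the start key, num evenly spaced dividers, and the end key. The real method also handles unequal-length keys and the inclusive flag, which this sketch ignores.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class IterateOnSplitsSketch {
    // Simplified model of iterateOnSplits: interpret the keys as
    // unsigned big-endian integers and emit a, then num evenly
    // spaced dividers, then b (num + 2 keys in total).
    static List<byte[]> iterateOnSplits(byte[] a, byte[] b, int num) {
        BigInteger lo = new BigInteger(1, a);
        BigInteger hi = new BigInteger(1, b);
        BigInteger step = hi.subtract(lo).divide(BigInteger.valueOf(num + 1));
        List<byte[]> keys = new ArrayList<>();
        keys.add(a);
        for (int i = 1; i <= num; i++) {
            keys.add(toBytes(lo.add(step.multiply(BigInteger.valueOf(i))), a.length));
        }
        keys.add(b);
        return keys;
    }

    // Left-pad the magnitude bytes back to the original key width.
    static byte[] toBytes(BigInteger v, int width) {
        byte[] raw = v.toByteArray();
        byte[] out = new byte[width];
        int n = Math.min(raw.length, width);
        System.arraycopy(raw, raw.length - n, out, width - n, n);
        return out;
    }

    public static void main(String[] args) {
        // Splitting [0x00, 0x64) with num = 3 yields dividers 0x19, 0x32, 0x4b.
        for (byte[] k : iterateOnSplits(new byte[]{0x00}, new byte[]{0x64}, 3)) {
            System.out.println(k[0]);
        }
    }
}
```

Note the 5 keys returned for num = 3: this num + 2 count is what the split method below relies on when it sizes its result array.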
Code example source: apache/hbase
/**
 * Split passed range. Expensive operation relatively. Uses BigInteger math.
 * Useful splitting ranges for MapReduce jobs.
 * @param a Beginning of range
 * @param b End of range
 * @param inclusive Whether the end of range is prefix-inclusive or is
 *   considered an exclusive boundary. Automatic splits are generally exclusive
 *   and manual splits with an explicit range utilize an inclusive end of range.
 * @param num Number of times to split range. Pass 1 if you want to split
 *   the range in two; i.e. one split.
 * @return Array of dividing values
 */
public static byte[][] split(final byte[] a, final byte[] b,
    boolean inclusive, final int num) {
  byte[][] ret = new byte[num + 2][];
  int i = 0;
  Iterable<byte[]> iter = iterateOnSplits(a, b, inclusive, num);
  if (iter == null) {
    return null;
  }
  for (byte[] elem : iter) {
    ret[i++] = elem;
  }
  return ret;
}
Code example source: apache/hbase (excerpt)
return iterateOnSplits(aPaddedAdditional, bPaddedAdditional, inclusive, num);
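This one-line fragment comes from the boundary handling inside iterateOnSplits: in the versions of the Bytes class I have seen, both keys are right-padded with zero bytes (via Bytes.padTail) before the BigInteger math runs, so that keys of unequal length become comparable and a prefix end key can behave inclusively. A standalone sketch of that padding, with padTail reimplemented here as an assumption rather than taken from the library:

```java
import java.util.Arrays;

public class PadSketch {
    // Hypothetical analogue of Bytes.padTail: append `padding`
    // zero bytes to the end of the array.
    static byte[] padTail(byte[] a, int padding) {
        // Arrays.copyOf zero-fills the extra positions.
        return Arrays.copyOf(a, a.length + padding);
    }

    public static void main(String[] args) {
        byte[] b = {0x41, 0x42};                  // "AB"
        byte[] bPaddedAdditional = padTail(b, 1); // {0x41, 0x42, 0x00}
        // Splitting up to the padded key keeps every key with the
        // prefix "AB" inside the [a, bPadded) range, which is the
        // effect the inclusive flag is after.
        System.out.println(bPaddedAdditional.length);
    }
}
```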
Code example source: apache/hbase (excerpt)
try {
  for (byte[] key : Bytes.iterateOnSplits(startKey, endKey, numRows - 2)) {
    Cell kv = new KeyValue(key, family, qualifier, now, key);
    if (withTag) {
Code example source: apache/hbase
private void createHFile(Path path,
    byte[] family, byte[] qualifier,
    byte[] startKey, byte[] endKey, int numRows) throws IOException {
  HFile.Writer writer = null;
  long now = System.currentTimeMillis();
  try {
    HFileContext context = new HFileContextBuilder().build();
    writer = HFile.getWriterFactory(conf, new CacheConfig(conf)).withPath(fs, path)
        .withFileContext(context).create();
    // subtract 2 since numRows doesn't include boundary keys
    for (byte[] key : Bytes.iterateOnSplits(startKey, endKey, true, numRows - 2)) {
      KeyValue kv = new KeyValue(key, family, qualifier, now, key);
      writer.append(kv);
    }
  } finally {
    if (writer != null) {
      writer.close();
    }
  }
}
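The numRows - 2 in these test helpers follows from the key count: assuming the iterator yields both boundary keys plus num dividers (num + 2 keys in total, matching the num + 2 array sizing in split above), requesting numRows - 2 splits produces exactly numRows row keys. A trivial sketch of that counting argument:

```java
public class RowCountSketch {
    // Assumed count model: iterateOnSplits(a, b, num) yields the
    // start key, num dividers, and the end key.
    static int keysYielded(int num) {
        return num + 2;
    }

    public static void main(String[] args) {
        int numRows = 10;
        // Passing numRows - 2 as num yields exactly numRows keys.
        System.out.println(keysYielded(numRows - 2)); // prints 10
    }
}
```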
Code example source: org.apache.hbase/hbase-common, harbby/presto-connectors, co.cask.hbase/hbase, com.aliyun.hbase/alihbase-common — these artifacts contain the same iterateOnSplits overload, split method, and padded-key call as the apache/hbase examples above; the verbatim duplicates are omitted.
Code example source: apache/bigtop
/**
 * Create an HFile with the given number of rows between a given
 * start key and end key.
 */
public static void createHFile(
    Configuration conf,
    FileSystem fs, Path path,
    byte[] family, byte[] qualifier,
    byte[] startKey, byte[] endKey, int numRows) throws IOException {
  HFile.WriterFactory wf = HFile.getWriterFactory(conf, new CacheConfig(conf));
  HFileContext hFileContext = new HFileContext();
  wf.withFileContext(hFileContext);
  wf.withComparator(KeyValue.COMPARATOR);
  wf.withPath(fs, path);
  HFile.Writer writer = wf.create();
  long now = System.currentTimeMillis();
  try {
    // subtract 2 since iterateOnSplits doesn't include boundary keys
    for (byte[] key : Bytes.iterateOnSplits(startKey, endKey, numRows - 2)) {
      KeyValue kv = new KeyValue(key, family, qualifier, now, key);
      writer.append(kv);
    }
  } finally {
    writer.close();
  }
}
Code example source: org.apache.hbase/hbase-server — identical to the two apache/hbase createHFile excerpts above; the verbatim duplicates are omitted.
This content was collected from the Internet; if it infringes your rights, please contact the author to have it removed.