This article collects code examples of the Java method org.apache.hadoop.hive.ql.exec.Utilities.now() and shows how Utilities.now() is used in practice. The examples are extracted from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the Utilities.now() method are as follows:
Package path: org.apache.hadoop.hive.ql.exec.Utilities
Class name: Utilities
Method name: now
Description: none provided.
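Although no description is provided above, the snippets below all use Utilities.now() the same way: as a timestamp prefix for console log lines. A minimal sketch of that behavior, where the "yyyy-MM-dd HH:mm:ss" format pattern is an assumption for illustration rather than taken from the Hive source:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class NowSketch {
    // Hedged reconstruction of Utilities.now(): returns the current time as a
    // string used to prefix log lines. The exact pattern is an assumption.
    public static String now() {
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date());
    }

    public static void main(String[] args) {
        // Mirrors the log-prefix style seen in the snippets below.
        System.out.println(now() + "\tProcessing rows:\t" + 1000);
    }
}
```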
Code example source: apache/hive
/**
 * Throws MapJoinMemoryExhaustionError when the JVM has consumed the
 * configured percentage of memory. The arguments are used only for the error
 * message.
 *
 * @param tableContainerSize current table container size
 * @param numRows number of rows processed
 * @throws MapJoinMemoryExhaustionError
 */
public void checkMemoryStatus(long tableContainerSize, long numRows)
    throws MapJoinMemoryExhaustionError {
  long usedMemory = memoryMXBean.getHeapMemoryUsage().getUsed();
  double percentage = (double) usedMemory / (double) maxHeapSize;
  String msg = Utilities.now() + "\tProcessing rows:\t" + numRows + "\tHashtable size:\t"
      + tableContainerSize + "\tMemory usage:\t" + usedMemory + "\tpercentage:\t"
      + percentageNumberFormat.format(percentage);
  console.printInfo(msg);
  if (percentage > maxMemoryUsage) {
    throw new MapJoinMemoryExhaustionError(msg);
  }
}
}
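The checkMemoryStatus pattern above can be reproduced outside Hive with the standard MemoryMXBean. In the sketch below, using Runtime.maxMemory() for the heap cap and returning the status line instead of printing it through Hive's console are stand-ins assumed for illustration; Hive keeps maxHeapSize, console, and percentageNumberFormat as fields of the handler:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.text.NumberFormat;

public class MemoryCheckSketch {
    private static final MemoryMXBean MEMORY_MX_BEAN = ManagementFactory.getMemoryMXBean();

    // Fraction of the maximum heap currently in use; stands in for the
    // usedMemory / maxHeapSize computation in the snippet above.
    public static double heapFraction() {
        long used = MEMORY_MX_BEAN.getHeapMemoryUsage().getUsed();
        return (double) used / (double) Runtime.getRuntime().maxMemory();
    }

    // Builds the status line instead of printing it; whether to throw when a
    // threshold is exceeded is left to the caller.
    public static String statusLine(long tableContainerSize, long numRows) {
        NumberFormat pct = NumberFormat.getPercentInstance();
        return "Processing rows:\t" + numRows + "\tHashtable size:\t"
            + tableContainerSize + "\tpercentage:\t" + pct.format(heapFraction());
    }

    public static void main(String[] args) {
        System.out.println(statusLine(0, 0));
    }
}
```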
Code example source: apache/drill
/**
 * Throws MapJoinMemoryExhaustionException when the JVM has consumed the
 * configured percentage of memory. The arguments are used only for the error
 * message.
 *
 * @param tableContainerSize current table container size
 * @param numRows number of rows processed
 * @throws MapJoinMemoryExhaustionException
 */
public void checkMemoryStatus(long tableContainerSize, long numRows)
    throws MapJoinMemoryExhaustionException {
  long usedMemory = memoryMXBean.getHeapMemoryUsage().getUsed();
  double percentage = (double) usedMemory / (double) maxHeapSize;
  String msg = Utilities.now() + "\tProcessing rows:\t" + numRows + "\tHashtable size:\t"
      + tableContainerSize + "\tMemory usage:\t" + usedMemory + "\tpercentage:\t"
      + percentageNumberFormat.format(percentage);
  console.printInfo(msg);
  if (percentage > maxMemoryUsage) {
    throw new MapJoinMemoryExhaustionException(msg);
  }
}
}
Code example source: apache/drill
console.printInfo(Utilities.now()
    + "\tStarting to launch local task to process map join;\tmaximum memory = "
    + memoryMXBean.getHeapMemoryUsage().getMax());
long currentTime = System.currentTimeMillis();
long elapsed = currentTime - startTime;
console.printInfo(Utilities.now() + "\tEnd of local task; Time Taken: "
    + Utilities.showTime(elapsed) + " sec.");
} catch (Throwable throwable) {
Code example source: apache/hive
console.printInfo(Utilities.now()
    + "\tStarting to launch local task to process map join;\tmaximum memory = "
    + memoryMXBean.getHeapMemoryUsage().getMax());
long currentTime = System.currentTimeMillis();
long elapsed = currentTime - startTime;
console.printInfo(Utilities.now() + "\tEnd of local task; Time Taken: "
    + Utilities.showTime(elapsed) + " sec.");
} catch (Throwable throwable) {
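In the snippet above, Utilities.showTime(elapsed) turns the millisecond delta between the start and end of the local task into seconds for the log line. A sketch of that timing pattern, where showTime is reconstructed under the assumption that it simply divides the millisecond count by 1000:

```java
public class ElapsedSketch {
    // Assumed behaviour of Utilities.showTime: milliseconds to seconds,
    // rendered as a plain decimal string.
    public static String showTime(long elapsedMillis) {
        return String.valueOf((double) elapsedMillis / 1000.0);
    }

    public static void main(String[] args) throws InterruptedException {
        long startTime = System.currentTimeMillis();
        Thread.sleep(20); // stand-in for the local task's work
        long elapsed = System.currentTimeMillis() - startTime;
        System.out.println("End of local task; Time Taken: " + showTime(elapsed) + " sec.");
    }
}
```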
Code example source: apache/hive
console.printInfo(Utilities.now() + "\tDump the side-table for tag: " + tag +
    " with group count: " + tableContainer.size() + " into file: " + path);
console.printInfo(Utilities.now() + "\tUploaded 1 File to: " + path +
    " (" + status.getLen() + " bytes)");
Code example source: apache/hive
htsOperator.console.printInfo(Utilities.now() + "\tDump the side-table for tag: " + tag
    + " with group count: " + tableContainer.size() + " into file: " + path);
try {
  htsOperator.console.printInfo(Utilities.now() + "\tUploaded 1 File to: " + path
      + " (" + status.getLen() + " bytes)");
} catch (Exception e) {
Code example source: apache/drill
console.printInfo(Utilities.now() + "\tDump the side-table for tag: " + tag +
    " with group count: " + tableContainer.size() + " into file: " + path);
console.printInfo(Utilities.now() + "\tUploaded 1 File to: " + path +
    " (" + status.getLen() + " bytes)");
Code example source: apache/drill
htsOperator.console.printInfo(Utilities.now() + "\tDump the side-table for tag: " + tag
    + " with group count: " + tableContainer.size() + " into file: " + path);
try {
  htsOperator.console.printInfo(Utilities.now() + "\tUploaded 1 File to: " + path
      + " (" + status.getLen() + " bytes)");
} catch (Exception e) {
Code example source: org.apache.hadoop.hive/hive-exec
public boolean isAbort(long numRows, LogHelper console) {
  System.gc();
  System.gc();
  int size = mHash.size();
  long usedMemory = memoryMXBean.getHeapMemoryUsage().getUsed();
  double rate = (double) usedMemory / (double) maxMemory;
  console.printInfo(Utilities.now() + "\tProcessing rows:\t" + numRows + "\tHashtable size:\t"
      + size + "\tMemory usage:\t" + usedMemory + "\trate:\t" + num.format(rate));
  if (rate > (double) maxMemoryUsage) {
    return true;
  }
  return false;
}
Code example source: com.facebook.presto.hive/hive-apache
/**
 * Throws MapJoinMemoryExhaustionException when the JVM has consumed the
 * configured percentage of memory. The arguments are used only for the error
 * message.
 *
 * @param tableContainerSize current table container size
 * @param numRows number of rows processed
 * @throws MapJoinMemoryExhaustionException
 */
public void checkMemoryStatus(long tableContainerSize, long numRows)
    throws MapJoinMemoryExhaustionException {
  long usedMemory = memoryMXBean.getHeapMemoryUsage().getUsed();
  double percentage = (double) usedMemory / (double) maxHeapSize;
  String msg = Utilities.now() + "\tProcessing rows:\t" + numRows + "\tHashtable size:\t"
      + tableContainerSize + "\tMemory usage:\t" + usedMemory + "\tpercentage:\t"
      + percentageNumberFormat.format(percentage);
  console.printInfo(msg);
  if (percentage > maxMemoryUsage) {
    throw new MapJoinMemoryExhaustionException(msg);
  }
}
}
Code example source: org.apache.hadoop.hive/hive-exec
console.printInfo(Utilities.now() + "\tDump the hashtable into file: " + tmpURIPath);
Path path = new Path(tmpURIPath);
FileSystem fs = path.getFileSystem(job);
fs.create(path);
long fileLength = hashTable.flushMemoryCacheToPersistent(file);
console.printInfo(Utilities.now() + "\tUpload 1 File to: " + tmpURIPath + " File size: "
    + fileLength);
hashTable.close();
Code example source: org.apache.hadoop.hive/hive-exec
console.printInfo(Utilities.now()
    + "\tStarting to launch local task to process map join;\tmaximum memory = "
    + memoryMXBean.getHeapMemoryUsage().getMax());
console.printInfo(Utilities.now() + "\tEnd of local task; Time Taken: "
    + Utilities.showTime(elapsed) + " sec.");
} catch (Throwable e) {
Code example source: org.apache.hadoop.hive/hive-exec
console.printInfo(Utilities.now() + "\tDump the hashtable into file: " + tmpURIPath);
fs.create(path);
fileLength = hashTable.flushMemoryCacheToPersistent(file);
console.printInfo(Utilities.now() + "\tUpload 1 File to: " + tmpURIPath + " File size: "
    + fileLength);
Code example source: com.facebook.presto.hive/hive-apache
console.printInfo(Utilities.now()
    + "\tStarting to launch local task to process map join;\tmaximum memory = "
    + memoryMXBean.getHeapMemoryUsage().getMax());
long currentTime = System.currentTimeMillis();
long elapsed = currentTime - startTime;
console.printInfo(Utilities.now() + "\tEnd of local task; Time Taken: "
    + Utilities.showTime(elapsed) + " sec.");
} catch (Throwable throwable) {
Code example source: com.facebook.presto.hive/hive-apache
console.printInfo(Utilities.now() + "\tDump the side-table for tag: " + tag +
    " with group count: " + tableContainer.size() + " into file: " + path);
console.printInfo(Utilities.now() + "\tUploaded 1 File to: " + path +
    " (" + status.getLen() + " bytes)");
Code example source: com.facebook.presto.hive/hive-apache
replication = (short) Math.max(MIN_REPLICATION, numOfPartitions);
htsOperator.console.printInfo(Utilities.now() + "\tDump the side-table for tag: " + tag
    + " with group count: " + tableContainer.size() + " into file: " + path);
mapJoinTableSerde.persist(out, tableContainer);
FileStatus status = fs.getFileStatus(path);
htsOperator.console.printInfo(Utilities.now() + "\tUploaded 1 File to: " + path
    + " (" + status.getLen() + " bytes)");
} catch (Exception e) {
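The snippet above persists a serialized side-table through the Hadoop FileSystem API and then logs the uploaded file's size from fs.getFileStatus(path).getLen(). The sketch below substitutes java.nio for the Hadoop APIs so the pattern runs standalone; the method name dumpAndLog and the plain-String path parameter are illustrative assumptions, not Hive names:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DumpLogSketch {
    // Writes the serialized table bytes to a file and returns the log line;
    // Files.size corresponds to fs.getFileStatus(path).getLen() above.
    public static String dumpAndLog(String pathStr, byte[] serializedTable) {
        try {
            Path path = Paths.get(pathStr);
            Files.write(path, serializedTable);
            return "Uploaded 1 File to: " + path + " (" + Files.size(path) + " bytes)";
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        String tmp = System.getProperty("java.io.tmpdir") + "/sidetable_sketch.bin";
        System.out.println(dumpAndLog(tmp, new byte[] {1, 2, 3, 4, 5}));
    }
}
```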