This article collects code examples of the Java method org.apache.storm.tuple.Tuple.getLong(), showing how Tuple.getLong() is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the Tuple.getLong() method:
Package: org.apache.storm.tuple
Class name: Tuple
Method name: getLong
Returns the Long at position i in the tuple.
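Before the real-world examples, a minimal self-contained sketch of the positional-access semantics described above. The SimpleTuple class here is a hypothetical stand-in, not part of the Storm API; it only mirrors the documented behavior of getLong(i), which returns the value at position i cast to Long (so a non-Long value at that position would throw a ClassCastException):

```java
import java.util.Arrays;
import java.util.List;

public class GetLongDemo {
    // Hypothetical stand-in for org.apache.storm.tuple.Tuple,
    // used only to illustrate positional access; not Storm's API.
    static class SimpleTuple {
        private final List<Object> values;

        SimpleTuple(Object... values) {
            this.values = Arrays.asList(values);
        }

        // Mirrors Tuple.getLong(i): returns the Long at position i.
        Long getLong(int i) {
            return (Long) values.get(i);
        }
    }

    public static void main(String[] args) {
        // Position 0 holds a source id, position 1 an event id,
        // matching the (sourceId, eventId) layout used in the examples below.
        SimpleTuple tuple = new SimpleTuple(42, 1001L);
        Long eventId = tuple.getLong(1);
        System.out.println(eventId); // prints 1001
    }
}
```

Note that the value must have been emitted as a Long (e.g. the literal 1001L); an Integer at the same position would not be widened automatically.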
Code example from: apache/storm
public void execute(Tuple input) {
    Integer sourceId = input.getInteger(0);
    Long eventId = input.getLong(1);
    Long recentEvent = recentEventId.get(sourceId);
    if (null != recentEvent && eventId <= recentEvent) {
        String error = "Error: event id is not in strict order! event source Id: "
                + sourceId + ", last event Id: " + recentEvent + ", current event Id: " + eventId;
        _collector.emit(input, new Values(error));
    }
    recentEventId.put(sourceId, eventId);
    _collector.ack(input);
}
Code example from: apache/storm
@Override
public void execute(Tuple tuple) {
    Object obj = tuple.getValue(0);
    long count = tuple.getLong(1);
    int source = tuple.getSourceTask();
    Map<Integer, Long> subCounts = counts.get(obj);
    if (subCounts == null) {
        subCounts = new HashMap<Integer, Long>();
        counts.put(obj, subCounts);
    }
    // Update the current count for this object
    subCounts.put(source, count);
    // Output the sum of all the known counts so far for this key
    long sum = 0;
    for (Long val : subCounts.values()) {
        sum += val;
    }
    collector.emit(new Values(obj, sum));
}
Code example from: apache/storm (excerpt; the snippet begins mid-method, inside an earlier if branch)

        pending.put(id, curr);
        curr.updateAck(input.getLong(1));
        curr.spoutTask = input.getInteger(2);
    } else if (ACKER_ACK_STREAM_ID.equals(streamId)) {
        pending.put(id, curr);
        curr.updateAck(input.getLong(1));
    } else if (ACKER_FAIL_STREAM_ID.equals(streamId)) {
Code example from: intel-hadoop/HiBench
@Override
public void execute(Tuple tuple, BasicOutputCollector collector) {
    String word = tuple.getString(0);
    Integer count = counts.get(word);
    if (count == null) {
        count = 0;
    }
    count++;
    counts.put(word, count);
    LatencyReporter latencyReporter = new KafkaReporter(config.reporterTopic, config.brokerList);
    latencyReporter.report(tuple.getLong(1), System.currentTimeMillis());
}
Code example from: intel-hadoop/HiBench
@Override
public void execute(TupleWindow inputWindow) {
    Map<String, Long[]> counts = new HashMap<String, Long[]>();
    for (Tuple tuple : inputWindow.get()) {
        Long time = tuple.getLong(1);
        String ip = tuple.getString(0);
        Long[] timeAndCount = counts.get(ip);
        if (null == timeAndCount) {
            timeAndCount = new Long[2];
            timeAndCount[0] = time;
            timeAndCount[1] = 0L;
        }
        timeAndCount[0] = Math.min(timeAndCount[0], time);
        timeAndCount[1]++;
        counts.put(ip, timeAndCount);
    }
    LatencyReporter latencyReporter = new KafkaReporter(config.reporterTopic, config.brokerList);
    for (Long[] timeAndCount : counts.values()) {
        for (int i = 0; i < timeAndCount[1]; i++) {
            latencyReporter.report(timeAndCount[0], System.currentTimeMillis());
        }
    }
}