This article collects code examples of the Java method backtype.storm.tuple.Tuple.getFields(), showing how Tuple.getFields() is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the Tuple.getFields() method:

Package: backtype.storm.tuple
Class: Tuple
Method: getFields

Gets the names of the fields in this tuple.
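Before the sourced examples, here is a minimal, self-contained sketch of the common pattern they all rely on: pairing each field name from getFields() with the value at the same index to build a field-to-value map. The class and method names below (TupleFieldsDemo, tupleToMap) are hypothetical stand-ins, using plain lists instead of a real Storm Tuple, so the snippet runs without a Storm dependency.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the Storm pattern: a tuple is a list of
// values plus a parallel list of field names (what getFields() returns).
public class TupleFieldsDemo {

    // Pair each field name with the value at the same position,
    // mirroring the tupleToMap idiom seen in the examples below.
    static Map<String, Object> tupleToMap(List<String> fields, List<Object> values) {
        Map<String, Object> map = new LinkedHashMap<>();
        for (int i = 0; i < fields.size(); i++) {
            map.put(fields.get(i), values.get(i));
        }
        return map;
    }

    public static void main(String[] args) {
        List<String> fields = Arrays.asList("word", "count");
        List<Object> values = Arrays.asList("storm", 3);
        // Prints the field/value pairs in declaration order.
        System.out.println(tupleToMap(fields, values));
    }
}
```

With a real Tuple, the field list would come from tuple.getFields() and each value from tuple.getValueByField(name) or tuple.getValue(i), as the sourced examples show.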
Code example from alibaba/mdrill:

/**
 * Returns a subset of the tuple based on the fields selector.
 */
public List<Object> select(Fields selector) {
    return getFields().select(selector, values);
}
Code example from alibaba/mdrill:

public int fieldIndex(String field) {
    return getFields().fieldIndex(field);
}
Code example from alibaba/mdrill:

private PersistentArrayMap toMap() {
    Object[] array = new Object[values.size() * 2];
    List<String> fields = getFields().toList();
    for (int i = 0; i < values.size(); i++) {
        array[i * 2] = fields.get(i);
        array[(i * 2) + 1] = values.get(i);
    }
    return new PersistentArrayMap(array);
}
Code example from alibaba/mdrill:

public ISeq seq() {
    if (values.size() > 0) {
        return new Seq(getFields().toList(), values, 0);
    }
    return null;
}
Code example from apache/eagle:

private static Map tupleToMap(Tuple tuple) {
    Map values = new HashMap<>();
    for (String field : tuple.getFields()) {
        values.put(field, tuple.getValueByField(field));
    }
    return values;
}
Code example from XavientInformationSystems/Data-Ingestion-Platform:

private List<String> getFileds(Tuple tuple) {
    List<String> fields = new ArrayList<>();
    Iterator<String> itr = tuple.getFields().iterator();
    while (itr.hasNext()) {
        String field = itr.next();
        if (!"tableName".equals(field)) {
            fields.add(field);
        }
    }
    return fields;
}
Code example from hmsonline/storm-cassandra:

@Override
public List<String> mapToColumnList(Tuple tuple) {
    Fields fields = tuple.getFields();
    List<String> result = new ArrayList<String>();
    Iterator<String> it = fields.iterator();
    while (it.hasNext()) {
        String fieldName = it.next();
        if (!fieldName.equals(rowKeyField) && !fieldName.equals(incrementAmountField)) {
            result.add(tuple.getValueByField(fieldName).toString());
        }
    }
    return result;
}
Code example from openimaj/openimaj:

@Override
public void execute(Tuple input) {
    for (final String field : input.getFields()) {
        final Object value = input.getValueByField(field);
        System.out.println(String.format(fieldValueFormat, field, value));
    }
    collector.ack(input);
}
Code example from ptgoetz/storm-hdfs:

@Override
public byte[] format(Tuple tuple) {
    StringBuilder sb = new StringBuilder();
    Fields fields = this.fields == null ? tuple.getFields() : this.fields;
    int size = fields.size();
    for (int i = 0; i < size; i++) {
        sb.append(tuple.getValueByField(fields.get(i)));
        if (i != size - 1) {
            sb.append(this.fieldDelimiter);
        }
    }
    sb.append(this.recordDelimiter);
    return sb.toString().getBytes();
}
Code example from hmsonline/storm-cassandra:

/**
 * Default behavior is to write each value in the tuple as a key:value pair
 * in the Cassandra row.
 *
 * @param tuple
 * @return map of columns to values
 */
@Override
public Map<String, String> mapToColumns(Tuple tuple) {
    Fields fields = tuple.getFields();
    Map<String, String> columns = new HashMap<String, String>();
    for (int i = 0; i < fields.size(); i++) {
        String name = fields.get(i);
        Object value = tuple.getValueByField(name);
        columns.put(name, (value != null ? value.toString() : ""));
    }
    return columns;
}
Code example from hmsonline/storm-cassandra:

/**
 * Write each value in the tuple as a key:value pair
 * in the Cassandra row, excluding fields that were included in the row key.
 *
 * @param tuple
 * @return map of columns to values
 */
@Override
public Map<String, String> mapToColumns(Tuple tuple) {
    Fields fields = tuple.getFields();
    Map<String, String> columns = new HashMap<String, String>();
    for (int i = 0; i < fields.size(); i++) {
        String name = fields.get(i);
        boolean isRowField = Arrays.asList(this.rowKeyFields).contains(name);
        if (!isRowField) {
            Object value = tuple.getValueByField(name);
            columns.put(name, (value != null ? value.toString() : ""));
        }
    }
    return columns;
}
Code example from apache/eagle:

@Override
public List<StreamEvent> map(Tuple tuple) throws Exception {
    long timestamp;
    if (tuple.getFields().contains(TIMESTAMP_FIELD)) {
        try {
            timestamp = tuple.getLongByField("timestamp");
        } catch (Exception ex) {
            LOGGER.error(ex.getMessage(), ex);
            timestamp = 0;
        }
    } else {
        timestamp = System.currentTimeMillis();
    }
    Object[] values = new Object[tuple.getFields().size()];
    for (int i = 0; i < tuple.getFields().size(); i++) {
        values[i] = tuple.getValue(i);
    }
    StreamEvent event = new StreamEvent();
    event.setTimestamp(timestamp);
    event.setStreamId(streamId);
    event.setData(values);
    return Collections.singletonList(event);
}
代码示例来源:origin: org.jwall/streams-storm
Fields fields = input.getFields();
for (int i = 0; i < fields.size(); i++) {
String key = fields.get(i);
Code example from tomdz/storm-esper:

@Override
public void execute(Tuple tuple) {
    String eventType = getEventTypeName(tuple.getSourceComponent(), tuple.getSourceStreamId());
    Map<String, Object> data = new HashMap<String, Object>();
    Fields fields = tuple.getFields();
    int numFields = fields.size();
    for (int idx = 0; idx < numFields; idx++) {
        String name = fields.get(idx);
        Object value = tuple.getValue(idx);
        data.put(name, value);
    }
    runtime.sendEvent(data, eventType);
    collector.ack(tuple);
}