This article collects code examples of the Java method parquet.schema.MessageType.getFields(), illustrating how MessageType.getFields() is used in practice. The examples are extracted from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they carry real-world reference value. Details of MessageType.getFields():
Package: parquet.schema
Class: MessageType
Method: getFields
Method description: not available
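Since no official description is available, here is a minimal sketch of what getFields() does: it returns the list of top-level fields of a Parquet message schema, in declaration order. The schema string and class name below are illustrative assumptions, not taken from the projects cited in this article.

```java
import parquet.schema.MessageType;
import parquet.schema.MessageTypeParser;
import parquet.schema.Type;

// Hypothetical demo class, for illustration only.
public class GetFieldsDemo {
    public static void main(String[] args) {
        // Parse a small example schema (the schema text is an assumption)
        MessageType schema = MessageTypeParser.parseMessageType(
                "message example { required int32 id; optional binary name; }");
        // getFields() yields the top-level fields in declaration order
        for (Type field : schema.getFields()) {
            System.out.println(field.getName() + " (" + field.getRepetition() + ")");
        }
    }
}
```

For nested schemas, each element of the returned list may itself be a GroupType, which is why several examples below check field.isPrimitive() before descending.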
Code example source: prestodb/presto
public static parquet.schema.Type getParquetTypeByName(String columnName, MessageType messageType)
{
    if (messageType.containsField(columnName)) {
        return messageType.getType(columnName);
    }
    // parquet is case-sensitive, but hive is not. all hive columns get converted to lowercase
    // check for direct match above but if no match found, try case-insensitive match
    for (parquet.schema.Type type : messageType.getFields()) {
        if (type.getName().equalsIgnoreCase(columnName)) {
            return type;
        }
    }
    return null;
}
Code example source: prestodb/presto
public static int getFieldIndex(MessageType fileSchema, String name)
{
    try {
        return fileSchema.getFieldIndex(name.toLowerCase(Locale.ENGLISH));
    }
    catch (InvalidRecordException e) {
        for (parquet.schema.Type type : fileSchema.getFields()) {
            if (type.getName().equalsIgnoreCase(name)) {
                return fileSchema.getFieldIndex(type.getName());
            }
        }
        return -1;
    }
}
Code example source: prestodb/presto
String columnName = useParquetColumnNames ? name : fileSchema.getFields().get(column.getHiveColumnIndex()).getName();
fieldsBuilder.add(constructField(type, lookupColumnByName(messageColumnIO, columnName)));
Code example source: com.twitter/parquet-tools
public static void showDetails(PrettyPrintWriter out, MessageType type) {
    List<String> cpath = new ArrayList<String>();
    for (Type ftype : type.getFields()) {
        showDetails(out, ftype, 0, type, cpath);
    }
}
Code example source: com.twitter/parquet-pig
/**
 * converts a parquet schema into a pig schema
 * @param parquetSchema the parquet schema to convert to Pig schema
 * @return the resulting schema
 */
public Schema convert(MessageType parquetSchema) {
    return convertFields(parquetSchema.getFields());
}
Code example source: org.apache.tajo/tajo-storage
/**
 * Converts a Parquet schema to a Tajo schema.
 *
 * @param parquetSchema The Parquet schema to convert.
 * @return The resulting Tajo schema.
 */
public Schema convert(MessageType parquetSchema) {
    return convertFields(parquetSchema.getFields());
}
Code example source: uber/hudi
/**
 * Returns equivalent Hive table schema read from a parquet file
 *
 * @param messageType : Parquet Schema
 * @return : Hive Table schema read from parquet file MAP[String,String]
 */
public static Map<String, String> convertParquetSchemaToHiveSchema(MessageType messageType)
        throws IOException {
    Map<String, String> schema = Maps.newLinkedHashMap();
    List<Type> parquetFields = messageType.getFields();
    for (Type parquetType : parquetFields) {
        StringBuilder result = new StringBuilder();
        String key = parquetType.getName();
        if (parquetType.isRepetition(Type.Repetition.REPEATED)) {
            result.append(createHiveArray(parquetType, ""));
        } else {
            result.append(convertField(parquetType));
        }
        schema.put(hiveCompatibleFieldName(key, false), result.toString());
    }
    return schema;
}
Code example source: uk.co.nichesolutions.presto/presto-hive
private static parquet.schema.Type getParquetTypeByName(String columnName, MessageType messageType)
{
    if (messageType.containsField(columnName)) {
        return messageType.getType(columnName);
    }
    // parquet is case-sensitive, but hive is not. all hive columns get converted to lowercase
    // check for direct match above but if no match found, try case-insensitive match
    for (Type type : messageType.getFields()) {
        if (type.getName().equalsIgnoreCase(columnName)) {
            return type;
        }
    }
    return null;
}
Code example source: dbiir/rainbow
this.fields = this.fileMetaDataList.get(0).getFileMetaData().getSchema().getFields();
this.columnCount = this.fileMetaDataList.get(0).getFileMetaData().getSchema().getFieldCount();
Code example source: com.twitter/parquet-cascading
@Override
public void write(TupleEntry record) {
    recordConsumer.startMessage();
    final List<Type> fields = rootSchema.getFields();
    for (int i = 0; i < fields.size(); i++) {
        Type field = fields.get(i);
        if (record == null || record.getObject(field.getName()) == null) {
            continue;
        }
        recordConsumer.startField(field.getName(), i);
        if (field.isPrimitive()) {
            writePrimitive(record, field.asPrimitiveType());
        } else {
            throw new UnsupportedOperationException("Complex type not implemented");
        }
        recordConsumer.endField(field.getName(), i);
    }
    recordConsumer.endMessage();
}