This article collects code examples of the Java method parquet.schema.MessageType.getPaths(), showing how MessageType.getPaths() is used in practice. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of MessageType.getPaths():
Package: parquet.schema
Class: MessageType
Method: getPaths
Description: none provided by the source; judging from the examples below, the method returns a List<String[]> with one entry per primitive (leaf) column, where each array holds the sequence of field names from the message root down to that leaf.
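As a minimal sketch of that behavior (assuming the legacy parquet-column artifact with package parquet.schema is on the classpath; the schema contents and the class name GetPathsDemo are illustrative, not from the source), the following builds a small nested schema and prints the flattened leaf paths returned by getPaths():

```java
import java.util.Arrays;

import parquet.schema.GroupType;
import parquet.schema.MessageType;
import parquet.schema.PrimitiveType;
import parquet.schema.PrimitiveType.PrimitiveTypeName;
import parquet.schema.Type;

public class GetPathsDemo {
    public static void main(String[] args) {
        // Equivalent to:
        // message record {
        //   required int64 id;
        //   optional group address { required binary street; required binary city; }
        // }
        MessageType schema = new MessageType("record",
                new PrimitiveType(Type.Repetition.REQUIRED, PrimitiveTypeName.INT64, "id"),
                new GroupType(Type.Repetition.OPTIONAL, "address",
                        new PrimitiveType(Type.Repetition.REQUIRED, PrimitiveTypeName.BINARY, "street"),
                        new PrimitiveType(Type.Repetition.REQUIRED, PrimitiveTypeName.BINARY, "city")));

        // getPaths() flattens the schema tree: one String[] per primitive
        // leaf, in schema order: [id], [address, street], [address, city]
        for (String[] path : schema.getPaths()) {
            System.out.println(Arrays.asList(path));
        }
    }
}
```

Note that intermediate group nodes such as address never appear on their own; only complete root-to-leaf paths are returned, which is why the Presto examples below can use each String[] directly as a column-descriptor lookup key.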
Code example source: prestodb/presto

public static Predicate buildPredicate(MessageType requestedSchema, TupleDomain<ColumnDescriptor> parquetTupleDomain, Map<List<String>, RichColumnDescriptor> descriptorsByPath)
{
    ImmutableList.Builder<RichColumnDescriptor> columnReferences = ImmutableList.builder();
    for (String[] paths : requestedSchema.getPaths()) {
        RichColumnDescriptor descriptor = descriptorsByPath.get(Arrays.asList(paths));
        if (descriptor != null) {
            columnReferences.add(descriptor);
        }
    }
    return new TupleDomainParquetPredicate(parquetTupleDomain, columnReferences.build());
}
Code example source: prestodb/presto

public static Map<List<String>, RichColumnDescriptor> getDescriptors(MessageType fileSchema, MessageType requestedSchema)
{
    Map<List<String>, RichColumnDescriptor> descriptorsByPath = new HashMap<>();
    List<PrimitiveColumnIO> columns = getColumns(fileSchema, requestedSchema);
    for (String[] paths : fileSchema.getPaths()) {
        List<String> columnPath = Arrays.asList(paths);
        getDescriptor(columns, columnPath)
                .ifPresent(richColumnDescriptor -> descriptorsByPath.put(columnPath, richColumnDescriptor));
    }
    return descriptorsByPath;
}
Code example source: com.facebook.presto.hive/hive-apache

public List<String[]> getPaths() {
    return this.getPaths(0);
}
Code example source: julienledem/redelm

public List<String[]> getPaths() {
    return this.getPaths(0);
}
Code example source: julienledem/redelm

public List<ColumnDescriptor> getColumns() {
    List<String[]> paths = this.getPaths(0);
    List<ColumnDescriptor> columns = new ArrayList<ColumnDescriptor>(paths.size());
    for (String[] path : paths) {
        // TODO: optimize this
        columns.add(new ColumnDescriptor(path, getType(path).asPrimitiveType().getPrimitiveTypeName(), getMaxRepetitionLevel(path), getMaxDefinitionLevel(path)));
    }
    return columns;
}
Code example source: com.facebook.presto.hive/hive-apache

public List<ColumnDescriptor> getColumns() {
    List<String[]> paths = this.getPaths(0);
    List<ColumnDescriptor> columns = new ArrayList<ColumnDescriptor>(paths.size());
    for (String[] path : paths) {
        // TODO: optimize this
        PrimitiveType primitiveType = getType(path).asPrimitiveType();
        columns.add(new ColumnDescriptor(
                path,
                primitiveType.getPrimitiveTypeName(),
                primitiveType.getTypeLength(),
                getMaxRepetitionLevel(path),
                getMaxDefinitionLevel(path)));
    }
    return columns;
}
Code example source: org.apache.tajo/tajo-storage

public void initialize(MessageType requestedSchema, MessageType fileSchema,
        Map<String, String> extraMetadata, Map<String, String> readSupportMetadata,
        Path file, List<BlockMetaData> blocks, Configuration configuration)
        throws IOException {
    this.requestedSchema = requestedSchema;
    this.fileSchema = fileSchema;
    this.file = file;
    this.columnCount = this.requestedSchema.getPaths().size();
    this.recordConverter = readSupport.prepareForRead(
            configuration, extraMetadata, fileSchema,
            new ReadSupport.ReadContext(requestedSchema, readSupportMetadata));
    List<ColumnDescriptor> columns = requestedSchema.getColumns();
    reader = new ParquetFileReader(configuration, file, blocks, columns);
    for (BlockMetaData block : blocks) {
        total += block.getRowCount();
    }
    LOG.info("RecordReader initialized will read a total of " + total + " records.");
}
Code example source: com.facebook.presto.hive/hive-apache

public void initialize(MessageType fileSchema,
        Map<String, String> fileMetadata,
        Path file, List<BlockMetaData> blocks, Configuration configuration)
        throws IOException {
    // initialize a ReadContext for this file
    ReadSupport.ReadContext readContext = readSupport.init(new InitContext(
            configuration, toSetMultiMap(fileMetadata), fileSchema));
    this.requestedSchema = readContext.getRequestedSchema();
    this.fileSchema = fileSchema;
    this.file = file;
    this.columnCount = requestedSchema.getPaths().size();
    this.recordConverter = readSupport.prepareForRead(
            configuration, fileMetadata, fileSchema, readContext);
    this.strictTypeChecking = configuration.getBoolean(STRICT_TYPE_CHECKING, true);
    List<ColumnDescriptor> columns = requestedSchema.getColumns();
    reader = new ParquetFileReader(configuration, file, blocks, columns);
    for (BlockMetaData block : blocks) {
        total += block.getRowCount();
    }
    LOG.info("RecordReader initialized will read a total of " + total + " records.");
}