This article collects code examples of the Java method org.apache.flink.optimizer.plantranslate.JobGraphGenerator.addUserArtifactEntries() and shows how JobGraphGenerator.addUserArtifactEntries() is used in practice. The examples were extracted from selected open-source projects found on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of JobGraphGenerator.addUserArtifactEntries():
Package: org.apache.flink.optimizer.plantranslate
Class: JobGraphGenerator
Method: addUserArtifactEntries
Description: none available
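Although no Javadoc is available, the examples below show the method registering (name, DistributedCache.DistributedCacheEntry) pairs on a JobGraph, with directory entries flagged for zip compression (see the testArtifactCompression example). The following plain-Java sketch models that behavior under those assumptions; CacheEntry and the predicate-based directory check are simplified stand-ins, not Flink's actual types or implementation:

```java
import java.util.*;
import java.util.function.Predicate;

public class ArtifactEntriesSketch {

    // Simplified stand-in for DistributedCache.DistributedCacheEntry.
    static final class CacheEntry {
        final String filePath;
        final boolean isExecutable;
        final boolean isZipped; // true when the entry points at a directory

        CacheEntry(String filePath, boolean isExecutable, boolean isZipped) {
            this.filePath = filePath;
            this.isExecutable = isExecutable;
            this.isZipped = isZipped;
        }
    }

    // Sketch of the registration logic: each named artifact is copied into the
    // job graph's artifact map, and directory entries are flagged for zipping.
    static Map<String, CacheEntry> addUserArtifactEntries(
            Collection<Map.Entry<String, CacheEntry>> artifacts,
            Predicate<String> isDirectory) {
        Map<String, CacheEntry> registered = new HashMap<>();
        for (Map.Entry<String, CacheEntry> e : artifacts) {
            CacheEntry in = e.getValue();
            registered.put(e.getKey(),
                new CacheEntry(in.filePath, in.isExecutable, isDirectory.test(in.filePath)));
        }
        return registered;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, CacheEntry>> artifacts = List.of(
            Map.entry("execFile", new CacheEntry("/tmp/plainFile1", true, false)),
            Map.entry("execDir", new CacheEntry("/tmp/directory1", true, false)));

        // Pretend any path containing "directory" is a directory.
        Map<String, CacheEntry> out =
            addUserArtifactEntries(artifacts, p -> p.contains("directory"));

        System.out.println(out.get("execFile").isZipped); // false: plain file
        System.out.println(out.get("execDir").isZipped);  // true: directory, gets zipped
    }
}
```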
Code example source: apache/flink
private JobGraph createJobGraph() {
	// make sure that all vertices start immediately
	jobGraph.setScheduleMode(ScheduleMode.EAGER);

	// Generate deterministic hashes for the nodes in order to identify them across
	// submission iff they didn't change.
	Map<Integer, byte[]> hashes = defaultStreamGraphHasher.traverseStreamGraphAndGenerateHashes(streamGraph);

	// Generate legacy version hashes for backwards compatibility
	List<Map<Integer, byte[]>> legacyHashes = new ArrayList<>(legacyStreamGraphHashers.size());
	for (StreamGraphHasher hasher : legacyStreamGraphHashers) {
		legacyHashes.add(hasher.traverseStreamGraphAndGenerateHashes(streamGraph));
	}

	Map<Integer, List<Tuple2<byte[], byte[]>>> chainedOperatorHashes = new HashMap<>();

	setChaining(hashes, legacyHashes, chainedOperatorHashes);

	setPhysicalEdges();

	setSlotSharingAndCoLocation();

	configureCheckpointing();

	JobGraphGenerator.addUserArtifactEntries(streamGraph.getEnvironment().getCachedFiles(), jobGraph);

	// set the ExecutionConfig last when it has been finalized
	try {
		jobGraph.setExecutionConfig(streamGraph.getExecutionConfig());
	}
	catch (IOException e) {
		throw new IllegalConfigurationException("Could not serialize the ExecutionConfig. " +
			"This indicates that non-serializable types (like custom serializers) were registered");
	}

	return jobGraph;
}
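The try/catch at the end of createJobGraph() exists because setExecutionConfig serializes the config with Java serialization, and any registered object that is not Serializable surfaces as an IOException. A self-contained illustration of that failure mode (Config and CustomSerializer here are made-up stand-ins, not Flink types):

```java
import java.io.*;

public class SerializationFailureDemo {

    // Made-up stand-in for a user-registered custom serializer; note it
    // does NOT implement Serializable.
    static class CustomSerializer { }

    // Made-up stand-in for a config object that carries the registration.
    static class Config implements Serializable {
        final CustomSerializer serializer = new CustomSerializer(); // poisons serialization
    }

    public static void main(String[] args) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(new Config());
            System.out.println("serialized ok");
        } catch (IOException e) {
            // NotSerializableException extends IOException, which is exactly
            // the exception type the catch block in createJobGraph() handles.
            System.out.println(e.getClass().getSimpleName()); // NotSerializableException
        }
    }
}
```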
Code example source: apache/flink
@Test
public void testArtifactCompression() throws IOException {
	Path plainFile1 = tmp.newFile("plainFile1").toPath();
	Path plainFile2 = tmp.newFile("plainFile2").toPath();

	Path directory1 = tmp.newFolder("directory1").toPath();
	Files.createFile(directory1.resolve("containedFile1"));
	Path directory2 = tmp.newFolder("directory2").toPath();
	Files.createFile(directory2.resolve("containedFile2"));

	JobGraph jb = new JobGraph();

	final String executableFileName = "executableFile";
	final String nonExecutableFileName = "nonExecutableFile";
	final String executableDirName = "executableDir";
	final String nonExecutableDirName = "nonExecutableDir";

	Collection<Tuple2<String, DistributedCache.DistributedCacheEntry>> originalArtifacts = Arrays.asList(
		Tuple2.of(executableFileName, new DistributedCache.DistributedCacheEntry(plainFile1.toString(), true)),
		Tuple2.of(nonExecutableFileName, new DistributedCache.DistributedCacheEntry(plainFile2.toString(), false)),
		Tuple2.of(executableDirName, new DistributedCache.DistributedCacheEntry(directory1.toString(), true)),
		Tuple2.of(nonExecutableDirName, new DistributedCache.DistributedCacheEntry(directory2.toString(), false))
	);

	JobGraphGenerator.addUserArtifactEntries(originalArtifacts, jb);

	Map<String, DistributedCache.DistributedCacheEntry> submittedArtifacts = jb.getUserArtifacts();

	DistributedCache.DistributedCacheEntry executableFileEntry = submittedArtifacts.get(executableFileName);
	assertState(executableFileEntry, true, false);

	DistributedCache.DistributedCacheEntry nonExecutableFileEntry = submittedArtifacts.get(nonExecutableFileName);
	assertState(nonExecutableFileEntry, false, false);

	DistributedCache.DistributedCacheEntry executableDirEntry = submittedArtifacts.get(executableDirName);
	assertState(executableDirEntry, true, true);

	DistributedCache.DistributedCacheEntry nonExecutableDirEntry = submittedArtifacts.get(nonExecutableDirName);
	assertState(nonExecutableDirEntry, false, true);
}
Code example source: apache/flink
	.map(entry -> Tuple2.of(entry.getKey(), entry.getValue()))
	.collect(Collectors.toList());
addUserArtifactEntries(userArtifacts, graph);
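The fragment above maps a Map's entry set into the Tuple2 collection that addUserArtifactEntries expects. The same stream pattern in self-contained form, with AbstractMap.SimpleImmutableEntry standing in for Flink's Tuple2:

```java
import java.util.*;
import java.util.stream.Collectors;

public class EntryConversionSketch {
    public static void main(String[] args) {
        // LinkedHashMap so the resulting list keeps insertion order.
        Map<String, String> userArtifacts = new LinkedHashMap<>();
        userArtifacts.put("config", "/tmp/config.txt");
        userArtifacts.put("model", "/tmp/model.bin");

        // Same shape as the snippet: map each entry to a pair, collect to a list.
        List<AbstractMap.SimpleImmutableEntry<String, String>> pairs =
            userArtifacts.entrySet().stream()
                .map(e -> new AbstractMap.SimpleImmutableEntry<>(e.getKey(), e.getValue()))
                .collect(Collectors.toList());

        System.out.println(pairs.size());          // 2
        System.out.println(pairs.get(0).getKey()); // config
    }
}
```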
Code example source: org.apache.flink/flink-streaming-java

(Same createJobGraph() method as in the apache/flink example above.)
Code example source: org.apache.flink/flink-streaming-java_2.11

(Same createJobGraph() method as in the apache/flink example above.)
Code example source: org.apache.flink/flink-optimizer

(Same fragment as in the apache/flink example above.)
Code example source: org.apache.flink/flink-optimizer_2.11

(Same fragment as in the apache/flink example above.)