Flink batch job complains it needs the streaming API

8hhllhi2  posted 2021-06-21  in  Flink

I am trying to create a batch-oriented Flink job with Flink 1.5.0 and want to use the Table and SQL API to process the data. My problem is that when I try to create the BatchTableEnvironment I get a compile error:

BatchJob.java:[46,73] cannot access org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

It is triggered by the line

final BatchTableEnvironment bTableEnv = TableEnvironment.getTableEnvironment(bEnv);

As far as I can tell, I have no dependency on the streaming environment. My code is below.

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.table.sources.CsvTableSource;
import org.apache.flink.table.sources.TableSource;

import java.util.Date;

public class BatchJob {

    public static void main(String[] args) throws Exception {
        final ExecutionEnvironment bEnv = ExecutionEnvironment.getExecutionEnvironment();
        // create a TableEnvironment for batch queries
        final BatchTableEnvironment bTableEnv = TableEnvironment.getTableEnvironment(bEnv);
        // ... do stuff

        // execute program
        bEnv.execute("My Batch Job");
    }
}

My pom dependencies are as follows:

<dependencies>
        <!-- Apache Flink dependencies -->
        <!-- These dependencies are provided, because they should not be packaged into the JAR file. -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- Add connector dependencies here. They must be in the default scope (compile). -->

        <!-- Example:

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka-0.10_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        -->

        <!-- Add logging framework, to produce console output when running in the IDE. -->
        <!-- These dependencies are excluded from the application JAR by default. -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.7</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
            <scope>runtime</scope>
        </dependency>

    </dependencies>

Can someone please help me understand what this streaming API dependency is and why I need it for a batch job? Thanks very much for your help. Oliver

xqk2d5yq1#

Flink's Table API and SQL support is a unified API for batch and stream processing. Many internal classes are shared between batch and streaming execution, the Scala/Java Table API, and SQL, and therefore link against both Flink's batch and streaming dependencies.
Because of these common classes, batch queries also require the flink-streaming dependencies.
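
In practice that means the pom above needs a streaming artifact on the compile classpath. A minimal sketch of such an addition, assuming Flink 1.5.0 and Scala 2.11 artifacts (to match the flink-table_2.11 dependency already in the pom), would be:

        <!-- Needed at compile time because the Table API shares classes with the streaming API. -->
        <!-- Marked provided, like flink-java, since the Flink runtime already ships these classes. -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_2.11</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>

With that dependency present, the BatchTableEnvironment line from the question should compile; the streaming classes are only referenced transitively by the Table API, not used directly by the batch job.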
