Zookeeper: how to provide a sharding_key to ClickHouse (ClickhouseWriterError)

nlejzf6q · posted 5 months ago in Apache

I am setting up Sentry on Kubernetes using ClickHouse distributed tables. All components of the deployment appear to be working except sentry-snuba-outcomes-consumer, which throws the following exception:
snuba.clickhouse.errors.ClickhouseWriterError: Method write is not supported by storage Distributed with more than one shard and no sharding key provided (version 21.8.13.6 (official build))
I have gone through the configuration and the documentation, but I can't figure out how to provide the sharding key. I am completely new to ClickHouse and Snuba.
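For context, the error comes from ClickHouse itself: when a Distributed table spans more than one shard, INSERTs through it are only accepted if the table's engine definition includes a sharding key (otherwise writes have to go to the local tables directly). The general shape of such a definition is shown below; this is a generic illustration rather than the Sentry schema, and example_dist, example_local and some_column are placeholder names:

-- Distributed(cluster, database, local_table[, sharding_key])
CREATE TABLE example_dist AS example_local
ENGINE = Distributed('sentry-clickhouse', default, example_local, cityHash64(some_column));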
Here is my current ClickHouse configuration:

config.xml: |-
    <?xml version="1.0"?>
    <yandex>
        <path>/var/lib/clickhouse/</path>
        <tmp_path>/var/lib/clickhouse/tmp/</tmp_path>
        <user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
        <format_schema_path>/var/lib/clickhouse/format_schemas/</format_schema_path>

        <include_from>/etc/clickhouse-server/metrica.d/metrica.xml</include_from>

        <users_config>users.xml</users_config>

        <display_name>sentry-clickhouse</display_name>
        <listen_host>0.0.0.0</listen_host>
        <http_port>8123</http_port>
        <tcp_port>9000</tcp_port>
        <interserver_http_port>9009</interserver_http_port>
        <max_connections>4096</max_connections>
        <keep_alive_timeout>3</keep_alive_timeout>
        <max_concurrent_queries>100</max_concurrent_queries>
        <uncompressed_cache_size>8589934592</uncompressed_cache_size>
        <mark_cache_size>5368709120</mark_cache_size>
        <timezone>UTC</timezone>
        <umask>022</umask>
        <mlock_executable>false</mlock_executable>
        <remote_servers>
            <sentry-clickhouse>
                <shard>
                    <replica>
                        <internal_replication>true</internal_replication>
                        <host>sentry-clickhouse-0.sentry-clickhouse-headless.NAMESPACE.svc.cluster.local</host>
                        <port>9000</port>
                        <user>...</user>
                        <compression>true</compression>
                    </replica>
                </shard>
                <shard>
                    <replica>
                        <internal_replication>true</internal_replication>
                        <host>sentry-clickhouse-1.sentry-clickhouse-headless.NAMESPACE.svc.cluster.local</host>
                        <port>9000</port>
                        <user>...</user>
                        <compression>true</compression>
                    </replica>
                </shard>
                <shard>
                    <replica>
                        <internal_replication>true</internal_replication>
                        <host>sentry-clickhouse-2.sentry-clickhouse-headless.NAMESPACE.svc.cluster.local</host>
                        <port>9000</port>
                        <user>...</user>
                        <compression>true</compression>
                    </replica>
                </shard>
            </sentry-clickhouse>
        </remote_servers>
        <zookeeper incl="zookeeper-servers" optional="true" />
        <macros incl="macros" optional="true" />
        <builtin_dictionaries_reload_interval>3600</builtin_dictionaries_reload_interval>
        <max_session_timeout>3600</max_session_timeout>
        <default_session_timeout>60</default_session_timeout>
        <disable_internal_dns_cache>1</disable_internal_dns_cache>

        <query_log>
            <database>system</database>
            <table>query_log</table>
            <partition_by>toYYYYMM(event_date)</partition_by>
            <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        </query_log>

        <query_thread_log>
            <database>system</database>
            <table>query_thread_log</table>
            <partition_by>toYYYYMM(event_date)</partition_by>
            <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        </query_thread_log>

        <distributed_ddl>
            <path>/clickhouse/task_queue/ddl</path>
        </distributed_ddl>
        <logger>
            <level>trace</level>
            <log>/var/log/clickhouse-server/clickhouse-server.log</log>
            <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
            <size>1000M</size>
            <count>10</count>
        </logger>
    </yandex>
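
For reference, the remote_servers block above defines the sentry-clickhouse cluster with three single-replica shards, which is exactly the situation the error message describes: more than one shard, so a Distributed table cannot accept writes without a sharding key. A quick way to confirm the cluster layout from inside a ClickHouse pod (a diagnostic check added here for clarity, not part of the original post):

clickhouse-client --host 127.0.0.1 --query "SELECT cluster, shard_num, replica_num, host_name FROM system.clusters WHERE cluster = 'sentry-clickhouse'"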

Here are the Snuba settings:

import os

from snuba.settings import *

env = os.environ.get

DEBUG = env("DEBUG", "0").lower() in ("1", "true")

# Clickhouse Options
SENTRY_DISTRIBUTED_CLICKHOUSE_TABLES = True
CLUSTERS = [
  {
    "host": env("CLICKHOUSE_HOST", "sentry-clickhouse"),
    "port": int(9000),
    "user":  env("CLICKHOUSE_USER", "..."),
    "password": env("CLICKHOUSE_PASSWORD", "..."),
    "database": env("CLICKHOUSE_DATABASE", "..."),
    "http_port": 8123,
    "storage_sets": {
        "cdc",
        "discover",
        "events",
        "events_ro",
        "metrics",
        "migrations",
        "outcomes",
        "querylog",
        "sessions",
        "transactions",
        "profiles",
        "functions",
        "replays",
        "generic_metrics_sets",
        "generic_metrics_distributions",
        "search_issues",
        "generic_metrics_counters",
        "spans",
        "group_attributes",
    },
    "single_node": False,
    "cluster_name": "sentry-clickhouse",
    "distributed_cluster_name": "sentry-clickhouse",
    "sharding_key": "cdc",
  },
]


Can anyone help me understand how to specify the sharding_key?
I added "sharding_key": "cdc" and SENTRY_DISTRIBUTED_CLICKHOUSE_TABLES = True as shown above just to see whether it would fix the problem, but the error persists. Neither the ClickHouse nor the Snuba documentation seems very clear about how the sharding_key should be specified in the configuration.
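One way to narrow this down is to check which Distributed tables in the database are actually defined without a sharding key, since ClickHouse reads the key from the table's engine definition. A diagnostic sketch (not part of the original question; run it from clickhouse-client on any node):

SELECT database, name, engine_full
FROM system.tables
WHERE engine = 'Distributed';

Any table whose engine_full lists only the cluster, database and local table (no fourth argument) has no sharding key and will reject writes on a multi-shard cluster.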

inb24sb2  1#

I had exactly the same issue. Take a look at this: https://github.com/sentry-kubernetes/charts/issues/1042
You will need to:

  • Shut down the snuba metrics consumer (set its replicas to 0)
  • Go into the ClickHouse replica set and, on each pod, update the metrics_raw_v2 table.

I used this:

clickhouse-client --host 127.0.0.1
CREATE OR REPLACE TABLE metrics_raw_v2_dist AS metrics_raw_v2_local ENGINE = Distributed('sentry-clickhouse', default, metrics_raw_v2_local, timeseries_id);

After that, restart the metrics consumer and you should see all the data flowing in.
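The original question was about the outcomes consumer rather than metrics, but the same pattern should apply to whichever Distributed table the failing consumer writes to. A sketch for the outcomes case, assuming the tables follow the same *_local / *_dist naming (verify the exact names with SHOW TABLES and pick a real column from DESCRIBE before running; org_id here is only an illustrative choice of sharding key):

clickhouse-client --host 127.0.0.1
-- Assumption: outcomes_raw_local / outcomes_raw_dist exist and org_id is a suitable sharding column.
CREATE OR REPLACE TABLE outcomes_raw_dist AS outcomes_raw_local
ENGINE = Distributed('sentry-clickhouse', default, outcomes_raw_local, org_id);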
