Open Distro for Elasticsearch on an on-prem Kubernetes cluster

2lpgd968 · posted 2021-06-10 · in ElasticSearch
Follow (0) | Answers (1) | Views (673)

I am trying to set up an Open Distro for Elasticsearch cluster on an on-prem Kubernetes cluster, using https://github.com/opendistro-for-elasticsearch/opendistro-build/tree/master/helm with mostly default values.
The only changes in values.yaml are the nodeSelector and storageClass sections:


# This applies to the master, data, client, and kibana nodes

nodeSelector:
  kubernetes.io/hostname: node-1

# This applies to the master and data nodes.

storageClass: elasticsearch-test # master node
storageClass: elasticsearch-data-test # data node
size: 30Gi
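For context, a minimal sketch of how the chart would be deployed with those overrides (the release name and the chart path inside the repo are assumptions based on the linked repository layout, not taken from the question):

```shell
# Sketch: deploy the Open Distro Helm chart with a customized values.yaml.
# The release name "opendistro-es" and the chart path are assumptions.
git clone https://github.com/opendistro-for-elasticsearch/opendistro-build
cd opendistro-build/helm/opendistro-es

# Apply the modified values (nodeSelector, storageClass, size) on install:
helm install opendistro-es . -f values.yaml
```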

If I try to uncomment any of the entries under config, all the pods go into a crash loop:

config:
    ## Example Config
    #opendistro_security.allow_unsafe_democertificates: true
    #opendistro_security.allow_default_init_securityindex: true
    #opendistro_security.audit.type: internal_elasticsearch
    #opendistro_security.enable_snapshot_restore_privilege: true
    #opendistro_security.check_snapshot_restore_write_privileges: true
    #cluster.routing.allocation.disk.threshold_enabled: false
    #opendistro_security.audit.config.disabled_rest_categories: NONE
    #opendistro_security.audit.config.disabled_transport_categories: NONE
    #cluster:
    #  name: elasticsearch
    #node:
    # master: ${NODE_MASTER}
    # data: ${NODE_DATA}
    # name: ${NODE_NAME}
    # ingest: ${NODE_INGEST}
    # max_local_storage_nodes: 1
    # attr.box_type: hot
    # processors: ${PROCESSORS:1}
    # network.host: ${NETWORK_HOST}
    # thread_pool.bulk.queue_size: 800

    #path:
      # data: /usr/share/elasticsearch/data
      # logs: /usr/share/elasticsearch/logs

    #http:
    # enabled: true
    # compression: true

    #discovery:
    # zen:
    #   ping.unicast.hosts: []
    #   minimum_master_nodes: 1

    # TLS Configuration Transport Layer
    #opendistro_security.ssl.transport.pemcert_filepath: elk-transport-crt.pem
    #opendistro_security.ssl.transport.pemkey_filepath: elk-transport-key.pem
    #opendistro_security.ssl.transport.pemtrustedcas_filepath: elk-transport-root-ca.pem
    #opendistro_security.ssl.transport.enforce_hostname_verification: false

    # # TLS Configuration REST Layer
    #opendistro_security.ssl.http.enabled: true
    #opendistro_security.ssl.http.pemcert_filepath: elk-rest-crt.pem
    #opendistro_security.ssl.http.pemkey_filepath: elk-rest-key.pem
    #opendistro_security.ssl.http.pemtrustedcas_filepath: elk-rest-root-ca.pem

Pods:

NAME                                    READY   STATUS    RESTARTS   AGE     IP           NODE
opendistro-es-client-84c788dd7d-z8sf5   1/1     Running   0          7m56s   10.0.0.166   node-1
opendistro-es-data-0                    1/1     Running   0          7m56s   10.0.0.171   node-1
opendistro-es-kibana-576f4577f9-fwthz   1/1     Running   0          7m56s   10.0.0.167   node-1
opendistro-es-master-0                  1/1     Running   0          7m56s   10.0.0.172   node-1

Services:

NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
kubernetes                     ClusterIP   10.96.0.1        <none>        443/TCP
opendistro-es-client-service   ClusterIP   10.109.25.235    <none>        9200/TCP,9300/TCP,9600/TCP,9650/TCP
opendistro-es-data-svc         ClusterIP   None             <none>        9300/TCP,9200/TCP,9600/TCP,9650/TCP
opendistro-es-discovery        ClusterIP   None             <none>        9300/TCP
opendistro-es-kibana-svc       ClusterIP   10.106.173.211   <none>        443/TCP

Error from the opendistro-es-data-0 pod:

[2020-11-11T00:19:05,035][WARN ][o.e.c.c.ClusterFormationFailureHelper] [opendistro-es-data-0] master not discovered yet: have discovered [{opendistro-es-data-0}{e_r9yR9_RDidZINEgKxYUA}{ubDNKWVLSJ2q1SIu59NVMQ}{10.0.0.171}{10.0.0.171:9300}{dr}]; discovery will continue using [] from hosts providers and [] from last-known cluster state; node term 8, last-accepted version 17 in term 8
[2020-11-11T00:19:07,073][WARN ][o.e.d.SeedHostsResolver  ] [opendistro-es-data-0] timed out after [5s] resolving host [opendistro-es-discovery]
[2020-11-11T00:19:07,082][WARN ][o.e.d.SeedHostsResolver  ] [opendistro-es-data-0] failed to resolve host [opendistro-es-discovery]
java.net.UnknownHostException: opendistro-es-discovery
        at java.net.InetAddress$CachedAddresses.get(InetAddress.java:800) ~[?:?]
        at java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:886) ~[?:?]
        at java.net.InetAddress.getAllByName0(InetAddress.java:1495) ~[?:?]
        at java.net.InetAddress.getAllByName(InetAddress.java:1354) ~[?:?]
        at java.net.InetAddress.getAllByName(InetAddress.java:1288) ~[?:?]
        at org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:548) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:490) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:855) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.discovery.SeedHostsResolver.lambda$resolveHostsLists$0(SeedHostsResolver.java:144) ~[elasticsearch-7.9.1.jar:7.9.1]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:651) ~[elasticsearch-7.9.1.jar:7.9.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
        at java.lang.Thread.run(Thread.java:832) [?:?]

The client pod throws the same "failed to resolve host [opendistro-es-discovery]" error.
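Since the exception is a plain java.net.UnknownHostException, in-cluster DNS itself is worth checking. A sketch of how to reproduce the lookup from a throwaway pod (the busybox image choice is an assumption; any image with nslookup works, and busybox:1.28 is commonly used because nslookup is broken in some later tags):

```shell
# Sketch: verify that the headless discovery service resolves inside the cluster.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup opendistro-es-discovery

# If DNS is healthy, this prints the master pod IP(s) behind the headless
# service. If it fails the same way as the Elasticsearch log above, the
# problem is cluster DNS / CNI, not the chart configuration.
```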
elasticsearch.yml in opendistro-es-master-0:

cluster.name: "docker-cluster"
network.host: 0.0.0.0

# # minimum_master_nodes need to be explicitly set when bound on a public IP

# # set to 1 to allow single node clusters

# # Details: https://github.com/elastic/elasticsearch/pull/17288

# discovery.zen.minimum_master_nodes: 1

# # Breaking change in 7.0

# # https://www.elastic.co/guide/en/elasticsearch/reference/7.0/breaking-changes-7.0.html#breaking_70_discovery_changes

# cluster.initial_master_nodes:

# - elasticsearch1

# - docker-test-node-1

######## Start OpenDistro for Elasticsearch Security Demo Configuration ########

# WARNING: revise all the lines below before you go into production

opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de

opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
opendistro_security.system_indices.enabled: true
opendistro_security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3

######## End OpenDistro for Elasticsearch Security Demo Configuration ########

When I run curl -XGET https://localhost:9200 -u admin:admin --insecure inside the master pod, I get the response "Open Distro Security not initialized".
When I run /plugins/opendistro_security/tools/securityadmin.sh -cd ../securityconfig/ -icl -nhnv -cacert ../../../config/root-ca.pem -cert ../../config/kirk.pem -key ../../config/kirk-key.pem inside the master pod, it just times out.
It looks to me like the nodes cannot see each other, but I don't know how to fix that. As mentioned above, enabling the config section in values.yaml breaks all the pods. If you happen to know how to solve this, that would be great. Let me know if any other information is needed. Thanks in advance!

ktca8awb1#

The issue was not related to Open Distro or Kubernetes itself, but to the way Calico was installed and configured, which interfered with pod DNS.
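A sketch of checks that can confirm this kind of CNI/DNS problem (the label selectors assume a standard kubeadm-style install with CoreDNS and Calico in kube-system; adjust for your cluster):

```shell
# Sketch: confirm that cluster DNS and the CNI are healthy.
kubectl -n kube-system get pods -l k8s-app=kube-dns       # CoreDNS pods Running?
kubectl -n kube-system get pods -l k8s-app=calico-node    # Calico pods Running?

# Inspect CoreDNS logs for resolution or upstream errors:
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50
```

If calico-node pods are crash-looping or CoreDNS cannot reach pods across nodes, service names like opendistro-es-discovery will fail to resolve exactly as in the logs above.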
