Allow only one Elasticsearch pod per Kubernetes node

3okqufwl · posted 2021-06-15 · in ElasticSearch
3 answers · 249 views

Our product has a multi-node setup and we need to deploy multiple Elasticsearch pods. Since these are all data nodes and have volume mounts for persistent storage, we do not want two pods to end up on the same node. I am trying to use the Kubernetes anti-affinity feature, but to no avail.
The cluster is deployed through Rancher. We have 5 nodes in the cluster, and three of them (say node-1, node-2 and node-3) carry the label test.service.es-master: "true". So when I deploy the Helm chart and scale it up to 3, the Elasticsearch pods come up on all three of these nodes. But if I scale it to 4, the 4th data pod lands on one of those same nodes, next to an existing pod. Is that the expected behavior? My understanding was that imposing a hard anti-affinity should prevent pods from coming up on the same node. I have gone through several blogs and forums (e.g. this and this), and they suggest changes similar to mine. I am attaching the relevant part of the Helm chart.
The requirement is that Elasticsearch should come up only on the nodes tagged with the specific key-value pair mentioned above, and each of those nodes should host only one pod. Any feedback would be appreciated.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    test.service.es-master: "true"
  name: {{ .Values.service.name }}
  namespace: default
spec:
  clusterIP: None
  ports:
  ...
  selector:
    test.service.es-master: "true"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    test.service.es-master: "true"
  name: {{ .Values.service.name }}
  namespace: default
spec:
  selector:
    matchLabels:
      test.service.es-master: "true"
  serviceName: {{ .Values.service.name }}
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: test.service.es-master
            operator: In
            values:
            - "true"
        topologyKey: kubernetes.io/hostname
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      creationTimestamp: null
      labels:
        test.service.es-master: "true"
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                  - "true"
              topologyKey: kubernetes.io/hostname
      securityContext:
             ...
      volumes:
        ...
      ...
status: {}

Update 1
As suggested in the comments and answers, I added the anti-affinity section to template.spec. Unfortunately the problem still persists. The updated yaml looks like this:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    test.service.es-master: "true"
  name: {{ .Values.service.name }}
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: {{ .Values.service.httpport | quote }}
    port: {{ .Values.service.httpport }}
    targetPort: {{ .Values.service.httpport }}
  - name: {{ .Values.service.tcpport | quote }}
    port: {{ .Values.service.tcpport }}
    targetPort: {{ .Values.service.tcpport }}
  selector:
    test.service.es-master: "true"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    test.service.es-master: "true"
  name: {{ .Values.service.name }}
  namespace: default
spec:
  selector:
    matchLabels:
      test.service.es-master: "true"
  serviceName: {{ .Values.service.name }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      creationTimestamp: null
      labels:
        test.service.es-master: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
            matchExpressions:
            - key: test.service.es-master
              operator: In
              values:
              - "true"
            topologyKey: kubernetes.io/hostname
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                  - "true"
              topologyKey: kubernetes.io/hostname
      securityContext:
             readOnlyRootFilesystem: false
      volumes:
       - name: elasticsearch-data-volume
         hostPath:
            path: /opt/ca/elasticsearch/data
      initContainers:
         - name: elasticsearch-data-volume
           image: busybox
           securityContext:
                  privileged: true
           command: ["sh", "-c", "chown -R 1010:1010 /var/data/elasticsearch/nodes"]
           volumeMounts:
              - name: elasticsearch-data-volume
                mountPath: /var/data/elasticsearch/nodes
      containers:
      - env:
        {{- range $key, $val := .Values.data }}
        - name: {{ $key }} 
          value: {{ $val | quote }}
        {{- end}}
        image: {{ .Values.image.registry }}/analytics/{{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.service.name }}
        ports:
        - containerPort: {{ .Values.service.httpport }}
        - containerPort: {{ .Values.service.tcpport }}
        volumeMounts:
              - name: elasticsearch-data-volume
                mountPath: /var/data/elasticsearch/nodes    
        resources:
          limits:
            memory: {{ .Values.resources.limits.memory }}
          requests:
            memory: {{ .Values.resources.requests.memory }}
        restartPolicy: Always
status: {}

niknxzdl1#

This works for me on Kubernetes 1.11.5:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      test.service.es-master: "true"
  template:
    metadata:
      labels:
        test.service.es-master: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                - "true"
            topologyKey: kubernetes.io/hostname
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                  - "true"
      containers:
      - image: nginx:1.7.10
        name: nginx

I don't know why you chose the same key/value for the pod deployment selector labels as for the node selector label. That is confusing at the very least...
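For what it's worth, here is a small sketch of what I mean (the label names below are made-up placeholders, not taken from your chart): keep one key for the node label and a different one for the pod label, so it is obvious which selector matches what.

    spec:
      affinity:
        # Node label, applied e.g. with: kubectl label nodes <node-name> es-node=true
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: es-node
                operator: In
                values:
                - "true"
        # Pod label (set in template.metadata.labels), e.g. app: es-data
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - es-data
            topologyKey: kubernetes.io/hostname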


8iwquhpp2#

First, both in your initial manifest and in the updated one you have a topologyKey under nodeAffinity. Deploying these manifests with kubectl create or kubectl apply will fail, because there is no API field called topologyKey under nodeAffinity. See the reference docs.
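To illustrate, this is roughly the corrected shape of that section (using the label key from your manifest): the topologyKey line simply has to be dropped, since that field only exists on podAffinity/podAntiAffinity terms.

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                - "true"
              # no topologyKey here -- it is only valid inside
              # podAffinity / podAntiAffinity scheduling terms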
Second, you are using a key called test.service.es-master for your nodeAffinity. Are you sure your nodes actually carry that label? Please confirm with kubectl get nodes --show-labels. Finally, to simplify @Laszlo's answer and @bitswazky's comment, you can use the following snippet.
Here I used a node label key called role to identify the nodes; you can add it to a node of an existing cluster with kubectl label nodes <node-name> role=platform
  selector:
    matchLabels:
      component: nginx
  template:
    metadata:
      labels:
        component: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: role
                operator: In
                values:
                - platform
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: component
                operator: In
                values:
                - nginx
            topologyKey: kubernetes.io/hostname
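A side note on the scale-to-4 scenario from the question (my own observation, not part of the snippet above): with a required anti-affinity and only three matching nodes, the fourth replica should simply stay Pending rather than double up. If you would instead prefer extra replicas to share a node when no free node is left, a soft rule can be used — a sketch, assuming the same component: nginx pod label as above:

        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: component
                  operator: In
                  values:
                  - nginx
              topologyKey: kubernetes.io/hostname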


rseugnpd3#

As Egor suggested, you need:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"

Source: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#always-co-located-in-the-same-node
So with your current labels, it would probably look like this:

spec:
  affinity:
    nodeAffinity:
    # node affinity stuff here
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: "test.service.es-master"
            operator: In
            values:
            - "true"
        topologyKey: "kubernetes.io/hostname"

Make sure you put it in the right place in your yaml, otherwise it won't work.
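To spell out what "the right place" means (this skeleton is only an illustration with placeholder names, not your actual chart): the affinity block must sit under the pod template's spec, i.e. spec.template.spec.affinity of the Deployment, not directly under the top-level spec as in your first manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: es-data                      # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      test.service.es-master: "true"
  template:
    metadata:
      labels:
        test.service.es-master: "true"
    spec:                            # pod template spec
      affinity:                      # <-- anti-affinity belongs here
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                - "true"
            topologyKey: kubernetes.io/hostname
      containers:
      - name: es-data                # placeholder container
        image: elasticsearch:7.10.1  # placeholder image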
