ECK Deployment Process

x33g5p2x · Published 2021-03-14 in ElasticSearch

Prepare NFS storage

Deploy an NFS server on the master01 node to provide the storage backing the PVCs.

docker run -d \
    --name nfs-server \
    --privileged \
    --restart always \
    -p 2049:2049 \
    -v /nfs-share:/nfs-share \
    -e SHARED_DIRECTORY=/nfs-share \
    itsthenetwork/nfs-server-alpine:latest
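
Before wiring it into Kubernetes, it is worth a quick smoke test of the export. A minimal sketch, assuming master01's IP is 192.168.93.11 (the address used for the provisioner below) and that this image serves NFSv4 with / as the export root:

# mount the export, write a test file, then clean up
mkdir -p /mnt/nfs-test
mount -t nfs4 192.168.93.11:/ /mnt/nfs-test
touch /mnt/nfs-test/ok && ls /mnt/nfs-test
umount /mnt/nfs-test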

Deploy nfs-client-provisioner to dynamically provision NFS storage.

helm repo add apphub https://apphub.aliyuncs.com

helm install nfs-client-provisioner \
  --set nfs.server=192.168.93.11 \
  --set nfs.path=/ \
  apphub/nfs-client-provisioner
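
This chart registers a StorageClass named nfs-client (the name referenced later in the volumeClaimTemplates); confirm it exists before moving on:

kubectl get storageclass nfs-client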

Install the NFS client and enable the rpcbind service on all nodes.

yum install -y nfs-utils
systemctl enable --now rpcbind

Install the ECK operator

kubectl apply -f https://download.elastic.co/downloads/eck/1.1.0/all-in-one.yaml
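
The manifest creates the elastic-system namespace and runs the operator as a one-replica StatefulSet. Its logs are the first place to look if later resources stall:

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator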

Check the created pod:

[root@master01 ~]# kubectl -n elastic-system get pods
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   1          17m

Check the created CRDs. Three CRDs were created: apmservers, elasticsearches, and kibanas.

[root@master01 ~]# kubectl get crd | grep elastic
apmservers.apm.k8s.elastic.co                                2020-04-27T16:23:08Z
elasticsearches.elasticsearch.k8s.elastic.co                 2020-04-27T16:23:08Z
kibanas.kibana.k8s.elastic.co                                2020-04-27T16:23:08Z

Deploy Elasticsearch and Kibana

Download the example YAML from the GitHub source repository:

curl -L -o cloud-on-k8s-1.1.0.tar.gz https://github.com/elastic/cloud-on-k8s/archive/1.1.0.tar.gz
tar -zxf cloud-on-k8s-1.1.0.tar.gz
cd cloud-on-k8s-1.1.0/config/recipes/beats/

Create the namespace:

kubectl apply -f 0_ns.yaml
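
0_ns.yaml is just the Namespace definition; everything in this recipe is deployed into beats. For reference, it is essentially:

apiVersion: v1
kind: Namespace
metadata:
  name: beats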

Deploy Elasticsearch and Kibana, setting storageClassName to nfs-client and changing the service type to NodePort.

$ cat 1_monitor.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: monitor
  namespace: beats
spec:
  version: 7.6.2
  nodeSets:
  - name: mdi
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: nfs-client
  http:
    service:
      spec:
        type: NodePort
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: monitor
  namespace: beats
spec:
  version: 7.6.2
  count: 1
  elasticsearchRef:
    name: "monitor"
  http:
    service:
      spec:
        type: NodePort
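
A note on node.store.allow_mmap: false above: it avoids Elasticsearch's mmap requirement, so the Kubernetes nodes do not need a raised vm.max_map_count. If you prefer to keep mmap (the Elasticsearch default), set the kernel parameter on every node instead:

# run on each Kubernetes node (per the Elasticsearch virtual-memory docs)
sysctl -w vm.max_map_count=262144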

Apply the YAML to deploy Elasticsearch and Kibana:

kubectl apply -f 1_monitor.yaml

If the images cannot be pulled, you can manually substitute the Docker Hub images:

docker pull elastic/elasticsearch:7.6.2
docker pull elastic/kibana:7.6.2
docker tag elastic/elasticsearch:7.6.2 docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker tag elastic/kibana:7.6.2 docker.elastic.co/kibana/kibana:7.6.2

Check the created Elasticsearch and Kibana resources, including health, version, and node count (they live in the beats namespace):

[root@master01 ~]# kubectl -n beats get elasticsearch
NAME      HEALTH   NODES   VERSION   PHASE   AGE
monitor   green    3       7.6.2     Ready   77m

[root@master01 ~]# kubectl -n beats get kibana
NAME      HEALTH   NODES   VERSION   AGE
monitor   green    1       7.6.2     137m

Check the created pods:

[root@master01 ~]# kubectl -n beats get pods
NAME                          READY   STATUS    RESTARTS   AGE
monitor-es-mdi-0              1/1     Running   0          109s
monitor-es-mdi-1              1/1     Running   0          9m
monitor-es-mdi-2              1/1     Running   0          3m26s
monitor-kb-54cbdf6b8c-jklqm   1/1     Running   0          9m

Check the created PVs and PVCs:

[root@master01 ~]# kubectl -n beats get pvc
NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
elasticsearch-data-monitor-es-mdi-0   Bound    pvc-882be3e2-b752-474b-abea-7827b492d83d   50Gi       RWO            nfs-client     3m33s
elasticsearch-data-monitor-es-mdi-1   Bound    pvc-8e6ed97e-7524-47f5-b02c-1ff0d2af33af   50Gi       RWO            nfs-client     3m33s
elasticsearch-data-monitor-es-mdi-2   Bound    pvc-31b5f80d-8fbd-4762-ab69-650eb6619a2e   50Gi       RWO            nfs-client     3m33s

[root@master01 ~]# kubectl -n beats get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
pvc-31b5f80d-8fbd-4762-ab69-650eb6619a2e   50Gi       RWO            Delete           Bound    beats/elasticsearch-data-monitor-es-mdi-2   nfs-client              3m35s
pvc-882be3e2-b752-474b-abea-7827b492d83d   50Gi       RWO            Delete           Bound    beats/elasticsearch-data-monitor-es-mdi-0   nfs-client              3m35s
pvc-8e6ed97e-7524-47f5-b02c-1ff0d2af33af   50Gi       RWO            Delete           Bound    beats/elasticsearch-data-monitor-es-mdi-1   nfs-client              3m35s

The actual data lives under the /nfs-share directory on the master01 node:

[root@master01 ~]# tree /nfs-share/ -L 2
/nfs-share/
├── beats-elasticsearch-data-monitor-es-mdi-0-pvc-882be3e2-b752-474b-abea-7827b492d83d
│   └── nodes
├── beats-elasticsearch-data-monitor-es-mdi-1-pvc-8e6ed97e-7524-47f5-b02c-1ff0d2af33af
│   └── nodes
└── beats-elasticsearch-data-monitor-es-mdi-2-pvc-31b5f80d-8fbd-4762-ab69-650eb6619a2e
    └── nodes

Check the created services. The Elasticsearch and Kibana service types were changed to NodePort at deployment time, so they can be reached from outside the cluster.

[root@master01 ~]# kubectl -n beats get svc
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
monitor-es-http   NodePort    10.96.82.186    <none>        9200:31575/TCP   9m36s
monitor-es-mdi    ClusterIP   None            <none>        <none>           9m34s
monitor-kb-http   NodePort    10.97.213.119   <none>        5601:30878/TCP   9m35s

Elasticsearch enables authentication by default; retrieve the password for the elastic user:

PASSWORD=$(kubectl -n beats get secret monitor-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)                          

echo $PASSWORD

Access Elasticsearch from a browser:

https://192.168.93.11:31575/
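
The same endpoint can be checked with curl from outside the cluster, reusing the PASSWORD variable retrieved above (-k is needed because ECK issues a self-signed certificate by default):

curl -u "elastic:$PASSWORD" -k https://192.168.93.11:31575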

Or access the Elasticsearch endpoint from inside the Kubernetes cluster:

[root@master01 ~]# kubectl run -it --rm centos --image=centos -- sh
sh-4.4#
sh-4.4# PASSWORD=gf4mgr5fsbstwx76b8zl8m2g
sh-4.4# curl -u "elastic:$PASSWORD" -k "https://monitor-es-http:9200"
{
  "name" : "quickstart-es-default-2",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "mrDgyhp7QWa7iVuY8Hx6gA",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Access Kibana in a browser; the username and password are the same as for Elasticsearch. Choose Explore on my own; note that no index has been created yet.

https://192.168.93.11:30878/

Deploy Filebeat

Use the image from Docker Hub, with the version changed to 7.6.2:

sed -i 's#docker.elastic.co/beats/filebeat:7.6.0#elastic/filebeat:7.6.2#g' 2_filebeat-kubernetes.yaml
kubectl apply -f 2_filebeat-kubernetes.yaml

Check the created pods:

[root@master01 beats]# kubectl -n beats get pods -l k8s-app=filebeat
NAME             READY   STATUS    RESTARTS   AGE
filebeat-dctrz   1/1     Running   0          9m32s
filebeat-rgldp   1/1     Running   0          9m32s
filebeat-srqf4   1/1     Running   0          9m32s
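
To verify that Filebeat is actually shipping data before heading to Kibana, query Elasticsearch for its indices. A quick check against the NodePort, reusing PASSWORD from above:

curl -u "elastic:$PASSWORD" -k "https://192.168.93.11:31575/_cat/indices/filebeat-*?v"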

If pulling from docker.elastic.co fails, pull the images from Docker Hub and retag them:

docker pull elastic/filebeat:7.6.2
docker tag elastic/filebeat:7.6.2 docker.elastic.co/beats/filebeat:7.6.2

docker pull elastic/metricbeat:7.6.2
docker tag elastic/metricbeat:7.6.2 docker.elastic.co/beats/metricbeat:7.6.2

Access Kibana again; the filebeat index can now be discovered. Enter the index pattern, select @timestamp, and create the index pattern.

You can then view the collected logs.

Deploy Metricbeat

sed -i 's#docker.elastic.co/beats/metricbeat:7.6.0#elastic/metricbeat:7.6.2#g' 3_metricbeat-kubernetes.yaml
kubectl apply -f 3_metricbeat-kubernetes.yaml

Check the created pods:

[root@master01 beats]# kubectl -n beats get pods -l  k8s-app=metricbeat
NAME                          READY   STATUS    RESTARTS   AGE
metricbeat-6956d987bb-c96nq   1/1     Running   0          76s
metricbeat-6h42f              1/1     Running   0          76s
metricbeat-dzkxq              1/1     Running   0          76s
metricbeat-lffds              1/1     Running   0          76s
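
As with Filebeat, you can confirm that Metricbeat indices are being created:

curl -u "elastic:$PASSWORD" -k "https://192.168.93.11:31575/_cat/indices/metricbeat-*?v"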

Back in Kibana, you can now select the metricbeat index.
