Deploying EFK with Docker

x33g5p2x · published 2021-03-14 in ElasticSearch

docker-compose example

https://docs.fluentd.org/v/0.12/articles/docker-logging-efk-compose

Increase the mmap limit:

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
# apply immediately, without waiting for a reboot
sysctl -w vm.max_map_count=262144
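Elasticsearch refuses to start in production mode when this limit is too low, so it is worth verifying before bringing the stack up. A small sketch (the helper name is ours, not part of any tool) that reads the live value back from /proc:

```shell
# check_max_map_count: report whether the current vm.max_map_count
# meets the Elasticsearch minimum of 262144.
check_max_map_count() {
    required=262144
    current=$(cat /proc/sys/vm/max_map_count)
    if [ "$current" -ge "$required" ]; then
        echo "OK: vm.max_map_count=$current"
    else
        echo "TOO LOW: vm.max_map_count=$current (need >= $required)"
    fi
}

check_max_map_count
```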

Create docker-compose.yaml

version: '3'

services:

  elasticsearch:
    image: elasticsearch:7.3.2
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  kibana:
    image: kibana:7.3.2
    container_name: kibana
    environment:
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch

volumes:
  esdata1:
    driver: local

Start the stack and check the containers

root@server:~# docker-compose ps
    Name                   Command               State                Ports
-----------------------------------------------------------------------------------------
elasticsearch   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9200->9200/tcp, 9300/tcp
kibana          /usr/local/bin/dumb-init - ...   Up      0.0.0.0:5601->5601/tcp
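Once both containers report Up, Elasticsearch can be probed on the published port. A small sketch (the `health_status` helper is our own, not part of the stack) that pulls the `status` field out of the `_cluster/health` response:

```shell
# health_status: extract the "status" field from an Elasticsearch
# _cluster/health JSON response read on stdin.
health_status() {
    sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
}

# Against the running stack (port 9200 is published in the compose file):
#   curl -s http://localhost:9200/_cluster/health | health_status
# A single-node cluster normally reports "yellow", because replica
# shards have nowhere to be allocated.
```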

The compose file at the link above runs three containers: elasticsearch, kibana, and fluentd. elasticsearch and kibana are pulled straight from the registry, while fluentd is an image we build ourselves. As the configuration shows, the custom fluentd image is built on top of the official one, with two main changes:

install the fluent-plugin-elasticsearch plugin;

copy the configuration file fluent.conf into the container.

These two steps are what allow fluentd to ship log entries to elasticsearch.

td-agent hosts

td-agent is installed on every host whose logs need to be collected. Note that it is not installed inside a docker container (though this depends on how you design your log collection).
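The two changes to the custom fluentd image can be sketched as a Dockerfile; the base image tag and paths below follow the linked fluentd docs example and may need adjusting for your fluentd version:

```shell
# Write a minimal Dockerfile implementing the two changes:
# install the elasticsearch output plugin and copy in fluent.conf.
cat > Dockerfile <<'EOF'
FROM fluent/fluentd:v0.12-debian
USER root
# change 1: install the elasticsearch output plugin
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-rdoc", "--no-ri"]
# change 2: copy the configuration file into the image
COPY fluent.conf /fluentd/etc/
USER fluent
EOF
```

Build it next to your fluent.conf with `docker build -t my-fluentd .` (image name is illustrative).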

Install td-agent

EFK with Filebeat

Create the docker-compose file

version: "3"
services:
    elasticsearch:
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.2.0"
        environment:
            - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
            - "discovery.type=single-node"
        ports:
            - "9200:9200"
        volumes:
            - elasticsearch_data:/usr/share/elasticsearch/data

    kibana:
        image: "docker.elastic.co/kibana/kibana:7.2.0"
        ports:
            - "5601:5601"

    filebeat:
        image: "docker.elastic.co/beats/filebeat:7.2.0"
        user: root
        volumes:
            - /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
            - /var/lib/docker:/var/lib/docker:ro
            - /var/run/docker.sock:/var/run/docker.sock

volumes:
    elasticsearch_data:

Create the configuration directory and write filebeat.yml

mkdir /etc/filebeat
cd /etc/filebeat

cat > filebeat.yml <<EOF
filebeat.inputs:
- type: container
  paths: 
    - '/var/lib/docker/containers/*/*.log'

processors:
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"

- decode_json_fields:
    fields: ["message"]
    target: "json"
    overwrite_keys: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

logging.json: true
logging.metrics.enabled: false
EOF
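The index setting above expands into one index per day. Assuming agent.version is 7.2.0 (matching the Filebeat image used here), today's index name would look like this:

```shell
# Illustrate the daily index name produced by the pattern
# filebeat-%{[agent.version]}-%{+yyyy.MM.dd}, assuming version 7.2.0
echo "filebeat-7.2.0-$(date +%Y.%m.%d)"
```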

Create the network and start the services

docker network create efk_default

docker-compose up -d
