Installing Kubernetes 1.11.2 on CentOS 7.5 with kubeadm


Kubernetes, the container orchestration platform open-sourced by Google, has been embraced enthusiastically, and standing up a complete Kubernetes platform is the first hurdle anyone trying it out has to clear. Up through Kubernetes 1.5, installation was fairly easy: official yum repositories let you install Kubernetes directly on CentOS 7. From 1.6 onward, installation became much more involved, with certificates and all kinds of authentication, which is very unfriendly to newcomers. If you follow the official documentation to install a local "cluster", it almost certainly will not come up unless you can get around the GFW and know how to keep tweaking parameters.

The official Kubernetes documentation moves fast. The Kubernetes 1.9 document "Using kubeadm to Create a Cluster" already notes that kubeadm's main features are in beta and expected to reach GA during 2018, which means kubeadm is getting ever closer to being usable in production.

With a kubeadm deployment, Kubernetes manages most of its own components, running them as pods. kubelet and docker cannot be self-hosted this way; both run as daemons on each host. So all you need to do is install kubelet and docker on every host, and kubeadm bootstraps the rest of the cluster from there. etcd is hosted as a pod too, which keeps the kubeadm installation relatively simple. These component pods normally run as static pods (not managed through the Kubernetes API), though they can also run as self-hosted pods. In addition, every host must run flannel, which runs as a regular (dynamic, non-static) pod.

An introduction to kubeadm's design is available at:
https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md
kubeadm init initializes the cluster's master node; kubeadm join adds nodes to the cluster.

Most importantly, each server needs at least 2 GB of RAM; otherwise Kubernetes initialization will fail.
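
A quick way to check this up front (the 2 GB figure is this article's recommendation; kubeadm's own preflight threshold may differ slightly):

# free -m    # the "total" column of the Mem row should be at least about 2000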

By default, kubeadm downloads all the required images from Google-hosted registries.
A kubeadm install uses the following default networks:
pod network: 10.244.0.0/16
service network: 10.96.0.0/12
node network (in this setup): 172.20.0.0/16
Docker must be a version that Kubernetes has validated, otherwise kubeadm warns. As of this release, Kubernetes has only validated Docker up to 17.03:

[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03

Architecture planning

A usable Kubernetes cluster needs at least one master and one node.
We will set up one master node and one worker node.
In production you can add more nodes as appropriate.
Prepare two hosts; install docker and kubelet on each, and everything else runs self-hosted.
Installation takes the following three steps:

  1. master and node: install kubelet, kubeadm, and docker
  2. master: kubeadm init
  3. nodes: kubeadm join

Installation preparation

Operating system: CentOS 7.5
Update the system:

# yum update -y && reboot
# cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)
# uname -r
3.10.0-862.11.6.el7.x86_64

Machines: kubernetes-master is the master node, kubernetes-node01 is the worker node.
Add hosts entries on both machines:

# echo '192.168.56.55 kubernetes-master' >> /etc/hosts
# echo '192.168.56.56 kubernetes-node01' >> /etc/hosts
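
It also helps to make each machine's hostname match its hosts entry (run each command on the corresponding machine; hostnamectl is standard on CentOS 7):

# hostnamectl set-hostname kubernetes-master   # on 192.168.56.55
# hostnamectl set-hostname kubernetes-node01   # on 192.168.56.56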

Enable IPVS for Kubernetes
Make sure the kernel's ipvs modules are loaded:

# lsmod | grep ip_vs

If nothing is listed, load them:

# modprobe -- ip_vs
# modprobe -- ip_vs_rr
# modprobe -- ip_vs_wrr
# modprobe -- ip_vs_sh
# modprobe -- nf_conntrack_ipv4
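
Modules loaded with modprobe do not survive a reboot. To make them persistent, a systemd modules-load file works on CentOS 7 (the file name is arbitrary):

# cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF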

Install Docker

Note: perform this step on all nodes.

Configure the install source: Aliyun mirrors

Set up Aliyun's docker-ce repository:

# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Set up Aliyun's Kubernetes repository:

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
#baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
#gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Fix bridge routing and adjust the swappiness parameter

Install bridge-utils, then load the bridge and br_netfilter modules:

# yum install bridge-utils -y
# lsmod | grep bridge
# modprobe bridge
# modprobe br_netfilter
# echo 'vm.swappiness = 0' >> /etc/sysctl.conf
# echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
# echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf
# sysctl --system
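
Note that kubeadm's preflight checks also expect swap to be off; setting vm.swappiness = 0 alone is not enough. A minimal way to disable it now and across reboots (the sed pattern assumes a standard fstab swap line):

# swapoff -a
# sed -i '/ swap / s/^/#/' /etc/fstab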

List the available Docker versions, then install and enable the service:

# yum list docker-ce --showduplicates | sort -r        
# yum install docker-ce -y
# systemctl enable docker
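
If you prefer to stay inside the range Kubernetes has validated (17.03, per the warning above), pin the package instead of installing the latest. The version strings below are examples; pick an actual one from the --showduplicates output, and note that 17.03 also needs the matching docker-ce-selinux package:

# yum install -y docker-ce-17.03.2.ce docker-ce-selinux-17.03.2.ce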

Many of the images Kubernetes needs cannot be pulled from inside China, so before starting docker we give the daemon a proxy: define Environment variables in the [Service] section of the unit file so docker fetches the Kubernetes images through that proxy. Once the images have been pulled, the proxy entries can be commented out again and a domestic registry mirror used for non-Kubernetes images; re-enable the proxy later if it is needed. (The proxy address below is the author's example; substitute one you can actually reach.)

# vim /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="HTTP_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,172.20.0.0/16"
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

# After saving and exiting, run:

# systemctl daemon-reload
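
You can confirm that systemd picked up the proxy variables:

# systemctl show --property=Environment docker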

Make sure the following two parameters are 1 (the default):

# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1

Start docker-ce:

# systemctl start docker

Enable it at boot:

# systemctl enable docker
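
For the non-Kubernetes images mentioned earlier, the domestic accelerator is usually configured as a registry mirror in daemon.json (a sketch; the mirror URL is a placeholder, use the address your provider assigns you):

# cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
# systemctl restart docker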

Install kubeadm, kubectl, kubelet

Every cluster node needs kubeadm, kubelet, and kubectl installed.
Official kubeadm documentation: https://kubernetes.io/docs/setup/independent/install-kubeadm

# List the available versions

# yum list --showduplicates | grep 'kubeadm\|kubectl\|kubelet'

# The latest version is installed by default (1.11.2 at the time of writing); a specific version can also be installed

# yum install kubeadm kubectl kubelet -y
# systemctl enable kubelet
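
To pin the exact version used in this article rather than whatever is newest (yum resolves the full release suffix):

# yum install -y kubeadm-1.11.2 kubelet-1.11.2 kubectl-1.11.2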

When building the cluster, kubeadm reads its configuration from /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The kubelet's cgroup driver must match Docker's, so check which driver Docker uses with docker info | grep Cgroup. Here it reports cgroupfs, so configure the kubelet drop-in accordingly:

# docker info | grep Cgroup
Cgroup Driver: cgroupfs

# vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
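
After editing the drop-in, reload systemd so the change takes effect (kubelet itself will be started, or restarted, by kubeadm during init):

# systemctl daemon-reload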

On 1.10 and later clusters installed with kubeadm, switching kube-proxy to IPVS only requires editing the kube-proxy ConfigMap (this can only be done once the cluster is up, i.e. after the kubeadm init step below):

# kubectl edit configmap kube-proxy --namespace=kube-system
mode: "" 修改为 mode: "ipvs"

Initialize the cluster with kubeadm init

Run the initialization on the master node only.
Review kubeadm's options, then initialize:

# kubeadm init --help
# kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.56.55:6443 --token nk4y2d.akj9heoeqh6a647e --discovery-token-ca-cert-hash sha256:ec53e4f34482c6ba71868f43c9e54e10b67cc447bb86b5899a7ba0444d0164af

If init runs into problems, open another console and follow the kubelet log for details:

[root@kubernetes-master ~]# journalctl -f -u kubelet

[root@kubernetes-master ~]# mkdir -p $HOME/.kube
[root@kubernetes-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

Enable kubectl command auto-completion:

# source <(kubectl completion bash)
# echo "source <(kubectl completion bash)" >> ~/.bashrc

Check component health:

# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}  

Check the node status:

# kubectl get nodes 
NAME                STATUS     ROLES     AGE       VERSION
kubernetes-master   NotReady   master    5m        v1.11.2

Install Flannel

The node status is NotReady because no network plugin has been installed yet, such as flannel. The project lives at https://github.com/coreos/flannel; the following command installs it:

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This pulls the flannel image; use docker image ls to confirm it arrived, e.g. quay.io/coreos/flannel:v0.10.0-amd64:

# docker image ls

List all running pods in the kube-system namespace, i.e. the system-level pods:

# kubectl get pods --namespace=kube-system

Once the image has been pulled, the flannel pod normally starts up and the node switches to Ready:

# kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
kubernetes-master   Ready     master    13m       v1.11.2

List the cluster's namespaces:

# kubectl get namespaces 
NAME          STATUS    AGE
default       Active    16m
kube-public   Active    16m
kube-system   Active    16m

Verify that kube-proxy uses IPVS

Output like the following in a kube-proxy pod's log confirms IPVS mode (the pod name is specific to this cluster; find yours with kubectl get pods --namespace=kube-system):

# kubectl logs --namespace=kube-system kube-proxy-qhp8m 
I0904 07:40:27.547304       1 server_others.go:183] Using ipvs Proxier.
W0904 07:40:27.554131       1 proxier.go:355] IPVS scheduler not specified, use rr by default
I0904 07:40:27.554218       1 server_others.go:210] Tearing down inactive rules.
I0904 07:40:27.571427       1 server.go:448] Version: v1.11.2
I0904 07:40:27.574574       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144
I0904 07:40:27.574617       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0904 07:40:27.574867       1 conntrack.go:83] Setting conntrack hashsize to 65536
I0904 07:40:27.579747       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0904 07:40:27.579789       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0904 07:40:27.579961       1 config.go:102] Starting endpoints config controller
I0904 07:40:27.579975       1 controller_utils.go:1025] Waiting for caches to sync for endpoints config controller
I0904 07:40:27.579992       1 config.go:202] Starting service config controller
I0904 07:40:27.579997       1 controller_utils.go:1025] Waiting for caches to sync for service config controller
I0904 07:40:27.680117       1 controller_utils.go:1032] Caches are synced for service config controller
I0904 07:40:27.680142       1 controller_utils.go:1032] Caches are synced for endpoints config controller

Test DNS (on the master only) and confirm resolution works:

# kubectl run curl --image=radial/busyboxplus:curl -i --tty
If you don't see a command prompt, try pressing enter.
[ root@curl-87b54756-qtz9t:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@curl-87b54756-qtz9t:/ ]$ 
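
On 1.11, kubectl run creates a Deployment for the test pod; when finished, clean it up by the run name used above:

# kubectl delete deployment curl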

Add a node to the Kubernetes cluster (on the worker node only)
Join the node to the master. Note that the token and discovery-token-ca-cert-hash below are generated at init time; they can be recovered later with kubeadm commands, but it is much easier to save the join command printed at the end of kubeadm init.

# kubeadm join 192.168.56.55:6443 --token nk4y2d.akj9heoeqh6a647e --discovery-token-ca-cert-hash sha256:ec53e4f34482c6ba71868f43c9e54e10b67cc447bb86b5899a7ba0444d0164af
# mkdir -p $HOME/.kube
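
If the original join command was not saved, it can be regenerated on the master at any time (tokens expire after 24 hours by default):

# kubeadm token list
# kubeadm token create --print-join-command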

To make kubectl usable on the node as well, run this on the master:

# scp /etc/kubernetes/admin.conf kubernetes-node01:$HOME/.kube/config
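
Back on the master, the new node should appear and turn Ready once its flannel pod is running (ages and versions will match your cluster):

# kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
kubernetes-master   Ready     master    30m       v1.11.2
kubernetes-node01   Ready     <none>    1m        v1.11.2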
