Using Ceph as mounted storage in Kubernetes


I. Operations on the Ceph cluster:

**1. Create a pool (persistent volumes are mounted mainly through a StorageClass here; the other mount methods are unreliable and too cumbersome):**
ceph osd pool create k8s 64
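(Optional) A quick sanity check, assuming the ceph CLI and admin keyring are available on this node; 64 is the pg_num chosen above and should be sized to your OSD count:

ceph osd pool ls
ceph osd pool get k8s pg_num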

II. Operations on Kubernetes:

1. Install the client (the client version should match the Ceph cluster version as closely as possible; otherwise creating an image will keep failing)

Add a Ceph repository to yum (using the NetEase mirror as an example).

tee /etc/yum.repos.d/ceph.repo <<-'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
EOF
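(Optional) Confirm that yum can see the new repository before installing:

yum clean all && yum repolist enabled | grep -i ceph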

Install dependencies

yum install -y yum-utils && \
yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && \
yum install --nogpgcheck -y epel-release && \
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && \
rm -f /etc/yum.repos.d/dl.fedoraproject.org*

Install Ceph

**1. Install:**
yum -y install ceph-common
ceph --version
**2. Copy the Ceph configuration file ceph.conf to the /etc/ceph directory on all nodes:**
scp ceph.conf root@192.168.73.64:/etc/ceph
scp ceph.conf root@192.168.73.65:/etc/ceph
scp ceph.conf root@192.168.73.66:/etc/ceph
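If /etc/ceph does not exist on a node yet, create it first; a small sketch using the same example IPs as above:

for node in 192.168.73.64 192.168.73.65 192.168.73.66; do ssh root@$node "mkdir -p /etc/ceph"; done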

**3. Copy the cluster's ceph.client.admin.keyring file to the /etc/ceph directory on the k8s control node:**
scp ceph.client.admin.keyring root@192.168.73.66:/etc/ceph

**4. Generate the base64-encoded key:**
grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
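Equivalently, if the ceph CLI can reach the cluster from this node, the key can be pulled directly instead of parsing the keyring:

ceph auth get-key client.admin | base64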

**5. Create the Ceph secret:**
cat ceph-secret.yaml
**********************
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  # paste the base64 output from step 4 here
  key: QVFDTTlXOWFOMk9IR3hBQXZyUjFjdGJDSFpoZUtmckY0N2tZOUE9PQ==
kubectl create -f ceph-secret.yaml
kubectl get secret
**6. Create the StorageClass:**
cat ceph-class.yaml 
**********************
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-web
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.78.101:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: k8s
  userId: admin
  userSecretName: ceph-secret
kubectl create -f ceph-class.yaml
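Check that the StorageClass was registered:

kubectl get storageclass ceph-web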

**7. Create a PersistentVolumeClaim:**
*****************************
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana
  namespace: kube-system
spec:
  accessModes:
     - ReadWriteOnce
  storageClassName: ceph-web
  resources:
    requests:
      storage: 100G
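
Note: the in-tree kubernetes.io/rbd provisioner looks up the secret named by userSecretName in the PVC's namespace, so a PVC in kube-system usually needs a copy of the secret there as well. A sketch that reuses the sample key from step 5 and then checks that the claim binds (the ceph-pvc.yaml filename is only illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  key: QVFDTTlXOWFOMk9IR3hBQXZyUjFjdGJDSFpoZUtmckY0N2tZOUE9PQ==
EOF
kubectl create -f ceph-pvc.yaml
kubectl get pvc grafana -n kube-system
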
**8. Create a Pod:**
cat ceph-pod.yaml
*******************
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
  - name: nginx
    image: nginx
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-rbd-vol1
      mountPath: /mnt/ceph-rbd-pvc/busybox
      readOnly: false
  volumes:
  - name: ceph-rbd-vol1
    persistentVolumeClaim:
      claimName: grafana
kubectl create -f ceph-pod.yaml
kubectl get pod
kubectl describe pod ceph-pod1
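If the pod reaches Running, one way to confirm the RBD image is actually mounted at the path defined in the pod spec:

kubectl exec ceph-pod1 -- df -h /mnt/ceph-rbd-pvc/busybox
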
Only a StatefulSet can use volumeClaimTemplates (a full StatefulSet sketch follows the snippet below):
volumeClaimTemplates:
- metadata:
    name: rabbitmq-run
    annotations:
      volume.beta.kubernetes.io/storage-class: "ceph-web"
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 50Gi
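
For context, a minimal StatefulSet sketch showing where volumeClaimTemplates sits; the rabbitmq name, image, headless Service, and sizes are illustrative only:

cat <<'EOF' | kubectl create -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq            # a matching headless Service is assumed to exist
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3
        volumeMounts:
        - name: rabbitmq-run
          mountPath: /var/lib/rabbitmq
  volumeClaimTemplates:
  - metadata:
      name: rabbitmq-run
      annotations:
        volume.beta.kubernetes.io/storage-class: "ceph-web"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 50Gi
EOF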
 
