Kubernetes is an open-source container orchestration engine for automating the deployment, scaling, and management of containerized applications. This article cuts out the noise and walks you through setting up a k8s cluster and deploying a real application; some prior experience with Docker is recommended.
In k8s virtually everything is driven by YAML configuration files, so this article contains quite a few of them. The parts you need to change are called out with comments in the files; in practice, just replace those marked values with your own.
I. Deploying a k8s Cluster
In this article Kubernetes is usually abbreviated as k8s: the first letter k, the last letter s, and the 8 letters in between.
There are many ways to build a k8s cluster; this article uses k3s, which you can think of as a slimmed-down k8s. All the core components are present, it is a complete k8s cluster, and in practice the difference is minor.
1. Server node
If you prefer, you can also follow the official docs directly: https://docs.k3s.io/zh/quick-start
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
2. Agent node
The K3S_URL parameter makes the installer configure K3s as an Agent instead of a Server; the K3s Agent registers with the K3s Server listening at that URL. The value for K3S_TOKEN is stored on the Server node in /var/lib/rancher/k3s/server/node-token and can be viewed with cat. If you are only testing or learning locally and do not need a multi-node cluster, you can skip this step; a single Server is a fully usable k8s cluster on its own.
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
3. Checking the nodes
$ kubectl get nodes
NAME            STATUS   ROLES                  AGE     VERSION
ubuntu-server   Ready    control-plane,master   5h43m   v1.29.6+k3s2
ubuntu-agent    Ready    <none>                 5h42m   v1.29.6+k3s2
4. Stopping k3s
$ /usr/local/bin/k3s-killall.sh
5. Uninstalling k3s
On the Server node:
$ /usr/local/bin/k3s-uninstall.sh
On Agent nodes:
$ /usr/local/bin/k3s-agent-uninstall.sh
6. Configuring a registry mirror (for users in China)
Create the file /etc/rancher/k3s/registries.yaml with the following content, then restart k3s with systemctl restart k3s.
mirrors:
  docker.io:
    endpoint:
      - "https://dockerpull.com"
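To check that the mirror takes effect, one option (a verification step of my own, not part of the original walkthrough) is to pull an image through k3s's bundled crictl and watch it succeed:

$ sudo k3s crictl pull docker.io/library/nginx:alpine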
II. Common Commands
1. Listing everything
At this point there should be just the one service; any pods or deployments would be listed here as well.
$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   5h45m
2. Viewing Pods
A pod can be thought of as Docker's container: a running instance of your containers.
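The corresponding command; at this point there are no pods in the default namespace yet, so the output would look roughly like this:

$ kubectl get pod
No resources found in default namespace.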
3. Viewing Services
A service can be thought of as Docker's network + port publishing: to expose pods to the outside world you must deploy one or more services, which specify the exposure type and rules.
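The corresponding command (svc is the accepted short form); for now the only entry is the built-in kubernetes service already seen above:

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   5h45m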
4. Viewing Deployments
A deployment can be thought of as the docker-compose file of Docker (or Docker Swarm): it specifies the image, name, ports, replica count, and so on of the application to deploy.
$ kubectl get deployment
5. Deploying and deleting applications
Each of the concepts above corresponds to a YAML file in practice, and this is the command you use to deploy, update, or delete them.
$ kubectl apply -f xxx.yml
$ kubectl delete -f xxx.yml
6. Viewing Namespaces
A namespace is like a big house: the pods/services/deployments described above all live inside one. A freshly created cluster has roughly the following namespaces.
$ kubectl get ns
NAME              STATUS   AGE
default           Active   6h
kube-node-lease   Active   6h
kube-public       Active   6h
kube-system       Active   6h
Any command you run, such as kubectl get all, operates on the default namespace unless you specify another with -n, for example kubectl get all -n kube-system.
Others: kubectl has far more commands, e.g. kubectl create namespace xxx, but this article will not go through them, because in practice you can create or update just about anything from a YAML file with kubectl apply -f xxx.yml; essentially, everything is a YAML file.
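To illustrate that last point, here is what the YAML equivalent of kubectl create namespace looks like (my-namespace is just a placeholder name); save it as e.g. ns.yml and apply it:

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace

$ kubectl apply -f ns.yml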
III. Deploying an Application
1. Preparing the whoami.yml file
whoami is a tiny web program: once deployed, a request to its port 80 prints information from inside the container, such as its IPs, which makes it handy for demonstrations. In a real deployment you can substitute nginx, a Spring Boot service, or any other workload.
apiVersion: v1
kind: Namespace
metadata:
  name: whoami
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoami
  namespace: whoami
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami:latest
          ports:
            - name: whoami-port
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: whoami
spec:
  ports:
    - name: whoami-port
      port: 80
      targetPort: whoami-port
  selector:
    app: whoami
2. Deploying whoami
$ kubectl apply -f whoami.yml
namespace/whoami created
deployment.apps/whoami created
service/whoami created
3. Checking the deployment
A new whoami namespace has appeared:
$ kubectl get ns
NAME              STATUS   AGE
default           Active   6h21m
kube-node-lease   Active   6h21m
kube-public       Active   6h21m
kube-system       Active   6h21m
whoami            Active   6s
Listing everything again: the three pods are Running, and the service has type ClusterIP with address 10.43.71.16, which means the deployment is complete.
$ kubectl get all -n whoami
NAME                          READY   STATUS    RESTARTS   AGE
pod/whoami-6b548567dd-kkqjp   1/1     Running   0          96s
pod/whoami-6b548567dd-pxgsx   1/1     Running   0          96s
pod/whoami-6b548567dd-swghl   1/1     Running   0          96s

NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/whoami   ClusterIP   10.43.71.16   <none>        80/TCP    96s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/whoami   3/3     3            3           96s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/whoami-6b548567dd   3         3         3       96s
4. Testing access
The requests go through, and the internal IPs differ between responses, which shows we are reaching different pods.
$ curl 10.43.71.16:80
Hostname: whoami-6b548567dd-pxgsx
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.19
IP: fe80::9435:d3ff:fe77:9c8d
RemoteAddr: 10.42.0.1:12466
GET / HTTP/1.1
Host: 10.43.71.16
User-Agent: curl/7.81.0
Accept: */*

$ curl 10.43.71.16:80
Hostname: whoami-6b548567dd-kkqjp
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.20
IP: fe80::479:99ff:fee0:ea97
RemoteAddr: 10.42.0.1:1082
GET / HTTP/1.1
Host: 10.43.71.16
User-Agent: curl/7.81.0
Accept: */*
5. Exposing a port
At this point the application is reachable only via the service IP, not via the host IP or 127.0.0.1, and netstat -ntl shows nothing listening on port 80. That is because the service in our YAML did not specify a type, so it defaulted to ClusterIP, a cluster-internal address. The quickest way to make it reachable from outside is a NodePort, which works much like docker run -p 80:80: it exposes the container port on the host.
Modify the Service section of whoami.yml:
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: whoami
spec:
  type: NodePort
  ports:
    - name: whoami-port
      port: 80
      targetPort: whoami-port
      nodePort: 30080
  selector:
    app: whoami
Then apply it again:
$ kubectl apply -f whoami.yml
namespace/whoami unchanged
deployment.apps/whoami unchanged
service/whoami configured
Looking at the service, TYPE has changed to NodePort and port 80 is now mapped to 30080.
$ kubectl get svc -n whoami
NAME     TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
whoami   NodePort   10.43.71.16   <none>        80:30080/TCP   40m
6. Testing access again
The application can now be reached via the host IP or 127.0.0.1.
$ curl 127.0.0.1:30080
Hostname: whoami-6b548567dd-pxgsx
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.19
IP: fe80::9435:d3ff:fe77:9c8d
RemoteAddr: 10.42.0.1:12466
GET / HTTP/1.1
Host: 10.43.71.16
User-Agent: curl/7.81.0
Accept: */*
7. Other notes
At this point k8s can already stand in for Docker, and even Docker Swarm, in simple setups. One important piece is still missing: the equivalent of docker volume, i.e. directory mounts for persisting container data, which is covered in detail later.
IV. Ingress in Detail
1. Background
The previous section exposed the service with a NodePort. That works, but for k8s, which targets large service clusters, having to plan each application's port, stay within the allowed port range, and respect its limits quickly becomes a hassle.
Besides NodePort, k8s can expose services through LoadBalancer and Ingress, and more recently the Gateway API. We will not compare them in detail here; in keeping with this article's focus on the most approachable, convenient route, we use Ingress. The effect is like a unified gateway: every application is reached through the same address, e.g. 127.0.0.1:80, and the Ingress routes each request to the right application based on path prefix, Host header, and so on.
What makes Ingress more powerful is that it is not just convenient routing: it integrates with third parties such as traefik / nginx / istio to provide load balancing, rate limiting, and more. This article uses traefik, which has a low barrier to entry yet remains very capable.
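To make the "unified gateway" idea concrete before installing anything, here is a rough sketch of host-based fan-out with plain Ingress rules (app1/app2, their hosts, and the service names are placeholders; the real, working example for whoami comes in step 5 below):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress         # placeholder name
spec:
  rules:
    - host: "app1.example.com"  # requests carrying this Host header...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-svc  # ...are forwarded to the app1 service
                port:
                  number: 80
    - host: "app2.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-svc
                port:
                  number: 80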
2. Preparing traefik-ingress.yml
Official docs: https://doc.traefik.io/traefik/getting-started/quick-start-with-kubernetes/
Create a traefik-ingress.yml file with the content below. You can treat the details as boilerplate and dig in later if you are curious.
---
apiVersion: v1
kind: Namespace
metadata:
  name: traefik-proxy
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-role
  namespace: traefik-proxy
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - secrets
      - nodes
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.io
    resources:
      - middlewares
      - middlewaretcps
      - ingressroutes
      - traefikservices
      - ingressroutetcps
      - ingressrouteudps
      - tlsoptions
      - tlsstores
      - serverstransports
      - serverstransporttcps
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - gateway.networking.k8s.io
    resources:
      - gatewayclasses
      - gateways
      - httproutes
      - referencegrants
      - tcproutes
      - tlsroutes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - gateway.networking.k8s.io
    resources:
      - gatewayclasses/status
      - gateways/status
      - httproutes/status
      - tcproutes/status
      - tlsroutes/status
    verbs:
      - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-account
  namespace: traefik-proxy
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-role-binding
  namespace: traefik-proxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-role
subjects:
  - kind: ServiceAccount
    name: traefik-account
    namespace: traefik-proxy
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-deployment
  namespace: traefik-proxy
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-account
      containers:
        - name: traefik
          image: dockerpull.com/library/traefik:v3.1
          args:
            - --api.insecure
            - --providers.kubernetesingress
            - --providers.kubernetesgateway
          ports:
            - name: web
              containerPort: 80
            - name: dashboard
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-dashboard-service
  namespace: traefik-proxy
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: dashboard
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-service
  namespace: traefik-proxy
spec:
  type: LoadBalancer
  ports:
    - targetPort: web
      port: 80
  selector:
    app: traefik
3. Installing traefik
$ kubectl apply -f traefik-ingress.yml
4. Opening the dashboard
Ports 80 and 8080 are exposed through LoadBalancer services, so opening http://127.0.0.1:8080 brings up the traefik dashboard; on a server or VM, replace 127.0.0.1 with the host IP.
Hitting the entry point directly returns a 404 for now:
$ curl http://127.0.0.1
404 page not found
5. Modifying whoami.yml
Remove the NodePort part of the Service and add an Ingress:
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: whoami
spec:
  ports:
    - name: whoami-port
      port: 80
      targetPort: whoami-port
  selector:
    app: whoami
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  namespace: whoami
spec:
  rules:
    - host: "whoami.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  name: whoami-port
6. Updating whoami
$ kubectl apply -f whoami.yml
7. Testing access
Prefix-based routing:
$ curl http://127.0.0.1/whoami
Hostname: whoami-6b548567dd-pxgsx
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.19
IP: fe80::9435:d3ff:fe77:9c8d
RemoteAddr: 10.42.0.8:33228
GET /whoami HTTP/1.1
Host: 127.0.0.1
User-Agent: curl/7.81.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.42.0.7
X-Forwarded-Host: 127.0.0.1
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-7d5f6474df-kncqg
X-Real-Ip: 10.42.0.7
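For the path-based request above to be routed, the Ingress also needs a rule without a host that matches the /whoami prefix; the file shown in step 5 only contains the host rule, so such a rule was presumably part of the original config. A minimal sketch of what to add under spec.rules in whoami-ingress:

    - http:
        paths:
          - path: /whoami
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  name: whoami-port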
Host-based routing:
$ curl -H "Host:whoami.example.com" http://127.0.0.1
Hostname: whoami-6b548567dd-kkqjp
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.20
IP: fe80::479:99ff:fee0:ea97
RemoteAddr: 10.42.0.8:46392
GET / HTTP/1.1
Host: whoami.example.com
User-Agent: curl/7.81.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.42.0.7
X-Forwarded-Host: whoami.example.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-7d5f6474df-kncqg
X-Real-Ip: 10.42.0.7
The two requests were served by whoami-6b548567dd-pxgsx / 10.42.0.19 and whoami-6b548567dd-kkqjp / 10.42.0.20 respectively, so traefik is already load-balancing for us. Checking the pod list, the names match up, confirming that routing is in effect and the requests reached the pods we deployed.
$ kubectl get pod -n whoami
NAME                      READY   STATUS    RESTARTS   AGE
whoami-6b548567dd-kkqjp   1/1     Running   0          101m
whoami-6b548567dd-pxgsx   1/1     Running   0          101m
whoami-6b548567dd-swghl   1/1     Running   0          101m
8. SSL
traefik can also handle SSL, including automatic Let's Encrypt certificate issuance and renewal, but this article does not cover it. In real production setups there is usually a WAF or nginx acting as the unified entry point outside the k8s cluster, and configuring SSL there is simpler and easier to plan; for example, Caddy can provide automatic SSL and reverse-proxy traffic to our internal traefik entry point (change traefik's exposed port 80 to something like 4080 to avoid clashing with nginx).
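As a sketch of that port change (this assumes nginx or Caddy on the host keeps port 80 for itself; only the traefik-web-service section of traefik-ingress.yml changes):

apiVersion: v1
kind: Service
metadata:
  name: traefik-web-service
  namespace: traefik-proxy
spec:
  type: LoadBalancer
  ports:
    - targetPort: web
      port: 4080        # host-facing port moved off 80 so nginx/Caddy can own it
  selector:
    app: traefik

The external proxy then terminates SSL and forwards traffic to 127.0.0.1:4080.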
9. Other configuration
Rate limiting, for example, can be added via the Traefik Middlewares HTTP RateLimit middleware; there are many other useful middlewares, so check the documentation and configure them as needed.
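For reference, a rough sketch of a traefik rate-limit middleware. Note this assumes the traefik Kubernetes CRDs are installed and the --providers.kubernetescrd provider is enabled, which the deployment above does not include; the names are placeholders:

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: whoami-ratelimit
  namespace: whoami
spec:
  rateLimit:
    average: 100   # average requests per second allowed
    burst: 50      # short bursts above the average that are tolerated

It would then be attached to the Ingress with an annotation such as traefik.ingress.kubernetes.io/router.middlewares: whoami-whoami-ratelimit@kubernetescrd.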
V. Volumes in Detail
1. Background
A Docker volume mounts a directory on the server into the container to persist data. On a cluster (e.g. Docker Swarm), though, a container might first run on node A and, after a crash and restart, come back up on node B; because its files were stored in a directory on node A, the container on node B starts as if it were brand new and persistence is broken. We therefore need a central service every node can reach, i.e. an NFS server.
2. Deploying the NFS server
In real production environments the NFS server is often a separate machine outside the k8s cluster. Here we spin one up quickly inside k8s, but you could just as well install one with something like apt-get install nfs-server, or buy a NAS file service from a cloud provider; what matters is ending up with the NFS server's IP and export path.
Prepare the NFS server. We label a node so that the NFS container always runs on that node; otherwise a restart could move it to another node and the persisted files would be lost.
$ kubectl label nodes node1 kubernetes.io/nfs=server
Prepare nfs-server.yml:
apiVersion: v1
kind: Namespace
metadata:
  name: nfs-server
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
  namespace: nfs-server
  labels:
    app: nfs-server
spec:
  type: LoadBalancer
  selector:
    app: nfs-server
  ports:
    - name: tcp-2049
      port: 2049
      protocol: TCP
    - name: udp-111
      port: 111
      protocol: UDP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-server
  namespace: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      name: nfs-server
      labels:
        app: nfs-server
    spec:
      nodeSelector:
        "kubernetes.io/nfs": "server"
      containers:
        - name: nfs-server
          image: itsthenetwork/nfs-server-alpine:latest
          env:
            - name: SHARED_DIRECTORY
              value: "/exports"
          volumeMounts:
            - mountPath: /exports
              name: nfs-vol
          securityContext:
            privileged: true
          ports:
            - name: tcp-2049
              containerPort: 2049
              protocol: TCP
            - name: udp-111
              containerPort: 111
              protocol: UDP
      volumes:
        - name: nfs-vol
          hostPath:
            path: /home/seepine/nfs-dir
            type: DirectoryOrCreate
Deploy it:
$ kubectl apply -f nfs-server.yml
namespace/nfs-server created
service/nfs-server created
deployment.apps/nfs-server created
Check it:
$ kubectl get all -n nfs-server
NAME                              READY   STATUS    RESTARTS   AGE
pod/nfs-server-5c8d468f48-jw828   1/1     Running   0          79m

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP                       PORT(S)                        AGE
service/nfs-server   LoadBalancer   10.43.252.199   192.168.100.131,192.168.100.132   2049:32552/TCP,111:32455/UDP   87m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-server   1/1     1            1           87m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-server-5c8d468f48   1         1         1       79m
3. Deploying the nfs-provisioner
Each node needs the NFS client installed; on Ubuntu you can run sudo apt-get install nfs-common -y.
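Optionally, you can sanity-check the client and the NFS server with a manual mount before moving on (a suggestion of my own, not part of the original setup; replace 192.168.100.131 with your NFS server address from the previous step):

$ sudo mkdir -p /mnt/nfs-test
$ sudo mount -t nfs4 192.168.100.131:/ /mnt/nfs-test
$ sudo umount /mnt/nfs-test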
Prepare the nfs-provisioner.yml file. It is relatively involved; copy it as-is, change the NFS server address and path to your own, and deploy it with kubectl apply -f nfs-provisioner.yml. You will not need to touch it again afterwards.
apiVersion: v1
kind: Namespace
metadata:
  name: nfs-provisioner
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs-provisioner
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: nfs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.100.100   # NFS server address, replace with your own
            - name: NFS_PATH
              value: /                 # NFS export path, replace with your own
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.100    # must match NFS_SERVER above
            path: /
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
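After applying it, a quick way to confirm the provisioner pod is running and the StorageClass exists (pod names and ages will differ in your cluster):

$ kubectl apply -f nfs-provisioner.yml
$ kubectl get pod -n nfs-provisioner
$ kubectl get storageclass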
4. Deploying a test case
Prepare the nfs-test.yml file and run kubectl apply -f nfs-test.yml. The comments in the file should make it mostly self-explanatory. A PVC (PersistentVolumeClaim) is somewhat like a Docker volume: it is a request for storage space. PV and PVC types and concepts are not actually that simple, but you do not need the full picture to get started; this setup follows common practice, and for your own projects you mostly just change names and paths.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pv-claim
spec:
  storageClassName: nfs-storage   # the StorageClass created by nfs-provisioner.yml
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Mi                # amount of storage requested
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-pod
  labels:
    app: test-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
        - name: test-pod
          image: alpine:3.18
          command:
            - "/bin/sh"
          args:
            - "-c"
            # write a file into the mounted volume, then exit
            - "echo 'mount success!' > /mnt/SUCCESS.txt && exit 0 || exit 1"
          volumeMounts:
            - name: nfs-pvc
              mountPath: "/mnt"
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: test-pv-claim   # reference the PVC defined above
5. Verifying
In the NFS export directory you will see that a new directory has appeared; after waiting a moment for the container to finish running, a file appears inside it with the following content:
$ cat default-test-pv-claim-pvc-cc4b01e5-ecd1-4c78-84cd-fb1e9d79d115/SUCCESS.txt
mount success!