Basic concepts

Stateful: depends on local state, e.g. redis, mysql.
Stateless: does not depend on local state, e.g. nginx, apache, java applications. When an application and mysql run as two containers, mysql may fail and need a restart, so link by container name so the connection survives the restart.

Stateless workloads:
- rc (ReplicationController): scales replicas up and down.
- rs (ReplicaSet): builds on rc and adds label selectors.
- Deployment: builds on rs and adds smooth scaling, rolling upgrade/rollback, and pause/resume (for example, if a change touches five places, you can pause the automatic rollout first and resume it once all the edits are done).

Stateful workloads:
- StatefulSet: redis and mysql are typically master/replica setups, so pods must start in order.
  - Stable persistent storage based on PVCs (volumeClaimTemplates).
  - Stable network identity via a headless Service (DNS-like: the service name resolves to the pod addresses).
  - Ordered deployment and scale-out, ordered scale-in and deletion.

Service: exposes a stable port for access between nodes/pods inside the cluster.
Ingress: lets external traffic reach ports/services inside the cluster.
ConfigMap: maps configuration files into containers.
Workload resources

Pod

cat nginx-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    type: app
    test: 1.0.0
  namespace: 'default'
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    command:
    - nginx
    - -g
    - 'daemon off;'
    workingDir: /usr/share/nginx/html
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
    env:
    - name: JVM_OPTS
      value: '-Xms128m -Xmx128m'
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi
  restartPolicy: OnFailure
Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy
  name: nginx-deploy
  namespace: default
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-deploy
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - image: nginx:1.7.9
        imagePullPolicy: IfNotPresent
        name: nginx
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
Common Deployment commands

kubectl scale --replicas=5 deploy nginx-deploy               # scale to 5 replicas
kubectl rollout history deploy nginx-deploy                  # list rollout revisions
kubectl rollout history deploy nginx-deploy --revision=2     # show details of revision 2
kubectl rollout undo deploy nginx-deploy --to-revision=2     # roll back to revision 2
kubectl rollout status deploy nginx-deploy                   # watch rollout progress
kubectl rollout pause deploy nginx-deploy                    # pause the rollout
kubectl rollout resume deploy nginx-deploy                   # resume the rollout
kubectl rollout restart deploy nginx-deploy                  # restart the pods
StatefulSet

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.2.1
        ports:
        - containerPort: 80
          name: web
Scaling replicas

$ kubectl scale statefulset web --replicas=5
$ kubectl patch statefulset web -p '{"spec":{"replicas":3}}'
$ kubectl scale statefulset web --replicas=2
Update strategies

Canary rollout (partition)

updateStrategy:
  rollingUpdate:
    partition: 0
  type: RollingUpdate

The partition field of the rolling update can be used for a simple canary release. For example, with 5 pods and partition set to 3, a rolling update only touches the pods whose ordinal is >= 3. By controlling the partition value you update only part of the pods first, confirm they are healthy, then gradually increase the number of pods being updated (by lowering the partition) until all pods are on the new revision.
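A quick way to move the partition boundary during the rollout is to patch the StatefulSet in place. A sketch using the web StatefulSet defined above; the nginx:1.21 image is only an illustrative new version:

# Update only pods with ordinal >= 3 (web-3, web-4), leaving web-0..web-2 on the old revision
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":3}}}}'

# Trigger the rollout by changing the image
kubectl set image statefulset/web nginx=nginx:1.21

# Once the canary pods look healthy, lower the partition to 0 to update the rest
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'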
Update on delete

updateStrategy:
  type: OnDelete

Pods are only updated when they are deleted; the controller recreates them from the new template.
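With OnDelete the rollout is driven manually by deleting pods. A minimal sketch, again assuming the web StatefulSet above and an illustrative nginx:1.21 image:

# Change the template; nothing happens yet under OnDelete
kubectl set image statefulset/web nginx=nginx:1.21

# Delete a pod; the controller recreates it from the new template
kubectl delete pod web-2

# Confirm the recreated pod runs the new image
kubectl get pod web-2 -o jsonpath='{.spec.containers[0].image}'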
Delete operations

kubectl delete statefulset web
kubectl delete sts web --cascade=false    # delete only the StatefulSet, keep the pods
kubectl delete service nginx
DaemonSet

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      id: fluentd
  template:
    metadata:
      labels:
        app: logging
        id: fluentd
      name: fluentd
    spec:
      containers:
      - name: fluentd-es
        image: agilestacks/fluentd-elasticsearch:v1.3.0
        env:
        - name: FLUENTD_ARGS
          value: -qq
        volumeMounts:
        - name: containers
          mountPath: /var/lib/docker/containers
        - name: varlog
          mountPath: /varlog
      volumes:
      - hostPath:
          path: /var/lib/docker/containers
        name: containers
      - hostPath:
          path: /var/log
        name: varlog
      nodeSelector:
        aa: bb

For DaemonSets the OnDelete update strategy is usually the better choice (see the snippet after this block); otherwise every change rolls out across all nodes at once and consumes too many resources.

kubectl label nodes k8s-node1 aa=bb
kubectl get no -l aa=bb
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    <none>   16h   v1.23.6
k8s-node2   Ready    <none>   16h   v1.23.6
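The corresponding update strategy, added under the spec of the fluentd DaemonSet above (a small sketch):

spec:
  updateStrategy:
    type: OnDelete   # a node only picks up the new template when its pod is deleted manually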
HPA

To autoscale on CPU or memory, the target object must have resources.requests.cpu or resources.requests.memory configured; scaling out or in is triggered when usage reaches the configured percentage of those requests.

Creating an HPA:
- Prepare a Deployment that has resource requests configured.
- Run: kubectl autoscale deploy nginx-deploy --cpu-percent=20 --min=2 --max=5
- kubectl get hpa shows the HPA status.

Testing: find the Service in front of the Deployment and run a load loop against it (a "while true" loop, sketched below) to drive up CPU and memory.

Run the loop from several machines to increase the load; once the threshold is exceeded, watch the pods scale out with kubectl get pods and check resource usage with kubectl top pods. When the test is done, stop the loop so CPU usage drops, then check the automatic scale-in about 5 minutes later.
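A throwaway load loop along these lines works for the test. A sketch; the service name nginx-svc is an assumption, substitute the Service that fronts your Deployment (run it from a busybox pod or any machine that can resolve the Service):

# Hammer the service in a loop to drive up CPU usage on the backing pods
while true; do
  wget -q -O- http://nginx-svc > /dev/null
done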
Relationship between the selector and the labels in the template

selector:
  matchLabels:
    app: nginx-deploy
template:
  metadata:
    labels:
      app: nginx-deploy

The selector picks the pods whose app label is nginx-deploy, i.e. exactly the pods created from this template, so the two must match.
Probes

Startup probe
A probe added in Kubernetes 1.16, used to determine whether the application has started.
When a startupProbe is configured, the other probes are disabled until the startupProbe succeeds; only then do they take over.
Purpose: it is often impossible to predict exactly how long an application needs to start, so expressing that with initial delays on the other two probe types is awkward. With a startupProbe configured, the other two probes only begin running once the application has actually started, which makes them much easier to combine.
A successful response from /api/startup indicates the application has started:
startupProbe:
  httpGet:
    path: /api/startup
    port: 80
Liveness probe
Checks whether the application in the container is still running. If the probe fails, the kubelet restarts the container according to the configured restart policy; without a liveness probe the container is simply assumed to be running and no probe-driven restart happens.
livenessProbe:
  failureThreshold: 5
  httpGet:
    path: /health
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 60
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
Readiness probe
Checks whether the program in the container is healthy. If it reports success, the container is considered fully started and able to receive external traffic.
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /ready
    port: 8181
    scheme: HTTP
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
Probe mechanisms

ExecAction (command)
Runs a command inside the container; if the exit code is 0, the container is considered healthy.

livenessProbe:
  exec:
    command:
    - cat
    - /health
TCPSocketAction (TCP)
Opens a TCP connection to a port in the container; if the port is open, the container is considered healthy.

livenessProbe:
  tcpSocket:
    port: 80
HTTPGetAction (HTTP)
The most common choice in production: an HTTP request is sent to the application in the container, and a response status code between 200 and 400 means the container is healthy.

livenessProbe:
  failureThreshold: 5
  httpGet:
    path: /health
    port: 8080
    scheme: HTTP
    httpHeaders:
    - name: xxx
      value: xxx
Service
Endpoints — forwarding to an external address (useful for project migration)

Access by IP from inside the cluster:

kubectl exec -it busybox -- sh
wget nginx-svc1

First create a Service with the label selector removed:

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc1
  labels:
    app: nginxep
spec:
  ports:
  - port: 80
    name: web
    targetPort: 80

At this point the Service exists but no Endpoints are shown for it. Create the Endpoints object manually:

apiVersion: v1
kind: Endpoints
metadata:
  labels:
    app: nginxep
  name: nginx-svc1
subsets:
- addresses:
  - ip: 120.78.159.117
  ports:
  - name: web
    port: 80
    protocol: TCP

The Endpoints object now acts as a proxy that forwards the Service to 120.78.159.117.
ClusterIP — access inside the cluster

NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.104.207.42   <none>        80/TCP    36m

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
    targetPort: 80
  type: ClusterIP
  selector:
    app: nginx-deploy

Only reachable from inside the cluster.
ExternalName (reverse proxy to an external domain)

Access by domain name from inside the cluster:

kubectl exec -it busybox -- sh
wget nginx-extelner

NAME             TYPE           CLUSTER-IP   EXTERNAL-IP       PORT(S)   AGE
nginx-extelner   ExternalName   <none>       www.wolfcode.cn   80/TCP    10m

apiVersion: v1
kind: Service
metadata:
  name: nginx-extelner
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
    targetPort: 80
  type: ExternalName
  externalName: www.wolfcode.cn
NodePort (port mapping)

Reachable from outside the cluster, e.g. at 192.168.85.128:32237.

NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-svc   NodePort   10.108.184.225   <none>        80:32237/TCP   74s

NAME        ENDPOINTS                       AGE
nginx-svc   10.244.1.49:80,10.244.2.63:80   83s

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
    targetPort: 80
  type: NodePort
  selector:
    app: nginx-deploy
Ingress

Exposes services externally based on URL paths.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: k8s.ingress.cn
    http:
      paths:
      - pathType: Exact
        backend:
          service:
            name: nginx-svc
            port:
              number: 80
        path: /api
      - pathType: Prefix
        backend:
          service:
            name: nginx-svc
            port:
              number: 80
        path: /

Requests to k8s.ingress.cn/api reach the /api path behind nginx-svc; an Exact path always takes precedence over a Prefix match.
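One way to verify the routing from outside the cluster is with curl. A sketch assuming the ingress controller is exposed on NodePort 32080 of node 192.168.85.128; both values are assumptions, substitute your own:

# Resolve the Ingress host to a node manually instead of editing DNS
curl --resolve k8s.ingress.cn:32080:192.168.85.128 http://k8s.ingress.cn:32080/api

# Equivalent: send the Host header explicitly
curl -H "Host: k8s.ingress.cn" http://192.168.85.128:32080/api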
Configuration management — ConfigMap

First create the ConfigMaps (see kubectl create cm -h for the options; a creation sketch follows the manifest), then mount them in the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-deploy
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      initContainers:
      - name: init-nginx
        image: busybox
        command: ["sh", "-c", "mkdir -p /usr/share/nginx/api"]
      containers:
      - image: nginx:latest
        imagePullPolicy: IfNotPresent
        name: nginx
        volumeMounts:
        - name: nginx-default-config
          mountPath: /etc/nginx/conf.d/default.conf
          subPath: default.conf
        - name: nginx-default-config2
          mountPath: /usr/share/nginx/api/index.html
          subPath: index.html
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
      - name: nginx-default-config
        configMap:
          name: nginx-default-configmap
      - name: nginx-default-config2
        configMap:
          name: nginx-index-config

This lets you change the nginx configuration file and the index page from outside the container. To reload after editing the ConfigMaps:

kubectl rollout restart deploy nginx-deploy
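The two ConfigMaps referenced above have to exist before the Deployment starts. A sketch of creating them from local files; default.conf and index.html are assumed to be prepared on the machine running kubectl:

# ConfigMap holding the nginx server config, keyed by file name
kubectl create configmap nginx-default-configmap --from-file=default.conf

# ConfigMap holding the static page served as /api/index.html
kubectl create configmap nginx-index-config --from-file=index.html

# Inspect the result
kubectl describe configmap nginx-default-configmap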
Persistent storage — volumes

hostPath (mounts a path on the node)

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx:latest
    name: nginx-volume
    volumeMounts:
    - mountPath: /usr/share/nginx/html/index.html
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/index.html
      type: FileOrCreate

hostPath types:
- "" (empty string): the default; no check is performed
- DirectoryOrCreate: if the given path does not exist, an empty directory is created with mode 755
- Directory: the directory must already exist
- FileOrCreate: if the given file does not exist, an empty file is created with mode 644
- File: the file must already exist
- Socket: a UNIX socket that must exist
- CharDevice: a character device that must exist
- BlockDevice: a block device that must exist
emptyDir (empty directory)

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: nginx-emptydir
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}

Characteristics of emptyDir:
- An empty directory whose lifecycle is bound to the Pod.
- When the Pod starts, Kubernetes creates an empty directory on the node and mounts it at the specified path in the containers.
- Files written by any container go into the emptyDir and are shared by all containers in the Pod.
- When the Pod is deleted, the contents of the emptyDir are lost.
- Not suitable for long-term storage, since the data does not survive Pod restarts or deletion.
NFS volumes

apiVersion: v1
kind: Pod
metadata:
  name: nfs-pd
spec:
  containers:
  - image: nginx:latest
    name: test-container1
    volumeMounts:
    - mountPath: /usr/share/nginx/html/
      name: test-volume
  volumes:
  - name: test-volume
    nfs:
      server: 192.168.85.130
      path: /data/nfs/rw/a
      readOnly: false
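For this to work, the export must exist on the NFS server and every node that might run the Pod needs the NFS client installed. A sketch of the prerequisites, matching the server and path in the manifest above; the export options and package names are assumptions, adjust to your distribution:

# On the NFS server (192.168.85.130)
mkdir -p /data/nfs/rw/a
echo "/data/nfs/rw *(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra

# On every Kubernetes node (Debian/Ubuntu shown; use "yum install -y nfs-utils" on RHEL/CentOS)
apt-get install -y nfs-common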
PV and PVC — static provisioning

First create a PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: test
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /data/nfs/rw/pvc
    server: 192.168.85.130

Then create a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: test

At this point the PVC is bound to the PV. Now create a Pod that uses the PVC:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - image: nginx:latest
    name: test-container1
    volumeMounts:
    - mountPath: /tmp/pvc
      name: nfs-pvc-test
  volumes:
  - name: nfs-pvc-test
    persistentVolumeClaim:
      claimName: nfs-pvc
Dynamic provisioning

vim rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

3. Create the StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: tigerfive-nfs-storage
parameters:
  archiveOnDelete: "false"

4. Create the provisioner:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: tigerfive-nfs-storage
            - name: NFS_SERVER
              value: 192.168.81.140
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.81.140
            path: /data/nfs

At this point there are no PVs or PVCs yet (both queries report "No resources found"). Create a PVC directly and let a Pod use it; the provisioner creates the PV automatically:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-fd443294-78e7-40ba-a27a-1d000ab92d2c   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            27s

NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-fd443294-78e7-40ba-a27a-1d000ab92d2c   1Mi        RWX            managed-nfs-storage   28s

Create a Pod that uses the PVC:

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim

Check the export directory on the NFS server ([root@storage1 ~]): the SUCCESS file created by the Pod appears there.
Advanced scheduling

CronJob (scheduled jobs)

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  suspend: false
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:latest
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date
          restartPolicy: OnFailure
Init containers

Before the real containers start, the init containers run first and perform the initialization the real containers need; only after they complete do the real containers start.
Compared with postStart: an init container is guaranteed to finish before the entrypoint runs, while postStart is not. Also, postStart is better suited to running simple commands, whereas an init container is a full container and can perform more complex initialization in whatever base image it needs.
Configure initContainers in the pod template:

spec:
  initContainers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "echo 'inited;' >> ~/.init"]
    name: init-test

Note: for initialization to be useful, pair it with a shared volume, because by default the containers in a pod do not share data (see the sketch below).
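A minimal sketch of sharing data between an init container and the main container through an emptyDir volume; the names here are purely illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: init-share-demo
spec:
  initContainers:
  - name: prepare-content
    image: busybox
    command: ["sh", "-c", "echo 'generated by init' > /work/index.html"]
    volumeMounts:
    - name: shared              # same volume as the main container
      mountPath: /work
  containers:
  - name: nginx
    image: nginx:latest
    volumeMounts:
    - name: shared              # the file written by the init container is visible here
      mountPath: /usr/share/nginx/html
  volumes:
  - name: shared
    emptyDir: {}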
Taints and tolerations

Taint effects:
- NoSchedule: pods that do not tolerate the taint are not scheduled onto the node, but pods already running on it are not evicted.
- NoExecute: pods that do not tolerate the taint (no matching key, or a mismatched key/value) are evicted from the node, and new ones are not scheduled onto it.

First taint the node (see the command sketch below), then add a matching toleration to the pods that should still be allowed on it:

tolerations:
- key: "memory"
  value: "low"
  effect: "NoSchedule"
  operator: "Equal"
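Applying and removing the taint that the toleration above matches; k8s-node1 is an assumed node name taken from the earlier examples:

# Taint the node: pods without a matching toleration will no longer be scheduled here
kubectl taint nodes k8s-node1 memory=low:NoSchedule

# Inspect the taints on the node
kubectl describe node k8s-node1 | grep -A2 Taints

# Remove the taint (note the trailing '-')
kubectl taint nodes k8s-node1 memory=low:NoSchedule-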
Affinity

Node affinity

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: test
            operator: In
            values:
            - "1"
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: cpu
            operator: In
            values:
            - good
  containers:
  - name: with-node-affinity
    image: nginx:1.2.1
    imagePullPolicy: IfNotPresent
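The node labels these expressions match have to be applied beforehand. A sketch; the node name is an assumption carried over from the earlier examples:

# Required rule: the pod only schedules onto nodes labeled test=1
kubectl label nodes k8s-node1 test=1

# Preferred rule: among eligible nodes, those labeled cpu=good are favored
kubectl label nodes k8s-node1 cpu=good

# Verify
kubectl get nodes -l test=1 --show-labels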
Pod affinity

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: pause:2.0
Basic commands

kubectl delete pods --all --force --grace-period=0            # force-delete all pods
kubectl label po nginx app=hello                              # add a label
kubectl label po nginx app=hello2 --overwrite                 # overwrite an existing label
kubectl get po nginx --show-labels                            # show the pod's labels
kubectl get po -A -l app=hello
kubectl get po -A -l 'k8s-app in (metrics-server, kubernetes-dashboard)'
kubectl get po -l version!=1,app=nginx
kubectl get po -A -l version!=1,'app in (busybox, nginx)'
kubectl run -i --tty --image busybox dns-test --restart=Never --rm /bin/sh   # run a busybox pod that stays up (handy for DNS testing)