Table of Contents

I. Basic Operations

1. Setting up an alias

2. Viewing Kubernetes resource objects

3. Creating and viewing namespaces

4. Pod CRUD operations

II. Creating Pods from YAML Files

1. The labels section

1. Generate the yaml files

2. Change the names in the yaml files

2. imagePullPolicy

1. Add an image pull policy

2. Batch-create pods from yaml

3. Query by label

3. restartPolicy

1. Change the restart policy

2. Stop the service inside the container

3. Common exit code table

4. dnsPolicy

1. Change the policy

2. View the Pod's DNS configuration

5. Exposing container ports

1. Edit the ports section of the yaml file

2. Re-create nginx3 and test access to the service

3. Changes needed to reach the pod from the master or a non-Pod worker node

① Access from outside the cluster, using a Service as the example (an Ingress also works)

② Edit the nginx3 yaml file

③ Test

6. Setting the command and arguments a container runs

1. Edit the yaml files

1. Delete all yaml files

2. Write nginx1.yaml

3. Write nginx2.yaml

4. Write nginx3.yaml

5. Create the three Pods

6. View the run logs

7. Verify with an image built from a Dockerfile

1. Write the Dockerfile

2. Build the image from the Dockerfile

3. Save the image as a tar archive

4. Send the tar archive to the worker nodes (passwordless ssh assumed)

5. Load the image

6. Write nginx4.yaml

7. Start the Pod

8. View the details

7. Setting environment variables in a container

1. Edit nginx3.yaml

2. Create the Pod

3. View the Pod logs

4. Notes

5. Printing a specific field with kubectl

8. Container resource limits

1. Edit nginx3.yaml

2. Create the Pod

3. Verify the result: the limit has been reached

1. Query from inside the container with kubectl

2. Query with Docker on the Node

9. Running multiple containers in the same Pod

1. Edit the yaml file

2. Create the Pod

3. Check the status

4. Questions to consider

1. Do the two containers start in a fixed order?

2. If one container fails to start, does it affect the other?

1. Change one image to a nonexistent one

2. Apply

3. Verify

3. How do you exec into a specific container? (changing the image back is not repeated here)

10. Init containers

1. Advantages of initContainers

2. Edit the yaml file

3. Create the Pod

4. View the init container's output

5. Simulate a failure

1. Edit the yaml file to point at an unreachable address

2. Refresh the Pod

3. View the pod

4. Conclusion

11. Pod health checks

1. Introduction to container probes

2. Edit the yaml file

3. Create the Pod

4. How the kubelet picks Pods to evict

5. Questions

12. PV and PVC

1. Introduction

2. Lab preparation

1. Install nfs-server on k8s01

2. Worker nodes - install the nfs client

3. Configure the nfs export and start the service

4. Test the service

3. Write the YAML files

1. Write the PV yaml file

2. Write the PVC yaml file

4. Deploy the PV and PVC

5. Use the PV and PVC

1. Edit nginx3.yaml

2. Deploy the Pod

6. Test

1. Access via url

2. Check from inside the container


I. Basic Operations

1. Setting up an alias

[root@k8s01 ~]# alias k=kubectl    # for convenience; the official exam environment uses k as shorthand for kubectl by default

2. Viewing Kubernetes resource objects

[root@k8s01 ~]# k api-resources 
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
events                            ev                                          true         Event
limitranges                       limits                                      true         LimitRange
namespaces                        ns                                          false        Namespace
nodes                             no                                          false        Node
persistentvolumeclaims            pvc                                         true         PersistentVolumeClaim
persistentvolumes                 pv                                          false        PersistentVolume
pods                              po                                          true         Pod
....

3. Creating and viewing namespaces

[root@k8s01 ~]# k create namespace myname
namespace/myname created
[root@k8s01 ~]# k get namespaces 
NAME              STATUS   AGE
default           Active   9h
kube-node-lease   Active   9h
kube-public       Active   9h
kube-system       Active   9h
myname            Active   16s

4. Pod CRUD operations

If no namespace is specified, pods are created in the default namespace.

[root@k8s01 ~]# kubectl run nginx --image=nginx:1.21 # create a Pod
pod/nginx created
[root@k8s01 ~]# k get pods    # list all pods in the default namespace
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          9m34s

[root@k8s01 ~]# k describe pod    # show detailed Pod information
Name:         nginx
Namespace:    default
Priority:     0
Node:         k8s03/192.168.248.22
Start Time:   Mon, 14 Aug 2023 22:00:47 -0400
...

View pod details in YAML format

[root@k8s01 ~]# k get pods nginx -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 49cd8e984a01a8f1cb2edb7820e1fe646bc4c9eb6711701aa6b0505b6911ab36
    cni.projectcalico.org/podIP: 10.244.235.130/32
    cni.projectcalico.org/podIPs: 10.244.235.130/32
  creationTimestamp: "2023-08-15T02:00:47Z"
  labels:
    run: nginx
...

Pods can also be created from a YAML file. Use --dry-run to print what would be sent to Kubernetes without actually executing it, and combine it with -o to choose the output format:

[root@k8s01 ~]# k run nginx --image=nginx:1.21 --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx:1.21
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

# With --dry-run=client, kubectl handles the command locally and generates a
# simulated resource manifest from the arguments and flags. Nothing is submitted
# to the server; this only validates the command and previews its effect.

# With --dry-run=server, kubectl sends the simulated request to the API server,
# which processes it as if it were real and returns the result without persisting
# anything. This validates the server-side behavior and gives a more accurate preview.

Or write the output straight to a yaml file

[root@k8s01 ~]# k run nginx -o yaml --image=nginx:1.21 --dry-run=client > nginx.yaml
# use the generated yaml file to create a new Pod
[root@k8s01 ~]# cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx:1.21
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

[root@k8s01 ~]# k create -f nginx.yaml 
Error from server (AlreadyExists): error when creating "nginx.yaml": pods "nginx" already exists  
# nginx already exists
[root@k8s01 ~]# sed -i '0,/name: nginx/s/nginx/nginx-yaml/' nginx.yaml # change the pod name (and its run label): pod names must be unique within a namespace, which is why the create failed
# alternatively, change the namespace; namespaces are isolated from each other
[root@k8s01 ~]# k create -f nginx.yaml 
pod/nginx-yaml created

Exec into the container

[root@k8s01 ~]# k exec -it nginx -- /bin/bash

Delete the Pod

[root@k8s01 ~]# k delete pods nginx 
pod "nginx" deleted
# Replace the pod name with --all to delete all pods in the namespace.
# Add --force to force the deletion.
#--grace-period
# Non-negative values: --grace-period only accepts integers >= 0.
# Deletion flow: on delete (e.g. kubectl delete), Kubernetes triggers a graceful termination of the resource.
# Grace period: --grace-period sets the graceful-termination window, during which Kubernetes waits for the resource to clean up.
# Timeout: if the resource has not terminated within the window, Kubernetes force-deletes it so the deletion cannot block forever.
# Immediate deletion: setting --grace-period to 0 skips the graceful-termination window and deletes the resource immediately.
#--force: removes the object from the API immediately without waiting for the kubelet to confirm termination (typically combined with --grace-period=0).

II. Creating Pods from YAML Files

1. The labels section

1. Generate the yaml files

[root@k8s01 ~]# k run nginx -o yaml --image=nginx:1.21 --dry-run=client > nginx1.yaml
[root@k8s01 ~]# k run nginx -o yaml --image=nginx:1.21 --dry-run=client > nginx2.yaml
[root@k8s01 ~]# k run nginx -o yaml --image=nginx:1.21 --dry-run=client > nginx3.yaml

2. Change the names in the yaml files

[root@k8s01 ~]# sed -i '0,/name: nginx/s/nginx/nginx1/' nginx1.yaml
[root@k8s01 ~]# sed -i '0,/name: nginx/s/nginx/nginx2/' nginx2.yaml
[root@k8s01 ~]# sed -i '0,/name: nginx/s/nginx/nginx3/' nginx3.yaml
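The sed address 0,/name: nginx/ is doing the real work here: it limits the substitution to the lines from the top of the file through the first line matching name: nginx, so the container's own name further down is left alone. A standalone sketch of the behavior (GNU sed; the file is a throwaway example):

```shell
# Create a miniature manifest with the same repeated names.
cat > /tmp/mini.yaml << 'EOF'
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx:1.21
    name: nginx
EOF

# GNU sed: "0,/name: nginx/" addresses every line from the start of the
# file up to and including the FIRST match; s/nginx/nginx1/ then runs
# once per line inside that range only.
sed -i '0,/name: nginx/s/nginx/nginx1/' /tmp/mini.yaml
cat /tmp/mini.yaml
```

Both the run label and the Pod name become nginx1; the image tag and the container name at the bottom are untouched because they sit past the end of the range.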

2. imagePullPolicy

1. Add an image pull policy

Pull policy notes

[root@k8s01 ~]# for i in {1..3}; do sed -i '/resources/ a \ \ \ \ imagePullPolicy: IfNotPresent' nginx$i.yaml; done
# IfNotPresent: pull only when the image is not present locally. This is the default, unless the tag is :latest (or omitted), in which case the default becomes Always.
# Always: every time the kubelet starts a container, it queries the registry to resolve the image name to a digest. If an image with that exact digest is cached locally, the kubelet uses the cache; otherwise it pulls by the resolved digest and starts the container from that image.
# Never: the kubelet never tries to fetch the image. If it already exists locally, the container is started; otherwise the start fails.

2. Batch-create pods from yaml

[root@k8s01 ~]# for i in {1..3}; do k create -f nginx${i}.yaml; done
pod/nginx1 created
pod/nginx2 created
pod/nginx3 created

3. Query by label

[root@k8s01 ~]# kubectl get pods -l run    # show pods that carry the run label
NAME     READY   STATUS    RESTARTS   AGE
nginx1   1/1     Running   0          6m54s
nginx2   1/1     Running   0          6m54s
nginx3   1/1     Running   0          6m54s
[root@k8s01 ~]# k get pods -l run!=nginx1    # exclude pods with this label value
NAME     READY   STATUS    RESTARTS   AGE
nginx2   1/1     Running   0          8m28s
nginx3   1/1     Running   0          8m28s
[root@k8s01 ~]# k get pods -l run=nginx1    # show pods matching this label value
NAME     READY   STATUS    RESTARTS   AGE
nginx1   1/1     Running   0          8m36s

3. restartPolicy

1. Change the restart policy

# change the restart policy in the nginx2 and nginx3 yaml files
[root@k8s01 ~]# sed -i "/restartPolicy/ s/Always/OnFailure/" nginx2.yaml 
[root@k8s01 ~]# sed -i "/restartPolicy/ s/Always/Never/" nginx3.yaml 
# delete the pods
[root@k8s01 ~]# k delete pods --all # delete all pods in the default namespace
# re-create the Pods
[root@k8s01 ~]# for i in {1..3}; do k create -f nginx${i}.yaml; done
pod/nginx1 created
pod/nginx2 created
pod/nginx3 created

2. Stop the service inside the container

# stop nginx in every pod
[root@k8s01 ~]# for i in {1..3}; do k exec -it nginx${i} -- nginx -s stop; done
2023/08/15 06:22:56 [notice] 32#32: signal process started
2023/08/15 06:22:56 [notice] 32#32: signal process started
2023/08/15 06:22:56 [notice] 32#32: signal process started
# check pod status: nginx2 and nginx3 are Completed
[root@k8s01 ~]# k get pods
NAME     READY   STATUS      RESTARTS   AGE
nginx1   1/1     Running     1          8m4s
nginx2   0/1     Completed   0          8m4s
nginx3   0/1     Completed   0          8m4s
# check the container exit codes
[root@k8s01 ~]# for i in {1..3}; do kubectl describe pods nginx${i} | grep "Exit Code"; done
      Exit Code:    0
      Exit Code:    0
      Exit Code:    0
# All exited normally (code 0). nginx1 shows RESTARTS 1 because its restartPolicy is Always, so the kubelet restarted it; nginx2 (OnFailure) and nginx3 (Never) stay Completed, since exit code 0 is not a failure.

3. Common exit code table: 

Exit code   Meaning
0           the process exited normally
1           general application error
126         command found but not executable
127         command not found
128+n       the process was killed by signal n
137         128 + 9: killed by SIGKILL (e.g. OOM kill, docker kill)
143         128 + 15: terminated by SIGTERM (normal graceful shutdown)
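The 128+n convention can be reproduced in any shell, independent of Kubernetes; a quick sketch:

```shell
# An explicit exit status is reported as-is.
code=$(sh -c 'exit 7'; echo $?)
echo "plain exit:  $code"    # 7

# A process killed by signal n exits with status 128+n.
code=$(sh -c 'kill -KILL $$'; echo $?)
echo "SIGKILL:     $code"    # 137 = 128 + 9

code=$(sh -c 'kill -TERM $$'; echo $?)
echo "SIGTERM:     $code"    # 143 = 128 + 15
```

This is why an OOM-killed container reports 137: the kernel delivers SIGKILL, and 128 + 9 = 137.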

4. dnsPolicy

1. Change the policy

[root@k8s01 ~]# k delete pods --all    # delete the previous Pods

# nginx1: change to None and add two lines so the pod resolves names via the specified DNS servers
[root@k8s01 ~]# sed -i "/dnsPolicy/ s/ClusterFirst/None/" nginx1.yaml 
[root@k8s01 ~]# sed -i '/dnsPolicy/ a \  dnsConfig:\n    nameservers: ["192.168.1.1","192.168.1.2"]' nginx1.yaml

# nginx2: change to Default
[root@k8s01 ~]# sed -i "/dnsPolicy/ s/ClusterFirst/Default/" nginx2.yaml 

# nginx3: unchanged, keeps ClusterFirst

2. View the Pod's DNS configuration

[root@k8s01 ~]# for i in {1..3}; do k exec -it nginx${i} -- cat /etc/resolv.conf; done
nameserver 192.168.1.1
nameserver 192.168.1.2
nameserver 114.114.114.114
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
[root@k8s01 ~]# k get service -A | grep "10.96.0.10"
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   15h

The three outputs run together above: nginx1 (None) shows the two custom nameservers, nginx2 (Default) inherits the node's resolv.conf (114.114.114.114 here), and nginx3 (ClusterFirst) points at 10.96.0.10, which is the kube-dns Service.
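Besides nameservers, dnsConfig can also set search domains and resolver options; a sketch of the full shape (values here are illustrative, not from the lab):

```yaml
spec:
  dnsPolicy: None            # with None, dnsConfig must supply everything
  dnsConfig:
    nameservers:             # becomes the nameserver lines in /etc/resolv.conf
      - "192.168.1.1"
      - "192.168.1.2"
    searches:                # becomes the search line
      - svc.cluster.local
      - cluster.local
    options:                 # becomes the options line
      - name: ndots
        value: "2"
```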

5. Exposing container ports

1. Edit the ports section of the yaml file

[root@k8s01 ~]# k delete pods --all    # delete the Pods created earlier
# edit nginx3.yaml and add the ports section
[root@k8s01 ~]# sed -i '/imagePullP/ a\    ports:\n    - name: nginx\n      protocol: TCP\n      containerPort: 80\n      hostPort: 30001' nginx3.yaml

2. Re-create nginx3 and test access to the service

[root@k8s01 ~]# k apply -f nginx3.yaml
[root@k8s01 ~]# k get pods -o wide | awk 'NR==2 {print $6}'   # get the Pod's IP address
10.244.236.137
[root@k8s01 ~]# curl -s 10.244.236.137 | awk '/h1/ {print $0}' # the service responds
<h1>Welcome to nginx!</h1>

[root@k8s01 ~]# k describe pods nginx3 | grep "Host Port"    # check the exposed host port
    Host Port:      30001/TCP
[root@k8s01 ~]# k get pods -o wide | awk 'NR==2 {print $7}'  # find the node running the pod
k8s02
# test against the node running the Pod
[root@k8s01 ~]# curl -s 192.168.248.21:30001 | awk '/h1/ {print $0}'
<h1>Welcome to nginx!</h1>
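Three different port fields are in play here and they are easy to confuse; a side-by-side sketch (port numbers illustrative):

```yaml
# In a Pod:
#   containerPort - the port the process listens on inside the Pod (mostly informational)
#   hostPort      - additionally binds that port on the ONE node running the Pod
containers:
- name: nginx
  ports:
  - containerPort: 80
    hostPort: 30001
---
# In a Service of type NodePort, the port is opened on EVERY node:
apiVersion: v1
kind: Service
spec:
  type: NodePort
  ports:
  - port: 80           # the Service's cluster-internal port
    targetPort: 80     # forwarded to the Pod's containerPort
    nodePort: 30002    # opened on all nodes (default range 30000-32767)
```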

3. Changes needed to reach the pod from the master or a non-Pod worker node

        Each Service is assigned a unique IP address (the clusterIP) when it is created. That IP is tied to the lifetime of the Service and does not change while the Service exists. Pods can be configured to talk to the Service, knowing that traffic to the Service is automatically load-balanced across the Pods backing it (see the official docs: Connecting Applications with Services).

① Access from outside the cluster, using a Service as the example (an Ingress also works)

# create the service file
[root@k8s01 ~]# cat >> nginx-service.yaml << EOF    
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
    - name: nginx
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    run: nginx3
EOF
[root@k8s01 ~]# k apply -f nginx-service.yaml # create the service

② Edit the nginx3 yaml file 

[root@k8s01 ~]# cat nginx3.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx3
  name: nginx3
spec:
  containers:
  - image: nginx:1.21
    name: nginx
    resources: {}
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx
      protocol: TCP
      containerPort: 80
  dnsPolicy: ClusterFirstWithHostNet
  restartPolicy: Never
  hostNetwork: true
status: {}
# The Service created above is given a random NodePort; the Pod can then be reached with
# curl <NodeIP>:<NodePort>. Replace <NodeIP> with an actual node IP address.

[root@k8s01 ~]# k replace -f nginx3.yaml --force 
pod "nginx3" deleted
pod/nginx3 replaced

③ Test

[root@k8s01 ~]# k get service nginx-service # check the assigned NodePort
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.97.117.100   <none>        80:32656/TCP   15m
[root@k8s01 ~]# curl -s 192.168.248.20:32656|awk '/h1/ {print $0}' # access works
<h1>Welcome to nginx!</h1>

6. Setting the command and arguments a container runs

# delete the pod and the service
[root@k8s01 ~]# k delete pods nginx3 --force
[root@k8s01 ~]# k delete service nginx-service 

1. Edit the yaml files

1. Delete all yaml files

[root@k8s01 ~]# for i in {1..3}; do rm -f nginx${i}.yaml; done

2. Write nginx1.yaml

The image and the command are changed:
[root@k8s01 ~]# cat >> nginx1.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  containers:
  - image: busybox
    imagePullPolicy: IfNotPresent
    name: nginx
    command: ["ping","www.baidu.com","-c","10"]
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
EOF

3. Write nginx2.yaml

The image and the command are changed:
[root@k8s01 ~]# cat >> nginx2.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx2
  name: nginx2
spec:
  containers:
  - image: busybox
    imagePullPolicy: IfNotPresent
    name: nginx
    command:
    - ping
    - www.baidu.com
    - -c
    - "10"
    resources: {}
  dnsPolicy: Default
  restartPolicy: OnFailure
EOF

4. Write nginx3.yaml

The image and the command are changed:
[root@k8s01 ~]# cat >> nginx3.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx3
  name: nginx3
spec:
  containers:
  - image: busybox
    imagePullPolicy: IfNotPresent
    name: nginx
    command: ["ping"]
    args: ["www.baidu.com","-c","10"]
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
EOF

5. Create the three Pods

[root@k8s01 ~]# for i in {1..3}; do k apply -f nginx${i}.yaml; done
pod/nginx1 created
pod/nginx2 created
pod/nginx3 created

6. View the run logs

[root@k8s01 ~]# k logs nginx1 
PING www.baidu.com (110.242.68.3): 56 data bytes
64 bytes from 110.242.68.3: seq=0 ttl=127 time=12.853 ms
64 bytes from 110.242.68.3: seq=1 ttl=127 time=13.670 ms
64 bytes from 110.242.68.3: seq=2 ttl=127 time=13.384 ms
64 bytes from 110.242.68.3: seq=3 ttl=127 time=13.487 ms
64 bytes from 110.242.68.3: seq=4 ttl=127 time=13.523 ms
64 bytes from 110.242.68.3: seq=5 ttl=127 time=13.780 ms
64 bytes from 110.242.68.3: seq=6 ttl=127 time=14.470 ms
64 bytes from 110.242.68.3: seq=7 ttl=127 time=14.060 ms
64 bytes from 110.242.68.3: seq=8 ttl=127 time=12.710 ms
64 bytes from 110.242.68.3: seq=9 ttl=127 time=14.310 ms

--- www.baidu.com ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max = 12.710/13.624/14.470 ms
[root@k8s01 ~]# k logs nginx2
PING www.baidu.com (110.242.68.4): 56 data bytes
64 bytes from 110.242.68.4: seq=0 ttl=127 time=13.175 ms
64 bytes from 110.242.68.4: seq=1 ttl=127 time=13.465 ms
64 bytes from 110.242.68.4: seq=2 ttl=127 time=14.807 ms
64 bytes from 110.242.68.4: seq=3 ttl=127 time=13.442 ms
64 bytes from 110.242.68.4: seq=4 ttl=127 time=14.531 ms
64 bytes from 110.242.68.4: seq=5 ttl=127 time=13.285 ms
64 bytes from 110.242.68.4: seq=6 ttl=127 time=13.944 ms
64 bytes from 110.242.68.4: seq=7 ttl=127 time=12.497 ms
64 bytes from 110.242.68.4: seq=8 ttl=127 time=13.932 ms
64 bytes from 110.242.68.4: seq=9 ttl=127 time=14.161 ms

--- www.baidu.com ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max = 12.497/13.723/14.807 ms
[root@k8s01 ~]# k logs nginx3
PING www.baidu.com (110.242.68.4): 56 data bytes
64 bytes from 110.242.68.4: seq=0 ttl=127 time=13.705 ms
64 bytes from 110.242.68.4: seq=1 ttl=127 time=13.956 ms
64 bytes from 110.242.68.4: seq=2 ttl=127 time=14.217 ms
64 bytes from 110.242.68.4: seq=3 ttl=127 time=13.594 ms
64 bytes from 110.242.68.4: seq=4 ttl=127 time=14.257 ms
64 bytes from 110.242.68.4: seq=5 ttl=127 time=13.735 ms
64 bytes from 110.242.68.4: seq=6 ttl=127 time=14.431 ms
64 bytes from 110.242.68.4: seq=7 ttl=127 time=13.181 ms
64 bytes from 110.242.68.4: seq=8 ttl=127 time=13.301 ms
64 bytes from 110.242.68.4: seq=9 ttl=127 time=13.492 ms

--- www.baidu.com ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max = 13.181/13.786/14.431 ms

From the output, all three Pods executed "ping www.baidu.com -c 10": the three ways of writing command and args are equivalent.

7. Verify with an image built from a Dockerfile

1. Write the Dockerfile
[root@k8s01 ~]# cat >> Dockerfile << EOF
FROM nginx:1.21
LABEL label='My_image'

ENTRYPOINT ["nginx","-g","daemon off;"]
EOF
2. Build the image from the Dockerfile
[root@k8s01 ~]# docker build -t nginx4:1.21 .
Sending build context to Docker daemon  4.024MB
Step 1/3 : FROM nginx:1.21
 ---> 605c77e624dd
Step 2/3 : LABEL label='My_image'
 ---> Running in df10dcec8d7b
Removing intermediate container df10dcec8d7b
 ---> 5d29e7b24699
Step 3/3 : ENTRYPOINT ["nginx","-g","daemon off;"]
 ---> Running in a8821fda2b20
Removing intermediate container a8821fda2b20
 ---> 868e92f29793
Successfully built 868e92f29793
Successfully tagged nginx4:1.21
3. Save the image as a tar archive
[root@k8s01 ~]# docker save -o nginx_4.tar nginx4:1.21
4. Send the tar archive to the worker nodes (passwordless ssh assumed)
[root@k8s01 ~]# for i in 192.168.248.2{1..2}; do scp nginx_4.tar root@$i:/root; done
5. Load the image
[root@k8s01 ~]# for i in 192.168.248.2{1..2}; do ssh root@$i "docker load -i /root/nginx_4.tar"; done
Loaded image: nginx4:1.21
Loaded image: nginx4:1.21
6. Write nginx4.yaml
[root@k8s01 ~]# cat >> nginx4.yaml  << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx4
  name: nginx4
spec:
  containers:
  - image: nginx4:1.21
    imagePullPolicy: IfNotPresent
    name: nginx
    command: ["ping"]
    args: ["www.baidu.com","-c","10"]
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
EOF
7. Start the Pod
[root@k8s01 ~]# k apply -f nginx4.yaml
pod/nginx4 created
[root@k8s01 ~]# k get pods nginx4
NAME     READY   STATUS               RESTARTS   AGE
nginx4   0/1     ContainerCannotRun   0          4s
8. View the details 
[root@k8s01 ~]# k describe pods nginx4|tail -1
  Warning  Failed     41m   kubelet            Error: failed to start container "nginx": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"ping\": executable file not found in $PATH": unknown

The error says the executable "ping" cannot be found in the container's $PATH.

The nginx image does not ship a ping binary, and command: ["ping"] replaces the image's ENTRYPOINT, so there is nothing to execute.
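The failure follows from how the Pod spec fields map onto the image's Dockerfile instructions. A sketch of the override rules (the nginx4 image is the one built above):

```yaml
# Kubernetes override rules:
#   neither command nor args set -> image ENTRYPOINT + image CMD run unchanged
#   command set                  -> replaces ENTRYPOINT (image CMD is ignored)
#   args set only                -> replaces CMD (image ENTRYPOINT still runs)
#   both set                     -> command runs with args
spec:
  containers:
  - image: nginx4:1.21                  # ENTRYPOINT ["nginx","-g","daemon off;"]
    name: nginx
    command: ["ping"]                   # replaces the ENTRYPOINT entirely...
    args: ["www.baidu.com","-c","10"]   # ...so the pod tries to run "ping",
                                        # which the nginx image does not contain
```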

7. Setting environment variables in a container

Delete all Pods
[root@k8s01 ~]# k delete pods --all --force

1. Edit nginx3.yaml

Remove the command and args part from the yaml file
[root@k8s01 ~]# sed -i "/command/,+1d" nginx3.yaml

Change the image back to nginx:1.21
[root@k8s01 ~]# sed -i '/- image/ s/busybox/nginx:1.21/' nginx3.yaml

Add the environment variables to the yaml and run env
[root@k8s01 ~]# sed -i '/name: nginx$/ a\    env:\n    - name: WELCOME\n      value: "this is a test page"\n    - name: WRONG\n      value: "try again"\n    command: ["env"]' nginx3.yaml

The file after the changes:

[root@k8s01 ~]# cat nginx3.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx3
  name: nginx3
spec:
  containers:
  - image: nginx:1.21
    imagePullPolicy: IfNotPresent
    name: nginx
    env:
    - name: WELCOME
      value: "this is a test page"
    - name: WRONG
      value: "try again"
    command: ["env"]
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never

2. Create the Pod

[root@k8s01 ~]# k apply -f nginx3.yaml

3. View the Pod logs

[root@k8s01 ~]# k logs nginx3 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=nginx3
WELCOME=this is a test page
WRONG=try again
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
HOME=/root

The added variables now appear among the container's environment variables.

4. Notes

5. Printing a specific field with kubectl

[root@k8s01 ~]# k get pods nginx3 -o custom-columns=NODE:.spec.nodeName
NODE
k8s02

The NODE column name is arbitrary. With -o custom-columns, kubectl prints a header row above the values, which is why NODE appears in the output before the actual node name; to print only the value, add --no-headers (or use -o jsonpath='{.spec.nodeName}').
Example:
[root@k8s01 ~]# kubectl get pods nginx3 -o custom-columns=NODE:.spec.nodeName --no-headers | awk '{print $1}'
k8s02
The available fields can be looked up with kubectl explain <type>.<fieldName>[.<fieldName>]
[root@k8s01 ~]# k explain pods.spec
...
   nodeName	<string>
     NodeName is a request to schedule this pod onto a specific node. If it is
     non-empty, the scheduler simply schedules this pod onto that node, assuming
     that it fits resource requirements.
...

8. Container resource limits

Delete all Pods
[root@k8s01 ~]# k delete pods --all --force

1. Edit nginx3.yaml

[root@k8s01 ~]# cat > nginx3.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx3
  name: nginx3
spec:
  containers:
  - image: progrium/stress
    imagePullPolicy: IfNotPresent
    name: nginx
    args:
    - -c
    - "1"
    resources:
      requests:
        cpu: 0.1
      limits:
        cpu: 0.3
  dnsPolicy: ClusterFirst
  restartPolicy: Never
EOF

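For reference, cpu: 0.3 is equivalent to 300m (millicores), and memory follows the same requests/limits shape; a sketch (values illustrative):

```yaml
resources:
  requests:            # what the scheduler reserves when placing the Pod
    cpu: 100m          # 100m == 0.1 CPU
    memory: 64Mi
  limits:              # hard ceiling enforced at runtime
    cpu: 300m          # above this the container is throttled, not killed
    memory: 128Mi      # exceeding this gets the container OOM-killed (exit code 137)
```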

2. Create the Pod

[root@k8s01 ~]# k apply -f nginx3.yaml

3. Verify the result: the limit has been reached

1. Query from inside the container with kubectl

[root@k8s01 ~]# k exec -it nginx3 -- /bin/bash -c "ps -aux | grep stress"
root          1  0.0  0.0   7304   428 ?        Ss   04:22   0:00 /usr/bin/stress --verbose -c 1
root          6 29.9  0.0   7304   100 ?        R    04:22   3:24 /usr/bin/stress --verbose -c 1
root         32  0.0  0.0  17952  1400 pts/0    Ss+  04:34   0:00 /bin/bash -c ps -aux | grep stress
root         38  0.0  0.0   8860   644 pts/0    S+   04:34   0:00 grep stress
# the stress worker (PID 6) is pinned at roughly 30% of one CPU, matching the 0.3-core limit

2. Query with Docker on the Node

[root@k8s02 ~]# docker exec -it 2a3c /bin/bash -c "ps -aux | grep stress"
root          1  0.0  0.0   7304   428 ?        Ss   04:22   0:00 /usr/bin/stress --verbose -c 1
root          6 29.9  0.0   7304   100 ?        R    04:22   6:40 /usr/bin/stress --verbose -c 1
root         46  0.0  0.0  17952  1400 pts/0    Ss+  04:45   0:00 /bin/bash -c ps -aux | grep stress
root         52  0.0  0.0   8860   648 pts/0    S+   04:45   0:00 grep stress

9. Running multiple containers in the same Pod

Delete all Pods
[root@k8s01 ~]# k delete pods --all --force

1. Edit the yaml file

[root@k8s01 ~]# cat > nginx3.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx3
  name: nginx3
spec:
  containers:
  - image: nginx:1.21
    imagePullPolicy: IfNotPresent
    name: nginx
    resources:
      requests:
        cpu: 0.1
      limits:
        cpu: 0.3
  - image: busybox
    name: busybox
    command:
    - sleep
    - "3600"
  dnsPolicy: ClusterFirst
  restartPolicy: Never
EOF

2. Create the Pod

[root@k8s01 ~]# k apply -f nginx3.yaml

3. Check the status

[root@k8s01 ~]# k get pods
NAME     READY   STATUS    RESTARTS   AGE
nginx3   2/2     Running   0          109s

4. Questions to consider

1. Do the two containers start in a fixed order?

        When a Pod has multiple containers, they are started together and there is no fixed startup order; Kubernetes does not enforce one, so the containers may come up in parallel. This is what allows the containers in a Pod to cooperate and share resources.

2. If one container fails to start, does it affect the other?

1. Change one image to a nonexistent one
[root@k8s01 ~]# sed -i '/image: b/ s/busybox/oxxxxxx-321/' nginx3.yaml
2. Apply 
[root@k8s01 ~]# k get pods
NAME     READY   STATUS             RESTARTS   AGE
nginx3   1/2     ImagePullBackOff   0          5m25s
3. Verify
[root@k8s01 ~]# k get pods -o wide | awk 'NR==2 {print $6}'    # get the Pod's IP address
10.244.236.150

[root@k8s01 ~]# curl -s 10.244.236.150 | grep "h1"    # the web service is still reachable
<h1>Welcome to nginx!</h1>

        A container that fails to start in a Pod usually does not directly prevent the other containers from starting: each container has its own runtime environment and resources, and they start and run independently. That said, one container can still affect another, because they may share resources such as volumes or the network namespace.
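A common, concrete form of that sharing is an emptyDir volume mounted into both containers; a minimal sketch (names and paths illustrative):

```yaml
spec:
  volumes:
  - name: shared
    emptyDir: {}             # lives as long as the Pod; shared by its containers
  containers:
  - image: nginx:1.21
    name: nginx
    volumeMounts:
    - name: shared
      mountPath: /usr/share/nginx/html
  - image: busybox
    name: busybox
    command: ["sh","-c","echo hi > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data       # what busybox writes here, nginx serves
```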

3. How do you exec into a specific container? (changing the image back to a valid one and re-applying is not repeated here)

[root@k8s01 ~]# k exec -it nginx3 -c nginx -- /bin/bash
root@nginx3:/# exit
exit
[root@k8s01 ~]# k exec -it nginx3 -c busybox -- /bin/sh
/ # exit

10. Init containers

Delete all Pods
[root@k8s01 ~]# k delete pods --all --force

1. Advantages of initContainers

Init containers run to completion, one after another, before any app container starts. They can carry setup tools and permissions that the application image should not ship, and they let you delay the app until its preconditions hold, for example waiting for a dependent service or preparing data in a shared volume.

2. Edit the yaml file

[root@k8s01 ~]# cat > nginx3.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx3
  name: nginx3
spec:
  initContainers:
  - image: busybox
    name: busybox
    command:
    - ping
    - www.baidu.com
    - -c
    - "5"
  containers:
  - image: nginx:1.21
    imagePullPolicy: IfNotPresent
    name: nginx
    resources:
      requests:
        cpu: 0.1
      limits:
        cpu: 0.3
  dnsPolicy: ClusterFirst
  restartPolicy: Never
EOF

3. Create the Pod

[root@k8s01 ~]# k apply -f nginx3.yaml

4. View the init container's output

[root@k8s01 ~]# k logs nginx3 -c busybox 
PING www.baidu.com (110.242.68.4): 56 data bytes
64 bytes from 110.242.68.4: seq=0 ttl=127 time=34.744 ms
64 bytes from 110.242.68.4: seq=1 ttl=127 time=28.026 ms
64 bytes from 110.242.68.4: seq=2 ttl=127 time=53.680 ms
64 bytes from 110.242.68.4: seq=3 ttl=127 time=29.150 ms
64 bytes from 110.242.68.4: seq=4 ttl=127 time=26.672 ms

--- www.baidu.com ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss

5. Simulate a failure

1. Edit the yaml file, changing the address to an unreachable one

[root@k8s01 ~]# sed -i '/www/ s/www.baidu.com/abc.abc.abc/' nginx3.yaml

2. Refresh the Pod

[root@k8s01 ~]# k replace -f nginx3.yaml --force 
pod "nginx3" deleted
pod/nginx3 replaced

3. View the pod

[root@k8s01 ~]# k get pods
NAME     READY   STATUS       RESTARTS   AGE
nginx3   0/1     Init:Error   0          114s
[root@k8s01 ~]# k logs nginx3 -c busybox 
ping: bad address 'abc.abc.abc'
[root@k8s01 ~]# k logs nginx3 -c nginx 
Error from server (BadRequest): container "nginx" in pod "nginx3" is waiting to start: PodInitializing
The invalid domain makes the init container fail; because restartPolicy is Never, it is not retried and the Pod never starts.

4. Conclusion

If any init container fails, the app containers are never started. With restartPolicy Always or OnFailure, the kubelet restarts the failed init container repeatedly; with Never, the Pod is marked failed immediately, as seen here.

11. Pod health checks

Delete all Pods
[root@k8s01 ~]# k delete pods --all --force

1. Introduction to container probes

Kinds: liveness, readiness, and startup probes

See the official manual: Configure Liveness, Readiness and Startup Probes

2. Edit the yaml file

[root@k8s01 ~]# cat > nginx3.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx3
  name: nginx3
spec:
  containers:
  - image: nginx:1.21
    imagePullPolicy: IfNotPresent
    name: nginx
    resources:
      requests:
        cpu: 0.1
      limits:
        cpu: 0.3
    startupProbe:
      tcpSocket:
        port: 80
      failureThreshold: 15
      successThreshold: 1
      periodSeconds: 10
      timeoutSeconds: 1
    livenessProbe:
      tcpSocket:
        port: 80
      failureThreshold: 3
      successThreshold: 1
      periodSeconds: 5
      timeoutSeconds: 1   
  dnsPolicy: ClusterFirst
  restartPolicy: Never
EOF
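tcpSocket is only one of the probe handlers; httpGet and exec are the other two, and every probe type accepts the same timing fields. A sketch (paths illustrative):

```yaml
livenessProbe:
  httpGet:               # success = HTTP status in the 2xx/3xx range
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
readinessProbe:
  exec:                  # success = command exits with status 0
    command: ["cat", "/tmp/ready"]
  periodSeconds: 5
```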

3. Create the Pod

[root@k8s01 ~]# k apply -f nginx3.yaml

4. How the kubelet picks Pods to evict

Under node pressure, the kubelet ranks Pods for eviction roughly as follows: Pods whose usage of the starved resource exceeds their requests go first, ordered by Pod priority and then by how far usage is above requests; Pods using less than their requests are evicted last.

5. Questions

Does the liveness probe only begin once the startup probe has succeeded for the first time?

And after the startup probe succeeds, does it keep running while the liveness probe is working? (Answer: while a startup probe is configured, the other probes are disabled until it succeeds; once it has succeeded, it never runs again and the liveness probe takes over.)

12. PV and PVC

1. Introduction

A PersistentVolume (PV) is a piece of storage in the cluster, provisioned by an administrator (or dynamically through a StorageClass). A PersistentVolumeClaim (PVC) is a user's request for storage; Kubernetes binds the claim to a matching PV, and Pods then mount the storage through the claim.

2. Lab preparation

1. Install nfs-server on k8s01

yum install -y nfs-utils rpcbind

2. Worker nodes - install the nfs client

Install the package
[root@k8s01 ~]# for i in 192.168.248.2{1..2}; do ssh root@${i} "yum -y install nfs-utils"; done
Note: nfs-utils is a package, not a service, so there is no nfs-utils unit to enable on the clients. If NFSv3 is used, enable rpcbind instead:
[root@k8s01 ~]# for i in 192.168.248.2{1..2}; do ssh root@${i} "systemctl enable --now rpcbind"; done

3. Configure the nfs export and start the service

[root@k8s01 ~]# mkdir /nfsData  # create the shared directory first
[root@k8s01 ~]# echo "/nfsData    *(rw,no_root_squash)" > /etc/exports
# The NFS server shares /nfsData with all clients, read-write, without squashing root privileges

[root@k8s01 ~]# systemctl enable --now rpcbind
[root@k8s01 ~]# systemctl enable --now nfs
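A few other /etc/exports options worth knowing alongside rw and no_root_squash (shown for reference; the lab keeps the permissive defaults):

```
/nfsData    192.168.248.0/24(rw,sync,no_root_squash)
# rw / ro           read-write or read-only access
# sync / async      sync commits writes to disk before replying (safer, slower)
# root_squash       map client root to an unprivileged user (the NFS default)
# no_root_squash    keep client root as root, so root in a container can write
# Listing a subnet instead of * restricts which clients may mount the share.
```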

4. Test the service

[root@k8s01 ~]# showmount -e
Export list for k8s01:
/nfsData *

3. Write the YAML files

1. Write the PV yaml file

[root@k8s01 ~]# cat >> pv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfsData
    server: 192.168.248.20
EOF

2. Write the PVC yaml file

[root@k8s01 ~]# cat >> pvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  volumeName: pv01
  resources:
    requests:
      storage: 1Gi
EOF

4. Deploy the PV and PVC

[root@k8s01 ~]# k apply -f pv.yaml 
persistentvolume/pv01 created
[root@k8s01 ~]# k apply -f pvc.yaml 
persistentvolumeclaim/myclaim created

Check the status: the pv and pvc are now bound
[root@k8s01 ~]# k get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv01   1Gi        RWX            Recycle          Bound    default/myclaim                           12m
[root@k8s01 ~]# k get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Bound    pv01     1Gi        RWX                           12m

5. Use the PV and PVC

1. Edit nginx3.yaml

Add the PVC-backed volume
[root@k8s01 ~]# cat > nginx3.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx3
  name: nginx3
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myclaim
  containers:
  - image: nginx:1.21
    imagePullPolicy: IfNotPresent
    name: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html/
  dnsPolicy: ClusterFirst
  restartPolicy: Never
EOF

2. Deploy the Pod

[root@k8s01 ~]# k apply -f nginx3.yaml

6. Test

1. Access via url

[root@k8s01 ~]# echo Hello_World > /nfsData/index.html
[root@k8s01 ~]# curl 10.244.235.147
Hello_World
The content written into the export on the host is served by the Pod.

2. Check from inside the container

[root@k8s01 ~]# k exec -it nginx3 -- /bin/bash -c "cat /usr/share/nginx/html/index.html"
Hello_World

When the contents of the nfsData directory are deleted on the host, the files disappear inside the container as well.