This article describes how to enlarge the Pod CIDR range in a Kubernetes cluster deployed with kubeadm. The question and recommended answer below should be a useful reference for anyone facing the same problem.

Problem Description

I deployed my cluster with --pod-network-cidr set, and have created a new IP pool using calicoctl to move the pods to that range. The problem I am having is exactly what I need to change on the Kubernetes side to make the Pod CIDR range change. Do I make changes in the API server, controller manager, and scheduler, or are there only specific parts I need to change? I have attempted changing only the controller manager, and those control-plane pods go into a crash loop after changing --cluster-cidr in the YAML.

The output in the controller-manager logs is below:

controllermanager.go:235] error starting controllers: failed to mark cidr[192.168.0.0/24] at idx [0] as occupied for node: : cidr 192.168.0.0/24 is out the range of cluster cidr 10.0.0.0/16

Recommended Answer

Changing a cluster's CIDR is not a simple task. I managed to reproduce your scenario and to change it using the following steps.

Changing the IP pool

The process is as follows:

  1. Install calicoctl as a Kubernetes pod (Source)
  2. Add a new IP pool (Source).
  3. Disable the old IP pool. This prevents new IPAM allocations from the old IP pool without affecting the networking of existing workloads.
  4. Change the podCIDR parameter on the nodes (Source)
  5. Change --cluster-cidr in kube-controller-manager.yaml on the master node. (Credits to the OP on that)
  6. Recreate all existing workloads that were assigned an address from the old IP pool.
  7. Remove the old IP pool.

Let's get started.

In this example, we are going to replace 192.168.0.0/16 with 10.0.0.0/8.

  • Install calicoctl as a Kubernetes pod
$ kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml

Set up an alias:

$ alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl "
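
A quick way to confirm the alias works is to run a read-only subcommand through it, for example:

$ calicoctl version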

  • Add a new IP pool:

    calicoctl create -f -<<EOF
    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: new-pool
    spec:
      cidr: 10.0.0.0/8
      ipipMode: Always
      natOutgoing: true
    EOF
    

    We should now have two enabled IP pools, which we can see when running calicoctl get ippool -o wide:

    NAME                  CIDR             NAT    IPIPMODE   DISABLED
    default-ipv4-ippool   192.168.0.0/16   true   Always     false
    new-pool              10.0.0.0/8       true   Always     false
    

  • Disable the old IP pool.

    First, save the IP pool definition to disk:

    calicoctl get ippool -o yaml > pool.yaml
    

    pool.yaml should look something like this:

    apiVersion: projectcalico.org/v3
    items:
    - apiVersion: projectcalico.org/v3
      kind: IPPool
      metadata:
        name: default-ipv4-ippool
      spec:
        cidr: 192.168.0.0/16
        ipipMode: Always
        natOutgoing: true
    - apiVersion: projectcalico.org/v3
      kind: IPPool
      metadata:
        name: new-pool
      spec:
        cidr: 10.0.0.0/8
        ipipMode: Always
        natOutgoing: true
    

    Edit the file, adding disabled: true to the default-ipv4-ippool IP pool:

    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: default-ipv4-ippool
    spec:
      cidr: 192.168.0.0/16
      ipipMode: Always
      natOutgoing: true
      disabled: true
    

    Apply the changes:

    calicoctl apply -f pool.yaml
    

    We should see the change reflected in the output of calicoctl get ippool -o wide:

    NAME                  CIDR             NAT    IPIPMODE   DISABLED
    default-ipv4-ippool   192.168.0.0/16   true   Always     true
    new-pool              10.0.0.0/8       true   Always     false
    

  • Change the podCIDR parameter on the nodes:

    Override the podCIDR parameter on each Kubernetes Node resource with the new IP range, for example with the following commands:

    $ kubectl get no kubeadm-0 -o yaml > file.yaml; sed -i "s~192.168.0.0/24~10.0.0.0/16~" file.yaml; kubectl delete no kubeadm-0 && kubectl create -f file.yaml
    $ kubectl get no kubeadm-1 -o yaml > file.yaml; sed -i "s~192.168.1.0/24~10.1.0.0/16~" file.yaml; kubectl delete no kubeadm-1 && kubectl create -f file.yaml
    $ kubectl get no kubeadm-2 -o yaml > file.yaml; sed -i "s~192.168.2.0/24~10.2.0.0/16~" file.yaml; kubectl delete no kubeadm-2 && kubectl create -f file.yaml
    
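    To see which podCIDR each node currently carries (useful when choosing the per-node replacement values), a jsonpath query such as the following can help; the query itself is an illustration, not part of the original procedure:

    $ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'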

    We had to perform this action for every node in the cluster. Pay attention to the IP ranges: they differ from one node to the next. With more than a couple of nodes, a loop like the sketch below can automate the edit.
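
    A minimal sketch of such a loop, assuming the node names (kubeadm-0 through kubeadm-2) and the per-node old/new CIDR pairs from the example above; adapt both to your cluster:

    # For each node: dump the Node manifest, swap the per-node podCIDR,
    # then delete and recreate the Node object with the new value.
    for i in 0 1 2; do
      node="kubeadm-$i"
      kubectl get no "$node" -o yaml > "$node.yaml"
      sed -i "s~192.168.$i.0/24~10.$i.0.0/16~" "$node.yaml"
      kubectl delete no "$node" && kubectl create -f "$node.yaml"
    done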

    Change the CIDR in the kubeadm-config ConfigMap and in kube-controller-manager.yaml

    Edit the kubeadm-config ConfigMap and change podSubnet to the new IP range:

    kubectl -n kube-system edit cm kubeadm-config
    
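    Inside the editor, the fragment to change sits in the ClusterConfiguration; it looks roughly like this (the surrounding fields may differ slightly between kubeadm versions):

    networking:
      dnsDomain: cluster.local
      podSubnet: 10.0.0.0/8        # previously 192.168.0.0/16
      serviceSubnet: 10.96.0.0/12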

    Also, change --cluster-cidr in /etc/kubernetes/manifests/kube-controller-manager.yaml on the master node.

    $ sudo cat /etc/kubernetes/manifests/kube-controller-manager.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        component: kube-controller-manager
        tier: control-plane
      name: kube-controller-manager
      namespace: kube-system
    spec:
      containers:
      - command:
        - kube-controller-manager
        - --allocate-node-cidrs=true
        - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
        - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
        - --bind-address=127.0.0.1
        - --client-ca-file=/etc/kubernetes/pki/ca.crt
        - --cluster-cidr=10.0.0.0/8
        - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
        - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
        - --controllers=*,bootstrapsigner,tokencleaner
        - --kubeconfig=/etc/kubernetes/controller-manager.conf
        - --leader-elect=true
        - --node-cidr-mask-size=24
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
        - --root-ca-file=/etc/kubernetes/pki/ca.crt
        - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
        - --service-cluster-ip-range=10.96.0.0/12
        - --use-service-account-credentials=true
    
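    Since kube-controller-manager is a static pod, the kubelet recreates it automatically once this manifest is saved. One way to confirm the new flag took effect, using the component label from the manifest above:

    $ kubectl -n kube-system get pod -l component=kube-controller-manager \
        -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep cluster-cidr
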
  • Recreate all existing workloads using IPs from the disabled pool. In this example, kube-dns is the only workload networked by Calico:

    kubectl delete pod -n kube-system kube-dns-6f4fd4bdf-8q7zp
    
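    With more workloads, deleting pods one by one gets tedious. A rough sketch for finding and deleting every pod that still holds an address from the old pool; the pattern assumes the 192.168.0.0/16 example range, and only pods managed by a controller (Deployment, DaemonSet, and so on) come back automatically:

    kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{" "}{.status.podIP}{"\n"}{end}' \
        | awk '$3 ~ /^192\.168\./ {print $1, $2}' \
        | xargs -n2 sh -c 'kubectl delete pod -n "$0" "$1"'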

    Check that the new workload now has an address in the new IP pool by running calicoctl get wep --all-namespaces:

    NAMESPACE     WORKLOAD                   NODE      NETWORKS            INTERFACE
    kube-system   kube-dns-6f4fd4bdf-8q7zp   vagrant   10.0.24.8/32        cali800a63073ed
    

  • Delete the old IP pool:

    calicoctl delete pool default-ipv4-ippool
    

  • Doing it correctly from scratch

    To deploy a cluster with a specific Pod IP range using kubeadm and Calico, you need to init the cluster with --pod-network-cidr=192.168.0.0/24 (where 192.168.0.0/24 is your desired range), and then you need to tune the Calico manifest before applying it to your fresh cluster, as in the example below.
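
    For instance, with the 10.0.0.0/8 range used earlier in this answer (adjust the CIDR to your needs):

    $ sudo kubeadm init --pod-network-cidr=10.0.0.0/8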

    To tune Calico before applying it, you have to download its YAML file and change the network range.

  • Download the Calico networking manifest for Kubernetes.
    $ curl https://docs.projectcalico.org/manifests/calico.yaml -O
    

  • If you are using the manifest's default pod CIDR of 192.168.0.0/16, skip to the next step. If you are using a different pod CIDR, use the following commands to set an environment variable called POD_CIDR containing your pod CIDR, and replace the default 192.168.0.0/16 in the manifest with it.

    $ POD_CIDR="<your-pod-cidr>"
    $ sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
    
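    As a quick sanity check that the substitution landed, list every line of the manifest that now contains your range:

    $ grep -n "$POD_CIDR" calico.yaml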

  • Apply the manifest using the following command.

    $ kubectl apply -f calico.yaml
    

This concludes the article on how to make the Pod CIDR range larger in a Kubernetes cluster deployed with kubeadm. We hope the recommended answer above helps.
