Day 24: Give Your k8s Pods Multiple Network Interfaces - Hands-On

Yesterday we went over the motivation and architecture of Multus CNI. Today we'll set up the Multus environment and test it, using Open vSwitch (OVS) to provide the additional interface.

P.S. This is a relentlessly hands-on article.

Multus

Detailed deployment manifests can be found on its GitHub repository.

P.S. If your k8s version is below 1.16, look for the pre-1.16.yml series of files on GitHub; this walkthrough uses Kubernetes v1.18.15.
  • Deploy Multus (apply and verification commands follow the manifest)
multus-daemonset.yml

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: network-attachment-definitions.k8s.cni.cncf.io
spec:
  group: k8s.cni.cncf.io
  scope: Namespaced
  names:
    plural: network-attachment-definitions
    singular: network-attachment-definition
    kind: NetworkAttachmentDefinition
    shortNames:
    - net-attach-def
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          description: 'NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing
            Working Group to express the intent for attaching pods to one or more logical or physical
            networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec'
          type: object
          properties:
            apiVersion:
              description: 'APIVersion defines the versioned schema of this
                representation of an object. Servers should convert recognized schemas to the
                latest internal value, and may reject unrecognized values. More info:
                https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
              type: string
            kind:
              description: 'Kind is a string value representing the REST resource this
                object represents. Servers may infer this from the endpoint the client
                submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
              type: string
            metadata:
              type: object
            spec:
              description: 'NetworkAttachmentDefinition spec defines the desired state of a network attachment'
              type: object
              properties:
                config:
                  description: 'NetworkAttachmentDefinition config is a JSON-formatted CNI configuration'
                  type: string
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: multus
rules:
  - apiGroups: ["k8s.cni.cncf.io"]
    resources:
      - '*'
    verbs:
      - '*'
  - apiGroups:
      - ""
    resources:
      - pods
      - pods/status
    verbs:
      - get
      - update
  - apiGroups:
      - ""
      - events.k8s.io
    resources:
      - events
    verbs:
      - create
      - patch
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: multus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: multus
subjects:
- kind: ServiceAccount
  name: multus
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: multus
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: multus-cni-config
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  # NOTE: If you'd prefer to manually apply a configuration file, you may create one here.
  # In the case you'd like to customize the Multus installation, you should change the arguments to the Multus pod
  # change the "args" line below from
  # - "--multus-conf-file=auto"
  # to:
  # "--multus-conf-file=/tmp/multus-conf/70-multus.conf"
  # Additionally -- you should ensure that the name "70-multus.conf" is the alphabetically first name in the
  # /etc/cni/net.d/ directory on each node, otherwise, it will not be used by the Kubelet.
  cni-conf.json: |
    {
      "name": "multus-cni-network",
      "type": "multus",
      "capabilities": {
        "portMappings": true
      },
      "delegates": [
        {
          "cniVersion": "0.3.1",
          "name": "default-cni-network",
          "plugins": [
            {
              "type": "flannel",
              "name": "flannel.1",
              "delegate": {
                "isDefaultGateway": true,
                "hairpinMode": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      ],
      "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        image: docker.io/nfvpe/multus:stable
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      terminationGracePeriodSeconds: 10
      volumes:
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 70-multus.conf
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        # ppc64le support requires multus:latest for now (supported in 3.3 or later).
        image: docker.io/nfvpe/multus:stable-ppc64le
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "90Mi"
          limits:
            cpu: "100m"
            memory: "90Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      terminationGracePeriodSeconds: 10
      volumes:
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 70-multus.conf
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-arm64v8
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        image: docker.io/nfvpe/multus:stable-arm64v8
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "90Mi"
          limits:
            cpu: "100m"
            memory: "90Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      terminationGracePeriodSeconds: 10
      volumes:
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 70-multus.conf
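
With the manifest saved, deployment is a single apply. A minimal verification sketch (the app=multus label comes from the DaemonSets above):

kubectl apply -f multus-daemonset.yml
kubectl get pods -n kube-system -l app=multus -o wide

# Multus drops its generated config into /etc/cni/net.d on each node;
# per the ConfigMap note, it must sort alphabetically first for the kubelet to use it
ls /etc/cni/net.d/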

OVS Installation and Setup

  • First, install the OVS package
sudo apt update && sudo apt install openvswitch-switch -y
  • Add a bridge
#ovs-vsctl add-br <Bridge Name>
ovs-vsctl add-br br0

P.S. To delete a bridge, use ovs-vsctl del-br <Bridge Name>.
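
To double-check the bridge before wiring anything to it, ovs-vsctl can list the bridges and show the topology:

# Confirm br0 exists
ovs-vsctl list-br
# Show bridges, ports, and interfaces
ovs-vsctl show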

  • Create the CNI manifest (apply and verification commands follow the YAML)
    ovs-cni.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ovs-cni-amd64
  namespace: kube-system
  labels:
    tier: node
    app: ovs-cni
spec:
  selector:
    matchLabels:
      app: ovs-cni
  template:
    metadata:
      labels:
        tier: node
        app: ovs-cni
      annotations:
        description: OVS CNI allows users to attach their Pods/VMs to Open vSwitch bridges available on nodes
    spec:
      serviceAccountName: ovs-cni-marker
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      initContainers:
      - name: ovs-cni-plugin
        image: quay.io/kubevirt/ovs-cni-plugin:latest
        command: ['cp', '/ovs', '/host/opt/cni/bin/ovs']
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        volumeMounts:
        - name: cnibin
          mountPath: /host/opt/cni/bin
      containers:
      - name: ovs-cni-marker
        image: quay.io/kubevirt/ovs-cni-marker:latest
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        args:
          - -node-name
          - $(NODE_NAME)
          - -ovs-socket
          - /host/var/run/openvswitch/db.sock
        volumeMounts:
          - name: ovs-var-run
            mountPath: /host/var/run/openvswitch
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
      volumes:
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
        - name: ovs-var-run
          hostPath:
            path: /var/run/openvswitch
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ovs-cni-marker-cr
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/status
  verbs:
  - get
  - update
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ovs-cni-marker-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ovs-cni-marker-cr
subjects:
- kind: ServiceAccount
  name: ovs-cni-marker
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ovs-cni-marker
  namespace: kube-system
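
Applying the manifest and confirming the plugin pods are running might look like the sketch below; the marker should also advertise each OVS bridge as a node resource, which the resourceName annotation in the next step refers to (<node-name> is a placeholder):

kubectl apply -f ovs-cni.yaml
kubectl get pods -n kube-system -l app=ovs-cni

# The marker reports available bridges (e.g. br0) in the node's status
kubectl describe node <node-name> | grep ovs-cni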

  • Create the OVS NetworkAttachmentDefinition

ovs-net-crd.yaml

cat <<EOF >./ovs-net-crd.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-net
  annotations:
    k8s.v1.cni.cncf.io/resourceName: ovs-cni.network.kubevirt.io/<Bridge Name>
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "ovs",
      "bridge": "<Bridge Name>"
    }'
EOF
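
Here <Bridge Name> should be replaced with the br0 bridge created earlier. Applying the definition and listing it through the CRD's short name (net-attach-def, defined in the Multus manifest) is a quick sanity check:

kubectl apply -f ovs-net-crd.yaml
kubectl get net-attach-def ovs-net -o yaml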

  • Finally, deploy the CNI Network Controller (apply commands follow the manifest)

unix-daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: network-controller-server-unix
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: network-controller-server-unix
  template:
    metadata:
      labels:
        name: network-controller-server-unix
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: network-controller-server-unix
        image: sdnvortex/network-controller:v0.4.9
        securityContext:
          privileged: true
        command: ["/go/bin/server"]
        args: ["-unix=/tmp/vortex.sock", "-netlink-gc"]
        volumeMounts:
        - mountPath: /var/run/docker/netns
          name: docker-ns
          # a volumeMount path must not contain ':'; shared propagation would use:
          #mountPropagation: Bidirectional
        - mountPath: /var/run/docker.sock
          name: docker-sock
        - mountPath: /var/run/openvswitch/db.sock
          name: ovs-sock
        - mountPath: /tmp/
          name: grpc-sock
      volumes:
      - name: docker-ns
        hostPath:
          path: /run/docker/netns
      - name: docker-sock
        hostPath:
          path: /run/docker.sock
      - name: ovs-sock # the UNIX version only issues add-port calls, so db.sock is enough
        hostPath:
          path: /run/openvswitch/db.sock
      - name: grpc-sock
        hostPath:
          path: /tmp/vortex
      hostNetwork: true
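
As before, apply it and confirm the controller pods are up; given the hostPath volume above, the gRPC socket should then appear under /tmp/vortex on each node:

kubectl apply -f unix-daemonset.yaml
kubectl get pods -n kube-system -l name=network-controller-server-unix

# The server listens on a UNIX socket exposed through the hostPath volume
ls /tmp/vortex/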

Testing

  • Finally, run a quick test to confirm the extra network interface gets added (verification commands follow the manifest)
    test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu-container
        image: ubuntu:20.04
        args: [bash, -c, 'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 100; done']
      initContainers:
      - name: init-network-client
        image: sdnvortex/network-controller:v0.4.9
        command: ["/go/bin/client"]
        args: ["-s=unix:///tmp/vortex.sock", "-b=br0", "-n=eth1", "-i=192.168.2.160/23"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_UUID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
        volumeMounts:
        - mountPath: /tmp/
          name: grpc-sock
      volumes:
      - name: grpc-sock
        hostPath:
          path: /tmp/vortex/
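
After deploying, the extra interface should be visible inside the pod. Since the ubuntu base image may not ship iproute2, reading /sys/class/net avoids installing anything (a minimal sketch):

kubectl apply -f test.yaml

# eth1 should show up next to eth0 and lo
POD=$(kubectl get pods -l app=ubuntu -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- ls /sys/class/net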

  • Results: the pod starts up normally, and inspecting the network interfaces inside the pod shows the newly added interface.

Conclusion

Once the second interface is created, the pod can communicate with other pods through it. Since I used OVS, any traffic that flows through OVS is reachable, regardless of how the pods are segmented by namespace.
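
For example, cross-pod connectivity over the second interface could be verified with a ping; this sketch assumes a second pod was attached to br0 with the hypothetical address 192.168.2.161/23 and that iputils-ping is installed in the image:

# From the first pod (192.168.2.160), ping the second pod's OVS-attached address
kubectl exec deploy/ubuntu-deployment -- ping -c 3 192.168.2.161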

