Implement IPv6 support for secondary network#7762

Open
wenqiq wants to merge 4 commits into antrea-io:main from wenqiq:multi-nic-ipv6

Conversation

wenqiq (Contributor) commented Feb 6, 2026

Add IPv6 and dual-stack support for secondary network

Previously, Antrea IPAM only supported IPv4 for secondary networks.
This change removes that restriction by allowing multiple IPPool
allocators (one per IP family) to be used during address allocation.

Test done:

vlan-pod1:

kubectl get pod vlan-pod1 -n testvlannetwork-9mpw214k -oyaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "vlan-net1",
          "interface": "eth1",
          "ips": [
              "148.14.24.2",
              "10:2400:1::2"
          ],
          "mac": "aa:bb:cc:dd:ee:01",
          "dns": {}
      },{
          "name": "vlan-net2",
          "interface": "eth2",
          "ips": [
              "148.14.25.112",
              "10:2400:2::3"
          ],
          "mac": "aa:bb:cc:dd:ee:02",
          "dns": {}
      },{
          "name": "antrea",
          "interface": "eth0",
          "ips": [
              "10.244.0.4",
              "fd00:10:244::4"
          ],
          "mac": "46:ff:fa:fb:43:25",
          "default": true,
          "dns": {},
          "gateway": [
              "10.244.0.1",
              "fd00:10:244::1"
          ]
      }]
    k8s.v1.cni.cncf.io/networks: '[{"name": "vlan-net1", "namespace": "default", "interface":
      "eth1", "mac": "aa:bb:cc:dd:ee:01"}, {"name": "vlan-net2", "namespace": "default",
      "interface": "eth2", "mac": "aa:bb:cc:dd:ee:02"}]'
  creationTimestamp: "2026-02-26T11:38:33Z"
  labels:
    App: secondaryTest
    antrea-e2e: vlan-pod1
    app: toolbox
  name: vlan-pod1
  namespace: testvlannetwork-9mpw214k
  resourceVersion: "907"
  uid: 753c1c51-d188-4972-a8ed-7a67c73e83a3
spec:
  containers:
  - image: antrea/toolbox:1.5-1
    imagePullPolicy: IfNotPresent
    name: toolbox
    resources: {}
    securityContext:
      privileged: false
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-6hgn8
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: kind-control-plane
  nodeSelector:
    kubernetes.io/hostname: kind-control-plane
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 1
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-6hgn8
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2026-02-26T11:38:43Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2026-02-26T11:38:33Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2026-02-26T11:38:43Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2026-02-26T11:38:43Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2026-02-26T11:38:33Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://06b28cd07f94a5e74b978e2f4fb4629c4ef9087b6cc371c72b5d7be98d184f21
    image: docker.io/antrea/toolbox:1.5-1
    imageID: docker.io/antrea/toolbox@sha256:f75f9572f3913591af9f9a955a764436258d8b8b25561aac0d6233332499bfe0
    lastState: {}
    name: toolbox
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2026-02-26T11:38:42Z"
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-6hgn8
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 172.18.0.2
  hostIPs:
  - ip: 172.18.0.2
  - ip: fde7:2cda:19b6::2
  phase: Running
  podIP: 10.244.0.4
  podIPs:
  - ip: 10.244.0.4
  - ip: fd00:10:244::4
  qosClass: BestEffort
  startTime: "2026-02-26T11:38:33Z"

vlan-pod3:

kubectl get pod vlan-pod3 -n testvlannetwork-9mpw214k -oyaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "vlan-net2",
          "interface": "eth1",
          "ips": [
              "148.14.25.111",
              "10:2400:2::2"
          ],
          "mac": "aa:bb:cc:dd:ee:05",
          "dns": {}
      },{
          "name": "antrea",
          "interface": "eth0",
          "ips": [
              "10.244.1.3",
              "fd00:10:244:1::3"
          ],
          "mac": "7e:f6:11:15:c4:a0",
          "default": true,
          "dns": {},
          "gateway": [
              "10.244.1.1",
              "fd00:10:244:1::1"
          ]
      }]
    k8s.v1.cni.cncf.io/networks: '[{"name": "vlan-net2", "namespace": "default", "interface":
      "eth1", "mac": "aa:bb:cc:dd:ee:05"}]'
  creationTimestamp: "2026-02-26T11:38:33Z"
  labels:
    App: secondaryTest
    antrea-e2e: vlan-pod3
    app: toolbox
  name: vlan-pod3
  namespace: testvlannetwork-9mpw214k
  resourceVersion: "904"
  uid: e48990a6-ddfb-438b-a068-9a3372d44d3f
spec:
  containers:
  - image: antrea/toolbox:1.5-1
    imagePullPolicy: IfNotPresent
    name: toolbox
    resources: {}
    securityContext:
      privileged: false
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5wt97
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: kind-worker
  nodeSelector:
    kubernetes.io/hostname: kind-worker
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 1
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-5wt97
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2026-02-26T11:38:43Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2026-02-26T11:38:33Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2026-02-26T11:38:43Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2026-02-26T11:38:43Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2026-02-26T11:38:33Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://cd675653b0973204fe31394b103e4535620ee0ecf76db0de73f0a20d1697e68f
    image: docker.io/antrea/toolbox:1.5-1
    imageID: docker.io/antrea/toolbox@sha256:f75f9572f3913591af9f9a955a764436258d8b8b25561aac0d6233332499bfe0
    lastState: {}
    name: toolbox
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2026-02-26T11:38:43Z"
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5wt97
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 172.18.0.3
  hostIPs:
  - ip: 172.18.0.3
  - ip: fde7:2cda:19b6::3
  phase: Running
  podIP: 10.244.1.3
  podIPs:
  - ip: 10.244.1.3
  - ip: fd00:10:244:1::3
  qosClass: BestEffort
  startTime: "2026-02-26T11:38:33Z"
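The `k8s.v1.cni.cncf.io/network-status` annotations above show each secondary interface receiving one IPv4 and one IPv6 address. A small sketch of how such an annotation could be parsed and checked for dual-stack addressing (the `networkStatus` struct below only mirrors the fields used here; it is not Antrea's own type):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

// networkStatus mirrors the subset of the k8s.v1.cni.cncf.io/network-status
// annotation fields relevant to this check.
type networkStatus struct {
	Name      string   `json:"name"`
	Interface string   `json:"interface"`
	IPs       []string `json:"ips"`
}

// dualStack reports whether an interface carries exactly one IPv4 and one
// IPv6 address.
func dualStack(ips []string) bool {
	var v4, v6 int
	for _, s := range ips {
		ip := net.ParseIP(s)
		if ip == nil {
			return false
		}
		if ip.To4() != nil {
			v4++
		} else {
			v6++
		}
	}
	return v4 == 1 && v6 == 1
}

func main() {
	// Trimmed-down version of the annotation shown in the test output above.
	annotation := `[{"name": "vlan-net1", "interface": "eth1",
	                 "ips": ["148.14.24.2", "10:2400:1::2"]}]`
	var statuses []networkStatus
	if err := json.Unmarshal([]byte(annotation), &statuses); err != nil {
		panic(err)
	}
	for _, s := range statuses {
		fmt.Printf("%s/%s dual-stack: %v\n", s.Name, s.Interface, dualStack(s.IPs))
	}
}
```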

Copilot AI left a comment
Pull request overview

This pull request implements IPv6 support for secondary networks in Antrea IPAM, extending the existing IPv4-only implementation to support IPv6 and dual-stack (IPv4+IPv6) configurations. The PR is marked as "[WIP]" (Work in Progress), indicating it may not be complete.

Changes:

  • Extended IPAM controller to support multiple IP pool allocators instead of a single allocator, enabling dual-stack configurations
  • Updated test helpers to dynamically determine IP version (IPv4 vs IPv6) and added test cases for IPv6 and dual-stack scenarios
  • Enhanced documentation with IPv6 and dual-stack configuration examples

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 5 comments.

Summary per file:

  • pkg/agent/secondarynetwork/podwatch/controller_test.go — Added test helper for dual-stack IPAM results and new test cases for IPv6 VLAN, dual-stack VLAN, and IPv6 SR-IOV networks
  • pkg/agent/cniserver/ipam/antrea_ipam_test.go — Added test cases for IPv6 pool allocation, dual-stack pool allocation, and static IPv6 address configuration
  • pkg/agent/cniserver/ipam/antrea_ipam_controller.go — Refactored getPoolAllocatorByPod to getPoolAllocatorsByPod to return multiple allocators and removed the IPv4-only restriction
  • pkg/agent/cniserver/ipam/antrea_ipam.go — Enhanced Add, Check, and owns functions to handle multiple allocators; added findMatchingIP helper for IP family matching; implemented rollback on allocation failure
  • docs/antrea-ipam.md — Added documentation sections for IPv6 and dual-stack secondary network configurations with complete examples


@wenqiq wenqiq changed the title [WIP]Implement IPv6 support for secondary network Implement IPv6 support for secondary network Feb 26, 2026
@wenqiq wenqiq marked this pull request as ready for review February 26, 2026 11:53
"type": "antrea",
"networkType": "vlan",
"mtu": 1200,
"mtu": 1300,
Contributor Author:

RFC 8200 mandates a minimum MTU of 1280 for IPv6.
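Because RFC 8200 requires a minimum link MTU of 1280 for IPv6, an IPv6-enabled secondary network configuration should reject anything smaller. A sketch of such a check (a hypothetical `validateMTU` helper for illustration; the PR simply bumps the test MTU from 1200 to 1300):

```go
package main

import "fmt"

// minIPv6MTU is the minimum link MTU required by RFC 8200, Section 5.
const minIPv6MTU = 1280

// validateMTU rejects an MTU below the IPv6 minimum when the network
// carries IPv6 addresses. Hypothetical helper, not Antrea's actual code.
func validateMTU(mtu int, hasIPv6 bool) error {
	if hasIPv6 && mtu < minIPv6MTU {
		return fmt.Errorf("MTU %d is below the IPv6 minimum of %d", mtu, minIPv6MTU)
	}
	return nil
}

func main() {
	fmt.Println(validateMTU(1200, true)) // rejected for IPv6
	fmt.Println(validateMTU(1300, true)) // accepted
}
```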

}

return mineTrue, allocator, ips, reservedOwner, err
return mineTrue, allocators, ips, reservedOwner, nil
Contributor:

Can a Pod get >1 v4 or v6 allocators?

Contributor Author:

Does this comment refer to the same issue as #7762 (comment)?


ipConfig, defaultRoute := generateIPConfig(ip, int(subnetInfo.PrefixLength), gwIP)
result.Routes = append(result.Routes, defaultRoute)
result.IPs = append(result.IPs, ipConfig)
Contributor:

Can't we get more than one v4 or v6 IP?

Contributor Author:

Yes, I think it's possible. The current code returns allocators for all valid pools from the annotation (getPoolAllocatorsByPod), and we append one IPConfig per allocator. It's intended for dual-stack (one v4 + one v6), but it doesn't enforce one pool per family, so multiple v4 or v6 allocations can happen.

Contributor:

I think we should allocate at most 1 v4 and 1 v6 IP.

And probably we should report an error if no v4 / v6 IP is available, but the Pod does have a v4 / v6 pool specified.

Contributor Author:

Updated. Please help check it again. Thanks.


// Test dual-stack allocation for primary network: a pod in a namespace
// annotated with both an IPv4 and an IPv6 pool should receive two IPs.
testAddDualStack := func(test string, expectedIPv4, expectedGWv4, expectedMaskv4, expectedIPv6, expectedGWv6, expectedMaskv6 string) {
Contributor:

Is it possible to enhance and share code with the existing testAdd/Del/Check?

Contributor Author:

Updated.

wenqiq added 3 commits March 9, 2026 22:59
Signed-off-by: Wenqi Qiu <wenqi.qiu@broadcom.com>
Signed-off-by: Wenqi Qiu <wenqi.qiu@broadcom.com>
Signed-off-by: Wenqi Qiu <wenqi.qiu@broadcom.com>
Signed-off-by: Wenqi Qiu <wenqi.qiu@broadcom.com>