Verified Scale Limits for Cisco DCNM

This document describes the verified scale limits for Cisco DCNM 11.5(1) for managing LAN, SAN, and Media Controller fabrics. The values were validated on testbeds enabled with a reasonable number of features; they are not theoretical system limits for Cisco DCNM software or for Cisco Nexus/MDS switch hardware and software. The values can increase over time with more testing and validation. If you try to achieve maximum scalability by scaling multiple features at the same time, the results might differ from the values listed here.

Cisco DCNM LAN Fabric Deployment

All LAN deployments are managed using the LAN Fabric installation mode. The LAN Fabric mode provides various fabric templates for different kinds of data center deployments. For example, the Easy_Fabric template is used for VXLAN BGP EVPN deployments that primarily use Cisco Nexus 9000 and Cisco Nexus 3000 Series switches. Similarly, the External and LAN_Classic fabric templates can be used for legacy 3-tier, FabricPath, and other kinds of deployments.
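
Fabrics can also be created programmatically. The following is a minimal sketch assuming the DCNM 11.x REST logon endpoint (/rest/logon) and fabric-control endpoint (/rest/control/fabrics); the server address, credentials, fabric name, and nvPairs shown are placeholders, so verify the exact payload fields against the DCNM REST API reference for your release.

    import requests

    DCNM = "https://dcnm.example.com"  # placeholder DCNM server address

    session = requests.Session()
    session.verify = False  # lab setup only; use valid certificates in production

    # Log on with basic auth to obtain a Dcnm-Token (DCNM 11.x style logon).
    resp = session.post(f"{DCNM}/rest/logon",
                        json={"expirationTime": 3600000},
                        auth=("admin", "password"))  # placeholder credentials
    resp.raise_for_status()
    session.headers["Dcnm-Token"] = resp.json()["Dcnm-Token"]

    # Create a VXLAN BGP EVPN fabric from the Easy_Fabric template.
    payload = {
        "fabricName": "vxlan-fabric-1",        # placeholder name
        "templateName": "Easy_Fabric",
        "nvPairs": {
            "FABRIC_NAME": "vxlan-fabric-1",
            "BGP_AS": "65001",                 # example underlay BGP AS
        },
    }
    resp = session.post(f"{DCNM}/rest/control/fabrics", json=payload)
    resp.raise_for_status()
    print("Created fabric:", resp.json())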


Note


  • We recommend that you deploy Cisco DCNM server in Native HA mode in a production setup.

  • We recommend Native HA deployment for DCNM servers when running in DCNM cluster mode with 3 compute nodes.

  • NIR scale with DCNM is 350 switches, independent of Managed or Monitored mode. Network Insights applications are supported only in cluster mode. Refer to the Cisco Network Insights for Resources Application for Cisco DCNM User Guide.


Refer to the following table if you are provisioning new VXLAN EVPN fabrics. The table applies to all deployments except for "Brownfield Migrations".

Table 1. Scale Limits for Provisioning New VXLAN EVPN Fabrics (Also Referred to as "Greenfield" Deployment)

Fabric Underlay and Overlay

  • Switches:

      80 – Managed by a DCNM server with no compute nodes. The managed switches can be part of any of the fabrics: Easy, eBGP, External, or LAN_Classic.

      350 – Managed by a DCNM server with three compute nodes. The managed switches can be part of any of the fabrics: Easy, eBGP, External, or LAN_Classic.

      750 – Monitored by a DCNM server with or without compute nodes. Monitored switches are typically part of External or LAN_Classic fabrics with monitor mode enabled.

    Note: The maximum recommended number of switches per fabric when DCNM is in managed mode is 150.

  • Physical Interfaces: 30,000

  • Layer-3 scenario, VRFs: 500

  • Layer-3 scenario, networks: 1000

  • Layer-2 scenario, networks: 1500

  • VRF instances for external connectivity: 300

    Note: 300 VRFs over 1000 Layer-3 networks or 300 VRFs over 1500 Layer-2 networks are supported.

  • Easy fabrics supported for one Multi-Site Domain (MSD): 8

Endpoint Locator

  • Endpoints: 100,000 across a maximum of 4 fabrics (in cluster mode with 3 compute nodes)

Virtual Machine Manager (VMM)

  • Virtual Machines (VMs): 5500

  • VMware vCenter Servers: 4

IPAM Integrator application: 150 networks with a total of 4K IP allocations on the Infoblox server

Kubernetes Visualizer application: A maximum of 159 namespaces and a maximum of 1002 pods


Note


There is no limit on the number of Multi-Site Domains (MSDs) that can be created.
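
To make the interplay of these limits concrete, the following sketch checks a planned greenfield design against the verified values. It is illustrative only: the constants are transcribed from Table 1, and the function name is ours.

    # Verified limits transcribed from Table 1 (greenfield VXLAN EVPN).
    GREENFIELD_LIMITS = {
        "switches_no_computes": 80,   # managed, DCNM server without compute nodes
        "switches_3_computes": 350,   # managed, DCNM server with three compute nodes
        "switches_monitored": 750,    # monitored, with or without compute nodes
        "physical_interfaces": 30000,
        "vrfs": 500,
        "l3_networks": 1000,
        "l2_networks": 1500,
        "external_vrfs": 300,
    }

    def check_greenfield(switches, compute_nodes, vrfs, l3_networks, l2_networks):
        """Return the planned values that exceed the verified limits."""
        issues = []
        max_sw = (GREENFIELD_LIMITS["switches_3_computes"] if compute_nodes >= 3
                  else GREENFIELD_LIMITS["switches_no_computes"])
        if switches > max_sw:
            issues.append(f"{switches} managed switches exceeds {max_sw}")
        if vrfs > GREENFIELD_LIMITS["vrfs"]:
            issues.append(f"{vrfs} VRFs exceeds {GREENFIELD_LIMITS['vrfs']}")
        if l3_networks > GREENFIELD_LIMITS["l3_networks"]:
            issues.append(f"{l3_networks} Layer-3 networks exceeds 1000")
        if l2_networks > GREENFIELD_LIMITS["l2_networks"]:
            issues.append(f"{l2_networks} Layer-2 networks exceeds 1500")
        return issues

    # A 120-switch design with 3 compute nodes, 400 VRFs, and 900 Layer-3
    # networks stays within every verified limit, so no issues are reported.
    print(check_greenfield(120, 3, 400, 900, 0))  # -> []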


Refer to the following table if you are transitioning management of an existing VXLAN EVPN fabric built on Cisco Nexus 9000 Series switches to DCNM. Before the migration, the fabric was either managed by NFM or configured through the CLI.

Table 2. Scale Limits for Transitioning Existing Fabric Management to DCNM (Also Referred to as "Brownfield Migration")

Fabric Underlay and Overlay

  • Switches per fabric: 100

  • Physical Interfaces: 5000

  • VRF instances: 500

  • Overlay networks: 1000

  • VRF instances for external connectivity: 300

Endpoint Locator

  • Endpoints: 100,000 across a maximum of 4 fabrics

Virtual Machine Manager (VMM)

  • Virtual Machines (VMs): 5500

  • VMware vCenter Servers: 4

IPAM Integrator application: 150 networks with a total of 4K IP allocations on the Infoblox server

Kubernetes Visualizer application: A maximum of 159 namespaces and a maximum of 1002 pods

Cisco DCNM LAN Fabric Deployment Without Network Insights (NI)


Note


For information about various system requirements for proper functioning of Cisco DCNM LAN Fabric deployment, see System Requirements.

Refer to the Network Insights User Guide for sizing information for Cisco DCNM LAN deployments with Network Insights (NI).

To see the verified scale limits for Cisco DCNM 11.5(1) for managing LAN Fabric deployments, see Verified Scale Limits for Cisco DCNM.


Table 3. Up to 80 Switches

Node      Deployment Mode  CPU       Memory  Storage   Network
DCNM      OVA/ISO          16 vCPUs  32G     500G HDD  3xNIC
Computes  NA

Table 4. 81–350 Switches

Node      Deployment Mode  CPU       Memory  Storage   Network
DCNM      OVA/ISO          16 vCPUs  32G     500G HDD  3xNIC
Computes  OVA/ISO          16 vCPUs  64G     500G HDD  3xNIC
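
Tables 3 and 4 reduce to a simple lookup keyed on the managed-switch count. The sketch below is illustrative only (the function name is ours) and returns the node sizing for a LAN Fabric deployment without Network Insights.

    def dcnm_lan_sizing(switch_count):
        """Node sizing per Tables 3 and 4 (LAN Fabric, without Network Insights)."""
        node = {"deployment": "OVA/ISO", "cpu": "16 vCPUs",
                "memory": "32G", "storage": "500G HDD", "network": "3xNIC"}
        if switch_count <= 80:
            return {"DCNM": node, "Computes": None}                  # Table 3
        if switch_count <= 350:
            compute = dict(node, memory="64G")                       # Table 4
            return {"DCNM": node, "Computes": {"count": 3, **compute}}
        raise ValueError("over the 350-switch verified limit for managed mode")

    print(dcnm_lan_sizing(200)["Computes"]["memory"])  # -> 64G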

Cisco DCNM SAN Management

This deployment mode is used for SAN topologies.

  • Switches: 80

  • Hosts or targets: 20,000

  • Zone sets: 1000

  • Zones: 16,000

SAN Insights

The following table specifies the verified ITL/ITN limits for SAN Insights in Cisco DCNM SAN deployments.

  • Cisco Nexus Dashboard: 60,000 ITLs/ITNs (1)

  • Cisco DCNM on OVA Virtual Appliances: 40,000 ITLs/ITNs (1)

  • Cisco DCNM on Linux (RHEL): 20,000 ITLs/ITNs (1)

(1) ITLs/ITNs refer to:

  • Initiator-Target-LUNs (ITLs)

  • Initiator-Target-Namespace IDs (ITNs)
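
These per-deployment limits lend themselves to a small headroom calculation. The sketch below is illustrative only, with values transcribed from the table above and a function name of our choosing.

    # SAN Insights verified ITL/ITN flow limits, from the table above.
    SAN_INSIGHTS_LIMITS = {
        "Cisco Nexus Dashboard": 60000,
        "Cisco DCNM on OVA Virtual Appliances": 40000,
        "Cisco DCNM on Linux (RHEL)": 20000,
    }

    def itl_itn_headroom(deployment, current_flows):
        """How many more ITL/ITN flows the deployment can absorb."""
        return SAN_INSIGHTS_LIMITS[deployment] - current_flows

    print(itl_itn_headroom("Cisco DCNM on Linux (RHEL)", 12500))  # -> 7500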