Monday, February 29, 2016

Supervisor 2T White Paper

The Cisco Catalyst 6500 Supervisor Engine 2T is the latest addition to the Catalyst 6500 & 7600 family of Multi-Layer Switching Supervisor Engines. It offers much higher levels of forwarding performance, increases the scalability of many previously supported features, and introduces a host of new hardware-enabled functions beyond all previous Catalyst 6500 & 7600 Supervisor models.
This white paper will provide an architectural overview of the new Supervisor 2T. It will explore the physical layout of the Supervisor 2T, provide details about its updated hardware components, and give an overview of its newly introduced features.
The Supervisor 2T is made up of four main physical components:
   The baseboard
   The 5th generation Multi-Layer Switching Feature Card (MSFC5)
   The 4th generation Policy Feature Card (PFC4)
   The 2 Tbps Switch Fabric


The Supervisor baseboard forms the foundation upon which many of the purpose-built daughter cards and other components are placed. It houses a multitude of application-specific integrated circuits (ASICs), including the ASIC complex that makes up the primary two Terabit (2080 Gbps) crossbar switch fabric, as well as the port ASICs that control the front-panel 10 GE and GE ports.
The MSFC5 is a daughter card that holds the CPU complex, which serves as the control plane for the switch. The control plane handles the processing of all software-related features. One major difference from earlier versions of the MSFC is that this version combines what were previously two separate CPU complexes into one. More details on this new CPU complex will be explored later in this paper.
The PFC4 is another daughter card that incorporates a special set of ASICs and memory blocks, which provide hardware-accelerated data-plane services for packets traversing the switch. It introduces a number of scalability enhancements by increasing the size of the memory tables used by many of the hardware-accelerated features. The PFC4 also introduces a number of new hardware-accelerated features, such as Cisco TrustSec (CTS) and Virtual Private LAN Service (VPLS).
The 2 Tbps Switch Fabric provides 26 dedicated 20 Gbps or 40 Gbps channels to support the new 6513-E & 7613-S chassis (in addition to all existing 6500 E-Series chassis models). On the Supervisor 720, the switch fabric supported 18 fabric channels, which provided two fabric channels per slot on all slots (with the exception of the 6513 chassis). With the new 6513-E & 7613-S chassis, the 2T Switch Fabric is capable of supporting dual fabric channels for all linecard slots (Slots 7 and 8 are reserved for the Active and Standby Supervisors).
A high-level overview of the Supervisor 2T board layout is shown in the diagram below.
Figure 1.      Supervisor 2T Board Layout
A summary of the Supervisor 2T critical features is listed in the table below:
Table 1.       Important Baseboard Features of Supervisor 2T
Switch fabric type: 2080 Gbps (2 Tbps) crossbar switch fabric
Forwarding engine daughter card: PFC4 or PFC4XL
CPU daughter card: MSFC5
Uplink ports: 2 x 10 GE (X2 optic support) and 3 x GE (SFP support)
USB ports: 2 x USB (1 x Type-A and 1 x Type-B)
Management ports: Serial console port (RJ-45) and Connectivity Management Processor Ethernet port (RJ-45)
Management LED: Blue beacon LED
Media slot: Compact Flash slot (Type II)
Forwarding performance: Up to 60 Mpps for L2, IPv4, and MPLS traffic; up to 30 Mpps for IPv6 traffic
All packet processing is performed in a specific sequence through the different ASIC blocks. A high-level packet walk is provided below for packets that ingress and egress the local ports on the Supervisor, followed by a short conceptual sketch of the ingress flow.
Ingress Packet Processing
1.     Packets arriving on either the 1 G or 10 G ports have preliminary checks, such as cyclic redundancy checks (CRCs), performed on them by the PHY before being forwarded to the Cisco TrustSec (CTS) ASIC.
2.     The CTS ASIC performs ingress 802.1ae decryption (if enabled) and extracts the Security Group Tag (SGT) for Roles-Based ACL (RBACL) processing (if enabled). If CTS is disabled, this ASIC is passive. The packet is then forwarded to the port ASIC.
3.     The port ASIC stores the packet in its local packet buffer and then applies an internal packet header, which contains information about the source port, VLAN, and more.
4.     The packet is then forwarded to the fabric interface and replication ASIC, where information from the packet header is forwarded to the PFC4 or DFC4 for forwarding lookup processing.
5.     Layer 2 and Layer 3 processing is performed by the PFC4 (for the local uplink ports, or when the ingress linecard uses a CFC), and the forwarding result is sent back to the fabric interface and replication ASIC.
6.     Additional lookups may be performed by the fabric interface and replication ASIC (if this packet needs replication services, such as SPAN, multicast, and others).
7.     The packet is then forwarded to the egress port ASIC (if the destination is local), or to the switch fabric (if the destination is a port on a remote fabric-capable linecard), or to the bus replication ASIC (if the packet destination is a linecard with connectivity only to the bus).
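The ingress sequence above (and the two egress sequences that follow, which are essentially the same flow in the reverse direction) can be pictured as a fixed pipeline of stages. The following is a minimal, purely illustrative Python sketch of the ingress walk; the stage strings, field names, and return values are this paper's own shorthand, not Cisco software.

from dataclasses import dataclass, field

@dataclass
class Packet:
    src_port: str
    vlan: int
    dst: str                       # "local port", "fabric-capable linecard", or "bus-only linecard"
    cts_enabled: bool = False
    notes: list = field(default_factory=list)

def ingress_packet_walk(pkt: Packet) -> str:
    pkt.notes.append("PHY: CRC and other preliminary checks")                        # step 1
    if pkt.cts_enabled:
        pkt.notes.append("CTS ASIC: 802.1ae decryption, SGT extracted for RBACL")    # step 2
    pkt.notes.append(f"Port ASIC: buffered, internal header added "
                     f"(port={pkt.src_port}, vlan={pkt.vlan})")                      # step 3
    pkt.notes.append("Fabric interface/replication ASIC: header sent for lookup")    # step 4
    pkt.notes.append("PFC4/DFC4: Layer 2/Layer 3 lookup, result returned")           # step 5
    pkt.notes.append("Replication ASIC: SPAN/multicast lookups if required")         # step 6
    # Step 7: pick the egress path based on where the destination lives.
    if pkt.dst == "local port":
        return "egress port ASIC"
    if pkt.dst == "fabric-capable linecard":
        return "crossbar switch fabric"
    return "bus replication ASIC"

print(ingress_packet_walk(Packet(src_port="Te1/1", vlan=10, dst="fabric-capable linecard")))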
Egress Packet Processing (Packets Received from Crossbar)
1.     Packets are received from one of the switch fabric channels.
2.     The switch fabric sends the packet to the fabric replication ASIC.
3.     The fabric replication ASIC stores the packet and sends the packet header information to the PFC4 for forwarding lookup processing.
4.     The PFC4 performs the lookup and sends the results back to the fabric replication ASIC.
5.     The fabric replication ASIC performs an additional lookup, if replication services are required.
6.     The fabric replication ASIC then forwards the packet (and replicated packets, if applicable), now carrying destination information from the PFC4, to the port ASIC.
7.     The port ASIC stores the packet in the packet buffer and performs egress checks on the packet.
8.     The port ASIC performs the packet rewrite and then forwards the packet to the CTS ASIC.
9.     If CTS is enabled, the CTS ASIC performs 802.1ae encryption, and the packet is forwarded to the physical layer device (PHY) to be transmitted onto the cable.
Egress Packet Processing (Packets Received from Shared Bus)
1.     Packets are received from the bus and are placed on the local bus.
2.     The Bus Replication ASIC stores the packet and sends packet header information to the PFC4 for forwarding lookup processing.
3.     The PFC4 performs the lookup and sends the results back to the bus replication ASIC.
4.     The bus replication ASIC performs an additional lookup, if replication services are required.
5.     The bus replication ASIC forwards the packet (and replicated packets, if applicable), now carrying destination information from the PFC4, to the port ASIC.
6.     The port ASIC stores the packet in the packet buffer, and performs egress checks on the packet.
7.     The port ASIC performs the packet rewrite and forwards the packet to the CTS ASIC.
8.     If CTS is enabled, the CTS ASIC performs 802.1ae encryption, and the packet is forwarded to the PHY, to be transmitted onto the cable.
The following sections provide more detail for each processing block.
The Catalyst 6500 & 7600 support two different switch backplanes: the crossbar switch fabric (top left in the above diagram) and the bus backplane (top right in the above diagram). The crossbar switch fabric is the high-capacity backplane used by the CEF720 and CEF2T generations of linecards to optimize switching performance; it is the 2 Terabit backplane referenced in the Supervisor’s name. A second backplane (referred to as the “bus” backplane) is also present to support WS-X61xx linecards, supported service modules, and linecards that do not utilize a local DFC4 for forwarding.
The crossbar switch fabric provides a set of fabric channels (or data paths) that are assigned to the slots in the chassis where linecards are inserted. This is referred to collectively as the crossbar switch backplane. This array of fabric channels provides an any-to-any (full-mesh) connection option for the attached linecard to forward data over a dedicated path to any other linecard installed in the chassis.
The bus backplane is a 16 Gbps (full duplex) shared data bus that provides connectivity between attached “classic” linecards. The data bus is 256 bits wide and operates at 62.5 MHz. The bridge ASIC provides the interface through which those classic linecards can communicate with the PFC4 and MSFC5 for data processing services.
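This 16 Gbps figure follows directly from the stated bus width and clock rate:
256 bits x 62.5 MHz = 16 Gbps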
The crossbar switch fabric on the Supervisor 2T provides 2080 Gbps of switching capacity. This capacity is based on the use of 26 fabric channels that are used to provision data paths to each slot in the chassis. Each fabric channel can operate at either 40 Gbps or 20 Gbps, depending on the inserted linecard. The capacity of the switch fabric is calculated as follows:
26 x 40 Gbps = 1040 Gbps
1040 Gbps x 2 (full duplex) = 2080 Gbps
The 2080 Gbps number is a marketing number (common among all switch vendors) and is used in all literature to denote that full duplex transmission allows data traffic to be transmitted and received simultaneously. While the switch fabric capacity is documented as a full duplex number, note that the per-slot capacity of the E-Series chassis is NOT a full duplex number.
The 80 Gbps per slot nomenclature represents 2 x 40 Gbps fabric channels that are assigned to each slot providing for 80 Gbps per slot in total. If marketing math were used for this per slot capacity, one could argue that the E-Series chassis provides 160 Gbps per slot.
Figure 3.      Fabric Channel Layout in 6509-E
For every chassis except the 6513-E, there are enough fabric channels to provide dual fabric channels to each linecard slot, including the two Supervisor slots. For the 6513-E chassis, dual fabric channels are provided for Slots 1 through 6 and for Slots 9 through 13, while Slots 7 and 8 are designated Supervisor-only slots. If a linecard is inserted in either of these Supervisor-only slots, it will not be powered up.
Figure 4.      Fabric Channel Layout in 6513-E
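As a simple cross-check of the channel counts described above, the 6513-E slot-to-channel allocation can be modeled as follows. This is an illustrative sketch based only on the figures quoted in this paper; it is not a Cisco data structure.

# Illustrative model of the 6513-E fabric channel allocation described above.
CHANNELS_PER_SLOT = 2
CHANNEL_RATE_GBPS = 40   # each channel runs at 40 Gbps (20 Gbps for older fabric linecards)

slot_roles = {slot: ("supervisor" if slot in (7, 8) else "linecard")
              for slot in range(1, 14)}

total_channels = CHANNELS_PER_SLOT * len(slot_roles)
fabric_capacity_gbps = total_channels * CHANNEL_RATE_GBPS * 2   # x2 for full duplex, as above

print(total_channels)         # 26 fabric channels, as stated earlier
print(fabric_capacity_gbps)   # 2080 Gbps, matching the 2 Tbps switch fabric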
This ASIC provides a number of important functions. First and foremost, it receives packets from the front panel GE and 10 GE ports, extracts the relevant information from the packet headers, and forwards this information to the PFC4 for packet lookup and associated forwarding services processing (security, quality of service, NetFlow, and others). When packets return from lookup processing, this ASIC performs packet rewrites according to the lookup result.
Another important processing role performed by this ASIC is multicast replication. This includes IGMP snooping for Layer 2 packets, as well as multicast expansion for Layer 3 multicast packets. Additionally, other replication services are supported to provision Switched Port Analyzer (SPAN, ERSPAN, and more) capabilities.
New capabilities also include support for Cisco TrustSec (CTS) and the Virtual Switch Link (VSL). As a result, the front panel 10 GE ports can become part of a VSL, which facilitates the creation of a Virtual Switching System (VSS) domain.
There are two port ASICs on the Supervisor used to provision the front panel 2 x 10GE and 3 x 1 GE ports. One port ASIC supports a single 10 GE port and a single GE port. The other port ASIC supports a single 10 GE port and two GE ports. The following list defines this port ASIC’s capabilities:
   Per-port VLAN translation
   VSL support (10 GE ports only)
   Cisco TrustSec support (802.1ae link layer encryption)
   Jumbo frames (up to 9216 bytes)
   Flow control
   1P3Q4T (one strict priority queue, three normal round robin queues, and four Weighted Random Early Detection [WRED] thresholds per normal queue) queue structure for GE ports (this is the TX queue structure)
   1P7Q4T (one strict priority queue, seven normal round robin queues, and four WRED thresholds per normal queue) queue structure for 10 GE ports (this is the TX queue structure)
   1Q8T (one normal round robin queue and eight WRED thresholds for that queue) queue structure for 1 GE ports (this is the RX queue structure)
   2Q4T (two normal round robin queues and four WRED thresholds per normal queue) queue structure for 10 GE ports (this is the RX queue structure); this queue notation is decoded in the short sketch after this list
   256 MB total queue buffer (split among the front panel 10 G and 1 G ports)
   DWRR, WRR, and SRR scheduling schemes
   WRED and Tail Drop congestion management
   802.1Q VLAN encapsulation
   ECC protection
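The 1P3Q4T-style shorthand used in the list above is compact but easy to misread. The following is a small illustrative Python helper that decodes this port-queue nomenclature; it is this paper's own sketch, not a Cisco tool.

import re

# Decode Catalyst port-queue shorthand such as "1P7Q4T":
# <n>P = strict priority queues, <n>Q = normal (round robin) queues,
# <n>T = WRED/tail-drop thresholds per normal queue.
def decode_queue_structure(name: str) -> dict:
    match = re.fullmatch(r"(?:(\d+)P)?(\d+)Q(\d+)T", name)
    if not match:
        raise ValueError(f"unrecognized queue structure: {name}")
    priority, normal, thresholds = match.groups()
    return {
        "priority_queues": int(priority or 0),
        "normal_queues": int(normal),
        "thresholds_per_queue": int(thresholds),
    }

# The Supervisor 2T uplink queue structures listed above:
for qs in ("1P3Q4T", "1P7Q4T", "1Q8T", "2Q4T"):
    print(qs, decode_queue_structure(qs))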
The bridge ASIC primarily serves as the gateway for linecards that use the bus to connect to the control plane and data plane (the MSFC5 and PFC4, respectively). It provides a connection into the bus backplane and receives packets from those linecards, which it forwards to the MSFC5 or PFC4 for processing. It also provides a packet buffer, as well as flow control to manage data flows from the linecards. Once packet processing is complete, the bridge ASIC sends the results of the forwarding operation back over the bus to the classic linecards.
Both of these daughter cards (the MSFC5 and the PFC4) will be discussed individually in more detail later in this paper.
The MSFC5 is the next-generation CPU daughter card for the Supervisor 2T. It is not an optional daughter card and is present on every Supervisor 2T. The MSFC5 cannot be installed on the earlier Supervisor 32 or Supervisor 720; it is designed exclusively for use on the Supervisor 2T.
The MSFC5 performs control plane services for the switch. Control plane functions cover the features and processes that are not handled directly in hardware by purpose-built ASICs. The MSFC5 CPU handles Layer 2 and Layer 3 control plane processes, such as routing protocols, management protocols (SNMP, syslog), Layer 2 protocols (Spanning Tree, Cisco Discovery Protocol, and others), the switch console, and more.
Figure 5.      MSFC5 on Supervisor 2T
On previous generations of the MSFC, there were two main CPU complexes that resided on the MSFC. These CPU complexes were known as the Route Processor (RP) and Switch Processor (SP) complex. The RP complex was responsible for performing Layer 3 control plane services, IOS configuration and associated management of the configuration, Address Resolution Protocol (ARP) processing, Internet Control Message Protocol (ICMP) processing and more.
Its other main function was to create the CEF forwarding tables that are programmed into the PFC hardware memory tables (through the SP). The SP complex was responsible for performing Layer 2 control plane services, managing system power, and programming various hardware elements in the switch. The IOS image that ran on these previous MSFCs, while downloaded from http://www.cisco.com as one binary image file, was in fact two discrete images: one that ran on the RP CPU complex and one that ran on the SP CPU complex.
The most important enhancement of the MSFC5 is the move from the dual CPU complex (RP/SP) to a single CPU complex that combines the RP and SP complexes into one. As such, this also introduces a new IOS image for the Supervisor 2T that combines the previous two running images into one.
Another valuable enhancement of this new MSFC5 is the introduction of a Connectivity Management Processor (CMP). The CMP is a stand-alone CPU that the administrator can use to perform a variety of remote management services. Examples of how the CMP can be used include:
   System recovery of the control plane
   System resets and reboots
   The copying of IOS image files should the primary IOS image be corrupted or deleted
The CMP and the RP share the same console through a programmable multiplexor. By default, the firmware programs the multiplexor so the RP console is active on the front panel. The multiplexor intercepts specific escape sequences that instruct it to switch from one console to the other.
If the sequence (Ctrl-C, Shift-M) is used three consecutive times, the multiplexor will switch the console to the CMP. If the sequence (Ctrl-R, Shift-M) is used three consecutive times, the multiplexor will switch back to the RP console.
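The console switching behavior described above can be pictured as a simple state machine that counts consecutive escape sequences on the shared console. The sketch below is purely conceptual, illustrating only the behavior described in this paper; it is not firmware code.

# Conceptual model of the RP/CMP console multiplexor behavior described above.
TO_CMP = "Ctrl-C, Shift-M"
TO_RP = "Ctrl-R, Shift-M"

class ConsoleMux:
    def __init__(self):
        self.active = "RP"    # by default the RP console is active on the front panel
        self.last = None
        self.streak = 0

    def escape(self, sequence: str) -> str:
        # Count consecutive occurrences of the same escape sequence.
        self.streak = self.streak + 1 if sequence == self.last else 1
        self.last = sequence
        if self.streak == 3:
            if sequence == TO_CMP:
                self.active = "CMP"
            elif sequence == TO_RP:
                self.active = "RP"
            self.streak = 0
        return self.active

mux = ConsoleMux()
for _ in range(3):
    mux.escape(TO_CMP)
print(mux.active)   # CMP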
External IP connectivity to the CMP is provided by a new 10/100/1000 RJ-45 management interface on the front panel. This port can then be configured with an IP address, gateway, and connection method (for example, Telnet and SSH). The user can then access and control the system remotely, even if the RP is currently down.
