Cisco Hardware Architecture

Multilayer Switch Feature Card (MSFC)

The Multilayer Switch Feature Card (MSFC) is the Layer 3 switching engine that sits on the Catalyst Supervisor Engine as a daughter card. The MSFC is an integral part of the Supervisor Engine, providing high-performance multilayer switching and routing intelligence. The route processor (RP) is located on the MSFC daughter card itself. Two high-performance CPUs are involved: Layer 2 protocols run on one, and Layer 3 protocols run on the second, the MSFC route processor. Together they provide routing protocol support, Layer 2 protocols (Spanning Tree Protocol and VLAN Trunking Protocol, for example), and security services.
The control plane functions in the Cisco Catalyst 6500 are processed by the MSFC and include handling Layer 3 routing protocols, maintaining the routing table, some access control, flow initiation, and other services not implemented in hardware. Control plane performance depends on the type and number of processes running on the MSFC. The MSFC3 can support forwarding rates up to 500 Kpps. The MSFC provides the means to perform Multilayer Switching (MLS) and inter-VLAN routing.


The MSFC builds the Cisco Express Forwarding (CEF) Forwarding Information Base (FIB) table in software and then downloads this table to the hardware application-specific integrated circuits (ASICs) on the PFC and DFC (if present) that make the forwarding decisions for IP unicast and multicast traffic.
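
To make this split concrete, here is a minimal Python sketch of the idea: routes are computed in software on the route processor, and the resulting FIB entries are pushed to every hardware forwarding engine. All class and function names here are invented for illustration; they are not Cisco APIs.

    # Illustrative model only: software routing table -> FIB -> hardware download.
    class ForwardingEngine:
        """Stands in for a PFC or DFC ASIC complex that holds a FIB copy."""
        def __init__(self, name):
            self.name = name
            self.fib = {}                  # prefix -> next hop, "in hardware"

        def download(self, fib):
            self.fib = dict(fib)           # the MSFC pushes a full copy of the table

    class RouteProcessor:
        """Stands in for the MSFC: runs routing protocols, builds the FIB."""
        def __init__(self, engines):
            self.rib = {}                  # routes learned via OSPF, BGP, etc.
            self.engines = engines         # the central PFC plus any DFCs

        def learn_route(self, prefix, next_hop):
            self.rib[prefix] = next_hop

        def build_and_download_fib(self):
            fib = dict(self.rib)           # CEF derives the FIB from the routing table
            for engine in self.engines:
                engine.download(fib)

    pfc = ForwardingEngine("PFC3")
    dfc = ForwardingEngine("DFC3, slot 4")
    msfc = RouteProcessor([pfc, dfc])
    msfc.learn_route("10.0.0.0/8", "192.168.1.1")
    msfc.build_and_download_fib()
    print(pfc.fib, dfc.fib)                # both engines now hold the same FIB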

[Figure: MSFC daughter card on the Supervisor 720]



Role of MSFC 

  1. Provide IOS-based multi-protocol routing using a variety of routing protocols.
  2. Work with the PFC to implement Layer 3 switching and traditional router-based input/output ACLs. Note that the PFC can implement ACLs without requiring an MSFC.
  3. Provide other software-based features (such as NAT, policy routing, and encryption) that are not supported in PFC hardware.

Table 1. MSFC/Supervisor Compatibility Chart

MSFC Model    Supervisor Model
MSFC1         Supervisor 1 or Supervisor 1A
MSFC2         Supervisor 1A or Supervisor 2
MSFC3         Supervisor 720 models
MSFC4         Supervisor 720-10GE models


Policy Feature Card (PFC)

The PFC3 is the ASIC-based forwarding engine daughter card for the Sup720; the DFC3 is the ASIC-based forwarding engine daughter card for various fabric-enabled line cards (CEF256, CEF720). It contains the ASICs that are used to accelerate Layer 2 and Layer 3 switching, store and process QoS and security ACLs, and maintain NetFlow statistics.
The PFC3/DFC3 generation is built upon a forwarding architecture known as EARL7. Within this generation there are three versions - 'A', 'B', and 'BXL' - all based on the same fundamental technologies, with each adding incremental functionality: 'A' is the standard offering, 'B' is the intermediate option, and 'BXL' is the high-end option.
The PFC contains a Layer 2 and a Layer 3 forwarding engine.

[Figure: PFC3 daughter card]

Role of PFC Layer 2 engine

  1. Performs Layer 2 MAC address lookups in the Layer 2 CAM table.
  2. Inspects the packet headers to determine whether the switching operation will be a Layer 2 or a Layer 3 operation. If it is a Layer 3 operation, the packet is handed off to the Layer 3 engine for further processing (a sketch of this dispatch follows the list).
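
The dispatch rule itself is simple: frames addressed to the router's own MAC address need routing; everything else is bridged. A minimal illustrative sketch, with all names and addresses invented:

    # Sketch of the Layer 2 engine's bridge-or-route decision; not Cisco code.
    ROUTER_MACS = {"0000.0c07.ac01"}       # hypothetical MSFC interface MAC

    def layer2_engine(frame, cam_table):
        if frame["dst_mac"] in ROUTER_MACS:
            return "hand off to the Layer 3 engine"       # routed path
        port = cam_table.get((frame["vlan"], frame["dst_mac"]))
        if port is None:
            return "flood in VLAN %d" % frame["vlan"]     # unknown unicast
        return "bridge out port %s" % port                # known unicast, pure L2

    cam = {(10, "0011.2233.4455"): "Gi3/1"}
    print(layer2_engine({"vlan": 10, "dst_mac": "0011.2233.4455"}, cam))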

Role of PFC Layer 3 Engine

  1. NetFlow statistics collection.
  2. Hardware-based forwarding of IPv4, IPv6, and MPLS-tagged packets.
  3. QoS mechanisms for ACL classification, marking of packets, and policing (rate limiting).
  4. Security mechanisms for validating ACL rules against incoming packets.
  5. Maintaining adjacency entries and statistics.
  6. Maintaining security ACL counters.
The PFC3 supports hardware-based Layer 2 and Layer 3 switching, processes security and QoS ACLs in hardware, and collects NetFlow statistics.
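
Taken together, these roles behave like a fixed per-packet pipeline. The following Python sketch is a conceptual model only; the real pipeline runs in ASICs, QoS policing is omitted for brevity, and every name here is invented:

    # Conceptual model of the PFC3 Layer 3 engine's per-packet work.
    def layer3_engine(packet, fib, denied_sources, netflow):
        if packet["src_ip"] in denied_sources:            # security ACL check
            return "drop (ACL)"
        adjacency = fib.get(packet["dst_prefix"])         # simplified FIB lookup
        if adjacency is None:
            return "punt to MSFC"                         # no hardware entry
        netflow[packet["src_ip"]] = netflow.get(packet["src_ip"], 0) + 1
        return "rewrite and forward via " + adjacency     # adjacency rewrite

    fib = {"10.0.0.0/8": "Gi1/1, next hop 0011.2233.4455"}
    flows = {}
    pkt = {"src_ip": "172.16.1.1", "dst_prefix": "10.0.0.0/8"}
    print(layer3_engine(pkt, fib, denied_sources=set(), netflow=flows))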
There are five versions of the Policy Feature Card in use today. The PFC3A, PFC3B, and PFC3BXL are integrated into the Supervisor 720-3A, Supervisor 720-3B, and Supervisor 720-3BXL respectively. The PFC3B is the only option for the Supervisor 32, while the PFC3C and PFC3CXL are integrated into the Supervisor 720-10G-3C and Supervisor 720-10G-3CXL.

Table 2. PFC/Supervisor Compatibility Chart

PFC Model    Supervisor Model
PFC1         Supervisor 1 or Supervisor 1A
PFC2         Supervisor 2
PFC3A        Supervisor 720
PFC3B        Supervisor 720-3B
PFC3BXL      Supervisor 720-3BXL
PFC3C        Supervisor 720-10GE-3C
PFC3CXL      Supervisor 720-10GE-3CXL

Distributed Forwarding Card (DFC)

The Catalyst 6500 architecture supports the use of Distributed Forwarding Cards (DFCs). A Distributed Forwarding Card is a daughter card that gives a fabric-enabled Cat6500 line card its own forwarding engine, equivalent to the PFC's, so that the card can perform distributed switching; the control plane remains on the MSFC. DFCs are located on line cards, not on Supervisors.
A DFC holds a local copy of the forwarding tables (constructed by the MSFC) along with the security and QoS policies, to facilitate local switching on the line card. The DFC3A is available as an option on CEF256- and CEF720-based line cards. The DFC3B and DFC3BXL were introduced so that line cards could operate with a Supervisor 720 equipped with a PFC3B or PFC3BXL. The latest generation of DFC, the DFC3C, is available as an option on CEF720-based line cards and is integrated on the latest-generation line cards, the WS-X6708 and WS-X6716.
It is important to note that there are operational considerations that can impact the ability of the Catalyst 6500 system to provide specific QoS features. This can happen when you mix different generations of PFCs and DFCs. The rule is that the system operates at the lowest common feature denominator.
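
One way to picture the rule: treat each engine version as a set of capabilities and intersect the sets of everything installed. The sketch below is illustrative only; the capability names are invented and the real feature lists are longer (see Table 3 for the actual pairings, including cases this simple model does not capture):

    # "Lowest common feature denominator" as a set intersection; not Cisco data.
    CAPS = {
        "PFC3A":   {"base"},
        "PFC3B":   {"base", "b-features"},
        "PFC3BXL": {"base", "b-features", "xl-tables"},
    }

    def effective_capabilities(installed_engines):
        caps = None
        for engine in installed_engines:   # every PFC and DFC in the chassis
            caps = CAPS[engine] if caps is None else caps & CAPS[engine]
        return caps

    # A PFC3BXL supervisor plus a DFC3A line card runs with base features only.
    print(effective_capabilities(["PFC3BXL", "PFC3A"]))   # -> {'base'}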

Table 3. DFC/PFC Operation

DFC3A:
    with PFC3A: normal operation
    with PFC3B: PFC3B operates as a PFC3A
    with PFC3BXL: PFC3BXL operates as a PFC3A
    with PFC3C: PFC3C operates as a PFC3A
    with PFC3CXL: PFC3CXL operates as a PFC3A
DFC3B:
    with PFC3A: DFC3B operates as a DFC3A
    with PFC3B: normal operation
    with PFC3BXL: PFC3BXL operates as a PFC3B
    with PFC3C: PFC3C operates as a PFC3B
    with PFC3CXL: PFC3CXL operates as a PFC3B
DFC3BXL:
    with PFC3A: DFC3BXL operates as a DFC3A
    with PFC3B: DFC3BXL operates as a DFC3B
    with PFC3BXL: normal operation
    with PFC3C: PFC3C operates as a PFC3BXL
    with PFC3CXL: PFC3CXL operates as a PFC3BXL
DFC3C:
    with PFC3A: DFC3C operates as a DFC3A
    with PFC3B: DFC3C operates as a DFC3B
    with PFC3BXL: DFC3C operates as a DFC3B and PFC3BXL operates as a PFC3B
    with PFC3C: normal operation
    with PFC3CXL: PFC3CXL operates as a PFC3C
DFC3CXL:
    with PFC3A: DFC3CXL operates as a DFC3A
    with PFC3B: DFC3CXL operates as a DFC3B
    with PFC3BXL: DFC3CXL operates as a DFC3BXL
    with PFC3C: DFC3CXL operates as a DFC3C
    with PFC3CXL: normal operation

The primary MSFC3 calculates and then pushes down the FIB (Forwarding Information Base), giving the DFC3x its Layer 3 forwarding tables. The MSFC3 also pushes down a copy of the QoS policies so that they, too, are local to the line card. Subsequent local switching decisions can then reference the local copy of any QoS policies, providing hardware QoS processing speeds and yielding higher performance through distributed switching.

[Figure: DFC daughter card]


Packet Forwarding

Packet forwarding is done on the ingress forwarding engine. Packets coming into ports on the Sup720-3B therefore have forwarding done on the PFC3B of the Supervisor. Packets coming into ports of line cards with DFC3s have forwarding done on the DFC3. Packets coming into ports of line cards with CFCs have forwarding done on the PFC3B of the Supervisor. The MSFC3 only forwards packets in cases where the PFC3 or DFC3 cannot make the forwarding decision. These cases include traffic with IP options set, ACLs that are applied to an interface but not programmed into the ACL TCAM for some reason, packets whose TTL has expired, packets that hit an ACE with the "log" keyword, and others. A sketch of this selection logic follows.
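
The selection amounts to a short series of checks, modeled below in Python. The punt conditions are abbreviated from the list above and are not exhaustive; all field names are invented:

    # Which engine forwards a given packet; illustrative model only.
    def forwarding_engine(ingress_module, packet):
        # Punt cases: hardware cannot decide, so the MSFC3 handles the packet
        # (IP options, TTL expiry, ACE with "log", unprogrammed ACLs, ...).
        if (packet.get("ip_options") or packet.get("ttl", 255) <= 1
                or packet.get("acl_log_hit")):
            return "MSFC3 (software forwarding)"
        # Otherwise the ingress engine forwards entirely in hardware.
        if ingress_module["has_dfc"]:
            return "DFC3 on the ingress line card"
        return "central PFC3 on the Supervisor"    # classic and CFC-based cards

    print(forwarding_engine({"has_dfc": False}, {"ttl": 64}))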

Centralized Forwarding Card (CFC)

The CFC is a centralized forwarding card for switching modules that rely on the PFC for IPv4 routing. The CFC does no local forwarding; forwarding is done by the PFC in the Supervisor. Because forwarding is centralized, the PFC performance, FIB entries, and ACL labels are shared among all the line cards that use the Supervisor PFC for forwarding. The WS-F6700-CFC is the CFC card used on WS-X67xx Ethernet modules. This daughter card is supported only by the Supervisor Engine 720.
Note: The CFC, or Centralized Forwarding Card, was introduced along with the CEF720 modules. It provides centralized connectivity to the Supervisor for look-ups and results. The switch fabric carries the data, but the CFC is responsible for sending look-up requests to the Supervisor and getting the results back.

[Figure: CFC daughter card]

Table 4. CFC and Supported Modules

CFC             Supported Modules
WS-F6700-CFC    WS-X6704-10GE
                WS-X6724-SFP
                WS-X6748-SFP
                WS-X6748-GE-TX

Switch Fabric Architecture

The example in this section illustrates a logical diagram of a Catalyst 6509. The diagram shows the interconnections between a Supervisor Engine in slot 1, a nonfabric-enabled module in slot 2, a single fabric channel switch fabric-enabled module (for example, WS-X6516=) in slot 3, a dual fabric channel switch fabric-enabled module (for example, WS-X6816=) in slot 4, and an SFM in slot 5.
The switch fabric consists of the SFM and the connecting components on the Catalyst 6500 chassis. The dual fabric channel switch fabric-enabled module has connections to the switch fabric only.
Switch fabric-enabled modules with one fabric channel have one connection to the Data BUS and one connection to the switch fabric.
The Supervisor Engine, the nonfabric-enabled module, and the switch fabric-enabled module with one fabric channel each have a connection to the Data BUS.
The Data BUS has a lower data forwarding capacity (32 Gbps) than the switch fabric (256 Gbps), and all data going to and from the nonfabric-enabled modules must traverse the Data BUS.
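
A small illustrative calculation makes the difference tangible; the per-channel rates come from the line-card table later in this article, and the chassis population is hypothetical:

    # Shared bus vs. switch fabric capacity, using figures from this article.
    BUS_GBPS = 32              # one shared Data BUS for all nonfabric traffic
    FABRIC_GBPS = 256          # aggregate SFM switch fabric capacity
    CHANNEL_GBPS = 8           # per fabric channel (CEF256; CEF720 uses 20)

    nonfabric_modules = 6      # hypothetical chassis population
    print("bus share per module: %.1f Gbps" % (BUS_GBPS / nonfabric_modules))
    print("dedicated fabric channel: %d Gbps" % CHANNEL_GBPS)
    print("total fabric capacity: %d Gbps" % FABRIC_GBPS)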
[Figure: Catalyst 6509 switch fabric architecture]

Forwarding Decisions for Layer 3-Switched Traffic

Either a PFC3 or a Distributed Forwarding Card 3 (DFC3) makes the forwarding decision for Layer 3-switched traffic, as follows:
A PFC3 makes all forwarding decisions for each packet that enters the switch through a module without a DFC3.
A DFC3 makes all forwarding decisions for each packet that enters the switch on a DFC3-enabled module in these situations:
If the egress port is on the same module as the ingress port, the DFC3 forwards the packet locally (the packet never leaves the module).
If the egress port is on a different fabric-enabled module, the DFC3 sends the packet to the egress module, which sends it out the egress port.
If the egress port is on a different nonfabric-enabled module, the DFC3 sends the packet to the supervisor engine. The supervisor engine fabric interface transfers the packet to the 32-Gbps switching bus where it is received by the egress module and is sent out the egress port.
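
These three cases reduce to a small path decision on the ingress card, sketched here (slot numbers and field names are invented for illustration):

    # Egress path selection on a DFC3-equipped ingress module; illustrative only.
    def choose_path(ingress_slot, egress_slot, egress_fabric_enabled):
        if egress_slot == ingress_slot:
            return "switch locally: the packet never leaves the module"
        if egress_fabric_enabled:
            return "send across the switch fabric to slot %d" % egress_slot
        return ("hand to the Supervisor fabric interface, then over the "
                "32-Gbps bus to slot %d" % egress_slot)

    print(choose_path(4, 4, True))             # local switching
    print(choose_path(4, 2, False))            # via Supervisor and shared bus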

Switch Fabric Module Operating Modes


The SFM creates a dedicated channel between each fabric-enabled module and the SFM, and provides uninterrupted transmission of frames between these modules.
This table summarizes the different flows:

Data Flow Between Modules                                           Cisco IOS Mode   CatOS Mode
Between fabric-enabled modules (no nonfabric modules installed)     Compact          Compact
Between fabric-enabled modules (nonfabric modules also installed)   Truncated        Truncated
Between fabric-enabled and nonfabric-enabled modules                BUS              Flow-through
Between nonfabric-enabled modules                                   BUS              Flow-through
Fabric-enabled modules with a DFC installed                         dCEF             N/A
With the SFM, the traffic is forwarded to and from the modules in the modes described in this section. The mode of operation determines the flow of data through the switch.
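
The Cisco IOS column of the table can be restated as a small decision function; this is an illustrative paraphrase, not device logic:

    # IOS switching-mode selection, paraphrasing the table above.
    def ios_switching_mode(src_fabric, dst_fabric, nonfabric_installed,
                           dfc_on_modules=False):
        if dfc_on_modules:
            return "dCEF"          # distributed forwarding, fabric only
        if src_fabric and dst_fabric:
            return "truncated" if nonfabric_installed else "compact"
        return "bus"               # any flow touching a nonfabric module

    print(ios_switching_mode(True, True, nonfabric_installed=False))  # compact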

BUS-Only or Flow-Through Mode

A Catalyst 6500 with an SFM and nonfabric-enabled modules, such as the WS-X6348-RJ-45 or WS-X6416-GBIC, works in flow-through mode. In flow-through mode, data flowing between nonfabric-enabled modules does not use the SFM but the 32-Gbps Data BUS. Data flowing between a Supervisor Engine/MSFC and a nonfabric-enabled module also traverses the Data BUS and does not use the SFM. Data flowing between a module with no connection to the Data BUS, such as the WS-X6816-GBIC, and a nonfabric-enabled module traverses from the switch fabric-enabled module through the SFM, then to the Supervisor Engine, and then to the nonfabric-enabled module.

Truncated Mode

When the switch contains both a fabric-enabled module and a nonfabric-enabled module, the fabric-enabled line cards operate in truncated mode. In this mode, traffic between a fabric-capable module and a nonfabric module goes through the switch fabric channel and the Data BUS via the Supervisor Engine. For traffic between fabric-enabled modules, only the truncated data (the first 64 bytes of the frame) is sent over the switch fabric channel. Traffic between two nonfabric-enabled modules behaves as in flow-through mode.
You can manually specify which switching mode the system uses in order to improve performance. The default mode should work well unless another mode is needed for a specific reason. If you have both nonfabric-enabled and fabric-enabled modules in the chassis, and most of the traffic flows between fabric-enabled and nonfabric-enabled modules, there is a greater advantage in using bus mode than truncated mode. With most packets flowing between fabric-enabled modules, especially jumbo-size frames, truncated mode is preferred. You can configure the threshold for truncated mode with this command in the Cisco IOS Software:
fabric switching-mode allow {bus-mode | truncated [threshold [number]]}
In this command, the threshold number is the number of fabric-enabled line cards that must be present before truncated mode is enabled.

Compact Mode

When a chassis contains only switch fabric-enabled modules, the fabric-enabled line cards can run in compact mode. This delivers the best possible switching rate in CatOS, depending on the line cards.

Distributed Cisco Express Forwarding Mode

This mode is only available in the Cisco IOS Software with the fabric-enabled line cards that have a DFC. This delivers the best possible Layer 3 switching rate in Cisco IOS Software.

Cisco Catalyst 6500 Line Card Types

The Cisco Catalyst 6500 family has five main categories of line cards:
   Classic: the line card has a single connection into the 32-Gbps shared bus.
   CEF256: the line card supports a connection into the 32-Gbps shared bus and the switch fabric. These line cards use the switch fabric for data switching when a Supervisor Engine 720 is present; if a Supervisor Engine 32 is present, they revert to using the 32-Gbps shared bus.
   CEF720: the line card supports a connection into the 32-Gbps shared bus and the switch fabric. These line cards use the switch fabric on the Supervisor Engine 720 for data switching.
   dCEF256: these line cards require the presence of the switch fabric to operate and do not connect into the shared bus.
   dCEF720: like the dCEF256 line cards, they require only the switch fabric to be present to switch packets, but they connect into the switch fabric channels at 20 Gbps rather than the 8 Gbps of the dCEF256 line cards.
Cisco Catalyst 6500 Line-Card Types

Line Card   Bus Connection   Fabric Connection   Distributed Forwarding
Classic     Yes              No                  No
CEF256      Yes              Yes, at 8 Gbps      Upgradeable to a DFC, DFC3A, DFC3B, or DFC3BXL
dCEF256     No               Yes, at 8 Gbps      Yes, integrated DFC, DFC3A, DFC3B, or DFC3BXL
CEF720      Yes              Yes, at 20 Gbps     Upgradeable to a DFC3A, DFC3B, or DFC3BXL
dCEF720     No               Yes, at 20 Gbps     Yes, a DFC3C or DFC3CXL is integrated

Differences between fabric-capable and fabric-only modules:


Model numbers: fabric-capable modules begin with WS-X65XX; fabric-only modules begin with WS-X68XX.
Connections to the switch fabric: fabric-capable modules have one; fabric-only modules have two.
Connections to the Data BUS: fabric-capable modules have one; fabric-only modules have none.
Data BUS fallback redundancy: fabric-capable modules have a connection to the Data BUS, so if the SFM is removed from the chassis the card continues to function. Fabric-only modules connect only to the SFM; without an SFM, the card is powered off and ceases to function until an SFM is inserted into the chassis.
Comes standard with DFC: fabric-capable modules do not (a DFC must be purchased separately); fabric-only modules come with a DFC on each module.


What are the benefits of a DFC?

  1. Performance is the biggest and most obvious reason to implement DFCs. You move from a 30 Mpps centralized forwarding system to anywhere up to a 400 Mpps distributed forwarding system. This forwarding performance applies to all L2 bridging, L3 routing, ACL, QoS, and NetFlow features, i.e., not just L3.
  2. The performance benefit of a DFC is most applicable when you use the 67xx series modules. These modules have enough ports and bandwidth to generate much more traffic than the 30 Mpps centralized forwarding engine has available. A 67xx-series module without a DFC is subject to the same centralized performance characteristics as all other centralized forwarding modules, i.e., 30 Mpps maximum for the whole system.
    Consider a 6704 module; its 4x 10GE ports can receive up to 60 Mpps of traffic (4 x 14.88 Mpps, the line rate for 10GE at 64-byte frames). That is twice the traffic the centralized forwarding engine can handle, and it accounts for only one slot in the system. More modules in the system can contribute further to this oversubscription.
    The addition of a DFC3 to the 6704 module increases the forwarding performance of that module to 48 Mpps; this is just for that slot, so the forwarding performance of the DFC3 is dedicated to the module on which it resides, i.e., not shared. The overall system oversubscription rate is greatly reduced when you add DFCs (see the arithmetic sketch after this list). The same principle applies to the 65xx modules, which also have optional DFC support, albeit at lower performance levels.
  3. Minimize the impact that a classic module has in a system. Classic modules affect the centralized forwarding performance of a system, limiting the maximum centralized forwarding rate to 15 Mpps. Modules enabled with DFCs have their own forwarding engine and are not subject to this performance degradation. If a classic module is used, the inclusion of a DFC mitigates any performance concerns. Any non-DFC modules are still subject to the 15 Mpps of forwarding available when a classic module is present.
  4. Increase the number of NetFlow entries in the system. The system learns NetFlow entries on a per-PFC/DFC basis; there is no synchronization between the NetFlow tables. If we have 256K NetFlow entries on a PFC3BXL/DFC3BXL, we can scale the system to 256K multiplied by the number of PFC3BXL/DFC3BXLs.
  5. Increase the number of port-based QoS aggregate policers. A single PFC/DFC can support 1023 aggregate policers. We only download the policy for a port-based agg-policer to the PFC/DFC that manages the port in question. With 'x' number of PFC/DFCs, we can support 1023 times 'x' number of policers.
  6. Increase the number of UBRL/Microflow policer entries. Since Microflow policer entries are stored in the Netflow table, the maximum capacity of these is directly related to the aggregate capacity of the Netflow table.
  7. The addition of a DFC module effectively disconnects a module from the Data Bus. As such, a DFC-enabled module is not subject to the bus stall mechanism that occurs when a module is inserted or removed from the chassis. Throughout these Online Insertion and Removal (OIR) events, the Data Bus is temporarily paused for just enough time to ensure that the insertion/removal process does not cause any data corruption on the backplane. This protection mechanism causes a very brief amount of packet loss (sub-second, but dependent on the time it takes to fully insert a module). A module with a DFC onboard is not directly affected by this stall mechanism and does not have any packet loss on OIR.
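
The oversubscription and scaling arithmetic from items 2, 4, and 5 above, as a short sketch (all figures are taken from this article; the engine count is a hypothetical chassis):

    # Oversubscription and capacity scaling, using figures from the list above.
    LINERATE_10GE_64B_MPPS = 14.88       # 10GE line rate at 64-byte frames
    CENTRAL_MPPS = 30                    # central PFC3 forwarding capacity

    ingress_6704 = 4 * LINERATE_10GE_64B_MPPS
    print("6704 worst-case ingress: %.1f Mpps" % ingress_6704)          # ~60
    print("oversubscription: %.1fx" % (ingress_6704 / CENTRAL_MPPS))    # ~2x

    engines = 1 + 3                      # e.g. one PFC3BXL plus three DFC3BXLs
    print("NetFlow entries: %dK" % (256 * engines))                     # 1024K
    print("aggregate policers: %d" % (1023 * engines))                  # 4092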

What is the CFC used for?

The WS-F6700-CFC is a daughter card that provides centralized forwarding for the 67xx line cards. The CFC is the base requirement for 67xx line card operation and is a zero-cost option. The daughter card houses two ASICs that function only as a bus interface; that is, they form truncated or compact headers, which are sent to the central PFC3* for forwarding decisions.
As the name implies, the CFC is used only for centralized forwarding. The centralized forwarding rate for the Catalyst 6500 is 30 Mpps maximum. The CFC does not provide any local forwarding capabilities. This differs from every other type of module on the Catalyst 6500; typically, the centralized forwarding capability is inherent to the baseboard, and any daughter cards provide additional (non-standard) functionality.
When a DFC3** is added to a 67xx line card, the CFC must be removed. The DFC3 daughter card provides distributed forwarding (dCEF). The CFC and DFC3 use the same line card connector, so they are mutually exclusive on a particular module.
* PFC3 can be PFC3A, PFC3B, or PFC3BXL.
** DFC3 daughtercard options are WS-F6700-DFC3A, WS-F6700-DFC3B, WS-F6700-DFC3BXL, WS-F6700-DFC3C, or WS-F6700-DFC3CXL.
