Implementing Cisco IP Switched Networks (SWITCH) Foundation Learning Guide: Network Design Fundamentals
Date: Jun 1, 2015
Every time you go to an office to work or attend class at a school, college, or university, you use a campus network to access critical applications, tools, the Internet, and so on over wired or wireless connections. Often, you may even gain access by using a portable device such as an Apple iPhone connected to a corporate Wi-Fi network to reach applications such as e-mail, calendaring, or instant messaging. Therefore, the people responsible for building this network need to apply sound fundamentals and design principles so that the campus network functions adequately and provides the stability, scalability, and resiliency necessary to sustain interconnectivity with 100 percent uptime.
This chapter begins the journey of exploring campus network design fundamentals by focusing on a few core concepts around network design and structure and a few details about the architecture of Cisco switches. This is useful knowledge when designing and building campus networks. Specifically, this chapter focuses on the following two high-level topics:
- Campus network structure
- Introduction to Cisco switches and their associated architecture
Campus Network Structure
A campus network describes the portion of an enterprise infrastructure that interconnects end devices such as computers, laptops, and wireless access points to services such as intranet resources or the Internet. Intranet resources may be company web pages, call center applications, file and print services, and almost anything end users connect to from their computer.
In other words, the campus network provides end users with connectivity to company applications and tools that reside in a data center. Prior to around 2005, the term campus network and its architectures were relevant to application server farms and computing infrastructure as well. Today, the infrastructure that interconnects server farms, application servers, and computing nodes is clearly distinguished from the campus network and is referred to as the data center.
Over the past few years, data center architectures have become more complex and require a sophistication not needed in the campus network because of high-availability, low-latency, and high-performance requirements. Therefore, data centers may use bleeding-edge technologies that are not found in the campus network, such as FabricPath, VXLAN, and Application Centric Infrastructure (ACI). For the purposes of CCNP Switch at the time of this writing, these technologies, as well as data center architectures, are out of scope. Nevertheless, we will point out some of the differences so as to avoid any confusion with campus network fundamentals.
The next subsection describes the hierarchical network design with the following subsections breaking down the components of the hierarchical design in detail.
Hierarchical Network Design
A flat enterprise campus network is one in which all PCs, servers, and printers are connected to each other using Layer 2 switches. A flat network does not use subnets for any design purposes. In addition, all devices on this subnet are in the same broadcast domain, and broadcasts are flooded to all attached network devices. Because a broadcast packet received by an end device, such as a tablet or PC, consumes compute and I/O resources, broadcasts waste available bandwidth and resources. In a network of ten devices on the same flat network, this is not a significant issue; however, in a network of thousands of devices, it is a significant waste of resources and bandwidth (see Figure 2-1).
Figure 2-1 Flat Versus Hierarchical Network Design
As a result of these broadcast issues and many other limitations, flat networks do not scale to meet the needs of most enterprise networks or of many small and medium-size businesses. To address the sizing needs of most campus networks, a hierarchical model is used. Figure 2-2 illustrates, at a high level, a hierarchical view of campus network design versus a flat network.
Figure 2-2 The Hierarchical Model
Hierarchical models for network design allow you to design networks in layers. To understand the importance of layering, consider the OSI reference model, which is a layered model for understanding and implementing computer communications. By using layers, the OSI model simplifies the task required for two computers to communicate. Leveraging the hierarchical model similarly simplifies campus network design by allowing focus on different layers that build on each other.
Referring to Figure 2-2, the layers of the hierarchical model are divided into specific functions categorized as core, distribution, and access layers. This categorization provides for modular and flexible design, with the ability to grow and scale the design without major modifications or reworks.
For example, adding a new wing to your office building may be as simple as adding a new distribution layer with an access layer while adding capacity to the core layer. The existing design stays intact, and only the additions are needed. Aside from the simple physical additions, configuration of the switches and routers is relatively simple because most of the configuration principles around hierarchy were put in place during the original design.
By definition, the access, distribution, and core layers adhere to the following characteristics:
- Access layer: The access layer is used to grant the user access to network applications and functions. In a campus network, the access layer generally incorporates switched LAN devices with ports that provide connectivity to workstations, IP phones, access points, and printers. In a WAN environment, the access layer for teleworkers or remote sites may provide access to the corporate network across WAN technologies.
- Distribution layer: The distribution layer aggregates the access layer switches by wiring closet, floor, or other physical domain by leveraging modular or Layer 3 switches. Similarly, a distribution layer may aggregate the WAN connections at the edge of the campus and provide policy-based connectivity.
- Core layer (also referred to as the backbone): The core layer is a high-speed backbone, which is designed to switch packets as fast as possible. In most campus networks, the core layer has routing capabilities, which are discussed in later chapters of this book. Because the core is critical for connectivity, it must provide a high level of availability and adapt to changes quickly. It also provides for dynamic scalability to accommodate growth and fast convergence in the event of a failure.
The next subsections of this chapter describe the access layer, distribution layer, and core layer in more detail.
Access Layer
The access layer, as illustrated in Figure 2-3, describes the logical grouping of the switches that interconnect end devices such as PCs, printers, cameras, and so on. It is also the place where devices that extend the network out one more level are attached. Two such prime examples are IP phones and wireless APs, both of which extend the connectivity out one more layer from the actual campus access switch.
Figure 2-3 Access Layer
The wide variety of possible types of devices that can connect and the various services and dynamic configuration mechanisms that are necessary make the access layer one of the most capable parts of the campus network. These capabilities are as follows:
- High availability: The access layer supports high availability via default gateway redundancy, using dual connections from access switches to redundant distribution layer switches when there is no routing in the access layer. The mechanism behind default gateway redundancy is referred to as a first-hop redundancy protocol (FHRP). FHRPs are discussed in more detail in later chapters of this book.
- Convergence: The access layer generally supports inline Power over Ethernet (PoE) for IP telephony, thin clients, and wireless access points (APs). PoE allows customers to easily place IP phones and wireless APs in strategic locations without the need to run separate power. In addition, the access layer supports converged features that enable optimal software configuration of IP phones and wireless APs. These features are discussed in later chapters.
- Security: The access layer also provides services for additional security against unauthorized access to the network by using tools such as port security, quality of service (QoS), Dynamic Host Configuration Protocol (DHCP) snooping, dynamic ARP inspection (DAI), and IP Source Guard. These security features are discussed in more detail in later chapters of this book.
The next subsection discusses the upstream layer from the access layer, the distribution layer.
Distribution Layer
The distribution layer in the campus design has a unique role in which it acts as a services and control boundary between the access layer and the core. Both the access layer and the core are essentially dedicated special-purpose layers. The access layer is dedicated to meeting the functions of end-device connectivity, and the core layer is dedicated to providing nonstop connectivity across the entire campus network. The distribution layer, in contrast, serves multiple purposes. Figure 2-4 references the distribution layer.
Figure 2-4 Distribution Layer
Availability, fast path recovery, load balancing, and QoS are all important considerations at the distribution layer. Generally, high availability is provided through Layer 3 redundant paths from the distribution layer to the core, and either Layer 2 or Layer 3 redundant paths from the access layer to the distribution layer. Keep in mind that Layer 3 equal-cost load sharing allows both uplinks from the distribution to the core layer to be used for traffic in a variety of load-balancing methods discussed later in this chapter.
With a Layer 2 design in the access layer, the distribution layer generally serves as a routing boundary between the access and core layer by terminating VLANs. The distribution layer often represents a redistribution point between routing domains or the demarcation between static and dynamic routing protocols. The distribution layer may perform tasks such as controlled routing decision making and filtering to implement policy-based connectivity, security, and QoS. These features allow for tighter control of traffic through the campus network.
To improve routing protocol performance further, the distribution layer is generally designed to summarize routes from the access layer. If Layer 3 routing is extended to the access layer, the distribution layer generally offers a default route to access layer switching while leveraging dynamic routing protocols when communicating with core routers.
In addition, the distribution layer optionally provides default gateway redundancy by using a first-hop redundancy protocol (FHRP) such as Hot Standby Router Protocol (HSRP), Gateway Load Balancing Protocol (GLBP), or Virtual Router Redundancy Protocol (VRRP). FHRPs provide redundancy and high availability for the first-hop default gateway of devices connected downstream on the access layer. In designs that leverage Layer 3 routing in the access layer, an FHRP might not be applicable or may require a different design.
In summary, the distribution layer performs the following functions when Layer 3 routing is not configured in the access layer:
- Provides high availability and equal-cost load sharing by interconnecting the core and access layer via at least dual paths
- Generally terminates a Layer 2 domain of a VLAN
- Routes traffic from terminated VLANs to other VLANs and to the core
- Summarizes access layer routes
- Implements policy-based connectivity such as traffic filtering, QoS, and security
- Provides for an FHRP
Core Layer (Backbone)
The core layer, as illustrated in Figure 2-5, is the backbone for campus connectivity, and is the aggregation point for the other layers and modules of an enterprise network. The core must provide a high level of redundancy and adapt to changes quickly.
Figure 2-5 Core Layer
From a design point of view, the campus core is in some ways the simplest yet most critical part of the campus. It provides a limited set of services and is designed to be highly available, requiring 100 percent uptime. In large enterprises, the core of the network must operate as a nonstop, always-available service. The key design objectives for the campus core are based on providing the appropriate level of redundancy to allow for near-immediate data-flow recovery in the event of the failure of any component (switch, supervisor, line card, fiber interconnect, power, and so on). The network design must also permit the occasional, but necessary, hardware and software upgrade or change to be made without disrupting any network applications. The core of the network should not implement any complex policy services, nor should it have any directly attached user or server connections. The core should also have a minimal control plane configuration, combined with highly available devices configured with the correct amount of physical redundancy to provide for this nonstop service capability. Figure 2-6 illustrates a large campus network interconnected by the core layer (campus backbone) to the data center.
Figure 2-6 Large Campus Network
From an enterprise architecture point of view, the campus core is the backbone that binds together all the elements of the campus architecture, including the WAN, the data center, and so on. In other words, the core layer is the part of the network that provides connectivity between end devices and the computing and data storage services located within the data center, in addition to other areas and services within the network.
Figure 2-7 illustrates an example of the core layer interconnected with other parts of the enterprise network. In this example, the core layer interconnects with a data center and an edge distribution module that interconnects the WAN, remote access, and the Internet. The network management module operates out of band from the network but is still a critical component.
Figure 2-7 Core Layer Interconnecting with the Enterprise Network
In summary, the core layer is described as follows:
- Aggregates the campus networks and provides interconnectivity to the data center, the WAN, and other remote networks
- Requires high availability, resiliency, and the ability to make software and hardware upgrades without interruption
- Designed without direct connectivity to servers, PCs, access points, and so on
- Requires core routing capability
- Architected for future growth and scalability
- Leverages Cisco platforms that support hardware redundancy such as the Catalyst 4500 and the Catalyst 6800
Layer 3 in the Access Layer
As switch products become more commoditized, the cost of Layer 3 switches has diminished significantly. Because of the reduced cost and a few inherent benefits, Layer 3 switching in the access layer has become more common than typical Layer 2 switching in the access layer. Using Layer 3 switching or traditional Layer 2 switching in the access layer has benefits and drawbacks. Figure 2-8 compares Layer 2 from the access layer to the distribution layer with Layer 3 from the access layer to the distribution layer.
Figure 2-8 Layer 3 in the Access Layer
As discussed in later chapters, deploying a Layer 2 switching design in the access layer may result in suboptimal usage of links between the access and distribution layers. In addition, this method does not scale well to very large networks because of the size of the Layer 2 domain.
A design that leverages Layer 3 switching to the access layer scales better than Layer 2 switching designs because VLANs are terminated on the access layer devices. Specifically, the links between the distribution and access layer switches are routed links; all access and distribution devices participate in the routing scheme.
The Layer 2-only access design is a traditional, slightly cheaper solution, but it suffers from suboptimal use of links between the access and distribution layers because of spanning tree. Layer 3 designs introduce the challenge of how to separate traffic. (For example, guest traffic should stay separated from intranet traffic.) Layer 3 designs also require careful planning with respect to IP addressing. Because VLANs are terminated at each access switch and are only locally significant, a VLAN on one Layer 3 access device cannot extend to an access layer switch in a different part of the network. As a result, mobility of devices in Layer 3 access layer networks is traditionally limited without the use of advanced mobility networking features.
In summary, campus networks with Layer 3 in the access layer are becoming more popular. Moreover, next-generation architectures will alleviate the biggest problem with Layer 3 routing in the access layer: mobility.
The next subsection of this chapter applies the hierarchical model to an enterprise architecture.
The Cisco Enterprise Campus Architecture
The Cisco enterprise campus architecture refers to the traditional hierarchical campus network applied to the network design, as illustrated in Figure 2-9.
Figure 2-9 Cisco Enterprise Campus Network
The Cisco enterprise campus architecture divides the enterprise network into physical, logical, and functional areas while leveraging the hierarchical design. These areas allow network designers and engineers to associate specific network functionality on equipment that is based on its placement and function in the model.
Note that although the tiers do have specific roles in the design, no absolute rules apply to how a campus network is physically built. Although it is true that many campus networks are constructed of three physical tiers of switches, this is not a strict requirement. In a smaller campus, the network might have two tiers of switches in which the core and distribution elements are combined in one physical switch: a collapsed distribution and core. Conversely, a network may have four or more physical tiers of switches because the scale, wiring plant, or physical geography of the network might require that the core be extended.
The hierarchy of the network often defines the physical topology of the switches, but they are not the same thing. The key principle of the hierarchical design is that each element in the hierarchy has a specific set of functions and services that it offers and a specific role to play in the design.
In reference to CCNP Switch, the access layer, the distribution layer, and the core layer may be referred to as the building access layer, the building distribution layer, and the building core layer. The term building implies, but does not limit, the context of the layers to physical buildings. As mentioned previously, the physical demarcation does not have to be a building; it can be a floor, a group of floors, wiring closets, and so on. This book solely uses the terms access layer, distribution layer, and core layer for simplicity.
In summary, network architects build Cisco enterprise campus networks by leveraging the hierarchical model and dividing the layers by some physical or logical barrier. Although campus network designs go much further beyond the basic structure, the key takeaway of this section is that the access, distribution, and core layers are applied to either physical or logical barriers.
The Need for a Core Layer
When first studying campus network design, people often question the need for a core layer. In a campus network contained within a few buildings or a similar physical infrastructure, collapsing the core into the distribution layer switches may save on initial cost because an entire layer of switches is not needed. Figure 2-10 shows a network design example where the core layer has been collapsed into the distribution layer by fully meshing the four distinct physical buildings.
Figure 2-10 Collapsed Core Design
Despite a possible lower cost to the initial build, this design is difficult to scale. In addition, cabling requirements increase dramatically with each new building because of the need for full-mesh connectivity to all the distribution switches. The routing complexity also increases as new buildings are added because additional routing peers are needed.
With regard to Figure 2-10, the distribution module in the second building of two interconnected switches requires four additional links for full-mesh connectivity to the first module. A third distribution module to support the third building would require 8 additional links to support the connections to all the distribution switches, or a total of 12 links. A fourth module supporting the fourth building would require 12 new links for a total of 24 links between the distribution switches.
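The pattern behind these numbers is simple: each new distribution module (a pair of switches in this example) needs a link to every distribution switch that already exists. The short Python sketch below reproduces the arithmetic from Figure 2-10; the two-switches-per-module assumption follows the example above.

```python
def new_links(existing_switches: int, new_switches: int = 2) -> int:
    """Links added when a new distribution module (a pair of switches)
    is fully meshed to every existing distribution switch."""
    return existing_switches * new_switches

total = 0
switches = 2  # the first building's distribution pair
for building in range(2, 5):  # buildings 2 through 4
    added = new_links(switches)
    total += added
    switches += 2
    print(f"Building {building}: +{added} links, {total} total")
# Building 2: +4 links, 4 total
# Building 3: +8 links, 12 total
# Building 4: +12 links, 24 total
```

The quadratic growth of the mesh (4, then 12, then 24 links) is exactly the scaling problem a dedicated core layer avoids.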
As illustrated in Figure 2-11, having a dedicated core layer allows the campus to accommodate growth without requiring full-mesh connectivity between the distribution layers. This is particularly important as the size of the campus grows either in number of distribution blocks, geographical area, or complexity. In a larger, more complex campus, the core provides the capacity and scaling capability for the campus as a whole and may house additional services such as security features.
Figure 2-11 Scaling with a Core Layer
The question of when a separate physical core is necessary depends on multiple factors. The ability of a distinct core to allow the campus network to solve physical design challenges is important. However, remember that a key purpose of having a distinct campus core is to provide scalability and to minimize the risk from (and simplify) moves, adds, and changes in the campus network. In general, a network that requires routine configuration changes to the core devices does not yet have the appropriate degree of design modularization. As the network increases in size or complexity and changes begin to affect the core devices, it often points out design reasons for physically separating the core and distribution functions into different physical devices.
In brief, although networks designed without a core layer may work at small scale, medium-sized to enterprise-sized networks require a core layer for design modularization and scalability.
To conclude this section: despite its age, the hierarchical model is still relevant to campus network designs. For review, the layers are described as follows:
- The access layer connects end devices such as PCs, access points, printers, and so on to the network.
- The distribution layer has multiple roles, but it primarily aggregates the multiple access layers. The distribution layer may terminate VLANs in Layer 2-to-the-access-layer designs or provide routing downstream to the access layer in Layer 3-to-the-access-layer designs.
- The core layer interconnects the distribution layers and the other modules of the enterprise network, acting as a high-speed, highly available backbone.
The next section delves into a major building block of the campus network: the Cisco switch itself.
Types of Cisco Switches
Switches are the fundamental interconnect component of the campus network. Cisco offers a variety of switches specifically designed for different functions. At the time of this writing, Cisco designs the Catalyst switches for campus networks and Nexus switches for data centers. In the context of CCNP, this book focuses mostly on Catalyst switches.
Figure 2-12 illustrates the current recommended Catalyst switches. However, in the competitive campus switch marketplace, Cisco continuously updates the Catalyst switches with new capabilities, higher performance, higher density, and lower cost.
Figure 2-12 Cisco Catalyst Switches
Interestingly enough, the Catalyst 6500 is not detailed in Figure 2-12. Despite its extremely long life cycle, Cisco marketing has finally shifted focus to the Catalyst 6800. Many of you reading this book have likely come across the Catalyst 6500 at some point in your career.
Cisco offers two types of network switches: fixed configuration and modular switches. With fixed configuration switches, you cannot swap or add modules as you can with a modular switch. In enterprise access layers, you will find fixed configuration switches such as the Cisco Catalyst 2960-X series, which supports a wide range of deployments.
In the enterprise distribution layer, you will find either fixed or modular switches, depending on campus network requirements. An example of a modular switch that can be found in the distribution layer is the Cisco Catalyst 3850 series. This series of switches allows you to select different network modules (Ethernet or fiber optic) and redundant power supply modules. In small businesses without a distribution layer, the 3850 can be found in the core layer. In large enterprise networks, you might find the 3850 in the access layer in cases where high redundancy and full Layer 3 functionality at the access layer are requirements.
In the enterprise core layer, you will often find modular switches such as the Cisco Catalyst 6500 or the Catalyst 6800 series. With the 6800 switch, nearly every component (including the route processing/supervisor module, Ethernet modules, and power supplies) is individually installed in a chassis. This modularity allows for customization and high-availability options when necessary.
In networks that carry a lot of traffic, you have the option of leveraging the Cisco Catalyst 4500-X series switches in the distribution layer. The Catalyst 4500-X supports supervisor/route processor redundancy and 10 Gigabit Ethernet.
All switches within the 2960-X, 3850, 4500-X, and 6800 series are managed, which means that you can configure an IP address on the device. With a management IP address, you can connect to the device using Secure Shell (SSH) or Telnet and change device settings. An unmanaged switch is appropriate only for a home or very small business environment. It is highly recommended not to use an unmanaged switch in any campus network.
This section just described a few examples of Cisco switches and their placement in the network. For more information, go to http://www.cisco.com/c/en/us/products/switches/index.html.
The next section compares Layer 2 and Layer 3 (multilayer switches).
Comparing Layer 2 and Multilayer Switches
A Layer 2 Ethernet switch operates at the data link layer of the OSI model. These switches make frame-forwarding decisions based on the destination MAC address found within the frame.
Recalling basic networking: a switch's collision domains extend only from port to port, because each switch port and its associated end device form their own collision domain. Because there is no contention on the media, all hosts can operate in full-duplex mode, which means that they can receive and transmit data at the same time. The concept of half duplex is legacy and applies only to hubs and older 10/100-Mbps switches, because 1-Gbps ports operate at full duplex by default.
When a switch receives a frame in store-and-forward mode, the frame is checked for errors, and frames with a valid cyclic redundancy check (CRC) are regenerated and transmitted. Some models of switches, mostly Nexus switches, opt to switch frames based only on reading the Layer 2 information, bypassing the CRC check. This bypass, referred to as cut-through switching, lowers the latency of frame transmission because the entire frame is not stored before transmission to another port. Lower switching latency is beneficial for low-latency applications such as algorithmic trading programs found in the data center. The assumption is that the end device's network interface card (NIC) or an upper-level protocol will eventually discard the bad frame. Most Catalyst switches are store-and-forward.
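To make the store-and-forward check concrete, the following Python sketch models it: the whole frame is buffered, its trailing frame check sequence (FCS) is recomputed, and the frame is forwarded only if the CRC matches. This is a simplified model, not switch firmware; real Ethernet hardware computes the CRC-32 as defined in IEEE 802.3, but the polynomial is the same one used by zlib.crc32.

```python
import zlib

def fcs(frame_bytes: bytes) -> bytes:
    """Compute a CRC-32 frame check sequence (little-endian, as on the wire)."""
    return zlib.crc32(frame_bytes).to_bytes(4, "little")

def store_and_forward_ok(frame_with_fcs: bytes) -> bool:
    """Buffer the whole frame, then verify its trailing CRC before forwarding."""
    payload, trailer = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return fcs(payload) == trailer

frame = b"\x00\x00\x00\x00\x55\x55" + b"payload"    # toy frame contents
good = frame + fcs(frame)                            # valid CRC appended
bad = good[:-1] + bytes([good[-1] ^ 0xFF])           # corrupt the last FCS byte
print(store_and_forward_ok(good), store_and_forward_ok(bad))  # True False
```

A cut-through switch would skip the `store_and_forward_ok` check entirely and begin transmitting after reading only the destination address, which is where its latency advantage comes from.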
MAC Address Forwarding
To determine where a frame must be sent, the switch looks up its MAC address table. This information can be configured statically, or the switch can learn it automatically. The switch listens to incoming frames and checks their source MAC addresses. If an address is not already in the table, the MAC address, switch port, and VLAN are recorded in the forwarding table. The forwarding table is also called the CAM table.
What happens if the destination MAC address of the frame is unknown to the switch? The switch forwards the frame through all ports within the VLAN except the port on which the frame was received. This is known as unknown unicast flooding. Broadcast and multicast traffic is destined for multiple destinations, so it is flooded by default.
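The learning and flooding behavior described above can be sketched in a few lines of Python. This toy model (a single VLAN, no aging timers) illustrates the logic only, not any Cisco implementation; the MAC addresses and port numbers are arbitrary.

```python
class LearningSwitch:
    """Simplified Layer 2 learning and forwarding for a single VLAN."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.cam = {}  # MAC address -> port (the forwarding/CAM table)

    def receive(self, src_mac, dst_mac, in_port):
        self.cam[src_mac] = in_port  # learn the source MAC on the ingress port
        out = self.cam.get(dst_mac)
        if out is None or dst_mac == "ffff.ffff.ffff":
            # Unknown unicast or broadcast: flood out every port but the ingress
            return sorted(self.ports - {in_port})
        return [out]

sw = LearningSwitch(ports=[1, 2, 3, 4, 5])
print(sw.receive("0000.0000.1111", "0000.0000.5555", 1))  # flood: [2, 3, 4, 5]
sw.receive("0000.0000.5555", "0000.0000.1111", 5)         # switch learns port 5
print(sw.receive("0000.0000.1111", "0000.0000.5555", 1))  # forward: [5]
```

Note how the first frame toward 0000.0000.5555 is flooded, but once a frame from that address has been seen on port 5, subsequent frames are forwarded out that single port.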
Referring to Figure 2-13, in the first example, the switch receives a frame on port 1. The destination MAC address of the frame is 0000.0000.5555. The switch looks up its forwarding table, finds that MAC address 0000.0000.5555 is recorded on port 5, and forwards the frame through port 5.
Figure 2-13 Layer 2 Switching Operation: MAC Address Forwarding
In the second example, the switch receives a broadcast frame on port 1. The switch will forward the frame through all ports that are within the same VLAN except port 1. The frame was received on port 1, which is in VLAN 1; therefore, the frame is forwarded through all ports on the switch that belong to VLAN 1 (all ports except port 3).
The next subsection discusses Layer 2 switch operation from a mechanics point of view.
Layer 2 Switch Operation
When a switch receives a frame, it places the frame into an ingress queue. A port can have multiple ingress queues, and typically these queues are used to service frames differently (for example, to apply quality of service [QoS]). From a simplified viewpoint, when the switch selects a frame from a queue to transmit, the switch needs to answer a few questions:
- Where should the frame be forwarded?
- Are there restrictions preventing the forwarding of the frame?
- Is there any prioritization or marking that needs to be applied to the frame?
Decisions about these three questions are answered, respectively, as illustrated in Figure 2-14 and described in the list that follows.
Figure 2-14 Layer 2 Switch Operation: Mechanics
- Layer 2 forwarding table: The Layer 2 forwarding table, also called the MAC table, contains information about where to forward the frame. Specifically, it contains MAC addresses and their destination ports. The switch references the destination MAC address of the incoming frame in the MAC table and forwards the frame to the destination port specified in the table. If the MAC address is not found, the frame is flooded through all ports in the same VLAN.
- ACLs: Access control lists (ACLs) do not only apply to routers. Switches can also apply ACLs based on MAC and IP addresses. Generally only higher-end switches support ACLs based on both MAC and IP addresses, whereas Layer 2 switches support ACLs only with MAC addresses.
- QoS: Incoming frames can be classified according to QoS parameters. Traffic can then be marked, prioritized, or rate-limited.
Switches use specialized hardware to house the MAC table, ACL lookup data, and QoS lookup data. For the MAC table, switches use content-addressable memory (CAM), whereas the ACL and QoS tables are housed in ternary content-addressable memory (TCAM). Both CAM and TCAM provide extremely fast access, allowing for line-rate switching performance. CAM matches on only two input states, 0 and 1, which makes it useful for exact-match lookups such as Layer 2 forwarding tables.
TCAM matches on three input states: 0, 1, and "don't care." This makes TCAM most useful for building tables that are searched on longest matches, such as IP routing tables organized by IP prefixes. The TCAM table stores ACL, QoS, and other information generally associated with upper-layer processing. As a result of using TCAM, applying ACLs does not affect the performance of the switch.
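The practical effect of TCAM's "don't care" state is that one entry can match an entire prefix, and among all matching entries the longest prefix wins. The following Python sketch models only the outcome of such a lookup, not the hardware itself; the routes and next-hop names are hypothetical.

```python
import ipaddress

# Each TCAM-like entry masks out "don't care" host bits; the prefix length
# records how many leading bits must match (hypothetical routes).
routes = [
    (ipaddress.ip_network("10.0.0.0/8"), "core-uplink"),
    (ipaddress.ip_network("10.1.0.0/16"), "building-1"),
    (ipaddress.ip_network("0.0.0.0/0"), "default"),
]

def lookup(dst: str) -> str:
    """Return the next hop for the longest prefix matching dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, hop) for net, hop in routes if addr in net]
    return max(matches)[1]  # longest prefix wins

print(lookup("10.1.2.3"))   # building-1  (matches /8, /16, and /0; /16 wins)
print(lookup("10.9.9.9"))   # core-uplink
print(lookup("192.0.2.1"))  # default
```

In hardware, the TCAM evaluates all of these entries in parallel in a single lookup, which is why longest-match routing and ACL evaluation run at line rate.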
This section only touches on the details and implementation of CAM and TCAM needed for the CCNP certification. For a more detailed description, review the following support document at Cisco.com:
The next subsection discusses Layer 3 (multilayer) switch operation in more detail.
Layer 3 (Multilayer) Switch Operation
Multilayer switches not only perform Layer 2 switching but also forward frames based on Layer 3 and Layer 4 information. They combine the functions of a switch and a router, and add a flow cache component as well.
Multilayer switches apply the same behavior as Layer 2 switches but add an additional parallel lookup for how to route a packet, as illustrated in Figure 2-15.
Figure 2-15 Multilayer Switch Operation
The associated table for Layer 3 lookups is called the FIB table. The FIB table contains not only egress ports and VLAN information but also MAC rewrite information. The ACL and QoS parallel lookups happen the same as on Layer 2 switches, except there may be additional support for Layer 3 ACLs and QoS prioritization.
For example, a Layer 2 switch may be able to rate-limit frames based only on source or destination MAC addresses, whereas a multilayer switch generally supports rate-limiting frames based on IP or MAC addresses.
Unfortunately, different models of Cisco switches support different capabilities, and some Layer 2-only switches actually support Layer 3 ACLs and QoS lookups. It is best to consult the product documentation at Cisco.com for clear information about what your switch supports. For the purpose of CCNP Switch and the context of this book, Layer 2 switches support ACLs and QoS based on MAC addresses, whereas Layer 3 switches support ACLs and QoS based on IP or MAC addresses.
Useful Commands for Viewing and Editing Catalyst Switch MAC Address Tables
There is one command for viewing the Layer 2 forwarding table on Catalyst and Nexus switches: show mac address-table. The command has many optional parameters to narrow the output to a more manageable result in large networks. The full command syntax is as follows: show mac address-table [aging-time | count | dynamic | static] [address hw-addr] [interface interface-id] [vlan vlan-id] [ | {begin | exclude | include} expression].
Example 2-1 illustrates sample uses of the command and several useful optional uses.
Example 2-1 Layer 2 Forwarding Table
Switch1# show mac address-table
          Mac Address Table
-------------------------------------------
Vlan    Mac Address       Type       Ports
----    -----------       --------   -----
   1    0000.0c00.9001    DYNAMIC    Et0/1
   1    0000.0c00.9002    DYNAMIC    Et0/2
   1    0000.0c00.9003    DYNAMIC    Et0/3
Total Mac Addresses for this criterion: 3

Switch1# show mac address-table interface ethernet 0/1
          Mac Address Table
-------------------------------------------
Vlan    Mac Address       Type       Ports
----    -----------       --------   -----
   1    0000.0c00.9001    DYNAMIC    Et0/1
Total Mac Addresses for this criterion: 1

Switch1# show mac address-table | include 9001
   1    0000.0c00.9001    DYNAMIC    Et0/1
Frame Rewrite
From your CCNA studies, you know that many fields of a packet must be rewritten when the packets are routed between subnets. These fields include both source and destination MAC addresses, the IP header checksum, the TTL (Time-to-Live), and the trailer checksum (Ethernet CRC). See Chapter 1, “Fundamentals Review,” for an example.
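The per-hop rewrite can be summarized in a short sketch. The frame is modeled as a plain dict and every field name here is invented for illustration; the point is simply which fields change when a packet is routed to the next hop.

```python
def rewrite_for_next_hop(frame, egress_mac, next_hop_mac):
    """Sketch of the per-hop rewrite a router (or multilayer switch) performs.
    The frame is a plain dict; all field names are invented for this example."""
    frame = dict(frame)
    frame["src_mac"] = egress_mac    # source MAC becomes the egress interface MAC
    frame["dst_mac"] = next_hop_mac  # destination MAC becomes the next hop's MAC
    frame["ttl"] -= 1                # TTL drops by one at each routed hop
    # The IP header checksum must be updated because the TTL changed,
    # and the Ethernet CRC must be recomputed over the rewritten frame.
    frame["ip_checksum"] = "recomputed"
    frame["crc"] = "recomputed"
    return frame

pkt = {"src_mac": "aaaa.aaaa.aaaa", "dst_mac": "cccc.cccc.cccc",
       "ttl": 64, "ip_checksum": "original", "crc": "original"}
out = rewrite_for_next_hop(pkt, "cccc.cccc.cccc", "bbbb.bbbb.bbbb")
print(out["ttl"], out["src_mac"], out["dst_mac"])  # 63 cccc.cccc.cccc bbbb.bbbb.bbbb
```

Notice that the source and destination IP addresses are not in the rewrite list: they stay constant end to end, while the Layer 2 header is rebuilt at every hop.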
Distributed Hardware Forwarding
Network devices contain at least three planes of operation:
- Management plane
- Control plane
- Forwarding plane
The management plane is responsible for network management traffic, such as SSH access and SNMP, and may operate over an out-of-band (OOB) port. The control plane is responsible for protocols and routing decisions, and the forwarding plane is responsible for the actual routing (or switching) of most packets.
Multilayer switches must achieve high performance at line rate across a large number of ports. To do so, multilayer switches deploy independent control and forwarding planes. In this manner, the control plane will program the forwarding plane on how to route packets. Multilayer switches may also employ multiple forwarding planes. For example, a Catalyst 6800 uses forwarding planes on each line module, with a central control plane on the supervisor module.
To continue the example of the Catalyst 6800, each line module includes a microcoded processor that handles all packet forwarding. For the control plane on the supervisor to communicate with the line module, a control layer communication protocol exists, as shown in Figure 2-16.
Figure 2-16 Distributed Hardware Forwarding
The main functions of this control layer protocol between the control plane and the forwarding plane are as follows:
- Managing the internal data and control circuits for the packet-forwarding and control functions
- Extracting the other routing and packet-forwarding-related control information from the Layer 2 and Layer 3 bridging and routing protocols and the configuration data, and then conveying the information to the interface module for control of the data path
- Collecting the data path information, such as traffic statistics, from the interface module to the route processor
- Handling certain data packets that are sent from the Ethernet interface modules to the route processor (for example, DHCP requests, broadcast packets, routing protocol packets)
Cisco Switching Methods
The term Cisco switching methods describes the packet-forwarding behavior of the route processor found on Cisco IOS routers. Because multilayer switches are capable of routing and, in fact, contain a routing process, a review of these concepts is necessary.
A Cisco IOS-based router uses one of three methods to forward packets: process switching, fast switching, and Cisco Express Forwarding (CEF). Recall from your study of routers that process switching is the slowest form of routing because the router processor must route and rewrite using software. Because speed and the number of cores limit the route processor, this method does not scale. The second method, fast switching, is a faster method by which the first packet in a flow is routed and rewritten by a route processor using software, and each subsequent packet is then handled by hardware. The CEF method uses hardware forwarding tables for most common traffic flows, with only a few exceptions. If you use CEF, the route processor spends its cycles mostly on other tasks.
The architectures of Cisco Catalyst and Nexus switches both focus primarily on the Cisco router equivalent of CEF. The absolute last-resort switching method for Cisco Catalyst or Nexus switches is process switching. The route processors of these switches were never designed to switch or route packets, and doing so has an adverse effect on performance. Fortunately, the default behavior of these switches is to use fast switching or CEF, and process switching occurs only when necessary.
With Cisco Catalyst switching terminology, fast switching is referred to as route caching, and the application of CEF with distributed hardware forwarding is referred to as topology-based switching.
As a review, the following list summarizes route caching and topology-based forwarding on Cisco Catalyst switches:
- Route caching: Also known as flow-based or demand-based switching, route caching describes a Layer 3 route cache that is built within the hardware functions as the switch detects traffic flow into the switch. This method is functionally equivalent to fast switching in Cisco IOS Software.
- Topology-based switching: Information from the routing table is used to populate the route cache, regardless of traffic flow. The populated route cache is the FIB, and CEF is the facility that builds the FIB. This method is functionally equivalent to CEF in Cisco IOS Software.
The next subsections describe route caching and topology-based switching in more detail.
Route Caching
Route caching is the fast switching equivalent in Cisco Catalyst switches. For route caching to operate, the destination MAC address of an incoming frame must be that of a switch interface with Layer 3 capabilities. The first packet in a stream is switched in software by the route processor, because no cache entry exists yet for the new flow. The forwarding decision that is made by the route processor is then programmed into a cache table (the hardware forwarding table), and all subsequent packets in the flow are switched in hardware, commonly by application-specific integrated circuits (ASICs). Entries are created in the hardware forwarding table only as the switch detects new traffic flows, and entries time out after they have been unused for a period of time.
Because entries are created only in the hardware cache as flows are detected by the switch, route caching will always forward at least one packet in a flow using software.
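The "route once, switch many" behavior can be sketched as a flow cache in front of a slow routing function. This is a toy model with invented names: the first packet of each flow takes the slow path and programs the cache, and every later packet is served from the cache.

```python
class FlowCacheForwarder:
    """Toy model of route caching: the first packet of a flow is resolved by the
    slow routing function; later packets in the flow hit the cache."""

    def __init__(self, route_lookup):
        self.route_lookup = route_lookup   # slow path: destination -> egress port
        self.cache = {}                    # flow cache: (src, dst) -> egress port
        self.slow_path_hits = 0

    def forward(self, src, dst):
        flow = (src, dst)
        if flow not in self.cache:
            self.slow_path_hits += 1                   # CPU/software involvement
            self.cache[flow] = self.route_lookup(dst)  # program the cache entry
        return self.cache[flow]                        # fast path from here on

# Pretend the routing process resolves every destination out Gi1/2.
fwd = FlowCacheForwarder(lambda dst: "Gi1/2")
for _ in range(5):
    fwd.forward("10.2.0.5", "10.1.2.3")
print(fwd.slow_path_hits)  # 1 -- only the first packet of the flow took the slow path
```

The counter makes the limitation from the text concrete: no matter how many packets a flow carries, at least one of them is forwarded in software.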
Route caching carries many other names, such as NetFlow LAN switching, flow-based or demand-based switching, and route once, switch many.
Figure 2-17 briefly highlights this concept from a hardware perspective.
Figure 2-17 Route Caching
Topology-Based Switching
Topology-based switching is the CEF equivalent feature of Cisco Catalyst switches. Topology-based switching is preferred over route caching for Layer 3 switching because it offers the best performance and scalability. Fortunately, all Cisco Catalyst switches capable of Layer 3 routing leverage topology-based switching / CEF. For the purpose of CCNP Switch, focus primarily on the benefits and operation of topology-based switching.
CEF uses information in the routing table to populate a route cache (known as an FIB), without traffic flows being necessary to initiate the caching process. Because this hardware FIB exists regardless of traffic flow, assuming that a destination address has a route in the routing table, all packets that are part of a flow will be forwarded by the hardware. The FIB even handles the first packet of a flow. Figure 2-18 illustrates this behavior.
Figure 2-18 Topology-Based Switching
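The contrast with route caching can be sketched in the same toy style. All names are invented, and a plain dict lookup stands in for the longest-prefix FIB match: because the FIB is programmed from the routing table before any traffic arrives, even the first packet of a flow avoids the slow path.

```python
class TopologyBasedForwarder:
    """Toy contrast to the flow-cache model: the FIB is programmed from the
    routing table up front, so the first packet of a flow is already in hardware."""

    def __init__(self, routing_table):
        # Build the FIB from the routing table, independent of any traffic flow.
        self.fib = dict(routing_table)   # prefix -> egress port (toy exact match)
        self.slow_path_hits = 0

    def forward(self, dst_prefix):
        if dst_prefix in self.fib:
            return self.fib[dst_prefix]  # hardware path, first packet included
        self.slow_path_hits += 1         # punt to the CPU only for exceptions
        return None

fwd = TopologyBasedForwarder({"10.1.0.0/16": "Gi1/2", "0.0.0.0/0": "Gi1/48"})
print(fwd.forward("10.1.0.0/16"))  # Gi1/2
print(fwd.slow_path_hits)          # 0 -- no packet needed the slow path
```

Compared with the flow-cache sketch, the slow-path counter stays at zero for routed destinations, which is the essential advantage of topology-based switching.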
In addition, CEF adds enhanced support for parallel paths and thus optimizes load balancing at the IP layer. In most current-generation Catalyst switches, such as the Catalyst 4500 and 6800, CEF supports load balancing based on either the source and destination IP address combination or the source and destination IP addresses plus TCP/UDP port numbers.
CEF load-balancing schemes allow for Layer 3 switches to use multiple paths to achieve load sharing. Packets for a given source-destination host pair are guaranteed to take the same path, even if multiple paths are available. This ensures that packets for a given host pair arrive in order, which in some cases may be the desired behavior with legacy applications.
Moreover, load balancing based only on source and destination IP address has a few shortcomings. Because this load-balancing method always selects the same path for a given host pair, a heavily used source-destination pair, such as a firewall and a web server, might not leverage all available links. In other words, the behavior of this load-balancing scheme may “polarize” the traffic by using only one path for a given host pair, thus effectively negating the load-balancing benefit of the multiple paths for that particular host pair.
So, optimal use of any load-balancing scheme depends on the statistical distribution of traffic because source and destination IP load sharing becomes more effective as the number of source-destination IP pairs increases. In an environment where there is a broad distribution of traffic among host pairs, polarization is of minimal concern. However, in an environment where the data flow between a small number of host pairs creates a disproportionate percentage of the packets traversing the network, polarization can become a serious problem.
A popular alternative, now the default behavior on newer Catalyst switches, is load balancing based on source and destination IP addresses plus TCP/UDP port numbers. The more factors added to the load-balancing scheme, the less likely polarization becomes.
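The polarization effect can be demonstrated with a toy hash-based path selection. Here zlib.crc32 merely stands in for the switch's hardware hash, and the link names are invented: hashing only the source/destination pair pins every flow between those hosts to one link, while adding port numbers lets the flows spread.

```python
import zlib

LINKS = ["path-A", "path-B", "path-C", "path-D"]

def pick_link(*fields):
    """Hash the flow fields and map the result onto one of the parallel links.
    zlib.crc32 stands in for the hardware hash; real switches use their own."""
    key = "|".join(str(f) for f in fields).encode()
    return LINKS[zlib.crc32(key) % len(LINKS)]

src, dst = "198.51.100.10", "203.0.113.20"   # e.g., a firewall and a web server

# Source/destination IP only: every flow between this pair takes the same link.
ip_only = {pick_link(src, dst) for _ in range(1000)}

# Including TCP/UDP ports: different flows between the same pair can spread out.
with_ports = {pick_link(src, dst, sport, 80) for sport in range(49152, 50152)}

print(len(ip_only))     # 1 -- polarized onto a single link
print(len(with_ports))  # more than 1 -- flows spread across the links
```

Within any single flow the hash input is constant, so packet ordering is still preserved; only distinct flows between the host pair are spread across links.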
Cisco Catalyst supports additional load-balancing methods and features by which to tune load balancing based on hardware model and software version. Consult Cisco.com for such configuration optimizations if necessary.
Hardware Forwarding Details
The actual Layer 3 switching of packets can occur at two different locations on Catalyst switches: in a centralized manner, such as on a supervisor module, or in a distributed fashion, where switching occurs on individual line modules. These methods are referred to as centralized switching and distributed switching, respectively.
The Catalyst 6500 was a perfect example: it offered the option to switch everything centrally on the supervisor or to place specific hardware versions of line modules in the chassis to gain distributed switching capability.
The benefits of centralized switching include lower hardware cost and lower complexity. For scaling and large enterprise core networks, distributed switching is optimal. Most small form-factor switches leverage centralized switching.
In conclusion, the subsections of this chapter pertaining to switching methods and hardware forwarding included many specific details about routing and switching operations on Cisco switches. From all these explanations and details, take away the following concepts:
- The control plane (CPU/route processor) of a Cisco Catalyst was never designed to route or switch frames. The control plane is intended only to populate hardware tables with routing information and maintain routing protocols. The control plane may route frames in a few exception conditions.
- Medium- to high-end Cisco Catalyst switches were designed on the distributed forwarding model to scale to the demands of campus and data center networks.
- Cisco Catalyst switches leverage CEF (topology-based switching) for routing of frames as a means to implement a distributed hardware forwarding model.
- Cisco Catalyst switches use either a centralized method or a distributed line module method of hardware forwarding, depending on specific platform model and configuration.
Study Tips
- The show mac address-table command displays the Layer 2 forwarding table of a Cisco switch.
- Layer 2 switches forward traffic based on the destination MAC address of a frame.
- Campus network designs are still built upon the hierarchical model, where end devices connect to the access layer, the distribution layer aggregates the access layer, and the core aggregates the entire enterprise network.
- Cisco switches leverage CEF (topology-based switching) for Layer 3 forwarding.
Summary
This chapter briefly introduced some concepts about campus networks, including the hierarchical model, the benefits of Layer 3 routing in the access layer, Cisco switches, and some hardware details related to Cisco Catalyst switches. The next chapters of this book go into more detail about specific features and design elements of the campus network, such as VLANs, spanning tree, and security. The information in this chapter is summarized as follows:
- Flat Layer 2 networks are extremely limited in scale and in most cases will only scale to 10 to 20 end users before adverse conditions may occur.
- Despite its age, the hierarchical model continues to be a key design fundamental of any network design, including campus network designs.
- The hierarchical model consists of an access, distribution, and core layer, thus allowing for scalability and growth of a campus network in a seamless manner.
- The different models of Cisco Catalyst switches provide for a range of capabilities depending on need and placement within the hierarchical model.
- Cisco Catalyst switches leverage CAM for Layer 2 forwarding tables and TCAM for Layer 3 forwarding tables to achieve line-rate performance.
- Cisco Catalyst switches leverage CEF (topology-based switching) for routing, utilizing a distributed hardware forwarding model that is centralized or distributed per line card.
Review Questions
Use the questions in this section as a review of what you have learned in this chapter. The correct answers are found in Appendix A, “Answers to Chapter Review Questions.”
Which of the following statements is true about campus networks?
- The campus network describes the interconnections of servers in a data center.
- The campus network describes the WAN interconnectivity between two remote sites and head office.
- The campus network describes the network devices that interconnect end users to applications such as e-mail, the intranet, or the Internet over wire or wireless connections.
Which of the following is a disadvantage to using flat Layer 2 networks?
- Broadcast packets are flooded to every device in the network.
- No IP boundary to administer IP-based access control.
- A host flooding traffic onto the network affects every device.
- Scalability is limited.
- All of the above
Why are networks designed with layers?
- Allows focus within specific layers due to grouping, segmentation, and compartmentalization
- Simplification of network design
- Optimizes use of physical interconnects (links)
- Optimizes application of policies and access control
- Eases network management
- All of the above
Identify the three layers of the hierarchical model for designing networks.
- Core
- Access
- Distribution
- Enterprise edge
- WAN
- Wireless
What is another common name for the core layer?
- Backbone
- Campus
- Data center
- Routing layer
In newer terminology, what layers are referred to as the spine layer and the leaf layer?
- The spine layer is the equivalent to the core layer, and the leaf layer is equivalent to the distribution layer.
- The spine layer is equivalent to the access layer, and the leaf layer is equivalent to the distribution layer.
- The spine layer is equivalent to the distribution layer, and the leaf layer is equivalent to the access layer.
- The spine layer is equivalent to the core layer, and the leaf layer is equivalent to the access layer.
Match each layer to its definition.
- Core
- Distribution
- Access
- Connects PCs, wireless access points, and IP phones
- High-speed interconnectivity layer that generally supports routing capability
- Aggregates access layer switches and provides for policy control
Which of the following are generally true about recommended core layer designs?
- Requires high-availability and resiliency
- Connects critical application servers directly for optimal latency and bandwidth
- Leverages fixed form factor switches in large enterprises
In which layer are you most likely to find fixed Catalyst switches?
- Access layer
- Core layer
- Distribution layer
In which layer are you most likely to find modular Catalyst switches?
- Access layer
- Backbone layer
- Core layer
Which of the following are benefits to using Layer 3 in the access layer? (Choose two.)
- Reduced cost
- Reduced Layer 2 domain
- Reduced spanning-tree domain
- Mobility
Which of the following is the biggest disadvantage with using Layer 3 in the access layer using current technologies?
- More difficult troubleshooting
- Lack of broadcast forwarding
- Native mobility without additional features
- Lack of high availability
A Layer 2-only switch makes forwarding decisions based on what?
- Source MAC address
- Destination MAC address
- Source IP address
- Destination IP address
What does a switch do when it does not know how to forward a frame?
- Drops the frame
- Floods the frames on all ports in the same Layer 2 domain except the source port
- Stores the frame for later transmission
- Resends the frame out the port where it was received
The Layer 2 forwarding table of Cisco switches is also referred to as which of the following?
- CAM table
- Routing table
- MAC address table
- FIB table
Which of the following lookups does a Layer 2-only Cisco Catalyst switch perform on an ingress frame?
- Layer 2 forwarding for destination port
- ACL for access control
- NetFlow for statistics monitoring
- QoS for classification, marking, or policing
Which of the following are true about CAM and/or TCAM? (Choose three.)
- TCAM stands for ternary content-addressable memory.
- CAM provides three results: 0, 1, and don’t care.
- Leveraging CAM and TCAM ensures line-rate performance of the switch.
- CAM and TCAM are software-based tables.
- TCAM is leveraged by QoS and ACL tables.
Why is TCAM necessary for IP routing tables over CAM?
- TCAM supports longest matching instead of match or not match.
- TCAM is faster than CAM.
- TCAM memory is cheaper than CAM.
Cisco Catalyst switches leverage which of the following technologies for Layer 3 forwarding?
- Route caching
- Processor/CPU switching
- NetFlow
- CEF
Cisco Catalyst switches relay routing information to hardware components for additional performance and scalability (line-rate forwarding). What are the two common hardware types that receive relayed routing information?
- Centralized
- Distributed
- Aggregated
- Core-based
With regard to load balancing, what term describes the situation where less than optimal use of all links occurs?
- Reverse path forwarding (RPF)
- Polarization
- Inverse routing
- Unicast flooding
What is the default load-balancing mechanism found on Cisco Catalyst switches?
- Per-flow
- Per-destination IP address
- Per-packet
- Per-destination MAC address