Network Implementations
Date: Oct 12, 2021
In this sample chapter from CompTIA Network+ N10-008 Exam Cram, 7th Edition, you will learn how to compare and contrast various devices, their features, and their appropriate placement on the network.
All but the most basic of networks require devices to provide connectivity and functionality. Understanding how these networking devices operate and identifying the functions they perform are essential skills for any network administrator and are requirements for a Network+ candidate.
This chapter introduces commonly used networking devices, and that is followed by a discussion of basic corporate and datacenter network architecture later in the chapter. You are not likely to encounter all the devices mentioned in this chapter on the exam, but you can expect to work with at least some of them.
Common Networking Devices
Compare and contrast various devices, their features, and their appropriate placement on a network.
The best way to think about this chapter is as a catalog of networking devices. The first half looks at devices that you can commonly find in a network of any substantial size. The devices are discussed in objective order to simplify study and include everything from simple access points to VPN concentrators.
Firewall
A firewall is a networking device, either hardware or software based, that controls access to your organization’s network. This controlled access is designed to protect data and resources from an outside threat. To provide this protection, firewalls typically are placed at a network’s entry/exit points—for example, between an internal network and the Internet. After it is in place, a firewall can control access into and out of that point.
Although firewalls typically protect internal networks from public networks, they are also used to control access between specific network segments within a network. An example is placing a firewall between the Accounts and Sales departments.
As mentioned, firewalls can be implemented through software or through a dedicated hardware device. Organizations implement software firewalls through network operating systems (NOSs) such as Linux/UNIX, Windows servers, and macOS servers. The firewall is configured on the server to allow or block certain types of network traffic. In small offices and for regular home use, a firewall is commonly installed on the local system and is configured to control traffic. Many third-party firewalls are available.
Hardware firewalls are used in networks of all sizes today. Hardware firewalls are often dedicated network devices that can be implemented with little configuration. They protect all systems behind the firewall from outside sources. Hardware firewalls are readily available and often are combined with other devices today. For example, many broadband routers and wireless access points have firewall functionality built in. In such a case, the router or AP might have a number of ports available to plug systems into. Figure 4.1 shows Windows Defender Firewall and the configured inbound and outbound rules.
FIGURE 4.1 Configuration of Windows Defender Firewall
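The rule logic a firewall applies can be sketched in a few lines. The following is a minimal illustration, not any vendor's actual rule syntax: rules are checked in order, the first match wins, and anything unmatched is implicitly denied.

```python
# Minimal sketch of ordered firewall rule evaluation.
# Rule fields and values are illustrative, not a real vendor's syntax.

RULES = [
    {"action": "allow", "protocol": "tcp", "port": 443},   # permit HTTPS
    {"action": "allow", "protocol": "tcp", "port": 25},    # permit SMTP
    {"action": "block", "protocol": "tcp", "port": None},  # block all other TCP
]

def evaluate(protocol: str, port: int) -> str:
    """Return the action of the first rule that matches; first match wins."""
    for rule in RULES:
        if rule["protocol"] == protocol and rule["port"] in (port, None):
            return rule["action"]
    return "block"  # implicit deny if nothing matched

print(evaluate("tcp", 443))  # allow
print(evaluate("tcp", 23))   # block (Telnet falls through to the default deny)
```

The same first-match logic applies whether the rules live in a hardware appliance or in a host firewall such as Windows Defender Firewall.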
IDS/IPS
An intrusion detection system (IDS) is a passive detection system. The IDS can detect the presence of an attack and then log that information. It also can alert an administrator to the potential threat. The administrator then analyzes the situation and takes corrective measures if needed.
A variation on the IDS is the intrusion prevention system (IPS), which is an active detection system. With IPS, the device continually scans the network, looking for inappropriate activity. It can shut down any potential threats. The IPS looks for any known signatures of common attacks and automatically tries to prevent those attacks. An IPS is considered an active/reactive security measure because it actively monitors and can take steps to correct a potential security threat.
Following are several variations on IDSs/IPSs:
Behavior based: A behavior-based system looks for variations in behavior such as unusually high traffic, policy violations, and so on. By looking for deviations in behavior, it can recognize potential threats and quickly respond.
Signature based: A signature-based system, also commonly known as a misuse-detection system (MD-IDS/MD-IPS), is primarily focused on evaluating attacks based on attack signatures and audit trails. An attack signature describes a generally established method of attacking a system. For example, a TCP SYN flood attack begins with a large number of incomplete TCP sessions. If the MD-IDS knows what such an attack looks like, it can make an appropriate report or response to thwart the attack. This type of IDS uses an extensive database to determine the signature of the traffic.
Network-based intrusion detection/prevention system (NIDS or NIPS): The system examines all network traffic to and from network systems. If it is software, it is installed on servers or other systems that can monitor inbound traffic. If it is hardware, it may be connected to a hub or switch to monitor traffic.
Host-based intrusion detection/prevention system (HIDS or HIPS): These applications are installed on individual network systems rather than at a network aggregation point. The system monitors activity and creates logs on the local system only.
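The signature-matching idea behind an MD-IDS can be illustrated with a toy example. Real engines use far richer rule languages and signature databases; the patterns below are invented purely for illustration.

```python
# Toy signature-based detection: flag traffic whose payload contains a
# known attack pattern. The signature names and patterns are made up.

SIGNATURES = {
    "sql-injection": "' OR '1'='1",
    "path-traversal": "../../",
}

def inspect(payload: str) -> list[str]:
    """Return the names of any signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

print(inspect("GET /../../etc/passwd HTTP/1.1"))  # ['path-traversal']
print(inspect("GET /index.html HTTP/1.1"))        # []
```

A behavior-based system would instead baseline normal traffic and alert on deviations, which is harder to reduce to a lookup like this.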
Router
In a common configuration, routers create larger networks by joining two network segments. A small office/home office (SOHO) router connects a user to the Internet. A SOHO router typically serves 1 to 10 users on the system. A router can be a dedicated hardware device or a computer system with more than one network interface and the appropriate routing software. All modern network operating systems include the functionality to act as a router.
A router derives its name from the fact that it can route data it receives from one network to another. When a router receives a packet of data, it reads the packet’s header to determine the destination address. After the router has determined the address, it looks in its routing table to determine whether it knows how to reach the destination; if it does, it forwards the packet to the next hop on the route. The next hop might be the final destination, or it might be another router. Figure 4.2 shows, in basic terms, how a router works.
A router works at Layer 3 (the network layer) of the OSI model.
FIGURE 4.2 How a router works
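The routing-table lookup described above can be sketched with Python's standard ipaddress module. The networks and next-hop addresses here are made up, and a real router also weighs metrics and administrative distance, which this sketch ignores.

```python
import ipaddress

# Sketch of a routing-table lookup: the router chooses the most specific
# (longest-prefix) route whose network contains the destination address.
ROUTING_TABLE = {
    "10.0.0.0/8":  "192.168.1.2",   # next hop toward the 10.x networks
    "10.1.0.0/16": "192.168.1.3",   # more specific route wins for 10.1.x.x
    "0.0.0.0/0":   "192.168.1.1",   # default route
}

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(net), hop)
               for net, hop in ROUTING_TABLE.items()
               if dest in ipaddress.ip_network(net)]
    # The longest prefix (largest prefixlen) is the most specific match.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.5.9"))  # 192.168.1.3 (most specific route)
print(next_hop("8.8.8.8"))   # 192.168.1.1 (default route)
```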
Switch
Like hubs, switches are the connectivity points of an Ethernet network. Devices connect to switches via twisted-pair cabling, one cable for each device. The difference between hubs and switches is in how the devices deal with the data they receive. Whereas a hub forwards the data it receives to all the ports on the device, a switch forwards it to only the port that connects to the destination device. It does this by learning the MAC addresses of the devices attached to it and then matching the destination MAC address in the data it receives. Figure 4.3 shows how a switch works. In this case, the switch has learned the MAC addresses of the devices attached to it; when a workstation sends a message intended for another workstation, the switch forwards the message only to the destination port and ignores all the other workstations.
FIGURE 4.3 How a switch works
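The learning-and-forwarding behavior can be sketched as follows. This is an illustration of the general technique, not any switch's firmware: the switch records each frame's source MAC against its ingress port and floods only when the destination is still unknown.

```python
# Sketch of switch MAC-address learning and forwarding.
mac_table: dict[str, int] = {}  # MAC address -> port number

def handle_frame(src_mac: str, dst_mac: str, in_port: int,
                 num_ports: int = 4) -> list[int]:
    """Return the list of ports the frame is forwarded out of."""
    mac_table[src_mac] = in_port           # learn where the sender lives
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]        # forward out the one known port
    # Unknown destination: flood out every port except the ingress port.
    return [p for p in range(num_ports) if p != in_port]

# First frame: destination unknown, so the switch floods.
print(handle_frame("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02", in_port=0))
# Reply: both addresses are now learned, so only one port is used.
print(handle_frame("bb:bb:bb:bb:bb:02", "aa:aa:aa:aa:aa:01", in_port=2))
```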
By forwarding data to only the connection that should receive it, the switch can greatly improve network performance. By creating a direct path between two devices and controlling their communication, the switch can greatly reduce the traffic on the network and therefore the number of collisions. As you might recall, collisions occur on Ethernet networks when two devices attempt to transmit at the same time. In addition, the lack of collisions enables switches to communicate with devices in full-duplex mode. In a full-duplex configuration, devices can send data to and receive data from the switch at the same time. Contrast this with half-duplex communication, in which communication can occur in only one direction at a time. Full-duplex transmission capacity is double that of a standard half-duplex connection: a 100 Mbps connection becomes 200 Mbps, a 1000 Mbps connection becomes 2000 Mbps, and so on.
The net result of these measures is that switches can offer significant performance improvements over hub-based networks, particularly when network use is high.
Irrespective of whether a connection is at full or half duplex, the method of switching dictates how the switch deals with the data it receives. The following is a brief explanation of each method:
Cut-through: In a cut-through switching environment, the packet begins to be forwarded as soon as it is received. This method is fast, but it creates the possibility of errors being propagated through the network because no error checking occurs.
Store-and-forward: Unlike cut-through, in a store-and-forward switching environment, the entire packet is received and error-checked before being forwarded. The upside of this method is that errors are not propagated through the network. The downside is that the error-checking process takes a relatively long time, and store-and-forward switching is considerably slower as a result.
Fragment-free: To take advantage of the error checking of store-and-forward switching, but still offer performance levels nearing that of cut-through switching, fragment-free switching can be used. In a fragment-free switching environment, enough of the packet is read so that the switch can determine whether the packet has been involved in a collision. As soon as the collision status has been determined, the packet is forwarded.
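The difference error checking makes can be illustrated with a small sketch. Here CRC-32 stands in for the Ethernet frame check sequence: a store-and-forward switch verifies the whole frame before forwarding, so a corrupted frame is dropped rather than propagated, whereas cut-through would already have begun forwarding it.

```python
import zlib

# Sketch of store-and-forward error checking. CRC-32 is used as a
# stand-in for the Ethernet frame check sequence (FCS).

def make_frame(payload: bytes) -> bytes:
    """Append a 4-byte checksum, as the sending NIC would append an FCS."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def store_and_forward(frame: bytes) -> bool:
    """Receive the whole frame, verify the checksum, forward only if valid."""
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs

good = make_frame(b"hello")
bad = good[:-1] + b"\x00"        # simulate corruption in transit
print(store_and_forward(good))   # True  (forwarded)
print(store_and_forward(bad))    # False (dropped; error not propagated)
```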
Hub and Switch Cabling
In addition to acting as a connection point for network devices, hubs and switches can be connected to create larger networks. This connection can be achieved through standard ports with a special cable or by using special ports with a standard cable.
As you learned in Chapter 3, the ports on a hub, switch, or router to which computer systems are attached are called medium-dependent interface crossover (MDI-X) ports. The crossover designation is derived from the fact that two of the wires within the connection are crossed so that the send signal wire on one device connects to the receive signal wire on the other. Because the ports are crossed internally, a standard or straight-through cable can be used to connect devices.
Another type of port, called a medium-dependent interface (MDI) port, is often included on a hub or switch to facilitate the connection of two switches or hubs. Because the hubs or switches are designed to see each other as an extension of the network, there is no need for the signal to be crossed. If a hub or switch does not have an MDI port, hubs or switches can be connected by using a cable between two MDI-X ports. The crossover cable uncrosses the internal crossing. Auto MDI-X ports on more modern network device interfaces can detect whether the connection would require a crossover, and automatically choose the MDI or MDI-X configuration to properly match the other end of the link.
A switch can work at either Layer 2 (the data link layer) or Layer 3 (the network layer) of the OSI model. When it filters traffic based on MAC addresses, it is called a Layer 2 switch because MAC addresses exist at Layer 2 of the OSI model; when it filters or routes traffic based on IP addresses, it operates as a Layer 3 switch.
Multilayer Switch
It used to be that networking devices and the functions they performed were separate. Bridges, routers, hubs, and more existed but were separate devices. Over time, the functions of some individual network devices became integrated into a single device. This is true of multilayer switches.
A multilayer switch is one that can operate at both Layer 2 and Layer 3 of the OSI model, which means that the multilayer device can operate as both a switch and a router (by operating at more than one layer, it is living up to the name of being “multilayer”). Also called a Layer 3 switch, the multilayer switch is a high-performance device that supports the same routing protocols that routers do. It is a regular switch directing traffic within the LAN; in addition, it can forward packets between subnets.
A content switch is another specialized device. A content switch is not as common on today’s networks, mostly due to cost. A content switch examines the network data it receives, decides where the content is intended to go, and forwards it. The content switch can identify the application that data is targeted for by associating it with a port. For example, if data uses the Simple Mail Transfer Protocol (SMTP) port, it could be forwarded to an SMTP server.
Content switches can help with load balancing because they can distribute requests across servers and target data to only the servers that need it, or distribute data between application servers. For example, if multiple mail servers are used, the content switch can distribute requests between the servers, thereby sharing the load evenly. This is why the content switch is sometimes called a load-balancing switch.
Hub
At the bottom of the networking devices food chain, so to speak, are hubs. Hubs are used in networks that use Ethernet twisted-pair cabling to connect devices. Hubs also can be joined to create larger networks. Hubs are simple devices that direct data packets to all devices connected to the hub, regardless of whether the packet is destined for the device. This makes them inefficient and can create a performance bottleneck on busy networks.
In its most basic form, a hub does nothing except provide a pathway for the electrical signals to travel along. Such a device is called a passive hub. Far more common nowadays is an active hub, which, as well as providing a path for the data signals, regenerates the signal before it forwards it to all the connected devices. In addition, an active hub can buffer data before forwarding it. However, a hub does not perform any processing on the data it forwards, nor does it perform any error checking.
Hubs come in a variety of shapes and sizes. Small hubs with five or eight connection ports are commonly called workgroup hubs. Others can accommodate larger numbers of devices (normally up to 32). These are called high-density devices.
A basic hub works at Layer 1 (the physical layer) of the OSI model.
Bridge
A bridge, as the name implies, connects two networks. Bridging is done at Layer 2 (the data link layer) of the OSI model and differs from routing in its simplicity. With routing, a packet is sent toward its specific destination, whereas with bridging, a packet is simply sent away from the network it does not belong on. In other words, if a packet does not belong on the local network segment, it is sent across the bridge with the assumption that it belongs there rather than here.
If one or more segments of the bridged network are wireless, the device is known as a wireless bridge.
DSL and Cable Modems
A traditional modem (short for modulator/demodulator) is a device that converts the digital signals generated by a computer into analog signals that can travel over conventional phone lines. The modem at the receiving end converts the signal back into a format that the computer can understand. While modems can be used as a means to connect to an ISP or as a mechanism for dialing up a LAN, they have faded in use in recent years in favor of faster technologies.
Modems can be internal add-in expansion cards or integrated with the motherboard, external devices that connect to a system’s serial or USB port, or proprietary devices designed for use on other devices, such as portables and handhelds.
A DSL (digital subscriber line) modem makes it possible for telephone lines to be used for high-speed Internet connections. Much faster than the old dial-up modems, DSL modems use dedicated subscriber lines, sending data back and forth across them and translating it into signals the connected devices can use.
Similarly, a cable modem has a coaxial connection for connecting to the provider’s outlet and an unshielded twisted-pair (UTP) connection for connecting directly to a system or to a hub, switch, or router. Cable providers often supply the cable modem, with a monthly rental agreement. Many cable providers offer free or low-cost installation of cable Internet service, which includes installing a network card in a PC. Some providers also do not charge for the network card. Figure 4.4 shows the results of a speed test from a cable modem.
FIGURE 4.4 Speed test results
Most cable modems offer the capability to support a higher-speed Ethernet connection for the home LAN than the Internet connection itself achieves. The actual speed of the connection can vary somewhat, depending on the utilization of the shared cable line in your area.
Access Point
The term access point (AP) can technically be used for either a wired or wireless connection, but in reality it is almost always associated only with a wireless-enabling device. A wireless access point (WAP) is a transmitter and receiver (transceiver) device used to create a wireless LAN (WLAN). WAPs typically are separate network devices with a built-in antenna, transmitter, and adapter. WAPs use the wireless infrastructure network mode to provide a connection point between WLANs and a wired Ethernet LAN. WAPs also usually have several ports, giving you a way to expand the network to support additional clients.
Depending on the size of the network, one or more WAPs might be required. Additional WAPs are used to allow access to more wireless clients and to expand the range of the wireless network. Each WAP is limited by a transmission range—the distance a client can be from a WAP and still obtain a usable signal. The actual distance depends on the wireless standard used and the obstructions and environmental conditions between the client and the WAP.
Saying that a WAP is used to extend a wired LAN to wireless clients does not give you the complete picture. A wireless AP today can provide different services in addition to just an access point. Today, the APs might provide many ports that can be used to easily increase the network’s size. Systems can be added to and removed from the network with no effect on other systems on the network. Also, many APs provide firewall capabilities and Dynamic Host Configuration Protocol (DHCP) service. When they are hooked up, they give client systems a private IP address and then prevent Internet traffic from accessing those systems. So, in effect, the AP is a switch, DHCP server, router, and firewall.
APs come in all shapes and sizes. Many are cheaper and are designed strictly for home or small office use. Such APs have low-powered antennas and limited expansion ports. Higher-end APs used for commercial purposes have high-powered antennas, enabling them to extend how far the wireless signal can travel.
An AP works at Layer 2 (the data link layer) of the OSI model.
Media Converter
When you have two dissimilar types of network media, a media converter is used to allow them to connect. They are sometimes referred to as couplers. Depending on the conversion being done, the converter can be a small device, barely larger than the connectors themselves, or a large device within a sizable chassis.
Reasons for not using the same media throughout the network, and thus reasons for needing a converter, range from cost (gradually moving from coax to fiber), to disparate segments (connecting the office to the factory), to the need to run particular media in a setting (for example, using fiber to reduce EMI problems in a small part of the building).
Figure 4.5 shows an example of a media converter. The one shown converts between 10/100/1000TX and 1000LX (with an SC-type connector).
FIGURE 4.5 A common media converter
The following converters are commonly implemented and are ones that CompTIA has previously included on the Network+ exam.
Voice Gateway
When telephone technology is married with information technology, the result is called telephony. Companies have moved en masse from landlines to voice over IP (VoIP) to save money. One of the biggest administrative issues with this move is security. When both data and VoIP travel on the same line, both are vulnerable in the case of an attack. Standard telephone systems should be replaced with a securable PBX.
A VoIP gateway, also sometimes called a PBX gateway, can be used to convert between the legacy telephony connection and a VoIP connection using Session Initiation Protocol (SIP). This is referred to as a “digital gateway” because the voice media are converted in the process.
Repeater
A repeater (also called a booster or wireless range extender) can amplify a wireless signal to make it stronger. This increases the distance that the client system can be placed from the access point and still be on the network. The extender needs to be set to the same channel as the AP for the repeater to take the transmission and repeat it. This is an effective strategy to increase wireless transmission distances.
Wireless LAN Controller
Wireless LAN controllers are often used with branch/remote office deployments for wireless authentication. When an AP boots, it authenticates with a controller before it can start working as an AP. This is often used with VLAN pooling, in which multiple interfaces are treated as a single entity (usually for load balancing).
Load Balancer
Network servers are the workhorses of the network. They are relied on to hold and distribute data, maintain backups, secure network communications, and more. This workload is often too much for a single server to maintain. This is where load balancing comes into play. Load balancing is a technique in which the workload is distributed among several servers. It can take networks to the next level by increasing network performance, reliability, and availability.
A load balancer can be either a hardware device or software specially configured to balance the load.
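A minimal round-robin sketch shows the core idea: requests are handed to servers in rotation so the workload is shared evenly. The server names are invented, and real load balancers also weigh health checks, session affinity, and server capacity.

```python
import itertools

# Sketch of round-robin load balancing. Server names are illustrative.
servers = ["web1", "web2", "web3"]
rotation = itertools.cycle(servers)

def pick_server() -> str:
    """Return the next server in the rotation for an incoming request."""
    return next(rotation)

# Six requests arrive; each server handles exactly two of them.
assignments = [pick_server() for _ in range(6)]
print(assignments)  # ['web1', 'web2', 'web3', 'web1', 'web2', 'web3']
```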
Proxy Server
Proxy servers typically are part of a firewall system. They have become so integrated with firewalls that the distinction between the two can sometimes be lost.
However, proxy servers perform a unique role in the network environment—a role that is separate from that of a firewall. For the purposes of this book, a proxy server is defined as a server that sits between a client computer and the Internet and looks at the web page requests the client sends. For example, if a client computer wants to access a web page, the request is sent to the proxy server rather than directly to the Internet. The proxy server first determines whether the request is intended for the Internet or for a web server locally. If the request is intended for the Internet, the proxy server sends the request as if it originated the request. When the Internet web server returns the information, the proxy server returns the information to the client. Although a delay might be induced by the extra step of going through the proxy server, the process is largely transparent to the client that originated the request. Because each request a client sends to the Internet is channeled through the proxy server, the proxy server can provide certain functionality over and above just forwarding requests.
One of the most notable extra features is that proxy servers can greatly improve network performance through a process called caching. When a caching proxy server answers a request for a web page, the server makes a copy of all or part of that page in its cache. Then, when the page is requested again, the proxy server answers the request from the cache rather than going back to the Internet. For example, if a client on a network requests the web page www.comptia.org, the proxy server can cache the contents of that web page. When a second client computer on the network attempts to access the same site, that client can grab it from the proxy server cache, and accessing the Internet is unnecessary. This greatly increases the response time to the client and can significantly reduce the bandwidth needed to fulfill client requests.
Nowadays, speed is everything, and the capability to quickly access information from the Internet is a crucial concern for some organizations. Proxy servers and their capability to cache web content accommodate this need for speed.
An example of this speed might be found in a classroom. If a teacher asks 30 students to access a specific Uniform Resource Locator (URL) without a proxy server, all 30 requests would be sent into cyberspace and subjected to delays or other issues that could arise. The classroom scene with a proxy server is quite different. Only one request of the 30 finds its way to the Internet; the other 29 are filled by the proxy server’s cache. Web page retrieval can be almost instantaneous.
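The classroom scenario can be sketched directly: the first request for a URL goes out to the origin, and the remaining 29 are answered from the cache. The fetch function below is a stand-in for a real HTTP retrieval.

```python
# Sketch of proxy caching: only cache misses reach the Internet.
cache: dict[str, str] = {}
origin_fetches = 0  # counts how often the proxy had to go out to the web

def fetch_from_origin(url: str) -> str:
    """Stand-in for an actual HTTP request to the origin web server."""
    global origin_fetches
    origin_fetches += 1
    return f"<html>content of {url}</html>"

def proxy_get(url: str) -> str:
    if url not in cache:                  # cache miss: go to the Internet
        cache[url] = fetch_from_origin(url)
    return cache[url]                     # cache hit: answer locally

# 30 students request the same page; only the first request leaves the LAN.
for _ in range(30):
    proxy_get("https://www.comptia.org")
print(origin_fetches)  # 1
```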
However, this caching has a potential drawback. When you retrieve a page directly from the Internet, you get the latest information, but this is not always so when the information comes from a cache. For some web pages, it is necessary to go directly to the Internet to ensure that the information is up to date. Some proxy servers can update and renew web pages, but they are always one step behind.
The second key feature of proxy servers is allowing network administrators to filter client requests. If a server administrator wants to block access to certain websites, a proxy server enables this control, making it easy to completely disallow access to some websites. This is okay, but what if it were necessary to block numerous websites? In this case, maintaining proxy servers gets a bit more complicated.
Determining which websites users can or cannot access is usually done through something called an access control list (ACL). Chapter 3 discussed how an ACL can be used to provide rules for which port numbers or IP addresses are allowed access. An ACL can also be a list of allowed or nonallowed websites; as you might imagine, compiling such a list can be a monumental task. Given that millions of websites exist, and new ones are created daily, how can you target and disallow access to the “questionable” ones? One approach is to reverse the situation and deny access to all pages except those that appear in an “allowed” list. This approach has high administrative overhead and can greatly limit the productive benefits available from Internet access.
Understandably, it is impossible to maintain a list that contains the locations of all sites with questionable content. In fairness, that is not what proxy servers were designed to do. However, by maintaining a list, proxy servers can provide a greater level of control than an open system. Along the way, proxy servers can make the retrieval of web pages far more efficient.
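The two filtering approaches described above can be sketched side by side. The hostnames are invented; the point is only the logic difference between a denylist (block listed sites, allow the rest) and an allowlist (allow listed sites, block the rest).

```python
# Sketch of ACL-based request filtering in a proxy. Hostnames are made up.
DENYLIST = {"badsite.example", "games.example"}
ALLOWLIST = {"comptia.org", "intranet.example"}

def denylist_allows(host: str) -> bool:
    """Permissive policy: allow everything not explicitly blocked."""
    return host not in DENYLIST

def allowlist_allows(host: str) -> bool:
    """Restrictive policy: block everything not explicitly allowed."""
    return host in ALLOWLIST

print(denylist_allows("comptia.org"))      # True  (not on the denylist)
print(allowlist_allows("random.example"))  # False (not explicitly allowed)
```

The allowlist variant is the "deny all except" approach the text describes, with its high administrative overhead: every legitimate site must be added by hand.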
A reverse proxy server is one that resides near the web servers and responds to requests. These are often used for load-balancing purposes because each proxy can cache information from a number of servers.
VPN Concentrators and Headends
A VPN concentrator can be used to increase remote-access security. Sitting between the VPN client and the VPN server, the concentrator establishes the secure connection (tunnel) between the sending and receiving network devices, authenticates the users using the tunnel, and encrypts the data traveling through it.
VPN concentrators add an additional level to VPN security. Depending on the exact concentrator, they can do the following:
Create the tunnel.
Authenticate users who want to use the tunnel.
Encrypt and decrypt data.
Regulate and monitor data transfer across the tunnel.
Control inbound and outbound traffic as a tunnel endpoint or router.
The VPN concentrator invokes various standard protocols to accomplish these functions.
A VPN headend (or head-end) is a server that receives the incoming signal and then decodes/encodes it and sends it on.
Networked Devices
One of the fastest-growing areas in networking isn't necessarily adding more users but adding more devices. Each "smart" device has the ability to monitor or perform some task and report back the status of the data it has collected, or its own status. Most of these devices require IP addresses and function like normal nodes, but some network only through Bluetooth or NFC. Table 4.1 lists some of the devices commonly being added to the network today.
TABLE 4.1 Commonly Networked Devices
| Device | Description | Key Points |
| --- | --- | --- |
| Telephones | Utilizing voice over IP (VoIP), the cost of traditional telephone service is reduced to a fraction of its old cost. | In the world of VoIP, an endpoint is any final destination for a voice call. |
| Printer | The printer was one of the first devices to be networked. Connecting the printer to the network makes it possible to share it with all authorized users. | Networked printers need to be monitored for security concerns. Many high-speed printers spool print jobs, and the spooler can be a weakness for an unauthorized person looking for sensitive information. |
| Physical access control devices | These devices include door locks, gates, and other similar devices. | They greatly reduce the cost of manual labor, such as guards at every location. |
| Cameras | Cameras allow for monitoring areas remotely. | The capability to pan, tilt, and zoom (PTZ) is important in camera selection. |
| HVAC sensors | These sensors monitor heating, ventilation, and air conditioning systems. | Smart HVAC sensors can work in conjunction with other sensors. For example, a smoke detector can go off and notify the furnace to immediately shut off the fan to prevent spreading smoke throughout the building. |
| IoT | The Internet of Things (IoT) includes such devices as refrigerators, smart speakers, smart thermostats, and smart doorbells. | The acceptance and adoption of these items in the home market is predicted to grow so quickly that the number of sensors in use will outnumber the number of users within the next decade. |
| ICS/SCADA | Industrial control systems (ICS) is a catchall term for the sensors and controls used in industry. A subset of this is SCADA (supervisory control and data acquisition), which refers to equipment often used to manage automated factory equipment, dams, power generators, and similar equipment. | An emerging area of growth is in-vehicle computing systems. Automobiles tend to have sophisticated systems, such as computers complete with hard drives and GPS devices. Similar always-on sensing devices are used in industrial environments for automation, safety, and efficiency. |
Networking Architecture
Explain basic corporate and datacenter network architecture.
The networking devices discussed previously in this chapter are used to build networks. For this particular objective, CompTIA wants you to be aware of some of the architecture and design elements of the network. Whether you’re putting together a datacenter or a corporate office, planning should be involved, and no network should be allowed to haphazardly sprout without management and oversight.
Three-Tiered Architecture
To improve system performance, as well as to improve security, it is possible to implement a tiered systems model. This is often referred to as an n-tiered model because the n- can be one of several different numbers.
If we were looking at a database, for example: with a one-tier model, or single-tier environment, the database and the application exist on a single system. This is common on desktop systems running a standalone database. Early UNIX implementations also worked in this manner; each user would sign on to a terminal and run a dedicated application that accessed the data.
With a two-tier architecture, the client workstation runs an application that communicates with a database running on a different server. This common implementation works well for many applications.
With a three-tiered architecture, security is enhanced. In this model, the end user is effectively isolated from the database by the introduction of a middle-tier server. This server accepts requests from clients, evaluates them, and then sends them on to the database server for processing. The database server sends the data back to the middle-tier server, which then sends it to the client system. Becoming common in business today, this approach adds both capability and complexity.
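The request flow through the middle tier can be sketched as follows. Everything here, the data, the validation rule, and the function names, is illustrative; the point is that the client never touches the data tier directly.

```python
# Sketch of the three-tier flow: client -> middle tier -> database.
DATABASE = {"alice": "engineering", "bob": "sales"}  # stand-in data tier

def database_query(user: str) -> str:
    """Data tier: answers queries, reachable only from the middle tier."""
    return DATABASE.get(user, "unknown")

def middle_tier(request: dict) -> str:
    """Middle tier: evaluates each request before passing it on."""
    if not request.get("authenticated"):
        return "rejected"           # never forwarded to the database
    return database_query(request["user"])

print(middle_tier({"user": "alice", "authenticated": True}))   # engineering
print(middle_tier({"user": "alice", "authenticated": False}))  # rejected
```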
While these examples involve database tiering, the same approach can be taken with devices such as routers, switches, and other servers. In a three-tiered model of routing and switching, the three tiers are the core, the distribution/aggregation layer, and the access/edge layer. The following sections walk through each of these layers.
Core Layer
The core layer is the backbone: the place where switching and routing meet (switching ends, routing begins). It provides high-speed, highly redundant forwarding services to move packets between distribution-layer devices in different regions of the network. The core switches and routers are the most powerful in the enterprise in terms of raw forwarding power and manage the highest-speed connections (such as 100 Gigabit Ethernet). Core switches may also incorporate internal firewall capability, helping with segmentation and control of traffic moving from one part of the network to another.
Distribution/Aggregation Layer
The distribution layer, or aggregation layer (sometimes called the workgroup layer), is the layer where management takes place: QoS policies are managed, filtering is done, and routing occurs here. Distribution-layer devices can be used to manage individual branch-office WAN connections and are considered smart, usually offering a larger feature set than the switches used at the access/edge layer. Low latency and large MAC address tables are important features for switches at this level because they aggregate traffic from thousands of users rather than hundreds (as access/edge switches do).
Access/Edge Layer
Switches that allow end users and servers to connect to the enterprise are called access switches or edge switches, and the layer where they operate in the three-tiered model is known as the access layer, or edge layer. Devices at this layer may or may not provide Layer 3 switching services; the traditional focus is on minimizing the cost of each provisioned Ethernet port (known as "cost per port") and providing high port density. Because the focus is on connecting client nodes, such as workstations, to the network, this is sometimes called the desktop layer.
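The three-layer forwarding behavior just described can be modeled in a short sketch. This is an illustrative simplification (the topology and switch names are invented): traffic between hosts on the same access switch stays local, traffic within one distribution block turns around at the distribution layer, and traffic between blocks crosses the core.

```python
# Hypothetical model of the access -> distribution -> core hierarchy.

def forwarding_path(src_access, dst_access, topology):
    """Return the list of switches traffic crosses between two access switches."""
    src_dist = topology["uplink"][src_access]
    dst_dist = topology["uplink"][dst_access]
    if src_access == dst_access:
        return [src_access]                        # stays on the edge switch
    if src_dist == dst_dist:
        return [src_access, src_dist, dst_access]  # turns around at distribution
    # Different distribution blocks: cross the core backbone.
    return [src_access, src_dist, "core1", dst_dist, dst_access]


# Invented example topology: two distribution blocks, three access switches.
topology = {"uplink": {"acc1": "dist1", "acc2": "dist1", "acc3": "dist2"}}
print(forwarding_path("acc1", "acc2", topology))  # same distribution block
print(forwarding_path("acc1", "acc3", topology))  # crosses the core
```

The sketch makes the design intent visible: the core is touched only when traffic must move between regions of the network, which is why core devices need the most raw forwarding power.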
Software-Defined Networking
Software-defined networking (SDN) is a dynamic approach to computer networking intended to allow administrators to get around the static limitations of physical architecture associated with traditional networks. They can do so through the implementation of technologies such as the Cisco Systems Open Network Environment.
The goal of SDN is not only to add dynamic capabilities to the network but also to reduce IT costs through implementation of cloud architectures. SDN combines network and application services into centralized platforms that can automate provisioning and configuration of the entire infrastructure.
The SDN architecture, from the top down, consists of the application layer, control layer, and infrastructure layer. CompTIA also adds the management plane as an objective, and a discussion of each of these components follows.
Application Layer
The application layer is the top of the SDN stack, and this is where load balancers, firewalls, intrusion detection, and other standard network applications are located. While a standard (non-SDN) network would use a specialized appliance for each of these functions, with an SDN network, an application is used in place of a physical appliance.
Control Layer
The control layer is the place where the SDN controller resides; the controller is software that manages policies and the flow of traffic throughout the network. This controller can be thought of as the brains behind SDN, making it all possible. Applications communicate with the controller through a northbound interface, and the controller communicates with the switches using southbound interfaces.
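The controller's position between the two interfaces can be sketched as follows. This is a toy illustration, not any real controller's API (the class, method names, and rule format are all invented): an application states intent through the northbound interface, and the controller translates it into flow entries pushed to every switch through the southbound interface.

```python
# Hypothetical sketch of an SDN controller bridging northbound and southbound.

class SdnController:
    def __init__(self, switches):
        self.switches = switches  # infrastructure-layer devices (flow tables)

    def northbound_request(self, intent):
        """Application layer calls this, e.g. ('drop', '10.0.0.5')."""
        action, host = intent
        rule = {"match": host, "action": action}
        self._southbound_push(rule)
        return rule

    def _southbound_push(self, rule):
        # Southbound interface: program every switch's flow table.
        for flow_table in self.switches.values():
            flow_table.append(rule)


switches = {"sw1": [], "sw2": []}  # each switch modeled as a bare flow table
ctrl = SdnController(switches)
ctrl.northbound_request(("drop", "10.0.0.5"))
print(switches["sw1"])  # [{'match': '10.0.0.5', 'action': 'drop'}]
```

The application never touches a switch directly; it only expresses what it wants, and the controller decides how to program the infrastructure. That separation is the core idea of SDN.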
Infrastructure Layer
The physical switch devices themselves reside at the infrastructure layer. When the architecture is broken into "planes," this layer is also known as the data plane (or forwarding plane) because its devices carry out the actual forwarding of traffic, as directed by the control layer above.
Management Plane
With SDN, the management plane allows administrators to see their devices and traffic flows and react as needed to manage data plane behavior. This can be done automatically through configuration apps that can, for example, add more bandwidth if it looks as if edge components are getting congested. The management plane manages and monitors processes across all layers of the network stack.
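The "react as needed" behavior described above can be sketched as a simple policy function. The threshold, link names, and doubling policy here are invented for illustration; real management-plane tooling would be far more sophisticated.

```python
# Hedged sketch of management-plane congestion handling: when an edge
# link's utilization crosses a threshold, provision more bandwidth.

CONGESTION_THRESHOLD = 0.80  # assumed: react at 80% utilization

def react_to_congestion(link_utilization, provisioned_mbps):
    """Return new bandwidth allocations, doubling any congested link."""
    updated = {}
    for link, util in link_utilization.items():
        if util >= CONGESTION_THRESHOLD:
            updated[link] = provisioned_mbps[link] * 2  # add bandwidth
        else:
            updated[link] = provisioned_mbps[link]      # leave as-is
    return updated


utilization = {"edge-1": 0.92, "edge-2": 0.35}
bandwidth = {"edge-1": 1000, "edge-2": 1000}
print(react_to_congestion(utilization, bandwidth))
# {'edge-1': 2000, 'edge-2': 1000}
```

The point of the sketch is the feedback loop: the management plane observes the data plane's behavior and adjusts provisioning automatically, rather than waiting for an administrator to intervene.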
Spine and Leaf
In an earlier section, we discussed the possibility of tiered models. A two-tier model that Cisco promotes for switches is the spine-and-leaf model. In this model, the spine is the backbone of the network, just as it would be in a skeleton, and is responsible for interconnecting all the leaf switches in a full-mesh topology. Thanks to the mesh, every leaf is connected to every spine, and the path is chosen randomly so that the traffic load is evenly distributed among the top-tier switches. If one of the top-tier switches were to fail, there would be only a slight degradation in performance throughout the datacenter.
Because of the design of this model, no matter which leaf switch is connected to a server, the traffic always has to cross the same number of devices to get to another server. This keeps latency at a steady level.
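Both properties — random spine selection and constant hop count — fall out of the full mesh, as this small sketch shows. The switch names and topology are invented for illustration.

```python
# Sketch of spine-and-leaf path selection: every leaf connects to every
# spine, so any leaf-to-leaf path crosses exactly one (randomly chosen) spine.

import random

SPINES = ["spine1", "spine2", "spine3"]
LEAVES = ["leaf1", "leaf2", "leaf3", "leaf4"]

def leaf_to_leaf_path(src_leaf, dst_leaf):
    """Pick a path via a randomly chosen spine, spreading the traffic load."""
    if src_leaf == dst_leaf:
        return [src_leaf]
    return [src_leaf, random.choice(SPINES), dst_leaf]


# No matter which leaves we pick, the path always crosses three devices,
# which is what keeps latency at a steady level.
path = leaf_to_leaf_path("leaf1", "leaf4")
print(path, "->", len(path), "devices")

# If one spine fails, the remaining spines still reach every leaf:
# capacity drops by one-third, but connectivity is unaffected.
surviving = [s for s in SPINES if s != "spine2"]
print(len(surviving), "of", len(SPINES), "spines still carry traffic")
```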
When top-of-rack (ToR) switching is incorporated into the network architecture, servers located within the same rack are connected to an in-rack network switch (the ToR switch), which is in turn connected to aggregation switches (usually via fiber cabling). The big advantage of this setup is that the servers within each rack can be connected with cheaper copper cabling; only the uplinks from each rack need to be fiber.
Traffic Flows
Traffic flows within a datacenter typically occur within the framework of one of two models: East-West or North-South. The names may not be the most intuitive, but the East-West traffic model means that data is flowing among devices within a specific datacenter while North-South means that data is flowing into the datacenter (from a system physically outside the datacenter) or out of it (to a system physically outside the datacenter).
The naming convention comes from the way diagrams are drawn: data staying within the datacenter is traditionally drawn on the same horizontal line (East-to-West), while data leaving or entering is typically drawn on a vertical line (North-to-South). With the increase in virtualization being implemented at so many levels, the East-West traffic has increased in recent years.
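The East-West/North-South distinction reduces to a simple test: are both endpoints inside the datacenter? The sketch below illustrates this with an assumed address prefix; the `10.20.0.0/16` range is an invented example, not a standard.

```python
# Illustrative classifier: a flow is East-West only if both endpoints
# sit inside the datacenter's address space; otherwise it is North-South.

import ipaddress

DATACENTER_PREFIX = ipaddress.ip_network("10.20.0.0/16")  # assumed example range

def classify_flow(src_ip, dst_ip):
    src_inside = ipaddress.ip_address(src_ip) in DATACENTER_PREFIX
    dst_inside = ipaddress.ip_address(dst_ip) in DATACENTER_PREFIX
    return "East-West" if (src_inside and dst_inside) else "North-South"


print(classify_flow("10.20.1.5", "10.20.7.9"))    # East-West
print(classify_flow("10.20.1.5", "198.51.100.7")) # North-South
```

Virtualization drives East-West growth because virtual machines on different hosts constantly exchange traffic that, by this test, never leaves the datacenter.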
Datacenter Location Types
One of the biggest questions a network administrator today can face is where to store the data. At one point in time, this question was a no-brainer: servers were kept close at hand so they could be rebooted and serviced regularly. Today, however, that choice is not such an easy one. The cloud, virtualization, software-defined networking, and many other factors have combined to offer several options in which cost often becomes one of the biggest components.
An on-premises datacenter can be thought of as the old, traditional approach: the data and the servers are kept in house. One alternative is colocation, in which several companies place their "servers" in a shared space. By renting space in a third-party facility, a company can often gain advantages in connectivity speed and, possibly, technical support. We placed "servers" in quotation marks because the provider often offers virtual servers rather than dedicated machines for each client, enabling companies to grow without relying on physical hardware.
Incidentally, any remote and autonomous office, regardless of the number of users who may work from it, is known as a branch office. This point is important because it may be an easy decision to keep the datacenter on-premises at headquarters, but network administrators need to factor in how to best support branch offices as well. The situation could easily be that while on-premises works best at headquarters, all branch offices are supported by colocation sites.
Storage-Area Networks
When it comes to data storage in the cloud, encryption is one of the best ways to protect it (keeping it from being of value to unauthorized parties), and VPN routing and forwarding can help. Backups should be performed regularly (and encrypted and stored in safe locations), and access control should be a priority.
The consumer retains the ultimate responsibility for compliance. Per NIST SP 800-144:
The main issue centers on the risks associated with moving important applications or data from within the confines of the organization’s computing center to that of another organization (i.e., a public cloud), which is readily available for use by the general public. The responsibilities of both the organization and the cloud provider vary depending on the service model. Reducing cost and increasing efficiency are primary motivations for moving towards a public cloud, but relinquishing responsibility for security should not be. Ultimately, the organization is accountable for the choice of public cloud and the security and privacy of the outsourced service.
For more information, see http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-144.pdf.
Shared storage can be done on storage-area networks (SANs), network-attached storage (NAS), and so on; the virtual machine sees only a “physical disk.” With clustered storage, you can use multiple devices to increase performance. A handful of technologies exist in this realm, and the following are those that you need to know for the Network+ exam.
iSCSI
The Small Computer Systems Interface (SCSI) standard has long been the language of storage. Internet Small Computer Systems Interface (iSCSI) expands this through Ethernet, allowing IP to be used to send SCSI commands.
Logical unit numbers (LUNs) came from the SCSI world and carry over, acting as unique identifiers for devices. Both NAS and SAN use “targets” that hold up to eight devices.
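The target/LUN addressing scheme can be illustrated with a small sketch. The IQN string and volume labels below are invented for illustration; this models only the naming, not a real initiator API.

```python
# Illustrative model of iSCSI addressing: each target (named by an IQN)
# exposes numbered logical units (LUNs) that identify individual devices.

targets = {
    "iqn.2021-10.com.example:storage.disk1": {
        0: "boot volume (20 GB)",
        1: "data volume (500 GB)",
    },
}

def resolve(iqn, lun):
    """Return the device behind a given target/LUN pair, if any."""
    return targets.get(iqn, {}).get(lun)


print(resolve("iqn.2021-10.com.example:storage.disk1", 1))  # data volume (500 GB)
print(resolve("iqn.2021-10.com.example:storage.disk1", 5))  # None (no such LUN)
```

An initiator addresses storage by the pair (target, LUN) rather than by a physical bus position, which is what lets SCSI semantics travel over an IP network.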
Using iSCSI for a virtual environment gives users the benefits of a file system without the difficulty of setting up Fibre Channel. Because iSCSI works both at the hypervisor level and in the guest operating system, the partition-size rules of the operating system apply rather than those of the virtual environment (which are usually more restrictive).
The disadvantage of iSCSI is that users can run into IP-related problems if configuration is not carefully monitored.
Fibre Channel and FCoE
Instead of using an older technology and adhering to legacy standards, Fibre Channel (FC) offers a higher level of performance than anything else. It utilizes the Fibre Channel Protocol (FCP) to do what needs to be done, and Fibre Channel over Ethernet (FCoE) can be used in high-speed (10 Gbps and higher) implementations.
The big advantage of Fibre Channel is its scalability. FCoE encapsulates FC over the Ethernet portions of connectivity, making it easy to add into an existing network. As such, FCoE is an extension to FC intended to extend the scalability and efficiency associated with Fibre Channel.
Network-Attached Storage
Storage is always a big issue, and a storage-area network is often the best answer. Unfortunately, a SAN can be costly and difficult to implement and maintain. That is where network-attached storage (NAS) comes in. NAS is easier to deploy than a SAN and runs over TCP/IP. It offers file-level access, and a client sees the shared storage as a file server.
What’s Next?
For the Network+ exam, and for routinely working with an existing network or implementing a new one, you need to identify the characteristics of network media and their associated cabling. Chapter 5, “Cabling Solutions and Issues,” focuses on the media and connectors used in today’s networks and what you are likely to find in wiring closets.