Wednesday, December 15, 2010

LAN Design Types and Models

Network Bulls
www.networkbulls.com

LANs can be classified as large-building LANs, campus LANs, or small and remote LANs. The large-building LAN typically contains a major data center with high-speed access and floor communications closets; the large-building LAN is usually the headquarters in larger companies. Campus LANs provide connectivity between buildings on a campus. Redundancy is usually a requirement in large-building and campus LAN deployments. Small and remote LANs provide connectivity to remote offices with a relatively small number of nodes.
Campus design factors include the following categories:
  • Network application characteristics
  • Infrastructure device characteristics
  • Environmental characteristics
Applications are defined by the business, and the network must be able to support them. Applications may require high bandwidth or be time-sensitive. Infrastructure devices also influence the design: decisions on switched or routed architectures and port limitations constrain what is possible. The actual physical distances affect the design as well; the selection of copper or fiber media may be driven by environmental or distance requirements. The following sections show some sample LAN types. Table 3-8 summarizes the different application types.

Table 3-8. Application Types
Application Type                 Description
Peer-to-peer                     Includes instant messaging, file sharing, IP phone calls, and videoconferencing.
Client-local servers             Servers are located on the same segment as the clients, or close by.
Client/server farms              Mail, file, and database servers. Access is reliable and controlled.
Client-Enterprise Edge servers   External servers such as SMTP, web, public, and e-commerce servers.

Best Practices for Hierarchical Layers

Each layer of the hierarchical architecture contains special considerations. The following sections describe best practices for each of the three layers of the hierarchical architecture: access, distribution, and core.
Access Layer Best Practices
When designing the building access layer, you must take into consideration the number of users or ports required to size up the LAN switch. Connectivity speed for each host should be considered. Hosts might be connected using various technologies such as Fast Ethernet, Gigabit Ethernet, or port channels. The planned VLANs enter into the design.
Performance in the access layer is also important. Redundancy and QoS features should be considered.
The following are recommended best practices for the building access layer:
  • Limit VLANs to a single closet when possible to provide the most deterministic and highly available topology.
  • Use RPVST+ if STP is required. It provides the best convergence.
  • Set VLAN Dynamic Trunking Protocol (DTP) to desirable/desirable with negotiation on.
  • Manually prune unused VLANs to avoid broadcast propagation.
  • Use VTP transparent mode, because there is little need for a common VLAN database in hierarchical networks.
  • Disable trunking on host ports, because it is not necessary. Doing so provides more security and speeds up PortFast.
  • Consider implementing routing in the access layer to provide fast convergence and Layer 3 load balancing.
  • Use the switchport host commands on server and end-user ports to enable PortFast and disable channeling on these ports.
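The access-layer practices above can be sketched in a short Catalyst IOS configuration. This is a minimal illustration only; the VLAN numbers and interface ranges are hypothetical:

```
! Rapid PVST+ for fast STP convergence; VTP transparent (no shared VLAN database)
spanning-tree mode rapid-pvst
vtp mode transparent
!
! Host ports: the switchport host macro sets access mode, enables PortFast,
! and disables channeling in one step
interface range GigabitEthernet1/0/1 - 24
 switchport access vlan 10
 switchport host
!
! Uplink to distribution: manually prune unused VLANs from the trunk
interface GigabitEthernet1/0/49
 description Uplink to distribution
 switchport mode trunk
 switchport trunk allowed vlan 10,20
```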
Distribution Layer Best Practices
As shown in Figure 3-6, the distribution layer aggregates all closet switches and connects to the core layer. Design considerations for the distribution layer include providing wire-speed performance on all ports, link redundancy, and infrastructure services.

Figure 3-6. Distribution Layer

The distribution layer should not be limited on performance. Links to the core must be able to support the bandwidth used by the aggregate access layer switches. Redundant links from the access switches to the distribution layer and from the distribution layer to the core layer allow for high availability in the event of a link failure. Infrastructure services include QoS configuration, security, and policy enforcement. Access lists are configured in the distribution layer.
The following are recommended best practices at the distribution layer:
  • Use first-hop redundancy protocols. Hot Standby Router Protocol (HSRP) or Gateway Load Balancing Protocol (GLBP) should be used if you implement Layer 2 links between the Layer 2 access switches and the distribution layer.
  • Use Layer 3 links between the distribution and core switches to allow for fast convergence and load balancing.
  • Build Layer 3 triangles, not squares, as shown in Figure 3-7.

    Figure 3-7. Layer 3 Triangles
  • Use the distribution switches to connect Layer 2 VLANs that span multiple access layer switches.
  • Summarize routes from the distribution to the core of the network to reduce routing overhead.
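As an illustration of the first-hop redundancy and summarization practices above, the following sketch shows an HSRP gateway and a routed uplink on a hypothetical distribution switch (the addresses, VLAN number, and EIGRP AS number are examples only):

```
! HSRP virtual gateway for an access-layer VLAN
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
!
! Routed (Layer 3) link to the core with a summary advertised upstream
interface TenGigabitEthernet1/0/1
 description Routed uplink to core
 no switchport
 ip address 10.0.0.1 255.255.255.252
 ip summary-address eigrp 100 10.1.0.0 255.255.0.0
```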
Core Layer Best Practices
Depending on the network's size, a core layer may or may not be needed. For larger networks, building distribution switches are aggregated to the core. This provides high-speed connectivity to the server farm/data center and to the Enterprise Edge (to the WAN and the Internet).
Figure 3-8 shows the criticality of the core switches. The core must provide high-speed switching with redundant paths for high availability to all the distribution points. The core must support gigabit speeds and data and voice integration.

Figure 3-8. Core Switches

The following are best practices for the campus core:
  • Reduce the switch peering by using redundant triangle connections between switches.
  • Use routing to provide a topology with no Layer 2 loops, which occur in Layer 2 designs that rely on Spanning Tree Protocol.
  • Use Layer 3 switches on the core that provide intelligent services that Layer 2 switches do not support.
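A routed core avoids Layer 2 loops entirely by using point-to-point Layer 3 links between switches. The following sketch shows the idea, with hypothetical addressing and an assumed OSPF process:

```
! Point-to-point routed link to a distribution switch (no STP involved)
interface TenGigabitEthernet1/0/1
 no switchport
 ip address 10.0.1.1 255.255.255.252
!
router ospf 1
 network 10.0.0.0 0.0.255.255 area 0
```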

Large-Building LANs

Large-building LANs are segmented by floors or departments. The building-access component serves one or more departments or floors. The building-distribution component serves one or more building-access components. Campus and building backbone devices connect the data center, building-distribution components, and the Enterprise Edge-distribution component. The access layer typically uses Layer 2 switches to contain costs, with more expensive Layer 3 switches in the distribution layer to provide policy enforcement. Current best practice is to also deploy Layer 3 switches in the campus and building backbone. Figure 3-9 shows a typical large-building design.

Figure 3-9. Large-Building LAN Design

Each floor can have more than 200 users. Following a hierarchical model of building access, building distribution, and core, Fast Ethernet nodes can connect to the Layer 2 switches in the communications closet. Fast Ethernet or Gigabit Ethernet uplink ports from closet switches connect back to one or two (for redundancy) distribution switches. Distribution switches can provide connectivity to server farms that provide business applications, DHCP, DNS, intranet, and other services.

Enterprise Campus LANs

A campus LAN connects two or more buildings within a local geographic area using a high-bandwidth LAN media backbone. Usually the enterprise owns the medium (copper or fiber). High-speed switching devices minimize latency. In today's networks, Gigabit Ethernet campus backbones are the standard for new installations. In Figure 3-10, Layer 3 switches with Gigabit Ethernet media connect campus buildings.

Figure 3-10. Campus LAN

Ensure that you implement a hierarchical composite design on the campus LAN and that you assign network layer addressing to control broadcasts on the networks. Each building should have addressing assigned in such a way as to maximize address summarization. Apply contiguous subnets to buildings at the bit boundary to apply summarization and ease the design. Campus networks can support high-bandwidth applications such as videoconferencing. Remember to use Layer 3 switches with high-switching capabilities in the campus-backbone design. In smaller installations, it might be desirable to collapse the building-distribution component into the campus backbone. An increasingly viable alternative is to provide building access and distribution on a single device selected from among the smaller Layer 3 switches now available.
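To make the bit-boundary summarization concrete: if a building is assigned the contiguous subnets 10.1.0.0/24 through 10.1.3.0/24, they can be advertised as a single 10.1.0.0/22 summary. A brief OSPF sketch (the addressing, process number, and area number are hypothetical):

```
! Building 1 holds 10.1.0.0/24 - 10.1.3.0/24; advertise one /22 summary at the ABR
router ospf 1
 area 1 range 10.1.0.0 255.255.252.0
```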
Edge Distribution
For large campus LANs, the Edge Distribution module provides additional security between the campus LAN and the Enterprise Edge (WAN, Internet, and VPNs). The edge distribution protects the campus from the following threats:
  • IP spoofing— The edge distribution switches protect the core from spoofing of IP addresses.
  • Unauthorized access— Controls access to the network core.
  • Network reconnaissance— Filtering of network discovery packets to prevent discovery from external networks.
  • Packet sniffers— The edge distribution separates the edge's broadcast domains from the campus, preventing possible network packet captures.

Medium Site LANs

Medium-sized LANs contain 200 to 1000 devices. Usually the distribution and core layers are collapsed in the medium-sized network. Access switches are still connected to both distribution/core switches to provide redundancy. Figure 3-11 shows the medium campus LAN.

Figure 3-11. Medium Campus LAN

Small and Remote Site LANs

Small and remote sites usually connect to the corporate network via a small router. The LAN service is provided by a small LAN switch. The router keeps broadcasts off the WAN circuit and forwards packets that require services from the corporate network. You can place a server at the small or remote site to provide DHCP and other local applications such as a backup domain controller and DNS; if not, you must configure the router to forward DHCP broadcasts and other types of services. As the site grows, you will need the structure provided by the Enterprise Composite Network model. Figure 3-12 shows a typical architecture of a remote LAN.

Figure 3-12. Remote Office LAN
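When no local server provides DHCP, the remote-site router can relay DHCP broadcasts to a corporate server with an IP helper address. A minimal sketch (addresses are hypothetical):

```
interface GigabitEthernet0/0
 description Remote-site LAN
 ip address 10.20.1.1 255.255.255.0
 ip helper-address 10.1.5.10   ! relay DHCP (and other UDP) broadcasts to corporate
```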

Server-Farm Module

The server-farm or data-center module provides high-speed access to servers for the campus networks. You can attach servers to switches via Gigabit Ethernet or 10 Gigabit Ethernet. Some campus deployments might need EtherChannel technology to meet traffic requirements. Figure 3-13 shows an example of a server-farm module for a small network. Servers are connected via Fast Ethernet or Fast EtherChannel.

Figure 3-13. Server Farm

The server-farm switches connect via redundant uplink ports to the core switches. The largest deployments might find it useful to build the data center hierarchically, using access and distribution layers of network devices.
Server distribution switches are used in larger networks. Access control lists and QoS features are implemented on the server distribution switches to protect the servers and services and to enforce network policies.
Server Connectivity Options
Servers can be connected in three primary options:
  • Single NIC
  • Dual NIC EtherChannel
  • Content switching
Servers connected with a single NIC run at Fast Ethernet or Gigabit Ethernet full-duplex speeds but have no redundancy. Servers requiring redundancy can be connected with dual NICs using switch EtherChannel.
Advanced redundancy solutions use content switches that front-end multiple servers. This provides redundancy and load balancing per user request.
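A dual-NIC server attachment can be sketched as a two-port EtherChannel on the access switch. This assumes the server's NIC teaming mode matches a static (`mode on`) channel; the interface and VLAN numbers are hypothetical:

```
interface Port-channel1
 switchport mode access
 switchport access vlan 30
!
interface range GigabitEthernet1/0/1 - 2
 description Dual-NIC server team
 switchport mode access
 switchport access vlan 30
 channel-group 1 mode on
```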

Enterprise Data Center Infrastructure

Data centers (DC) contain different types of server technologies, including standalone servers, blade servers, mainframes, clustered servers, and virtual servers.
Figure 3-14 shows the Enterprise DC. The DC access layer must provide the port density to support the servers, provide high-performance/low-latency Layer 2 switching, and support both dual-homed and single-homed servers. The preferred design is to contain Layer 2 at the access layer and Layer 3 at the distribution layer, although some solutions push Layer 3 links down to the access layer. Blade chassis with integrated switches have become a popular solution; each chassis houses 16 server blades, each logically connected within the chassis to two access switches.

Figure 3-14. Enterprise Data Center

The DC aggregation layer (distribution layer) aggregates traffic to the core. Deployed on the aggregation layer are
  • Load balancers to provide load balancing to multiple servers
  • SSL offloading devices to terminate SSL sessions
  • Firewalls to control and filter access
  • Intrusion detection devices to detect network attacks

Campus LAN Quality of Service Considerations

For the access layer of the campus LAN, you can classify and mark frames or packets to apply quality of service (QoS) policies in the distribution or at the Enterprise Edge. Classification is a fundamental building block of QoS and involves recognizing and distinguishing between different traffic streams. For example, you distinguish between HTTP/HTTPS, FTP, and VoIP traffic. Without classification, all traffic would be treated the same.
Marking sets certain bits in a packet or frame that has been classified. Marking is also called coloring or tagging. Layer 2 has two methods to mark frames for CoS:
  • Inter-Switch Link (ISL)
  • IEEE 802.1p/802.1Q
The IEEE 802.1D-1998 standard describes IEEE 802.1p traffic class expediting.
Both methods provide 3 bits for marking frames. The Cisco ISL is a proprietary trunk-encapsulation method for carrying VLANs over Fast Ethernet or Gigabit Ethernet interfaces.
ISL encapsulates each frame with a tag that identifies the VLAN it belongs to. As shown in Figure 3-15, ISL adds 30 bytes around the Fast Ethernet frame: a 26-byte header and a 4-byte CRC trailer. The header includes a 15-bit VLAN ID that identifies each VLAN, and the User field in the header provides 3 bits for the CoS.

Figure 3-15. ISL Frame

The IEEE 802.1Q standard trunks VLANs over Fast Ethernet and Gigabit Ethernet interfaces, and you can use it in a multivendor environment. The 802.1Q standard specifies a single instance of STP for all VLANs in the trunk. Like ISL, IEEE 802.1Q uses a tag on each frame with a VLAN identifier; unlike ISL, the 802.1Q tag is internal to the frame. Figure 3-16 shows the IEEE 802.1Q frame. IEEE 802.1Q also supports the IEEE 802.1p priority standard, which is included in the 802.1D-1998 specification: a 3-bit priority field in the 802.1Q tag carries the CoS.

Figure 3-16. IEEE 802.1Q Frame
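A trunk carrying 802.1Q tags (and therefore the 3-bit 802.1p priority field) can be configured as in the following sketch; the interface and VLAN numbers are hypothetical, and the encapsulation command is needed only on switches that also support ISL:

```
interface GigabitEthernet1/0/48
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,110
```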

The preferred location to mark traffic is as close as possible to the source. Figure 3-17 shows a segment of a network with IP phones. Most workstations send packets with the CoS or IP precedence (ToS) bits set to 0. If a workstation supports IEEE 802.1Q/p, it can mark its own packets, but the IP phone reclassifies traffic from the attached PC back to a CoS/ToS of 0. VoIP traffic from the phone is sent with the Layer 2 CoS and Layer 3 IP precedence set to 5. With Differentiated Services Code Point (DSCP) marking, VoIP traffic is set to Expedited Forwarding (EF), binary value 101110 (decimal 46, hexadecimal 2E).

Figure 3-17. Marking of Frames or Packets

As shown in Figure 3-17, switches' capabilities vary in the access layer. If the switches in this layer are capable, configure them to accept the markings or remap them. The advanced switches in the distribution layer can mark traffic, accept the CoS/ToS markings, or remap the CoS/ToS values to different markings.
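On Catalyst platforms that support the older `mls qos` command set, extending trust to a detected IP phone while leaving PC traffic unmarked might look like the following sketch (the interface, access VLAN, and voice VLAN are hypothetical):

```
mls qos
!
interface GigabitEthernet1/0/5
 switchport access vlan 10
 switchport voice vlan 110
 mls qos trust cos                  ! accept CoS markings on this port...
 mls qos trust device cisco-phone   ! ...but only when a Cisco phone is detected
```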

Multicast Traffic Considerations

Internet Group Management Protocol (IGMP) is the protocol used in multicast implementations between end hosts and the local router or Layer 3 switch. RFC 1112 describes the first version of IGMP, and RFC 2236 describes IGMP version 2 (IGMPv2). IP hosts use IGMP to report their multicast group memberships to routers. IGMP messages use IP protocol number 2; they are limited to the local interface and are not routed.
RFC 3376 describes IGMP version 3 (IGMPv3). IGMPv3 provides the extensions required to support Source-Specific Multicast (SSM) and is designed to be backward compatible with both prior versions of IGMP. All versions of IGMP are covered in Chapter 12, "Border Gateway Protocol, Route Manipulation, and IP Multicast."
When campus LANs carry multicast traffic, end hosts that do not participate in multicast groups might get flooded with unwanted traffic. Two solutions are CGMP and IGMP snooping.
CGMP
Cisco Group Management Protocol (CGMP) is a Cisco-proprietary protocol implemented to control multicast traffic at Layer 2. Because a Layer 2 switch is unaware of Layer 3 IGMP messages, it cannot keep multicast packets from being sent to all ports.
As shown in Figure 3-18, with CGMP, the LAN switch can speak with the IGMP router to find out the MAC addresses of the hosts that want to receive the multicast packets. You must also enable the router to speak CGMP with the LAN switches. With CGMP, switches distribute multicast sessions to the switch ports that have group members.

Figure 3-18. CGMP

When a CGMP-enabled router receives an IGMP report, it processes the report and then sends a CGMP message to the switch. The switch can then forward the multicast messages to the port with the host receiving multicast traffic. CGMP Fast-Leave processing allows the switch to detect IGMP Version 2 leave messages sent by hosts on any of the supervisor engine module ports. When the IGMPv2 leave message is sent, the switch can then disable multicast for the port.
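Enabling CGMP on the router side requires multicast routing and PIM on the interface. A brief IOS sketch (the interface and PIM mode are examples only):

```
ip multicast-routing
!
interface GigabitEthernet0/1
 ip pim sparse-dense-mode
 ip cgmp
```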
IGMP Snooping
IGMP snooping is another way for switches to control multicast traffic at Layer 2. It can be used instead of CGMP. With IGMP snooping, switches listen to IGMP messages exchanged between hosts and routers. When a host sends an IGMP Membership Report (join) message toward the router, the switch adds the host's port to the multicast group and permits that port to receive multicast traffic. The port is removed from the multicast group when the host sends an IGMP leave message. The disadvantage of IGMP snooping is that the switch must examine every IGMP control message, which can increase CPU utilization.
