
of the packet. Control plane and data plane access control lists (ACLs) are supported on all ports to ensure proper policing and marking on a per-packet basis.
After a packet goes through classification, policing, and marking, it is assigned to the appropriate queue before exiting the switch. The Catalyst 3550 supports four egress queues per port, which allows the network administrator to be more discriminating and specific in assigning priorities to the various applications on the LAN. At egress, the switch performs scheduling and congestion control. Scheduling is the process that determines the order in which the queues are serviced. The switches support Weighted Round Robin (WRR) scheduling and strict priority queuing. The WRR algorithm ensures that lower-priority packets are not entirely starved of bandwidth and are serviced without compromising the priority settings administered by the network manager. Strict priority queuing ensures that the highest-priority packets are always serviced first, ahead of all other traffic, while the other three queues are serviced using WRR scheduling.
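As a concrete illustration, the following minimal Python sketch models this combination of strict priority and WRR service across four queues. It is a toy model under assumed queue numbering and weights, not the switch's actual implementation.

```python
from collections import deque

class EgressPort:
    """Toy model of one egress port: queue 4 is strict priority,
    queues 1-3 share the remaining bandwidth via WRR."""

    def __init__(self, weights=(1, 2, 4)):      # assumed WRR weights for queues 1-3
        self.queues = {q: deque() for q in (1, 2, 3, 4)}
        self.weights = dict(zip((1, 2, 3), weights))

    def enqueue(self, queue_id, packet):
        self.queues[queue_id].append(packet)

    def service(self):
        """Yield packets in transmit order until all queues drain."""
        while any(self.queues.values()):
            # Strict priority: the expedite queue is always drained first.
            while self.queues[4]:
                yield self.queues[4].popleft()
            # One WRR round: each queue sends up to its weight per round.
            for q in (3, 2, 1):
                for _ in range(self.weights[q]):
                    if self.queues[q]:
                        yield self.queues[q].popleft()

port = EgressPort()
for i in range(4):
    port.enqueue(1, f"ftp-{i}")     # low priority, weight 1
    port.enqueue(3, f"erp-{i}")     # higher priority, weight 4
port.enqueue(4, "voice-0")          # strict priority (e.g. IP telephony)
print(list(port.service()))
# ['voice-0', 'erp-0', 'erp-1', 'erp-2', 'erp-3', 'ftp-0', 'ftp-1', 'ftp-2', 'ftp-3']
```

Voice is transmitted immediately, ERP traffic is serviced four times as often as FTP, and FTP still drains rather than being starved.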
In conjunction with scheduling, the Catalyst 3550 Gigabit Ethernet ports support congestion control via Weighted Random Early Detection (WRED). WRED avoids congestion by setting thresholds at which packets are dropped before the queue overflows.
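The drop decision can be modeled as a probability that ramps up between a minimum and a maximum queue-depth threshold, with more aggressive thresholds for lower-priority traffic. The sketch below is a generic RED/WRED illustration with assumed threshold values, not the switch's actual implementation.

```python
import random

def wred_accept(queue_depth, min_th, max_th, max_drop_prob=0.1):
    """Generic RED-style drop decision: always accept below min_th,
    always drop at or above max_th, and drop with linearly
    increasing probability in between."""
    if queue_depth < min_th:
        return True
    if queue_depth >= max_th:
        return False
    drop_prob = max_drop_prob * (queue_depth - min_th) / (max_th - min_th)
    return random.random() >= drop_prob

# "Weighted": lower-priority traffic gets smaller thresholds, so it is
# dropped first as the queue fills. Threshold values are illustrative.
thresholds = {"high-priority": (40, 60), "low-priority": (20, 40)}
depth = 35
for prio, (lo, hi) in thresholds.items():
    print(prio, wred_accept(depth, lo, hi))
```

At a queue depth of 35, high-priority traffic is still always accepted, while low-priority traffic is already subject to random early drops.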
These features allow network administrators to prioritize mission-critical and/or bandwidth-intensive traffic, such as ERP (Oracle, SAP, and so on), voice (IP telephony traffic), and CAD/CAM, over less time-sensitive applications such as FTP or e-mail (SMTP). For example, it would be highly undesirable for a large file download destined to one port on a wiring closet switch to degrade the quality of voice traffic, such as by increasing its latency, destined to another port on the same switch. This condition is avoided by ensuring that voice traffic is properly classified and prioritized throughout the network. Other applications, such as web browsing, can be treated as low priority and handled on a best-effort basis.
The Cisco Catalyst 3550 is capable of performing rate limiting via its support of the Cisco Committed Information Rate (CIR) functionality. Through CIR, bandwidth can be guaranteed in increments as low as 8 kbps. Bandwidth can be allocated based on several criteria, including MAC source address, MAC destination address, IP source address, IP destination address, and TCP/UDP port number. Bandwidth allocation is essential in network environments requiring service-level agreements or when it is necessary for the network manager to control the bandwidth given to certain users.
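A committed-rate policer of this kind is commonly modeled as a token bucket. The sketch below is a generic Python illustration of that mechanism with assumed parameters (a 64-kbps CIR, a multiple of the 8-kbps granularity, and an 8-KB burst); it is not the 3550's internal logic.

```python
class TokenBucketPolicer:
    """Generic token-bucket model of a committed-information-rate policer."""

    def __init__(self, cir_bps, burst_bytes):
        self.cir_bps = cir_bps      # committed rate in bits per second
        self.burst = burst_bytes    # bucket depth in bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, packet_bytes, now):
        """Return True if the packet is in profile; out-of-profile packets
        would be dropped or marked down, per the configured exceed action."""
        # Refill tokens for the time elapsed since the last packet.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.cir_bps / 8)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

# 1500-byte packets every 50 ms against a 64-kbps CIR: the initial burst
# conforms, after which the policer throttles the flow to the committed rate.
policer = TokenBucketPolicer(cir_bps=64_000, burst_bytes=8_192)
print([policer.conforms(1_500, t * 0.05) for t in range(8)])
```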
Each Catalyst 3550 10/100 port supports 8 aggregate or individual ingress policers and 8 aggregate egress policers. Each Catalyst 3550 Gigabit Ethernet port supports 128 aggregate or individual policers and 8 aggregate egress policers. This gives the network administrator very granular control of LAN bandwidth.
Network Scalability through
High-Performance IP Routing
With hardware-based IP routing and the Enhanced Multilayer Software Image, the Catalyst 3550 switches deliver high-performance dynamic IP routing. The Cisco Express Forwarding (CEF)-based routing architecture allows for increased scalability and performance. This architecture allows for very high-speed lookups while also ensuring the stability and scalability necessary to meet future requirements. In addition to dynamic IP unicast routing, the Catalyst 3550 Series is well equipped for networks requiring multicast support. Protocol Independent Multicast (PIM) routing and Internet Group Management Protocol (IGMP) snooping in hardware make the Catalyst 3550 Series switches ideal for intensive multicast environments.
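To illustrate what a forwarding lookup involves, the sketch below models longest-prefix match over a small, hypothetical forwarding information base (FIB) in Python. It is a linear-scan toy; CEF uses optimized lookup structures in hardware, but the matching rule is the same.

```python
import ipaddress

# Hypothetical FIB entries: (prefix, next hop). Addresses are illustrative.
fib = [
    (ipaddress.ip_network("10.0.0.0/8"), "uplink-a"),
    (ipaddress.ip_network("10.1.0.0/16"), "uplink-b"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gw"),
]

def lookup(dst):
    """Longest-prefix match: the most specific covering route wins."""
    dst = ipaddress.ip_address(dst)
    best = max((net for net, _ in fib if dst in net),
               key=lambda net: net.prefixlen)
    return next(hop for net, hop in fib if net == best)

print(lookup("10.1.2.3"))   # -> uplink-b (the /16 beats the /8)
print(lookup("192.0.2.1"))  # -> default-gw (only the default route covers it)
```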
These switches offer several advantages that improve network performance when used as a stackable wiring closet switch or as a top-of-the-stack wiring closet aggregator switch. For example, implementing routed uplinks from the top of the stack improves network availability by enabling faster failover and simplifies the Spanning-Tree Protocol topology by terminating all Spanning-Tree Protocol instances at the aggregator switch. If one of the uplinks fails, quicker failover to the redundant uplink can be achieved via a scalable routing protocol such as Open Shortest Path First (OSPF) or Enhanced Interior Gateway Routing Protocol (EIGRP) rather than relying on standard Spanning-Tree Protocol convergence. Redirection of a packet after a link failure via a routing protocol results in faster network convergence than a solution that uses Layer 2 Spanning-Tree Protocol.
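As a deliberately simplified illustration of the failover idea, route selection can be viewed as choosing the lowest-cost uplink that is still up. The Python sketch below uses hypothetical uplink names and costs and stands in for, rather than implements, OSPF or EIGRP.

```python
# Hypothetical uplinks with routing-protocol costs; values are illustrative.
uplinks = {"uplink-a": {"cost": 10, "up": True},
           "uplink-b": {"cost": 20, "up": True}}

def best_uplink():
    """Choose the lowest-cost uplink that is still operational."""
    live = {name: u["cost"] for name, u in uplinks.items() if u["up"]}
    return min(live, key=live.get) if live else None

print(best_uplink())                # -> uplink-a (preferred, lower cost)
uplinks["uplink-a"]["up"] = False   # the primary uplink fails
print(best_uplink())                # -> uplink-b (redundant path takes over)
```

As soon as the routing protocol detects the failure, traffic shifts to the redundant uplink, without waiting for a Layer 2 spanning-tree recalculation.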