Bridging and Provider Bridging
It's the beginning of the year and I'm writing about Service Provider technologies again. Last year, I wrote about Metro Ethernet Services and how to configure E-Line VPWS and E-LAN VPLS, as well as MPLS-TE FRR Link Protection. This time, I want to write about the issues that Service Providers face in their networks, and companies in their datacenters, when the number of layer 2 networks grows too large and there is no way to segment them any further.
Bridging is a technology we all know well: it lets two devices communicate, as long as they are in the same subnet and the same broadcast domain, and the source, destination and switching devices learn MAC addresses in order to forward layer 2 frames. On the other hand, if we want to segment and segregate traffic over the same physical Ethernet network, we have to use the 802.1Q (VLAN) standard. However, 802.1Q can run into scalability issues because the 4-byte VLAN header carries a 12-bit VLAN ID, which can only address 4096 networks.
Figure: VLAN Header
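To make that 12-bit limit concrete, here is a minimal Python sketch, purely illustrative and using only the standard library, that builds the 4-byte 802.1Q tag (TPID plus TCI) and shows that the VLAN ID field can only hold 4096 values. The dot1q_tag() helper is a hypothetical name used just for this example.

import struct

TPID_DOT1Q = 0x8100  # EtherType value that identifies an 802.1Q tag

def dot1q_tag(vlan_id: int, pcp: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q VLAN tag: TPID (16 bits) + TCI (16 bits)."""
    if not 0 <= vlan_id <= 0xFFF:                 # the VLAN ID is only 12 bits wide
        raise ValueError("VLAN ID must fit in 12 bits (0-4095)")
    tci = (pcp << 13) | (dei << 12) | vlan_id     # TCI layout: PCP(3) | DEI(1) | VID(12)
    return struct.pack("!HH", TPID_DOT1Q, tci)

print(dot1q_tag(100).hex())   # 81000064 -> VLAN 100
print(2 ** 12)                # 4096 possible VLAN IDs, the scalability ceiling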
Virtual LANs (VLANs) are a good tool for segmentation and segregation in small networks, where we can also rely on the Spanning Tree Protocol (STP) for loop avoidance, which gives our networks reliability. However, if we are working on or designing a Cloud or Service Provider network, the business requirements may demand more than 4096 VLANs due to multi-tenant and multilayer architectures. This is where the 802.1ad (Provider Bridging) standard plays an important role: it overcomes the VLAN limitation in highly scalable networks such as Service Providers or Public and Private Clouds.
Figure: Bridges, VLANs and Provider Bridges
The Provider Bridging standard is known as QinQ because it stacks two VLAN tags; as a result, we can have up to roughly 16 million networks (4096 x 4096). In this way, the Service Provider can dedicate a VLAN to each customer, or several VLANs to each of them, to offer services like voice, Internet or VPN. However, we are still talking about layer 2 networks: the customer source and destination MAC addresses travel in the frame with Provider Bridging just as they do with traditional bridging, so the layer 2 switches of the Service Provider, or of the datacenter, have to learn every customer MAC address.
Figure: QinQ VLAN Tagged frame
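As a minimal sketch of the stacking idea, the snippet below builds the 8 bytes of an 802.1ad S-tag followed by the customer's 802.1Q C-tag (PCP and DEI are left at zero for simplicity), and then prints the size of the combined tag space. The qinq_tags() helper is a hypothetical name for illustration only.

import struct

TPID_DOT1AD = 0x88A8   # outer S-tag TPID defined by 802.1ad
TPID_DOT1Q = 0x8100    # inner C-tag TPID, the customer's original 802.1Q tag

def qinq_tags(s_vid: int, c_vid: int) -> bytes:
    """Build the stacked S-tag + C-tag bytes used by Provider Bridging (QinQ)."""
    for vid in (s_vid, c_vid):
        if not 0 <= vid <= 0xFFF:
            raise ValueError("each VLAN ID is still only 12 bits")
    s_tag = struct.pack("!HH", TPID_DOT1AD, s_vid)  # service (provider) tag
    c_tag = struct.pack("!HH", TPID_DOT1Q, c_vid)   # customer tag
    return s_tag + c_tag

print(qinq_tags(200, 100).hex())   # 88a800c881000064 -> S-VID 200, C-VID 100
print(4096 * 4096)                 # 16,777,216 possible S-VID/C-VID pairs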
The Provider Bridging standard is a good choice to overcome the 4096 VLAN limitation, but it requires every switch to know every source and destination MAC address, which is a scalability challenge. Why? Because in a CLOS/Leaf-and-Spine architecture, the leaf nodes or Top of Rack (ToR) switches push and pop the additional VLAN tag and also perform MAC address learning, aging and flooding; they then send frames to the spine nodes and into the backbone network, where the core switches "see" the original customer MAC header and have to store every source and destination MAC address. This runs into the limits of MAC address tables, because the content-addressable memories (CAM) that hold them are finite and expensive.
Figure: CLOS/Leaf and Spine architecture
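A quick back-of-the-envelope sketch in Python (the tenant and host counts and the CAM table size below are illustrative assumptions, not measurements or vendor figures) shows why the core runs out of MAC table space long before the provider runs out of S-VID/C-VID combinations.

# Illustrative assumptions: 5,000 tenants, 200 hosts each, and a core switch
# whose MAC/CAM table holds 128K entries.
tenants = 5_000
hosts_per_tenant = 200
cam_table_size = 128 * 1024

customer_macs = tenants * hosts_per_tenant   # every core switch must learn all of them
print(f"customer MAC addresses seen in the core: {customer_macs:,}")
print(f"CAM table capacity:                      {cam_table_size:,}")
print(f"overflow factor: {customer_macs / cam_table_size:.1f}x")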
In upcoming posts we'll see how to overcome this QinQ limitation with SPBM, TRILL or FabricPath, where customer MAC addresses are encapsulated in a different layer 2 or layer 3 header.
Regards my friends, drop me a line with the first thing you are thinking.