Brocade VCS Fabric Overview

Good afternoon. My name is Kelvin Franklin, and I'm a senior manager and product specialist for Ethernet fabrics. I'll be talking to you today about Ethernet fabric technology, and you're going to see a common theme across the different product sets: my technology focuses on the data center. I hope a lot of you saw the announcements we made at our tech day around what we're doing in the whole Ethernet fabric space. I'm trying to make sure this is the only marketing slide you see; hopefully we'll be able to get into some good discussion, and I also have engineering here, so for anything I say that you're not sure about, I can look back there and get a confirming nod. I'll talk about what Ethernet fabrics are all about, then get into Brocade's Ethernet fabric technology, talk about what's coming and where all of this is going, and also give you an idea of some of the ways we see our customers using this technology, which usually facilitates a lot of questions.

All right, let's take a few steps back before we even start talking about Ethernet fabrics. Often when you're releasing a new technology to the market, some of what you're trying to achieve gets lost; people like short blurbs, and you lose some of the subtlety of what you're trying to do with the technology. So I wanted to first talk about what a fabric is. A fabric is a high-performance, reliable, connectionless interconnect. For those of you who've been in tech for a long time, think about other types of interconnects such as InfiniBand or Fibre Channel: it's about making sure you have an underlying framework that you can use as a reliable transport for the upper-layer things that are going on. So a fabric is inherently an interconnect network that's high-performance and very reliable
regardless of the protocols you're going to run on top of it.

The reason I say this is that Ethernet inherently does not work that way. Some people may say Ethernet is reliable, but it's not as reliable as technologies like InfiniBand or Fibre Channel, where you also get high performance. For the types of things our customers are doing within their data centers today, they need the same functionality in an Ethernet network that they would traditionally find in InfiniBand or Fibre Channel. So when we started building this technology, that's what we wanted to achieve: an Ethernet network with the same type of reliability, scalability, and low latency that you would find in those other interconnect technologies. Any questions about that?

We also want an Ethernet fabric to give you the same type of switch operation you'd find in a SAN, where the switches make forwarding decisions independently. They have a full view of the whole topology, so they don't depend on anyone else: if there's some type of outage or a lost link, the switches can still forward the traffic, so long as there are redundant paths in the network. Then things like layer 2 and layer 3 become services that you overlay on top of this high-performance interconnect. Our first instantiation of this, when we first shipped Ethernet fabric technology, offered a layer 2 service. With our latest announcement, Network OS 3.0, we now support layer 3 on top of this high-performance interconnect, which we call an Ethernet fabric. Actually, I think the name would have been better as "fabric-based Ethernet," because if you understand what a fabric traditionally does, you can see that what we're doing is trying to make Ethernet operate in the same fashion.

This is also our first announcement of service-oriented fabrics. Just as there are layer 2 and layer 3 services that we support on the fabric today, moving forward there will be additional services that we support on top of the fabric, and I'll get into more of that as we talk. The fabric is not built by linking layer 2 or layer 3 tables together at all; the fabric is built using underlying mechanisms like TRILL, so that every switch has the full topology. Then if you want to do point-to-point or point-to-multipoint layer 2 communication across it, you can do that, or if you want to turn on layer 3 functionality anywhere in the fabric, you can do that. That gives you a lot more flexibility and a lot more scalability. We don't believe in the concept of IP-based fabrics: if you have an IP-based fabric, then when you want to do layer 2, how do you do it? You have to look at some other protocol, like VXLAN, that lets you encapsulate layer 2 within layer 3, and for us that adds a lot more complexity to the story. How about we just let you do layer 2 where you want to do it, layer 3 where you want to do it, and then follow on with other types of services in the future that let you do other things within the fabric? Makes sense?

So our fabric technology is called Brocade VCS — Virtual Cluster Switching. What we focus on with this technology is non-stop networking.
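The switch-independence idea described above — every switch holding the full topology and recomputing paths on its own after a link loss, the way a SAN fabric does — can be sketched as a link-state shortest-path computation. The four-switch topology and the switch names below are invented for illustration; this is a sketch of the principle, not Brocade's implementation.

```python
import heapq

def shortest_paths(topology, source):
    """Dijkstra over a link-state database: every switch holds the full
    topology, so each one can compute its own next hops independently --
    no coordination with other switches is needed after a link failure."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, cost in topology.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(pq, (nd, neighbor))
    return dist

# Hypothetical four-switch fabric with redundant paths.
fabric = {
    "sw1": {"sw2": 1, "sw3": 1},
    "sw2": {"sw1": 1, "sw4": 1},
    "sw3": {"sw1": 1, "sw4": 1},
    "sw4": {"sw2": 1, "sw3": 1},
}
dist = shortest_paths(fabric, "sw1")

# Fail the sw1-sw2 link: sw1 recomputes on its own and still reaches sw4
# over the redundant path through sw3.
degraded = {n: dict(links) for n, links in fabric.items()}
del degraded["sw1"]["sw2"]
del degraded["sw2"]["sw1"]
dist_after = shortest_paths(degraded, "sw1")
```

The point of the sketch is the last two lines: no messages to other switches, just a local recomputation over the surviving topology.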
Like I said, that first component is a reliable interconnect. On top of that, in everything we're doing we want to make sure we're not adding any additional complexity to your network, and we want to let you automate wherever possible — I'll talk about some of that automation.

[Audience] You talked about bringing your baseline fabric up to the levels of buffer control and availability that you have in your Fibre Channel and InfiniBand fabrics. With non-stop networking, are you guaranteeing that a frame will not get dropped?

I can't guarantee that a frame will never get dropped. What I'm going to give you is as close to InfiniBand as you'll find on the Ethernet side. The way we're designing the network, if there are drops, it's going to be a very limited number. Because we have that inherent intellectual property coming from the storage side of the house, we leverage a lot of it and bring it over to the Ethernet side, and that allows us to be very successful on the lossless point.

The other thing here, and I think it's a big feat, is the evolutionary nature of this. One of the keys to any new technology you're going to deploy in a network is giving people a migration story. You cannot say that the way they did things in the past all needs to change, that they need to rip out everything and start over with a very expensive forklift upgrade just to roll out this new technology. What we want to do, and what has been really well received by the customers I talk to, is give you a small fabric that you can start with. You can implement it in your network, it'll interoperate with all of the additional layers you already have, it'll look similar to what you're used to, and you can start testing out the benefits of the fabric and grow from there. A lot of this is based on 10 Gigabit Ethernet at the base, growing to 40 gig and 100 gig in the future. Not many of my customers are going to swap out the hundreds of one-gig servers they have and implement ten gig on all of them; they're going to swap out some of those servers, start rolling out some ten gig, and put 40 gig in where it's needed. So this is a nice migration story with a lower upfront cost to get into the Ethernet fabric.

So let's look at how we build this fabric and how we make it work. At layer 1, first of all, we have something called Brocade trunking. If you're familiar with our storage area network switches, it comes over from that side; we now have it on the Ethernet side, where the actual load balancing is done in the ASICs, in hardware. What we're actually doing is packet spraying across the ISLs between the switches, so we get a more efficient distribution of packets across your ISLs. Think about how this would be done in traditional products: if you're doing load balancing, you're usually doing flow-based balancing across your ISLs, and with flow-based balancing you run the risk of all of your flows, or a majority of your flows, going down a single ISL. You're not efficiently using the bandwidth between your switches, and those ISLs are expensive ten-gig ports. So what we do is packet spraying, and the hashing algorithm we use is based on a seven-tuple lookup for the protocols that support it — your source and destination MAC, source and destination IP, source and destination port, that type of information — so we're much less likely to run into a hash collision. Now we can efficiently spray the traffic across those ISLs.
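The seven-tuple idea above can be sketched as follows. The exact fields and hash function the ASIC uses aren't public detail here, so this is only an illustration of why a wider key spreads flows more evenly: more distinguishing fields mean fewer collisions, while any given flow still maps deterministically to one link.

```python
import hashlib

def pick_isl(flow, num_isls):
    """Hash a seven-tuple-style key to choose an ISL for a flow.

    `flow` is a dict with hypothetical field names; a real ASIC would
    extract these fields from the packet headers in hardware."""
    key = "|".join(str(flow[f]) for f in (
        "src_mac", "dst_mac", "src_ip", "dst_ip",
        "src_port", "dst_port", "protocol"))
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_isls

flow = {"src_mac": "00:11:22:33:44:55", "dst_mac": "66:77:88:99:aa:bb",
        "src_ip": "10.0.0.1", "dst_ip": "10.0.1.1",
        "src_port": 49152, "dst_port": 443, "protocol": "tcp"}

link = pick_isl(flow, 4)   # same flow always lands on the same link
```

Changing any one of the seven fields (a different source port, say) generally selects a different link, which is what distributes many flows across the trunk.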
Okay, how do we make it more lossless? These are the types of things that help with that: we can avoid congestion on the ISLs themselves. Congestion is still a reality, though — you will encounter it — so how do you handle it? You can also do some proactive things, because in traditional Ethernet networks you run into flow-control issues very easily, and if you can get around some of those, it becomes easier to deal with the congestion that will come up. So our Ethernet fabric of course supports DCB, and you can start using DCB to prioritize certain traffic over other traffic within the network, and then pause specific lanes of traffic so that priority traffic still gets delivered.

[Audience] So a flow will still go down a single link?

No, the flow is actually going to be sprayed across. We put an additional header on it so that we can disassemble the flow and then put it back together. That comes into play when you look at how you're going to do equal-cost multipathing across the network: what paths do I have available, and how am I going to spread these different flows across the entire network? The ISLs themselves have this frame-striping, packet-spraying capability, and then to get a flow across the fabric you have the ability to look at the flows and hash them in a deterministic manner.

[Audience] So it's less about the ISL and more about the whole fabric — the fabric has a whole set of devices that can reassemble or reorder it? You'd use TRILL to spread it across multiple link groups, and if that link group itself had multiple physical links — sorry, I'm not trying to say it's spray-and-pray across three or four legs — you'd have your higher-level protocols load balancing across logical links, and then get down to your individual trunks.

Right, exactly — and then you start spraying across.

The other thing here is layer 3: layer 1 with the trunking, layer 2 with TRILL, and now layer 3. When you turn on the layer 3 services, we actually allow you to turn on a layer 3 service across multiple different switches within the Ethernet fabric, and then we'll load balance across those switches. If you have servers down here connecting to their default gateway, they still see one default gateway, and the fabric itself is intelligent enough to load balance the flows going across. On the other side, if you're connected to your layer 3 core, the core will see multiple instances of OSPF, for example — I know you had a question like that earlier. The most important thing is that you now have a more scalable layer 3 mechanism. Another networking vendor — not Cisco — introduced a fabric, and in my opinion the way they integrated layer 3 into it is to suck all the traffic to one side and bring it down.

[Audience] How are you actually managing it? If I'm scaling out my layer 3 gateways based on the needs of my traffic, how are you managing that horizontal scale-out of your layer 3 services?

From a management standpoint, are you asking how we distribute it and make sure it's efficient?

[Audience] Yeah — obviously other manufacturers that aren't Cisco have failed in this endeavor.

My engineer just stood up, so I'm going to let him handle this question.

[Engineer] One use case is that you have multiple L3 domains in your data center, in your fabric, and you're pushing top-of-rack switching to distribute L3. A host on subnet A needs to talk to subnet B
and they're in the same rack. Great — should that traffic get sucked all the way across the fabric, or should it be routed locally? No matter where you instantiate the L3 services in your fabric, you should be able to take that shortest path.

[Audience] Are you saying that no matter where I instantiate an L3 service, I can provide that VLAN-to-VLAN routing across any single switch in my fabric?

You can turn on the L3 service at whichever switches make sense, and it will be consistent across all of them. Logically it's going to look like one router — that's the beauty of the architecture, as far as how the topology gets presented. It's about giving the admin the flexibility to decide how to implement the routing service in the network.

No presentation would be good without some good, sexy boxes, so here is our family of products. We've been shipping this technology for over 18 months. You've already heard about the VDX 6710, which is our 1U one-gig switch with ten-gig uplinks. We have our 6720s, which are 24-port and 60-port ten-gig devices, and our 6730s, which add Fibre Channel connectivity, so if you want to connect into an existing Fibre Channel network you can do that from the 6730. What we've announced recently are our 8770s, our chassis-based VDX products. These let you scale at a very high level — I'll talk about some of the details on the coming slides — to upwards of 384 ports of ten gig, or 96 ports of 40 gig, in a chassis.

What's also important to know is that the ASICs in these boxes are not ASICs we just thought up; they actually come over from the SAN side — a new generation of ASICs based on our SAN technology. And from a routing standpoint, we're marrying in our carrier-class routing code: if you're familiar with our flagship router, the MLXe, with the NetIron code set that runs in a lot of exchange points around the world, a subset of that routing code is coming over to this family, so you have a robust routing platform included with this Ethernet fabric technology.

From a cloud standpoint, this is for cloud environments that need to scale up very quickly. You plug a switch into the fabric and the fabric uses it — it's very intelligent, so you don't have to go in and say "this is an ISL port" or "this is an access port." You plug it in between two switches and the switches say, "hey, this is my ISL; I'll use that bandwidth automatically," with all of the mechanisms I talked about earlier. Ultra-low latency: you'll see some of the numbers on the coming slides, but it's 3.6 microseconds of latency, any port to any port, on the chassis. Power is a big deal here too: we see anywhere from 20 to 30 percent savings on power, and for a lot of data centers that's significant, because you're paying for the power for these devices in your network. We're hearing a lot of customers say, "thank you for making it easier to go to the facilities people and say: look, we're going to reduce the power but increase the bandwidth."

The last thing is storage: of course we're going to support whatever type of storage you use within your network. One of the use cases I'm going to show you is a high-performance storage network. A lot of the customers I'm talking to today are building high-performance Ethernet-attached storage networks, and because we're one of the leading storage networking vendors, they come to us and say: you helped us in the past to build these great Fibre Channel networks; help me build an Ethernet-attached storage network — whether it's iSCSI, FCoE, or even NAS-based — using the same type of technology you used in the past. Any questions here?

So, the chassis supports up to 384 ports of ten gig today, and 96 ports of 40 gig. We've designed this so that you can scale out to over 8,000 ports within your Ethernet fabric. Today there's a four-slot and an eight-slot, and in the future there will be a 16-slot chassis. I talked about ten and 40 gig; the platform has the capacity to support 100-gig capabilities, up to 4 terabits per slot, so the capacity is there and we can scale this platform. We're building it for a ten-year life cycle if possible, so you can put this chassis in place and grow as time goes by and your network needs increase.

[Audience] Are the 96 ports of 40 gig a slot replacement, or are you supporting 10-to-40-gig aggregating transceivers?

The 40 gig is a separate 12-port QSFP line card, and the 48-port card is 48 ports of ten gig. Today we don't support the ability to break a QSFP out into four tens. The goal here is to let people scale: you can start off with maybe a couple of devices, and in the future scale to higher numbers of devices and ports as the needs of your fabric grow.

Now I'm going to talk about that ASIC technology, because there's been a lot of talk in the market that everything you can do today, you can do with just merchant silicon. I don't think that's true. There are some good things you can do with merchant silicon bought off the shelf, but our ASIC, and the design we've done with it, has allowed us to achieve all of the numbers I've been talking about — the low latency, the ability to scale, the high-performance interconnect, and the reliability I was talking about.
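As a quick sanity check on the chassis numbers above, assume the eight-slot chassis populated with the 48-port ten-gig or 12-port 40-gig line cards from the Q&A (the 12-port figure is inferred from 96 ports across eight slots):

```python
# Stated: up to 384 x 10 GbE or 96 x 40 GbE per chassis, 4 Tb/s per slot.
slots = 8                    # the eight-slot chassis
ports_10g = slots * 48       # a 48-port ten-gig line card in every slot
ports_40g = slots * 12       # a 12-port QSFP 40-gig line card in every slot

# A fully loaded 40-gig card presents 480 Gb/s of front-panel bandwidth,
# well under the stated 4 Tb/s per-slot capacity -- that gap is the
# headroom the speaker points to for 100 GbE later.
card_40g_gbps = 12 * 40
headroom_gbps = 4 * 1000 - card_40g_gbps
```

The numbers are internally consistent: 8 × 48 = 384 ten-gig ports and 8 × 12 = 96 forty-gig ports.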
The ability to do that eight-port frame striping across your ISLs is all built into our ASIC, and that's unique to Brocade. [In response to a question:] No, but it's on our roadmap to support that also. You can see here, from a lossless capability standpoint, we can create a multi-hop fabric that lets you be lossless within this network. Like I said earlier, low latency: 3.6 microseconds. There's nobody else that can give you that type of latency within an Ethernet network in a chassis-based system, and that's any port to any port, whether you're doing layer 2 or layer 3 on the device. We do our layer 2 and layer 3 lookups simultaneously, and that's one of the ways you get this low latency. We also support all the traditional QoS — eight levels of QoS — and priority flow control. Finding a good balance between an ASIC that's flexible enough to enable new types of features, like VXLAN, and one that also has low latency is not easy, but our guys did a great job at it. VXLAN wasn't a thought when we were designing this product, but there's enough table space in there to support the VXLAN feature just through software enhancements, so it's programmable.

Let me talk a little about what we're going to do in the future. What I've been talking about here has primarily been a fabric within a single data center. What we want to do moving forward is give you capabilities that extend across data centers — and I'm not just talking about VXLAN capabilities, where you're encapsulating layer 2 within layer 3, but actually extending the intelligence of the fabric across data centers by leveraging some very unique devices that we're going to bring to market. So we'll be extending the fabric. I mentioned the service-oriented fabric concept earlier: the first service was layer 3, and where we're going in the future is allowing you to do multi-tenancy types of functionality.
That means VRF-lite-type functionality within the fabric, so you can have overlapping IPs and support higher numbers of VLANs.

[Audience] VRF-lite is a nice minimum viable entry. When will you support MPLS, so you can get more carrier adoption?

MPLS is done on our core product line. What are the types of things you'd want to do with MPLS? Some of the customers I see bringing MPLS internal to the network are doing it to get the traffic-engineering benefits of MPLS, and a lot of what they want can be achieved with the fabric — giving them the low latency, separating things out, and still having a simple network, with the complexity kept in the core.

[Audience] Absolutely, traffic steering — but I'm thinking especially in the context of the carrier; I rarely see MPLS in the enterprise. It's the ability to maintain simplicity in my provider backbone: any time I have to pop a tag higher up in the network, that costs me.

The whole notion of the VCS architecture is that it's a core-edge architecture. The idea is that you keep all of your tenants as close to your compute as possible, at least when you're dealing with a lot of server-to-server traffic in the data center — that's where the fabric really shines. So we've tried to design our fabrics essentially for access and aggregation. It still comes back to a larger discussion of where you should pop a tag in the context of a service provider: you should pop it as close to your client as possible, and your client may be living very close to that service layer or server layer, so at the edge may be appropriate. What we haven't figured out yet — the question we still have to take on — is how do
you seamlessly extend a service provider network into the tenant, into the multi-tenant cloud. I think that story is still being developed. And the tenant is all over the place, even within a single data center, because of these new virtualization models: a tenant says "give me some more resources," and those resources go wherever they're available within the data center, so now they need that layer 2 domain to communicate. I think keeping the complexity in the core helps in that model, and mapping that VLAN off to your core routing infrastructure, or to whatever MPLS network it goes back to, is very helpful when you're not sure where these different virtual machines will land. Like I said, the story is still developing, but it's on the roadmap.

This is a family of products and functionality that's going to continue to evolve. We talked about the 24- and 60-port ten-gig devices and the chassis with ten-gig and 40-gig support. There will be higher-density ten-gig devices: a 1U device with higher density and 40-gig uplinks, so if you want a top-of-rack switch with higher-density ten gig that gives you 40 gig into the spine — a leaf-spine type of scenario — we'll have that available, based on the same technology. Also high-density 40 gig: if you want a 2U device with a lot of 40 gig on it, with high-speed uplinks off into the spine, we'll have that for you. A lot of customers are asking for a roadmap to 10GBASE-T; we're going to support that in our platform, in both a stackable and a chassis-based flavor, so whether they want to do an end-of-row or a top-of-rack type of deployment, they'll have the ability to do 10GBASE-T. And on the top-of-rack devices we're also moving to a flex-port type of model, where the optic you put into the product determines whether it's ten-gig Ethernet or 16-gig Fibre Channel, and that way you'll be able to do convergence very well within the top of rack.

That's the hardware side. On the software side, like I said, it's this whole service-oriented fabric model, where we're going to enable things like multi-tenancy, enable you to extend your fabrics across data centers, and give you the ability to do virtual host mobility at distance. So we're not worrying about that five-millisecond window you're normally allowed for live virtual host mobility: you'll be able to proxy that communication so you can do it at distance. That's in the near future, and you'll hear a lot more about those capabilities. We also support a concept called automatic migration of port profiles within our fabric today, meaning that when you create a port profile with the QoS settings and the VLAN settings for a virtual host, and that virtual host moves within the fabric, the port profile goes with it — you don't have to go to the other switch and reconfigure it. Since that moves within the fabric, what we want to allow you to do next is move it between fabrics without merging the fabrics together, and for that we'll be leveraging some of the extension capabilities we already have on the SAN side and bringing them over to the Ethernet side. Does that make sense? Are you liking what you hear — is this the right direction we're steering?

That's why I covered the interconnect first: we're going to give you a solid foundation you can count on, and then we can start growing all these other features on it. It also clearly differentiates us from our competitors and how they do things. We have a whole different mindset on how we're going to do these things, but we know it has to sit on a foundation that can support it, and we think we have a good foundation. Now we can start going out and exploring these capabilities and other types of functionality, or
giving you alternatives to VXLAN. We're not saying you have to do VXLAN — we will support it — but we'll also give you alternatives: a layer 2 extension capability between data centers that allows live migration, which we think will be valuable. Touching on VXLAN: VXLAN is hot right now, so of course we have to talk about it. What it allows you to do is encapsulate layer 2 in IP, and we want to make sure you have visibility into that. The platform has the ability to support it in hardware; we're just going to enable it in software, and you'll have full visibility of the VXLAN traffic. Within our platforms we'll also allow you to terminate VXLAN in the devices. And like Lisa said, later this afternoon we'll show you some of the alternatives to VXLAN.
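The VXLAN encapsulation discussed above — layer 2 carried inside IP — is a small, fixed header defined in RFC 7348: one flags byte with the I bit set, a 24-bit VNI, and reserved fields. A minimal sketch of building and parsing that header (the outer UDP/IP/Ethernet layers are omitted for brevity):

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_encap(vni, inner_frame):
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner Ethernet
    frame: flags byte 0x08 (I bit set), 3 reserved bytes, 24-bit VNI,
    1 reserved byte (the VNI occupies the top 24 bits of the last word)."""
    header = struct.pack("!B3xI", 0x08, vni << 8)
    return header + inner_frame

def vxlan_decap(packet):
    """Return (vni, inner_frame); raise if the I flag isn't set."""
    flags, vni_word = struct.unpack("!B3xI", packet[:8])
    if not flags & 0x08:
        raise ValueError("VXLAN I flag not set")
    return vni_word >> 8, packet[8:]
```

The 24-bit VNI is what gives VXLAN its roughly 16 million segments versus the 4,094 of plain VLANs — which is also why the talk keeps contrasting it with keeping layer 2 native in the fabric.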

One Reply to “Brocade VCS Fabric Overview”

  1. Brocade VCS….what absolute rubbish! The VCS software is a disaster area. Brocade, c'mon, guys, spare us the nonsense. Brocade Fibre Channel is great (not as good as Cisco's though), but Brocade SDN and Ethernet….? Puleeze!! 🙂 Brocade is desperately trying to reinvent itself but it's not working. And without a server and storage story for the data center, it's a wonder Brocade has managed to stay somewhat relevant.
