Fibre Channel over Ethernet has been back on my radar recently, especially as it was touted again at Storage Networking World in Orlando last week. Unfortunately I wasn't there to see for myself, although I was in Orlando the week before on vacation. I can imagine that if I'd extended or moved the holiday to include SNW, I'd have been none too popular with Mrs E and my sons.
Any hoo, I looked back over my blog and found that I first briefly mentioned FCoE back in April 2007, a whole 12 months ago. Now, we know 12 months is a long time in the storage world (in which time iSCSI will have claimed another 3000% market share, EMC will have purchased another 50,000 storage companies of various and dubious value, HDS will have released nothing and IBM will have developed 2 or 3 new technologies which won't see the light of day until I'm dead and buried). I expected, then, that FCoE should have moved on somewhat, and it appears it almost has. Products are being touted: for example, Emulex with the LP21000 CNA (a Converged Network Adapter, not an HBA, please note the new acronym) and Cisco with their Nexus 5000 switch (plus others).
At this stage I don't believe the FCoE protocol has been fully ratified as a standard. I have been spending some time wading through the FC-BB-5 project documentation on the T11 website, which covers FCoE, to understand in more detail exactly how the protocol works and how it compares to native Fibre Channel, iSCSI, iFCP and FCIP. In the words of Cilla, here's a quick reminder on storage protocols in case you'd forgotten.
Fibre Channel and the Fibre Channel Protocol (FCP) provide a lossless, frame-based transmission protocol for moving data between a host (initiator) and a storage device (target); FCP implements SCSI over Fibre Channel. To date, Fibre Channel has been implemented on dedicated hardware from vendors including Cisco and McDATA/Brocade. iSCSI exchanges SCSI commands and data between a host and a storage device over TCP/IP; it therefore carries the overhead of TCP/IP, but in return tolerates lossy networks and long-distance connectivity. iFCP and FCIP are two implementations which encapsulate FCP in TCP/IP packets: FCIP tunnels traffic to extend an existing Fibre Channel SAN, whereas iFCP allows traffic to be routed between separate Fibre Channel SANs.
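To make the layering comparison concrete, here's a minimal sketch (my own toy code, not from any real library, and with the stacks simplified) of how a SCSI payload ends up encapsulated under each of the protocols above:

```python
# Illustrative only: a rough summary of how a SCSI payload is wrapped by
# each protocol. Layer names and groupings are my own simplification.
PROTOCOL_STACKS = {
    # Native FC: SCSI carried by FCP inside an FC frame on dedicated hardware
    "FC":    ["FC frame", "FCP", "SCSI"],
    # iSCSI: SCSI PDUs over TCP/IP, hence usable on lossy/long-distance links
    "iSCSI": ["Ethernet", "IP", "TCP", "iSCSI PDU", "SCSI"],
    # FCIP: whole FC frames tunnelled through TCP/IP to extend one SAN
    "FCIP":  ["Ethernet", "IP", "TCP", "FCIP", "FC frame", "FCP", "SCSI"],
    # iFCP: FC frames carried over TCP/IP and routed between separate SANs
    "iFCP":  ["Ethernet", "IP", "TCP", "iFCP", "FC frame", "FCP", "SCSI"],
    # FCoE: FC frames directly on Ethernet -- no TCP/IP in the data path
    "FCoE":  ["Ethernet", "FCoE", "FC frame", "FCP", "SCSI"],
}

def print_stacks() -> None:
    """Print each protocol's encapsulation stack, outermost layer first."""
    for name, layers in PROTOCOL_STACKS.items():
        print(f"{name:>5}: " + " / ".join(layers))

if __name__ == "__main__":
    print_stacks()
```

Reading the stacks side by side makes the FCoE pitch obvious: it's the only option that gets FC frames onto Ethernet without dragging TCP/IP along for the ride.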
FCoE will sit alongside native Fibre Channel and allow FC frames to be transmitted directly at the Ethernet layer, removing the need for TCP/IP (and effectively allowing TCP/IP and FC traffic to coexist on the same Ethernet network). The catch is that plain Ethernet is lossy while FC expects the lossless transport described above, so FCoE depends on Ethernet enhancements such as per-priority flow control to deliver it.
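As a back-of-the-envelope illustration of "no TCP/IP in the path", here's a hedged Python sketch (again my own toy code, not a real FCoE stack; the MAC addresses are made up for the example) that wraps an FC frame straight into an Ethernet frame using the FCoE EtherType, 0x8906. I've left out the FCoE header/trailer fields (version, SOF/EOF, padding) that the FC-BB-5 drafts define:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned for FCoE

def build_fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an already-built FC frame directly in an Ethernet frame.

    Simplified: real FCoE adds version/SOF/EOF fields and padding per
    FC-BB-5, and relies on a lossless Ethernet fabric underneath.
    """
    assert len(dst_mac) == 6 and len(src_mac) == 6
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame  # note: no IP header, no TCP header

# A full-size FC frame (2,148 bytes) plus Ethernet overhead will not fit in
# a standard 1,500-byte Ethernet payload, so FCoE needs "baby jumbo" frames.
if __name__ == "__main__":
    frame = build_fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01",  # illustrative MACs
                             b"\x0e\xfc\x00\x00\x00\x02",
                             b"\x00" * 2148)               # dummy FC frame
    print(f"FCoE frame length: {len(frame)} bytes")
```

Even this toy version shows the practical wrinkle: a full-size FC frame simply doesn't fit in a classic Ethernet payload, which is one reason FCoE kit needs jumbo-frame support.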
So hurrah, we have another storage protocol available in our armoury, and the storage vendors are telling us this is good because we can converge our IP and storage networks into one and save a few hundred dollars per server on HBA cards and SAN ports. But is it all good? Years back, I looked at using IP over Fibre Channel (IPFC) as a way to remove network interface cards from servers: the aim was to remove the NICs used for backup and put that traffic across the SAN. I never did it. Not because I couldn't; I'm sure it would have worked technically, but rather because the idea scared the willies out of "the management" for two reasons: (a) we had no idea of the impact of two traffic types sharing the same physical network, and (b) the Network Team would have "sent the boys round" to sort us out.
Will this be any different with FCoE? Will anyone really be 100% happy mixing traffic? Will the politics allow the Network Team to own SAN traffic entirely? Let's face it: in large environments I currently advocate separating host, tape and replication traffic onto separate Fibre Channel fabrics. I can't imagine reversing my position and going back to a single consolidated network.
So then, is FCoE going to fare better in smaller environments where that consolidation is more practical? Well, if that's the case, then surely FCoE is just another niche player alongside FC, much like iSCSI.
It's early days yet. There are a million-and-one questions which need to be answered, not least how FCoE will interoperate with standard FC, how drivers will interact with the existing storage protocol stack on a server, and how performance and throughput will be managed. Some of these questions have already been answered, but this entry is far too long and rambling already, so I'll save that discussion for another time.