
Enterprise Computing: The Inevitable FCoE


The world of Twitter has been debating the merits of Fibre Channel over Ethernet again.  It seems that I’m one of only a few who see little point in moving to FCoE.  The only positive mooted for FCoE is the ability to put traffic over a single Host Bus Adaptor or Network Interface Card (in this case a converged network adaptor, or CNA) and therefore save some capital cost.  However, here are a few pointers against:

  • FCoE needs new switches
  • FCoE needs new HBA adaptors
  • FCoE operates at 10Gb/s – do all your servers need this performance?
  • FCoE requires changes to the IP standards in order to handle congestion
  • FCoE will require additional thinking and planning to bring two different network architectures together
  • FCoE will require bringing together two different operating teams
  • How will FCoE handle traffic prioritisation?
  • FCoE will add additional complications to change control; data network changes will be even more impactful
  • FCoE will require additional training and consultants’ cost (difficult for me to include this one)

Here’s another thought: if converged network adaptors were such a good idea, why didn’t IP over Fibre Channel take off?

FCoE is a Cisco strategy to own the data centre, nothing else.  As the recession bites, it would be a brave soul who could justify the disruption and additional spend, for very little gain.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • http://www.contemplatingit.com Tony Asaro

    Chris – the reason that IP over FC didn’t take off is that it was going in the wrong direction – against the commoditization tide. Every issue you present makes sense at this point in time. Over time FCoE will begin to make more and more sense and become cost effective. It isn’t just Cisco that is interested in FCoE: EMC has touted it in a big way, NetApp would like nothing more than to have only Ethernet networks in the data center, and QLogic is pushing FCoE hard. The other vendors that are on the fence will be forced to follow. How many times have we seen this movie?

    Does it make sense to have two different overarching network media – Ethernet and FC – in the data center? Or does it make more sense to converge – especially as 10 GbE becomes more affordable? The convergence does have value, but it won’t happen overnight. But I do think it will happen.

  • Jack Poller

    Chris

    Not having had the time to peruse the Twitter stream, I can only comment on what you say.

    Let’s first look at this from the customer’s point of view. If I’m the customer, regardless of my storage strategy (block/file and enet/fc), I’m going to have an Ethernet/TCP/IP backbone in my network. Everything, and I mean everything, with the sole exception of storage, uses the same protocols, the same physical cabling and the same switches.

    Now, to this Ethernet/IP infrastructure that, as Tony mentioned, is the commodity, I add my storage infrastructure. If my storage is Ethernet/IP based (regardless of protocol), it means that I use the same switches, the same physical cabling and, most importantly, the same IT staff.

    On the other hand, if I use old FC, I need new switches, new cabling, and an FC expert. So, if I’m setting up a new data center, and I implement an Ethernet/IP infrastructure for storage, I have significant savings – dollars, people, resources, energy, time.

    Now, from the storage vendor’s point of view, I can see FCoE for two separate scenarios. The first is to transition my existing customers to an Ethernet/IP infrastructure. This allows them to move to a single-infrastructure environment and realize savings over time.

    The second scenario is new customers. Specifically, I can grow the market to include those who don’t want (or can’t) implement FC in their data center. Once the data is across the HBA, it looks like FC, so there should be (relatively) little impact in code. Most of the impact is in specific device qualifications. So the vendor invests in this, and then enables new revenue streams. Sounds like a potentially good business decision. Especially if customers want this solution.

    From the HBA vendor’s point of view, again we’re adding a new revenue stream. If we can do it, why shouldn’t we?

    From the switch vendor’s point of view, again, we’re adding a new revenue stream. If we can do it, why shouldn’t we?

    Which brings us to your next question, re: Cisco: “Also, why would Cisco want customers to buy *less* network/FC ports?” Cisco wants you to run all your network traffic through a Cisco switch. At that point they’ll give you lots of additional benefits compared to when everything runs through gear from many different vendors.

    Which leaves us with the real crux of your complaint: “FCoE is a Cisco strategy to own the data centre”.

    And this would be versus what? How about: FC is a (name your favorite vendor here) strategy to lock customers into an ancient, proprietary, closed, expensive strategy!

    The reality is that Ethernet/TCP vs. all other networking strategies was decided many years ago. The FC community is the last holdout. This *is* what the customers want.

  • http://blogs.cisco.com/datacenter Omar Sultan

    Chris:

    You bring up some fair questions – ones I am sure might have occurred to other folks – so let’s see what I can do to clarify things a bit.

    So, FCoE can require host adaptors (CNAs), although there is a software stack available from Intel. FCoE is also defined for 10GbE, and the upstream switch does need to support DCE/CEE/DCB to provide lossless transport and traffic prioritization. One of the underlying dynamics we see is a shift to 10GbE driven by higher VM densities that make 10GbE a cost-effective solution. With the 10GbE in place, FCoE becomes a natural option to simplify infrastructure (more on that later). While we don’t expect people to necessarily upgrade existing servers, we do expect them to start spec’ing 10GbE or CNAs as more production x86 workloads are moved onto VMs.

    It is also true that FCoE will drive some increased level of cooperation between the storage and network teams, but I think you will agree that is not necessarily a bad thing. :) In fact, we see this as an inevitable result of continued data center virtualization efforts – the blurring of organizational roles. From a practical perspective, Cisco Nexus switches have role-based access control, so administrative functions can be separated (i.e. a network admin can be prevented from accessing the storage config). I am not convinced this will make things any more complicated than they currently are, in Cisco’s model at least. Using the Nexus 5000 as an example, network admins can manage it like any other LAN switch, while storage admins can manage it just like any other fabric switch. From a Cisco perspective, at least, we have worked hard to make this operationally non-disruptive.

    As to why do this, there are a number of benefits for customers. As you probably know, a typical enterprise server can easily have five or more interfaces (multiple NICs, a pair of HBAs, dedicated interfaces for backup, VMotion, etc.). A unified fabric allows this to be collapsed into two 10GbE links. This eliminates the cost of those extra interfaces, the related cabling and the upstream switch ports they are connected to. Fewer interfaces also allow the deployment of smaller server form factors. Beyond the capex savings from eliminating all this infrastructure, reduced power and cooling will contribute to opex savings.
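
    As a back-of-the-envelope sketch of this interface-count argument – every count and price below is a hypothetical placeholder rather than a vendor figure – a few lines of Python show where the claimed saving comes from:

    servers = 100
    # Typical legacy server: several GbE NICs, a pair of FC HBA ports, plus
    # dedicated backup/VMotion interfaces (assumed counts).
    legacy_ports_per_server = 6        # e.g. 4 x GbE + 2 x 4Gb FC (assumed)
    unified_ports_per_server = 2       # 2 x 10GbE CNA links

    legacy_cost_per_port = 300         # adapter port + cable + upstream switch port (assumed)
    unified_cost_per_port = 900        # 10GbE/CNA ports assumed to carry a premium

    legacy_total = servers * legacy_ports_per_server * legacy_cost_per_port
    unified_total = servers * unified_ports_per_server * unified_cost_per_port

    print(f"legacy:  {servers * legacy_ports_per_server} ports/cables, cost {legacy_total}")
    print(f"unified: {servers * unified_ports_per_server} ports/cables, cost {unified_total}")
    # The port/cable count always drops by roughly two-thirds here; whether the cost
    # column follows depends entirely on how big a premium CNAs actually carry.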

    From a functional perspective, a unified fabric makes VMotion simpler, since all servers attached have access to a consistent set of network and storage services, so you don’t have to worry if a destination server has the right LUNs configured, is in the right zones, etc. A unified fabric also means that 100% of the attached servers now have SAN access, which will allow customers to further leverage their SANs and continue to consolidate storage.

    Finally, FCoE is not a Cisco-only effort. Cisco originally launched the FCoE solution in partnership with Emulex, Intel, QLogic and VMware, and EMC and NetApp have certified the Nexus 5000 and have committed to FCoE storage devices. The actual FCoE standard is further backed by a broad cross-section of the industry.

    I hope this helped clarify things a bit–if not, feel free to ping me.

    Regards,

    Omar
    Cisco Systems

    PS: If you check out my blog, I recently posted something on the journey to a unified fabric.

    • Chris Evans

      Tony, thanks for the comment. I understand and see what you say; however, I guess what hacks me off about the FCoE discussion is not that the long-term direction for the datacentre should be a single converged traffic “conduit”, but that the *only* argument put forward today is the saving on network adaptors. NICs are currently cheap (n-1 generation); HBA cards are cheap. CNAs will attract a premium, so the adaptor-saving argument doesn’t wash. Also, why would Cisco want customers to buy *less* network/FC ports? Surely they’ve a vested interest in having more ports in use in the datacentre – unless of course they want you locked into their proprietary transport protocol so they can sell you their new shiny server hardware.

    • Chris Evans

      Tony

      Let me qualify the IP over FC comment. Everywhere I’ve seen FC deployed, there’s been a backup solution in place. I could see no problem with using the FC network at night (with IP over FC) to provide the transport layer for backup data – in effect creating a dedicated private backup network. In fact, it would have been possible to dispense with some backup-dedicated NICs. Although technically possible, it was always dismissed as an idea because of the conflict it would cause with the Network team – never because of the technology.

      Chris

  • http://etherealmind.com Etherealmind

    The requirement for FCoE is purely to create an “opportunity” for legacy FC storage to connect to existing Ethernet and IP networks.

    FCoE advantages such as:
    - higher throughput (by using Ethernet instead of IP, which means less overhead but doesn’t scale)
    - reduced CPU (by using Ethernet, which does not need CRC checks etc., but requires new Ethernet standards to make it reliable)
    - reduced latency (because encapsulating in IP takes milliseconds longer, though new silicon makes this point moot)

    are all arguments for HPC environments and not relevant to 99% of customers.

    Further, I would suggest that recent advances in developing iSCSI HBAs with TCP offload capabilities in a fully hardware-accelerated environment will mitigate all of the so-called “advantages” of FCoE.

    Remember, Cisco paid $250 million to acquire FCoE from Nuova and has wasted significant money on it since then. They will be looking for a return on that investment by shoving it down our throats.

    Now that VMware and other VM software support features over iSCSI and NFS, does FC have any advantages other than market inertia?

    The path to FCoE is not certain and is likely to be optional for the vast majority of the market.

  • Chris Evans

    Jack/Omar

    Thanks for the responses you’ve taken the time to post. I have a few comments:

    Jack:

    “On the other hand, if I use old FC” – I like your assumption that FC is “old”. Are you trying to tell me that Ethernet is the new kid on the block?

    I also like your assumption that putting FC over Ethernet removes the cost associated with that technology. That is hardly true. What FCoE offers is merely a replacement transport layer (Ethernet as opposed to Fibre Channel). You’ll still have all the skills issues associated with traditional FC, but within a different platform. In addition, for those customers who don’t do greenfield builds (i.e. most of them), there would be a transition state between FC and FCoE which would require significant additional skills to manage.

    You then get to the heart of the matter – you’re creating new revenue streams for the vendors of HBAs and switches. Essentially, both FC and Ethernet switches have become commodities. There’s no justification for big margins any longer. I have sitting on my desk today an Emulex LPe12002-M8 and an older LP9002 – one £1200 and one £10 from eBay (new, unopened) – which is best? Neither; the former simply provides more bandwidth (4x per port) but at 120x the cost. There’s no need for customers to upgrade to the latest HBAs in 90% of cases. So where do the vendors go next? Easy: invent a new protocol which will “simplify” configurations, but require the customer to replace all their existing equipment with new hardware at higher margin.

    What “additional benefits” will Cisco provide me if all my traffic is running through their hardware?

    How is FC more “ancient, proprietary, closed, expensive” than DCE will be, or even Cisco VSANs are today?

    If Ethernet/TCP versus other strategies was decided years ago, why have we seen a complete failure of iSCSI to displace Fibre Channel?

    Omar:

    You refer to the software stack for FCoE. This is open source (http://www.open-fcoe.org). I see no vendor support or guarantees for this software. If there’s no vendor guarantee, then it is useless for a production environment.

    You’ve commented on, and I’ve seen lots of references to, the benefits of using FCoE in virtual environments to help reduce cost. This seems to me the last place the savings should be made, as the incremental difference from removing a couple of HBA cards will be so small compared to the overall cost of the hardware. As servers are collapsed into virtual environments, there will already be significant savings made in hardware, including the HBA cards, IP and FC ports for those physical servers which are removed. Surely that initial saving far outweighs the subsequent saving of a few additional ports on the VM server.

    If reducing form factors and eliminating HBA cards were the aim, then why not use hybrid IP/FC cards, with one port of each on a card? Why not put FC on server motherboards, as with IP? Inventing and implementing a new protocol seems a much more complex solution, with considerable additional effort and cost.

    To your last point: FCoE may not be a Cisco-only effort, but the line-up of Emulex, Intel, QLogic, VMware, EMC and NetApp is hardly industry-wide adoption. What about IBM, HP, Sun, Dell? What about HDS? A new protocol with new HBA cards is precisely in the best interests of Emulex, Intel and QLogic. If I were those companies, sure, I’d be signing up to support it. As for EMC and NetApp, they have no choice; they can’t afford not to support a new protocol.

    If I were a customer today, I wouldn’t see deployment of a new, untested, proprietary technology as top of my list for saving money. I’d be focusing on the technology I know, making it last and ensuring I was getting every drop of value out of it.

  • Jack Poller

    Chris -

    A side note – don’t get distracted by the “old FC” terminology – it was my (very poor) way of distinguishing FC vs. FCoE. Sorry for the confusion.

    As to why iSCSI isn’t displacing FC? Maybe it is starting to displace FC, but not from the direction you think. Take a look at the SMB market and SMB solutions.

    How will FC play in “cloud” storage?

    Jack

  • tonyasaro

    Chris – I understand the issue that you raise on IP over FC – there were practical uses, but internal politics trumped utility. But again, it just shows that the “network” guys will win this battle. This time the battle is going in the right direction, or perhaps the “winning” direction. Additionally, there weren’t enough market forces driving its adoption. That is not the case with FCoE.

    I don’t agree with your comment on iSCSI failing to replace FC. I think that it has in midsize environments, both as FC replacements and as new SANs that would otherwise have been FC. But the big FC shops have too much invested in FC – not in terms of capital FC equipment (they will cycle that stuff out in three to five years anyway, and much of that will be timed with the maturity of FCoE) – rather, there is a resource investment in FC (and an emotional one), and rather than converting to iSCSI it is better to go with FCoE. I contend that if EMC, HDS and IBM had pushed for iSCSI in their high-end products then there would have been much more adoption at the high end – but it didn’t make sense there. However, these three will support FCoE.

    iSCSI and FCoE address different segments and different storage products. However, you will be able to use the same network media and infrastructure to support both. Another point for FCoE.

    As far as the network politics go – this raises a big issue, and we are seeing cultural shifts happening. What we have seen is that many of the storage guys don’t want to bother with the network and would rather just hand that off. In really large shops that do have specialized FC network guys, they will perhaps be the biggest political wall for FCoE. I suggest those guys either retire or think about a new career over the next three to five years.

    The long term convergence will eventually make sense but we have to start at a point where it only kinda makes sense.

  • ced

    Hi Chris,

    Just some thoughts, though not as extensive as those of my predecessors.

    I do see another benefit, which is the convergence into one team (network/SAN); that could reduce operational expenses.

    Finding a storage/SAN consultant is much more expensive than finding a network consultant (because the pool is very limited and the installed base is smaller).

    You could argue that DCE is not going to be deployed everywhere, but I don’t see it as a big effort for a network consultant to understand the congestion control implemented by the FCoE protocol. The basics remain the same. So, in the end, I do see one team managing the converged network, with the storage admins taking care ‘only’ of the storage arrays. –> Reduced OPEX

    In terms of management tools there is also convergence and reduced cost: one management platform for the LAN and the SAN. –> Reduced OPEX

    In the future, I also see integration of the CNA onto the motherboard itself, with competitors like Intel and not only QLogic or Emulex. –> Reduced OPEX

    A reduced number of adapters means less power consumption. –> Again, reduced OPEX

    These are already some benefits.

    I agree with you that it is a clever strategy from Cisco to make a takeover bid (an OPA) on the SAN, due to its strong presence in the LAN and especially in data center networking.

    Now I’m asking myself, what’s next? Why keep legacy FC on top of Ethernet? Why not have SCSI on top of Ethernet once the arrays provide FCoE connectivity? The transition might be easier to S(CSI)oE :)

  • ced

    Masking? Not really. Masking remains a pure storage array operation, no?

    Zoning? Yes. What are the skills needed to create zoning? It is nothing more than a security ACL – something that is done today in IP as well.
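
    To make that analogy concrete, here is a minimal Python sketch (the zone names and WWPNs are hypothetical) of zoning as a membership check, much like an ACL: two ports may talk only if some active zone contains them both.

    active_zones = {
        "oracle_prod": {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:3c:e0:11:22"},
        "backup":      {"10:00:00:00:c9:aa:bb:02", "50:06:01:60:3c:e0:11:23"},
    }

    def can_communicate(wwpn_a, wwpn_b):
        """Permit traffic only if both WWPNs are members of the same active zone."""
        return any(wwpn_a in zone and wwpn_b in zone for zone in active_zones.values())

    print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:01:60:3c:e0:11:22"))  # True
    print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:01:60:3c:e0:11:23"))  # False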

    I’ve been in both worlds (LAN and SAN), and when I look at the features and complexities of the network world (think about MPLS, OSPF, IS-IS, BGP… and all those complex routing protocols), the SAN by comparison is ‘just’ a simple flat network. Nothing really complex.

    Why is it so different? For me, FCoE will be much more accessible and will not require specialised hardware (SAN switches) for network people to train on and understand how it behaves.

    Also look at the LAN tools available freely on the market, which will become accessible for the SAN as well, giving better troubleshooting and understanding of the protocol.

    FCoE will open up the market much more for a new type of network consultant.

    Again, this comment is only true for the pure transport of storage (i.e. the SAN). Managing and designing your storage is complex and will remain the SAN admin’s responsibility.

  • Chris Evans

    Sorry Ced, but I don’t get it – your comments are counter-intuitive – you say FC is simpler than IP, but staff for FC are more expensive.

    In my experience, FC hardware is the least complicated part of the arrangement – you plug it in and turn it on. If I look at the configuration options for, say, an MDS 9513, they’re huge – non-standard in some places – with lots of things to set up and understand.

    So how can network consultants be so cheap and FC consultants so expensive? I suspect it’s because the FC consultants are not *pure* FC consultants. They’re storage consultants. They manage multiple pieces of technology. Moving forward, that isn’t going to change. Understanding the best way to configure storage will still require knowledge of the transport layer. Those expensive SAN guys aren’t going to go away.

  • http://blogs.cisco.com/datacenter Omar Sultan

    Chris:

    Quite the interesting thread you have going here.

    As far as the software FCoE stack goes, I am not sure you can simply dismiss it because it is open source – however, that is a whole different debate that I don’t think we need to get into here. At the very least, it gives you the opportunity to try out FCoE with minimal investment in your test and dev environment.

    As far as the cost savings go, I think you are underestimating them – it is more than just a couple of HBAs – the capex and opex costs to support multiple parallel networks are significant when scaled across hundreds or thousands of servers. We have an ROI calculator on cisco.com that you can use to understand the cost impact – I can provide a URL if you want.

    Why is FC over Ethernet winning out over IP over FC? First, the economics of Ethernet are more compelling; second, layering IP on top of FC adds a lot of cost and complexity without commensurate benefits.

    As far as the standard itself, FCoE is governed by the FC-BB-5 working group under the T11 committee within INCITS. The standard passed a letter ballot by a vote of 29-4, which represented broad agreement across the industry (Dell, HP and IBM voted “yes”, while Sun abstained). The standard is in the home stretch for completion.

    As for the question of protocols, I still say you will see a mix of FC, iSCSI and FCoE in the enterprise. For a mid-market customer with no FC in place, iSCSI makes a lot of sense. On the other hand, for a large enterprise customer with a significant FC SAN investment, FCoE makes a lot of sense – it allows convergence at the server access layer, where there is the greatest opportunity for cost savings, while not being disruptive to the existing FC SAN, which is probably happily chugging along.

    Finally, as @ced points out, this is really still a discussion about transport–the need to architect, implement, and operate the storage environment still remains in place.

    I have a couple of URLs I could post that might be helpful–let me know if you are OK with this and I will add them in a subsequent comment.

    Regards,

    Omar

  • http://snig.wordpress.com/ snig

    Great post Chris. I posted about this exact same thing almost a year ago. http://blogs.rupturedmonkey.com/?p=156

    I totally agree with you. This is Cisco’s way of getting FC under their control because they couldn’t beat Brocade head to head. All the other vendors have jumped on board simply to increase their margins for a bit and make people purchase new equipment, or just to keep up.

    There is no technical reason that anyone can give today for a customer moving from FC to FCoE. Just like they couldn’t give any technical reason for a customer to move from FC to iSCSI.

  • Pingback: A Collection of Viewpoints on Cisco UCS - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, storage, and servers

  • Pingback: Cinetica Blog » FCoE fa comodo solo a CISCO!

  • http://www.ethernetstorageguy.com Trey Layton

    Chris

    Thanks for taking the time to make the post. I attempted to read the vast number of comments, but felt that many of those responses weren’t addressing your original questions. If they were, I apologize to those who have pointed them out.

    * FCoE needs new switches
    Absolutely correct; there are, however, fundamental advantages to the evolution of technology. The concept is to make things better, and the new lossless switches that support Data Center Ethernet, and thus transport FCoE, are designed to evolve the technology, continue to raise the bar and ultimately benefit consumers.

    * FCoE needs new HBA adaptors
    I wouldn’t call an FCoE CNA an HBA – it is far from it. I have built some very large data centers in the last 5 years, and one thing is sure: we are increasing the quantity of cables and interfaces that we are jamming into servers. All of those interfaces are not being used efficiently, which makes it troublesome to get into the server to perform any maintenance and a nightmare to troubleshoot if someone moves a cable.
    So yes, the technology requires new interfaces, but in the spirit of consolidation. No one has ever said, “Hey, I get the value of VMware, but it requires me to buy new, bigger servers to consolidate all my physical machines on – I don’t want to do that.” If they did, well, they probably aren’t leading the technology strategy in that company.

    * FCoE operates at 10Gb/s – do all your servers need this performance?
    This is probably my biggest gripe with the technology at this point. Everyone assumes that it operates at 10Gbps, and they could not be more wrong. Almost all of the CNAs being sold today run the FCoE portion of the adapter at a max speed of 4Gbps. The entire value of FCoE today is consolidation, not a performance increase. The future will eventually see hardware ASICs on the adapters which run at higher speeds, and the promise of hardware-assisted initiators is being designed into a few offerings. Read the fine print on those CNAs out there: they run at 4Gbps for FCoE traffic and 10Gbps for the IP stack (which is separate – read on).

    * FCoE requires changes to the IP standards to implement; to handle congestion
    This is my biggest gripe with the lack of education on the protocol. FCoE has absolutely zero, nothing, nada to do with IP. FCoE doesn’t even use the traditional Ethernet forwarding mechanisms. The mechanism for forwarding an FCoE frame on an Ethernet fabric is through the use of traditional zoning, as with traditional Fibre Channel SAN networks. This is what makes the two technologies so complementary. You can sit a SAN person in front of the tools being released by Cisco to deploy FCoE environments, and if they were able to zone an MDS, they can zone a Nexus 5000 FCoE switch.
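
    To illustrate that point, here is a minimal Python sketch of the encapsulation as defined in FC-BB-5, as far as I understand it (EtherType 0x8906, a 14-byte FCoE header carrying the SOF code, and an EOF trailer). The SOF/EOF values and MAC addresses are only illustrative; the thing to notice is that no IP or TCP header appears anywhere.

    import struct

    FCOE_ETHERTYPE = 0x8906   # FCoE data frames; the separate FIP control protocol uses 0x8914

    def encapsulate_fc_frame(dst_mac, src_mac, fc_frame, sof=0x2E, eof=0x41):
        """Wrap an already-built Fibre Channel frame directly in Ethernet (no IP/TCP)."""
        eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
        fcoe_header = bytes(13) + bytes([sof])   # version + reserved bits, then the SOF code
        fcoe_trailer = bytes([eof]) + bytes(3)   # EOF code plus reserved padding
        return eth_header + fcoe_header + fc_frame + fcoe_trailer

    # A dummy 24-byte FC header plus payload stands in for a real FC frame; the
    # 0e:fc:00 prefix mimics a fabric-assigned (FPMA) MAC address.
    frame = encapsulate_fc_frame(bytes.fromhex("0efc00000001"),
                                 bytes.fromhex("0efc00000002"),
                                 fc_frame=bytes(24) + b"SCSI payload")
    print(len(frame), hex(FCOE_ETHERTYPE))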

    * FCoE will require additional thinking and planning to bring two different network architectures together
    See the last point. One of the reasons the standard has taken so long is the incredible amount of conversation that has gone on to make sure that management, security and every other element of traditional FC fabrics are entirely compatible with FCoE.

    * FCoE will require bringing together two different operating teams
    There is some truth to this, but that combination is already occurring in most small-to-medium-size organizations, which account for the majority of the businesses on the planet. Those (comparatively few) large organizations which have dedicated SAN teams and network teams will be required to collaborate. However, because FCoE does not use the Ethernet forwarding mechanisms, does not use IP and only operates within the data center on the same layer 2 segment, the only thing the SAN team will be required to do is make sure there is a link light and the zone is configured properly.

    * How will FCoE handle traffic prioritization?
    This again is built into Data Center Ethernet through the use of a few technologies that are new to the Ethernet world.

    The first is lossless Ethernet: the ability to provide a lossless service to a traffic class. When the switch is configured, you state that FCoE is granted the lossless service. This ensures that any FCoE frame that arrives at a switch or host interface will be processed ahead of any other type of frame. Those other frames will be buffered and ultimately dropped if they cannot be processed. That is acceptable because those other protocols will likely have a higher-layer capability for dealing with retransmissions. FC traffic does not tolerate discarded frames, hence a lossless service was needed and created.

    The remaining technologies are:
    - 802.1Qbb Priority-based Flow Control
    - 802.1Qaz Class of Service Bandwidth Management
    - 802.1Qau Congestion Management
    - L2 Multi-Path

    Of the above, L2 Multi-Path is the one that is not completely baked yet; it brings FC’s excellent multipathing ability to Ethernet, which otherwise has the evil spanning tree lurking to block redundant links from the forwarding path.
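
    As a rough sketch of what 802.1Qbb actually puts on the wire (assuming the standard MAC Control encoding: EtherType 0x8808, opcode 0x0101), the per-priority enable vector is what lets a switch pause only the FCoE class while ordinary IP traffic keeps flowing:

    import struct

    PFC_DST_MAC = bytes.fromhex("0180c2000001")   # reserved MAC Control multicast address
    MAC_CONTROL_ETHERTYPE = 0x8808
    PFC_OPCODE = 0x0101

    def build_pfc_frame(src_mac, pause_quanta_by_priority):
        """Pause only the listed priority classes (0-7); other classes keep flowing."""
        enable_vector = 0
        pause_times = [0] * 8
        for priority, quanta in pause_quanta_by_priority.items():
            enable_vector |= 1 << priority
            pause_times[priority] = quanta        # one quantum = 512 bit times
        body = struct.pack("!HHH", MAC_CONTROL_ETHERTYPE, PFC_OPCODE, enable_vector)
        body += struct.pack("!8H", *pause_times)
        return PFC_DST_MAC + src_mac + body

    # e.g. pause only the class carrying FCoE (commonly CoS 3) for the maximum time
    frame = build_pfc_frame(bytes(6), {3: 0xFFFF})
    print(frame.hex())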

    * FCoE will add additional complications to change control; data network changes will be even more impactful
    This is not a true statement. The same process that one goes through today to schedule and ultimately change a zone will apply with FCoE tomorrow. Any change by the network team to routing or IP addresses, etc., will have ZERO impact on FCoE traffic. The only things that will impact FCoE traffic are the physical shutdown of the switch or port, or removal of the cable. There is no dependency on the Ethernet forwarding fabric, thus there are no further complications beyond what exists today.

    * FCoE will require additional training and consultants’ cost (difficult for me to include this one)
    FCoE will require everyone to grasp the fact that everything is running on a single cable, yet the skills that were used to zone a SAN will be used in the FCoE world; they haven’t changed.

    I actually blogged about this very topic a few weeks ago.

    http://ethernetstorageguy.blogspot.com/2009/03/fcoe-wow-is-there-confusion-on-this-out.html

  • http://www.colinmcnamara.com colinmcnamara

    IP over Fibre Channel didn’t take off for a couple of reasons, but most importantly the lack of multicast support in IPFC.

    • Chris Evans

      Thanks for all the comments. Here’s one thing I still don’t get – even in storage, many sites complain about multi-vendor environments, as they’ve got to skill up on two or more platforms. Now, correct me if I’m wrong, but FCoE is *still* Fibre Channel – still zoning, masking, etc. So in the future, the network guys won’t just be doing IP; they will be doing FC too. Today’s highly paid FC consultant will just become tomorrow’s highly paid FCoE consultant – the underlying skills will still be needed. Am I missing something?

  • Pingback: Enterprise Computing: Brocade Announced FCoE Converged Switch « The Storage Architect

  • http://etherealmind.com Etherealmind

    What Trey says is not entirely correct. When FC is encapsulated in Ethernet to become FCoE, it is entirely dependent on how Ethernet works and performs.

    However, the point that FC remains the same protocol (with all its flaws and strengths) is true. Zones, LUNs, etc. are all identical configurations and concepts and remain unchanged.

    The real art is building the Ethernet fabric to be lossless and low-latency. But once this is done, storage over IP (NFS or iSCSI) becomes a winning technology because it addresses the weaknesses of all the storage protocols.

    Chris is right. FC is now a legacy protocol, FCoE is the migration path to IP Storage. Choose NFS or iSCSI or something yet to come but FC isn’t likely to survive for long.

  • Chris Evans

    Thanks Trey

    Great and comprehensive response. Despite my negativity towards FCoE, I am looking forward to seeing how the technology develops and where it fits in the greater scheme of things.

    Chris

  • Chris Evans

    Cheers Snig

    Chris
