All of a sudden, data centre networking has gone from ho-hum to hot.
The status quo, the hierarchical three-tier network that has dominated the data centre since the late 1990s, is being threatened as enterprises move toward a virtualized, service-oriented, scaled-out, converged infrastructure.
The question facing enterprise IT architects and planners today is whether to stick with three tiers or to flatten out the network in order to improve performance and ultimately save money.
From the vendor perspective, companies like Brocade, HP and Juniper Networks are pushing hard for the flat network. After all, if enterprises decide to go flat, they will need to buy new data centre switches, and that opens up an opportunity to displace Cisco, which for years has been all but synonymous with the data centre network.
“Even companies with Cisco Catalyst switches have to re-qualify their data centres and, to go into flat network mode, they’ll have to look at all brand new equipment. This is a huge opportunity for others to come and take the data centre network away from Cisco,” says Andre Kindness, senior analyst for enterprise networking at Forrester Research.
Cisco, of course, has next-generation plans of its own, built around its Nexus switches. And going head-to-head against industry powerhouse Cisco could also backfire on some of the more vulnerable network vendors.
Kindness says the survivability of some of these vendors could even be at stake. As servers, storage and the network converge, if these companies don’t get themselves into the data centre network, the rest of their businesses could be affected, he adds.
From the IT side, the allure of the flat network is being able to deploy products purpose-built for automation, convergence and virtualization. The promise is a much simplified, fabric-based architecture from which enterprises will enjoy dramatic performance improvements as well as streamlined operations and reduced expenses.
“The data centre network hasn’t been this exciting in probably 10 years,” Kindness says. And enterprise network planners are sitting up and taking notice.
“This is a brand-new game, with the latest and greatest new technology and everybody has their specialties. The data centre network definitely warrants a fresh look. We can’t rely on old assumptions anymore,” says John Turner, director of networks and systems at Brandeis University, in Waltham, Mass.
“Clearly all the switch vendors are racing to keep up with what’s happening in the data centre, and nobody has it quite right just yet,” Turner says. “Some of these products are literally being released to customers with the paint still wet on them.”
Jim Metzler, vice president of IT consulting firm Ashton, Metzler & Associates, says industry trade show Interop is a barometer of how the buzz around flat architectures has put the excitement back into networking.
In 2008, Metzler organized a session entitled “Is There Anything Left to Say About the Local Area Network?” The answer: not a whole lot.
This spring, Metzler moderated a session he called “Why Networking Must Fundamentally Change.” “I had 212 people in the audience and vendors lined up, angry that they couldn’t get on the panel. Why? Because networking is fun again,” he says.
Perhaps so, but flattening the network is no mere amusement, experts warn.
“Truth is, if we look at data centre networking trends from the demand side, meaning how the structure and mission of data centres are changing, then there’s definitely a logic to rethinking how we look at data centre networks,” says Tom Nolle, president of CIMI, a technology assessment firm.
But network executives would be wrong in thinking a flat network is always best for the new data centre, he adds.
“What really matters is what’s happening in the data centre, such as an increase in inter-processor communications, and how that affects the behavior of traffic and the requirements placed on the network,” Nolle says. “If you decide to flatten your network without looking at the entire chain, then you run the risk of either investing prematurely or investing in the wrong thing.”
Nolle fears that many enterprises, caught up in the marketing furor around re-architecting, will undertake and then abort flat network projects over the coming years.
“As we start to promote the notion of flattening the data centre network, we’re taking what should be a project-based decision and pulling it out of that context,” he explains. “If a project is expected to cause performance issues, and you’re considering flattening the network to solve those problems, then that’s a reasonable decision. But the notion of going out and flattening the network just because it’s the cool thing to do is like buying a car because you think it’ll make you socially successful.”
Flattening the network will require a major investment in new equipment. Network executives must be able to tell a good financial story about how flattening the network will increase revenues or reduce costs outside of the data centre. That’ll be hard for the average enterprise to do, Nolle says.
“The benefit case won’t prove out … and flat network projects will be aborted. Hopefully this will happen early enough in the cycle that relatively little will have been procured,” he adds.
Whether to flatten or not needs to come down to a price/performance evaluation.
“At the end of the day, I don’t care if the data centre network is two or three or however many tiers. We’ve got to get our arms around the cost,” Metzler says.
“You have to remember why we went to three tiers in the first place,” says Robin Layland, head of Layland Consulting, which specializes in network architecture and strategy for enterprise customers. “We couldn’t get the density we needed with two tiers. It was just too expensive, so we went to three tiers. That was the cheapest way to get the performance we needed. There was no inherent goodness in a three-tier architecture,” he adds.
Data centres are changing, but the decision points for network managers remain the same. “If a two-tier network can give you great performance at a better price than a three-tier design, it wins, because theoretically it should have lower latency and even better reliability too,” Layland says.
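Layland’s point about density can be made concrete with a little arithmetic. The sketch below uses a simplified two-tier (leaf-spine) model with hypothetical port counts, not any vendor’s actual product specs, to show how switch radix caps the scale of a two-tier design, which is the constraint that originally pushed data centres to a third tier.

```python
# Hypothetical sizing of a two-tier leaf-spine fabric. Port counts and the
# model itself are illustrative assumptions, not real product figures.
import math

def two_tier_fabric(servers, leaf_ports=48, spine_ports=32, uplinks_per_leaf=4):
    """Size a simple leaf-spine fabric.

    Each leaf switch dedicates 'uplinks_per_leaf' ports to spines (one uplink
    to each spine) and the rest to servers. Each spine must terminate one
    uplink from every leaf, so the spine radix limits how many leaves fit.
    """
    down_ports = leaf_ports - uplinks_per_leaf   # server-facing ports per leaf
    leaves = math.ceil(servers / down_ports)
    if leaves > spine_ports:
        return None  # spine radix too small: two tiers can't reach this scale
    return {"leaves": leaves, "spines": uplinks_per_leaf}

print(two_tier_fabric(servers=1000))  # fits in two tiers
print(two_tier_fabric(servers=5000))  # None: this scale would force a third tier
```

With low-radix switches the function returns None at larger server counts, which is exactly the density wall Layland describes; with today’s high-radix switches the two-tier design fits, and the comparison reduces to price per usable port.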
The decision shouldn’t be hurried; it calls for a careful study of the options and a careful setting of expectations, Kindness says.
“This isn’t going to happen for two to five years, and even then not as one big changeover,” he says. “This is going to be an evolution, and no one should expect their vendors – server, storage or networking – to enable this to happen automatically.”
“None of these vendors is going to be able to produce the whole thing end to end perfectly in the beginning,” Layland says.
So in the short run, they’ll differentiate themselves on how they handle the new networking requirements brought on by server virtualization. “It’d be nice to use all possible connections at least from the servers to the first switch or top of rack, because that’s where the problem first shows up and where we have a lot to gain. Plus, the first storage integrations will take place at those top-of-rack switches, too – it won’t mix all across the network at first,” he adds.
“We’ll have to see who can pull this off and how well they handle the execution,” Layland says. “This is where you see the most opportunity for proprietary magic.”