Tuesday, March 24, 2009

Cisco's UCS: The industry reacts

March 24, 2009 -- The IT world has had about a week to digest, mull and question the ins and outs of Cisco's newly announced "game-changer," the Unified Computing System. And the industry certainly has questions for Cisco.

Several competitors are questioning whether Cisco's UCS – the platform that combines compute, network, storage access, and virtualization resources in a single system based on a new line of blade servers developed by Cisco – features a truly open architecture.

Brocade's CEO Mike Klayko made his opinion known yesterday in a video posted to the Brocade YouTube Channel.

Klayko does not believe large enterprise customers will put mission-critical applications on a version-one product, referring to Cisco's new blade servers.

Brocade has also issued an official statement to the media in response to Cisco's UCS launch. It reads:

"A dynamic and virtualized data center holds the promise of many compelling benefits for end-users including increased server utilization, decrease in power footprint and more efficient operations in general. However, achieving this goal is a complex challenge that can be best tackled by a broad ecosystem of industry partners and not based on a proprietary, singular architecture of one company.

In contrast, Brocade is already helping customers address these challenges by integrating our networking solutions with a range of mature computing, management and storage technologies from some of the strongest companies in the world. These partnerships are leveraging open interfaces/standards, co-developed technology, and products that are available today, which will lower costs and maximize return on investment for customers."

BLADE Network Technologies president and CEO Vikram Mehta also took aim at Cisco in a recent blog entry where he lists 10 reasons why Cisco's Unified Computing strategy is nothing more than a way to lock customers into a proprietary world while locking out vendors like HP and IBM.

Cisco begs to differ. Rob Lloyd, executive vice president designate of Worldwide Operations for Cisco, explained that Cisco has "built an open ecosystem of industry leaders" in support of the UCS, even going so far as to refer to UCS supporters as a "dream team of capable partners."

Cisco is collaborating with a wide range of hardware and software vendors to develop systems and applications that work with the platform. Specifically, Cisco is teaming up with technology partners BMC Software, EMC, Emulex, Intel, Microsoft, NetApp, Novell, Oracle, QLogic, Red Hat, and VMware and has expanded strategic relationships with Accenture, CSC, Tata Consultancy Services (TCS), and Wipro.

Noticeably absent from the partner list are the server vendors. However, Lloyd told media and analysts in last week's UCS conference call that Cisco does not view the UCS as a blade server.

"The UCS will be shipped and configured as a system. That's why we don't think we're competing on a blade platform, but on a new system form factor," he said.

Wednesday, March 4, 2009

Is 2009 the year of unified fabrics?

Tight budgets invite tough decisions. Some storage projects will undoubtedly be shelved this year as end users look to drive cost out of the data center. As a part of those consolidation efforts, network fabrics could get a makeover.

Enterprise Strategy Group (ESG) analyst Bob Laliberte believes all organizations are in uncharted economic territory and that 2009 will clearly be a challenging year for IT budgets.

However, according to his research, the majority of organizations surveyed by ESG expect that their storage spending will increase slightly in 2009.

ESG estimates that storage capital spending will grow at a modest rate of 2.9% from 2008 to 2009, outpacing most expectations of overall IT spending growth. Spending increases will be centered among the largest, most data-intensive organizations and will be tied to specific business initiatives such as Web 2.0 projects, improved business intelligence, and globalization.

If the main players in the push for unified networking are to be believed, the economic climate creates a big opportunity for unified networking technologies. Both Brocade and Cisco say they are seeing success with their newest products as users are well on their way to adopting the core platforms necessary for supporting the unified fabrics of the future.

"Our DCX Backbone is the fastest ramping and most widely successful product line we've ever had," said Brocade's senior director of product marketing, Marty Lans.

Though Brocade doesn't break out specific numbers for public consumption, the company cites internal metrics and general market acceptance as the measure of the DCX's success, basing its claims on the number of units shipped and port density.

Cisco is also enjoying success as it positions the capabilities of its Nexus platform as necessary for virtual data centers.

"The implementation of a unified fabric infrastructure allows for combining storage and data traffic on a single, unified Ethernet network. As virtualization becomes a stronger design influence in the data center, these features become a requirement to support virtual environments," said Cisco's Dante Malagrino, director of product marketing for data center emerging technology.

Cisco touts more than 250 customers for its new flagship product, the Nexus 7000, which began shipping in January of 2008.

According to Laliberte, server virtualization is also driving the need for faster, more advanced storage networking technologies.

"Our research indicates that all networked storage is increasing, Fibre Channel SAN, iSCSI SAN and NAS. With multiple virtual machines there is a need for additional throughput," he said.

Laliberte thinks the concept of consolidated fabrics will continue to gain acceptance this year.

"As long as organizations continue to consolidate data centers and infrastructure – the ability to consolidate FC directors onto a backbone should resonate – saves on power, cooling and space and the new virtual fabric technology ensures secure segmentation of the SAN," he said.

Friday, December 12, 2008

Survey says…

It’s that time of year again. Major product announcements are scarce as we head into the holiday season, but the storage vendors are attempting to fill the December news void with a series of surveys that gauge the challenges facing end users in 2009.

How do you stack up against your peers when it comes to storage planning for next year?

Enterprise users

Virtual tape library (VTL) and de-duplication vendor SEPATON recently conducted a survey of IT pros in U.S.-based corporations to get a feel for what challenges they will face around data protection, business objectives and technology requirements for enterprise data centers in 2009.

Of the 145 respondents – all from enterprise companies with at least 1,000 employees and a minimum of 50TB of primary data to protect – 52% say their data protection is insufficient, citing a “lack of budget to keep pace with technology” as the cause.

The research also reveals that backup is still the scourge of many enterprise organizations. Fifty-three percent of respondents need more than 20 hours to complete a full backup, while 37% say they need more than 24 hours to complete a full backup.

According to the SEPATON survey, users are planning to turn to new technologies such as data de-duplication in order to maintain service levels and regulatory compliance.

More than 90% of respondents are either using de-dupe now or want to use it. Of those who do not have de-dupe, 55% are allocating dollars for the technology in 2009.

In addition, a majority of the respondents are using physical tape, but fewer than 50% expect to be using tape one year from now as they increase their use of disk-based technologies like disk-to-disk, VTL appliances, or VTL gateways.

SMBs

Backup pains aren’t just a problem for big IT shops. Small and medium-sized businesses (SMBs) also rate backup as a top priority and an all-around pain in the neck, according to a recent study commissioned by Symantec and conducted by Rubicon Consulting.

Backup ranks as the second-highest computing priority for SMBs, after defense against viruses and other malware, according to responses from IT decision-makers at several hundred small businesses (with fewer than 250 employees).

Ninety-two percent of companies polled have deployed some form of data backup technology, yet 50% of those respondents have lost data. Of the companies that lost data, roughly a third have lost sales, 20% have lost customers, and 25% say the data loss caused severe disruptions to the company.

Some of the results were disconcerting, given how destructive data loss can be to SMBs. Approximately 25% of SMBs don’t back up their PCs at all, and 13% do only informal backups where employees decide the frequency and which files are protected, according to Rubicon. Additionally, about 20% of SMBs conduct no server backups.

CIOs

Hewlett-Packard (HP) recently revealed the results of its own commissioned survey of chief information officers (CIOs) conducted by Hansa|GCR.

The Web survey of 600 technology decision-makers from medium-sized organizations to enterprises across the globe shows that 84% of tech organizations plan to “transform” their data centers in the next 12 months as they look to lower operating costs and reduce business risks through technology.

So-called "data center transformation" projects typically include consolidation, virtualization and business continuity initiatives.

According to the study, 31% of respondents say reducing cost is a top priority for ’09, while 29% plan to enhance data security. The decision-makers also say that technology needs – not business needs – are prompting these investments.

The survey also shows that 95% of organizations are implementing or planning for data center consolidation next year, while 93% and 91% are embarking on business continuity and virtualization projects, respectively.

The research may be sponsored by vendors, but, for the most part, it is in line with much of the third-party research covered on InfoStor.com. Stay tuned as we track these predictions over the next several months.

Friday, November 21, 2008

The clouds are forming

There’s a perfect storm of cheap hardware, massively scalable architectures and automated data management developing. Cloud-based storage is here.

Actually, cloud platforms have been around for a while (see Amazon’s S3 service and products from companies such as Bycast, Nirvanix and ParaScale as examples), but now EMC has stepped into the fray with its Atmos platform, a move that has, in the minds of many, simultaneously given credibility to the technology and officially established the market.

Even some of the unflappable experts in the industry have been taken aback by the amount of buzz drummed up by the Atmos launch earlier this month. Personally, I have been inundated with media pitches and interview requests from every vendor that can in some way tie the term cloud computing to their technology. They’re coming out of the woodwork.

It raises the question: Are cloud infrastructures and the resulting cloud-based storage services all hype, or are we truly entering a new era?

Jeff Boles, a senior analyst and director of validation services at the Taneja Group research and consulting firm and an InfoStor contributor, is convinced that cloud storage will change IT strategies in many ways.

In a recent series of articles on the topic, he makes three pretty bold predictions about the impact cloud-based storage will have on the industry. He writes:

1.) Users will expect cheaper storage, as user self-service makes storage in the cloud less expensive to deliver.

2.) Users will expect more responsive and scalable storage, because hosted providers can respond and scale on demand.

3.) Users will expect to access and manage their data in ways that were not possible before.

It’s looking like 2009 is set up to be the year that the technology begins to change user expectations, and it’s a safe bet that we’ll be tracking this segment of the storage market. It will be interesting to find out how many end users actually have their heads in the clouds.

Friday, November 7, 2008

Whatever happened to SMI-S?

Rooting through the press packets and marketing materials left over from the Storage Networking World conference can sometimes help in developing story ideas, as vendors tend to include press releases from the show, technology white papers and company backgrounders. As I was flipping through the materials from the Storage Networking Industry Association (SNIA), I came across a press release that I hadn’t noticed before and it made me wonder whatever happened to the Storage Management Initiative Specification (SMI-S).

Apparently, there have been some developments in the spec. The SNIA has made version 1.3 of the SMI-S available with support for some new features and functions. For those who don’t know, SMI-S was introduced years ago under the SNIA’s Storage Management Initiative (SMI) as an interoperable management interface for multi-vendor storage networking products.

SMI-S describes the information available from storage hardware and software to a WBEM client from an SMI-S-compliant CIM server via an object-oriented, XML-based interface. That information provides a foundation for identifying the attributes and properties of storage devices and facilitates discovery, security, virtualization, performance, and fault reporting.
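
In practice, an SMI-S client talks to the provider's CIM server over WBEM (CIM-XML over HTTP/HTTPS). As a rough illustration, here is a minimal discovery sketch using the open-source pywbem library; the host name, credentials and interop namespace are hypothetical placeholders that vary by vendor implementation.

    # Minimal SMI-S discovery sketch using pywbem (assumes: pip install pywbem).
    # The URL, credentials and namespace are example values, not a real provider.
    import pywbem

    conn = pywbem.WBEMConnection('https://smis-provider.example.com:5989',
                                 ('admin', 'password'),
                                 default_namespace='interop')

    # Ask the CIM server which SMI-S profiles (e.g., Array, FC Switch) it implements.
    for profile in conn.EnumerateInstances('CIM_RegisteredProfile'):
        print(profile['RegisteredName'], profile['RegisteredVersion'])

Management tools built on the spec issue queries along these lines rather than relying on each vendor's proprietary API.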

The newly available version 1.3 features new support for more advanced storage architectures and functions such as storage virtualization, VTLs, SAN security, and RAID controller cards. The spec also now supports Fibre Channel switches, improving SMI-S solutions by speeding up discovery and the monitoring of larger device configurations.

That’s fine, but how much does it really matter? The SNIA and its participating vendors have made many claims since the inception of the SMI-S project. It was supposed to be a stepping-stone to interoperability. Some even claimed that users would make SMI-S a checklist item and would eventually require it as a feature of any storage device or product going forward.

I have to agree with the opinions of Jon Toigo, CEO and managing principal of Toigo Partners International. In a two-part column earlier this year for InfoStor, Toigo stated that SMI-S has not caught on in the mainstream. In fact, I think that’s an understatement.

Slowly but surely vendor noise around the spec has died down and now it seems to have completely disappeared. Mentions of SMI-S conformance have vanished from vendor PowerPoint presentations and I can’t remember the last time a storage exec highlighted SMI-S conformance as a product feature.

The SNIA has recently turned its attention to other projects, such as the Solid State Storage Initiative (SSSI), but SMI-S development continues to roll on. In conjunction with the release of version 1.3, the SNIA has also launched supporting conformance tests, and the first SMI-S-committed vendors to pass the SNIA Conformance Testing Program (SNIA-CTP) provider suite for SMI-S version 1.3 storage management include EMC, HDS, and HP.

According to Paul von Behren, chair of the Storage Management Initiative Governing Board, SMI-S now “contains sufficient breadth and depth of functionality such that the Storage industry can use the technology as the reference interface for managing enterprise storage solutions.”

That may be true, but after six years of development and investment, how has SMI-S changed multi-vendor storage management? Given that proprietary management software still rules the day, I’d say SMI-S has fallen short of delivering on its promise as a panacea for open storage management.

Wednesday, October 29, 2008

Users get "excited" over storage vendors, technologies

Which vendors or technologies come to mind when you think about “exciting” storage products and services? According to IT industry research firm TheInfoPro (TIP), storage professionals in Fortune 1000 and midsize enterprises definitely have an answer to that question.

The firm has released its latest research on storage adoption plans, management strategies, and vendor performance. According to TIP’s managing director of storage research, Robert Stevenson, more than 250 Fortune 1000 and midsize end users say they are turning their attention to vendors that provide de-duplication, thin provisioning, virtualized provisioning, and capacity optimization technologies.

As a result, NetApp and Data Domain have seen the largest increase in mentions. Both vendors offer data de-duplication technologies and, coincidentally, each has pushed further into the de-dupe market over the past couple of days.

NetApp, which already offers de-duplication for primary storage via its Data ONTAP operating system, announced this week that de-dupe is now available for its family of NetApp Virtual Tape Library (VTL) systems. Also, Data Domain this week entered a partnership with F5 Networks to co-market a joint solution that automates the movement of static and archive data from primary storage to a secondary storage tier. The joint offering will combine the de-dupe capabilities of Data Domain’s disk-based storage systems with the F5 ARX series of file virtualization devices.

Stevenson says his “Wave 11 Time Series Storage Study” shows that end users are looking for SAN architectures that are more active in managing storage utilization. It makes sense, since the top technology in end users’ plans is once again data de-duplication, which has been dominating TIP’s list for more than a year.

Overall, Fortune 1000 users gave EMC the nod as the most exciting storage vendor followed by NetApp, Data Domain, IBM and 3PAR, while midsize users surveyed listed Data Domain, 3PAR, Compellent, EMC and HDS as their top five most exciting vendors.

Friday, October 24, 2008

Dell mulls FCoE support

After shelling out $1.4 billion to buy SAN maker EqualLogic last year, it is safe to say that Dell has a hefty stake in the success and growth of the iSCSI storage market. Given all of the recent noise in the industry around Fibre Channel over Ethernet (FCoE) being the preferred storage protocol of the future, Dell held a conference call with media and analysts this morning to offer its two cents on the topic.

What it boils down to is that Dell’s storage folks believe converged networks based on lossless Ethernet technology will float all storage boats. According to Eric Endebrock, senior manager for Dell’s storage product group, iSCSI is here to stay and Fibre Channel storage will bolster FCoE as a way to connect legacy FC systems over 10GbE networks (and eventually 100GbE networks).

"Dell is a big believer in unifying the fabric, but that is long-term," said Endebrock. "We are not looking to take our customers and forklift them away from the environments they have today, but they will soon have to start making some choices."

Dell’s official stance is that unified fabrics make the most sense financially for customers in the long-term.

"We are going to support 10GbE and Data Center Ethernet (also known as Converged Enhanced Ethernet) in our EqualLogic PS arrays. Today our PS arrays support iSCSI and will continue to support iSCSI in the future," said Endebrock. "We are not changing now, but protocol flexibility is going to be a key to our success. EqualLogic equals iSCSI is not the best way to think about our investment in that area."

In other words, the company is not ruling out support for FCoE in its Dell EqualLogic PS5000 Series iSCSI SAN arrays.

At last week’s Storage Networking World conference, I asked Dell’s director of enterprise storage, Praveen Asthana, for his take on FCoE and how it might fit into Dell’s product plans going forward.

Asthana said FCoE has already been successful in one respect. It has prompted customers to start thinking about the future. However, he maintained that FCoE requires a networking overhaul and iSCSI is still less expensive overall. He also referred to FCoE as "a stop on the way to iSCSI."

It is no surprise that Dell, like its competitors, is keeping its options open. In the end, customers will dictate which protocol dominates or whether FCoE and iSCSI will truly coexist in converged networks.