Friday, December 12, 2008
Survey says…
How do you stack up against your peers when it comes to storage planning for next year?
Enterprise users
Virtual tape library (VTL) and de-duplication vendor SEPATON recently conducted a survey of IT pros in U.S.-based corporations to get a feel for what challenges they will face around data protection, business objectives and technology requirements for enterprise data centers in 2009.
Of the 145 respondents – all from enterprise companies with at least 1,000 employees and a minimum of 50TB of primary data to protect – 52% say their data protection is insufficient, citing a “lack of budget to keep pace with technology” as the cause.
The research also reveals that backup is still the scourge of many enterprise organizations. Fifty-three percent of respondents need more than 20 hours to complete a full backup, while 37% say they need more than 24 hours to complete a full backup.
According to the SEPATON survey, users are planning to turn to new technologies such as data de-duplication in order to maintain service levels and regulatory compliance.
More than 90% of respondents are either using de-dupe now or want to use it. Of those who do not have de-dupe, 55% are allocating dollars for the technology in 2009.
In addition, a majority of the respondents are using physical tape, but fewer than 50% expect to be using tape one year from now as they increase their use of disk-based technologies like disk-to-disk, VTL appliances, or VTL gateways.
SMBs
Backup pains aren’t just a problem for big IT shops. Small and medium-sized businesses (SMBs) also rate backup as a top priority and an all-around pain in the neck, according to a recent study commissioned by Symantec and conducted by Rubicon Consulting.
Backup ranks as the second-highest computing priority for SMBs, after defense against viruses and other malware, according to responses from IT decision-makers at several hundred small businesses (each with fewer than 250 employees).
Ninety-two percent of companies polled have deployed some form of data backup technology, yet 50% of those respondents have lost data. Of the companies that lost data, roughly a third have lost sales, 20% have lost customers and 25% say the data loss caused severe disruptions to the company.
Some of the results were disconcerting, given how destructive data loss can be to SMBs. Approximately 25% of SMBs don’t back up their PCs at all and 13% do only informal backups where employees decide the frequency and which files are protected, according to Rubicon. Additionally, about 20% of SMBs conduct no server backups.
CIOs
Hewlett-Packard (HP) recently revealed the results of its own commissioned survey of chief information officers (CIOs) conducted by Hansa|GCR.
The Web survey of 600 technology decision-makers from medium-sized organizations to enterprises across the globe shows that 84% of tech organizations plan to “transform” their data centers in the next 12 months as they look to lower operating costs and reduce business risks through technology.
So-called "data center transformation" projects typically include consolidation, virtualization and business continuity initiatives.
According to the study, 31% of respondents say reducing cost is a top priority for ’09, while 29% plan to enhance data security. The decision-makers also say that technology needs – not business needs – are prompting these investments.
The survey also shows that 95% of organizations are implementing or planning for data center consolidation next year, while 93% and 91% are embarking on business continuity and virtualization projects, respectively.
The research may be sponsored by vendors, but, for the most part, it is in line with a lot of the third-party research covered on InfoStor.com. Stay tuned as we track these predictions over the next several months.
Friday, November 21, 2008
The clouds are forming
Actually, cloud platforms have been around for a while (see Amazon’s S3 service and products from companies such as Bycast, Nirvanix and ParaScale as examples), but now EMC has stepped into the fray with its Atmos platform – a move that, in the minds of many, has simultaneously given credibility to the technology and officially established the market.
Even some of the unflappable experts in the industry have been taken aback by the amount of buzz drummed up by the Atmos launch earlier this month. Personally, I have been inundated with media pitches and interview requests from every vendor that can in some way tie the term cloud computing to their technology. They’re coming out of the woodwork.
It raises the question: Are cloud infrastructures and the resulting cloud-based storage services all hype, or are we truly entering a new era?
Jeff Boles, a senior analyst and director of validation services at the Taneja Group research and consulting firm and an InfoStor contributor, is convinced that cloud storage will change IT strategies in many ways.
In a recent series of articles on the topic, he makes three pretty bold predictions about the impact cloud-based storage will have on the industry. He writes:
1.) Users will expect cheaper storage, as user self-service makes storage in the cloud less expensive to deliver.
2.) Users will expect more responsive and scalable storage, because hosted providers can respond and scale on demand.
3.) Users will expect to access and manage their data in ways that were not possible before.
It’s looking like 2009 is set up to be the year the technology begins to change user expectations, and it’s a safe bet that we’ll be tracking this segment of the storage market. It will be interesting to find out how many end users actually have their heads in the clouds.
Friday, November 7, 2008
Whatever happened to SMI-S?
Apparently, there have been some developments in the spec. The SNIA has made version 1.3 of the SMI-S available with support for some new features and functions. For those who don’t know, SMI-S was introduced years ago under the SNIA’s Storage Management Initiative (SMI) as an interoperable management interface for multi-vendor storage networking products.
The SMI-S describes available information from storage hardware and software to a WBEM client from an SMI-S-compliant CIM server using an object-oriented, XML-based interface. That information provides a foundation for identifying the attributes and properties of storage devices and facilitates discovery, security, virtualization, performance, and fault reporting.
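To make the WBEM/CIM interaction a bit more concrete, here is a rough sketch of the kind of CIM-XML request a WBEM client sends to a CIM server. This is an illustration only: the message ID, namespace and the `CIM_StorageVolume` class name are examples, and a real client library would handle the HTTP transport and response parsing as well.

```python
# Illustrative sketch of the CIM-XML message format that SMI-S builds on.
# The class name, namespace and message ID below are examples only.
import xml.etree.ElementTree as ET

def build_enumerate_instances_request(class_name, namespace=("root", "cimv2")):
    """Build a minimal CIM-XML EnumerateInstances request body.

    A WBEM client would POST this XML to a CIM server over HTTP; the
    server replies with XML describing the matching instances.
    """
    cim = ET.Element("CIM", CIMVERSION="2.0", DTDVERSION="2.0")
    msg = ET.SubElement(cim, "MESSAGE", ID="1001", PROTOCOLVERSION="1.0")
    req = ET.SubElement(msg, "SIMPLEREQ")
    call = ET.SubElement(req, "IMETHODCALL", NAME="EnumerateInstances")
    path = ET.SubElement(call, "LOCALNAMESPACEPATH")
    for part in namespace:
        ET.SubElement(path, "NAMESPACE", NAME=part)
    param = ET.SubElement(call, "IPARAMVALUE", NAME="ClassName")
    ET.SubElement(param, "CLASSNAME", NAME=class_name)
    return ET.tostring(cim, encoding="unicode")

# Ask a (hypothetical) CIM server for all storage volumes it knows about.
request_xml = build_enumerate_instances_request("CIM_StorageVolume")
print(request_xml)
```

The point is that everything – discovery, fault reporting, provisioning – rides on the same self-describing XML message structure, which is what lets one management application talk to many vendors' devices.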
The newly available version 1.3 features new support for more advanced storage architectures and functions like storage virtualization, VTLs, SAN security and RAID controller cards. The spec also now accommodates support for Fibre Channel switches, improving SMI-S solutions by speeding up discovery and enabling monitoring of larger device configurations.
That’s fine, but how much does it really matter? The SNIA and its participating vendors have made many claims since the inception of the SMI-S project. It was supposed to be a stepping-stone to interoperability. Some even claimed that users would make SMI-S a checklist item and would eventually require it as a feature of any storage device or product going forward.
I have to agree with the opinions of Jon Toigo, CEO and managing principal of Toigo Partners International. In a two-part column earlier this year for InfoStor, Toigo stated that SMI-S has not caught on in the mainstream. In fact, I think that’s an understatement.
Slowly but surely vendor noise around the spec has died down and now it seems to have completely disappeared. Mentions of SMI-S conformance have vanished from vendor PowerPoint presentations and I can’t remember the last time a storage exec highlighted SMI-S conformance as a product feature.
The SNIA has recently turned its attention to other projects like the Solid State Storage Initiative (SSSI), but SMI-S development continues to roll on. In conjunction with the release of version 1.3, the SNIA has also launched supporting conformance tests; the first vendors to pass the SNIA Conformance Testing Program (SNIA-CTP) provider suite for SMI-S version 1.3 storage management are EMC, HDS and HP.
According to Paul von Behren, chair of the Storage Management Initiative Governing Board, SMI-S now “contains sufficient breadth and depth of functionality such that the Storage industry can use the technology as the reference interface for managing enterprise storage solutions.”
That may be true, but after six years of development and investment, how has SMI-S changed multi-vendor storage management? Given that proprietary management software still rules the day, I’d say SMI-S has fallen short of delivering on its promise of being a panacea for open storage management.
Wednesday, October 29, 2008
Users get "excited" over storage vendors, technologies
Which vendors or technologies come to mind when you think about “exciting” storage products and services? According to IT industry research firm TheInfoPro (TIP), storage professionals in Fortune 1000 and midsize enterprises definitely have an answer to that question.
The firm’s latest research on storage adoption plans, management strategies, and vendor performance has been released and more than 250 Fortune 1000 and midsize end users say they are turning their attention to vendors that provide de-duplication, thin provisioning, virtualized provisioning, and capacity optimization technologies, according to TIP’s managing director of storage research, Robert Stevenson.
As a result, NetApp and Data Domain have seen the largest increase in mentions. Both vendors offer data de-duplication technologies and coincidentally each has pushed further into the de-dupe market over the past couple of days.
NetApp, which already offers de-duplication for primary storage via its Data ONTAP operating system, announced this week that de-dupe is now available for its family of NetApp Virtual Tape Library (VTL) systems. Also, Data Domain this week entered a partnership with F5 Networks to co-market a joint solution that automates the movement of static and archive data from primary storage to a secondary storage tier. The joint offering will combine the de-dupe capabilities of Data Domain’s disk-based storage systems with the F5 ARX series of file virtualization devices.
Stevenson says his “Wave 11 Time Series Storage Study” shows that end users are looking for
Overall, Fortune 1000 users gave
Friday, October 24, 2008
Dell mulls FCoE support
What it boils down to is that Dell’s storage folks believe converged networks based on lossless Ethernet technology will float all storage boats. According to Eric Endebrock, senior manager for Dell’s storage product group, iSCSI is here to stay and Fibre Channel storage will bolster FCoE as a way to connect legacy FC systems over 10GbE networks (and eventually 100GbE networks).
"Dell is a big believer in unifying the fabric, but that is long-term," said Endebrock. "We are not looking to take our customers and forklift them away from the environments they have today, but they will soon have to start making some choices."
Dell’s official stance is that unified fabrics make the most sense financially for customers in the long-term.
"We are going to support 10GbE and Data Center Ethernet (also known as Converged Enhanced Ethernet) in our EqualLogic PS arrays. Today our PS arrays support iSCSI and will continue to support iSCSI in the future," said Endebrock. "We are not changing now, but protocol flexibility is going to be a key to our success. EqualLogic equals iSCSI is not the best way to think about our investment in that area."
In other words, the company is not ruling out support for FCoE in its Dell EqualLogic PS5000 Series iSCSI SAN arrays.
At last week’s Storage Networking World conference, I asked Dell’s director of enterprise storage, Praveen Asthana, for his take on FCoE and how it might fit into Dell’s product plans going forward.
Asthana said FCoE has already been successful in one respect. It has prompted customers to start thinking about the future. However, he maintained that FCoE requires a networking overhaul and iSCSI is still less expensive overall. He also referred to FCoE as "a stop on the way to iSCSI."
It is no surprise that Dell, like its competitors, is keeping its options open. In the end, customers will ultimately dictate which protocol will dominate or whether FCoE and iSCSI will truly coexist in converged networks.
Tuesday, October 21, 2008
Time to optimize?
My colleague Dave Simpson recently listed his top 5 “hot” technologies from the floor of the Storage Networking World conference, giving the nod to Fibre Channel over Ethernet (FCoE) and server virtualization as the top-two tech topics. Both are consolidation plays with FCoE solving network and cabling complexity and virtual machines reducing server hardware requirements. However, I find his third pick – storage efficiency technologies – to be the most intriguing segment of the storage market.
Collectively referred to as storage optimization technologies, data de-duplication, compression and thin provisioning are moving up the stack from secondary storage applications to primary systems.
It is in this area that new companies like Ocarina Networks and Storwize may now get a seat at the table as customers look for ways to squeeze more out of their existing hardware investments, especially given the near term purchasing plans of IT buyers.
The latest wave of research from TheInfoPro (TIP) shows that technology refresh purchases during the middle of 2008 are offsetting typical year-end purchases and will lead to a significant drop in storage spending for the fourth quarter of 2008.
Ocarina’s products identify patterns and use a blend of compression and de-duplication, applying file-specific algorithms to optimize data and how it is stored. Storwize offers a high-performance compression appliance that drops into existing networks to shrink primary storage requirements. NetApp is also offering de-dupe for primary storage as a free option in its Data ONTAP operating system, and Riverbed is ramping up for next year’s debut of an appliance that also eliminates redundant data on primary storage systems.
These approaches seem to be worthy of a look. Data is not going to stop growing, but the amount users spend on storage capacity can be controlled using these types of technologies.
For a complete overview of the optimization market and the vendors involved, check out InfoStor contributor and Taneja Group analyst Eric Burgener’s article “It’s time for primary storage optimization.”
Thursday, October 16, 2008
SNW and unanswered questions
Sometimes I need to tie a string around my finger to remember to eat.
There are a lot of trade-offs at trade shows. Face-time with storage vendors and analysts often usurps time that could be spent talking to end users.
I did manage to have a few interesting conversations with some of the SNW attendees in-between running to the press room and vendor briefings. When asked about new technologies like solid-state disks (SSDs) and Fibre Channel over Ethernet (FCoE), 100% of those kind enough to indulge my curiosity responded in the same way: “I can see their benefits, but talk to me when they can help me with the problems I have now.”
So what are those problems? Backup windows, playing catch-up with unexpected capacity growth, grappling with ILM strategies and, the big one, figuring out how to support virtual servers as they multiply like rabbits.
The breakout sessions and tutorials at SNW are informative and covered all of the above issues to a degree, but they tend to be general in their scope. The users seemed to need much more information and advice specific to their challenges.
I drop a business card on users once I’m done picking their brains and invite them to e-mail or call me any time they can’t get straight answers to their questions in the hopes that I can help. Given the frequency with which the InfoStor team talks to storage vendors, it only makes sense to ask these companies real-world questions from real-world users.
Consider this a virtual business card and an invitation to do the same. If a vendor tap dances around your questions, feel free to shoot them off to me at kevink@pennwell.com. I’ll ask them for you and post the questions and responses in this blog.
The better the questions, the better the coverage. This way there are no trade-offs and I might even remember to eat breakfast.