December 24, 2009 -- Many of you are already following InfoStor on Twitter for up-to-the-minute breaking news and information about the data storage industry. What you may not know is that InfoStor now has a LinkedIn Group up and running where end users and experts can meet to discuss trends, technologies and the issues facing storage professionals.
Join the InfoStor Group on LinkedIn for daily news updates, lab reviews, guest blogs, and to add your two cents to the discussion threads.
Speaking of guest blogs, Editor-in-Chief Dave Simpson is now soliciting end user bloggers for InfoStor.com. More information on guest blogger opportunities can be found on LinkedIn. Feel free to drop Dave a line and make your voice heard!
Happy Holidays from the InfoStor team!
Wednesday, December 16, 2009
Dell, KOM Networks are turning old storage into food
December 16, 2009 -- KOM Networks is teaming up with Dell and recycling partner the Technology Conservation Group (TCG) to turn optical jukeboxes and other storage gear into food for needy children.
The companies have announced the "Junk-A-Juke" program, which provides free archive and storage systems in exchange for donated end-of-life optical jukeboxes or legacy storage devices.
Under the program, the vendors will collect and recycle obsolete and legacy storage equipment and donate all the money generated from raw materials to Feed The Children.
In exchange for the older equipment, KOM offers a new Dell Powered KOMpliance Archive (based on the Dell PowerVault NX3000 NAS) of equal capacity, an enterprise-class server and archive solution, free of charge with a three-year maintenance agreement.
The goal, according to KOM, is to collect and recycle enough hardware to feed one million children.
TCG will pick up and track each piece of equipment through destruction to ensure that nothing ends up in a landfill. TCG is an ISO-registered recycler of electronic scrap and a member of the National Association for Information Destruction (NAID), a trade association that sets standards and ethics for the information destruction industry to ensure fully compliant destruction of functional drives.
Thursday, December 3, 2009
Gartner: External disk storage market recovering
December 3, 2009 -- Storage vendors have something to be thankful for as yet another indicator that the storage market is rebounding from the economic downturn has emerged. Gartner's latest research shows there are signs of recovery in the external controller-based disk storage market.
According to Gartner, worldwide external controller-based (ECB) disk storage revenue totaled more than $3.9 billion in the third quarter of 2009, a 7.3% decline from the same period last year.
According to Donna Taylor, principal research analyst for Gartner's global Storage Quarterly Statistics program, the economic downturn's impact on the disk array storage market is slowly subsiding.
She says, "The year-over-year decline of 7.3% indicates that the economic downturn's impact on the disk array storage market is loosening its grip. The prior two quarters in 2009 showed declines in the double digits. This is good news for storage vendors, because it's the first sign of a light at the end of the tunnel."
EMC still leads the pack with 26.7% revenue market share. IBM takes second place followed by HP, Hitachi and Dell.
For the full list of market leaders, check out Gartner's website.
IDC issued its 2Q numbers in September, which revealed similar signs of recovery in both the storage hardware and software markets.
IDC's Worldwide Quarterly Storage Software Tracker put second-quarter 2009 (2Q09) revenues at $2.8 billion, a 9.8% decline from the same quarter one year ago.
On the hardware front, worldwide external disk storage systems factory revenues posted a year-over-year decline of 18.3% in 2Q09, totaling $4.1 billion, according to the IDC Worldwide Disk Storage Systems Quarterly Tracker.
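As a quick sanity check, the year-over-year percentages quoted by Gartner and IDC follow directly from the revenue figures. The short Python sketch below back-calculates the implied prior-year revenue from the quoted decline; the prior-year figure is derived for illustration, not taken from either firm's report:

```python
def yoy_change(current, prior):
    """Year-over-year percentage change between two revenue figures."""
    return (current - prior) / prior * 100.0

# Gartner: 3Q09 ECB disk storage revenue of ~$3.9B, down 7.3% YoY.
# The implied 3Q08 figure is back-calculated for illustration only.
q3_2008 = 3.9 / (1 - 0.073)
print(f"Implied 3Q08 revenue: ${q3_2008:.2f}B")          # ~$4.21B
print(f"Check: {yoy_change(3.9, q3_2008):.1f}% change")  # -7.3%
```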
In addition, a recent survey of 47 enterprise VARs conducted by Robert W. Baird & Co. showed that VARs are upbeat about fourth quarter prospects. See Dave Simpson's blog "VARs upbeat about Q4."
Wednesday, November 18, 2009
What are the market drivers for virtualization and cloud computing?
November 18, 2009 -- Everyone is on board the virtualization train and it seems IT vendors are slapping the "cloud" tag on every storage platform and service they come up with, but what drives the end user towards virtualization and cloud storage?
Recent research from IT automation specialist Shavlik Technologies outlines the market drivers behind virtualization and cloud computing initiatives. Shavlik conducted a survey of more than 290 IT pros and the results reveal that data, server and licensing consolidation and disaster recovery functionality are the leading drivers behind new investments in virtualization technology.
According to Shavlik, an overwhelming 93% of IT organizations are using virtual machine technology, and 75% of those organizations run more than half of their production servers as virtual machines.
Fifty-three percent of survey respondents say server and licensing consolidation is the driving force behind their virtualization deployments, while backup ranked as the second major driver, reported by 52% of those polled.
The principal lure of cloud computing seems to be TCO. The survey revealed that the reduced IT costs associated with cloud computing are the main reason IT managers are turning to the cloud for the delivery of IT services.
Fifty-eight percent of survey respondents are evaluating cloud computing for adoption, according to the Shavlik research.
Tuesday, November 3, 2009
EMC, Cisco, VMware cause waves with cloud coalition
November 3, 2009 -- Competitors are already calling the EMC-Cisco-VMware Virtual Computing Environment coalition and its Vblock compute systems a veiled approach to vendor lock-in, but the trio begs to differ.
EMC, Cisco and VMware caused a commotion when the companies announced the Virtual Computing Environment (VCE) coalition and a new set of systems that operate as building blocks for virtualized cloud computing infrastructures.
The companies have been collaborating to create a virtualized, cloud infrastructure platform based on their respective technologies. The result is a series of integrated "Vblock Infrastructure" packages comprised of storage and networking systems and server and storage virtualization software.
We've been fielding comments from across the industry and it didn't take long for the competition to react.
EMC rival NetApp fired a shot at the VCE by classifying Vblocks as nothing more than a reference architecture rather than a full stack of server, network, storage, and virtualization technologies.
Jay Kidd, NetApp's chief marketing officer and vice president of the Storage Solutions group, responded:
"We view today's announcement as a clever attempt by Cisco to sell UCS servers into EMC's install base. We also feel that this announcement further validates the trend that we're seeing as more and more enterprises move to a virtualized dynamic data center infrastructure. NetApp has been at the forefront in helping enterprises realize this shift through our close partnerships with Cisco and VMware. With VMware we have virtualized large data centers for customers like T-Systems, BT, and Sprint, and have expanded on these architectures with several integration partners to include Cisco UCS servers. Open partnerships, not closed coalitions, are what customers need and want to make the transformation to a virtualized data center."
More of Jay's thoughts on the VCE and Vblocks can be found in his latest blog post.
Dell, a major EMC partner, calls the VCE/Vblock news an attempt to lock users into proprietary technologies.
Dell's vice president of enterprise storage and networking, Praveen Asthana, says, "The VMware, Cisco and EMC joint venture assumes that customers are looking for closed technology architectures that lock them into a restricted vendor stack. This proprietary implementation of industry standard architectures is a throwback to the 1990's and creates complete vendor lock-in. As the leading provider of Cloud infrastructure, Dell knows from its customers' insights that cloud compute workloads are best served by open, standards-based solutions – not by repackaging high-cost infrastructure as a cloud solution."
The coalition members beg to differ. They are in lockstep with a message of openness.
VMware's president and CEO, Paul Maritz, says Cisco, EMC and VMware all remain committed to working in an "open way."
"We maintain our commitment to working in an open way with existing partners by making our technologies available to other parties who want to put together solutions," says Maritz. "There is no need or reason for our relationships to change. At the same time, we are adding to the options for our customers and not removing them."
EMC's CEO Joe Tucci claims the VCE and the Vblock systems offer customers more "choice."
"On the choice side, we know this is an open world and we are committed to openness," he says. "We are still offering an a la carte menu. For example, you can take EMC storage and choose another server. We are not removing choice."
However, open does not mean the ability to use just any technology to create a Vblock.
"We are not substituting on the Vblock side. If you want to use somebody else's storage you have to buy from the [a la carte] side of the menu, but you're not buying a Vblock," says Tucci. "That's the distinction. You give up certain things if you don't order from the fixed menu."
Labels: Cisco, Cloud Computing, EMC, VCE, Virtual Computing Environment coalition, VMware
Friday, October 30, 2009
Storage pro on the lam
October 30, 2009 -- There is an IT operations manager/storage professional on the run this Halloween and he has 30 hours to get as far away from Manchester University as possible.
Simon Painter, an IT engineer for storage vendor BlueArc, is participating in a "jailbreak" this weekend in an effort to raise funds for KidsCan, a pediatric cancer treatment research organization based in the UK.
Painter and his friend David Wood are competing with 100 others in a race to get as far away from the starting point as they can without spending any money. They may raise some eyebrows, as they will be donning orange prison-like jumpsuits and flip-flops for the trip.
Aside from travel documents and an emergency credit card, the duo will be toting a mobile phone with them as they beg, borrow and "blag" their way across Europe. The team will use the phone to update their location on Twitter and Facebook.
Painter hopes to make his way to Zimbabwe by the end of the contest. Participants have reportedly made it as far away as New Zealand and Australia in past races.
You can sponsor Painter's "escape," track his progress and see a live map of his location on the Jailbreak for KidsCan website.
Friday, October 23, 2009
One vendor too many?
October 23, 2009 -- The end users I speak with approach the buying process in different ways. Some opt for a single vendor – the so-called "one throat to choke" strategy. Others buy storage from multiple vendors to keep everyone honest. Most feel the multi-vendor approach is the way to go, but it's a slippery slope. How many vendors does it take before the cons outweigh the pros?
In a new report, "How Efficient Is Your Enterprise Storage Environment?," Forrester Research senior analyst Andrew Reichman outlines some best practices for "multisourcing" along with ways to measure key performance indicators (KPIs) for storage efficiency.
Reichman believes multisourcing storage can give customers the upper hand in negotiations and reduce vendor lock-in, but there is a risk to having too many vendors in the mix.
He says having too many vendors on hand can dramatically increase the cost of management and reduce overall efficiency. For instance, managing different storage platforms can require different skill sets, and more platforms in the data center mean higher management and training costs.
He also says that negotiating a better price is a balancing act. Reichman writes, "Vendors often give deeper discounts to those who buy more of their gear. So, while negotiation power can be improved with competition, actually buying from many vendors can limit volumes and therefore discounts over time. Bids should be competitive, and exit strategies considered, but it makes sense from a pricing perspective to pool purchases with a smaller number of vendors once the negotiations are done."
No two environments are the same, but a good rule of thumb is to have no more than three different types of storage on the floor to keep costs under control and minimize complexity.
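Reichman's trade-off between deeper volume discounts and per-platform management overhead can be sketched as a toy cost model. All numbers below (price per TB, discount curve, management cost per platform) are invented for illustration and do not come from the Forrester report:

```python
def storage_tco(capacity_tb, num_vendors,
                price_per_tb=500.0,
                discount_rate=0.05,
                mgmt_cost_per_platform=40_000.0):
    """Toy model: pooling purchases with fewer vendors earns a deeper
    volume discount, but each extra platform adds a fixed management
    and training cost. All inputs are illustrative."""
    # Discount deepens as the volume per vendor grows, capped at 25%.
    discount = min(0.25, discount_rate * (capacity_tb / num_vendors) / 100)
    hardware = capacity_tb * price_per_tb * (1 - discount)
    management = num_vendors * mgmt_cost_per_platform
    return hardware + management

for n in (1, 2, 3, 5, 8):
    print(f"{n} vendor(s): ${storage_tco(1000, n):,.0f}")
```

With these particular inputs each added platform raises the total; a real model would also credit multisourcing's negotiation leverage, which this sketch deliberately omits. The point is the shape of the trade-off, not the specific numbers.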
Multisourcing is one piece of the puzzle. Forrester's storage analysts also offer best practices for measuring capacity utilization and allocation, tier ratios, and staffing.
The full report can be found on Forrester's website.
Friday, October 16, 2009
Brocade, Cisco eye mobile services market
October 16, 2009 -- Cisco jumped into the Fibre Channel market with the MDS family. Brocade jumped into the Ethernet market with the acquisition of Foundry Networks. Both companies are jockeying for position in the nascent converged networking (CEE and FCoE) market. And, it appears, the companies are escalating the fight in yet another area – wireless networking and mobile computing.
Brocade and Cisco each added to their respective mobile arsenals this week. Brocade took the partnership route, while Cisco opened up its wallet.
Cisco announced a deal to acquire Starent Networks, a supplier of IP-based mobile infrastructure solutions for mobile and converged carriers. Cisco paid roughly $2.9 billion for Starent and the acquisition is expected to close during the first half of calendar year 2010.
Starent's stock-in-trade is providing multimedia intelligence, core network functions and services to manage access from any 2.5G, 3G, and 4G radio network to a mobile operator's packet core network.
A quote from Cisco's official announcement:
"Cisco and Starent Networks share a common vision and bring complementary technologies designed to accelerate the transition to the Mobile Internet, where the network is the platform for Service Providers to launch, deliver and monetize the next generation of mobile multimedia applications and services," said Pankaj Patel, senior vice president/general manager for Cisco's Service Provider Business.
Cisco says service providers have been actively investing in the market as global mobile data traffic is expected to more than double every year through 2013, according to the Cisco Visual Networking Index.
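The Visual Networking Index claim compounds quickly: "more than double every year through 2013" implies at least a 16x increase over a 2009 baseline. A short loop makes the arithmetic explicit (the 2x factor is the lower bound implied by "more than double"):

```python
traffic = 1.0  # relative mobile data traffic, 2009 baseline
for year in range(2010, 2014):
    traffic *= 2  # lower bound: "more than double every year"
    print(f"{year}: at least {traffic:.0f}x 2009 traffic")
```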
Brocade has noticed the market potential as well. The company inked an OEM deal with the Enterprise Mobility Solutions business unit of Motorola this week to collaborate on wireless LAN (WLAN), voice-over-WLAN, mobile unified communications/fixed mobile convergence (FMC), cloud computing and wireless broadband technologies.
The companies established an OEM reseller agreement through which Brocade will rebrand and resell a number of Motorola's enterprise wireless LAN solutions and resell Motorola wireless security products as an extension of its own IP/Ethernet product portfolio.
According to the companies, "this collaboration also lays the foundation for a new category of wireless and mobility services delivered by service providers using cloud enabled infrastructure solutions from Motorola and Brocade."
The companies plan to use cloud computing architectures to enable voice, video and data applications to work over 3G, 4G or Wi-Fi networks.
Friday, October 9, 2009
The future of storage is cloudy
October 9, 2009 -- Cloud computing and cloud storage are here to stay. The number of vendors with cloud offerings continues to multiply and I don't envy the end user trying to evaluate vendors and services.
Just this week we have seen a big push in the cloud storage market. IBM officially entered the storage cloud space with the launch of the IBM Smart Business Storage Cloud, the IBM Information Archive and new consulting services.
The IBM Smart Business Storage Cloud is a private cloud based on low-cost components with support for multiple petabytes of capacity, billions of files and scale-out performance. Big Blue's storage cloud is built on the IBM General Parallel File System and on storage and server technologies such as XIV and BladeCenter.
Earlier in the week, Symantec released Veritas FileStore, a new clustered file system aimed at enterprise customers looking to build public or private storage clouds. FileStore is comprised of software-based appliances that run on commodity x86 server nodes and talk to clients using CIFS, FTP, HTTP or NFS. On the back-end, the FileStore nodes aggregate existing Fibre Channel and iSCSI SANs and JBODs as a shared storage pool. A FileStore system can scale up to 16 nodes and 2PB of total capacity.
Seagate also chimed in. Seagate's storage software arm, i365, announced a cloud storage-based replication service for medium-sized businesses as part of its push into the cloud storage space.
Terry Cunningham, i365's senior vice president and general manager, told me i365 is changing the way it approaches the cloud.
"Our offerings have been rip-and-replace in the past, and that is an unreasonable request for customers. Now we're agnostic and work with legacy backup packages," he said. "We can now get to the cloud without gutting the infrastructure."
All of these storage clouds are here or on the horizon, and there are a few questions customers should be asking as they try to pick a vendor. What types of metadata are required to ensure portability, compliance and security in the cloud? Can data be provided back to users in a format that can be ingested by a new service provider?
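To make the metadata question concrete, here is a minimal, hypothetical sketch of a provider-neutral metadata envelope a customer might require so that objects can be exported from one cloud provider and re-ingested by another. The field names and schema are invented for illustration; they are not from any vendor or SNIA specification:

```python
import hashlib
import json

def wrap_object(payload: bytes, name: str) -> dict:
    """Attach provider-neutral metadata to stored data (illustrative schema)."""
    return {
        "name": name,
        "size": len(payload),
        # Content hash lets the next provider verify integrity on re-ingest.
        "sha256": hashlib.sha256(payload).hexdigest(),
        "content_type": "application/octet-stream",
        # Compliance hints that must travel with the data, not stay with the provider.
        "retention": {"policy": "7y", "legal_hold": False},
        "provenance": {"created_by": "app-01", "schema": "example/v1"},
    }

record = wrap_object(b"quarterly-report", "reports/q3-2009.pdf")
print(json.dumps(record, indent=2))
```

Whatever the exact schema, the test of portability is whether a second provider can validate and ingest the envelope without any knowledge of the first provider's internal formats.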
The storage industry is aware of some of the cloud confusion out there. It's a concern from both a perception and a technical standpoint.
"Cloud storage is not a fad like the one we may have witnessed with xSPs and storage service providers back in the year 2000 timeframe," says SNIA chairman Wayne Adams. "Cloud storage is here to stay and we need to develop common terminology and standards for building cloud infrastructures."
Stay tuned for more information about what the SNIA has in store for the cloud in our Cloud Storage topic center. The cloud news is sure to be fast and furious from next week's Storage Networking World conference.
Friday, October 2, 2009
SMB DR preparedness is not what it seems
October 2, 2009 -- Perception is not reality when it comes to disaster recovery preparedness in the small and medium-sized business (SMB) world.
In a former life, I stocked shelves and handled inventory for a large, upscale retail outfit. Being the geek that I am, I took notice of the IT setup in the store, including a small tape drive buried under boxes and irregular garments. It was painfully obvious that it was not being used properly. In fact, I doubted whether any of the staff knew what it was.
Eventually I asked a manager about it and was informed that yes, it was part of the store manager's job to perform daily tape backups of the store's transaction and sales information.
However, in my tenure as stock-boy extraordinaire that tape drive was never used. Not once. It boggled my mind. But it seems that some things never change.
The findings of Symantec's "2009 SMB Disaster Preparedness Survey" reveal that SMBs are confident in their DR plans. Eighty-two percent of respondents say they are somewhat/very satisfied with their disaster plans, and 84% say they feel somewhat/very protected in case of a disaster.
The reality of the situation, despite how confident they feel, is grim. According to the survey, SMBs do not back up their computer systems as frequently as they should: only 23% back up their computer systems daily, and less than half back up weekly.
The average SMB has experienced three outages within the past 12 months, with the leading causes being virus or hacker attacks, power outages or natural disasters. The approximate impact on the bottom line per outage is $15,000 per day. That's real money for small businesses.
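Back of the envelope, the survey's figures translate into an annual exposure number an SMB can plug its own assumptions into. The outage count and duration below are illustrative, not survey data.

```python
def annual_outage_cost(outages_per_year, avg_days_down, cost_per_day=15_000):
    """Rough annual downtime cost, using the survey's ~$15,000/day figure.
    outages_per_year and avg_days_down are assumptions you supply."""
    return outages_per_year * avg_days_down * cost_per_day

# The survey's average of three outages a year, assuming each lasts one day:
print(annual_outage_cost(3, 1))  # 45000
```

Even at a single day per outage, that is $45,000 a year, which puts the cost of a properly used tape drive (or an online backup service) in perspective.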
The large retail chain I worked for is still in business. They continue to thrive. I can't speak to whether they have experienced outages or whether downtime has cost them cash or customers.
Perhaps the perception-reality gap is more evidence that consolidating and centralizing the backup process makes sense. Having a tape drive at a remote location doesn't ensure your data will be protected when an outage hits.
Symantec makes several useful recommendations to SMB customers in its report, which can be found on the company's website.
Keep up on the latest DR and business continuity news in our DR topic center.
Thursday, September 24, 2009
Survey: SMBs keeping data in-house
September 24, 2009 -- Some interesting tidbits from the small and medium-sized business (SMB) world. It would make sense that SMBs are a prime target for cloud computing services – storage included. But a new survey reveals that while SMBs are using the cloud in some way, most plan to keep their data in-house.
Spiceworks, a company that targets SMB users with free, ad-supported network monitoring and management software, recently released a market research report on current technology purchasing, usage and staffing trends among SMBs across the globe.
The company polled 1,130 SMB IT managers and found that while 57% use one or more cloud computing services, 75% plan to store data on premises.
In fact, most SMBs are turning to the cloud for security and e-mail services. Among the aforementioned 57%, the three most popular cloud computing services in use or on the purchase list include anti-spam (43%), hosted email (25%), and online backup (20%), according to the report.
On the storage front, 25% of respondents are planning backup and recovery purchases within the next six months. Of these, 75% plan to store data on premises and 25% plan to utilize cloud-based storage. In addition, 42% of SMB data will be stored on NAS or SAN devices, with 38% in DAS, 7% offsite and 13% on tape or other media.
The full Spiceworks report can be downloaded from the company's website.
Keep track of the latest cloud storage news in InfoStor's Cloud Storage Topic Center. There you will find a new analysis piece by Evaluator Group managing partner Russ Fellows, in which he outlines the technology hurdles that need to be resolved before cloud computing and cloud storage become a common part of the IT landscape.
Friday, September 18, 2009
Come together? Not now...in IT
September 18, 2009 -- Apologies for the blog title, but the recent tsunami of Beatles media hype has Abbey Road rattling around in my head. All puns aside, TheInfoPro (TIP) just released some interesting research regarding the organizational dynamics in the data center. The most interesting bit may be that most users believe there is an upside to maintaining separate data and networking management groups.
It's interesting to me because experts have predicted that the different management groups within IT will eventually merge as the lines between the server, storage and network domains blur. But as technologies such as server virtualization and unified networking emerge, end users seem to be taking the opposite view.
In its first "Organizational Dynamics Study," TIP looked at the structural issues facing IT organizations. According to the firm, "the study gives insight into the impact that technology and financial considerations will have on the evolution of storage organizations and shows ranges and optimal cost levels of support staffing."
Some snapshots of the research reveal:
• 54% of study respondents see a significant or major impact on addressing storage needs because of server virtualization.
• 78% of respondents said they do not expect storage and networking teams to combine.
• 77% said they do not have a separate virtualization group.
• 60% of respondents said their organization sees major operational benefit in having a separate data management group.
Myron Kerstetter, TheInfoPro's Managing Director of Organizational Studies says: "Looking toward the future, we found that important shifts in the organizational structure will occur in the next three to five years, particularly in the larger storage groups. But despite the hype, many organizations did not expect the creation of formal virtualization teams or the merging of storage and networking groups."
We'd love to hear your take on the topic. Shoot us an e-mail with your opinions.
More information on TIP's latest research can be found on their website.
Labels: TheInfoPro, TIP, unified networking, Virtualization
Friday, September 11, 2009
IDC: Sweet spots appearing in storage software, hardware markets
September 11, 2009 -- Growth in the storage software market is still on the decline, as are factory revenues for the worldwide external disk storage systems market, but there are some bright spots in both sectors, according to the latest research from International Data Corp. (IDC).
IDC's Worldwide Quarterly Storage Software Tracker shows another year-over-year decline in the second quarter of 2009 (2Q09), with revenues of $2.8 billion, down 9.8% from the same quarter one year ago. But Michael Margossian, research analyst for storage software at IDC, says the storage software market is showing signs of recovery, with positive growth over the first quarter of this year.
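For context, a year-over-year figure lets you back out the prior-year quarter's revenue. This is a rough calculation from the reported numbers, not an IDC figure.

```python
def implied_prior_year(current_revenue, yoy_growth):
    """Back out the year-ago figure from current revenue and a
    year-over-year growth rate (negative for a decline)."""
    return current_revenue / (1 + yoy_growth)

# 2Q09 storage software revenue of $2.8B at -9.8% year over year
# implies roughly $3.1B in 2Q08.
prior = implied_prior_year(2.8, -0.098)
print(round(prior, 2))  # 3.1
```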
In addition, the replication market grew 5% compared to 1Q09 led by NetApp, which has been refocusing its efforts and grew 20% from the previous quarter, according to Margossian.
IDC puts EMC atop the overall market with 22.4% revenue share in 2Q09, followed by Symantec, IBM, NetApp and CA.
On the hardware front, revenues continue to slip, with worldwide external disk storage systems factory revenues posting a year-over-year decline of 18.3% in 2Q09, totaling $4.1 billion, according to the IDC Worldwide Disk Storage Systems Quarterly Tracker.
In a statement from IDC, Liz Conner, research analyst for Storage Systems, said, "The enterprise storage systems market continued to feel the impact of current economic conditions, posting its third straight year-over-year decline. However, certain sweet spots in the market continue to thrive. iSCSI SAN and FC SAN both showed strong year-over-year growth of 57.2% and 66.8%, respectively, in the entry level price bands as customers continue to demand enterprise level network storage at a more economically friendly price point. Similarly, midrange NAS enjoyed solid year-over-year growth of 20.7% as file-level data generation continues to be a hot topic for many customers."
IDC's data shows that EMC claimed the number one spot in the external disk storage systems market with 21.5% revenue share in the second quarter, followed by IBM and HP with Dell and NetApp in a statistical dead heat for the number four position.
EMC also led the way in market share in the total network disk storage market (NAS Combined with Open / iSCSI SAN) with 26%. The network disk storage market declined 15.3% year over year in the second quarter to more than $3.2 billion in revenues.
In the total worldwide disk storage systems market, IBM and HP finished the second quarter in a statistical tie with 17.3% each followed by EMC with 15.7% market share.
Tuesday, September 1, 2009
EMC scoops up FastScale Technology, Kazeon
September 1, 2009 -- The EMC acquisition machine has been busy this week. The company has inked two acquisition deals in as many days, snapping up FastScale Technology and Kazeon Systems.
Today EMC announced it has signed a definitive agreement to acquire privately held eDiscovery software vendor Kazeon Systems. EMC plans to integrate Kazeon's technology into the EMC SourceOne product family. The transaction is expected to close in Q3 2009.
The full Kazeon announcement can be found here.
The Kazeon deal came fresh on the heels of yesterday's acquisition of FastScale Technology. EMC's press release states:
"Designed from the ground up to accelerate the journey from physical to virtual to private cloud, with the addition of FastScale, the EMC Ionix portfolio will simplify end-to-end management and maximize the performance, density and efficiency of applications and software deployed on unified infrastructures."
EMC also beefed up the Ionix software portfolio via an extended partnership with VMware. EMC and VMware announced a new reseller agreement whereby EMC is now reselling VMware's vCenter AppSpeed as part of the EMC Ionix portfolio.
Monday, August 31, 2009
EMC, VMware tighten software ties
August 31, 2009 -- EMC and VMware kicked off this week's VMworld conference with news of an expanded business and technology alliance that will have EMC reselling VMware's vCenter AppSpeed software and will tighten integration between VMware's vCenter product family and EMC's Ionix IT management software.
The pair is teaming up to nudge customers down the path toward migrating tier-one applications to VMware's vSphere 4 cloud operating system by using their respective software products to streamline configuration and compliance management and automate IT processes.
EMC will sell VMware's vCenter AppSpeed – a tool for monitoring application performance and dependencies across different tiers of virtual and physical infrastructures – as part of the EMC Ionix IT Management portfolio.
EMC reorganized its IT management software family last month by bringing all of its Smarts, nLayers, Voyence, Infra, ControlCenter and Configuresoft technologies together under the Ionix brand.
The Ionix software family consists of four main product sets. The first, EMC Ionix for Service Discovery and Mapping, identifies applications and their physical and virtual dependencies in support of Configuration Management Database (CMDB)/Configuration Management System (CMS) population, change management, and application troubleshooting. It also maps servers and applications prior to data center moves, consolidations, and virtualization migrations.
The second, Ionix for IT Operations Intelligence, provides automated root-cause and impact analysis and monitors services across both physical and virtual environments. The software allows users to view the relationships between virtual machines (VMs), the VMware ESX Servers they reside on, and the network.
The third is Ionix for Data Center Automation and Compliance. Aimed at compliance management across servers, storage, application dependencies and networks, Ionix for Data Center Automation and Compliance tracks configuration compliance against regulatory, best practices, and internal governance policies, including VMware vSphere 4 deployment guidelines and helps users remediate compliance violations across physical and virtual infrastructures.
The fourth and final product set, Ionix for Service Management, allows customers to deploy IT Infrastructure Library (ITIL) service management. Customers can use Ionix for Service Management to build a federated CMDB that is auto-populated with physical and virtual dependencies.
The companies also announced new physical-to-virtual migration services offerings. The services will make use of VMware Capacity Planner, EMC Ionix Application Discovery Manager, and VMware vCenter AppSpeed to speed the vSphere migration process.
The specific offerings related to VMware vCenter AppSpeed include: Enhanced Candidate Selection for VMware vCenter AppSpeed, VMware vCenter AppSpeed Jumpstart, and VMware Infrastructure Performance Health Check.
Tuesday, August 18, 2009
Xiotech wants to buy your old disks
August 18, 2009 -- What do vendors have to do to put you in a new array today? How about buying back capacity? That's Xiotech's plan under its new "Cash for Disk Clunkers" program.
The company is riding the coattails of the Obama administration's Cash for Clunkers auto industry stimulus program by offering cash incentives for old storage technology in favor of storage systems based on Xiotech's Intelligent Storage Element (ISE) architecture.
As part of the new program, which was announced last week and runs through September, customers can trade in "old, inefficient disk drives" for $1,000 per terabyte cash back toward the purchase of an equal amount of capacity on a Xiotech Emprise 7000, Emprise 7000 Edge, or Emprise 5000 system, or VM Storage Solution.
There is no limit to the amount of capacity organizations can trade in, and consequently no limit to the money they can save on their new storage systems, according to Xiotech's Cash for Disk Clunkers webpage.
You can find more info on Xiotech's website and read the Enterprise Strategy Group's (ESG) Lab Review on our website.
Tuesday, August 4, 2009
Where are we with FCoE?
August 4, 2009 -- The Fibre Channel over Ethernet (FCoE) standard is fully baked, but how soon will customers deploy the technology beyond the testing phase?
The FCoE standard was finalized at the beginning of June. The FC-BB-5 working group of the T11 Technical Committee completed its work and unanimously approved a final standard for FCoE. As a result, the T11 Technical Committee plenary session approved forwarding the FC-BB-5 standard to INCITS for further processing as an ANSI standard.
According to the Fibre Channel Industry Association (FCIA), this milestone has a few implications:
1.) BB-5 Frame Format and Addressing schema are the heart and soul of FCoE
2.) Advances FCoE industry with no new spins required for FCoE silicon
3.) FCoE products in OEM qualification today are based on the completed standard
4.) Users benefit from fully baked standardized FCoE solutions from day one
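To make the frame-format point concrete, here is a rough Python sketch of FC-BB-5 encapsulation: the complete Fibre Channel frame rides unchanged inside an Ethernet frame carrying the FCoE Ethertype (0x8906), behind a 14-byte FCoE header that ends in a start-of-frame (SOF) code, followed by an end-of-frame (EOF) trailer. The MAC addresses and FC payload below are placeholders, and a real implementation would also handle the Ethernet FCS and the baby-jumbo frame sizes FCoE requires.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE

def fcoe_frame(dst_mac, src_mac, fc_frame, sof=0x2E, eof=0x41):
    """Sketch of FC-BB-5 encapsulation. Layout: Ethernet header,
    14-byte FCoE header (version nibble plus reserved bits, ending
    in the SOF byte), the untouched FC frame, then the EOF byte and
    reserved padding. The Ethernet FCS is left to the NIC."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + bytes([sof])  # version/reserved, then SOF (0x2E = SOFi3)
    trailer = bytes([eof]) + bytes(3)       # EOF (0x41 = EOFn), then reserved padding
    return eth_header + fcoe_header + fc_frame + trailer

# Placeholder FC frame: a real one starts with a 24-byte FC header.
fc = bytes(24) + b"payload"
frame = fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01",  # destination MAC (illustrative)
                   b"\x00\x1b\x21\x00\x00\x02",  # source MAC (illustrative)
                   fc)
```

The takeaway, and the reason no new silicon spins were needed, is that the FC frame itself is carried byte-for-byte; only the Ethernet wrapper around it is new.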
Industry research firm TheInfoPro (TIP) believes FCoE is still in the very early stage of development within storage organizations at large enterprises.
In a recent paper, "Fibre Channel Over Ethernet (FCoE): Storage Pro Perspective," Rob Stevenson, TIP's managing director of storage, and Anders Lofgren, chief research officer write:
"It is clear that FCoE and server virtualization are becoming more tightly linked and end users are waiting for 10 Gigabit Ethernet to be fully deployed throughout the data center before moving forward with FCoE adoption, which we expect in two to three years."
TIP also believes FCoE adoption should start to accelerate following the implementation of 8Gbps Fibre Channel and 10 Gigabit Ethernet among storage organizations.
They continue: "All of the storage teams we speak with indicate that FCoE will be the dominant storage transport for the future, but the roles of host connectivity, FCoE initiator certification and topology management are still being debated."
The vendors are arming themselves to the teeth with FCoE-capable networking gear, most notably IBM. Big Blue beefed up its Fibre Channel, FCoE and Ethernet networking portfolio through a series of expanded OEM partnerships with Brocade, Cisco, and Juniper Networks just last month.
According to Cisco's latest data on FCoE, customers are "rapidly adopting" Cisco's FCoE-capable Nexus 5000 series switches and more than one-third of those customers are planning to implement FCoE.
Cisco claims that it "leads the FCoE market" based on shipments. The company now has more than 900 Nexus 5000 customers and has shipped more than 100,000 ports. The Nexus 5020, which is FCoE-capable, has been shipping since June of 2008.
Cisco says 35% of Nexus 5000 customers purchased systems with FCoE enabled, representing industries including government, information technology, healthcare, manufacturing, media, financial services, telcos, and service providers.
Brocade, another big player in the FCoE space, is taking a realistic view of the subject. In an interesting talk from Tech Day 2009, Brocade's CTO, Dave Stevens, said today's FCoE technologies are changing the landscape of the data center, but only in the first five feet of the network infrastructure. (See Brocade's video: FCoE Reality - Brocade CTO Dave Stevens from Tech Day 2009)
Stay up to date on all of the latest FCoE news by visiting our FCoE topic center page.
The FCoE standard was finalized at the beginning of June. The FC-BB-5 working group of the T11 Technical Committee completed its work and unanimously approved a final standard for FCoE. As a result, the T11 Technical Committee plenary session approved forwarding the FC-BB-5 standard to INCITS for further processing as an ANSI standard.
According to the Fibre Channel Industry Association (FCIA), this milestone has a few implications:
1.) BB-5 Frame Format and Addressing schema are the heart and soul of FCoE
2.) Advances FCoE industry with no new spins required for FCoE silicon
3.) FCoE products in OEM qualification today are based on the completed standard
4.) Users benefit from fully baked standardized FCoE solutions from day one
Industry research firm TheInfoPro (TIP) believes FCoE is still in the very early stage of development within storage organizations at large enterprises.
In a recent paper, "Fibre Channel Over Ethernet (FCoE): Storage Pro Perspective," Rob Stevenson, TIP's managing director of storage, and Anders Lofgren, chief research officer write:
"It is clear that FCoE and server virtualization are becoming more tightly linked and end users are waiting for 10 Gigabit Ethernet to be fully deployed throughout the data center before moving forward with FCoE adoption, which we expect in two to three years."
TIP also believes FCoE adoption should start to accelerate following the implementation of 8Gbps Fibre Channel and 10 Gigabit Ethernet among storage organizations.
They continue: "All of the storage teams we speak with indicate that FCoE will be the dominant storage transport for the future, but the roles of host connectivity, FCoE initiator certification and topology management are still being debated."
The vendors are arming themselves to the teeth with FCoE-capable networking gear, most notably IBM. Big Blue beefed up its Fibre Channel, FCoE and Ethernet networking portfolio through a series of expanded OEM partnerships with Brocade, Cisco, and Juniper Networks just last month.
According to Cisco's latest data on FCoE, customers are "rapidly adopting" Cisco's FCoE-capable Nexus 5000 series switches and more than one-third of those customers are planning to implement FCoE.
Cisco claims that it "leads the FCoE market" based on shipments. The company now has more than 900 Nexus 5000 customers and has shipped more than 100,000 ports. The Nexus 5020, which is FCoE-capable, has been shipping since June of 2008.
Cisco says 35% of Nexus 5000 customers purchased systems with FCoE enabled, with those customers spanning government, information technology, healthcare, manufacturing, media, financial services, telcos, and service providers.
Brocade, another big player in the FCoE space, is taking a realistic view of the subject. In an interesting talk from Tech Day 2009, Brocade's CTO, Dave Stevens, said today's FCoE technologies are changing the landscape of the data center, but only in the first five feet of the network infrastructure. (See Brocade's video: FCoE Reality - Brocade CTO Dave Stevens from Tech Day 2009)
Stay up to date on all of the latest FCoE news by visiting our FCoE topic center page.
Thursday, July 23, 2009
LSI to buy ONStor
July 23, 2009 -- LSI bolstered its portfolio of storage systems today with the news that it has inked a deal to buy NAS-maker ONStor for $25 million in cash.
ONStor, a privately held, Campbell, Calif.-based company, builds clustered network-attached storage (NAS) systems designed to store and manage unstructured data. ONStor's products include NAS gateways and systems and unified storage systems sold through the channel and OEM partners.
ONStor recently broke its own product mold when it announced the Pantera LS 2100, a unified storage system based on open-source software and the Zettabyte File System (ZFS).
The LS 2100 series is a family of unified IP storage systems that provide both iSCSI and NAS support in a single box. Targeting SMBs – a first for ONStor – the Pantera LS 2100 family also includes a variety of built-in data and storage management tools based on the OpenSolaris operating system and ZFS.
ONStor also sells the Bobcat and Cougar families of clustered NAS gateways.
"The rapid growth of unstructured data is creating significant challenges for enterprises in provisioning, protecting and managing their storage in an efficient and cost-effective manner," said Abhi Talwalkar, LSI president and CEO, in a press release issued earlier today. "With the addition of ONStor products and technology, LSI will be well positioned to offer a comprehensive set of storage solutions to help enterprise customers effectively manage both their unstructured and structured data with ease."
LSI's current product lineup includes a range of storage technologies from custom silicon ASICs to HBAs and its Engenio storage systems.
The transaction is expected to close within thirty days and is subject to satisfaction of customary closing conditions. LSI expects to provide further details on July 29 when it reports second quarter results.
EMC-Data Domain update
In other acquisition news, EMC announced this morning that it has successfully completed its tender offer for all outstanding shares of common stock of Data Domain.
EMC now controls approximately 94.2% of Data Domain shares outstanding and expects to effect a second-step merger and complete its acquisition of Data Domain today.
Tuesday, July 21, 2009
TIP expects spending increase in 2H '09
July 21, 2009 -- On the eve of earnings for many major vendors, TheInfoPro (TIP) research firm is predicting a second half increase in technology spending.
According to TIP's customer research, which is based on interviews with thousands of Fortune 1000 and medium-sized enterprise end users, IBM, EMC and NetApp have been most affected by the tech spending slowdown of '09.
TIP claims that the best performing vendors have been those that compete on price or base their pitch on return on investment (ROI). CommVault, Data Domain and HP all fall into that category.
On the networking side, TIP predicts, "Cisco and Juniper will benefit from pent-up demand for increasing network capacity and performance, which will result in higher network equipment spending once economic conditions improve. Projects with a more immediate ROI will continue to be promoted for the remainder of 2009, benefiting WAN optimization providers such as Cisco, Riverbed and Blue Coat."
Data Domain was slated to release its Q2 earnings this Thursday, but nixed its concall after EMC announced yesterday that it has acquired majority ownership of Data Domain. EMC expects to complete its acquisition of DDUP by month's end and is slated to report its earnings Thursday morning.
Also on deck for earnings this week are Microsoft, F5, Riverbed, Juniper and VMware. Time to sit back, grab some popcorn and watch it all unfold.
Check out TIP's predictions and the firm's latest customer research at www.theinfopro.net.
Friday, July 10, 2009
Is Data Domain a good fit for EMC?
July 10, 2009 -- The experts are weighing in on EMC's pending acquisition of Data Domain and questions abound. Did EMC pay too much? How will it juggle its many data deduplication offerings? Did NetApp make the right move?
The price tag was just too high. EMC forced NetApp to bow out of its acquisition agreement with Data Domain earlier this week after upping the ante to $2.1 billion.
According to some analysts, this may have been a blessing in disguise for NetApp.
"NetApp just forced EMC to spend [more than $2 billion] for an asset that really doesn't fit and that EMC didn't want until it thought NetApp would get Data Domain," says David Vellante, co-founder and contributor to The Wikibon Project. "EMC-ers believe that dedupe is best done at the source. It's a culture clash of a serious nature."
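The source-versus-target split Vellante alludes to comes down to where duplicate data gets detected: source-side products such as EMC's Avamar fingerprint chunks at the client and ship only unseen ones, while target-side systems like Data Domain's appliances do the same lookup after the stream arrives. A minimal sketch of the shared mechanism (fixed-size chunks and SHA-1 fingerprints are simplifications here; Data Domain actually uses variable-length chunking):

```python
import hashlib

def dedupe_chunks(data, chunk_size=4096, store=None):
    """Index fixed-size chunks by fingerprint, storing each unique chunk once.
    Run at the source, only chunks with unseen fingerprints cross the wire;
    run at the target, the full stream arrives first and dedupes on ingest."""
    store = {} if store is None else store
    recipe = []  # ordered fingerprints for reassembling the original stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha1(chunk).hexdigest()
        store.setdefault(fp, chunk)  # duplicates resolve to one stored copy
        recipe.append(fp)
    return recipe, store

# A repetitive backup stream collapses to a single stored chunk
recipe, store = dedupe_chunks(b"x" * 4096 * 100)
```

Here 100 identical 4KB chunks reduce to one stored copy plus a 100-entry recipe, the kind of ratio dedupe vendors advertise for repetitive backup data.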
Vellante believes NetApp's interest in acquiring Data Domain was based on the potential impact it could have on the bottom line.
"NetApp wanted Data Domain because it saw Data Domain as the path of least resistance to $5 billion in revenue. Personally, I think there are better ways to get there," he says.
Vellante's opinion echoes that of Enterprise Strategy Group (ESG) founder and senior analyst Steve Duplessie.
"I think the price was too high to begin with and nuts by the end," says Duplessie. "I think NetApp would have enjoyed a lot of synergies and opportunity with Data Domain, but at that price, there was simply no margin for error. I think it would have strapped them and put an unnecessary microscope on their every move that would deflect from the fact that they are a great company. I think they will be happy with their decision."
Now, he says, EMC will be under that microscope.
"EMC has more room to maneuver simply because of their size and assets, but that doesn't mean they won't be under the microscope. That's a mongo big price to pay for anyone to simply ignore it. They certainly have the muscle and brains to make it work, but it won't be easy," says Duplessie.
Monday, July 6, 2009
EMC raises bid as NetApp gets green light from regulators
July 6, 2009 -- If you thought EMC was out of the race for Data Domain – think again. Just as NetApp announced this morning that it has received the go ahead from federal regulators to take its acquisition proposal to a stockholder vote, EMC once again raised its offer to acquire Data Domain. The EMC bid now stands at more than $2 billion.
The Data Domain Board of Directors currently plans to hold a meeting of stockholders and a merger vote on August 14. EMC is hoping to spoil the party by forcing Data Domain’s stockholders to take a long, hard look at its latest offer.
Under its revised proposal, EMC has increased its offer to acquire all the outstanding common stock of Data Domain to $33.50 per share in cash, for a total value of approximately $2.1 billion, net of Data Domain’s cash. NetApp’s offer is currently $1.9 billion.
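As a back-of-the-envelope check on how "net of Data Domain's cash" works: the headline value is the per-share price times shares outstanding, minus the target's own cash, which the buyer effectively gets back at close. The share count and cash balance below are hypothetical placeholders, not figures from Data Domain's filings:

```python
# Hypothetical inputs chosen only to illustrate the arithmetic;
# the real numbers come from Data Domain's SEC filings.
price_per_share = 33.50
shares_outstanding = 66_000_000  # assumed, not a reported figure
target_cash = 110_000_000        # assumed, not a reported figure

gross_value = price_per_share * shares_outstanding  # total consideration paid
net_value = gross_value - target_cash               # headline "net of cash" value
# With these inputs: gross of roughly $2.21 billion, net of roughly $2.10
# billion, in the neighborhood of the ~$2.1 billion reported for EMC's bid
```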
EMC CEO Joe Tucci outlined the offer today in a letter to Data Domain’s Board Chairman, Aneel Bhusri. Here is the full text of Tucci’s letter:
Dear Aneel:
On behalf of EMC, I am pleased to submit to you and your Board of Directors this revised proposal to acquire all outstanding Data Domain common stock for $33.50 per share in cash. This price represents a substantial premium to the cash and stock proposal of NetApp and is a Superior Proposal as defined in your merger agreement with NetApp. The Board of Directors of EMC has unanimously approved this proposal.
As with our prior proposal, EMC’s revised proposal is not subject to any financing or due diligence contingency, and we will use existing cash balances to finance the transaction. In addition, we have received all necessary regulatory approvals. We are amending our currently outstanding tender offer to acquire all of the outstanding shares of Data Domain to reflect our higher price.
We enclose a revised definitive agreement that has been executed on behalf of EMC and which reflects our new $33.50 per share, all cash offer. This agreement is substantially identical to the NetApp proposal except as to the fact that the EMC offer:
-- Is materially higher in price;
-- Reflects our faster two-step structure, which will enable you to close almost a month faster than under the NetApp proposal; and,
-- Very importantly, eliminates all deal protection provisions that could further impede the maximization of stockholder value, including the no solicitation section and the break-up fee obligation.
This last point is very significant to you and your stockholders. Data Domain does not have any justification for continuing deal protection provisions for NetApp or any other party given our willingness to proceed without them. It was questionable agreeing to deal protections in your initial agreement with NetApp, when you knew of our interest in acquiring the company. There is no basis for continuing with them now.
We strongly believe that the Data Domain Board of Directors should pledge to eliminate all deal protection provisions that could further impede maximizing stockholder value. Such a commitment would be the proper exercise of the Board's fiduciary duties to secure a transaction in the best interests of Data Domain stockholders, particularly in light of the EMC proposal described in this letter.
With the early termination last week of the waiting period under the Hart-Scott-Rodino Antitrust Improvements Act of 1976 concluding all regulatory conditions to this transaction, EMC could be in a position to close this transaction and deliver cash to your stockholders in as little as two weeks.
In comparison to your proposed transaction with NetApp, EMC’s proposal represents a far superior alternative for your stockholders.
EMC’s proposal provides higher absolute value for each Data Domain share.
As an all-cash offer, EMC’s proposal offers greater certainty of value.
EMC’s definitive agreement does not contain deal protection provisions that could further impede the maximization of stockholder value – including any termination fee – and is more favorable to the stockholders of Data Domain.
EMC’s transaction offers a faster time to close of almost a month.
We continue to believe that a business combination with EMC will deliver substantial and superior benefits to your company’s stockholders, customers, employees and partners. Since June 1st, when we submitted to you our prior proposal, we have received wholehearted support from many of your stockholders and customers validating our confidence in these benefits.
We encourage you to accept the merits of our proposal and look forward to your execution of the definitive agreement enclosed.
Very truly yours,
Joseph M. Tucci
Chairman, President and Chief Executive Officer
EMC Corporation
Further details on EMC’s latest offer are available on EMC’s website.
Labels:
data deduplication,
Data Domain,
EMC,
NetApp
Thursday, July 2, 2009
EPA seeks feedback on Energy Star storage specification
July 2, 2009 -- What will your refrigerator soon have in common with your storage array? It's not the crisper drawer. Well, not yet anyway. Someone could roll out a new unified SAN/NAS/Frigidaire system that stores your data and your produce. I guess anything is possible. What I'm talking about is the Energy Star program.
The Environmental Protection Agency (EPA) has begun work on a specification framework that will ultimately result in an energy efficiency program for enterprise storage systems. Translation: Energy Star stickers will eventually appear on your favorite storage devices.
Not to pat myself on the back, but this humble reporter predicted an Energy Star program for enterprise storage products a while back. I just didn't think it would take this long.
The specification is in draft form, but the EPA needs a little help with developing the framework. For example, David Floyer raises a key issue in his Wikibon blog. The EPA isn't considering software. He writes:
"Action Item: EPA should include software functionality in its specification for achieving Energy Star. This would allow a far more aggressive energy savings to be set as a standard for Energy Star certification. The vendor should be given the choice of how to achieve these energy savings against the base of a storage array with no software and poor power supplies. This approach will achieve higher levels of savings and enhance the EPA energy star brand."
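Floyer's point is easier to see with a concrete metric. The draft framework hadn't settled on one at press time, so the capacity-per-watt measure and the figures below are purely hypothetical, meant only to show the kind of apples-to-apples comparison an Energy Star label would enable, and why software that spins down disks or thin-provisions capacity should count toward the score:

```python
def capacity_per_watt(raw_tb, idle_watts):
    """Hypothetical efficiency metric: gigabytes of capacity per idle watt.
    (Not the EPA's metric; the draft framework had not defined one.)"""
    return raw_tb * 1000 / idle_watts

# Two made-up arrays with equal capacity but different idle power draw:
array_a = capacity_per_watt(raw_tb=100, idle_watts=1200)
array_b = capacity_per_watt(raw_tb=100, idle_watts=800)
# array_b scores higher -- same capacity served on fewer idle watts
```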
Last call for comments on the Energy Star Enterprise Storage Draft Specification Framework is tomorrow, July 3. The Wikibon folks are currently collecting and consolidating reader feedback and plan to submit their collective opinion by the end of the week. Get in on the conversation here.
Thursday, June 18, 2009
TIP: SRM tech making a comeback
June 18, 2009 -- According to the most recent Storage Study from independent research firm TheInfoPro (TIP), storage resource management (SRM) tools are in the midst of a comeback as enterprises attempt to boost utilization in tough economic times.
TheInfoPro's most recent study, which is based on data gathered from interviews with 250 IT pros at Fortune 1000 and medium-sized enterprise organizations, revealed that managing storage growth, capacity forecasting and storage reporting, and managing costs are the "top pain points" facing end users.
Enter: SRM.
A few tidbits from a press release on TIP's latest Storage Management Technology Heat Index – a barometer of user needs and planned spending:
• Top technologies on the F1000 Storage Management Technology Heat Index include capacity planning and forecasting, storage performance monitoring and storage resource management – with capacity planning and forecasting jumping nine spots from number 11 six months ago to the number one technology on the index.
• Top technologies on the MSE Storage Management Technology Heat Index include capacity planning and forecasting, information lifecycle management, disk-to-disk and email archiving.
• Top technologies on the European Storage Management Technology Heat Index include storage resource management, email archiving and thin provisioning.
Labels:
SRM,
Storage resource management,
TheInfoPro,
TIP
Tuesday, June 9, 2009
Tucci appeals to Data Domain's rank and file
June 9, 2009 -- The acquisition agreement between Data Domain and NetApp precludes EMC from communicating with Data Domain directly, but no one says EMC can't state its case to the public or to Data Domain's employees.
EMC chairman, president and CEO Joe Tucci issued an open letter to Data Domain's personnel this morning in which he praises their achievements, congratulates them for their successes and highlights the impact their data deduplication technologies are having in data centers across the globe.
He even writes, "In many ways, you remind us of EMC."
Tucci also promises Data Domain's employees an "exciting future" if they should become part of the "EMC family."
All flattery and promises aside, Tucci continues to make the financial argument that EMC's $30 per share all-cash tender offer to acquire all of the outstanding stock of Data Domain is the better deal than NetApp's part-stock, part-cash offer.
It appears that NetApp will win the day and acquire Data Domain, but it sure is fun to watch the day-to-day developments.
The full text of Tucci's open letter to the employees of Data Domain can be found on EMC's website.
Thursday, June 4, 2009
Update: Data Domain sides with NetApp
June 4, 2009 -- Another day, another development in the EMC-Data Domain-NetApp saga. Less than 12 hours after NetApp publicly raised its offer to buy Data Domain, the two companies have officially entered into a revised acquisition agreement.
The volleying has been worthy of a match at Roland Garros. NetApp responded to EMC's surprise bid for Data Domain yesterday morning by raising its offer. The price seems to have satisfied Data Domain, for now.
Late yesterday, the pair issued a joint press release stating that they have entered into a revised acquisition agreement under which NetApp will acquire all of the outstanding shares of Data Domain common stock for $30 per share in cash and stock in a transaction valued at approximately $1.9 billion, net of Data Domain's cash.
EMC is standing pat. The company issued a statement of its own on Wednesday, in which Joe Tucci, EMC chairman, president and CEO, said, "EMC's all-cash tender offer remains superior to NetApp's proposed part-stock merger transaction. We are proceeding with our superior cash tender offer, which is not subject to any financing or due diligence contingency. We do not believe that the Data Domain stockholders will approve the merger transaction with NetApp."
Tucci added, "EMC urges the Board of Directors of Data Domain to not take any actions that would further impede a transaction that is a superior alternative for Data Domain's shareholders."
Whether EMC counters the counter offer remains to be seen.
Labels:
data deduplication,
Data Domain,
dedupe,
deduplication,
EMC,
NetApp
Wednesday, June 3, 2009
NetApp responds to EMC's bid for Data Domain
June 3, 2009 -- The bidding war for Data Domain has begun. NetApp has responded to EMC's surprise offer to buy the company by upping its offer to $1.9 billion and claiming that a combined NetApp-Data Domain has a bigger upside for both companies.
NetApp issued a revised offer this morning, raising the acquisition price to approximately $1.9 billion versus EMC's $1.8 billion offer earlier this week.
In a press release, NetApp's chairman and CEO, Dan Warmenhoven, said his company's "strategic rationale remains the same" and "the complementary nature of the Data Domain and NetApp product lines will result in higher aggregate growth compared to the redundancies that would result with the EMC product line."
Warmenhoven added, "The cultural compatibility between Data Domain and NetApp will maximize the potential for continued innovation from a creative and motivated employee base. This will not only create a meaningful choice for our customers but also lead to a complementary combination with no obstacles to an expeditious close of the acquisition. Therefore, we are as committed to this partnership now as we were when we first announced our intent to acquire Data Domain."
Mum's the word over at Data Domain, as the company has yet to comment on the EMC-NetApp tug of war. The industry pundits, however, are keeping a close eye on the back and forth.
Enterprise Strategy Group analyst Lauren Whitehouse wonders whether EMC is just playing the spoiler, especially given its wealth of data deduplication technologies and OEM deals.
"I am having a hard time understanding why EMC wants the Data Domain technology. EMC has deduplication solutions through the Avamar product and its partnership with Quantum. I'm not sure what opportunities there are for technology integration with Avamar and EMC recently made a sizeable investment in Quantum," said Whitehouse. "The company has also promoted the benefits of the being able to replicate between Dell, EMC and Quantum solutions. What statement is EMC making about its investments in Avamar and Quantum by bidding for Data Domain?"
She continued, "Who can better leverage and integrate the Data Domain technology? EMC definitely has a better track record of doing acquisitions and leveraging technology purchases. Without really knowing the motivation for either company's bid, it's hard to judge who will leverage the technology better. It's just not obvious what the intentions are for either bidder. What a rollercoaster ride this has been."
David Vellante, co-founder and contributor to The Wikibon Project, believes EMC may have the edge.
"EMC plays for keeps. It doesn't mess around when it comes to competing. I think if EMC really wants Data Domain it will outbid NetApp for sure," he said.
So what does EMC's unsolicited bid for Data Domain say to the industry? Vellante sees it as a defensive move by EMC.
"It says to me that EMC recognizes it can't grow its core storage business organically and needs to acquire growth," Vellante said. "It says EMC is making a defensive move, albeit an aggressive one, to stop Data Domain from getting in NetApp's hands."
He also believes smaller vendors are fast becoming hot commodities.
"The market is continuing to consolidate and companies like CommVault, FalconStor, Sepaton and even 3PAR and Compellent are worth more today than they were yesterday," Vellante said.
Labels:
data deduplication,
Data Domain,
dedupe,
deduplication,
EMC,
NetApp
Thursday, May 21, 2009
NetApp's competitors take aim at Data Domain deal
May 21, 2009 -- It didn't take long for NetApp's competition and industry experts to begin poking holes in NetApp's acquisition of Data Domain as questions abound less than 24 hours since the deal was announced.
There is no question the $1.5 billion deal to buy disk-based backup vendor and deduplication specialist Data Domain will immediately expand NetApp's market share and reach into the backup market. However, as the experts and competitors are quick to point out, NetApp's path is strewn with obstacles.
Wikibon president and co-founder Dave Vellante's blog on the topic raises some interesting questions. If NetApp can successfully integrate Data Domain's products and technologies (specifically deduplication), it will be poised to make serious inroads with customers seeking data reduction/Storage Capacity Optimization (SCO) technologies. However, he writes:
"This vision will take forever to execute. Meanwhile, IBM with Diligent and TSM; and EMC with Avamar and Quantum are further down the path. This will lower the time to value for NetApp, which I'm defining as the valuation being incremental."
Enterprise Strategy Group analyst Lauren Whitehouse says deduplication – one of Data Domain's strengths – is a feature, not a market.
"Having the feature on storage systems may help NetApp win business in segments of the secondary and archive storage markets where it wasn't as strong before," she says.
Her biggest issue with the acquisition is overlap between VTL-interface products, NetApp NearStore and the Data Domain DD series.
"The answers NetApp provided regarding technology integration and conflict were tentative. They deferred to the soon-to-be-formed integration team to address those issues at a later time. The focus was squarely on positioning the acquisition as increasing the business opportunity rather than a technology leverage move." She continues, "NetApp spent $11 million on the VTL acquisition (Alacritus) several years ago and has made investments in NearStore over the years; however, the product is lacking some features that make it competitive with others in its class."
She cites VTL-to-VTL replication as an example. "It's going to be hard to justify incremental investment in NearStore when they've just spent $1.5 billion on a similar solution with a few more advanced features," she says.
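For readers keeping score on the technology at the center of all this bidding, deduplication boils down to fingerprinting chunks of data and storing each unique chunk only once. The following is a rough sketch of that idea only; real products such as Data Domain's use variable-size (content-defined) chunking and far more sophisticated engineering:

```python
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    """Minimal fixed-size-chunk deduplication sketch (illustrative only).

    Returns a chunk store (each unique chunk kept once, keyed by its
    SHA-256 fingerprint) and a recipe of fingerprints that can rebuild
    the original stream.
    """
    store = {}    # fingerprint -> chunk bytes, stored once
    recipe = []   # ordered fingerprints to reconstruct the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)
        recipe.append(fp)
    return store, recipe

# Highly redundant input: four chunks referenced, only two stored.
data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
store, recipe = dedupe(data)
print(len(recipe), len(store))  # 4 2
```

The savings come from the ratio of referenced chunks to stored chunks, which is why backup streams, with their day-over-day repetition, are such a natural fit for the technique.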
NetApp is positioning the acquisition as a business play, rather than a move to gobble up valuable intellectual property. As reported in our story about the deal, NetApp's chief marketing officer, Jay Kidd, said NetApp's rationale was based on an incremental growth opportunity for both companies.
"The overlap between NetApp's customers and Data Domain's customers was fairly small. The addition of Data Domain's products to our portfolio was a clear market expansion opportunity," said Kidd. "We are doing this for the expansion of the business opportunity and not to acquire technologies that would allow us to consolidate product lines."
Roughly 77% of Data Domain's business comes from North America. NetApp, however, has a global reach. Kidd said NetApp's global reach makes the acquisition a perfect match. "We have access to enterprise accounts that they are not in yet. Our [global presence] will accelerate the business that Data Domain already has," he says.
Competing vendors, of which there are many, began offering their two cents on the acquisition minutes after the news broke. Here is a sampling of the vendor reaction in their own words…
David West, vice president of marketing and business development at CommVault:
"We believe that deduplication is a feature and not a company. We also believe that to gain operational efficiencies and dramatically reduce data management and related storage expenses, a global embedded software-based approach to deduplication is the best option for customers.
Yesterday's announcement did little to address these fundamental customer needs. While we applaud NetApp's effort to capture more market share through deduplication, ultimately a feature-based approach, tightly integrated within an overall backup/archive strategy is the optimal way to reduce redundant data in your environment. Like minded companies will continue to pursue an embedded approach to dedupe and we anticipate additional adoption with key strategic partners as we continue to address customer needs."
Permabit's CEO, Tom Cook:
"This is one more outstanding execution move by the management of [Data Domain]. They needed to make a move and did.
This is a ‘worst fear' scenario for the likes of Dell, EMC, IBM and HP. The last thing in the world they wanted in the market was another NTAP. They all had [Data Domain] in their sights to acquire or beat in the marketplace. They will all spring to aggressive action.
This will disrupt the partner ecosystem. F5's (who partners effectively with [Data Domain]) play is to consolidate NAS – not exactly a NTAP objective and this places the combined organization in direct competition with the likes of CommVault and Symantec.
Finally – this is a huge positive for Permabit. In the market it enables us to contrast our offering more directly with NearStore rather than [Data Domain] near line FUD. Of course, Dell, EMC, IBM and HP will help us with this. And it puts a huge focus and premium on technologies and products that can compete on merit with the combined [NetApp/Data Domain] offerings."
Bill Andrews, president and CEO of ExaGrid:
"This is an interesting move for Data Domain as it started out targeting mid market and small enterprise customers with 1TB to 60TB of primary data to be backed up. Since then Data Domain has altered its course by targeting the large enterprise and was moving the company in that direction. NetApp is an enterprise play and therefore completes this large enterprise transition for Data Domain.
Today, ExaGrid competes with Data Domain in the mid market to small enterprise and was pleased to see Data Domain moving up market. This latest development is exciting for ExaGrid as it accelerates Data Domain's move to the enterprise and leaves a hole in the mid market to small enterprise. When competing, ExaGrid has won against Data Domain the majority of the time thanks to a faster and more scalable product at a better price and this latest development will only make the mid market to small enterprise segment a more significant opportunity for ExaGrid.
Friday, April 24, 2009
Hype vs. reality – A Q&A with Wells Fargo's head of IT
April 24, 2009 -- I recently had an opportunity to have a conversation with Scott Dillon, head of technology infrastructure services at Wells Fargo & Co. The discussion covered a range of topics including the bank's storage priorities and needs, how he plans to extend the life of his legacy gear through storage virtualization, and his take on emerging technologies like solid-state disk (SSD) drives and Fibre Channel over Ethernet (FCoE).
Like many large enterprise organizations, Wells Fargo is dealing with massive amounts of storage and all of the management, migration and data protection tasks that come with it. Dillon says he has about 5PB of storage deployed in production. Storage infrastructures of that size require a pragmatic management approach. That's why the Wells Fargo IT philosophy is "standardize and optimize," while keeping clear of IT's bleeding edge.
To that end, Dillon's main goals are driving up utilization and enhancing availability, and storage virtualization is the linchpin in the process.
"Virtualization is something that we are committed to and we are deploying it across our environment. It helps our cost models because it allows us to have heterogeneous [storage] providers behind virtualization devices. With virtualization, we don't have to throw out one infrastructure to bring in a new one," says Dillon. "We are big on leveraging what we already have."
He says virtualization has helped streamline a number of complex tasks, including capacity provisioning, data migration and storage tiering. He also credits storage virtualization with speeding service delivery to customers.
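Conceptually, block-level storage virtualization of the kind Dillon describes inserts a mapping layer between hosts and heterogeneous arrays, so data can be moved between vendors' boxes without the host noticing. A toy illustration (the class and method names are invented for this sketch, not any vendor's API):

```python
class VirtualizationLayer:
    """Toy block-virtualization layer (illustrative only).

    Maps host-visible virtual extents onto backend arrays from any
    vendor, so migrations change only the backing location, never the
    address the host sees.
    """

    def __init__(self):
        self.extent_map = {}  # virtual extent id -> (array, physical extent)

    def provision(self, extent_id, array, physical_extent):
        self.extent_map[extent_id] = (array, physical_extent)

    def migrate(self, extent_id, new_array, new_physical_extent):
        # The host-visible extent id is unchanged; only the backing moves.
        self.extent_map[extent_id] = (new_array, new_physical_extent)

    def resolve(self, extent_id):
        return self.extent_map[extent_id]

v = VirtualizationLayer()
v.provision("lun0/ext0", "legacy_array", 17)
v.migrate("lun0/ext0", "new_array", 3)  # transparent data migration
print(v.resolve("lun0/ext0"))  # ('new_array', 3)
```

This indirection is what lets a shop "leverage what it already has": the legacy array keeps serving data behind the mapping layer until it is drained and retired.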
As for his take on vendors, Dillon would not name his storage suppliers, but he does hint at what Wells Fargo is looking for going forward.
"A lot of the large storage providers are starting to make their play into the end-to-end space. They are putting it all together, which is how we look at the big picture. We would like to see these organizations driving their products toward IEEE standards so that we don't get locked in [to any one vendor]," he says.
Dillon stresses the importance of the customer-provider relationship in his decision-making process. "The quality, availability and resiliency of a product in an industrial enterprise setting are incredibly important to us. I want the vendor engaged and I want the sales team to have as much incentive to deliver on their commitment as they do in selling me their next product. If the product is good and you deliver on your commitment you are going to sell me a lot more stuff," he says.
"What's amazing to me is how many people are just focused on the sale. I need to know they are going to be there for the long term. When times are tough it's about who is going to be there focused on your optimization and driving up utilization," Dillon says.
Dillon is also keeping an eye on several emerging storage technologies.
On SSDs: "There is a lot of initial hype. The value proposition is there. What's intriguing is reduced power consumption. But there are a lot of questions. How many times can you write to the drive? What about availability? I don't see [SSDs] as something we would deploy in production in the near future, but the promise is there and we see it."
On data de-duplication: "We have deployed some data de-duplication technologies in our environment. We are realizing some very good lift in [our de-dupe implementation]. There is a lot of promise, but the technology needs to mature."
On FCoE: "We continue to watch it very closely. We are, in general, very interested in any technology that fits with our pragmatic and customer-centric philosophy. Directionally, I think the concept of unified networking is great."
Once the aforementioned technologies mature, Dillon will weave them into his infrastructure when and if they make business sense.
"It all starts and ends with the customer experience. You can't do technology for the sake of doing technology. It has to improve the customer's experience," he says.
Labels:
FCoE,
InfoStor,
SSD,
storage virtualization,
Wells Fargo
Tuesday, April 21, 2009
VMware's vSphere of influence
April 21, 2009 -- Today's release of VMware's vSphere 4 operating system – a new OS for building internal clouds – has brought with it a tsunami of support from dozens of storage vendors.
The vSphere 4 OS aggregates and manages large pools of infrastructure resources – processors, storage and networking – as a dynamic operating environment. VMware claims vSphere 4 will "bring the power of cloud computing to the datacenter, slashing IT costs while dramatically increasing IT responsiveness." VMware also touts vSphere as a path to delivering cloud services that are compatible with customers' internal cloud infrastructures. VMware plans to build in support for dynamic federation between internal and external clouds, enabling "private" cloud environments that span multiple datacenters and/or cloud providers.
Big, bad virtual machines
Using the vSphere OS, users can build bigger, faster virtual computing environments. According to VMware's published specs, the platform can pool together up to:
32 physical servers with up to 2048 processor cores
1,280 virtual machines
32TB of RAM
16PB of storage
8,000 network ports
It also creates bigger, faster virtual machines (VMs) with up to:
2x the number of virtual processors per virtual machine (from 4 to 8)
2.5x more virtual NICs per virtual machine (from 4 to 10)
4x more memory per virtual machine (from 64GB to 255GB)
3x increase in network throughput (from 9Gbps to 30Gbps)
3x increase in the maximum recorded I/O operations per second (to over 300,000)
New maximum recorded number of transactions per second - 8,900
Data protection and migration
VMware also claims vSphere offers zero downtime and zero data loss protection against hardware failures with VMware Fault Tolerance and minimized planned downtime due to storage maintenance and migrations with VMware Storage VMotion, which provides live migration of virtual machine disk files across heterogeneous networked storage types.
vSphere 4 also features integrated disk-based backup and recovery for all applications via VMware Data Recovery and VMware vStorage Thin Provisioning, which keeps capacity-hungry VMs in check.
Storage vendors on board
The announcements are coming fast and furious from the storage community as, so far, 3PAR, Akorri, CA, Compellent Technologies, CommVault, Dell, Double-Take Software, EMC, Emulex, FalconStor Software, Hitachi Data Systems, HP, IBM, LSI, NetApp, Nexenta, StoneFly, Sun Microsystems, Symantec and Vizioncore have all pledged support for vSphere 4.
Read on for the details we have so far…
3PAR
3PAR's InServ Storage Servers are on the VMware Hardware Compatibility List (HCL) for VMware vSphere 4. In addition, 3PAR and VMware are investing in joint engineering projects. For example, 3PAR already supports the VMware vStorage initiative and the recently released adaptive queuing technology that became available in VMware Infrastructure 3.5 and is included in VMware vSphere 4.
Akorri
Akorri's BalancePoint software will support VMware vSphere by the end of 2009. BalancePoint is available on a VMware certified virtual appliance and assists in cross-domain virtualized data center management, managing virtual and physical server and storage infrastructure from a single console.
Compellent Technologies
Compellent Technologies announced that its Storage Center SAN supports VMware vSphere. Compellent's Storage Center has completed the VMware Hardware Certification Program testing criteria and is now listed on the VMware HCL for use with vSphere.
EMC
EMC announced new high-availability advancements for next-generation virtual data centers with the new EMC PowerPath/VE software. The PowerPath/VE software provides path management, load balancing and fail-over capabilities for VMware vSphere 4.
Emulex
Emulex's LightPulse host bus adapters (HBAs) and converged network adapters (CNAs) are fully supported with VMware in-box drivers as part of VMware vSphere 4. The LightPulse 8Gbps Fibre Channel HBAs and 10Gbps Fibre Channel over Ethernet (FCoE) CNAs deliver more than double the IOPS performance in VMware vSphere 4 environments over the previous release, according to Emulex.
FalconStor Software
FalconStor Software's NSS-S12 storage array supports vSphere and on the vSphere HCL. FalconStor's Network Storage Server (NSS) technology integrates storage virtualization and provisioning across multiple disk arrays and connection protocols to create a scalable iSCSI or Fibre Channel SAN.
HP
Hewlett-Packard announced the integration of vSphere 4 into its HP Adaptive Infrastructure (AI) portfolio. The interoperability of VMware vSphere 4 with HP's portfolio includes hardware compatibility for a range of HP ProLiant and BladeSystem servers and StorageWorks systems and software integration of HP's Insight software with vSphere 4.
NetApp
NetApp also announced the integration and certification of its storage platforms with vSphere 4. NetApp storage platforms and software products such as SANscreen VM Insight and MultiStore are certified for vSphere 4 and available now. The NetApp Virtualization Guarantee Program for vSphere is also available immediately.
StoneFly
IP SAN maker StoneFly announced completion of VMware vSphere certification across its entire SAN product line. StoneFly IP SANs supporting VMware vSphere, including the StoneFly Voyager, Integrated Storage Concentrator and OptiSAN product lines, are now available.
The vSphere 4 OS aggregates and manages large pools of infrastructure resources – processors, storage and networking – as a dynamic operating environment. VMware claims vSphere 4 will "bring the power of cloud computing to the datacenter, slashing IT costs while dramatically increasing IT responsiveness." VMware also touts vSphere as a path to delivering cloud services that are compatible with customers' internal cloud infrastructures. VMware plans to build in support for dynamic federation between internal and external clouds, enabling "private" cloud environments that span multiple datacenters and/or cloud providers.
Big, bad virtual machines
Using the vSphere OS, users can build bigger, faster virtual computing environments. According to VMware's published specs, the platform can pool together up to:
32 physical servers with up to 2048 processor cores
1,280 virtual machines
32TB of RAM
16PB of storage
8,000 network ports
It also creates bigger, faster virtual machines (VMs) with up to:
2x the number of virtual processors per virtual machine (from 4 to 8)
2.5x more virtual NICs per virtual machine (from 4 to 10)
4x more memory per virtual machine (from 64GB to 255GB)
3x increase in network throughput (from 9 Gbps to 30Gbps)
3x increase in the maximum recorded I/O operations per second (to over 300,000)
New maximum recorded number of transactions per second - 8,900
Data protection and migration
VMware also claims vSphere offers zero downtime and zero data loss protection against hardware failures with VMware Fault Tolerance and minimized planned downtime due to storage maintenance and migrations with VMware Storage VMotion, which provides live migration of virtual machine disk files across heterogeneous networked storage types.
vSphere 4 also features integrated disk-based backup and recovery for all applications via VMware Data Recovery and VMware vStorage Thin Provisioning, which keeps capacity-hungry VMs in check.
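For readers new to the concept, thin provisioning simply means a virtual disk claims physical capacity only as data is actually written to it, rather than reserving the full amount up front. A toy sketch of the idea (our own illustration with hypothetical names, not VMware's implementation):

```python
class ThinDisk:
    """Toy model of a thin-provisioned virtual disk: capacity is
    promised up front, but backing blocks are allocated on write."""

    BLOCK_SIZE = 4096  # bytes per backing block

    def __init__(self, provisioned_bytes):
        self.provisioned = provisioned_bytes
        self.blocks = {}  # block index -> backing block, allocated lazily

    def write(self, offset, data):
        if offset + len(data) > self.provisioned:
            raise IOError("write past provisioned capacity")
        first = offset // self.BLOCK_SIZE
        last = (offset + len(data) - 1) // self.BLOCK_SIZE
        for i in range(first, last + 1):
            # only blocks that are actually touched consume real space
            self.blocks.setdefault(i, bytearray(self.BLOCK_SIZE))

    def allocated(self):
        """Physical bytes consumed, as opposed to bytes promised."""
        return len(self.blocks) * self.BLOCK_SIZE

disk = ThinDisk(provisioned_bytes=100 * 2**30)  # promise 100GB to the VM
disk.write(0, b"x" * 8192)                      # but write only 8KB
```

The gap between `provisioned` and `allocated()` is exactly the over-commitment that lets administrators keep capacity-hungry VMs in check.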
Storage vendors on board
The announcements are coming fast and furious from the storage community as, so far, 3PAR, Akorri, CA, Compellent Technologies, CommVault, Dell, Double-Take Software, EMC, Emulex, FalconStor Software, Hitachi Data Systems, HP, IBM, LSI, NetApp, Nexenta, StoneFly, Sun Microsystems, Symantec and Vizioncore have all pledged support for vSphere 4.
Read on for the details we have so far…
3PAR
3PAR's InServ Storage Servers are on the VMware Hardware Compatibility List (HCL) for VMware vSphere 4. In addition, 3PAR and VMware are investing in joint engineering projects. For example, 3PAR already supports the VMware vStorage initiative, as well as the adaptive queuing technology that first became available in VMware Infrastructure 3.5 and is included in VMware vSphere 4.
Akorri
Akorri's BalancePoint software will support VMware vSphere by the end of 2009. BalancePoint is available on a VMware certified virtual appliance and assists in cross-domain virtualized data center management, managing virtual and physical server and storage infrastructure from a single console.
Compellent Technologies
Compellent Technologies announced that its Storage Center SAN supports VMware vSphere. Compellent's Storage Center has completed the VMware Hardware Certification Program testing criteria and is now listed on the VMware HCL for use with vSphere.
EMC
EMC announced new high-availability advancements for next-generation virtual data centers with the new EMC PowerPath/VE software. The PowerPath/VE software provides path management, load balancing and fail-over capabilities for VMware vSphere 4.
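For those unfamiliar with path management software of this kind, the general recipe is to spread I/O across the available paths to an array and transparently retry on a path that fails. A rough, vendor-neutral sketch of that logic (hypothetical interfaces, not PowerPath's actual API):

```python
class MultipathDevice:
    """Round-robin load balancing with failover across storage paths."""

    def __init__(self, paths):
        self.paths = list(paths)  # callables: path(io) -> result
        self._next = 0            # round-robin cursor

    def submit(self, io):
        # Try each path once, starting from the cursor; a failing
        # path is skipped and the I/O is retried on the next one.
        last_error = None
        for attempt in range(len(self.paths)):
            path = self.paths[(self._next + attempt) % len(self.paths)]
            try:
                result = path(io)
                self._next = (self._next + attempt + 1) % len(self.paths)
                return result
            except IOError as exc:
                last_error = exc
        raise IOError("all paths failed") from last_error

def dead_path(io):
    raise IOError("link down")

log = []
def good_path(io):
    log.append(io)
    return f"done:{io}"

# One failed path: the I/O still completes on the surviving path.
dev = MultipathDevice([dead_path, good_path])
result = dev.submit("read-lba-42")
```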
Emulex
Emulex's LightPulse host bus adapters (HBAs) and converged network adapters (CNAs) are fully supported with VMware in-box drivers as part of VMware vSphere 4. The LightPulse 8Gbps Fibre Channel HBAs and 10Gbps Fibre Channel over Ethernet (FCoE) CNAs deliver more than double the IOPS performance in VMware vSphere 4 environments over the previous release, according to Emulex.
FalconStor Software
FalconStor Software's NSS-S12 storage array supports vSphere and is on the vSphere HCL. FalconStor's Network Storage Server (NSS) technology integrates storage virtualization and provisioning across multiple disk arrays and connection protocols to create a scalable iSCSI or Fibre Channel SAN.
HP
Hewlett-Packard announced the integration of vSphere 4 into its HP Adaptive Infrastructure (AI) portfolio. The interoperability of VMware vSphere 4 with HP's portfolio includes hardware compatibility for a range of HP ProLiant and BladeSystem servers and StorageWorks systems and software integration of HP's Insight software with vSphere 4.
NetApp
NetApp also announced the integration and certification of its storage platforms with vSphere 4. NetApp storage platforms and software products such as SANscreen VM Insight and MultiStore are certified for vSphere 4 and available now. The NetApp Virtualization Guarantee Program for vSphere is also available immediately.
StoneFly
IP SAN maker StoneFly announced completion of VMware vSphere certification across its entire SAN product line. StoneFly IP SANs supporting VMware vSphere, including the StoneFly Voyager, Integrated Storage Concentrator and OptiSAN product lines, are now available.
Tuesday, April 14, 2009
Symmetrix V-Max: EMC’s big play for big data centers
April 14, 2009 -- There has been a fair amount of speculation that EMC would launch a new Symmetrix DMX-5 system, but while the company's latest high-end array shares the Symmetrix moniker, it's a completely different platform with an architecture built for virtualized data centers.
InfoStor's coverage of the EMC Virtual Matrix Architecture and Symmetrix V-Max Storage System launch outlines the technology, EMC's plans and how it all relates to cloud computing.
The architecture combines scale-up and scale-out capabilities with centralized management and (forthcoming) automated tiering of SSDs, Fibre Channel and SATA drives. The Symmetrix V-Max is significantly bigger and faster than the DMX-4, but has been specifically designed to support enormous cloud computing and virtual data center infrastructures.
David Vellante, co-founder and contributor to The Wikibon Project, says customers should take this announcement very seriously, especially if they have existing Symmetrix processes in place.
"To the extent EMC delivers on its vision, the V-Max will bring incremental strategic value to many customers and will represent a longer term investment platform. Specifically, the possibility of doing automated tiered storage within a federated Symmetrix infrastructure could be very cost competitive and advantageous if EMC can ship enough volume and – very importantly – ship software that automates the placement of data on the most cost-effective tier," says Vellante. "This software is not here today and that's important."
The software – EMC's Fully Automated Storage Tiering (FAST) technology – is expected to debut later this year, according to EMC. It is touted as a feature that will automatically move data to appropriate tiers of storage within the Virtual Matrix Architecture. This is especially significant as EMC tries to speed the adoption of solid-state disk (SSD) drives as "tier zero" storage for frequently accessed data in high performance applications.
"The problem folks are having is they really don't have an automated way to move data between T1 and T2. So if EMC can give them a way to do that all within a single architecture from Tier 0 down to Tier 3 with high capacity SATA that gets interesting. But again, the software to do this is not here today, the [Virtual Matrix Architecture announcement] is the first step," says Vellante.
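The policy Vellante is describing reduces to a simple rule: track how often each chunk of data is accessed, then place hot chunks on fast, expensive tiers and cold chunks on cheap, high-capacity SATA. A toy illustration of such a policy (our own sketch with made-up thresholds, not EMC's FAST algorithm):

```python
TIERS = ["tier0-ssd", "tier1-fc", "tier2-sata"]  # fastest to cheapest

def place_extents(access_counts, hot_threshold=100, warm_threshold=10):
    """Assign each extent to a storage tier by access frequency.

    access_counts: mapping of extent name -> access count over some window.
    """
    placement = {}
    for extent, hits in access_counts.items():
        if hits >= hot_threshold:
            placement[extent] = "tier0-ssd"    # tier 0: frequently hit data
        elif hits >= warm_threshold:
            placement[extent] = "tier1-fc"     # middle tier
        else:
            placement[extent] = "tier2-sata"   # cold, high-capacity tier
    return placement

counts = {"db-index": 500, "mail-archive": 3, "home-dirs": 40}
plan = place_extents(counts)
```

The hard part in a real array is not the rule itself but doing the migrations online, without disrupting I/O, which is precisely the software EMC had yet to ship at the time.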
The industry reaction is sure to come fast and furious as the details of V-Max reverberate through the storage landscape. Stay with InfoStor's coverage of the announcement as Editor-in-Chief Dave Simpson adds his two cents to the discussion.
We have also posted a V-Max Lab Review from Enterprise Strategy Group in our ESG Lab Validation section.
Labels:
EMC,
Symmetrix V-Max,
V-Max,
V-Max Engine,
Virtual Matrix Architecture
Tuesday, April 7, 2009
SNW: Day two recap
Brocade is now shipping an FCoE switch and adapters to OEMs, Symantec has added DR testing software to its product line via a partnership, and solid-state specialist Fusion-io has bagged close to $50 million in funding.
Day two of Storage Networking World was uneventful from a news perspective, but we were able to track down some industry insiders and SNIA members to explain some of this week's announcements.
First up, a keynote from Symantec's new CEO, Enrique Salem, during which he said:
"Stop buying storage."
Not a surprising statement when you consider it came from a software company, but Salem says data reduction technologies and better management can defray the cost of additional hardware through better utilization.
"In many companies there are differences in storage hardware, and often islands of storage. One department might have plenty of free storage while another is adding arrays," Salem told a standing-room crowd this morning. "You need to identify and reclaim what you've bought but aren't using. Find that orphan storage, and bring it home. The hardware vendors will tell you they can show you how your existing storage is being used. Remember, their ultimate goal is to sell you more hardware."
Salem says storage resource management (SRM), thin provisioning, data de-duplication, and intelligent archiving can all bring those orphans home.
On the cloud storage front, I was able to sit down with Storage Networking Industry Association Chairman Emeritus and member of the Board of Directors Vincent Franceschini to discuss the Association's formation of a Technical Work Group (TWG) for cloud storage.
"It has become very clear that we need to clarify the definitions and terminology surrounding cloud storage," said Franceschini. "We believe we can help the market overall by delivering reference models to describe different solutions and cloud frameworks."
He also said industry collaboration is a must if cloud storage is going to be a viable option for enterprise storage in the future.
"We are going to be collaborating with other industry groups. There is no way it is going to work if [cloud platforms] are not integrated," he said.
The SNIA has also set up a Google group in an effort to maintain a "public face" on the Cloud Storage TWG's work.
Labels:
Cloud storage,
SNIA,
SNW,
Storage Networking World,
Symantec
Monday, April 6, 2009
SNW: Day one recap
April 06, 2009 -- The Storage Networking World (SNW) conference is under way and the InfoStor team is in Orlando to keep you up-to-date on news and announcements from the show.
A few product announcements trickled out of SNW this morning, including FalconStor Software’s release of the Backup Accelerator option for its Virtual Tape Library (VTL) product, 3PAR’s launch of a quad-controller storage array for midrange customers, the debut of cloud storage services startup Zetta, and the availability of Netgear’s newest NAS/unified storage system with a cloud storage option for SMBs.
Speaking of the cloud – and that’s all we seem to be speaking about lately – the Storage Networking Industry Association (SNIA) today announced the creation of the Cloud Storage Technical Work Group (TWG), aimed at developing SNIA architectures and best practices for cloud storage technology. The initial TWG charter focuses on producing a set of specifications and driving consistency among interface standards across the various cloud storage efforts.
The Cloud Storage TWG is also soliciting proposals for standard interfaces and is looking to engage vendors and other “Cloud industry parties” in its efforts. The group plans to release a reference model for Cloud Storage with associated terminology definitions to aid in further work on the standards. Cloud service and storage interface definitions are expected in draft form later this year and anticipated to be adopted starting in 2010.
The SNIA is also refocusing its efforts on the IP storage front. The Association announced an expansion of the charter of the SNIA IP Storage Forum, reflected in its new name – the SNIA Ethernet Storage Forum (ESF). The ESF has been tasked with driving broad adoption of all Ethernet-connected storage networking solutions.
The ESF will consist of two Special Interest Groups - the iSCSI SIG and the NFS SIG. The iSCSI SIG will focus on continuing the IP Storage Forum agenda to evangelize the benefits and best practices related to iSCSI. Member companies include Compellent, Dell, HP, Intel, Microsoft, NEC, NetApp and Sun.
The new NFS SIG will be focused on NFS-based NAS solutions, particularly emerging technologies, such as pNFS. The founding members of the NFS SIG include EMC, NetApp, Panasas and Sun.
Additionally, the group also plans to form a Special Interest Group focused on the CIFS/SMB protocol and ecosystem.
Hifn made news with the launch of its BitWackr 250 and 255, which are aimed at server OEMs, Microsoft Partners and white-box server builders looking to add hardware-assisted data de-duplication and compression with thin provisioning to Windows Servers.
According to Hifn, BitWackr provides real-time, in-line de-dupe and compression, reducing the amount of data written to disk. The cards combine the company’s BitWackr block-based de-dupe software with a Hifn Express DR 250 PCI-X or 255 PCIe card that employs specialized hardware to perform data compression and de-dupe hashing operations.
The BitWackr 250 and 255 products are priced at $995 with general availability slated for the third quarter of this year.
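Hash-based inline de-duplication of this sort fingerprints each incoming block and stores only blocks it hasn't seen before, typically compressed. A simplified, software-only sketch of that data path (our own illustration; the shipping products offload the hashing and compression to hardware):

```python
import hashlib
import zlib

class DedupeStore:
    """Inline block de-dupe: store each unique block once, compressed."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.store = {}  # fingerprint -> compressed block (unique data only)
        self.refs = []   # logical layout: sequence of fingerprints

    def write(self, data):
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.store:          # only new data hits disk
                self.store[fp] = zlib.compress(block)
            self.refs.append(fp)              # duplicates cost a reference

    def read(self):
        """Reconstruct the logical stream from the unique-block store."""
        return b"".join(zlib.decompress(self.store[fp]) for fp in self.refs)

s = DedupeStore()
s.write(b"A" * 4096 * 3 + b"B" * 4096)  # three identical blocks + one unique
```

Four logical blocks come in, but only two unique blocks are stored, which is the whole point of doing the hashing inline before data reaches disk.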
InfoStor’s Editor-in-Chief, Dave Simpson, and I will be blogging/reporting from the conference all this week. Check out the InfoStor homepage for the latest industry news & analysis from SNW Orlando. There is some news from Symantec on the horizon and Brocade has called a press conference for tomorrow afternoon. Stay tuned…
Labels:
3PAR,
FalconStor,
Hifn,
Netgear,
SNIA,
SNW,
Storage Networking World,
Zetta
Tuesday, March 24, 2009
Cisco's UCS: The industry reacts
March 24, 2009 -- The IT world has had about a week to digest, mull and question the ins and outs of Cisco's newly announced "game-changer," the Unified Computing System. And the industry certainly has questions for Cisco.
Several competitors are questioning whether Cisco's UCS – the platform that combines compute, network, storage access, and virtualization resources in a single system based on a new line of blade servers developed by Cisco – features a truly open architecture.
Brocade's CEO Mike Klayko made his opinion known yesterday in a video posted to the Brocade YouTube Channel.
Klayko does not believe large enterprise customers will put mission critical applications on a version one product, referring to Cisco's new blade servers.
Brocade has also issued an official statement to the media in response to Cisco's UCS launch. It reads:
"A dynamic and virtualized data center holds the promise of many compelling benefits for end-users including increased server utilization, decrease in power footprint and more efficient operations in general. However, achieving this goal is a complex challenge that can be best tackled by a broad ecosystem of industry partners and not based on a proprietary, singular architecture of one company.
In contrast, Brocade is already helping customers address these challenges by integrating our networking solutions with a range of mature computing, management and storage technologies from some of the strongest companies in the world. These partnerships are leveraging open interfaces/standards, co-developed technology, and products that are available today, which will lower costs and maximize return on investment for customers."
BLADE Network Technologies president and CEO Vikram Mehta also took aim at Cisco in a recent blog entry, listing 10 reasons why Cisco's Unified Computing strategy is nothing more than a way to lock customers into a proprietary world while locking out vendors such as HP and IBM.
Cisco begs to differ. Rob Lloyd, executive vice president designate, Worldwide Operations for Cisco, explained that Cisco has "built an open ecosystem of industry leaders" in support of the UCS even going as far as to refer to UCS supporters as a "dream team of capable partners."
Cisco is collaborating with a wide range of hardware and software vendors to develop systems and applications that work with the platform. Specifically, Cisco is teaming up with technology partners BMC Software, EMC, Emulex, Intel, Microsoft, NetApp, Novell, Oracle, QLogic, Red Hat, and VMware and has expanded strategic relationships with Accenture, CSC, Tata Consultancy Services (TCS), and Wipro.
Noticeably absent from the partner list are the server vendors. However, Lloyd told media and analysts in last week's UCS conference call that Cisco does not view the UCS as a blade server.
"The UCS will be shipped and configured as a system. That's why we don't think we're competing on a blade platform, but on a new system form factor," he said.
Labels:
Brocade,
Cisco,
UCS,
Unified Computing System,
Virtualization