Friday, April 24, 2009

Hype vs. reality – A Q&A with Wells Fargo's head of IT

April 24, 2009 -- I recently had the opportunity to speak with Scott Dillon, head of technology infrastructure services at Wells Fargo & Co. The discussion covered a range of topics, including the bank's storage priorities and needs, how he plans to extend the life of his legacy gear through storage virtualization, and his take on emerging technologies like solid-state disk (SSD) drives and Fibre Channel over Ethernet (FCoE).

Like many large enterprise organizations, Wells Fargo is dealing with massive amounts of storage and all of the management, migration and data protection tasks that come with it. Dillon says he has about 5PB of storage deployed in production. Storage infrastructures of that size require a pragmatic management approach. That's why the Wells Fargo IT philosophy is "standardize and optimize," while keeping clear of IT's bleeding edge.

To that end, Dillon's main goals are driving up utilization and enhancing availability, and storage virtualization is the linchpin of that process.

"Virtualization is something that we are committed to and we are deploying it across our environment. It helps our cost models because it allows us to have heterogeneous [storage] providers behind virtualization devices. With virtualization, we don't have to throw out one infrastructure to bring in a new one," says Dillon. "We are big on leveraging what we already have."

He says virtualization has helped streamline a number of complex tasks, including capacity provisioning, data migration and storage tiering. He also credits storage virtualization with speeding service delivery to customers.

As for his take on vendors, Dillon would not name his storage suppliers, but he does hint at what Wells Fargo is looking for going forward.

"A lot of the large storage providers are starting to make their play into the end-to-end space. They are putting it all together, which is how we look at the big picture. We would like to see these organizations driving their products toward IEEE standards so that we don't get locked in [to any one vendor]," he says.

Dillon stresses the importance of the customer-provider relationship in his decision-making process. "The quality, availability and resiliency of a product in an industrial enterprise setting are incredibly important to us. I want the vendor engaged and I want the sales team to have as much incentive to deliver on their commitment as they do in selling me their next product. If the product is good and you deliver on your commitment you are going to sell me a lot more stuff," he says.

"What's amazing to me is how many people are just focused on the sale. I need to know they are going to be there for the long term. When times are tough it's about who is going to be there focused on your optimization and driving up utilization," Dillon says.

Dillon is also keeping an eye on several emerging storage technologies.

On SSDs: "There is a lot of initial hype. The value proposition is there. What's intriguing is reduced power consumption. But there are a lot of questions. How many times can you write to the drive? What about availability? I don't see [SSDs] as something we would deploy in production in the near future, but the promise is there and we see it."

On data de-duplication: "We have deployed some data de-duplication technologies in our environment. We are realizing some very good lift in [our de-dupe implementation]. There is a lot of promise, but the technology needs to mature."

On FCoE: "We continue to watch it very closely. We are, in general, very interested in any technology that fits with our pragmatic and customer-centric philosophy. Directionally, I think the concept of unified networking is great."

As these technologies mature, Dillon will weave them into his infrastructure if they make business sense.

"It all starts and ends with the customer experience. You can't do technology for the sake of doing technology. It has to improve the customer's experience," he says.

Tuesday, April 21, 2009

VMware's vSphere of influence

April 21, 2009 -- Today's release of VMware's vSphere 4 operating system – a new OS for building internal clouds – has brought with it a tsunami of support from dozens of storage vendors.

The vSphere 4 OS aggregates and manages large pools of infrastructure resources – processors, storage and networking – as a dynamic operating environment. VMware claims vSphere 4 will "bring the power of cloud computing to the datacenter, slashing IT costs while dramatically increasing IT responsiveness." VMware also touts vSphere as a path to delivering cloud services that are compatible with customers' internal cloud infrastructures. VMware plans to build in support for dynamic federation between internal and external clouds, enabling "private" cloud environments that span multiple datacenters and/or cloud providers.

Big, bad virtual machines

Using the vSphere OS, users can build bigger, faster virtual computing environments. According to VMware's published specs, the platform can pool together up to:

32 physical servers with up to 2048 processor cores
1,280 virtual machines
32TB of RAM
16PB of storage
8,000 network ports

It also creates bigger, faster virtual machines (VMs) with up to:

2x the number of virtual processors per virtual machine (from 4 to 8)
2.5x more virtual NICs per virtual machine (from 4 to 10)
4x more memory per virtual machine (from 64GB to 255GB)
3x increase in network throughput (from 9Gbps to 30Gbps)
3x increase in the maximum recorded I/O operations per second (to over 300,000)
New maximum recorded number of transactions per second - 8,900

Data protection and migration

VMware also claims vSphere protects against hardware failures with zero downtime and zero data loss via VMware Fault Tolerance, and minimizes planned downtime from storage maintenance and migrations with VMware Storage VMotion, which provides live migration of virtual machine disk files across heterogeneous networked storage types.

vSphere 4 also features integrated disk-based backup and recovery for all applications via VMware Data Recovery and VMware vStorage Thin Provisioning, which keeps capacity-hungry VMs in check.
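The thin provisioning idea mentioned above can be illustrated with a minimal Python sketch (the `ThinVolume` class and its block API are hypothetical, not part of any VMware product): the volume presents its full capacity up front, but backing blocks are allocated only on first write, which is what keeps capacity-hungry VMs in check.

```python
class ThinVolume:
    """Toy thin-provisioned volume: capacity is promised up front,
    but backing storage is allocated only when a block is first written."""

    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks  # advertised size
        self.allocated = {}                   # virtual block number -> data

    def write(self, block_no, data):
        if not 0 <= block_no < self.virtual_blocks:
            raise IndexError("write past end of volume")
        self.allocated[block_no] = data       # allocate on first write

    def read(self, block_no):
        # unwritten blocks read back as zeros, as on a real thin LUN
        return self.allocated.get(block_no, b"\x00")

    @property
    def used(self):
        return len(self.allocated)            # blocks actually consumed

vol = ThinVolume(virtual_blocks=1_000_000)    # presents ~1M blocks to the VM
vol.write(42, b"x")
assert vol.used == 1                          # only one block is actually backed
assert vol.read(0) == b"\x00"
```

The trade-off, of course, is that the array can be oversubscribed, which is why thin provisioning is usually paired with utilization monitoring.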

Storage vendors on board

The announcements are coming fast and furious from the storage community as, so far, 3PAR, Akorri, CA, Compellent Technologies, CommVault, Dell, Double-Take Software, EMC, Emulex, FalconStor Software, Hitachi Data Systems, HP, IBM, LSI, NetApp, Nexenta, StoneFly, Sun Microsystems, Symantec and Vizioncore have all pledged support for vSphere 4.

Read on for the details we have so far…

3PAR

3PAR's InServ Storage Servers are on the VMware Hardware Compatibility List (HCL) for VMware vSphere 4. In addition, 3PAR and VMware are investing in joint engineering projects. For example, 3PAR already supports the VMware vStorage initiative and the adaptive queuing technology that became available in VMware Infrastructure 3.5 and is included in VMware vSphere 4.

Akorri

Akorri's BalancePoint software will support VMware vSphere by the end of 2009. BalancePoint is available on a VMware certified virtual appliance and assists in cross-domain virtualized data center management, managing virtual and physical server and storage infrastructure from a single console.

Compellent Technologies

Compellent Technologies announced that its Storage Center SAN supports VMware vSphere. Compellent's Storage Center has completed the VMware Hardware Certification Program testing criteria and is now listed on the VMware HCL for use with vSphere.

EMC

EMC announced new high-availability advancements for next-generation virtual data centers with the new EMC PowerPath/VE software. The PowerPath/VE software provides path management, load balancing and fail-over capabilities for VMware vSphere 4.

Emulex

Emulex's LightPulse host bus adapters (HBAs) and converged network adapters (CNAs) are fully supported with VMware in-box drivers as part of VMware vSphere 4. The LightPulse 8Gbps Fibre Channel HBAs and 10Gbps Fibre Channel over Ethernet (FCoE) CNAs deliver more than double the IOPS performance in VMware vSphere 4 environments over the previous release, according to Emulex.

FalconStor Software

FalconStor Software's NSS-S12 storage array supports vSphere and is on the vSphere HCL. FalconStor's Network Storage Server (NSS) technology integrates storage virtualization and provisioning across multiple disk arrays and connection protocols to create a scalable iSCSI or Fibre Channel SAN.

HP

Hewlett-Packard announced the integration of vSphere 4 into its HP Adaptive Infrastructure (AI) portfolio. The interoperability of VMware vSphere 4 with HP's portfolio includes hardware compatibility for a range of HP ProLiant and BladeSystem servers and StorageWorks systems and software integration of HP's Insight software with vSphere 4.

NetApp

NetApp also announced the integration and certification of its storage platforms with vSphere 4. NetApp storage platforms and software products such as SANscreen VM Insight and MultiStore are certified for vSphere 4 and available now. The NetApp Virtualization Guarantee Program for vSphere is also available immediately.

StoneFly

IP SAN maker StoneFly announced completion of VMware vSphere certification across its entire SAN product line. StoneFly IP SANs supporting VMware vSphere, including the StoneFly Voyager, Integrated Storage Concentrator and OptiSAN product lines, are now available.

Tuesday, April 14, 2009

Symmetrix V-Max: EMC’s big play for big data centers

April 14, 2009 -- There has been a fair amount of speculation that EMC would launch a new Symmetrix DMX-5 system, but while the company's latest high-end array shares the Symmetrix moniker, it's a completely different platform with an architecture built for virtualized data centers.

InfoStor's coverage of the EMC Virtual Matrix Architecture and Symmetrix V-Max Storage System launch outlines the technology, EMC's plans and how it all relates to cloud computing.

The architecture combines scale-up and scale-out capabilities with centralized management and (forthcoming) automated tiering of SSDs, Fibre Channel and SATA drives. The Symmetrix V-Max is significantly bigger and faster than the DMX-4, and it has been designed specifically to support enormous cloud computing and virtual data center infrastructures.

David Vellante, co-founder and contributor to The Wikibon Project, says customers should take this announcement very seriously, especially if they have existing Symmetrix processes in place.

"To the extent EMC delivers on its vision, the V-Max will bring incremental strategic value to many customers and will represent a longer term investment platform. Specifically, the possibility of doing automated tiered storage within a federated Symmetrix infrastructure could be very cost competitive and advantageous if EMC can ship enough volume and – very importantly – ship software that automates the placement of data on the most cost-effective tier," says Vellante. "This software is not here today and that's important."

The software – EMC's Fully Automated Storage Tiering (FAST) technology – is expected to debut later this year, according to EMC. It is touted as a feature that will automatically move data to appropriate tiers of storage within the Virtual Matrix Architecture. This is especially significant as EMC tries to speed the adoption of solid-state disk (SSD) drives as "tier zero" storage for frequently accessed data in high performance applications.

"The problem folks are having is they really don't have an automated way to move data between T1 and T2. So if EMC can give them a way to do that all within a single architecture from Tier 0 down to Tier 3 with high capacity SATA that gets interesting. But again, the software to do this is not here today, the [Virtual Matrix Architecture announcement] is the first step," says Vellante.
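The automated tiering Vellante describes boils down to a placement policy: rank data by access frequency and put the busiest extents on the fastest (and most expensive) tier. Here is a minimal, hypothetical sketch of such a policy in Python; it is not EMC's FAST algorithm, just an illustration of the concept under the assumption that per-extent I/O counts are available.

```python
def place_extents(extents, tier_capacities):
    """extents: list of (extent_id, io_count) pairs.
    tier_capacities: slot counts ordered hottest (tier 0, e.g. SSD)
    to coldest (e.g. high-capacity SATA).
    Returns {extent_id: tier_index}: busiest extents land on the fastest tier."""
    ranked = sorted(extents, key=lambda e: e[1], reverse=True)  # hottest first
    placement, idx = {}, 0
    for tier, capacity in enumerate(tier_capacities):
        for _ in range(capacity):
            if idx >= len(ranked):
                return placement
            placement[ranked[idx][0]] = tier
            idx += 1
    return placement

extents = [("e1", 900), ("e2", 50), ("e3", 400), ("e4", 5)]
placement = place_extents(extents, tier_capacities=[1, 2, 1])
# e1 (hottest) -> tier 0; e3 and e2 -> tier 1; e4 (coldest) -> tier 2
```

A production implementation would re-run this continuously with decayed I/O statistics and would weigh the cost of the migrations themselves, which is exactly the automation that Vellante notes was not shipping yet.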

The industry reaction is sure to come fast and furious as the details of V-Max reverberate through the storage landscape. Stay with InfoStor's coverage of the announcement as Editor-in-Chief Dave Simpson adds his two cents to the discussion.

We have also posted a V-Max Lab Review from Enterprise Strategy Group to our ESG Lab Validation section found here.

Tuesday, April 7, 2009

SNW: Day two recap

Brocade is now shipping an FCoE switch and adapters to OEMs, Symantec has added DR testing software to its product line via a partnership, and solid-state specialist Fusion-io just bagged close to $50 million in funding.

Day two of Storage Networking World was uneventful from a news perspective, but we were able to track down some industry insiders and SNIA members to explain some of this week's announcements.

First up, a keynote from Symantec's new CEO, Enrique Salem, during which he said:

"Stop buying storage."

Not a surprising statement when you consider it came from a software company, but Salem says data reduction technologies and better management can defray the cost of additional hardware through better utilization.

"In many companies there are differences in storage hardware, and often islands of storage. One department might have plenty of free storage while another is adding arrays," Salem told a standing-room crowd this morning. "You need to identify and reclaim what you've bought but aren't using. Find that orphan storage, and bring it home. The hardware vendors will tell you they can show you how your existing storage is being used. Remember, their ultimate goal is to sell you more hardware."

Salem says storage resource management (SRM), thin provisioning, data de-duplication, and intelligent archiving can all bring those orphans home.

On the cloud storage front, I was able to sit down with Storage Networking Industry Association Chairman Emeritus and member of the Board of Directors Vincent Franceschini to discuss the Association's formation of a Technical Work Group (TWG) for cloud storage.

"It has become very clear that we need to clarify the definitions and terminology surrounding cloud storage," said Franceschini. "We believe we can help the market overall by delivering reference models to describe different solutions and cloud frameworks."

He also said industry collaboration is a must if cloud storage is going to be a viable option for enterprise storage in the future.

"We are going to be collaborating with other industry groups. There is no way it is going to work if [cloud platforms] are not integrated," he said.

The SNIA has also set up a Google group in an effort to maintain a "public face" on the Cloud Storage TWG's work.

Monday, April 6, 2009

SNW: Day one recap

April 06, 2009 -- The Storage Networking World (SNW) conference is under way and the InfoStor team is in Orlando to keep you up-to-date on news and announcements from the show.

A few product announcements trickled out of SNW this morning, including FalconStor Software’s release of the Backup Accelerator option for its Virtual Tape Library (VTL) product, 3PAR’s launch of a quad-controller storage array for midrange customers, the debut of cloud storage services startup Zetta, and the availability of Netgear’s newest NAS/unified storage system with a cloud storage option for SMBs.

Speaking of the cloud – and that’s all we seem to be speaking about lately – the Storage Networking Industry Association (SNIA) today announced the creation of the Cloud Storage Technical Work Group (TWG), aimed at developing SNIA architectures and best practices for cloud storage technology. The initial TWG charter focuses on producing a set of specifications and driving consistency among interface standards across the various cloud storage efforts.

The Cloud Storage TWG is also soliciting proposals for standard interfaces and is looking to engage vendors and other “Cloud industry parties” in its efforts. The group plans to release a reference model for Cloud Storage with associated terminology definitions to aid in further work on the standards. Cloud service and storage interface definitions are expected in draft form later this year and anticipated to be adopted starting in 2010.

The SNIA is also refocusing its efforts on the IP storage front. The Association announced an expansion of the charter of the SNIA IP Storage Forum, which is reflected in its new name – the SNIA Ethernet Storage Forum (ESF). The ESF has been tasked with driving broad adoption of all Ethernet-connected storage networking solutions.

The ESF will consist of two Special Interest Groups - the iSCSI SIG and the NFS SIG. The iSCSI SIG will focus on continuing the IP Storage Forum agenda to evangelize the benefits and best practices related to iSCSI. Member companies include Compellent, Dell, HP, Intel, Microsoft, NEC, NetApp and Sun.

The new NFS SIG will be focused on NFS-based NAS solutions, particularly emerging technologies, such as pNFS. The founding members of the NFS SIG include EMC, NetApp, Panasas and Sun.

Additionally, the group also plans to form a Special Interest Group focused on the CIFS/SMB protocol and ecosystem.

Hifn made news with the launch of its BitWackr 250 and 255, which are aimed at server OEMs, Microsoft Partners and white-box server builders looking to add hardware-assisted data de-duplication and compression with thin provisioning to Windows Servers.

According to Hifn, BitWackr provides real-time, in-line de-dupe and compression, reducing the amount of data written to disk. The cards combine the company’s BitWackr block-based de-dupe software with a Hifn Express DR 250 PCI-x or 255 PCIe card that employs specialized hardware to perform data compression and de-dupe hashing operations.
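The de-dupe hashing that Hifn offloads to hardware can be sketched in a few lines of software. The following Python toy (all class and variable names are my own, not Hifn's) shows the basic inline technique: split incoming data into fixed-size blocks, hash each block, and store only blocks whose hash has not been seen before, compressing them on the way to disk.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # fixed-size blocks; real products may use variable-size chunking

class DedupeStore:
    """Toy inline block de-duplication with compression."""

    def __init__(self):
        self.blocks = {}  # block hash -> compressed block

    def write(self, data: bytes) -> list:
        """Split data into blocks; store only blocks not seen before.
        Returns the list of block hashes (the 'recipe' for the data)."""
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:          # new block: compress and keep it
                self.blocks[digest] = zlib.compress(block)
            recipe.append(digest)                  # duplicates cost only a hash entry
        return recipe

    def read(self, recipe: list) -> bytes:
        """Reassemble the original data from its block hashes."""
        return b"".join(zlib.decompress(self.blocks[h]) for h in recipe)

store = DedupeStore()
payload = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE  # three identical blocks + one unique
recipe = store.write(payload)
assert store.read(recipe) == payload
assert len(store.blocks) == 2  # only two unique blocks hit the disk
```

The hashing and compression in the inner loop are exactly the CPU-intensive operations a dedicated card like the Express DR can take off the host processor.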

The BitWackr 250 and 255 products are priced at $995 with general availability slated for the third quarter of this year.

InfoStor’s Editor-in-Chief, Dave Simpson, and I will be blogging/reporting from the conference all this week. Check out the InfoStor homepage for the latest industry news and analysis from SNW Orlando. There is some news from Symantec on the horizon and Brocade has called a press conference for tomorrow afternoon. Stay tuned…