Geekzone: technology news, blogs, forums


magu

Professional yak shaver
1599 posts

Uber Geek
+1 received by user: 7

Trusted
BitSignal
Lifetime subscriber

#71155 5-Nov-2010 11:44
Send private message

Hi guys,

Here at the company I work for we're reaching our limitations of what we can do WITHOUT shared storage and have decided to step up with a SAN (and potentially a Blade system in a year's time).

I've been talking to some vendors (Dell and EMC this week, HP and IBM next week) to assess our options but, as expected, I'm only hearing their side of the stories.

I'd like to hear from people who were in this situation and the choices that were made.

Was the chosen vendor's support good? What made you choose that vendor? Are you (or your team) happy with the purchase? Anything (good or bad) that stands out?

My long-term plan involves Blades as well, but the first stop will be a SAN . But if you also got a Blade, chip in with what you reckon we should keep in mind when shopping for one.

And last, but not least: thanks for any feedback posted here. Smile 




"Roads? Where we're going, we don't need roads." - Doc Emmet Brown

Ragnor
8279 posts

Uber Geek
+1 received by user: 585

Trusted

  #400770 5-Nov-2010 14:02
Send private message

What sort of budget are you working with or aiming for?

What's the primary usage for the storage: iSCSI targets for VMs, general file storage, etc.?

If your budget is pretty tight you don't have to go directly to blades or enterprise SANs; there are some very reasonable 1U and 2U business-grade NAS servers that support dual 1Gbit networking, RAID5, iSCSI targets, etc.

It really comes down to what you are using it for, what performance characteristics need to be met, and price/cost.



magu

Professional yak shaver
1599 posts

Uber Geek
+1 received by user: 7

Trusted
BitSignal
Lifetime subscriber

  #400787 5-Nov-2010 14:27
Send private message

Ragnor: What sort of budget are you working with or aiming for?

What's the primary usage for the storage: iSCSI targets for VMs, general file storage, etc.?

If your budget is pretty tight you don't have to go directly to blades or enterprise SANs; there are some very reasonable 1U and 2U business-grade NAS servers that support dual 1Gbit networking, RAID5, iSCSI targets, etc.

It really comes down to what you are using it for, what performance characteristics need to be met, and price/cost.


We're eyeing the $50k SAN.

Usage is gonna be both VM storage and file storage (accessed from the VMs).

Blade budget is non-existent yet since we're not looking to get it at this time (if all goes well, 12-24 months).

I tried doing some IOPS requirement checks but all I had to work from were the numbers from vCenter (~1000 per server at peak time). Not sure if I am to believe them or not.

I DO know that IO is my bottleneck at the moment, as I keep seeing non-zero iowait values inside the VMs.
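Roughly, the sizing maths I've been sketching from those vCenter numbers looks like this. It's only a back-of-the-envelope spindle count: the four-server total, the 70/30 read/write split, and the ~175 IOPS per 15k spindle are assumptions on my part, and the RAID write penalties are the usual textbook values:

```python
import math

def disks_needed(peak_iops, read_fraction, raid_write_penalty, per_disk_iops):
    """Estimate spindle count: a read costs 1 backend IO, a write costs
    `raid_write_penalty` backend IOs (RAID10 = 2, RAID5 = 4, RAID6 = 6)."""
    write_fraction = 1.0 - read_fraction
    backend_iops = peak_iops * (read_fraction + write_fraction * raid_write_penalty)
    return math.ceil(backend_iops / per_disk_iops)

# ~1000 IOPS per server at peak (from vCenter), assuming 4 servers, a
# 70/30 read/write mix, and a 15k disk good for ~175 IOPS:
print(disks_needed(4 * 1000, 0.70, 2, 175))  # spindles needed on RAID10
print(disks_needed(4 * 1000, 0.70, 4, 175))  # same load on RAID5
```

The point being that the write penalty, not raw capacity, tends to drive the disk count.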

 




"Roads? Where we're going, we don't need roads." - Doc Emmet Brown

exportgoldman
1202 posts

Uber Geek
+1 received by user: 3

Trusted

  #400892 5-Nov-2010 17:31
Send private message

magu:

We're eyeing the $50k SAN.

Usage is gonna be both VM storage and file storage (accessed from the VMs).

Blade budget is non-existent yet since we're not looking to get it at this time (if all goes well, 12-24 months).

I tried doing some IOPS requirement checks but all I had to work from were the numbers from vCenter (~1000 per server at peak time). Not sure if I am to believe them or not.

I DO know that IO is my bottleneck at the moment, as I keep seeing non-zero iowait values inside the VMs.

 


I've put in 3 SAN VM clusters in the last 9 months and have found the main bottleneck for virtualization is the storage subsystem. Microsoft's internal IT found the same thing: for every compute rack they deployed, they had to deploy 9 storage racks!

I wouldn't bother with blade systems unless you are deploying a LOT of blades, as a couple of cheap HP dual quad-core boxes with 48GB-96GB of RAM will handle 30-70 VMs with ease.

My thinking on this is that you don't need redundancy within your hypervisor hosts because you have redundant hypervisors, so get the HP/IBM servers without all the flash monitoring and get twice as many of them; then you can take the cheaper non-24/7 warranties because you have extra servers sitting in the rack...

We have deployed 2 x MSA2000 G2 SAS dual-controller SANs with 2.5" SAS drives, and one with 3.5" 2TB SATA drives, and found them to be problematic: things like taking 100+ hours to expand a RAID array, or the HP VSS hardware provider crashing regularly.

Gen-i's cloud outage last week was because of their MSA2000 problems.

IBM has just brought out their new range of SANs, and Hitachi kit is really good, but pricey.

I would also have a look at the QNAP range, as they take bare drives, so you are not paying 3-5x more for an HP/IBM/Hitachi-branded drive tray.

Also, I would consider Hyper-V: the Datacenter edition allows you to run an unlimited number of Windows Server Standard instances on the host, which works out to be an excellent cost saving.





Tyler - Parnell Geek - iPhone 3G - Lenovo X301 - Kaseya - Great Western Steak House, these are some of my favourite things.



magu

Professional yak shaver
1599 posts

Uber Geek
+1 received by user: 7

Trusted
BitSignal
Lifetime subscriber

  #400905 5-Nov-2010 18:18
Send private message

exportgoldman:
magu:

We're eyeing the $50k SAN.

Usage is gonna be both VM storage and file storage (accessed from the VMs).

Blade budget is non-existant yet since we're not looking to get it at this time (if all goes well, 12-24 months).

I tried doing some IOPS requirement checks but all I had to work from were the numbers from vCenter (~1000 per server at peak time). Not sure if I am to believe them or not.

I DO know that IO is my bottleneck at the moment, as I keep getting greater than 0 IOWAIT values inside the VMs.

 


I've put in 3 SAN VM clusters in the last 9 months and have found the main bottleneck for virtualization is the storage subsystem. Microsoft's internal IT found the same thing: for every compute rack they deployed, they had to deploy 9 storage racks!

I wouldn't bother with blade systems unless you are deploying a LOT of blades, as a couple of cheap HP dual quad-core boxes with 48GB-96GB of RAM will handle 30-70 VMs with ease.

My thinking on this is that you don't need redundancy within your hypervisor hosts because you have redundant hypervisors, so get the HP/IBM servers without all the flash monitoring and get twice as many of them; then you can take the cheaper non-24/7 warranties because you have extra servers sitting in the rack...

We have deployed 2 x MSA2000 G2 SAS dual-controller SANs with 2.5" SAS drives, and one with 3.5" 2TB SATA drives, and found them to be problematic: things like taking 100+ hours to expand a RAID array, or the HP VSS hardware provider crashing regularly.

Gen-i's cloud outage last week was because of their MSA2000 problems.

IBM has just brought out their new range of SANs, and Hitachi kit is really good, but pricey.

I would also have a look at the QNAP range, as they take bare drives, so you are not paying 3-5x more for an HP/IBM/Hitachi-branded drive tray.

Also, I would consider Hyper-V: the Datacenter edition allows you to run an unlimited number of Windows Server Standard instances on the host, which works out to be an excellent cost saving.



That certainly looks similar to the problems we have: storage IO causes load on the VMs.

In our case, though, all our VMs are Linux-based. Currently they have PostgreSQL and Apache running on the same VM, but I'm planning on splitting that up and putting them on different tiers of storage.

The cheap-server thing does sound interesting; we have a similar approach. The main thing on our end is to avoid licences as much as possible. Everything we run is open source, so Hyper-V is not an option.

EMC showed me a solution with 2 trays (1 full of 15k FC and the other with cheap SATA) that would pretty much cover what we want, but at a higher cost than Dell's one-size-fits-all 15k SAS approach.

QNAP looks more like a NAS than a professional SAN; it didn't inspire much confidence.




"Roads? Where we're going, we don't need roads." - Doc Emmet Brown

insane
3324 posts

Uber Geek
+1 received by user: 1006

ID Verified
Trusted
2degrees
Subscriber

  #401282 7-Nov-2010 03:24
Send private message

I've been playing with just about every different Dell EqualLogic SAN they have for the past 9 months or so and can't rate them highly enough. I also have experience with EMC FC SANs.

They come with all the features out of the box, with no need to license each feature as with many EMC/IBM SANs; they have impressive monitoring/graphing and are super easy to install.

You can mix disk types, RAID levels, and 1Gbit and 10Gbit SANs together, and let the built-in smarts auto-load-balance the most-used blocks onto the faster disk trays.

Of course they also support VAAI with VMware 4.1, so tasks such as cloning and template deployment can be done at the storage level and not through the network = really FAST. They also have their own VMware multipath driver, much like EMC's PowerPath, which flies compared to Round Robin.

But as always with SANs, you HAVE to buy based on the IOPS you need, so don't be tempted to purchase the largest disk options as you'll only end up spending more in the long run. Also note that if you're running RAID10, expect usable storage to be half of the raw storage.
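To put rough numbers on that RAID10 point, here's a quick usable-capacity comparison. These are just the textbook formulas; they ignore hot spares and vendor right-sizing, and the 16-bay tray of 600GB drives is only an example configuration:

```python
def usable_tb(disks, disk_tb, raid_level):
    """Rough usable capacity; ignores hot spares, vendor right-sizing
    and filesystem overhead."""
    if raid_level == "raid10":
        return disks * disk_tb / 2      # mirrored pairs: half of raw
    if raid_level == "raid5":
        return (disks - 1) * disk_tb    # one disk's worth of parity
    if raid_level == "raid6":
        return (disks - 2) * disk_tb    # two disks' worth of parity
    raise ValueError(raid_level)

# A hypothetical 16-bay tray of 600GB (0.6TB) 15k drives:
for level in ("raid10", "raid5", "raid6"):
    print(level, usable_tb(16, 0.6, level))
```

Which is why a RAID10 quote always looks expensive per usable TB, even before you factor in the better write IOPS.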

While I'm at it, and since you asked: I've had no problems with the Dell M1000e blade chassis and M610 blades. Again, very easy to work with, and you have many different networking options with the 6 auxiliary slots on the back of them.

Can show you them next time you're at your DR DC.


- no I don't work for Dell, just like the kit.

magu

Professional yak shaver
1599 posts

Uber Geek
+1 received by user: 7

Trusted
BitSignal
Lifetime subscriber

  #401292 7-Nov-2010 08:23
Send private message

insane: I've been playing with just about every different Dell EqualLogic SAN they have for the past 9 months or so and can't rate them highly enough. I also have experience with EMC FC SANs.

They come with all the features out of the box, with no need to license each feature as with many EMC/IBM SANs; they have impressive monitoring/graphing and are super easy to install.

You can mix disk types, RAID levels, and 1Gbit and 10Gbit SANs together, and let the built-in smarts auto-load-balance the most-used blocks onto the faster disk trays.

Of course they also support VAAI with VMware 4.1, so tasks such as cloning and template deployment can be done at the storage level and not through the network = really FAST. They also have their own VMware multipath driver, much like EMC's PowerPath, which flies compared to Round Robin.

But as always with SANs, you HAVE to buy based on the IOPS you need, so don't be tempted to purchase the largest disk options as you'll only end up spending more in the long run. Also note that if you're running RAID10, expect usable storage to be half of the raw storage.

While I'm at it, and since you asked: I've had no problems with the Dell M1000e blade chassis and M610 blades. Again, very easy to work with, and you have many different networking options with the 6 auxiliary slots on the back of them.

Can show you them next time you're at your DR DC.


- no I don't work for Dell, just like the kit.


I'd love to chat sometime next week or after, then. I did like Dell's all-inclusive licensing as opposed to EMC's extensive list of optional extras.

One of the points against Dell is that, according to our network engineer, their post-sales support is crap. I haven't had to rely on support from Dell yet, so I can't verify. Have you had any experience with their support team? 




"Roads? Where we're going, we don't need roads." - Doc Emmet Brown

 
 
 

insane
3324 posts

Uber Geek
+1 received by user: 1006

ID Verified
Trusted
2degrees
Subscriber

  #401509 7-Nov-2010 22:53
Send private message

magu:
insane: I've been playing with just about every different Dell EqualLogic SAN they have for the past 9 months or so and can't rate them highly enough. I also have experience with EMC FC SANs.

They come with all the features out of the box, with no need to license each feature as with many EMC/IBM SANs; they have impressive monitoring/graphing and are super easy to install.

You can mix disk types, RAID levels, and 1Gbit and 10Gbit SANs together, and let the built-in smarts auto-load-balance the most-used blocks onto the faster disk trays.

Of course they also support VAAI with VMware 4.1, so tasks such as cloning and template deployment can be done at the storage level and not through the network = really FAST. They also have their own VMware multipath driver, much like EMC's PowerPath, which flies compared to Round Robin.

But as always with SANs, you HAVE to buy based on the IOPS you need, so don't be tempted to purchase the largest disk options as you'll only end up spending more in the long run. Also note that if you're running RAID10, expect usable storage to be half of the raw storage.

While I'm at it, and since you asked: I've had no problems with the Dell M1000e blade chassis and M610 blades. Again, very easy to work with, and you have many different networking options with the 6 auxiliary slots on the back of them.

Can show you them next time you're at your DR DC.


- no I don't work for Dell, just like the kit.


I'd love to chat sometime next week or after, then. I did like Dell's all-inclusive licensing as opposed to EMC's extensive list of optional extras.

One of the points against Dell is that, according to our network engineer, their post-sales support is crap. I haven't had to rely on support from Dell yet, so I can't verify. Have you had any experience with their support team? 


I have actually, and it's been getting better recently. Their 'Pro Support' has had a fairly large overhaul, and the last few calls I've made to them have been outstanding. If you had asked me six months or a year ago I would have said they were terrible, but things have changed for the better.

When it comes to SANs they offer 'mission critical' support, meaning that they will have any parts fixed/replaced within 4 hours, not just replaced 4 hours after they've made you jump through hoops and supply debug info etc., so it's pretty good.

Obviously I cannot speak of what it's like for smaller companies with no history with them but all I do is say who I work for and my name and they typically do whatever we ask :)

Regs
4066 posts

Uber Geek
+1 received by user: 206

Trusted
Snowflake

  #401517 7-Nov-2010 23:43
Send private message

I've been pretty impressed with my experiences of the NetApp SANs; I've decided to upgrade a 4-year-old unit to a newer model in the last few weeks. They seem to be rock solid as far as uptime goes. I hear many horror stories about the MSA (maybe sometimes available?) products... Of the HP LeftHand (P4000) series, however, I have heard many good things, and one of my clients has dropped 4 P4000 units under their VMware platform.

One feature of the NetApp I particularly like is the dedupe, which works quite well for virtualised systems; you can potentially claw back a bunch of expensive storage with it.

The NetApp can also do LUNs, CIFS and NFS, and can run either iSCSI or FC, so they are quite flexible.

If you want someone to contact re NetApp, send me a PM and i can send you an email with a contact.




magu

Professional yak shaver
1599 posts

Uber Geek
+1 received by user: 7

Trusted
BitSignal
Lifetime subscriber

  #401543 8-Nov-2010 09:11
Send private message

insane:
magu:
insane: I've been playing with just about every different Dell EqualLogic SAN they have for the past 9 months or so and can't rate them highly enough. I also have experience with EMC FC SANs.

They come with all the features out of the box, with no need to license each feature as with many EMC/IBM SANs; they have impressive monitoring/graphing and are super easy to install.

You can mix disk types, RAID levels, and 1Gbit and 10Gbit SANs together, and let the built-in smarts auto-load-balance the most-used blocks onto the faster disk trays.

Of course they also support VAAI with VMware 4.1, so tasks such as cloning and template deployment can be done at the storage level and not through the network = really FAST. They also have their own VMware multipath driver, much like EMC's PowerPath, which flies compared to Round Robin.

But as always with SANs, you HAVE to buy based on the IOPS you need, so don't be tempted to purchase the largest disk options as you'll only end up spending more in the long run. Also note that if you're running RAID10, expect usable storage to be half of the raw storage.

While I'm at it, and since you asked: I've had no problems with the Dell M1000e blade chassis and M610 blades. Again, very easy to work with, and you have many different networking options with the 6 auxiliary slots on the back of them.

Can show you them next time you're at your DR DC.


- no I don't work for Dell, just like the kit.


I'd love to chat sometime next week or after, then. I did like Dell's all-inclusive licensing as opposed to EMC's extensive list of optional extras.

One of the points against Dell is that, according to our network engineer, their post-sales support is crap. I haven't had to rely on support from Dell yet, so I can't verify. Have you had any experience with their support team? 


I have actually, and it's been getting better recently. Their 'Pro Support' has had a fairly large overhaul, and the last few calls I've made to them have been outstanding. If you had asked me six months or a year ago I would have said they were terrible, but things have changed for the better.

When it comes to SANs they offer 'mission critical' support, meaning that they will have any parts fixed/replaced within 4 hours, not just replaced 4 hours after they've made you jump through hoops and supply debug info etc., so it's pretty good.

Obviously I cannot speak of what it's like for smaller companies with no history with them but all I do is say who I work for and my name and they typically do whatever we ask :)


That's good to hear. They are (so far, at least) looking like the most cost-effective solution, since they include all of the software and offer a much leaner SAN than EMC (4-6U vs. 11U), which also counts towards our half-rack allocation. EMC's is more expandable in the long run, but that comes at a cost.




"Roads? Where we're going, we don't need roads." - Doc Emmet Brown

magu

Professional yak shaver
1599 posts

Uber Geek
+1 received by user: 7

Trusted
BitSignal
Lifetime subscriber

  #401546 8-Nov-2010 09:13
Send private message

Regs: I've been pretty impressed with my experiences of the NetApp SANs; I've decided to upgrade a 4-year-old unit to a newer model in the last few weeks. They seem to be rock solid as far as uptime goes. I hear many horror stories about the MSA (maybe sometimes available?) products... Of the HP LeftHand (P4000) series, however, I have heard many good things, and one of my clients has dropped 4 P4000 units under their VMware platform.

One feature of the NetApp I particularly like is the dedupe, which works quite well for virtualised systems; you can potentially claw back a bunch of expensive storage with it.

The NetApp can also do LUNs, CIFS and NFS, and can run either iSCSI or FC, so they are quite flexible.

If you want someone to contact re NetApp, send me a PM and i can send you an email with a contact.


It can't hurt to check with them either. Sent you a PM now.

A point to note: we are moving from VMware vSphere to Citrix XenServer, using free licenses as well (HA is not yet a requirement).




"Roads? Where we're going, we don't need roads." - Doc Emmet Brown

BartManGeek
187 posts

Master Geek


  #401722 8-Nov-2010 15:50
Send private message

Two years ago we put in an HP EVA 4000 SAN (9TB initially, plus another 9TB 12 months later) and connected it to a C7000 blade enclosure running a 5 x BL460c ESX cluster.

Slight overkill for our current workload but we have no concerns for future expansion.

While we had some initial issues with a blade (2, actually), the service was second to none and there was less than 2 minutes of downtime per blade; the SAN has been rock solid.

We looked at Dell, IBM, EMC, LeftHand, HP MSA and HP EVA. At the end of the day HP put together a solution that was workable. Dell didn't put in an offer, IBM blades gave me the heebie-jeebies, and we had major concerns about EMC's SAN platform.





Rural Geek - Technology Solutions

"On two occasions I have been asked [by members of Parliament!], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." -- Charles Babbage

 
 
 

magu

Professional yak shaver
1599 posts

Uber Geek
+1 received by user: 7

Trusted
BitSignal
Lifetime subscriber

  #401732 8-Nov-2010 16:07
Send private message

Had a chat with an IBM guy that actually talked a lot about NetApp (and IBM's NetApp options). Quite interested to see how it pans out (we have IBM gear at our primary DC).

EMC is not cheap and seems to be geared mostly towards VMware's offering. I like NetApp's more unbiased approach, but that's nitpicking.




"Roads? Where we're going, we don't need roads." - Doc Emmet Brown

magu

Professional yak shaver
1599 posts

Uber Geek
+1 received by user: 7

Trusted
BitSignal
Lifetime subscriber

  #414092 7-Dec-2010 15:52
Send private message

Just to update this thread:

We decided to go with the P4500 Virtualization SAN from HP (LeftHand). It includes a 10-licence pack of their Virtual SAN Appliance (VSA), which we can put on other servers to set up replication to.

The VSAs are limited to 10TB in size, but that should work fine for us. If we go over that at the secondary facilities, we should be at a stage where a second SAN won't be such a big purchase in the grand scheme of things.

The VSA was a big factor when comparing all the options, as it lets us leverage the storage server we already have at our secondary DC (a DL180 G6 with 8 x 2TB disks), even though it does not have NFS out of the box.

We'll be putting in dual-switching as well as quad-nic cards for that extra redundancy. Looks like a great all-around solution.




"Roads? Where we're going, we don't need roads." - Doc Emmet Brown

exportgoldman
1202 posts

Uber Geek
+1 received by user: 3

Trusted

  #414131 7-Dec-2010 17:05
Send private message

 

Just thought I'd follow up with the latest horror stories for the MSA2000 G2s we are using:

1. Active/active controllers: one hangs and the other doesn't take over. The fault has been logged for 30 days on a P1 24/7 support contract and replicated in Palo Alto by an L3 engineer.

2. The VSS hardware provider hangs the SAN on a Data Protection Manager 2010 snapshot.

3. RAID expansion from 10 x 2TB drives to 11 x 2TB drives is still expected to take 200 days. We have another tray of drives and controllers arriving from HP on loan to migrate the data, because if a drive dies during an expansion the rebuild will not happen.

HP have been very helpful and we are working directly with the Level 3 engineers at HP's HQ, probably getting firmware/code cut to resolve the bugs we have found, but that's not something we expected to have to do on a SAN product from HP.

I don't have experience with the G1/G3 products, but the G2 is a lemon.

Now, the Storwize V7000 from IBM... that looks like a good product.




Tyler - Parnell Geek - iPhone 3G - Lenovo X301 - Kaseya - Great Western Steak House, these are some of my favourite things.

magu

Professional yak shaver
1599 posts

Uber Geek
+1 received by user: 7

Trusted
BitSignal
Lifetime subscriber

  #414417 8-Dec-2010 08:33
Send private message

One thing I noticed while trialling the VSA a couple of years back was that you needed a minimum of three nodes for failover to work. With two, whenever the master went down the slave would kick into emergency mode and block all connections to maintain integrity.

Sounds like they applied the same thought on the MSA active/active management.
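For what it's worth, that two-vs-three-node behaviour falls straight out of the usual strict-majority quorum rule. This is a generic sketch of the rule, not LeftHand's actual implementation:

```python
def has_quorum(nodes_alive, cluster_size):
    """Strict-majority rule: a surviving group may keep serving IO only
    if it contains more than half of the configured nodes."""
    return 2 * nodes_alive > cluster_size

# Two-node cluster: lose one and the survivor holds exactly half, not a
# majority, so it has to block IO rather than risk split-brain.
print(has_quorum(1, 2))  # False
print(has_quorum(2, 3))  # True  (three nodes tolerate one failure)
```

With three nodes a single failure still leaves a majority, which is why the minimum is three rather than two.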




"Roads? Where we're going, we don't need roads." - Doc Emmet Brown
