Geekzone: technology news, blogs, forums


16 posts

Geek
+1 received by user: 4


Topic # 151698 2-Sep-2014 12:52

I have done a bit of a search, but I guess this is one of those questions where there is no right answer and everyone has an opinion, and all the information I have found is either old or not well related to the NZ market.

What I'm after is recommendations for a good, well-performing NAS/SAN to be used as shared disk for 3 HP DL385 G8 ESX servers (at this stage 40-50 hosts altogether).

Also, what connectivity would you recommend between the SAN/NAS and the servers?

Thanks for your help

2181 posts

Uber Geek
+1 received by user: 659

Subscriber

  Reply # 1120461 2-Sep-2014 13:25

We use an IBM V7000 with 8Gb FC to our five DL380s, with a mix of SSD, SAS and NL-SAS disk.

Hosting about 80 VMs with ESX.

If you wanted something slightly smaller than that maybe look at the V3700 or V5000.

SANs are not cheap, by the way... and make sure you have a good support contract for the life of the product.



2520 posts

Uber Geek
+1 received by user: 937

Subscriber

  Reply # 1120464 2-Sep-2014 13:28

What is your budget? What are your IO requirements? What storage capacity do you require? 

I'd look for something like the IBM DS3500 Express, or Dell/HP-equivalent, and use 8Gbps FC for your interface.
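For what it's worth, those IO and capacity questions can be framed with a back-of-envelope sizing sketch like the one below. The per-VM IOPS and capacity figures are purely illustrative assumptions, not numbers from the OP's environment:

```python
# Back-of-envelope SAN sizing for a small ESX cluster.
# All per-VM figures are illustrative assumptions, not measured values.

def size_estimate(vm_count, iops_per_vm=50, gb_per_vm=100, growth_factor=1.5):
    """Return (peak IOPS, usable TB with growth headroom) to shortlist arrays against."""
    total_iops = vm_count * iops_per_vm
    usable_tb = vm_count * gb_per_vm * growth_factor / 1000
    return total_iops, usable_tb

iops, tb = size_estimate(50)   # roughly the 50 guests in the original post
print(iops, tb)                # 2500 IOPS, 7.5 TB usable
```

Numbers like these are only a starting point for vendor conversations; real sizing should come from measuring the existing workload.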




Windows 7 x64 // i5-3570K // 16GB DDR3-1600 // GTX660Ti 2GB // Samsung 830 120GB SSD // OCZ Agility4 120GB SSD // Samsung U28D590D @ 3840x2160 & Asus PB278Q @ 2560x1440
Samsung Galaxy S5 SM-G900I w/Spark

4935 posts

Uber Geek
+1 received by user: 101

Trusted

  Reply # 1120844 2-Sep-2014 21:41

You could look at the EMC range, which is what I have been doing.

You could probably make do with this series:

https://store.emc.com/us/Product-Family/EMC-VNXe-Products/EMC-VNXe3200-Hybrid-Storage/p/VNE-VNXe3200-Hybrid-Storage

Start with GbE and, if that is not fast enough, upgrade the servers to support FC or 10GbE. But those are expensive, as you might expect.

It's a bit hard without knowing your space requirements, performance expectations, etc.

Otherwise, if you are on a lower budget, this Seagate might do the job:

http://www.seagate.com/external-hard-drives/network-storage/business/business-storage-4-bay-rackmount-nas/#specs

This is VMware ready.

Or, if reliability is very important, there is this Seagate model:

http://www.seagate.com/external-hard-drives/network-storage/business/business-storage-8-bay-rackmount-nas/

It has dual redundant power supplies, which should add to the reliability.
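To put the "start with GbE" suggestion in perspective, the raw line-rate arithmetic looks like this. It's a rough sketch that ignores protocol overhead, and multipathing across several links changes the picture:

```python
# Raw line rates for common storage interconnects, in MB/s (decimal).
# Ignores iSCSI/NFS protocol overhead, so real throughput will be lower.

def gbit_to_mbs(gbit):
    """Convert a Gbit/s data rate to MB/s."""
    return gbit * 1000 / 8

links = {
    "1GbE": gbit_to_mbs(1),     # 125 MB/s per link
    "10GbE": gbit_to_mbs(10),   # 1250 MB/s per link
    # 8Gb FC uses 8b/10b encoding, so the usable data rate is about
    # 800 MB/s per direction rather than a naive 1000 MB/s.
    "8Gb FC": 800.0,
}
print(links)
```

A single GbE link is easily saturated by a handful of busy VMs, which is why the usual advice is to move to 10GbE or FC (or aggregate several GbE links) as the guest count grows.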





System One: Popcorn Hour A200,  PS3 SuperSlim, NPVR and Plex Server running on Gigabyte Brix (Windows 10 Pro), Sony BDP-S390 BD player, Pioneer AVR, Raspberry Pi running Kodi and Plex, Panasonic 60" 3D plasma, Google Chromecast

System Two: Popcorn Hour A200 ,  Oppo BDP-80 BluRay Player with hardware mode to be region free, Vivitek HD1080P 1080P DLP projector with 100" screen. Harman Kardon HK AVR 254 7.1 receiver, Samsung 4K player, Google Chromecast

 


My Google+ page: https://plus.google.com/+laurencechiu

2091 posts

Uber Geek
+1 received by user: 848


  Reply # 1120868 2-Sep-2014 22:24

Some idea of budget would be good.

HP 3PAR is incredible kit - and with appropriate licensing it is super duper amazing - they are adding inline dedupe shortly. Load it up with SSDs (which are almost cheaper than spinning disk) and everything flies. The 7200 series is sufficient for most workloads (we've got 8 ESX hosts and 6 SQL clusters running off 2).

8Gb Fibre Channel HBAs, two fabrics.

And do you mean 40-50 HOSTS or guests? 50 hosts is a lot.

3404 posts

Uber Geek
+1 received by user: 399

Trusted

  Reply # 1120871 2-Sep-2014 22:27

TBH the whole SAN concept is dying these days. You just need to follow the tech publications and the new features coming out of the virtualization providers, e.g. VSAN and Hyper-V running on SMB 3.0.

Up until now I have gone the DIY SAN route, using LSI MegaRAID controllers with Supermicro chassis (JBOD 2.5" and 3.5" units). These run Windows with StarWind to present storage to ESXi over iSCSI. They can do over 20,000 IOPS on pure SSD arrays and 2,000 IOPS on SATA 7200 RPM RAID 10 arrays with CacheCade. You'd probably spend about $5k on a SAN that can take 50 disks, has 3TB of 20,000 IOPS storage and 20TB of 2,000 IOPS storage, and is pretty damn reliable... I run about four of these setups.

If I were to start these days I would seriously consider VSAN.

I'm looking at switching to Hyper-V after using VMware for about 8 years, due to the licensing costs and the broad guest support Hyper-V now has. Perhaps I'd just connect up a similar config as above but cut out the StarWind.
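For anyone wanting to sanity-check IOPS figures like those against their own spindle counts, the standard RAID 10 write-penalty arithmetic looks like this. The drive counts and per-drive IOPS below are typical rule-of-thumb assumptions, not measurements of the setups described above:

```python
# Effective random IOPS for a RAID 10 array.
# RAID 10 has a write penalty of 2 (each write hits both mirrored drives);
# reads can be served from either mirror, so reads use the full spindle count.

def raid10_iops(n_drives, drive_iops, read_fraction=0.7):
    """Approximate mixed-workload IOPS for a RAID 10 set."""
    raw = n_drives * drive_iops
    # Blend: reads at full rate, writes at half rate (penalty of 2).
    return raw * (read_fraction + (1 - read_fraction) / 2)

# ~24 SATA 7200 RPM drives at ~100 IOPS each lands near the 2,000 IOPS
# figure mentioned above (before any CacheCade acceleration).
print(round(raid10_iops(24, 100)))   # 2040
```

Controller caching (like CacheCade) sits on top of this and can push burst numbers well past what the spindles alone sustain.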







2091 posts

Uber Geek
+1 received by user: 848


  Reply # 1120919 3-Sep-2014 05:54

Zeon: TBH the whole SAN concept is dying these days. You just need to follow the tech publications and the new features coming out of the virtualization providers, e.g. VSAN and Hyper-V running on SMB 3.0.

Up until now I have gone the DIY SAN route, using LSI MegaRAID controllers with Supermicro chassis (JBOD 2.5" and 3.5" units). These run Windows with StarWind to present storage to ESXi over iSCSI. They can do over 20,000 IOPS on pure SSD arrays and 2,000 IOPS on SATA 7200 RPM RAID 10 arrays with CacheCade. You'd probably spend about $5k on a SAN that can take 50 disks, has 3TB of 20,000 IOPS storage and 20TB of 2,000 IOPS storage, and is pretty damn reliable... I run about four of these setups.

If I were to start these days I would seriously consider VSAN.

I'm looking at switching to Hyper-V after using VMware for about 8 years, due to the licensing costs and the broad guest support Hyper-V now has. Perhaps I'd just connect up a similar config as above but cut out the StarWind.




Hi,
You are completely and utterly wrong.

While vSAN, HP LeftHand/VSA, etc. are really great in a lot of situations, they cannot hold a candle to good, dedicated SAN hardware over Fibre Channel, particularly with SSD.

The solution you describe sounds fine for small and medium environments.

Anything large and it would be 1. incredibly undersized and 2. utterly unacceptable from a support standpoint.

For perspective, we use VSAs in our remote sites: big ESX hosts with lots of disk presented out over iSCSI, with the VSAs providing storage-level redundancy.

At our main site we have four 3PARs, two per environment, split across DCs. We have a 30TB SSD layer in each and around 70TB of FC disk per SAN. We use AO (Adaptive Optimization) to move hot blocks into SSD based on daily usage. We are all thin provisioned, with the exception of some core apps.

All of this on a 4 hour SLA for replacement.

So yeah, VSANs are cool and all, but spec to your situation and budget. 


4935 posts

Uber Geek
+1 received by user: 101

Trusted

  Reply # 1120998 3-Sep-2014 09:30

As has been noted, it's hard to provide recommendations without knowing the size of the environment (50 hosts does seem like a lot), SLAs, type of user base, etc.

But I myself would very rarely consider a roll-my-own solution if I were responsible for a company or enterprise. There is just too much that can go wrong, and I want to be able to point the finger at one vendor.





26 posts

Geek
+1 received by user: 4


  Reply # 1122989 6-Sep-2014 11:28

"3 HP DL385g8 ESX servers (at this stage 40-50 hosts altogether)"
He is talking about 3 physical hosts with 50 VMs.

At one site we run a couple of IBM V3700s with 3 hosts using direct-attached SAS cables. For a small number of hosts it is fine.
We are currently looking at purchasing EMC VNX or HP 3PAR with FC switches for a new DC that will have 8 physical hosts and a few other physical SQL servers.
Both EMC and HP are competitive with their pricing; IBM is expensive and the feature set is not as good.

EMC and HP both have great integration of their backup appliances with Veeam, which is a great, cost-effective combination.

Although it is very hard to say without knowing the type of environment; a couple of Synology NAS units may even be good enough with the right backup strategy.


