wazzab

84 posts

Master Geek


#109415 19-Sep-2012 11:17
Send private message

Hi there,

I have limited knowledge of virtualisation, and wanted to make sure my thinking is logical. I will be attending a VMware course soon, but I had some ideas on hardware setup prior to learning everything.

Currently we have a non-virtual environment with six low-use, older Windows Server 2003 boxes (DC/file/print/IP phone/WebMarshal) and two higher-use Windows Server 2008 boxes (SQL and web).

Due to budget, and the fact that our business can sustain small outage windows, we were looking at two VM hosts using local storage (no shared storage at this stage) and VMware Essentials, spreading the 8 VMs across the two hosts. We have a one-year-old server that would be fine for one host - a Dell R710 - and are looking at getting a Dell R720 for the second host.

I guess my main question is: if backups are performed correctly and very regularly on all VMs, should we have any issue moving/restarting VMs between these two hosts of different hardware spec if there is an issue with one of them? I'm aware we wouldn't have shared storage or Essentials Plus, and therefore would not be able to use vMotion.

Would this be acceptable? Does anyone have ideas or thoughts on this approach for our infrastructure and DR strategy?

Thanks for any input on this.


sbiddle
30853 posts

Uber Geek

Retired Mod
Trusted
Biddle Corp
Lifetime subscriber

  #688107 19-Sep-2012 11:35
Send private message

You can restore onto any other VMware machine, no problem.

I would recommend Veeam as a backup solution; IMHO it's the best backup software around.

Zeon
3876 posts

Uber Geek

Trusted

  #688112 19-Sep-2012 11:56
Send private message

Why not go shared storage? It's easily possible using Supermicro equipment, at a price cheaper than probably even a single high-end HP server and of the same or better quality. I think there was a thread on here previously where we discussed this and worked out a SAN + 2x hosts for less than NZ$4k, although you would probably want dual PSUs on the hosts, which that config didn't have.

Having said that, it sounds like you have enough equipment that it may be possible to do this without buying anything additional. Tell us what you have, including what's in production, and we may be able to figure something out.






Zeon
3876 posts

Uber Geek

Trusted

  #688115 19-Sep-2012 11:58
Send private message

In regards to DR strategy, Veeam, as mentioned, is the way to go for both backups and DR. What kind of net connection do you have, and where are you? I would usually suggest putting a 1U server in a datacentre separate from your own and running Veeam replication across an IPsec tunnel. You can get it down to less than 10 minutes of data loss should your primary servers get hit by a bomb.
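Whether a 10-minute replication interval is achievable depends on how much data changes between cycles versus the uplink speed. A back-of-the-envelope check in Python; the change rate, link speed, and overhead factor below are hypothetical examples, not measurements:

```python
# Can a replication cycle finish inside the target RPO over a given WAN link?
def replication_feasible(changed_mb, link_mbps, interval_minutes,
                         efficiency=0.7):
    """True if the changed data fits in the interval at the given link
    speed; 'efficiency' is an assumed allowance for IPsec/TCP overhead."""
    usable_mb_per_s = (link_mbps / 8) * efficiency  # Mbit/s -> MB/s
    return changed_mb / usable_mb_per_s <= interval_minutes * 60

# Example: 300 MB of changed blocks every 10 minutes over a 10 Mbit/s uplink.
print(replication_feasible(changed_mb=300, link_mbps=10,
                           interval_minutes=10))  # True (~343 s to send)
```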








wazzab

84 posts

Master Geek


  #688130 19-Sep-2012 12:19
Send private message

Thanks Zeon - further information: we are a Dell house at the moment, and all servers are twin-proc with twin PSUs, all RAID 5 or 10 with decent 10k/15k SAS/iSCSI disks. All were highest spec at purchase, but are well under-utilised. Disk usage and RAM are not high - around 50GB RAM and 500GB of used disk space in total across the servers. For local redundancy we have UPSes on everything and a diesel generator in the basement which runs the building; I've got around 15 minutes to fumble around in the dark and start the generator should we get a power cut. Fibre into the building, unlimited data over 10Mb/10Mb, and a wireless backup running at 35Mb last time I checked, with automatic failover of internet connectivity by the Cisco switching gear. So I think we are fairly well positioned; it's just that the servers are now getting 6-10 years old, apart from two of them, and virtualisation seems the logical step.
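Those figures make the consolidation maths easy to sanity-check. A minimal sketch, assuming two hosts; the 96GB per-host RAM below is a placeholder, not the actual spec:

```python
# Worst case for a two-host design: one host fails and the survivor must
# carry all 8 VMs. Totals are from the post; host RAM is a placeholder.
vm_ram_gb_total = 50                # total RAM across all VMs
vm_disk_gb_total = 500              # total used disk across all VMs
hosts = {"R710": 96, "R720": 96}    # hypothetical per-host RAM (GB)

survivor_ram = min(hosts.values())
headroom = 0.8                      # keep ~20% free for hypervisor and growth
print("single-host failover OK:", vm_ram_gb_total <= survivor_ram * headroom)
```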

Zeon
3876 posts

Uber Geek

Trusted

  #688135 19-Sep-2012 12:38
Send private message

OK, sounds like you have a good setup then! What I would suggest would be to:
- use your two new servers as hosts
- use the newest of your older servers (whichever has the most drive bays) as your SAN, and put your best RAID card in it
- install Windows/StarWind on the SAN server, and you have shared storage
- run direct connections using single 1Gbps iSCSI links between the hosts and the SAN (you can add additional cards for multipathing later; a quick reachability check is sketched below)
- get a newer server, or load 1-2 of the old servers up with as much RAM as you can, and put them in an offsite datacentre.
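For the direct iSCSI links, a first sanity check is simply whether each host can reach the SAN's portal on TCP 3260, the standard iSCSI port. A minimal sketch; the portal addresses are hypothetical placeholders:

```python
import socket

def portal_reachable(host: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection to an iSCSI portal and report success."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# One address per direct host-to-SAN link (placeholders).
for portal in ["192.168.10.10", "192.168.11.10"]:
    print(portal, "reachable:", portal_reachable(portal))
```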

It may be a bit of a juggling act to get this working, considering everything is currently in production. It may be that you:
- install VMware Workstation on the busier of your newer servers
- run a P2V of your other newer server onto it
- test; if all is OK, move on (a quick inventory check is sketched below)
- put ESXi with local storage on the server you just did the P2V of
- move the VM across from VMware Workstation, then P2V all the other servers, testing as you go
- get the SAN ready, installing the fastest disks you have spare and the RAID card from the other newish server that is now idle
- migrate the VMs to the SAN datastore - no downtime required if you get vMotion with Essentials Plus
- turn the other newer server into a host and mount the datastore from the SAN

I've done so many of these juggling acts now... It's frustrating, as they have always involved huge data (3TB+) and needed to be done in one weekend, but it shouldn't be too tricky for you.
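One quick way to verify each step is to list what the ESXi host can see and confirm each P2V'd VM is present and powered on. A minimal pyVmomi sketch; the host name and credentials are placeholders, and a production script should verify SSL certificates properly:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skips cert verification
si = SmartConnect(host="esxi01.example.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:                 # every VM the host knows about
        print(f"{vm.name}: {vm.runtime.powerState}")
    view.Destroy()
finally:
    Disconnect(si)
```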

wazzab

84 posts

Master Geek


  #688150 19-Sep-2012 13:10
Send private message

Thanks very much for that reply, Zeon. I hadn't considered doing it that way. I do have a little budget to spend, so I really should use it while it's been offered to me. I didn't want too much pressure, and was trying to leave production alone as much as possible, so starting off with a fresh piece of kit to set up as a host appealed to me, giving me time to understand how it works and try a couple of P2Vs. Getting a new host grunty enough to run the whole of our small environment suited us, as it would be less risky. In my mind, two hosts with local disks seemed less prone to hardware issues than two hosts with shared storage.

I will keep investigating, but my main point of concern seems to be covered: I can relatively easily move VMs between two different hosts, with some downtime, when no shared storage is used.

jaymz
1132 posts

Uber Geek


  #688151 19-Sep-2012 13:17
Send private message

wazzab:
I will keep investigating, but my main point of concern seems to be covered: I can relatively easily move VMs between two different hosts, with some downtime, when no shared storage is used.


Whenever I need to migrate VMs from one host to another without shared storage, I use the following from Veeam:
http://www.veeam.com/virtual-machine-backup-solution-free.html?ad=footer

In the earlier versions of ESX (now vSphere), Veeam had a great product called FastSCP. This has now been superseded by Backup and Replication (FastSCP is now simply called the file manager).
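Under the hood, a cold migration without shared storage is essentially a file copy of a powered-off VM's directory between datastores. A rough sketch of that idea using paramiko over SSH/SFTP (SSH must be enabled on both ESXi hosts); the host names, paths, and file list are hypothetical placeholders, and the target directory is assumed to already exist:

```python
import paramiko

SRC_HOST, DST_HOST = "esxi01.example.local", "esxi02.example.local"
VM_DIR = "/vmfs/volumes/datastore1/myvm"            # placeholder path
FILES = ["myvm.vmx", "myvm.vmdk", "myvm-flat.vmdk"]

def sftp_session(host):
    """Open an SSH connection and SFTP channel to an ESXi host."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab-only
    client.connect(host, username="root", password="password")
    return client, client.open_sftp()

src, src_sftp = sftp_session(SRC_HOST)
dst, dst_sftp = sftp_session(DST_HOST)
for name in FILES:
    local_tmp = f"/tmp/{name}"                      # staged via this machine
    src_sftp.get(f"{VM_DIR}/{name}", local_tmp)     # pull from source host
    dst_sftp.put(local_tmp, f"{VM_DIR}/{name}")     # push to destination host
src_sftp.close(); dst_sftp.close(); src.close(); dst.close()
# Afterwards, register the .vmx on the destination host and power the VM on.
```

A dedicated tool avoids staging the copy through your workstation, which is one reason the Veeam products are the better option for anything sizeable.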

Also, shared storage is perfect for very quick recovery from a failed host (instant if you use HA), but it is still a single point of failure.

Unless you invest in a SAN with redundant storage processors (like an EMC VNXe), you are still running with a single point of failure (the RAID card in the SAN you build, for example).

A better option for greater redundancy would be to purchase the Veeam Backup and Replication software and configure your two hosts as the source and destination hosts.

That way you have a complete copy of your VMs on separate hardware, ready to start up if something fails.

Again, it comes down to how much downtime the company can support, and how often you run the replications.
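With that design, the manual failover step is to find the replica on the surviving host and power it on. A hedged pyVmomi sketch of that step; the VM name, host, and credentials are hypothetical, and in practice a Veeam failover would normally be driven from the Veeam console:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skips cert verification
si = SmartConnect(host="esxi02.example.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    # Placeholder replica name; pick whichever VM needs to fail over.
    replica = next(vm for vm in view.view if vm.name == "sqlsrv_replica")
    if replica.runtime.powerState != "poweredOn":
        task = replica.PowerOn()   # returns a task; poll it in real code
    view.Destroy()
finally:
    Disconnect(si)
```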



wazzab

84 posts

Master Geek


  #688268 19-Sep-2012 16:06
Send private message

Zeon: [OK, sounds like you have a good setup then! What I would suggest would be to:
- use your two new servers as hosts]

I thought the general rule was that if you were using shared storage, your hosts had to be identical - at least the processors, anyway. My two newish servers are:

Dell R410 - 2 x Xeon E5606 2.13GHz, 8MB cache, 4.8 GT/s QPI, 4C - 12GB RAM, 2 x 300GB 15k SAS
Dell R710 - 2 x Xeon X5660 2.80GHz, 12MB cache, 6.4 GT/s QPI, Turbo, HT, 6C - 24GB RAM, 6 x 146GB 15k SAS

Using your action plan, I could effectively beef up the RAM in both of these, use my budget to purchase a disk array of some description plus Essentials Plus, and then have HA and vMotion.

garethbezett
39 posts

Geek


  #688286 19-Sep-2012 16:34
Send private message

Without wanting to start a debate on the relative merits of VMware vs Hyper-V, I was very impressed to see that Hyper-V in Windows Server 2012 allows migrations without shared storage. I've previously used VMware with a SAN, but today I would seriously consider MS.


Zeon
3876 posts

Uber Geek

Trusted

  #688294 19-Sep-2012 16:41
Send private message

wazzab: Zeon: [OK, sounds like you have a good setup then! What I would suggest would be to:
- use your two new servers as hosts]

I thought the general rule was that if you were using shared storage, your hosts had to be identical - at least the processors, anyway. My two newish servers are:

Dell R410 - 2 x Xeon E5606 2.13GHz, 8MB cache, 4.8 GT/s QPI, 4C - 12GB RAM, 2 x 300GB 15k SAS
Dell R710 - 2 x Xeon X5660 2.80GHz, 12MB cache, 6.4 GT/s QPI, Turbo, HT, 6C - 24GB RAM, 6 x 146GB 15k SAS

Using your action plan, I could effectively beef up the RAM in both of these, use my budget to purchase a disk array of some description plus Essentials Plus, and then have HA and vMotion.


Yup, no need for identical hosts. If you want to use vMotion, you need to set the processor compatibility mode of the cluster to the baseline of the older CPU in terms of instruction sets etc. - this is the EVC (Enhanced vMotion Compatibility) setting within VMware. I'm running both E5-2620 and E5620 servers in mine.

But yeah, you could beef up the RAM in those. SAN-wise I would personally still go for the StarWind option due to the flexibility, but going for one of the ready-made SANs from Dell etc. is another option.
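For reference, EVC is configured per cluster in vCenter. A read-only pyVmomi sketch that lists the EVC modes the vCenter knows about and each cluster's current mode; the vCenter address and credentials are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skips cert verification
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    print("EVC modes this vCenter supports:")
    for mode in si.capability.supportedEVCMode:
        print(" ", mode.key)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        evc_state = cluster.EvcManager().evcState
        print(cluster.name, "current EVC mode:",
              evc_state.currentEVCModeKey or "disabled")
    view.Destroy()
finally:
    Disconnect(si)
```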

garethbezett: Without wanting to start a debate on the relative merits of VMware vs Hyper-V, I was very impressed to see that Hyper-V in Windows Server 2012 allows migrations without shared storage. I've previously used VMware with a SAN, but today I would seriously consider MS.



Yes, good point. I wouldn't look at Hyper-V 2008 - I have heard of and seen many issues due to its design and its inability to deliver what was promised. But the new version looks to have overcome those shortcomings and gone further. Having said that, it's pretty premature to be deploying it in full swing considering it has only just launched.






Regs
4064 posts

Uber Geek

Trusted
Snowflake

  #688410 19-Sep-2012 20:15
Send private message

You might want to check out Hyper-V in Windows Server 2012. You can now do shared-nothing live migration between boxes that aren't even in a cluster.




blair003
557 posts

Ultimate Geek


  #688515 20-Sep-2012 01:05
Send private message

Also, if I understand correctly, Windows Server 2012 Standard comes with a 1-physical-plus-2-virtual licence. So if you had two machines and got two Windows Server 2012 licences, you could install Hyper-V on each machine, then have two guest OSes on each Hyper-V instance - four guests in total.
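That licensing arithmetic generalises easily. A tiny sketch, assuming (as the post says, and worth verifying against the actual licence terms) that each Standard licence covers one physical host plus two virtual instances on it:

```python
import math

def standard_licences_needed(hosts: int, guests_per_host: int) -> int:
    """Licences required if each licence grants 2 VMs on one host."""
    return hosts * math.ceil(guests_per_host / 2)

print(standard_licences_needed(hosts=2, guests_per_host=2))  # 2 licences
print(standard_licences_needed(hosts=2, guests_per_host=4))  # 4 licences
```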

mthand
148 posts

Master Geek


  #688520 20-Sep-2012 06:47

vSphere 5.1 can now use local host storage and pool it across multiple hosts, allowing vMotion and DRS.

insane
3170 posts

Uber Geek

ID Verified
Trusted

  #688528 20-Sep-2012 07:47
Send private message

mthand: vSphere 5.1 can now use local host storage and pool it across multiple hosts, allowing vMotion and DRS.

Yeah, I was thinking about this too while reading through the thread. 5.1 has some pretty nice new features - almost like they flipped the switch after MS's announcement.

I'll leave the OP with a warning about running iSCSI: if you're used to, and expecting, high disk throughput, then 1-gigabit iSCSI is not for you - you will get far better performance from direct-attached storage. Having said that, if engineered properly there is nothing wrong with iSCSI in general; you just need more than 1 gigabit if ~120MB/s is too slow for you.
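The ~120MB/s figure is just the 1Gbps line rate divided by eight, less protocol overhead. A quick worked version; the overhead fractions are assumptions, not measurements:

```python
LINE_RATE_MBPS = 1000                 # gigabit Ethernet line rate
raw_mb_per_s = LINE_RATE_MBPS / 8     # 125 MB/s on the wire
for overhead in (0.05, 0.10):         # assumed TCP/IP/iSCSI header cost
    print(f"{overhead:.0%} overhead -> ~{raw_mb_per_s * (1 - overhead):.0f} MB/s")
# 5% overhead -> ~119 MB/s ; 10% overhead -> ~112 MB/s
```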




wazzab

84 posts

Master Geek


  #688576 20-Sep-2012 09:24
Send private message

insane: I'll leave the OP with a warning about running iSCSI: if you're used to, and expecting, high disk throughput, then 1-gigabit iSCSI is not for you - you will get far better performance from direct-attached storage. Having said that, if engineered properly there is nothing wrong with iSCSI in general; you just need more than 1 gigabit if ~120MB/s is too slow for you.

Thanks. Is SAS throughput higher than iSCSI, then, if I were to have two hosts with local storage? I'm currently running the R710 with an H700 controller and 4 x 146GB 15k SAS. Would this, with some bigger/more disks, be fine as a local-storage host?
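For a rough comparison: the H700 is a 6Gbps SAS controller, so each drive link runs at 6Gbps versus 1Gbps for a single iSCSI link. A small worked example; the per-drive sequential rate is an assumed ballpark for 15k SAS drives, not a benchmark of these disks:

```python
SAS_LANE_MBPS, ISCSI_LINK_MBPS = 6000, 1000
DRIVE_SEQ_MB_S = 170          # assumption: typical 15k SAS sequential rate
drives = 4

print(f"one iSCSI link ceiling: ~{ISCSI_LINK_MBPS / 8:.0f} MB/s")
print(f"one SAS lane ceiling:   ~{SAS_LANE_MBPS / 8:.0f} MB/s")
print(f"{drives}-drive array, sequential: ~{DRIVE_SEQ_MB_S * drives} MB/s")
# Locally attached, even this small array can exceed anything a single
# 1Gbps iSCSI link could deliver; the network becomes the cap, not the disks.
```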
