Geekzone: technology news, blogs, forums
lchiu7
4974 posts
Uber Geek
+1 received by user: 105
Trusted

Topic # 32528 19-Apr-2009 13:23

The environment I am looking at is as above, with locally attached storage on each server. Space has become an issue, as has server life.

Somebody proposed (using the existing servers) that all storage be consolidated: either into a NAS (an HP storage device with attached storage up to 128TB), or by re-using a server and attaching a storage solution with 4Gb FC connections. But both options would have all traffic going over the gigabit Ethernet network.

I would prefer a more strategic solution where we

1. Ditch all the servers and buy 2 modern multi-core, multi-socket servers (maybe 4x quad-core)
2. Run VMware or Microsoft's product and create VMs for the services as above
3. Look at the FC storage solution and direct-connect each server to the storage over FC rather than using the network. Based on the proposal received, I think we might have to purchase an FC switch, since the storage has only 2 FC ports on it and you need 2 for each server (see the quick sketch after this list)
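A rough port count, using only the numbers from the proposal above (a minimal Python sketch, nothing vendor-specific):

servers = 2
fc_ports_per_server = 2       # dual paths per host
ports_needed = servers * fc_ports_per_server
array_fc_ports = 2            # the proposed array exposes only 2 FC ports
print(f"Hosts need {ports_needed} fabric ports but the array has {array_fc_ports}, "
      f"so direct attach can't give every host dual paths - hence the FC switch.")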

Can't see how 60-70 users can tax those servers doing mail, file & print and a small amount of SQL database activity, and this way all storage is consolidated with very high-speed access. Might just leave an older server around for ISA, but that has limited disk access needs anyway.

Thoughts?

If you have time to chat in person, feel free to send me a PM.

Thanks

Larry




System One: Popcorn Hour A200, PS3 SuperSlim, NPVR and Plex Server running on Gigabyte Brix (Windows 10 Pro), Sony BDP-S390 BD player, Pioneer AVR, Raspberry Pi running Kodi and Plex, Panasonic 60" 3D plasma, Google Chromecast

System Two: Popcorn Hour A200, Oppo BDP-80 Blu-ray player with hardware mod to be region free, Vivitek HD1080P 1080p DLP projector with 100" screen, Harman Kardon HK AVR 254 7.1 receiver, Samsung 4K player, Google Chromecast

My Google+ page: https://plus.google.com/+laurencechiu


mjb

922 posts

Ultimate Geek
+1 received by user: 21

Trusted

  Reply # 207975 19-Apr-2009 18:38

If you can afford it, go for the FC, and don't *ever* put exchange databases on non-local storage.




contentsofsignaturemaysettleduringshipping




lchiu7
4974 posts
Uber Geek
+1 received by user: 105
Trusted

Reply # 207982 19-Apr-2009 19:02

That's what I was thinking. Some of the users have 3-4GB mail folders and there is no policy around controlling that. So DAS is proving to be an issue and NAS, as you say, is not viable. That is why I am also leaning towards FC off VMs.





3259 posts

Uber Geek
+1 received by user: 643

Trusted

  Reply # 208841 23-Apr-2009 17:42

Get your users to run email archiving to clear anything more than two months old off the server. I think you can set AutoArchive in Outlook to do this for them. Also set a policy so that no user can use more than 1GB - this will identify the users who need to run AutoArchive more often, and they can be shown how to do it.





Ray Taylor
Taylor Broadband (rural hawkes bay)
www.ruralkiwi.com

There is no place like localhost
For my general guide to extending your wireless network Click Here




BDFL - Memuneh
61764 posts

Uber Geek
+1 received by user: 12425

Administrator
Trusted
Geekzone
Lifetime subscriber

  Reply # 208844 23-Apr-2009 17:54

Better still, get an archiving application from GFI or Quest (the Quest one used to be Rod Drury's AfterMail, until he sold it to Quest before starting Xero).

Do you trust your users to have enough space to archive? Do you trust them to correctly run backups of gigantic PST files? And if they decide to keep multiple copies of these PST files on a server share, how do you avoid duplication?

Nope, the best thing is to run one of these applications and keep things on the server, but using a different technology with easier search, sharing, deduplication of attachments, etc.





exportgoldman
1200 posts
Uber Geek
+1 received by user: 3
Trusted

Reply # 208855 23-Apr-2009 18:59


I've been pricing up this exact scenario for work this week. A few suggestions, but first a pet hate... if you need your mail, KEEP IT IN EXCHANGE!!! It's far easier to DR, back up, migrate and manage than hundreds of PST files.

One option, if you can replace everything, is to get two front-end VM hosts (2x quad-core Xeons) backed onto a SAN running virtual machines (Hyper-V or VMware).

Do snapshots of the VMs for backups, and get the data offsite using Backup Exec/your favourite backup software via tape/RDX/fiber links.

Cluster your VM hosts for failover.

Exchange works fine over a SAN; NAS boxes I don't really like. You can have multiple redundant paths for the fiber/SAN. IBM have a good whitepaper on SAN configurations for their DS3000/DS4000 boxes, and you can mix and match different drive types (SAS/SATA) in the same box for fast and slow storage.

Think fast = Exchange; slow = archive files, VM snapshots.




Tyler - Parnell Geek - iPhone 3G - Lenovo X301 - Kaseya - Great Western Steak House, these are some of my favourite things.

mjb

922 posts

Ultimate Geek
+1 received by user: 21

Trusted

  Reply # 208860 23-Apr-2009 19:17

Also think slow is: Exchange mailboxes over around 2.5-3GB, and Exchange databases over 10-15GB.

Yep, that means that you need to, and should, use the Enterprise edition of Exchange, and use as many databases as you can. Another really important requirement is to have databases and their associated logs on separate drives (that is, drives, arrays or LUNs, not partitions - you want the I/O diversity of more spindles).




contentsofsignaturemaysettleduringshipping


mjb

922 posts

Ultimate Geek
+1 received by user: 21

Trusted

  Reply # 208861 23-Apr-2009 19:21

mjb: Also think slow is: Exchange mailboxes over around 2.5-3GB, and Exchange databases over 10-15GB.


I should clarify - databases over 15GB are slow on three counts: turn-key restore time, maintenance time (if you ever have to do an isinteg repair or eseutil defrag) and performance for users. For performance, you really don't start noticing it badly until you hit around 30GB, but the other two *can* end up running at around 1GB/hour.

(Oh, and if you ever have to do an isinteg repair or eseutil defrag, the database should be recreated as soon as you possibly can. Never trust a database that's had to have maintenance performed to get it online again).
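To put that 1GB/hour figure in context, here is a rough back-of-envelope sketch in Python (the database size is hypothetical; the rate is just the figure quoted above, not a measurement from any particular environment):

# Rough downtime estimate for an offline isinteg/eseutil run or a restore,
# using the ~1GB/hour rate mentioned above (an assumption, not a benchmark).
db_size_gb = 30            # hypothetical single-database size
rate_gb_per_hour = 1.0     # pessimistic maintenance/restore rate from above

hours_offline = db_size_gb / rate_gb_per_hour
print(f"A {db_size_gb}GB database at {rate_gb_per_hour}GB/hour is roughly "
      f"{hours_offline:.0f} hours offline - another reason to spread mailboxes "
      f"across several smaller databases.")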




contentsofsignaturemaysettleduringshipping




lchiu7
4974 posts
Uber Geek
+1 received by user: 105
Trusted

Reply # 208867 23-Apr-2009 20:27

Thanks for all the comments. I am leaning towards using two boxes and running either Hyper-V on them or possibly VMware ESXi, which runs on bare metal so there is no OS overhead. I guess it depends on how much experience the support organisation has in virtualisation, and with which product set.

Still not decided about the storage though. One thought is to load up one of the boxes with lots of attached storage to handle Exchange, the file shares and the EDRMS, and the other box with less storage just for SQL (which isn't going to grow dramatically). That would be a cheaper option.

The other is, for more flexibility, to buy FC-attached storage and connect it to both servers. But in the VM configuration, put Exchange, file and print and the EDRMS on the same box, so that if a user saves data from Exchange into the EDRMS it doesn't have to go over the network to the other box, even though that box is connected to the same SAN. And put SQL and ISA on the other box, as well as the FMIS.

Not sure where to put OWA - maybe on the first box, since again it will not cause access to mail and files to go across the network.

BTW, we are looking at many TBs of storage. But good point about physical LUNs and spindles - that would probably provide better performance.






exportgoldman
1200 posts
Uber Geek
+1 received by user: 3
Trusted

Reply # 208898 23-Apr-2009 23:27

mjb: Also think slow is: Exchange mailboxes over around 2.5-3GB, and Exchange databases over 10-15GB.



Yep, that means that you need to, and should, use the enterprise edition of exchange, and use as many databases as you can. Another really important requirement is to have databases and their associated logs on separate drives (that is, drives, arrays or LUNs, Not partitions - you want the I/O diversity of more spindles).


I'm currently lusting over Exchange 2010, with support for non-RAID storage, multiple replicas (local and remote), automatic database page corruption patching, and failover clustering without administrator intervention, out of the box.





Tyler - Parnell Geek - iPhone 3G - Lenovo X301 - Kaseya - Great Western Steak House, these are some of my favourite things.

Regs
Infrastructure Geek
4057 posts
Uber Geek
+1 received by user: 195
Trusted
Microsoft NZ
Subscriber

Reply # 208900 23-Apr-2009 23:39

If you can afford it, go with the SAN option. I'm also a fan of virtualisation in conjunction with the SAN. Blade servers pair well with a SAN for virtualisation.

If you are going to virtualise, run Exchange 2007 as it uses a lot less I/O than older versions.

Also consider Hyper-V plus SCVMM, or VMware ESX (not ESXi). Both will cost you extra, but can give you extra features around redundancy (only Hyper-V with the new Windows 2008 release due out is capable of live migration).

If you do go with virtualisation, make sure you understand the impact of disk types on performance. Dynamically expanding disks, for example, are poor performers. Pass-through FC/iSCSI LUNs and fixed-size (i.e. pre-allocated) virtual disks are the way to go. Also check the Microsoft support policies on virtualisation for SQL and Exchange to ensure your solution is supported.

You don't necessarily need Fibre Channel - you can also use hardware-accelerated iSCSI, and if you have multiple NICs per server you can trunk them to get multi-Gbps connectivity to the SAN and to the network. Many SAN solutions offer both, so if you start with iSCSI and it's not keeping up, you can lay out the extra cash for FC connectivity.
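As a rough illustration of that iSCSI-versus-FC point, here is a quick bandwidth comparison in Python (the efficiency factor and per-link figures are generic assumptions, not vendor specs):

# Compare trunked gigabit Ethernet (for iSCSI) with a single 4Gb FC link.
def gbe_trunk_mb_per_s(nics, efficiency=0.9):
    # 1GbE = 125MB/s raw; assume ~90% usable after TCP/IP and iSCSI overhead.
    return nics * 125 * efficiency

fc_4gb_mb_per_s = 400  # 4Gb FC gives roughly 400MB/s per direction after 8b/10b encoding

print(f"4 x trunked GbE: ~{gbe_trunk_mb_per_s(4):.0f} MB/s")
print(f"1 x 4Gb FC link: ~{fc_4gb_mb_per_s} MB/s")
# On paper they are in the same ballpark, which is why starting with iSCSI
# and adding FC later if it is not keeping up is a reasonable path.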

Also check out Exchange 2010 - they have done work around archiving etc. which may make some of the email archiving solutions obsolete (don't quote me on that though :)

SANs can also run data de-duplication (usually at an extra price) - so if your users do keep multiple copies of stuff on file shares etc., the storage layer can reduce the storage requirements.

Seeing as you are looking at multiple TBs of storage, you might like to split between SAS/SCSI and SATA disk shelves for different types of storage (never SATA for high-IOPS workloads like Exchange or SQL though). As was mentioned, creating several different RAID groups in the SAN for different loads can be good for performance. Make sure that you have at least 7 spindles in each RAID group for performance though - several RAID groups of 2 or 3 spindles won't really give you much benefit.
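To show why small RAID groups don't buy you much, here is a simple IOPS sketch in Python (the per-spindle figure and the read/write mix are generic rules of thumb, used here as assumptions rather than numbers for any specific array):

# Host-visible IOPS for a RAID group, given per-spindle IOPS, the read/write
# mix, and the RAID write penalty (2 back-end I/Os per host write for RAID 10,
# 4 for RAID 5).
def raid_group_iops(spindles, per_disk_iops, read_fraction, write_penalty):
    raw = spindles * per_disk_iops
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_fraction * write_penalty)

# Assume ~175 IOPS per 15K FC/SAS spindle and a 60/40 read/write mix.
print(round(raid_group_iops(3, 175, 0.6, 2)))   # ~375 IOPS from a 3-spindle RAID 10 group
print(round(raid_group_iops(8, 175, 0.6, 2)))   # ~1000 IOPS from an 8-spindle RAID 10 group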




Technical Evangelist
Microsoft NZ
about.me/nzregs
Twitter: @nzregs


Infrastructure Geek
4057 posts

Uber Geek
+1 received by user: 195

Trusted
Microsoft NZ
Subscriber

  Reply # 208902 23-Apr-2009 23:45

Couple more points:

When you need more servers and you have a SAN, you can leave out the RAID controllers and HDDs, which can save you money in the long run.

If your Exchange 2007 server/blade fails and the data is all stored on the SAN, you can build a new server with the same name, point it at the SAN, run setup in recovery mode and hey presto, your Exchange server is back online! (Trust me, I've tested that theory out! I even used it as a migration process to move the server off bare metal and onto virtual, seeing as it worked so well the first time.)




Technical Evangelist
Microsoft NZ
about.me/nzregs
Twitter: @nzregs




lchiu7
4974 posts
Uber Geek
+1 received by user: 105
Trusted

Reply # 208932 24-Apr-2009 08:12

The site currently runs Exchange 2003, but because of SA, Exchange 2007 is a free upgrade. A bit early to think about 2010!

As for VMware vs Hyper-V, I am pretty familiar with VMware. I managed an organisation that had VMware and EMC SANs, and we were able to boot all our virtual servers from SAN and build them in a matter of an hour or so.

The budget is not quite the same here, but the points about virtualisation and SAN redundancy are still relevant.

Thanks







lchiu7
4974 posts
Uber Geek
+1 received by user: 105
Trusted

Reply # 208933 24-Apr-2009 08:16

Regs:

Also consider Hyper-V plus SCVMM, or VMware ESX (not ESXi). Both will cost you extra, but can give you extra features around redundancy (only Hyper-V with the new Windows 2008 release due out is capable of live migration).


Out of interest, what is wrong with ESXi? Fewer management tools, but for two boxes and maybe six images it looks quite compelling from what I can see.

Thanks





mjb

922 posts

Ultimate Geek
+1 received by user: 21

Trusted

  Reply # 208939 24-Apr-2009 08:53

exportgoldman: I'm currently lusting over Exchange 2010, with support for non-RAID storage, multiple replicas (local and remote), automatic database page corruption patching, and failover clustering without administrator intervention, out of the box.


Mmmmm, yes, it does look quite attractive - but it's still only a beta :)

Regs: If you are going to virtualise, run Exchange 2007 as it uses a lot less I/O than older versions.


Agreed.

2007 is a pretty damn good product. I have disliked Exchange for a long time; 2007 was the first version where I actually started to think nice things about it.

lchiu7: The site currently runs Exchange 2003, but because of SA, Exchange 2007 is a free upgrade.


The word you definitely want here is 'migration' (you can't "upgrade" in place to 2007). The process is pretty painless, but it's not simple. Happy to provide you with tips and help if you go down this route. You won't regret it - 2007 is light years ahead of 2003. And when you configure Autodiscover properly, it just shines.

I'm currently mired in manual public folder replication for one client - 2007 is a lot stricter about content conversion... :(




contentsofsignaturemaysettleduringshipping


1369 posts

Uber Geek
+1 received by user: 348


  Reply # 209045 24-Apr-2009 14:56

Hiya,

Couple of things to throw in for you to mull over

1) If you go the SAN route you'll also need to factor in the expense of some SAN switches (i.e. two). Direct-attaching to the storage isn't such a great idea, as it limits your failover and load-balancing capability. Plus you'd have to get the FC HBAs as well.
2)  If you go NAS you can carry on using your existing LAN infrastructure and NICs in the servers.


For your size, go NAS. 

60 to 70 users is nothing in terms of I/O. Plus, if you go for a decent NAS box (EMC or NetApp - even though I'm an EMC-brainwashed person I'd say go for NetApp; they are cheaper and easier to set up) you'll get iSCSI for free, and VMware over iSCSI (and NFS) works rather nicely indeed.
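As a sanity check on the "60 to 70 users is nothing" point, here is a quick sketch in Python (the per-mailbox figure is a commonly quoted Exchange 2003-era rule of thumb, used here as an assumption, not a measurement):

# Back-of-envelope Exchange I/O for a small site.
users = 70
iops_per_mailbox = 1.0      # pessimistic rule of thumb for a heavy Exchange 2003 user
one_15k_spindle_iops = 175  # generic assumption for a single 15K disk

required_iops = users * iops_per_mailbox
print(f"~{required_iops:.0f} IOPS for {users} mailboxes - a handful of spindles "
      f"(each good for roughly {one_15k_spindle_iops} IOPS) covers it easily.")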

NetApp also give you data de-duplication for free as well, so you can schedule that to trawl your data and save space.

If budget permits and you can get the software, implement email archiving and ban .PST files - they're a pain in the arse for backups. Outlook only has to look at them and they change and get scooped up in the next backup job, or users store them locally on their computers or on a shared drive on the network - horrible things!

No reason to go over-complicated; keeping your solution simple is the way to go :-)

Regards!

Mark


