We are an HP Partner and, when appropriate, promote HP solutions to our customers, who are mainly in the aviation industry worldwide. Most of our customers either have HP solutions already or do not require large storage databases.
The solid state disk (SSD) will be of interest to our customers who want fast data access and quick start-up times. It will be interesting to see what growth HP achieves in the SSD market, and how quickly SSD capacities grow and become competitively priced.
Storage... arghhh, one of my ongoing headaches at work. We have a multitude of storage devices, but the main ones I use are two IBM V7000s (one in the office and one at our backup site), an old DS4300 that I am trying to find time to get rid of, an IBM NAS that I stream all my VM backups to, a few QNAP boxes littered around the place (mostly for test systems), and various other systems with large amounts of DAS for archiving stuff etc.
My biggest problem is trying to manage the growth which has already been referred to here. People now expect that they can keep everything, and they do. I have BI systems growing stupidly fast in terms of data size, Exchange servers with multiple large databases causing me headaches, virtual sprawl going crazy as people realise we can stand up servers very quickly, and other issues usually caused by roaming profiles and people backing up stuff to their desktops (think iTunes, phones full of photos etc). I've just ordered another tray for my SAN and it's pretty much allocated already.
The other problem is I don't have time to do all the things I have to do in a normal day, let alone the things I want to do (i.e. manage our storage properly). Someday I want someone to sell me storage in the cloud where I just ring someone up and say "give me another 5TB", and it magically appears from somewhere along with a cold beer :)
What are your current storage problems in enterprise and SMB? - We back up to theCloud, so at the moment the only issue is speed, but with UFB rolling past my gate in March 2013 that problem will go away.
Do you use any specific technique to solve these problems? - Getting the initial 208GB of backup to theCloud was going to take some time over ADSL, so I "seeded" the backup to a USB drive first; the theCloud operations team then imported it for me and voilà, done.
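A quick back-of-the-envelope calculation shows why seeding via USB drive beats uploading the initial backup over ADSL. The ~1 Mbit/s upstream rate below is an assumption for illustration, not a measured figure from the post:

```python
# Estimate how long the initial 208 GB seed would take over an ADSL uplink.
# The 1 Mbit/s upstream rate is an assumption typical of ADSL, not a quoted figure.
def transfer_days(size_gb: float, rate_mbps: float) -> float:
    """Days to move size_gb (decimal gigabytes) at rate_mbps (megabits/second)."""
    bits = size_gb * 1e9 * 8            # decimal GB -> bits
    seconds = bits / (rate_mbps * 1e6)  # megabits/second -> bits/second
    return seconds / 86400              # seconds -> days

print(f"208 GB at 1 Mbit/s upstream: ~{transfer_days(208, 1.0):.0f} days")
```

At roughly three weeks of continuous uploading (before protocol overhead), posting a USB drive is the obvious choice.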
What kind of hardware/software platform are you using to manage storage? - For backup: CloudSync Backup (powered by CrashPlan PROe) on www.thecloud.net.nz. - For file sharing: HomeDrive on www.thecloud.net.nz (currently in beta, launch Dec 12). - Pretty much every hardware platform at theCloud is HP based in some way.
How do you leverage HP products now? - At home, only the TouchSmart AIO in the kitchen. At work, pretty much everything is HP.
Are you attending HP Discover, or watching online? - Discover in Frankfurt in December 2012 = Online. - Discover in Vegas in June 2013 = in person.
The biggest problem we have with our smaller customers who have large storage requirements is backup. Online is too slow, and honestly not that practical, since you still need to recover in the event of a disaster, and downloading 4TB over an xDSL connection isn't a suitable disaster recovery plan. Also, a lot of our customers (rightfully so) want to maintain complete control of their data and have it available locally.
A customer might want 4-6TB of data, plus a copy or two of that offsite. The cost of something like that is pretty frightening for most customers with under 20 users.
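To put a number on why online-only restore isn't workable disaster recovery at these sizes, here is a rough estimate. The 10 Mbit/s xDSL downstream rate is an illustrative assumption:

```python
# Rough restore-time estimate: downloading a full multi-terabyte backup
# over xDSL. The 10 Mbit/s downstream rate is an assumed figure.
def restore_days(size_tb: float, rate_mbps: float) -> float:
    """Days to download size_tb (decimal terabytes) at rate_mbps (megabits/second)."""
    bits = size_tb * 1e12 * 8
    return bits / (rate_mbps * 1e6) / 86400

print(f"4 TB at 10 Mbit/s: ~{restore_days(4, 10):.0f} days")  # ~37 days
```

Over a month of saturated downloading to recover from a disaster is why a local copy (or a couriered drive) remains part of any realistic DR plan for these customers.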
As a commercial photographer my storage requirements have increased with the quality of the digital cameras I use... plus the change from only shooting jpegs to shooting RAW files or RAW plus jpegs. Luckily storage costs are declining (kind of) on a reverse track. I used to fill up folders of images on 100MB Zip disks, then take them to a friend to burn the contents of 7 disks onto 2 CDs. Then I started burning my own copies of CDs and storing 1 offsite and 1 in my studio... then along came DVDs and I'd do the same thing (ideally choosing 2 different brands of disks). Now I copy my data to 2 different 1TB external HDDs... but haven't got around to keeping one offsite. I've also just started getting all my old CDs and DVDs copied onto an HDD (a few corrupt files, but no failed disks so far). This is a painful job that was started with enthusiasm by our daughter... but she has lost interest in the project (I guess I'm not paying her enough!).
We had issues with capacity vs performance for our test ESXi hosts. We have 2 of these servers, each with 6 drive bays and a combination of 15Krpm SAS disks and 7.2Krpm SATA drives.
The setup which worked best for us was to install 3 of each type of drive (each set in RAID 5) in both servers, giving 2 separate datastores labelled "performance" and "capacity". We found that the capacity datastore was able to cope with plenty of virtual servers; currently one of the hosts has 11 guests running on the capacity datastore and 7 on the performance one. The environment has been running without issue for about 7 months.
This only uses direct-attached storage and was inexpensive, which was a requirement as it was for a test environment rather than production.
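The usable space of the two datastores described above is easy to sketch: a RAID 5 array of n disks gives (n - 1) disks' worth of capacity. The 600 GB SAS and 2 TB SATA drive sizes below are assumptions for illustration, since the post doesn't state the actual sizes:

```python
# Usable capacity of a RAID 5 array: one disk's worth of space goes to parity.
# The drive sizes (600 GB SAS, 2 TB SATA) are assumed, not taken from the post.
def raid5_usable_gb(disk_count: int, disk_gb: float) -> float:
    """Usable GB of a RAID 5 array of disk_count identical disk_gb drives."""
    if disk_count < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (disk_count - 1) * disk_gb

performance = raid5_usable_gb(3, 600)   # 3 x 600 GB 15Krpm SAS
capacity = raid5_usable_gb(3, 2000)     # 3 x 2 TB 7.2Krpm SATA
print(f"performance datastore: {performance:.0f} GB usable")  # 1200 GB
print(f"capacity datastore: {capacity:.0f} GB usable")        # 4000 GB
```

With only 3 disks per set, RAID 5 also pays the smallest possible parity overhead (one third), which fits the low-cost test-environment requirement.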
I believe a shared storage solution would also have been a good option: some sort of NAS device connected to the network, capable of handling up to 12 lower-cost SATA disks. I have not investigated HP's options for these products.
freitasm: Anyone using storage in a datacenter, fiber, etc?
I wouldn't call it a datacentre, but the server room is running HP servers, an HP core switch, Brocade 8Gb FC switches, and an IBM V7000 SAN. We have 8Gb fibre to another local site and mirror the IBM V7000 to an IBM DS3500 SAN. We went with the IBM SAN solution because of the Easy Tier SSD feature, which moves hot disk extents to SSD for faster access. We currently run 80 VMs (file server, 700-user email, SQL, 250-user Citrix farm, etc) plus a Solaris-based 2TB production database on the V7000. We use CommVault and LTO-5 tapes for backups.
As always the biggest issues with enterprise storage are cost, capacity, and failures.
- Cost-wise we cannot change much, but prices are falling over time: the previous SAN storage and VM servers cost over $300k, while the latest units cost $150k and we almost doubled capacity.
- Capacity-wise we can never keep up. We went through a 6-month project to get the new servers and SAN; by the time it was up and running, some of us already had concerns that extra storage shelves will be required within 2 years. Prices should be lower by then, though, as storage shelves are cheap compared to controllers.
- Failures: we still have a couple of single points of failure, the core switch and the V7000 SAN. In the next year we may get another core switch, but we are not likely to get another SAN. Although we have reduced the number of spinning disks we have, the chance of a failure never really goes away. We have done everything possible to control the environment (23kW coolers, UPS, generator), but $#!^ happens, as they say.
Nothing special for storage management: the V7000 is browser-based and calls home to IBM with issues, so jobs are logged automatically. The DS3500 is a little more archaic in its management, as it uses an IBM Storage Manager product, but it works well.
We use HP servers; the new Gen8s are nice and the Intelligent Provisioning is great (no more SmartStart). HP servers run our VM environment, and we are replacing IBM servers at remote sites. All remote sites back up to tape, but database transactions are streamed to Head Office and stored just in case.
Won't be attending HP Discover, but will be following new developments as they are announced. We are always looking for new ways to do things that make life easier and more reliable. We don't look at the cloud as a solution for our business due to the size of our data and the reliability of our remote site links.
freitasm: Anyone using storage in a datacenter, fiber, etc?
Using iSCSI and NFS here. We have an existing Ethernet infrastructure, so these made sense. Deploying optical Fibre Channel wasn't economical, and (IMHO) FCoE has all the disadvantages of iSCSI and none of the advantages of classic FC.