Ipv89:
It will cost but it's for a critical service.
When it is really critical, I'd recommend having an LTE/4G link and a UPS in service.
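For illustration only, here's a minimal sketch of that failover logic (the gateway addresses, probe host and thresholds are placeholders, and it assumes a Linux box with iproute2 and root; a firewall such as OPNsense would normally do this with its built-in multi-WAN gateway monitoring rather than a script):

```python
#!/usr/bin/env python3
"""Sketch of primary-fibre -> LTE/4G failover by swapping the default route."""
import subprocess
import time

PRIMARY_GW = "192.168.1.1"   # hypothetical fibre router
BACKUP_GW = "192.168.8.1"    # hypothetical LTE/4G modem
PROBE_HOST = "8.8.8.8"       # well-known host used only as a reachability probe
CHECK_INTERVAL = 10          # seconds between health checks
FAIL_THRESHOLD = 3           # consecutive failures before failing over


def probe_via(gateway: str) -> bool:
    """Ping the probe host, pinning the probe traffic to one uplink."""
    subprocess.run(["ip", "route", "replace", PROBE_HOST, "via", gateway],
                   check=False)
    ping = subprocess.run(["ping", "-c", "1", "-W", "2", PROBE_HOST],
                          capture_output=True)
    return ping.returncode == 0


def set_default(gateway: str) -> None:
    """Point the default route at the chosen uplink."""
    subprocess.run(["ip", "route", "replace", "default", "via", gateway],
                   check=False)


def main() -> None:
    failures = 0
    on_backup = False
    while True:
        if probe_via(PRIMARY_GW):
            failures = 0
            if on_backup:                # fibre is healthy again, fail back
                set_default(PRIMARY_GW)
                on_backup = False
        else:
            failures += 1
            if failures >= FAIL_THRESHOLD and not on_backup:
                set_default(BACKUP_GW)   # shift traffic to the LTE/4G link
                on_backup = True
        time.sleep(CHECK_INTERVAL)


if __name__ == "__main__":
    main()
```

The UPS matters just as much as the second link: failover logic is useless if the router and the 4G modem lose power along with the rest of the site.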
- NET: FTTH, OPNsense, 10G backbone, GWN APs, ipPBX
- SRV: 12 RU HA server cluster, 0.1 PB storage on premise
- IoT: thread, zigbee, tasmota, BidCoS, LoRa, WX suite, IR
- 3D: two 3D printers, 3D scanner, CNC router, laser cutter
Give me a call.
We are based out of the Napier exchange, but depending on your location, your physical fibre may run back to the Hastings exchange, which I understand is separate from Napier even though it is still in the same UFB local coverage area.
As suggested, another option is Unison fibre.
We do backup connections for exactly this reason for many of our customers.
Ray Taylor
There is no place like localhost
Spreadsheet for Comparing Electricity Plans Here
UnisonFibre doesn't have equipment located in Chorus exchanges.
So as far as I can tell there is no way to have physical redundancy without a single point of failure out of HB? Regarding failing over to 4G, I'm guessing you would need several 4G connections? Where I am at the moment gives about 35-40 Mbps over 4G. I guess in a disaster we could just route everything that's not critical over 4G and just dump the rest of the traffic.
You could add a satellite link; that will get you past any single point of failure... slowly.
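To put a number on "slowly": the physics alone adds roughly half a second of round-trip time over a geostationary hop (a rough calculation using the standard GEO altitude; real links add queuing and processing on top):

```python
# Rough propagation delay for a geostationary satellite hop.
C_KM_PER_S = 299_792          # speed of light in vacuum
GEO_ALTITUDE_KM = 35_786      # standard GEO altitude; slant range is a bit longer

one_way = 2 * GEO_ALTITUDE_KM / C_KM_PER_S   # sender -> satellite -> receiver
round_trip = 2 * one_way

print(f"one-way ~{one_way * 1000:.0f} ms, round trip ~{round_trip * 1000:.0f} ms")
# Roughly 239 ms one-way and 477 ms round trip before any other overhead;
# a low-earth-orbit service sits far lower, so its latency is much smaller.
```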
Cyril
Ipv89:
So as far as I can tell there is no way to have physical redundancy without a single point of failure out of HB? Regarding failing over to 4G, I'm guessing you would need several 4G connections? Where I am at the moment gives about 35-40 Mbps over 4G. I guess in a disaster we could just route everything that's not critical over 4G and just dump the rest of the traffic.
It's all about risk management, taking into account the frequency that the event will occur, the time to recover, and the mitigation options.
If it will only happen once every 4 years, and the outage will last less than 8 hours, what level of risk are you prepared to accept?
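As a back-of-envelope check on those figures (the outage cost per hour below is a made-up placeholder; plug in your own):

```python
# Expected-downtime arithmetic using the figures above.
outage_hours = 8                 # assumed worst-case outage duration
years_between_events = 4         # assumed frequency: one event every 4 years
hours_per_year = 365 * 24

expected_downtime = outage_hours / years_between_events          # 2.0 h/year
availability = 1 - expected_downtime / hours_per_year            # ~99.977 %

cost_per_hour = 1_000            # hypothetical business impact, $/hour
expected_annual_loss = expected_downtime * cost_per_hour

print(f"expected downtime: {expected_downtime:.1f} h/year")
print(f"availability from this risk alone: {availability:.3%}")
print(f"expected annual loss: ${expected_annual_loss:,.0f}"
      " -- weigh this against the yearly cost of the extra link")
```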
What options do you have around sending everyone home to work over residential UFB, if your services are hosted in the cloud, or if you have a cloud-hosted DR site and your production site is on-prem?
When I worked at Spark, post the Christchurch earthquake, I know a lot of teams camped for months at the houses of people who still had power / services before things started to come right, and the VPN took a hammering during that time.
If you haven't figured out a way to avoid being 100% dependent on your primary site in the event of a disaster, then you need to rethink your Business Continuity plans. That is assuming this isn't a manufacturing plant where everything is done within the four walls of the building. But in that situation, do you also have redundant power feeds? A generator / UPS to keep critical services up for a week?
My personal view is that a good quality 4G backup should be sufficient for 99% of all businesses; that way you can still email / run orders / transact online, as all of that requires minimal bandwidth. People aren't going to be streaming YouTube/Netflix or creating video content, which are the only significant bandwidth hogs, and if that is your business then people can disperse back to their homes during the outage.
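A quick sanity check on that (the staff count is a hypothetical assumption; the 35 Mbps figure is the lower end of the 4G speed quoted earlier in the thread):

```python
# Does a 4G backup cover email / orders / online transactions for the office?
link_mbps = 35        # lower end of the 4G speed mentioned above
staff = 40            # hypothetical number of concurrent users

per_user_kbps = link_mbps * 1000 / staff
print(f"~{per_user_kbps:.0f} kbps per user")   # ~875 kbps each

# Email, web apps and card transactions typically need well under that;
# a single HD video stream (3-5+ Mbps) is the kind of load that would
# actually saturate the link.
```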
Could you do microwave to a (say) Kordia high site? That way you avoid the regional exchange bottleneck, and would probably have oodles of capacity.
You need to explain what this critical service is a little more. Is it a bandwidth hog? Is it latency sensitive? Is there a reason it even HAS to be served from a business location and not a proper DataCentre-based location?
Beccara:
You need to explain what this critical service is a little more. Is it a bandwidth hog? Is it latency sensitive? Is there a reason it even HAS to be served from a business location and not a proper DataCentre-based location?
A couple of decades ago, I was IT Manager for a utility company, and when I started looking at Business Continuity / Disaster Recovery planning I soon realised that there was no single-site solution that would provide the availability required.
To do this job properly you need an alternate location as well as your primary.
And the alternate needs either to be far enough away that one event won't clobber both, or your Board needs to sign off (yes, it is that responsibility level) that the ICT BCP / DRP does not provide a solution for an event that takes out both nominated sites.
PolicyGuy:
Beccara:
You need to explain what this critical service is a little more. Is it a bandwidth hog? Is it latency sensitive? Is there a reason it even HAS to be served from a business location and not a proper DataCentre-based location?
A couple of decades ago, I was IT Manager for a utility company, and when I started looking at Business Continuity / Disaster Recovery planning I soon realised that there was no single-site solution that would provide the availability required.
To do this job properly you need an alternate location as well as your primary.
And the alternate needs either to be far enough away that one event won't clobber both, or your Board needs to sign off (yes, it is that responsibility level) that the ICT BCP / DRP does not provide a solution for an event that takes out both nominated sites.
Pretty much this. I didn't want to assume the service was able to be moved, but if it can, and if there is any sort of HA feature built into it, then it will be much easier and cheaper to fire up some VMs in a pair of DCs than to build your current location up to spec. If there isn't any HA, then running hourly/15-minute backups/replications between the two locations would be the next best thing.
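If it does come down to the scheduled-copy option, the mechanics are simple. Here's a minimal sketch of a 15-minute replication loop (the paths and DR hostname are placeholders; in practice you'd prefer the application's own replication, hypervisor-level replication or snapshot send/receive over a flat file copy):

```python
#!/usr/bin/env python3
"""Sketch of a 15-minute file-level replication loop to a second site."""
import subprocess
import time

SOURCE_DIR = "/srv/critical-data/"           # hypothetical data directory
DEST = "dr-site.example.net:/srv/replica/"   # hypothetical DR target (rsync over SSH)
INTERVAL = 15 * 60                           # the 15-minute RPO mentioned above

while True:
    started = time.time()
    # --archive preserves ownership/permissions/timestamps, --delete mirrors removals.
    result = subprocess.run(
        ["rsync", "--archive", "--delete", "--partial", SOURCE_DIR, DEST]
    )
    if result.returncode != 0:
        print("replication failed -- the copy at the DR site is now stale")
    # Sleep out the remainder of the interval so runs stay roughly on schedule.
    time.sleep(max(0.0, INTERVAL - (time.time() - started)))
```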