"VFNZ are learning your routes via the Equinix route-servers.
We notice that when patches are preformed on game servers etc we see a result in traffic going via the USA"
Yea, that network is not on the route servers, and according to their peering info they only do private peering, so they are just making it hard on themselves and everyone else for no good reason.
Makes sense.
Wonder why they decided to do it this way.
I would guess lack of networking experience.. just a guess though
Yay for this thread. Ping on Vodafone has been crap for days, and it still was last night. At least now I know that rebooting is useless.
Makes sense.
Wonder why they decided to do it this way.
If you are a US-based company with a primarily US focus, networking is easy: you fire up some "cloud" services, start advertising some IPs in a couple of US locations, and you are underway. Getting the lowest possible latency to customers at locations all around the world is a substantially harder problem, and one that many US-based network companies struggle to understand. Remember, the average American struggles to realise that Australia and New Zealand are different places. Now imagine a company that is not a networking specialist having to understand that it's really important for routes to take the "short" cable path(s) to NZ and that the "long" paths are no good.
noroad:
If you are a US-based company with a primarily US focus, networking is easy: you fire up some "cloud" services, start advertising some IPs in a couple of US locations, and you are underway. Getting the lowest possible latency to customers at locations all around the world is a substantially harder problem, and one that many US-based network companies struggle to understand. Remember, the average American struggles to realise that Australia and New Zealand are different places. Now imagine a company that is not a networking specialist having to understand that it's really important for routes to take the "short" cable path(s) to NZ and that the "long" paths are no good.
There have been reports coming out of Australia of the same (or a similar) issue. Not sure if it's directly related though.
I can also say that the Google data center in Sydney has the same problem with routing (I set up a VM in their data center to test out cloud gaming, to get a feel for what Stadia MIGHT be like here in NZ, but all Blizzard games were being routed through the US; other games were fine though).
You seem to be rather knowledgeable about this kind of stuff (it's been 4 years or so since I did my CCNA and I'm pretty sure we didn't cover anything like this in it, but I think I understand the basics). What change would have caused this to happen? Everything has been fine for ages and only recently did the routing get messed up.
Only Blizzard could say what they have changed recently to cause the routing issues; as an internet transit provider you generally don't track individual network routes (there are around 800k of them) unless an issue comes up. Often what will happen with hosting (i.e. a gaming company hosting their servers) is they will have one or more large blocks of IPs and advertise them from a central location. Then they will advertise more specific networks ("longer" prefixes, e.g. a /24 is more specific than a /23) in a region to override the larger advertisement and influence paths.

The issue comes when you only advertise your more specific prefix to some network providers and not others, as the more specific advertisement takes precedence. So, for example, they are advertising to Telstra in Australia but only through private peering sessions on the Sydney public exchanges, which means that if your ISP has a transit arrangement via Telstra or NTT (in my case) then traffic can take whatever path Telstra wants to use instead of the shorter peering path. If you look at the first trace @nohas posted, for example, GGI picks it up from the peering exchange in Hong Kong, as that was the best path GGI knew about for that network. Now that GGI has the Sydney peer up, that becomes preferred instead.
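To make the "more specific wins" bit concrete, here's a rough Python sketch (the prefixes and next hops below are made up for illustration, not Blizzard's actual ranges) of how longest-prefix match decides which advertisement gets used:

# Minimal longest-prefix-match sketch. Prefixes and "next hops" are hypothetical,
# purely to show why a regional /24 overrides a covering /23.
import ipaddress

# Routes this router has learned: (prefix, where it leads)
routes = [
    (ipaddress.ip_network("203.0.112.0/23"), "via USA (covering aggregate)"),
    (ipaddress.ip_network("203.0.113.0/24"), "via Sydney peering (more specific)"),
]

def best_route(dest_ip):
    dest = ipaddress.ip_address(dest_ip)
    # Of all prefixes containing the destination, the longest (most specific) wins.
    matches = [(net, hop) for net, hop in routes if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)

print(best_route("203.0.113.50"))   # the /24 wins -> Sydney peering
print(best_route("203.0.112.50"))   # only the /23 covers this -> USA

If the /24 is only advertised to some providers (e.g. over private peering), everyone else only ever sees the /23 and keeps sending traffic the long way.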
But GGI (Spark) would certainly not have known Blizzard was off doing whatever they have been doing; once they were told by customers of an issue, they worked with Blizzard to enable the shorter path. Same for Vodafone and the rest of us (transit providers): if Blizzard has not arranged for the appropriate connectivity to be in place before putting customers onto it, then it's outside of Vodafone's control.
Basically, BGP has no idea what latency is, just what's the most specific and preferred path available to it. Remember that TCP/IP (especially IPv4) really was not designed with the current "route the whole world" internet in mind; it just grew, and people have adapted to make it work as issues have arisen. When a company like Blizzard wants distributed hosts around the world with the shortest paths to their customers, they have to intimately understand how the internet comes together in each region. It's really not an easy task.
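As a rough illustration, here's a simplified sketch (Python, hypothetical numbers) of a BGP-style best-path choice for a single prefix; note that measured latency never appears as an input:

# Hypothetical, heavily simplified BGP-style best-path choice for one prefix.
# Real BGP has many more tie-breakers (origin, MED, eBGP vs iBGP, router ID, ...),
# but measured latency is never one of them.
candidates = [
    # (description,          local_pref, as_path_len, latency_ms)
    ("via USA transit",      100,        2,           160),
    ("via Sydney peering",   100,        3,           35),
]

def better(a, b):
    if a[1] != b[1]:                    # 1. higher local preference wins
        return a if a[1] > b[1] else b
    return a if a[2] <= b[2] else b     # 2. shorter AS path wins; latency is ignored

best = candidates[0]
for c in candidates[1:]:
    best = better(best, c)

print(best[0])   # "via USA transit": shorter AS path, despite ~160 ms RTT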
Nice. Gotta love how it's Vocus with the preferential routes into the non-peerers (Telstra/Optus etc.) these days, when it used to be Telecom with AAPT, and TelstraClear being a subsidiary of Telstra.
A Vodafone birdie tells me it should be fixed. Mine is back at 40.
Quinny:
A Vodafone birdie tells me it should be fixed. Mine is back at 40.