Geekzone: technology news, blogs, forums

11 posts

Geek


Topic # 119040 17-May-2013 23:26

I play a variety of games on US servers and have noticed a significant increase in latency when accessing them. The increase appears to occur where traffic traverses global-gateway internationally:

13 ms 13 ms 13 ms ae5-2.akbr4.global-gateway.net.nz [210.55.202.213]
165 ms 165 ms 191 ms ae2-3.sjbr2.global-gateway.net.nz [203.96.120.194]
191 ms 193 ms 192 ms ae0.pabr4.global-gateway.net.nz [203.96.120.74]


The increase in latency is ~60ms and started occurring roughly 2 weeks ago. Has anyone else experienced a similar problem? 

Thanks in advance for any feedback :)

1497 posts

Uber Geek
+1 received by user: 474

Trusted

  Reply # 821421 17-May-2013 23:34

I get around 214ms average to 203.96.120.74, which I would say is above the usual ping I would get to that host.




1948 posts

Uber Geek
+1 received by user: 469
Inactive user


  Reply # 821441 18-May-2013 07:05

Have you checked to make sure interleaving is turned off for you, assuming you are close enough to the cabinet?

Nothing else has changed, and those look like pretty typical numbers to me.

1919 posts

Uber Geek
+1 received by user: 376

Subscriber

  Reply # 821473 18-May-2013 09:28

Out of curiosity I tried that ping. 100% packet loss with timeouts!! So you are doing well, then!



11 posts

Geek


  Reply # 821479 18-May-2013 09:50

Yeap, I have interleaving turned off.  :)

The target server in the US I have been testing against is 66.150.148.1, with the traceroutes showing the massive lag-spike at so6-1-3.sjbr2.global-gateway.net.nz [202.50.232.34].

My first and second hops beyond my router are 122.61.40.4 and 122.61.40.1. Both are consistently at 11-12ms (in my mind very good). For anything that resides within NZ, such as cache.l.google.com or rachel.paradise.net.nz, my pings are great (~11ms and ~20ms respectively). It is only when my traffic attempts to route via the US that I get that massive increase in lag, specifically at the global-gateway address specified above.
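For anyone who wants to reproduce the comparison, something like the following (run from a Windows command prompt; the addresses are simply the hops and hosts mentioned above) shows roughly where the jump appears:

rem NZ-domestic targets - these sit around 11-20ms for me
ping -n 20 122.61.40.4
ping -n 20 rachel.paradise.net.nz

rem Telecom international gateway and the US target - this is where the increase shows up
ping -n 20 203.96.120.194
ping -n 20 66.150.148.1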

3210 posts

Uber Geek
+1 received by user: 917

Trusted

  Reply # 821485 18-May-2013 10:18

For comparison this is what I get (on Snap VDSL - I don't think that should make much difference to international routing though?)

Tracing route to 66.150.148.1 over a maximum of 30 hops
1 ~1 ms ~1 ms ~1 ms h001.endless.net [192.168.51.1]
2 6 ms 6 ms 6 ms 16.17.69.111.static.snap.net.nz [111.69.17.16]
3 6 ms 5 ms 5 ms 54.32.69.111.dynamic.snap.net.nz [111.69.32.54]
4 6 ms 5 ms 5 ms 111-69-27-65.core.snap.net.nz [111.69.27.65]
5 134 ms 134 ms 134 ms te7-3.ccr01.sjc05.atlas.cogentco.com [38.122.92.105]
6 159 ms 159 ms 159 ms te0-2-0-4.ccr21.sjc01.atlas.cogentco.com [154.54.84.53]
7 137 ms 137 ms 137 ms te0-4-0-2.mpd21.lax01.atlas.cogentco.com [154.54.85.25]
8 299 ms 204 ms 207 ms te7-1.mag02.lax01.atlas.cogentco.com [154.54.47.166]
9 138 ms 138 ms 138 ms 38.104.77.122
10 138 ms 138 ms 138 ms border2.po1-20g-bbnet1.lax010.pnap.net [216.52.255.12]
11 138 ms 136 ms 137 ms 66.150.148.1

EDIT: the less-than signs messed up the results formatting.

EDIT2: Bear in mind that the time for a "hop" to respond to an ICMP packet may be different from the time it takes to simply pass it through to the next hop, which is why intermediate hops can show higher times than subsequent hops or the final destination. I believe this is also why traceroute is generally considered a good tool for finding the path to a destination, but not really a good indicator of the quality of that path.
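If you want per-hop figures that are a little more meaningful than a single traceroute pass, the Windows pathping tool (which samples loss and latency to each hop over a period), or simply a long ping to the final destination, are options; a rough example using the same target as above:

pathping 66.150.148.1
rem or just measure the end-to-end latency over a larger sample
ping -n 100 66.150.148.1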



11 posts

Geek


  Reply # 821755 18-May-2013 21:21

To give a larger sample and provide better context:

Ping statistics for 203.96.120.194:
Packets: Sent = 3961, Received = 3945, Lost = 16 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 161ms, Maximum = 237ms, Average = 191ms

The average latency across this route to the US West coast a fortnight ago was ~130-140ms. I don't know what caused the change, but one possibility is that route costs were altered. One thing I noticed on deeper inspection is that subnets used by the upstream connectivity provider of the servers I am trying to reach are routed through a different global-gateway host: ae2-3.labr5.global-gateway.net.nz. Latency via that route is much better, though not quite back to the levels of a fortnight ago.

If anyone else wants to try, these are the details I am testing:

Target Server: 66.150.148.1
Average ping to 66.150.148.1: 191ms

Telecom international hop/gateway used when tracerouting 66.150.148.1: ae2-3.sjbr2.global-gateway.net.nz [203.96.120.194]
Average ping to 203.96.120.194: 184ms


Last hop at Upstream Provider: border2.po2-20g-bbnet2.lax010.pnap.net [216.52.255.93]
Average ping to 216.52.255.93: 174ms

Telecom international hop/gateway used when tracerouting 216.52.255.93:  ae2-3.labr5.global-gateway.net.nz [203.96.120.142]
Average ping to 203.96.120.142: 149ms

Looking at the hostnames of the global gateway routers, I would guess that one is in San Jose and one in LA? Given the server I am connecting to is hosted in LA, it would make the most sense for 66.150.148.0/24 traffic to also traverse 203.96.120.142.
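A rough way to check this from a Windows prompt (all addresses as listed above) is to traceroute to each target and ping the two gateways directly:

rem see which international gateway each target currently exits through
tracert 66.150.148.1
tracert 216.52.255.93

rem compare latency to the two gateways themselves
ping -n 100 203.96.120.194
ping -n 100 203.96.120.142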

Is logging a basic request with the helpdesk the right way to get this resolved, or is there a way to get this information more directly to someone with the capability to validate it within the Telecom network and action a fix?

3740 posts

Uber Geek
+1 received by user: 2270

Trusted
Spark NZ

  Reply # 821761 18-May-2013 21:43

MrGreenNZ: To give a larger sample and provide better context:

Ping statistics for 203.96.120.194:
Packets: Sent = 3961, Received = 3945, Lost = 16 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 161ms, Maximum = 237ms, Average = 191ms

The average latency to this host roughly a fortnight ago was ~130ms. I don't know what may have changed. Is logging a basic request with the helpdesk the right way to get this resolved, or is there a way to get this information more directly to someone with the capability to validate it within the Telecom network and action a fix?


I'll ask someone about this on Monday, but please understand that performance to random servers on the internet isn't guaranteed, and this may be the result of something outside our network that we can do nothing about (easily). It may also be a known transient issue as capacity is juggled or work is conducted on our network or others.

I'm afraid a change in latency isn't usually treated as a fault, so you'll probably not have much luck through the helpdesk. I am in a position to see if it's a known issue (by those that would know) and I'll report back if I can. Just be warned, the report _MIGHT_ be "outside our network, sorry, it's the Wild West out there".

Cheers - N




11 posts

Geek


  Reply # 821764 18-May-2013 21:49

Hi N,

First up, you're a star   :)

Second, I did a little more work trying to track down the problem and have updated the prior post. Hopefully that information provides better breadcrumbs.

Thanks to everyone who has posted!

3740 posts

Uber Geek
+1 received by user: 2270

Trusted
Spark NZ

  Reply # 821767 18-May-2013 21:55

A couple more things... Can you post a full tracert to 66.150.148.1 and 216.52.255.93, as well as:

- Your physical location
- The cabinet you are connected to (if known)
- The primary DNS server assigned to your router by the network (humour me; I know DNS has nothing to do with your latency, but it will tell me something about your connection)
- Ping and a tracert to 219.88.188.5

Is the latency consistent at all times of day? Is it variable while you're pinging (assuming nothing else is active on your network; I have no doubt you are testing while connected via an ethernet cable)?

Regards
N



11 posts

Geek


  Reply # 821787 18-May-2013 22:35

I'll answer the questions first and post the traceroutes below:

Location: Kumeu / Huapai

Cabinet: KME/N

Primary DNS on router: 122.56.237.1

Route consistency: very consistent - it only varies by ~5-10ms during peak residential hours, sometimes not at all (a simple way to log samples over time is sketched after the method note below).

Method: I am testing connectivity using a Windows workstation connected to an unmanaged switch via wired ethernet. The first hop in the traceroutes is the Thomson 585v8 router provided by Telecom. I use a firewall/VPN gateway, half-bridged to the Thomson 585v8, as the default gateway for hosts within my network, so the 192.168.1.254 in the traces is the Thomson router.
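Regarding consistency, a simple (hypothetical) way to build up samples at different times of day is to append timestamped ping runs to a log file and compare them later, e.g.:

rem append a timestamped sample to a log file; run at different times of day (the filename is just an example)
echo %date% %time% >> latency_log.txt
ping -n 50 203.96.120.194 >> latency_log.txt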

Ping to 219.88.188.5:
Pinging 219.88.188.5 with 32 bytes of data:
Reply from 219.88.188.5: bytes=32 time=12ms TTL=59
Reply from 219.88.188.5: bytes=32 time=10ms TTL=59
Reply from 219.88.188.5: bytes=32 time=10ms TTL=59
Reply from 219.88.188.5: bytes=32 time=10ms TTL=59
Reply from 219.88.188.5: bytes=32 time=10ms TTL=59
Reply from 219.88.188.5: bytes=32 time=10ms TTL=59
Reply from 219.88.188.5: bytes=32 time=10ms TTL=59
Reply from 219.88.188.5: bytes=32 time=12ms TTL=59
Reply from 219.88.188.5: bytes=32 time=9ms TTL=59
Reply from 219.88.188.5: bytes=32 time=9ms TTL=59

Ping statistics for 219.88.188.5:
Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 9ms, Maximum = 12ms, Average = 10ms

Traceroute to 219.88.188.5:

Tracing route to cache.google.com [219.88.188.5]
over a maximum of 30 hops:

1 <1 ms <1 ms <1 ms 192.168.1.254
2 14 ms 12 ms 12 ms 122-61-40-4.jetstream.xtra.co.nz [122.61.40.4]
3 27 ms 11 ms 11 ms 122-61-40-1.jetstream.xtra.co.nz [122.61.40.1]
4 12 ms 13 ms 14 ms cache.google.com [219.88.188.1]
5 13 ms 15 ms 15 ms cache.google.com [219.88.188.5]

Trace complete.
Traceroute to 66.150.148.1:
Tracing route to 66.150.148.1 over a maximum of 30 hops

1 <1 ms <1 ms <1 ms 192.168.1.254
2 13 ms 12 ms 13 ms 122-61-40-4.jetstream.xtra.co.nz [122.61.40.4]
3 16 ms 14 ms 151 ms 122-61-40-1.jetstream.xtra.co.nz [122.61.40.1]
4 14 ms 12 ms 12 ms mdr-ip20e-int.msc.global-gateway.net.nz [203.96.66.22]
5 * 12 ms 12 ms ae0-10.akbr6.global-gateway.net.nz [203.96.66.21]
6 13 ms 12 ms 12 ms ae5-2.akbr4.global-gateway.net.nz [210.55.202.213]
7 166 ms 166 ms 165 ms so6-1-3.sjbr2.global-gateway.net.nz [202.50.232.34]
8 193 ms 193 ms 191 ms ae0.pabr4.global-gateway.net.nz [203.96.120.74]
9 192 ms 194 ms 194 ms xe-0-0-0-1.r06.plalca01.us.bb.gin.ntt.net [140.174.21.17]
10 201 ms 200 ms 203 ms ae-4.r07.snjsca04.us.bb.gin.ntt.net [129.250.4.119]
11 196 ms 193 ms 167 ms te0-7-0-35.ccr22.sjc03.atlas.cogentco.com [154.54.11.1]
12 203 ms 203 ms 204 ms 154.54.89.97
13 203 ms 203 ms 205 ms te0-3-0-2.mpd22.lax01.atlas.cogentco.com [154.54.25.189]
14 203 ms 203 ms 203 ms te8-1.mag02.lax01.atlas.cogentco.com [154.54.47.170]
15 206 ms 209 ms 212 ms internap.com [38.104.82.234]
16 175 ms 172 ms 172 ms border2.po2-20g-bbnet2.lax010.pnap.net [216.52.255.93]
17 239 ms 240 ms 240 ms 66.150.148.1

Traceroute to 216.52.255.93:
Tracing route to border2.po2-20g-bbnet2.lax010.pnap.net [216.52.255.93]
over a maximum of 30 hops:

1 <1 ms <1 ms <1 ms 192.168.1.254
2 12 ms 11 ms 12 ms 122-61-40-4.jetstream.xtra.co.nz [122.61.40.4]
3 50 ms 12 ms 12 ms 122-61-40-1.jetstream.xtra.co.nz [122.61.40.1]
4 12 ms 13 ms 12 ms mdr-ip20e-int.msc.global-gateway.net.nz [203.96.66.22]
5 12 ms 17 ms 13 ms ae0-10.akbr6.global-gateway.net.nz [203.96.66.21]
6 21 ms 12 ms 13 ms xe3-1-0.tkbr9.global-gateway.net.nz [202.50.232.74]
7 191 ms 137 ms 193 ms ae2-3.labr5.global-gateway.net.nz [203.96.120.142]
8 172 ms 172 ms 137 ms ae0-3.lebr6.global-gateway.net.nz [203.96.120.86]
9 169 ms 140 ms 141 ms ae0-3.lebr6.global-gateway.net.nz [203.96.120.86]
10 138 ms 137 ms 138 ms ae-4.r04.lsanca03.us.bb.gin.ntt.net [129.250.6.88]
11 139 ms 137 ms 138 ms xe-0-3-0-1.r04.lsanca03.us.ce.gin.ntt.net [198.172.90.26]
12 143 ms 170 ms 141 ms xe-0-3-0-1.r04.lsanca03.us.ce.gin.ntt.net [198.172.90.26]
13 211 ms 190 ms 176 ms border2.po2-20g-bbnet2.lax010.pnap.net [216.52.255.93]
Please note:

- I gathered the cabinet name a while back, when the Chorus Service Availability tool provided that information (~Feb 2013). I'm not sure if there is a way of tracking this information currently.

- I use a caching DNS server that forwards LAN requests to 202.27.158.40 for resolution. I flushed the DNS cache to ensure hostnames were accurate. Running the traceroutes to these hosts without name resolution gives the same performance results, however.
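For reference, the no-name-resolution runs mentioned above are just the same traceroutes with the -d switch, e.g.:

rem skip reverse DNS lookups on each hop
tracert -d 66.150.148.1
tracert -d 216.52.255.93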

Cheers,

C
 



11 posts

Geek


  Reply # 821795 18-May-2013 23:25

FYI, the networks of interest for me are:

216.133.234.0/24,
192.64.168.0/24,
192.64.169.0/24,
192.64.170.0/24,
64.7.194.0/24,
66.150.148.0/24

They are for a game I like...
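If anyone wants to check which international gateway those prefixes currently exit through, a traceroute to one address in each block shows it (the .1 in each block below is just a representative address, not necessarily a host that responds):

tracert -d 216.133.234.1
tracert -d 192.64.168.1
tracert -d 192.64.169.1
tracert -d 192.64.170.1
tracert -d 64.7.194.1
tracert -d 66.150.148.1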

3740 posts

Uber Geek
+1 received by user: 2270

Trusted
Spark NZ

  Reply # 822285 20-May-2013 10:52

I've spoken to someone about this and they are aware of some normal variation in the upstream routing that happened at about the same time as you noticed the increase in latency. We're looking into it, but from our point of view all the upstream providers are operating within the SLAs that they have with us. That doesn't mean we won't look to improve this - we will - but any improvements will be part of a normal and ongoing process to optimise performance, rather than treating this as an actual fault at this stage.

Regards - N



11 posts

Geek


  Reply # 822327 20-May-2013 11:35

That sounds reasonable, if slightly frustrating :)

Thanks for looking into the issue and passing it on to those who may be able to improve performance on those routes over time.

Cheers,

C

2282 posts

Uber Geek
+1 received by user: 370

Trusted
Subscriber

  Reply # 822332 20-May-2013 11:42

I'll be impressed if they can do anything about it; many of the routing decisions are made dynamically or on commercial grounds.

What works for you today might not work optimally tomorrow, and vice versa for other things.



11 posts

Geek


  Reply # 822336 20-May-2013 11:49

My main hope is centered on a failed fibre that was apparently connected to the LA landing point for SxC. If the cost or performance of that route was poor for a period of time and someone manually shifted the route, then shifting it back could provide a pleasant surprise. This is obviously conjecture, based on the rumour of the failed fibre (which apparently affected Vodafone transit quite badly?!?!?).

Anyhow, I will remain an optimist for the moment   :)

