Geekzone: technology news, blogs, forums


Voice Engineer @ Orcon
1999 posts

Uber Geek
+1 received by user: 472

Trusted
Orcon
Subscriber

  Reply # 666846 3-Aug-2012 12:04 Send private message

mercutio: 

Well yeah - but I think there is actually a lot of hidden latency on the internet nowadays.  Some people say that ping/traceroute is inaccurate because ICMP is deprioritised.  But sometimes whole TCP streams can be deprioritised (or go across a higher-latency path), rate-shaped, or have enough packet loss to limit transfer speeds below what the connection is capable of.  You can do a traceroute, then make an actual connection and end up on a different path.

So you may ping and get 125 msec ping, then do a tcp connection and get 135 msec. 

But this kind of traffic shifting could become even more common with "bulk" traffic being pushed through Australia and interactive traffic being pushed straight.  Or low-value customers being pushed through Australia.


ICMP should have the same priority as most other traffic.  Besides which, this is only an issue if the slowest link is nearing saturation.

TCP introduces its own overheads, and packets will be bigger, so you can't really compare TCP ping with ICMP ping.

The truth is, as many people have pointed out, that on international traffic most of the latency is purely due to physics and cannot be reduced given our current understanding of physics.
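For what it's worth, you can measure RTT with a real TCP handshake rather than an ICMP echo, which sidesteps the "ICMP is deprioritised" objection. A minimal sketch (the target host/port are whatever you choose to test against):

```python
import socket
import time

def tcp_connect_rtt_ms(host, port=80, timeout=5.0):
    """Time one TCP three-way handshake (SYN -> SYN/ACK -> ACK).

    Because this is real TCP, routers treat it like other TCP
    traffic, unlike an ICMP echo which some may deprioritise.
    """
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # handshake complete once connect() returns
    return (time.monotonic() - start) * 1000.0

# e.g. tcp_connect_rtt_ms("example.com", 80)
```

It still only measures the handshake path, of course - as noted above, the data can then travel a different path.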

Have plan, send $NZD50m
3475 posts

Uber Geek
+1 received by user: 75

Subscriber

  Reply # 666857 3-Aug-2012 12:07 Send private message

Wow.... ok this could take a bit of addressing :) 

I am sure that some of this will sound quite daft, and some will be like preaching to the converted...  but I'm throwing my thoughts out there as much for the benefit of other interested readers who are just keen to better understand the space.  I know I'm very keen to better understand the dynamics as different people see them, even if I don't always agree.

daverobb:  The problem with accessing global capacity via Australia is that there are already people who already complain about the RTT from NZ to the US (someone should do something about that pesky speed of light), and when traffic has at times been sent to the US and elsewhere via Australia, even more people complain!


Ok, first up: you seem to be assuming that I think traffic draw is going to continue to come from .us and push to the same.  I don't.

If I have a look around me in New Zealand, I see the country filling up with people from Asia.

Our major trading partner is Australia, and we're seeing more and more trade with Asia and less and less with the USA.

RTT comes into play when you're dealing with time dependent applications such as Skype or online gaming.

Skype traffic is going to follow the people users want to talk to.  So in a country full of Asian people, that traffic is going to head to Asia, where their families and friends are.

Gaming comes in two parts: the live data for the interactive game, and core software downloads.  I agree the live stuff wants to be fast, but the software and 'offline' stuff (such as a game-play review, or whatever they call those little videos they make of their battle with that mate in wherever...) can all happen at 300 ms and no one will know or care.


daverobb:   Having a cable dedicated to NZ-AU traffic isn't going to be any different to the existing SCCN capacity; I think it very unlikely that it would be cheaper


Yes it will be different.  To get to PPC1, AJC or Endv, you're not going to have to touch SCCN at all.  The problem at present is that when you buy capacity from SCCN you then have to get it off SCCN, to a peering point, and then onto the other cables - which the SCCN sales guy knows, so he'll cut a deal just to keep you using their capacity direct to .us.  This has to be an obvious competitive issue.


daverobb:  (SCCN pricing for capacity NZ-AU has, like NZ-US prices, been dropping dramatically over the last few years),


21% is not Moore's Law.  I keep seeing this 21% price-drop figure bounced about the net all over the place.  SCCN seem to hold it up like they're doing us all this super favour.  But are they doing us a favour at all?

Moore's law means that the same capacity can be built for half the price every 18 months.  While that's not exactly what he said, I think most people can see this price effect in action in most technology spaces.

Is 21% even a doubling of value every 12 months?
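To put rough numbers on that comparison (my arithmetic, not SCCN's): a steady 21% annual price drop versus a Moore-style halving every 18 months, compounded over six years:

```python
def price_after(years, annual_decline=0.21):
    """Fraction of the original price left after compounding an
    annual percentage decline."""
    return (1 - annual_decline) ** years

def moore_price_after(years, halving_period=1.5):
    """Fraction of the original price left if it halves every
    18 months (1.5 years)."""
    return 0.5 ** (years / halving_period)

print(round(price_after(6), 3))        # 21%/yr for 6 years -> 0.243
print(round(moore_price_after(6), 4))  # four halvings      -> 0.0625
```

So after six years, a 21% annual drop leaves prices roughly four times higher than the Moore-style trajectory would.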


daverobb:  you still need network at both ends (otherwise you're just getting between landing stations, and I don't see a lot of torrent peers present at those), and people will complain if you use AU as a hop in the path for NZ->US capacity.


Ok, isn't this where traffic management technology has to come in?  Also, how are people going to know that the traffic is heading .au --> .us if it's layer 2?

Is anyone going to notice if you push torrent traffic via .au?

We all seem to agree that UFB is going to be about video.  We also know that once iTunes hit the net, people started paying for music online rather than just stealing it, once it became easier to buy than steal.  Is the same thing going to happen with video?

Where are QuickFlicks hosting their data? 

There has been a great deal of discussion about CDNs in Australia, and massive investment in DCs in Australia.

I posted yesterday about 'back load' capacity.  With the likes of Bevan Slattery's NextDC building data centers like mad, it seems to me that there are people with their eye on exactly these issues.








Promote New Zealand - Get yourself a .kiwi.nz domain name!!!

Check out mine - i.am.a.can.do.kiwi.nz - [email protected]


1089 posts

Uber Geek
+1 received by user: 49


  Reply # 666862 3-Aug-2012 12:13 Send private message

ubergeeknz:
mercutio: 

Well yeah - but I think there is actually a lot of hidden latency on the internet nowadays.  Some people say that ping/traceroute is inaccurate because ICMP is deprioritised.  But sometimes whole TCP streams can be deprioritised (or go across a higher-latency path), rate-shaped, or have enough packet loss to limit transfer speeds below what the connection is capable of.  You can do a traceroute, then make an actual connection and end up on a different path.

So you may ping and get 125 msec ping, then do a tcp connection and get 135 msec. 

But this kind of traffic shifting could become even more common with "bulk" traffic being pushed through Australia and interactive traffic being pushed straight.  Or low-value customers being pushed through Australia.


ICMP should have the same priority as most other traffic.  Besides which, this is only an issue if the slowest link is nearing saturation.

TCP introduces its own overheads, and packets will be bigger, so you can't really compare TCP ping with ICMP ping.

The truth is, as many people have pointed out, that on international traffic most of the latency is purely due to physics and cannot be reduced given our current understanding of physics.


Actually, I'd say a large amount of perceived latency comes from packet loss and application overhead.

The reason eBay etc. is slow is partly that their page generation is slow.

The overhead of larger packets isn't actually that high.

1937 posts

Uber Geek
+1 received by user: 473

Trusted
Spark NZ

  Reply # 666869 3-Aug-2012 12:17 Send private message

DonGould: Wow.... ok this could take a bit of addressing :) 
[snip]
RTT comes into play when you're dealing with time dependent applications such as Skype or online gaming.


Wrong. RTT is a major contributor to per-connection throughput.
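The standard back-of-the-envelope here is the Mathis et al. bound: steady-state TCP throughput is roughly MSS / (RTT * sqrt(p)), where p is the loss rate. A sketch with illustrative RTT and loss numbers (the constant factor is taken as 1):

```python
import math

def tcp_throughput_mbps(rtt_ms, loss_rate, mss_bytes=1460):
    """Approximate ceiling on one TCP connection's throughput,
    per the Mathis et al. formula: MSS / (RTT * sqrt(p))."""
    rtt_s = rtt_ms / 1000.0
    bits_per_sec = (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))
    return bits_per_sec / 1e6

# Same 0.1% loss, NZ-domestic-ish RTT vs NZ -> US RTT:
print(round(tcp_throughput_mbps(30, 0.001), 1))   # ~12.3 Mbit/s
print(round(tcp_throughput_mbps(160, 0.001), 1))  # ~2.3 Mbit/s
```

At equal loss, the single-flow ceiling scales inversely with RTT, independent of link capacity.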



21% is not Moore's Law.  I keep seeing this 21% price-drop figure bounced about the net all over the place.  SCCN seem to hold it up like they're doing us all this super favour.  But are they doing us a favour at all?

Moore's law means that the same capacity can be built for half the price every 18 months.  While that's not exactly what he said, I think most people can see this price effect in action in most technology spaces.



I think you should go and read what Moore's law actually is. Hint - it's not network capacity. Some people have co-opted it under another name, but data consumption hasn't progressed at the rate of Moore's law anywhere that I am aware of over meaningful periods of time.

In addition to that, SCCN isn't being rebuilt every 18 months.

Sorry Don, you're way off here, and definitely seem to be trying to bat outside your league.

Cheers - N

Watchmaker Wizard
2414 posts

Uber Geek
+1 received by user: 57

Subscriber

  Reply # 666878 3-Aug-2012 12:23 Send private message

Cable seems like a fairly sure way to make money; a public offering might've attracted interest - heck, even I might've been keen to put a bit in.

Regardless, I'm still pretty happy with $70-ish getting me 120GB/month @ 2MB/sec via Telecom. UFB is of no interest to me personally but I'm obviously not the target audience.




1089 posts

Uber Geek
+1 received by user: 49


  Reply # 666880 3-Aug-2012 12:24 Send private message

Talkiet:
DonGould: Wow.... ok this could take a bit of addressing :) 
[snip]
RTT comes into play when you're dealing with time dependent applications such as Skype or online gaming.


Wrong. RTT is a major contributor to per-connection throughput.


To ramp up speeds, maybe - but now, with so many Linux hosts using CUBIC congestion control, high speeds can be reached quite quickly, and packet loss seems to be the main reason for sluggish connections.

And in the case of equal levels of packet loss, a short-RTT connection wins over a long-RTT connection.

I suppose RTT is still a major contributor, yes.

If you have 8 connections to a web server, each with 15k of data, then you are looking at 3 to 5 round-trip times in ideal circumstances.

1 to create the connection.  1 to send the first 3 or 10 packets, depending on whether Google's recommended initial window size of 10 packets is used (and 2 on some servers, like www.godaddy.com).

Then 1 more to send the rest of the data after ramping up from a 10-packet initial window, or 3 more from a 3-packet one.

But if you lose the 1st packet sent, or the 3rd packet received, on a 3-packet initial window you don't have another packet to signal the loss.  And even then, fast retransmit usually happens only after multiple repeated acknowledgements for the missing packet.

That's an extra reason a 10-packet initial window helps performance.  But small web objects are so common that fast retransmit often still can't kick in.

The thing is - if you go to a page and it has 40 images, all 40k big, all sent in one go, you have 400 packets you're receiving, without pipelining.  If you have 1% loss, there's a high chance of having to wait at least an extra RTT.  And that loss can happen anywhere along the way.
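The round-trip counting above can be sketched as a loss-free slow-start model: the congestion window doubles each RTT, and the handshake adds one more round trip on top.

```python
def data_rtts(total_bytes, init_cwnd_pkts, mss=1460):
    """Round trips of data transfer under idealised slow start:
    the congestion window doubles every RTT and nothing is lost."""
    rounds, sent, cwnd = 0, 0, init_cwnd_pkts
    while sent < total_bytes:
        sent += cwnd * mss
        cwnd *= 2
        rounds += 1
    return rounds

# A 15 KB web object, plus 1 RTT for the TCP handshake:
print(1 + data_rtts(15 * 1024, 10))  # 10-packet initial window -> 3 RTTs
print(1 + data_rtts(15 * 1024, 3))   # 3-packet initial window  -> 4 RTTs
```

That matches the 3-to-5-RTT range quoted above once you allow for a lost packet costing an extra round trip.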

1089 posts

Uber Geek
+1 received by user: 49


  Reply # 666887 3-Aug-2012 12:27 Send private message

stevenz: Cable seems like a fairly sure way to make money; a public offering might've attracted interest - heck, even I might've been keen to put a bit in.

Regardless, I'm still pretty happy with $70-ish getting me 120GB/month @ 2MB/sec via Telecom. UFB is of no interest to me personally but I'm obviously not the target audience.


Yeah, at the end of the day, for web browsing, the main difference with UFB will be 5 msec or so lower latency.  And being able to ack packets faster, letting the other end know you have a fast connection.

But because you're on a fast connection that's being artificially limited, it'll be easy for a remote end to send too fast for your connection and overestimate your connection speed.

There'll also be a difference because of the upload speed - your connection will not consume as much of your upload bandwidth when downloading.  And uploading while using the connection at the same time will work a lot better.

Also sending images etc on skype, email etc will seem a lot faster.

jsr

16 posts

Geek


  Reply # 666918 3-Aug-2012 12:49 Send private message

 RTT comes into play when you're dealing with time dependent applications such as Skype or online gaming. 


RTT comes into play on any TCP connection, just because of how the protocol works - packet acks take longer on high latency links (like, say, NZ to US) and you don't get the next packet until the last one is ack'd. This means that on the exact same gear, using the exact same TCP stack, s%&t will go slower when you're further away. Doesn't matter if it's mail, web, ftp, torrents, or my own personal favourite: Gopher.[1]

You can get around this a bit by, say, opening a crapload of simultaneous TCP streams - this is one of the reasons torrents are so good at filling up links - but on a single TCP session, latency really, really, really matters.
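The single-stream ceiling described above falls out of one division: a connection can never carry more than one window of unacknowledged data per round trip. A sketch, assuming a classic 64 KB receive window:

```python
def window_limited_mbps(window_bytes, rtt_ms):
    """Hard ceiling on a single TCP stream's throughput:
    window / RTT, regardless of how fat the pipe is."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

print(round(window_limited_mbps(64 * 1024, 5), 1))    # LA -> LA: ~104.9 Mbit/s
print(round(window_limited_mbps(64 * 1024, 170), 2))  # NZ -> LA: ~3.08 Mbit/s
```

Same gear, same stack, same window - 34x the RTT means 1/34th the per-stream ceiling, which is exactly why many parallel streams (torrents) fill a long fat link when one stream can't.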

Sorry that I posted something to GZ in which I didn't insult anyone[2]. Rest assured I am insulting you appropriately in my head, and behind your back.[3]

JSR

[1] It's coming back! YOU MARK MY WORDS!

[2] I also didn't post a speedtest.net screenshot. Again, I'm sorry, I've let Geekzone down.

[3] And on IRC.

jsr

16 posts

Geek


  Reply # 666920 3-Aug-2012 12:50 Send private message

DISCLAIMER: TalkieT works for Telecom NZ.

1089 posts

Uber Geek
+1 received by user: 49


  Reply # 666922 3-Aug-2012 12:53 Send private message

jsr:
 RTT comes into play when you're dealing with time dependent applications such as Skype or online gaming. 


RTT comes into play on any TCP connection, just because of how the protocol works - packet acks take longer on high latency links (like, say, NZ to US) and you don't get the next packet until the last one is ack'd. This means that on the exact same gear, using the exact same TCP stack, s%&t will go slower when you're further away. Doesn't matter if it's mail, web, ftp, torrents, or my own personal favourite: Gopher.[1]


It won't normally make local downloads go slower.  Going from 10 to 20 msec latency might make 30 msec of difference to a large local transfer.  It's only under congestion that it matters.

It also shouldn't make uTorrent go much slower, as uTP has become common, and it responds to increases in latency rather than to loss (and to single-direction latency, too, unlike TCP).

But yeah, uTorrent isn't so great in that by default it tries to stay just under 100 msec of latency, and 100 msec of latency is crazy high!  Which means that if there isn't a separate queue for it, it'll hog the connection.  And ADSL2+ connections can't usually lag out by 100 msec with downloads (but can easily with uploads).

net.utp.receive_target_delay and 
net.utp.target_delay

can be tweaked to fix that.  Or a separate queue for downloads.

And really, if everyone could have their downloads on a separate queue with a decent amount of buffer, they could get fast downloads and not congest other connections.
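The delay-targeting behaviour described above is the LEDBAT idea (RFC 6817) that uTP implements: the sender backs off as measured one-way queuing delay approaches a target (uTorrent's default being that ~100 msec figure). A toy sketch of a single update step - not uTorrent's actual code:

```python
def ledbat_step(cwnd_pkts, queuing_delay_ms, target_ms=100.0, gain=1.0):
    """One LEDBAT-style congestion-window update: grow while
    queuing delay is below the target, shrink once it exceeds it."""
    off_target = (target_ms - queuing_delay_ms) / target_ms
    return max(1.0, cwnd_pkts + gain * off_target)

print(ledbat_step(10, 50))   # under target -> window grows to 10.5
print(ledbat_step(10, 150))  # over target  -> window shrinks to 9.5
```

Lowering the target (the net.utp.*target_delay settings mentioned above) makes it yield sooner, which is exactly the tweak being suggested.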

jsr

16 posts

Geek


  Reply # 666941 3-Aug-2012 13:11 Send private message

mercutio: 

It won't make local downloads go slower normally.  going from 10 to 20 msec latency, might make 30 msec difference to a large local transfer speed.  It's only under congestion that it matters.


You misunderstand. Congestion has nothing to do with it. 

Look, imagine two computers. You're on Geekzone, so imagine that they're both PC's running Windows and they have, like, perspex inserts in the case and are full of LED fans and a seven-segment display on the front that shows what temp the CPUs are. Point is, they're identical. They're both connected to the internet via an un-contended 100Mbits/sec link to the internet[1]. One is in Los Angeles. The other is in Auckland. 

They both request the following from a server also located in Los Angeles:

PC: "HAY I WANT SUM DATA"
SERVER: "OK MANG, HERE'S 1500 BYTES BRO. LET ME KNOW WHEN YOU GET IT."
PC: GOT IT! NOW GIVE ME ANOTHER!
SERVER: SURE THING! 

..this repeats until you have all the data you wanted, 1500 bytes at a time. 

Thing is, each bit of that "Give it to me, here you go, did you get it?, yep sure did!" conversation takes, like 2 or 3 or maybe 5ms between the computer in LA and the server in LA. But each bit of it takes 160 to 180ms between the computer in NZ and the server in LA. So the total time to get each 1500bytes is longer and ALWAYS WILL BE LONGER, just due to (a) physics, and (b) how TCP works.

JSR

[1] Purchase from the good, clueful, smart, impossibly handsome and charismatic folks at Vector Communications.[2]

[2] Disclaimer: I work for Vector Communications[3]

[3] TalkieT still works for Telecom NZ.
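jsr's one-packet-at-a-time conversation (deliberately simplified - real TCP keeps a whole window in flight) translates directly into arithmetic:

```python
def stop_and_wait_seconds(total_bytes, rtt_ms, segment_bytes=1500):
    """Transfer time if every segment must be acknowledged before
    the next is sent -- the simplified model in the post above."""
    segments = -(-total_bytes // segment_bytes)  # ceiling division
    return segments * (rtt_ms / 1000.0)

# Fetching 1 MB from a server in LA:
t_la = stop_and_wait_seconds(1_000_000, 5)    # from LA: ~3.3 seconds
t_nz = stop_and_wait_seconds(1_000_000, 170)  # from NZ: ~113 seconds
```

Windowing makes real TCP far faster than this, but the proportionality to RTT survives: same transfer, 34x the RTT, 34x the time.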

1089 posts

Uber Geek
+1 received by user: 49


  Reply # 666942 3-Aug-2012 13:13 Send private message

jsr:
mercutio: 

It won't make local downloads go slower normally.  going from 10 to 20 msec latency, might make 30 msec difference to a large local transfer speed.  It's only under congestion that it matters.


You misunderstand. Congestion has nothing to do with it. 

Look, imagine two computers. You're on Geekzone, so imagine that they're both PC's running Windows and they have, like, perspex inserts in the case and are full of LED fans and a seven-segment display on the front that shows what temp the CPUs are. Point is, they're identical. They're both connected to the internet via an un-contended 100Mbits/sec link to the internet[1]. One is in Los Angeles. The other is in Auckland. 

They both request the following from a server also located in Los Angeles:

PC: "HAY I WANT SUM DATA"
SERVER: "OK MANG, HERE'S 1500 BYTES BRO. LET ME KNOW WHEN YOU GET IT."
PC: GOT IT! NOW GIVE ME ANOTHER!
SERVER: SURE THING! 

..this repeats until you have all the data you wanted, 1500 bytes at a time. 

Thing is, each bit of that "Give it to me, here you go, did you get it?, yep sure did!" conversation takes, like 2 or 3 or maybe 5ms between the computer in LA and the server in LA. But each bit of it takes 160 to 180ms between the computer in NZ and the server in LA. So the total time to get each 1500bytes is longer and ALWAYS WILL BE LONGER, just due to (a) physics, and (b) how TCP works.

JSR

[1] Purchase from the good, clueful, smart, impossibly handsome and charismatic folks at Vector Communications.[2]

[2] Disclaimer: I work for Vector Communications[3]

[3] TalkieT still works for Telecom NZ.


Yeah, but if it means choosing between packet loss, and latency, then having 0% packet loss, and 150 msec latency will give you better performance than 1% packet loss and 130 msec latency.

Sure with a few short connections it'll help having the lower latency some of the time.  But the "random" cost of much higher latency is what "hurts".   And if there is limited bandwidth because it's expensive there'll continue to be "hurt" while packets are dropped randomly to keep costs down.

If going via Australia can mean less cost and less need to drop packets, it could still work out towards a more consistent experience.

Ok, in my little test, right now from a UK VPS I am getting some packet loss, and from a Los Angeles
VPS no packet loss.

This is my modified iperf to allow an interval of 0.1 seconds for statistics.

[ 3] 6.0- 6.5 sec 480 KBytes 7.86 Mbits/sec 0.128 ms 0/ 1920 (0%)
[ 3] 6.5- 7.0 sec 487 KBytes 7.97 Mbits/sec 0.140 ms 6/ 1953 (0.31%)
[ 3] 7.0- 7.5 sec 488 KBytes 7.99 Mbits/sec 0.144 ms 3/ 1953 (0.15%)
[ 3] 7.5- 8.0 sec 487 KBytes 7.98 Mbits/sec 0.134 ms 5/ 1953 (0.26%)
[ 3] 8.0- 8.5 sec 488 KBytes 8.00 Mbits/sec 0.134 ms 2/ 1955 (0.1%)
[ 3] 8.5- 9.0 sec 486 KBytes 7.97 Mbits/sec 0.150 ms 6/ 1951 (0.31%)
[ 3] 9.0- 9.5 sec 486 KBytes 7.96 Mbits/sec 0.176 ms 10/ 1954 (0.51%)
[ 3] 9.5-10.0 sec 482 KBytes 7.89 Mbits/sec 0.220 ms 21/ 1948 (1.1%)


Packet loss isn't very consistent.  But it's frequent enough that it could be intrusive at times, yet not frequent enough that you could say there's a problem.  (And on a VPS you can't really expect to always have under 0.5% packet loss or anything; the internet at large even more so.)

Currently a transfer only goes at 432k/sec, whereas I'm getting 1486k/sec to the Los Angeles host.

The ping is 140 msec to the Los Angeles host and 281 msec to the UK host - and yet the speed to the Los Angeles host is over twice as high!

Actually, I was taking the current speed, not the total speed.


uk:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 10.0M 100 10.0M 0 0 512k 0 0:00:19 0:00:19 --:--:-- 432k

la:

% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 10.0M 100 10.0M 0 0 1323k 0 0:00:07 0:00:07 --:--:-- 1486k


Curiously, it's even worse from Los Angeles to the UK:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 10.0M 100 10.0M 0 0 204k 0 0:00:50 0:00:50 --:--:-- 145k

With a ping of 160 - which is quite high.  It's most likely due to "as5580", whoever that is (as5580.net), as they're in the path in both directions with no other transit providers.

7439 posts

Uber Geek
+1 received by user: 956

Trusted
Subscriber

  Reply # 666947 3-Aug-2012 13:17 Send private message

jsr: DISCLAIMER: TalkieT works for Telecom NZ.


I think that is adequately clear; it's stated on his profile beside all his posts, and he has made NO attempt to hide it regardless. 


1089 posts

Uber Geek
+1 received by user: 49


  Reply # 666968 3-Aug-2012 13:38 Send private message

jsr: 
Thing is, each bit of that "Give it to me, here you go, did you get it?, yep sure did!" conversation takes, like 2 or 3 or maybe 5ms between the computer in LA and the server in LA. But each bit of it takes 160 to 180ms between the computer in NZ and the server in LA. So the total time to get each 1500bytes is longer and ALWAYS WILL BE LONGER, just due to (a) physics, and (b) how TCP works.

[1] Purchase from the good, clueful, smart, impossibly handsome and charismatic folks at Vector Communications.[2]

[2] Disclaimer: I work for Vector Communications[3]


Vector have a REALLY long path from Los Angeles.  At least to their web site.

zsh 3053 # traceroute vectorfibre.co.nz
traceroute to vectorfibre.co.nz (120.138.24.214), 64 hops max, 40 byte packets
1 174.136.111.233 (174.136.111.233) 7.920 ms 11.584 ms 12.453 ms
2 vocus.com.au.any2ix.coresite.com (206.223.143.136) 8.262 ms 8.247 ms 8.221 ms
3 ten-0-2-0.bdr01.sjc01.ca.VOCUS.net.au (114.31.199.116) 186.803 ms 186.921 ms 186.996 ms
4 ip-52.199.31.114.VOCUS.net.au (114.31.199.52) 187.115 ms 187.404 ms 187.384 ms
5 ip-55.199.31.114.VOCUS.net.au (114.31.199.55) 186.503 ms 186.394 ms 186.443 ms
6 ip-88.202.31.114.VOCUS.net.au (114.31.202.88) 187.108 ms 186.857 ms 186.816 ms
7 ten-0-1-0.bdr02.akl02.akl.VOCUS.net.au (114.31.202.41) 187.86 ms 187.169 ms 187.241 ms
8 ge-0-0-0.bdr01.akl02.akl.VOCUS.net.au (114.31.202.36) 186.770 ms 186.762 ms 186.956 ms
9 as9503-2.cust.bdr01.akl02.akl.VOCUS.net.au (114.31.203.13) 187.302 ms 187.90 ms 187.125 ms
10 TenGigabitEthernet0-2-0-961.akkin-rt1.fx.net.nz (131.203.254.205) 187.267 ms 188.83 ms 187.188 ms
11 fx-ak1.king.sitehost.co.nz (131.203.251.250) 187.119 ms 186.942 ms 187.9 ms
12 120-138-16-2.sitehost.co.nz (120.138.16.2) 188.701 ms 251.910 ms 214.951 ms
13 br0.air2.sitehost.co.nz (120.138.16.10) 187.492 ms 188.152 ms 187.458 ms
14 aegir.vector.co.nz (120.138.24.214) 187.907 ms 187.703 ms 188.192 ms

And look what we have here.  They're going via Australia.  And the ping is nearly 50% higher than it could be.

Although Ihug seems slightly better:

# traceroute www.ihug.co.nz
traceroute to www.ihug.co.nz (203.109.135.195), 64 hops max, 40 byte packets
1 174.136.111.233 (174.136.111.233) 10.673 ms 5.335 ms 16.340 ms
2 vocus.com.au.any2ix.coresite.com (206.223.143.136) 8.243 ms 8.270 ms 8.332 ms
3 ip-142.199.31.114.VOCUS.net.au (114.31.199.142) 8.314 ms 8.361 ms 8.300 ms
4 203-109-178-3.dsl.dyn.ihug.co.nz (203.109.178.3) 161.3 ms 160.960 ms 161.170 ms
5 gi16-0-0-130.akl-ldv-edge1.akl.vf.net.nz (203.109.180.14) 161.710 ms 161.466 ms 161.388 ms
6 gi16-0-0-130.akl-ldv-edge1.akl.vf.net.nz (203.109.180.14) 161.137 ms 161.244 ms 161.378 ms



Who are also using vocus...

And wxc are just as bad:
traceroute to www.wxc.co.nz (58.28.4.142), 64 hops max, 40 byte packets
1 174.136.111.233 (174.136.111.233) 0.905 ms 0.831 ms 0.932 ms
2 vocus.com.au.any2ix.coresite.com (206.223.143.136) 8.404 ms 8.254 ms 8.499 ms
3 ten-0-2-0.bdr01.sjc01.ca.VOCUS.net.au (114.31.199.116) 186.735 ms 187.520 ms 186.858 ms
4 ip-52.199.31.114.VOCUS.net.au (114.31.199.52) 187.106 ms 187.384 ms 187.306 ms
5 ip-55.199.31.114.VOCUS.net.au (114.31.199.55) 186.468 ms 186.469 ms 186.408 ms
6 ip-88.202.31.114.VOCUS.net.au (114.31.202.88) 186.833 ms 186.801 ms 186.746 ms
7 ten-0-1-0.bdr02.akl02.akl.VOCUS.net.au (114.31.202.41) 187.78 ms 187.78 ms 187.46 ms
8 ge-0-0-0.bdr01.akl02.akl.VOCUS.net.au (114.31.202.36) 186.743 ms 186.750 ms 186.730 ms
9 as17435.cust.bdr01.akl02.akl.VOCUS.net.au (114.31.203.90) 187.111 ms 186.846 ms 186.872 ms
10 xe-0-2-0-35.akl-ip01.wxnz.net (58.28.160.1) 186.833 ms 238.235 ms 187.278 ms


As much as I think you get more consistent performance from less packet loss than from latency that's 10 to 20 msec lower, my opinion changes when there's 60 msec+ of extra latency.


Voice Engineer @ Orcon
1999 posts

Uber Geek
+1 received by user: 472

Trusted
Orcon
Subscriber

  Reply # 666983 3-Aug-2012 13:56 Send private message

mercutio:
jsr: 
Thing is, each bit of that "Give it to me, here you go, did you get it?, yep sure did!" conversation takes, like 2 or 3 or maybe 5ms between the computer in LA and the server in LA. But each bit of it takes 160 to 180ms between the computer in NZ and the server in LA. So the total time to get each 1500bytes is longer and ALWAYS WILL BE LONGER, just due to (a) physics, and (b) how TCP works.

[1] Purchase from the good, clueful, smart, impossibly handsome and charismatic folks at Vector Communications.[2]

[2] Disclaimer: I work for Vector Communications[3]


Vector have a REALLY long path from Los Angeles.  At least to their web site.

zsh 3053 # traceroute vectorfibre.co.nz
traceroute to vectorfibre.co.nz (120.138.24.214), 64 hops max, 40 byte packets
1 174.136.111.233 (174.136.111.233) 7.920 ms 11.584 ms 12.453 ms
2 vocus.com.au.any2ix.coresite.com (206.223.143.136) 8.262 ms 8.247 ms 8.221 ms
3 ten-0-2-0.bdr01.sjc01.ca.VOCUS.net.au (114.31.199.116) 186.803 ms 186.921 ms 186.996 ms
4 ip-52.199.31.114.VOCUS.net.au (114.31.199.52) 187.115 ms 187.404 ms 187.384 ms
5 ip-55.199.31.114.VOCUS.net.au (114.31.199.55) 186.503 ms 186.394 ms 186.443 ms
6 ip-88.202.31.114.VOCUS.net.au (114.31.202.88) 187.108 ms 186.857 ms 186.816 ms
7 ten-0-1-0.bdr02.akl02.akl.VOCUS.net.au (114.31.202.41) 187.86 ms 187.169 ms 187.241 ms
8 ge-0-0-0.bdr01.akl02.akl.VOCUS.net.au (114.31.202.36) 186.770 ms 186.762 ms 186.956 ms
9 as9503-2.cust.bdr01.akl02.akl.VOCUS.net.au (114.31.203.13) 187.302 ms 187.90 ms 187.125 ms
10 TenGigabitEthernet0-2-0-961.akkin-rt1.fx.net.nz (131.203.254.205) 187.267 ms 188.83 ms 187.188 ms
11 fx-ak1.king.sitehost.co.nz (131.203.251.250) 187.119 ms 186.942 ms 187.9 ms
12 120-138-16-2.sitehost.co.nz (120.138.16.2) 188.701 ms 251.910 ms 214.951 ms
13 br0.air2.sitehost.co.nz (120.138.16.10) 187.492 ms 188.152 ms 187.458 ms
14 aegir.vector.co.nz (120.138.24.214) 187.907 ms 187.703 ms 188.192 ms

And look what we have here.  They're going via Australia.  And the ping is nearly 50% higher than it could be.

Although Ihug seems slightly better:

# traceroute www.ihug.co.nz
traceroute to www.ihug.co.nz (203.109.135.195), 64 hops max, 40 byte packets
1 174.136.111.233 (174.136.111.233) 10.673 ms 5.335 ms 16.340 ms
2 vocus.com.au.any2ix.coresite.com (206.223.143.136) 8.243 ms 8.270 ms 8.332 ms
3 ip-142.199.31.114.VOCUS.net.au (114.31.199.142) 8.314 ms 8.361 ms 8.300 ms
4 203-109-178-3.dsl.dyn.ihug.co.nz (203.109.178.3) 161.3 ms 160.960 ms 161.170 ms
5 gi16-0-0-130.akl-ldv-edge1.akl.vf.net.nz (203.109.180.14) 161.710 ms 161.466 ms 161.388 ms
6 gi16-0-0-130.akl-ldv-edge1.akl.vf.net.nz (203.109.180.14) 161.137 ms 161.244 ms 161.378 ms



Who are also using vocus...

And wxc are just as bad:
traceroute to www.wxc.co.nz (58.28.4.142), 64 hops max, 40 byte packets
1 174.136.111.233 (174.136.111.233) 0.905 ms 0.831 ms 0.932 ms
2 vocus.com.au.any2ix.coresite.com (206.223.143.136) 8.404 ms 8.254 ms 8.499 ms
3 ten-0-2-0.bdr01.sjc01.ca.VOCUS.net.au (114.31.199.116) 186.735 ms 187.520 ms 186.858 ms
4 ip-52.199.31.114.VOCUS.net.au (114.31.199.52) 187.106 ms 187.384 ms 187.306 ms
5 ip-55.199.31.114.VOCUS.net.au (114.31.199.55) 186.468 ms 186.469 ms 186.408 ms
6 ip-88.202.31.114.VOCUS.net.au (114.31.202.88) 186.833 ms 186.801 ms 186.746 ms
7 ten-0-1-0.bdr02.akl02.akl.VOCUS.net.au (114.31.202.41) 187.78 ms 187.78 ms 187.46 ms
8 ge-0-0-0.bdr01.akl02.akl.VOCUS.net.au (114.31.202.36) 186.743 ms 186.750 ms 186.730 ms
9 as17435.cust.bdr01.akl02.akl.VOCUS.net.au (114.31.203.90) 187.111 ms 186.846 ms 186.872 ms
10 xe-0-2-0-35.akl-ip01.wxnz.net (58.28.160.1) 186.833 ms 238.235 ms 187.278 ms


As much as I think you get more consistent performance from less packet loss than from latency that's 10 to 20 msec lower, my opinion changes when there's 60 msec+ of extra latency.



I will point out why this testing is flawed:

1. It looks like you are tracerouting from US back to NZ. Routing is not always the same both ways, so this might not mean much for a connection initiated from NZ.

2. Who's to say that the www server is on their network core, or that it has any bearing on residential customer traffic routes?

A better test is to pick a server in the US - or several - and trace routes to it from residential connections on each ISP with good local performance.

Let's also be mindful that not all ISPs have interconnects everywhere, so the best path to two servers in LA might be very different.




