Sounddude (I fix stuff!, 1928 posts, Uber Geek, Trusted, 2degrees, Lifetime subscriber)
#831545 6-Jun-2013 12:33

mercutio:
I dunno why everyone still seems to think it's delay that affects bandwidth on high-latency connections. It's packet loss. If you have 0% packet loss you can get line rate to overseas; if you have even minor packet loss it can severely degrade performance.

For the same level of packet loss, the less latency, the better it'll handle it.

But with modern TCP improvements like CUBIC congestion control it's not hard to do 100 megabit+ to the other side of the world.




Yes, you are correct.

Cool little calculator for those who haven't seen it.

http://wand.net.nz/~perry/max_download.php
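
For anyone who wants to play with the numbers, calculators like that are typically built on the Mathis et al. approximation, rate <= MSS / (RTT * sqrt(p)). A minimal Python sketch, with illustrative NZ-to-US numbers:

```python
# Rough ceiling on single-stream TCP throughput from loss and RTT, using
# the Mathis et al. approximation: rate <= MSS / (RTT * sqrt(p)).
# Calculators like the WAND one are typically built on something like this.
import math

def max_tcp_mbps(mss_bytes: int, rtt_ms: float, loss: float) -> float:
    """Approximate throughput ceiling in Mbit/s for packet loss rate
    `loss` (e.g. 0.001 = 0.1%) and round-trip time `rtt_ms`."""
    rate_bytes = mss_bytes / ((rtt_ms / 1000) * math.sqrt(loss))
    return rate_bytes * 8 / 1e6

# Illustrative NZ-to-US numbers: 1460-byte MSS, 150 ms RTT.
for loss in (0.00001, 0.0001, 0.001, 0.01):
    print(f"loss={loss:.3%}: <= {max_tcp_mbps(1460, 150, loss):6.1f} Mbit/s")
# Even 0.1% loss caps a single stream near 2.5 Mbit/s at this RTT, which is
# exactly the "minor loss severely degrades performance" effect above.
```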





mercutio (1392 posts, Uber Geek)
#831569 6-Jun-2013 13:06

Sounddude:
Yes, you are correct.

Cool little calculator for those who haven't seen it.

http://wand.net.nz/~perry/max_download.php




That WAND calculator seems a bit low; TCP ramps up more than that with packet loss in modern stacks, AFAIK.

Sounddude (I fix stuff!, 1928 posts, Uber Geek, Trusted, 2degrees, Lifetime subscriber)
#831583 6-Jun-2013 13:19

mercutio:
That WAND calculator seems a bit low; TCP ramps up more than that with packet loss in modern stacks, AFAIK.


Yeah, it's very old; it doesn't account for dynamic window sizing etc.





mercutio (1392 posts, Uber Geek)
#831584 6-Jun-2013 13:19

dwl: We need some clarity about what typical TCP stacks are being used and what "recent TCP stacks" means. Improved error correction has been around for many years; I believe it appeared with Windows XP, which is now starting to drop off as the most common OS. I don't believe the very latest stacks are essential, simply one that has window scaling and SACK (which most should).

I think many users could get high bandwidth if the path to the originating server had low enough loss. It isn't fair to blame only SCCN bottlenecks, as many other segments, outside NZ ISP control, also play a key role, but I don't think it is valid to suggest the limitation is necessarily the user or server TCP stacks.


The thing is that most of the TCP stack behaviour that matters is on the sender side, not the receiver side. So modern Linux being clever and also common on overseas hosts means you've got a reasonably good chance of a good TCP interaction as far as stacks go.

In my own testing, window sizes that are too high tend to decrease performance across common networks (like Cogent, Telstra Global etc.), most likely due to some kind of rate limiting of excessive burst traffic.

There are a few modern improvements in Linux that mostly help things like web hosting more than huge single-threaded downloads.

Linux basically changed from BIC congestion control to CUBIC. BIC is more aggressive and less fair; it shouldn't hurt performance so much as degrade other connections. CUBIC has been around for a very long time now, and it's expected that most Linux hosts are using it.

Then there's the initial window size increase, which has been around a while (on by default since Linux 3.0 IIRC, configurable from about 2.6.32), but isn't on older CentOS versions etc., so there's a mixture of sites using it and not using it. Windows Server lets you set it but doesn't enable it by default. It can basically save one round-trip time on small files, which mostly helps browsing (a 300 ms reduction to the UK is noticeable; a 20 ms reduction to NZ less so). (http://datatracker.ietf.org/doc/rfc6928/)

More recently, a few more things have been added. Linux 3.2 added Proportional Rate Reduction, which is less aggressive at reducing the rate and does it more gradually (which means there's more traffic in flight to be acknowledged, so speed recovers quicker). (http://www.ietf.org/proceedings/80/slides/tcpm-6.pdf)

Early Retransmit arrived in Linux 3.5 (http://tools.ietf.org/html/rfc5827) and was improved somewhat in, I think, Linux 3.9?

Anyway, that's just a few examples, but Linux is constantly improving its TCP/IP stack, and to me it looks a bit aggressive given how common Linux is, and these things haven't necessarily been tested long term. Years back Linux enabled ECN by default and lots of things broke, so it could hurt the internet as a whole, but that's how it is.
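
For anyone curious which of these features their own server has, most of the knobs are visible under /proc/sys. A minimal sketch (the sysctl paths are the standard ones, but which exist depends on kernel version):

```python
# Quick look at which of the TCP features above a Linux host has enabled.
# Sysctl paths are the standard ones, but availability varies by kernel
# version (e.g. tcp_early_retrans only exists on 3.5-era kernels).
import subprocess

SYSCTLS = {
    "congestion control": "/proc/sys/net/ipv4/tcp_congestion_control",
    "window scaling":     "/proc/sys/net/ipv4/tcp_window_scaling",
    "SACK":               "/proc/sys/net/ipv4/tcp_sack",
    "ECN":                "/proc/sys/net/ipv4/tcp_ecn",
    "early retransmit":   "/proc/sys/net/ipv4/tcp_early_retrans",
}

for name, path in SYSCTLS.items():
    try:
        with open(path) as f:
            print(f"{name}: {f.read().strip()}")
    except FileNotFoundError:
        print(f"{name}: not available on this kernel")

# The larger initial window (RFC 6928) is a per-route setting, not a sysctl;
# it shows up as "initcwnd" in `ip route show` when it has been changed.
print(subprocess.run(["ip", "route", "show"],
                     capture_output=True, text=True).stdout)
```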

mercutio (1392 posts, Uber Geek)
#831585 6-Jun-2013 13:22

Sounddude:
mercutio: That WAND calculator seems a bit low; TCP ramps up more than that with packet loss in modern stacks, AFAIK.

Yeah, it's very old; it doesn't account for dynamic window sizing etc.



Which reminds me: TCP CUBIC also uses latency between packet pairs, as well as loss, to judge available bandwidth.

So jitter is another thing, alongside loss, to keep in mind when chasing mind-blowing performance.
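
For reference, the "cubic" in the name is literally the window-growth curve used after a loss. A minimal sketch of that curve, using the usual constants (C = 0.4, beta = 0.7) and ignoring the TCP-friendly and slow-start/HyStart pieces:

```python
# Sketch of CUBIC congestion-window growth after a loss event:
#   W(t) = C * (t - K)^3 + W_max,  K = cbrt(W_max * (1 - beta) / C)
# so the window drops to beta * W_max at t = 0 and regains W_max at t = K.
# C = 0.4 and beta = 0.7 are the usual Linux/CUBIC constants.

C = 0.4      # scaling factor
BETA = 0.7   # multiplicative decrease factor

def cubic_window(t: float, w_max: float) -> float:
    """Congestion window (in packets) t seconds after a loss, where
    w_max was the window size when the loss occurred."""
    k = (w_max * (1 - BETA) / C) ** (1 / 3)
    return C * (t - k) ** 3 + w_max

w_max = 100.0  # packets in flight when the loss hit (illustrative)
for t in range(11):
    print(f"t={t:2d}s  cwnd={cubic_window(t, w_max):6.1f} packets")
# The window dips to 70, climbs back quickly, flattens out near 100
# (probing gently around the old maximum), then accelerates past it.
```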

mercutio (1392 posts, Uber Geek)
#831594 6-Jun-2013 13:41

Anyway, as much as great throughput is important, most user complaints come from appalling speeds when there's significant degradation, and some of these recent improvements will help that slightly. Personally I don't really care whether something goes 3 MB/sec or 6 MB/sec, but when something goes 80 kB/sec it's annoying.

And the UFB network should make it easier to share between multiple users without as much performance degradation. NZ speeds should also stay fast.

With web browsing, a lot of the time when things are slow there is significant packet loss or latency, and dropping even one packet when loading a web page can mean a delay of a round-trip time, which to Europe is significant. So if you request 50 images to see if they've changed and one of those requests is dropped, an extra delay of 3 seconds can happen (or 1 second if the initial retransmit timeout of 1 second, as in recent Linux, is used). On normal DSL connections, if users are downloading at full speed, especially with threaded download managers, a lot of these kinds of timeouts can happen; on a 100 megabit connection it's less likely.
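
To put rough numbers on the 50-images case, here's a quick sketch; the loss rates are illustrative assumptions, and each drop is charged a full retransmission timeout (the worst case for tiny requests):

```python
# Back-of-the-envelope: chance a page load with n requests hits at least
# one packet loss, and the expected extra delay. Loss rates here are
# illustrative assumptions, and each drop is charged a full retransmission
# timeout, which is the worst case for tiny requests (fast retransmit
# usually can't trigger when only a packet or two is in flight).

N_REQUESTS = 50   # e.g. 50 conditional image requests
RTO = 1.0         # initial retransmission timeout in seconds (recent Linux;
                  # older stacks commonly used 3.0)

for loss in (0.001, 0.005, 0.01, 0.05):
    p_any = 1 - (1 - loss) ** N_REQUESTS       # P(at least one drop)
    extra = N_REQUESTS * loss * RTO            # expected added timeout delay
    print(f"loss={loss:.1%}: P(>=1 drop)={p_any:5.1%}, "
          f"expected extra delay ~{extra:.2f}s")
```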

Also, on DSL, uploading to one site can completely ruin the experience for other users; UFB again improves that situation.

JohnButt (374 posts, Ultimate Geek, Trusted)
#831778 6-Jun-2013 18:28

chevrolux: I don't get why RBI has been lumped in with ADSL. Any ADSL cabinets being upgraded in the RBI project get ADSL2+/VDSL line cards, so they should be considered in the same group as the other ADSL probes.
And what is going on with Vodafone's Australia download times? Have they got a screwed-up route or something?


Some RBI connections are supplied using direct wireless, some via a cabinet with a wireless feed, hence I have kept them together.

Happy to consider otherwise; we don't go out of our way to get RBI probes, but they do occur and I think it is important to show them separately.

 
 
 

JohnButt (374 posts, Ultimate Geek, Trusted)
#831781 6-Jun-2013 18:33

Sounddude: Latency is always going to have an effect on download times.

It's not fair to say that SCCN is the issue. The issue is that we are so far away from the content.



I agree, but the reporter's assumption is equally relevant where the cause is completely unknown. Packet loss, insufficient capacity, shaping and a lot of other causes could be to blame, but that would be impossible for the reporter to explain.


JohnButt (374 posts, Ultimate Geek, Trusted)
#831792 6-Jun-2013 19:07

insane: Wow, Truenet are not doing their reputation any good with these pathetic tests, but mainstream media will undoubtedly love it. That's twice in a month now; remember those VoIP tests...

Ask any provider who purchases bandwidth from SCCN: you get exactly what you pay for, and they actually offer a damn good service.

And testing high-speed connections using 300-500 kB websites... OK, must be accurate, read it on the internet.


Huh! The Trademe page is just under 500 kB, so for comparison purposes our test page of 300 kB seems a pretty good size.

We are planning to review the size, however, to match the current weighted average page size; but Trademe has a lot of weight in that average, so it may not change a lot.

Beccara (1469 posts, Uber Geek, ID Verified)
#831794 6-Jun-2013 19:12

Whilst it's sound testing to compare X tech to Y tech, I feel the one thing missing is what the same tests would show on, say, a CIR connection inside a transit provider's core: somewhere the last mile doesn't exist and the bandwidth isn't shared.

It's all good to say tech/ISP X is better than Y, but if A through Z are all crappy compared to a CIR straight off SCCN, then it's much ado about nothing, really.




Most problems are the result of previous solutions...

All comments I make are my own personal opinion and do not in any way, shape or form reflect the views of current or former employers unless specifically stated.

JohnButt (374 posts, Ultimate Geek, Trusted)
#831814 6-Jun-2013 19:41

A missing comment here is the impact of shaping. When you look at our file downloads individually there appears to be a lot of shaping going on for Australia and the USA (not in this report, which is ONLY web pages; however, it would impact these results).

Unless someone has another interpretation, here are some facts from my VDSL probe:

WIX download to VDSL (the measure is each quartile, i.e. 4 measures; the first quartile includes ramp-up for a 1 MB file, so each quartile is just 250 kB):

[image missing]

One from Sydney (same file):

[image missing]

Then we have one from the US for the same file:

[image missing]

Shaping?

Finally, a 5 MB file from WIX, but this time in deciles, i.e. each spot represents 500 kB:

[image missing]
Locally, ramp-up is finished inside 250 kB, yet we see interesting effects overseas.

We had to change to a 1 MB file for 100 Mbps cable/fibre because the speed was causing timing errors on our routers; tiny differences make for greater percentage errors.

We test each 1 MB file every 5 hours on all probes that offer 3 GB/month or more for testing purposes; there will soon be 5 files (3 at present), making it a 1 MB file every hour. These results are from my own probe, but similar speed behaviour is common on UFB as well as other technologies, i.e. this may not directly be an ISP issue but their supplier's; 'tis the ISP's problem though.

If you are a volunteer, please update your cap so we can do more 1 MB testing. Even more testing uses a 5 MB file, but it is only used for the 5 GB-and-above caps, and then only 4 times a day.

Note: for those wishing we tested more, we have to operate within the data cap constraints created by NZ market pricing.

Whoops, stuffed up the image loading; hope this is OK and in the right order.

mercutio (1392 posts, Uber Geek)
#831850 6-Jun-2013 21:03

JohnButt, your images aren't showing up.

dwl (371 posts, Ultimate Geek)
#832087 7-Jun-2013 10:16

The images highlight poor results going international, but the speeds are a lot worse than others are getting in other Geekzone threads relating to either UFB or even ADSL. This doesn't say anything clear to me about whether SCCN is a bottleneck, as it could also be a TCP tuning issue.

On the web page load time concept: if we look at the Trademe home page from an external overseas test site such as Pingdom, GTmetrix or Neustar, they all show high load times of around 5 seconds for the 44 page elements (see the waterfall reports). The important point is that there are sequential transactions, each needing round-trip wait times, and sometimes the initial content is needed to point to the next links.

For these external browser simulations we can see groups of files waiting together, I guess in some cases due to concurrent-connection limits. Only once the earlier files finish will others start. As the round-trip time gets higher (e.g. 200 ms), these cycles get drawn out a lot (e.g. with, say, 9 phases, each needing at least 200 ms to connect, you already have nearly 2 seconds). In addition there are external links on many pages, which extends total page time.
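
A toy model of that phase effect, using the 9-phase / 200 ms figures from the example above (the phase counts and RTTs are illustrative, not measured):

```python
# Toy model: a page load as sequential "phases", each gated on a round trip
# before its objects can even start downloading. Phase count and RTTs are
# illustrative, matching the example above.

def page_load_floor(phases: int, rtt_s: float) -> float:
    """Lower bound on load time from round-trip waits alone."""
    return phases * rtt_s

for rtt_ms in (20, 50, 200):   # roughly: local, national, trans-Pacific
    t = page_load_floor(phases=9, rtt_s=rtt_ms / 1000)
    print(f"RTT {rtt_ms:3d} ms -> at least {t:.1f} s of pure waiting")
# At 200 ms the nine phases alone cost ~1.8 s before a single byte of
# page content is counted.
```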

I agree with Beccara that CIR services may give a different result; unless you change only one variable (e.g. SCCN via an ISP vs via a CIR service) it is not possible to blame a particular provider.

We have two separate issues: single-thread throughput vs page load times (with many elements). For a single thread, loss is the primary controller of TCP rates, with delay still important (noting that the graphic above may also be showing client/server limitations). Page load times will probably be more sensitive to delay at high round-trip times, when there are many page elements and concurrent-connection limits. Many of the files are tiny and never get much chance to ramp up speed, so I would expect an overall slower rate for the total page size with higher delay.

It is easy to simulate delay on the bench (it's built into the Linux kernel), so a starting point is: how fast does a page with many elements load when there is zero loss but high delay? No international network can improve on this.
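
The kernel facility in question is presumably netem, driven via tc; a minimal sketch (the interface name eth0, the 200 ms delay, and the 0.1% loss figure are assumptions for illustration, and it needs root):

```python
# Bench-delay sketch using the Linux kernel's netem qdisc via tc.
# Assumes an interface named "eth0" and root privileges; the 200 ms
# delay and 0.1% loss figures are illustrative.
import subprocess

def tc(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

# Add 200 ms of delay (and zero loss) to everything leaving eth0:
tc("tc qdisc add dev eth0 root netem delay 200ms")

# ... load the many-element test page here and measure it ...

# Loss can be layered on top for the non-ideal case:
tc("tc qdisc change dev eth0 root netem delay 200ms loss 0.1%")

# Restore normal behaviour afterwards:
tc("tc qdisc del dev eth0 root")
```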

I think we need to be cautious when drawing conclusions from the page load tests due to the added complexity (especially when you add in other ISP processing). A result of only 2.5 Mbps from Dallas also suggests other test problems, and I would not trust the US carriers to provide services any cleaner than what we have in NZ.

The slow RBI result, at roughly twice ADSL FS, may simply be due to the higher delay (what is typical?), which I guess will drop once 4G is rolled out...

Interesting points are being raised, and it would be great if we could get some metrics that highlight the limitations.


mercutio (1392 posts, Uber Geek)
#832150 7-Jun-2013 11:20

dwl: The images highlight poor results going international, but the speeds are a lot worse than others are getting in other Geekzone threads relating to either UFB or even ADSL. This doesn't say anything clear to me about whether SCCN is a bottleneck, as it could also be a TCP tuning issue.

On the web page load time concept: if we look at the Trademe home page from an external overseas test site such as Pingdom, GTmetrix or Neustar, they all show high load times of around 5 seconds for the 44 page elements (see the waterfall reports). The important point is that there are sequential transactions, each needing round-trip wait times, and sometimes the initial content is needed to point to the next links.

For these external browser simulations we can see groups of files waiting together, I guess in some cases due to concurrent-connection limits. Only once the earlier files finish will others start. As the round-trip time gets higher (e.g. 200 ms), these cycles get drawn out a lot (e.g. with, say, 9 phases, each needing at least 200 ms to connect, you already have nearly 2 seconds). In addition there are external links on many pages, which extends total page time.


Preconnecting etc. mitigates this a bit, but that just makes the TrueNet probe further from real-world performance.


I agree with Beccara that CIR services may give a different result; unless you change only one variable (e.g. SCCN via an ISP vs via a CIR service) it is not possible to blame a particular provider.


I tried doing a 5-second iperf to DSL, versus a virtual machine on the same network, from a VM in Dallas:

[ 4] 0.0- 1.0 sec 336 KBytes 2.76 Mbits/sec
[ 4] 1.0- 2.0 sec 1.08 MBytes 9.10 Mbits/sec
[ 4] 2.0- 3.0 sec 1.30 MBytes 10.9 Mbits/sec
[ 4] 3.0- 4.0 sec 904 KBytes 7.41 Mbits/sec
[ 4] 4.0- 5.0 sec 726 KBytes 5.95 Mbits/sec
[ 4] 0.0- 5.9 sec 5.00 MBytes 7.10 Mbits/sec



[ 4] 0.0- 1.0 sec 348 KBytes 2.85 Mbits/sec
[ 4] 1.0- 2.0 sec 3.50 MBytes 29.4 Mbits/sec
[ 4] 2.0- 3.0 sec 6.31 MBytes 53.0 Mbits/sec
[ 4] 3.0- 4.0 sec 4.75 MBytes 39.9 Mbits/sec
[ 4] 4.0- 5.0 sec 5.42 MBytes 45.4 Mbits/sec
[ 4] 0.0- 5.1 sec 20.8 MBytes 34.1 Mbits/sec

They're both experiencing the seesaw speed effects of packet loss, and for the first second the speeds are hardly different. That's where most interactive stuff happens; single page elements shouldn't really take more than a second.

I wouldn't necessarily call it shaping by the ISP. Let's try a virtual machine in another country: Amsterdam, 126 ms away. Not quite as much latency, but it should be close enough to get a fair idea.

[ 4] 0.0- 1.0 sec 344 KBytes 2.81 Mbits/sec
[ 4] 1.0- 2.0 sec 1.04 MBytes 8.76 Mbits/sec
[ 4] 2.0- 3.0 sec 2.49 MBytes 20.9 Mbits/sec
[ 4] 3.0- 4.0 sec 5.40 MBytes 45.3 Mbits/sec
[ 4] 4.0- 5.0 sec 8.42 MBytes 70.6 Mbits/sec
[ 4] 0.0- 5.1 sec 18.8 MBytes 30.8 Mbits/sec


And wait, it's completely different... the ramp-up is slower than to NZ. It's even slower than to DSL in NZ at the 2-second mark.

We have two separate issues: single-thread throughput vs page load times (with many elements). For a single thread, loss is the primary controller of TCP rates, with delay still important (noting that the graphic above may also be showing client/server limitations). Page load times will probably be more sensitive to delay at high round-trip times, when there are many page elements and concurrent-connection limits. Many of the files are tiny and never get much chance to ramp up speed, so I would expect an overall slower rate for the total page size with higher delay.


My iperf results show the not-having-time-to-ramp-up effect. At a guess, 500 kB shouldn't really vary too much by connection, but it needs finer granularity than 1 second:

Amsterdam:
[ 4] 0.0- 0.5 sec 148 KBytes 2.43 Mbits/sec
[ 4] 0.5- 1.0 sec 221 KBytes 3.62 Mbits/sec
[ 4] 1.0- 1.5 sec 356 KBytes 5.84 Mbits/sec
[ 4] 1.5- 2.0 sec 606 KBytes 9.93 Mbits/sec

NZ:

[ 4] 0.0- 0.5 sec 79.8 KBytes 1.31 Mbits/sec
[ 4] 0.5- 1.0 sec 224 KBytes 3.67 Mbits/sec
[ 4] 1.0- 1.5 sec 653 KBytes 10.7 Mbits/sec
[ 4] 1.5- 2.0 sec 1.19 MBytes 20.0 Mbits/sec


NZ DSL:

[ 4] 0.0- 0.5 sec 89.8 KBytes 1.47 Mbits/sec
[ 4] 0.5- 1.0 sec 265 KBytes 4.34 Mbits/sec
[ 4] 1.0- 1.5 sec 405 KBytes 6.63 Mbits/sec
[ 4] 1.5- 2.0 sec 664 KBytes 10.9 Mbits/sec


It is easy to simulate delay on the bench (it's built into the Linux kernel), so a starting point is: how fast does a page with many elements load when there is zero loss but high delay? No international network can improve on this.

I think it's a bad idea to assume there'll be zero loss on the modern internet. There is on-and-off loss frequently overseas; DDoS attacks are becoming more common etc., and it's better to have testing that accounts for some loss around the place. Network simulators like ns-3 (http://www.nsnam.org/) are commonly available.


I think we need to be cautious when drawing conclusions from the page load tests due to the added complexity (especially when you add in other ISP processing). A result of only 2.5 Mbps from Dallas also suggests other test problems, and I would not trust the US carriers to provide services any cleaner than what we have in NZ.

The slow RBI result, at roughly twice ADSL FS, may simply be due to the higher delay (what is typical?), which I guess will drop once 4G is rolled out...

Interesting points are being raised, and it would be great if we could get some metrics that highlight the limitations.



2.5 megabit/sec is low, but about similar to what I got within the first second.

That said, I also tested to Chicago and got 91.7 megabit/sec in the first second; my testing source is rate-limited to 100 megabit. The following seconds got 101, 101, 102, 98.4...

In a way, I think what parallel connections are like is more relevant.

A 2-second parallel test with 4 connections gives:
DSL:       [SUM] 0.0- 2.6 sec 4.12 MBytes 13.2 Mbits/sec
NZ:        [SUM] 0.0- 2.2 sec 11.1 MBytes 42.9 Mbits/sec
Amsterdam: [SUM] 0.0- 2.1 sec 14.5 MBytes 57.0 Mbits/sec

The DSL sync rate is 15 megabit, so 13.2 Mbit/sec isn't bad.

And although Amsterdam is faster than NZ, it also has lower latency: ~124 ms versus ~168 ms (DSL is ~180 ms). Scaling by the RTT ratio, 42.9 * (168.0/124) = 58.1 megabit, close to the 57.0 measured for Amsterdam, i.e. throughput here scales roughly with the inverse of RTT.


dwl (371 posts, Ultimate Geek)
#833159 9-Jun-2013 18:36

A lot of interesting data. The startup phase is the important bit, and the whole web page could probably fit into the first 0.5 sec were it not for the many separate object transactions needed. The exact time will depend on how the page is downloaded in terms of max simultaneous connections and whether there is persistent keep-alive (each connection open/close is a killer).

Looking at downloads of similar size and delay, the per-thread TCP initial window is only a few packets and only manages to rise to around 30 packets (44 kB) before the page object is fully loaded. The max instantaneous bandwidth seen (10 ms interval) was 35 Mbps, but averaged per second it was under 1 Mbps because of the waiting caused by the high round-trip time.
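
That is classic slow start doing its thing; a minimal sketch (assuming a 1460-byte MSS, a 3-packet initial window, cwnd doubling each RTT, and ignoring delayed ACKs):

```python
# Classic slow start: the congestion window starts tiny and doubles each
# round trip until the object is finished (or loss intervenes). Assumes a
# 1460-byte MSS, a 3-packet initial window, and no delayed ACKs; all
# figures are illustrative.

MSS = 1460            # bytes per packet
OBJECT_SIZE = 44_000  # bytes, ~30 packets as observed above
RTT = 0.2             # seconds, a trans-Pacific-ish round trip

cwnd, sent, rtts = 3, 0, 0
while sent * MSS < OBJECT_SIZE:
    sent += cwnd      # one window of packets goes out per round trip
    rtts += 1
    print(f"RTT {rtts}: window was {cwnd} packets, {sent * MSS / 1000:.1f} kB sent")
    cwnd *= 2         # exponential growth while in slow start

print(f"~{rtts} round trips = {rtts * RTT:.1f} s of waiting for 44 kB,")
print(f"i.e. roughly {OBJECT_SIZE * 8 / (rtts * RTT) / 1e6:.2f} Mbit/s averaged")
# which is why the per-second average sits under 1 Mbps even though the
# instantaneous rate inside a window burst is far higher.
```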

I believe the loss on SCCN is quite low, as we are getting reasonable download speeds. The old WAND calculator would suggest the packet loss needs to be much less than 0.1%, but I believe it is out of date (error recovery is better now); one guess is around 0.1% loss, but this can only be assessed by analysing the downloads for retransmissions.

For the web pages being mentioned, probably fewer than 500 packets are needed, so if we lose only one packet (0.1% loss) then I think the dominant effect of SCCN is probably just delay (due mostly to simple physics over the distance, with not much evidence of excess buffer delay) across the many transactions needed for a page download. Page load times being significantly slower than national seems to me more likely not to be due to any performance limitation of SCCN beyond simple path distance.
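
As a quick sanity check on that, using the ~500-packet and 0.1% figures above:

```python
# Sanity check on the ~500-packet page at 0.1% packet loss.
N_PACKETS = 500
LOSS = 0.001

expected_drops = N_PACKETS * LOSS        # 0.5 lost packets per page load
p_clean = (1 - LOSS) ** N_PACKETS        # chance of a completely loss-free load
print(f"expected drops per page load: {expected_drops:.1f}")
print(f"P(no loss at all): {p_clean:.1%}")   # ~60.6%
# The typical load sees zero or one drop, so the per-transaction round-trip
# waits, not retransmissions, dominate the page load time.
```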

Of course, downloading large files can be quite a different story, but that's rather unrelated to page load times...
