The install didn't go as smoothly as I would have liked. Chorus had no problems installing the second ONT, but the resulting connection was unusable. To help diagnose the problem, Bigpipe technical support temporarily moved me to a CGNAT connection, which did work. Later in the evening I received a bizarre e-mail from tech support, indicating they were doing network upgrades that might be causing issues with public IPs, and that they would get back to me with updates. That didn't sound promising, as the wording suggested it could be days or weeks before the problem was resolved. Fortunately, that turned out to be an overly pessimistic reading, and the connection was up and running with a public IP by around 9pm. BigPipeNZ has since posted on Geekzone indicating the issue was caused by a RADIUS configuration change yesterday morning, and affected only about a dozen customers.
While on CGNAT, my fibre connection only managed 200/20, not 200/200, and I was told "...the speeds take a while to get up to full speeds". Not sure how that works. Speeds are unchanged this morning, so I'll need to get that sorted. If Bigpipe performs acceptably and, critically, can deliver a static IP address by the end of the year (and hopefully subnets), I will stay on the service and disconnect my 30/10 fibre connection provided by HD.
In addition to dual fibre connections, I have tunnelled IPv6 provided via SixXS on the HD connection, so I have included that in the comparisons.
Connections:
HD - 30/10 (fibre)
Bigpipe - 200/? (fibre) - Seems to be 200/20, meant to be 200/200.
SixXS IPv6 - 30/10 (HD fibre)
Testing Method:
Everything at my end is as near to identical as possible for all connections. I am using a pfSense-based router with both WAN interfaces on a dual-port Ethernet adaptor (Intel controller), allowing me to have both Internet connections up concurrently. Requests over IPv4 are routed based on source IP, with multiple addresses assigned to the client system. Because pfSense rewrites source ports for outbound requests by default, uses a stateful firewall (pf), and the router is not constrained by memory or CPU, the impact of this arrangement is both negligible and equivalent on both IPv4 connections.
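From the client side, choosing a connection then comes down to binding to the right local address and letting the router's source-based routing do the rest. A minimal Python sketch of the idea (the addresses in the comment are placeholders, not my actual assignments):

```python
import socket

def connect_via(source_ip, host, port=80):
    """Open a TCP connection bound to a specific local (source) address.

    With source-based routing on the router, the source address chosen
    here determines which WAN the traffic leaves on; the client just
    holds one address per Internet connection and binds accordingly.
    """
    return socket.create_connection((host, port),
                                    source_address=(source_ip, 0))

# e.g. connect_via("192.168.1.10", "example.com")  # placeholder IPs
```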
IPv6 is terminated on the HD static IP, using a GIF interface, and the default SixXS MTU of 1280. The POP is located in Wellington, provided by ACS Data (nzwlg01).
For my initial tests, I compared downloads from a VPS in Los Angeles (Debian 7, 32-bit), which has native IPv6. Unfortunately, I have little control over the remote end, and while it has a gigabit connection, I am at the mercy of other users, so speed varies somewhat over time. While not ideal, I chose a sample size of 100 per connection, downloading in a round-robin fashion, to minimise the significance of this variability.
National downloads are to a VPS hosted at Vocus (FreeBSD 10, 64-bit), and unlike the VPS in Los Angeles, I have control over not only the VPS but the host as well. The connection is, however, only 100Mb.
The international downloads are a 10MB file, national downloads are 100MB files. I may increase the international files to 100MB for later testing, but as it's early in the month I do not wish to waste my HD quota if for any reason I need to switch back to HD as my main connection. For the same reason, I am also not running ongoing tests to compare variability throughout the day, but may attempt to do some testing of Bigpipe variability.
The script I used is nothing fancy, just a simple Windows batch file (ask if you want it). It is only accurate to seconds, which I feel is good enough for comparative purposes. I am using Wget to handle requests, and the --bind-address option to control which IPv4 connection is used (I have a specific AAAA for IPv6).
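For anyone more interested in the structure than the batch file itself, the round-robin timing loop looks roughly like this Python sketch (a stand-in for illustration, not my actual script; the `fetch` callable is assumed to shell out to Wget with the appropriate `--bind-address`):

```python
import time

def round_robin_timings(fetch, connections, samples=100):
    """Time `samples` downloads per connection, alternating connections
    on each pass so slow periods at the remote end hit every connection
    roughly equally. Durations are truncated to whole seconds, matching
    the batch file's accuracy.
    """
    results = {name: [] for name in connections}
    for _ in range(samples):
        for name in connections:
            start = time.time()
            fetch(name)  # e.g. run: wget --bind-address=<IP> <URL>
            results[name].append(int(time.time() - start))
    return results
```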
No TCP tuning has been done (that I am aware of) on either VPS or the Windows 8.1 client. There are settings that would have some influence on the results; investigating those is beyond the scope of this testing.
Results:
PM = Starting 2200 3/11 Los Angeles (Duration, approximately 4:20).
AM = Starting 0530 4/11 Los Angeles (Duration, approximately 2:07).
Bigpipe wins! While consistently faster than HD, there does appear to be more variability in speed during peak hours.
I'm careful not to read too much into the results. Subjective and sustained testing is required. It has been my experience that downloads via IPv6 are typically faster than downloads on HD via IPv4, and the results here are consistent with that. Being a different protocol, with a different implementation, tuning, and features, I don't think it's reasonable to assume this reflects poorly on the performance of IPv4 via HD.

Starting 0800 4/11 Auckland (No IPv6. Duration, approximately 1:13).
Bigpipe wins! Being local data, this is what we'd expect given the higher connection speed.

While the setup issues were disappointing, such issues are not unusual, and exactly why I opted to have a second ONT installed. Overall, things are looking quite positive with Bigpipe so far.


