I hope Russell Stanners had a good Xmas. My New Year's prediction is that it will be his last one as CEO of Vodafone New Zealand. I also pick 2015 as the year Vodafone could well exit the New Zealand market.
For many years Vodafone New Zealand was the Southern Hemisphere money tree. It was the jewel in the crown of the Newbury-based global empire – New Zealand was a far-off country where margins were good and the profits kept flowing back to the Northern Hemisphere most years. A blind eye was turned to the operation because it delivered results. Unfortunately that money tree now has a toxic illness. Vodafone Australia lost its way over the course of a few years and has been haemorrhaging money with no light at the end of the tunnel, and Vodafone New Zealand is now following it into the same tunnel. Before Grahame Maher was sadly taken from this earth way too early he set up both companies on their respective paths to success. He must now be turning in his grave to see what has become of them.
In 2012 Vodafone New Zealand purchased TelstraClear for $880 million, a deal that made sense to many but started ringing alarm bells for others. Telstra had been very frugal in the New Zealand market, with traditionally low capital expenditure over a number of years, even when the New Zealand operation was clearly in need of it. Inside Telstra many looked at the New Zealand regulatory environment and saw that making inroads was going to be difficult, and that generating a return on large capital investment would probably not occur. Even their plans to build a mobile network were scuttled; the plug was pulled part way through construction of the network in Tauranga.
It’s safe to say many at Vodafone thought the purchase of TelstraClear was a bargain. They inherited a massive nationwide fibre network that would reduce their reliance on Spark and Chorus, a huge residential customer base (TelstraClear had more residential broadband customers than Vodafone) and a relatively large number of corporate customers. Somehow throughout the due diligence process, Vodafone seemed to miss the duct tape that held TelstraClear together. To some, TelstraClear was a shambolic mess of different technologies, multiple CRM systems, and products and services that simply didn’t deliver or work the way they were supposed to. Vodafone did however inherit some fantastic technology – the cable network in Wellington and Christchurch, and the IPTV playout system that now forms the basis of the UFB Vodafone TV offering. They could have done a lot more with this technology, but have chosen not to. They could have done a lot with their Vodafone TV set top box (aka the former T-Box) but chose not to as well. Making the guy who knew all about this redundant during the last round of restructuring before Xmas probably wasn’t the smartest move either.
In many ways however they were too slow at achieving the synergies that should have resulted from the merger, and failed to predict the cut-throat competition that was about to hit the broadband market in New Zealand. The mass introduction of unlimited plans and price cuts in 2013 and 2014 has seen retail margins plummet, with many plans delivering only very small margins. Orcon famously said in late 2013 that it took 27 months to make a profit from a customer signed up to a residential DSL based broadband connection.
Buying TelstraClear has certainly burdened Vodafone financially - October 2014 saw Vodafone New Zealand announce a $27.9 million loss – its first in 13 years. In the good old days before the TelstraClear merger, in 2011 it made a $151 million profit and sent $130 million back to Newbury. Cost cutting and large scale redundancies have occurred within the company in the past year, and customer service now seems to be at an all-time low. Posts on social media suggest average wait times when calling the call centre are often an hour or more, with many people talking about giving up after numerous calls with long wait times.
There have been plenty of rumours over the years about Vodafone pulling out of New Zealand, and plenty of mainstream media outlets have been sucked in by these rumours and written speculative stories. These were all laughable as there was never a reason for Vodafone to want to pull out, but times have now changed. Vodafone sold its Fiji operation in mid 2014, and with Australia showing no sign of a turnaround and New Zealand now facing tough times, some market analysts in the UK are now calling for Vodafone to consider its position down under. Vodafone Group CEO Vittorio Colao told investors in November that the company would consider selling its Australian operation. Logic would dictate that there would be little sense in Vodafone staying solely in New Zealand should it sell in Australia. There is however one major stumbling block – finding a buyer for one (or both) may prove incredibly challenging.
If you’re a Vodafone shareholder you’ll be happy to know Vodafone’s response to its financial struggles has been to announce an across-the-board price increase for most fixed line broadband and phone customers in New Zealand.
As you may have seen recently reported in the media, there have been changes in the industry to the costs of delivering broadband and home phone services for all providers in New Zealand. These cost changes affect the conditions that all providers operate under and unfortunately our prices will need to change to reflect these new conditions.
From 1 February 2015, most monthly fees for our broadband and home phone plans will increase by $4 per month.
People will know that over the past couple of years the price of wholesale access to the Chorus copper network and wholesale broadband services has been under review by the Commerce Commission. As of 1 December 2014 the wholesale cost of these services was cut, but it remains under review by the Commerce Commission, which has since recommended that the cut not be as great as it first proposed. Regardless of the final outcome, pricing will still be cheaper than it was prior to 1 December 2014. In the race to the bottom many ISPs claim they put pricing in the marketplace that factored in greater discounts, and that the change will mean prices have to go back up. Spark has already announced price increases across some of its products, but like Vodafone some of these changes are of a very dubious nature, including combinations of services that have decreased, not increased, in price.
The price of UFB services, and services delivered over their own cable network in Wellington, Christchurch and Kapiti, is not affected by the Chorus copper price changes, yet Vodafone is increasing these prices too. Adding $4 per month to these plans and increasing data overuse charges is a pure profit-making decision, and to even subtly infer that this increased cost is related to “industry changes” is quite frankly the biggest (excuse my English) load of bullshit I’ve ever read in a press release. If you’re a Vodafone shareholder it’s probably great news. For Vodafone customers it’s anything but.
If you’re a Vodafone customer wanting to see what your pricing will increase to, you can view it here.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
PLEASE NOTE: Unless you have a very good reason for wanting to move away from the hardware your ISP supplies, you should always use it. Using non-ISP-supplied hardware breaks the terms & conditions of some ISPs and I am not responsible if they come chasing after you. You should never expect to receive any support at all from your ISP if you are planning to use non-approved hardware. I will not provide support or help if you can’t get this working – I suggest you post in the Geekzone Forums if you need help and somebody may be able to help you.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Here in New Zealand the number of UFB connections is currently increasing rapidly as the network rollout focus moves from high priority schools and business users towards residential users. While many people signing up for UFB are happy to use the router or residential gateway (RGW) supplied by their ISP, some may want to use their own hardware. There are a few obstacles to overcome to do this which I’ll explain below.
Most ISPs by default require an 802.1Q VLAN tag of 10 to be set on the WAN interface of your router. The vast majority of Ethernet routers on the market do not support setting a VLAN on the WAN port, but this is changing quickly as vendors realise this has become the default standard on fibre networks around the world. In the fibre world this is known as a tagged UNI port.
So why does a VLAN have to be set?
To understand that requires a basic understanding of networking. Traffic over your UFB connection is split into two categories – low priority and high priority. The 30Mbps, 50Mbps, 100Mbps or 200Mbps headline speeds available on current UFB connections are known as an Excess Information Rate (EIR) and fall into the low priority category. This speed is best effort, with absolutely no guarantee of performance or throughput. There is certainly no guarantee this headline speed will be available 24/7, and a user should not have an expectation that this will be the case.
Your UFB connection also has a Committed Information Rate (CIR) component which falls into the high priority category. The CIR value ranges from 2.5Mbps to 10Mbps on most plans and is guaranteed bandwidth for both upstream and downstream (which may have different CIR figures in each direction). You should expect to be able to obtain this guaranteed bandwidth 24/7 between your router and your ISP.
The catch with the CIR is that it’s only accessible with the correct 802.1p tag on your traffic. The 802.1p tag is a value between 0 and 7 inside the 802.1Q section of an Ethernet header that specifies the priority of individual packets. By default all Ethernet traffic will typically have an 802.1p value of 0 and will be placed in the low priority EIR queue. To access the CIR component of your connection you need to tag traffic with an 802.1p value of 4 or 5 (depending on your connection type) on a UFB connection here in New Zealand.
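For the curious, the bit layout of that tag is easy to see. Here’s a small Python sketch (nothing UFB-specific beyond the example values – just the standard IEEE 802.1Q field layout) showing how a priority (802.1p/PCP) value and a VLAN ID pack into the 16-bit Tag Control Information field:

```python
def build_tci(pcp: int, vid: int, dei: int = 0) -> int:
    """Pack 802.1Q Tag Control Information (TCI):
    3-bit PCP (the 802.1p priority), 1-bit DEI, 12-bit VLAN ID."""
    if not (0 <= pcp <= 7 and 0 <= vid <= 4095 and dei in (0, 1)):
        raise ValueError("field out of range")
    return (pcp << 13) | (dei << 12) | vid

# Default traffic on VLAN 10 (802.1p = 0) lands in the low priority EIR queue
print(f"{build_tci(0, 10):016b}")  # 0000000000001010
# Traffic tagged with 802.1p = 5 on VLAN 10 is eligible for the CIR queue
print(f"{build_tci(5, 10):016b}")  # 1010000000001010
```

The priority lives in the top three bits of the tag, which is why a router that can’t write an 802.1Q header has no way to express a priority at all.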
So what use is the CIR? The High Priority CIR component is especially suited to voice or video applications where guaranteed bandwidth and low latency is important. If your ISP offers VoIP services they are most likely using this CIR component to guarantee the quality of their VoIP service as traffic in the low priority and high priority queues have different network performance targets for common network measurements such as jitter and packet loss. If you’re using your own router with VoIP it’s best practice to create QoS or firewall rules to tag voice traffic to use the CIR. As usual with any CIR you need to ensure that you have local policies in place to manage this bandwidth to handle traffic that may be generated in excess of the CIR.
It’s worth mentioning now that Chorus, along with the other Local Fibre Companies (LFCs) responsible for the UFB rollout, supports untagged UNI ports, and this is something that some ISPs do offer. An untagged UNI port means there is no requirement for a VLAN 10 tag, but it also means you will have no high priority CIR component on your connection, as an 802.1p tag can only be set inside an 802.1Q VLAN header.
So what solutions are there for somebody wanting to use a device that doesn’t support VLAN tagging? There are two that are simple – a switch capable of VLAN tagging that you can use to add the VLAN 10 tag to your traffic, or a Mikrotik Routerboard which can also do the same thing. I’ll describe how to do this with a Mikrotik Routerboard.
With either approach you will be unable to set any 802.1p tagging in your router, as traffic leaving your router will not have an 802.1Q header. If you are using a Mikrotik it is possible to create mangle firewall rules inside the Mikrotik to set the priority of traffic inside the bridge, but this is outside the scope of this guide.
Something such as a Mikrotik RB750 makes the perfect device to tag your traffic. While any Mikrotik device with multiple Ethernet ports can be used, the RB750 is a nice low cost device that will achieve this. One thing to note is that the RB750 only supports 10/100 Fast Ethernet ports; if you have a UFB connection with a faster speed you’ll need something such as an RB750GL that has Gigabit Ethernet ports.
The basic principle of this setup is to create a VLAN10 tag on an interface, and create a bridge to bridge together VLAN10 with another Ethernet port that you can plug your router into. The example below will create VLAN10 on Ethernet port 1, and bridge this to Ethernet port 2. You would then run a cable from Ethernet port 1 to your ONT, and plug your router into Ethernet port 2.
There are multiple ways to log into a Mikrotik router (SSH, telnet, Winbox or web browser) so I’ll leave that option up to the end user. This is not a guide to using Mikrotik hardware or RouterOS (which does have a steep learning curve) so please don’t ask me questions on this.
Once logged in ensure you delete all existing configuration in the device and either add an IP address to a port you will not be using, or use Winbox MAC address discovery to log into the Mikrotik.
From the terminal enter the following commands:
/interface vlan
add interface=ether1 name=vlan10 vlan-id=10
/interface bridge
add name=UFB_Bridge
/interface bridge port
add bridge=UFB_Bridge interface=vlan10
add bridge=UFB_Bridge interface=ether2
Or if you want to create this from Winbox via a GUI the following screenshots will help
1) Add a VLAN with a VLAN ID of 10 to the interface you wish to use as your WAN port (in this case I’ve used ether1)
2) Create a Bridge – you can call this whatever you like.
3) Add VLAN10 and the Ethernet port you wish to plug your router into to the Bridge
You should now connect an Ethernet cable from Ether1 (or the port you selected) of your Mikrotik device to your ONT, and plug your router into Ether2 (or the port you selected). Assuming your router is configured with the correct PPPoE or DHCP settings for your ISP, you should now be connected. Some ISPs may tie DHCP leases to a specific MAC address in which case you’ll need to clone the MAC address of your ISP supplied router into your router.
On Radio NZ National this morning Kathryn Ryan interviewed Roger Heale, the Executive Director of the New Zealand Taxi Federation. It’s well known the Taxi Federation has a strong dislike for Uber, and it has been spreading plenty of FUD about Uber since their launch in New Zealand. It wasn’t long ago that they were telling drivers who belong to the Taxi Federation that Uber was operating illegally in NZ and that its drivers were unlicensed - both of which are untrue.
You can listen to this interview here http://www.radionz.co.nz/national/programmes/ninetonoon/audio/20155699/nz-taxi-drivers-call-for-level-playing-field-with-uber
Heale claimed that Uber does not charge GST and that this explains an instant 15% difference in the rates. The problem is that once again the Taxi Federation has been caught telling blatant lies. Uber does charge GST.
Here’s a copy of a recent Uber invoice from a ride in Auckland.
A few weeks ago I wrote a blog post asking why taxis in NZ aren’t interested in competing with Uber (which you can read here). If the Taxi Federation stopped spinning lies and FUD and focussed on bringing member companies and drivers together to deliver the products and service that consumers want, rather than delivering in some cases sub-par experiences and hefty electronic card surcharges, they’d probably find themselves in a much better place right now. I’ve got no time for companies and organisations that focus their entire resources on trying to spread negative publicity about their competitors rather than trying to make their own product better.
If you’re an Air New Zealand Airpoints member you’ll have an Airpoints card that’s branded OneSmart. The card can be used as a regular Prepaid Mastercard and is powered by the Rêv global wallet solution backed by Mastercard and supplied to Air New Zealand by BNZ. You can top up your card with multiple currencies and use this card while overseas for regular credit card transactions and ATM withdrawals with lower fees than you may pay for using a regular ATM card or credit card for cash withdrawals.
I noticed this thread on Geekzone last night discussing a new promotion offering $50 free Airpoints Dollars, simply by spending NZ$300 on your OneSmart card by the end of February 2015.
You can sign up for this promotion on the OneSmart site - http://www.airnewzealand.co.nz/onesmart-register
While I don’t find the exchange rates on this card great for international travel (the conversion rates are worse than you’ll get with a regular credit card), it’s great for booking Air New Zealand airfares as the regular credit card booking fees are waived if you use this card as payment. This promotion is a great one however, and is a very easy way to score a free $50!
As a VoIP guy I’d heard of Obihai Technology and seen their products online but had never had the opportunity to sit down and play with any of their range of products. While at ITExpo in Las Vegas in August I met Sherman Scholten, VP of Sales & Marketing for Obihai, and had an opportunity to look at their range of ATAs and new IP Phones and excitedly left with an OBi 200 ATA to have a play with when I got home. What’s an ATA? It’s an Analogue Telephone Adapter, a device to connect a regular analogue telephone to a VoIP provider for making and receiving calls.
Obihai Technology was formed by Jan Fandrianto and Sam Sin, both of whom are true pioneers in the VoIP space. What’s their background you might ask? Together they built the first mass market ATAs in the 90s when VoIP was in its infancy, and sold their company to Cisco who launched these as the Cisco ATA-186 and ATA-188 ATAs. Both then left Cisco and formed Sipura Technology, starting again building new voice products, and launching new Sipura branded products including the SPA2000 and SPA3000 ATAs onto the market. Like a story repeating itself, they once again sold their company to Cisco where they stayed, designing the Linksys/Cisco branded SPA2102, SPA3102 and SPA921/2 and SPA941/2 and SPA500 series IP phones which pretty much set the benchmark in the VoIP space for many years. Both left Cisco in 2010 and formed Obihai, launching their first products in the market not long after.
So what makes the Obihai products so different from others? The answer is simple – lots!
Like any ATA the basic functionality of plugging in an analogue phone and configuring a VoIP provider is there. Unlike many other products which only allow a single VoIP provider, the OBi 200 allows up to 4 individual VoIP provider accounts to be configured, all of which can be used for inbound and outbound calling. Each account can be accessed with a prefix for outbound calling, or can be integrated into the dial plan to make this fully seamless. All Obihai products also feature Google approved Google Voice support, meaning if you have a Google Voice account this device can be fully integrated for both inbound and outbound calling. Each OBi device also comes with its own unique OBiTALK number, which allows free calling between OBi devices without requiring an account with a VoIP provider.
The OBi 200 features a single USB port on the back which can be used for the OBiWiFi (WiFi), OBiBT (Bluetooth) or OBiLINE adapters. The WiFi adapter lets the OBi 200 connect to the internet over your WiFi network, the Bluetooth adapter lets you pair your Bluetooth capable mobile phone with the ATA so you can make and receive calls through it, and the OBiLINE adapter allows you to connect a regular analogue POTS line to the OBi device for inbound and outbound calling. Obihai have a number of different models, each with slightly different features and ports.
OBi 200 featuring the OBiWiFi adapter
If you thought having Bluetooth capabilities was great then the OBiON app for both Android and Apple iOS allows you to make calls from the app on your mobile phone and route these via your OBi device from anywhere in the world. I could stop there, but I won’t, because I haven’t yet covered the cool stuff!
The OBi 200 web interface is very extensive, allowing every aspect of the device to be configured. For somebody like myself who’s used Linksys/Cisco SPA hardware extensively, I felt at home. With a huge number of advanced SIP configuration options available this device should not suffer the incompatibility issues which can still occur in the SIP world. If you’re just configuring a single device, then the web interface may be sufficient – if you’ve got multiple devices then the remote provisioning capabilities really make the Obihai products stand out. Full XML based provisioning with support for DHCP options 66, 150, 159, 160 and 161 is included, but it’s the OBiTALK portal that really sets these products apart from the competition.
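As an illustration of what the DHCP side of that might look like, here’s a minimal sketch for a dnsmasq DHCP server (an assumption on my part – the server address and path below are placeholders, and you’d point them at wherever your XML configuration files are actually hosted):

```
# /etc/dnsmasq.conf - advertise a provisioning server to devices on the LAN.
# Option 66 carries a provisioning server address; 160 is one of the
# additional options the OBi devices will check for a configuration URL.
dhcp-option=66,"192.168.1.10"
dhcp-option=160,"http://192.168.1.10/obi-config/"
```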
So what exactly is the OBiTALK portal? It’s a cloud based portal that allows you to configure one or more Obihai devices from the web, and it is available for both individuals and VoIP service providers. Traditionally to manage multiple devices you’ve had to either ensure you had access to the network via a port forward (which introduces security issues) or a VPN, or create your own server to manage provisioning files for each device. The OBiTALK portal allows you to configure the common account settings of the device and makes managing devices at remote locations far easier. Changes made in the OBiTALK portal are immediately pushed to the OBi device. More advanced settings such as dial tones and network settings can’t be configured from the portal, and need to be configured either via the web interface or using an XML based configuration file.
Having a cloud based portal poses the question of what should happen if Obihai were to go bust – would the device still work? Moving configuration to the cloud is something many companies are doing, and it’s something that does need to be considered whenever you’re purchasing a device that has this sort of functionality. The good news with Obihai is that if by some chance this did happen you’d still have full control over the ATA and be able to fully configure all the settings locally or via XML provisioning, but would probably lose the ability to call directly between devices, and possibly the Google Voice functionality depending on exactly how this is integrated.
In terms of other technical features the OBi 200 supports the G.711a, G.711u, G.729, G.726-32 and iLBC voice codecs along with T.38 support for fax over VoIP. It also supports setting idle and connect polarity to forward or reverse, which will ensure accurate call connection and disconnection if the device is being used with a PBX.
So what is there not to like about the product? The simple answer is not a lot. One minor annoyance was the advertising and OBi Amazon Affiliate links on the OBiTALK page, but that’s not something that affects the performance of the device in any way.
It would be nice to be able to configure more features of the device from the OBiTALK portal, but once again this isn’t a limitation, merely something that would be nice to have. EDIT: There is an Expert Mode setting within the OBiTALK portal that gives access to every setting in the device, which is something I missed when initially playing with the device. By default only basic settings are shown. With its current feature set, Obihai really have set the benchmark for an ATA.
*Obihai products are not currently distributed in NZ, but are available in Australia or from Obihai directly on Amazon. If ordering from Amazon you’ll get a 12V multi-voltage plug pack, but with US pins, so you will need to source a 1A 12VDC plug pack locally, or use an adapter.
Unless you’ve been living on another planet for the last 18 months, you’ll know that the price of copper based Internet access in New Zealand has been a hot political issue. As part of the separation of Telecom into a retail company (Telecom, now Spark) and an infrastructure provider (Chorus), the Commerce Commission indicated that the cost of copper based wholesale broadband services in New Zealand delivered using the Unbundled Bitstream Access (UBA) product offering would move from a regulated price based off a retail minus calculation to one based off a cost plus model. UBA is the regulated wholesale product used to deliver most ADSL, ADSL2+ and VDSL2 based copper broadband connections in New Zealand.
The existing UBA wholesale cost was historically set by the Commerce Commission by looking at the retail price of internet access in New Zealand and deducting a percentage margin. It was a very flawed methodology, one that arguably resulted in New Zealanders paying too much for broadband access for a number of years and restricted the growth of higher data caps. The move to a cost plus model required the Commerce Commission to establish what it believed was a fair price for providing a wholesale UBA service.
As of 1 December 2014 some very significant changes occur to regulated wholesale copper based broadband pricing as a result of the prices set by the Commerce Commission. These changes have resulted in the averaging of prices for rental of copper lines between the exchange or cabinet and premises in both rural and urban areas, with an increase in urban areas to effectively subsidise rural users. It’s also meant a new lower price for the UBA component, with a price set by the Commerce Commission based solely off two other countries in the world – Denmark and Sweden. (I have my own views on their methodology and its flaws, but discussing these isn’t the point of this post!)
**Note that the following prices are the wholesale price paid by your ISP or telecommunications provider and are regulated prices set by the Commerce Commission. They exclude GST, tail services, and all other costs that an ISP will have such as bandwidth, staff costs, marketing etc, all of which need to be added to these to set a retail price.**
The wholesale cost of a standalone POTS line (a regular phone line) remains at NZ$41.50 per month
The wholesale cost of a naked UBA DSL connection (naked ADSL, ADSL2+ or VDSL2) drops from $44.98 to $34.44 per month
The wholesale cost of the POTS phone service when combined with UBA (when your phone line and broadband are with the same provider) drops from $41.50 to $17.98 per month. With POTS phone and broadband from the same provider the total cost to the provider is the UBA price of $34.44 plus the POTS cost of $17.98, or $52.42 per month.
The wholesale cost of the clothed UBA product (when UBA is combined with an existing POTS phone service from a different provider) increases from $21.46 to $34.44 per month. If you have phone and broadband from different providers the total cost is $41.50 per month to the POTS phone provider, and $34.44 per month to the UBA broadband provider.
One significant shift is that the “primary” product being delivered over a copper line is now deemed to be broadband, not voice, meaning that UBA is now deemed as the primary service. Regardless of whether you have a phone line or naked copper broadband, the access cost of the copper line has to be built in to the price. This means that if you have broadband and a phone, that the cost of the copper line will now be built into the UBA price, not the voice price, hence UBA costs are now the same regardless of whether you have a phone or not.
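To make the combinations above concrete, here’s a quick sketch (plain Python, using only the regulated figures quoted above, all excluding GST) of what each setup costs at the wholesale level from 1 December 2014:

```python
# Regulated monthly wholesale prices (NZ$, excl. GST) from 1 December 2014
STANDALONE_POTS = 41.50   # phone line only, or phone from a different provider
UBA = 34.44               # broadband component (same price, naked or clothed)
BUNDLED_POTS = 17.98      # phone when supplied with UBA by the same provider

naked_broadband = UBA
same_provider = UBA + BUNDLED_POTS          # phone + broadband, one provider
split_providers = STANDALONE_POTS + UBA     # phone + broadband, two providers

print(f"Naked UBA:                       ${naked_broadband:.2f}")
print(f"Phone + UBA, same provider:      ${same_provider:.2f}")
print(f"Phone + UBA, split providers:    ${split_providers:.2f}")
print(f"Wholesale penalty for splitting: ${split_providers - same_provider:.2f}")
```

Note the roughly $13 increase mentioned later for split customers is the jump in the clothed UBA price itself ($34.44 less the old $21.46), which is a separate figure from the split-versus-bundled gap shown here.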
What do these changes mean for me? That’s going to depend on your individual setup.
If you’re a residential customer with a copper home phone line and broadband with the same provider, or you have a naked broadband offering, it’s likely that nothing will change. The wholesale cost of your connection will drop, but due to the highly competitive nature of the New Zealand broadband market and incredibly thin margins it’s unlikely that there will be any significant savings passed on to customers. As this new pricing has been known about since early in the year many providers have already said publicly that they’ve factored these reductions into current pricing.
If you’re a residential customer who has a copper home phone line and broadband with different providers, you’re going to be impacted. There are a lot of residential customers in the country who have their phone line with Spark and their broadband with another provider. As explained above, the UBA product will now be priced the same whether you have a phone line or not; however, if you have a phone line and UBA with the same provider, the voice service is provided at a discounted price. If you have these with different providers the cost of the UBA service will increase by approximately $13 per month.
A number of providers including Slingshot and Orcon (both owned by CallPlus) have contacted customers in recent days advising that as of 1 December you will no longer be able to have your copper phone line and broadband with different providers. While this scenario is still technically possible, the cost of your broadband service would have to increase by at least $13 for them to recover the increased wholesale costs. Both Slingshot and Orcon have made the decision not to pass on a price increase to customers, but instead to either move the customer’s phone line under their billing control, or let the customer move their broadband to their existing phone line provider, which in most instances will be Spark.
I’m sure in coming weeks we’ll see a lot more providers start to contact their customers and advise of these upcoming changes and present the options to them. There will clearly be many people unhappy about these changes, but you need to remember that these changes (and any possible price increases) are nothing to do with your provider – they are simply making changes as a result of the Commerce Commission pricing changes. As $13 significantly exceeds the profit margins that an ISP would typically make on a customer, it’s unlikely that any would want to simply absorb this increase.
Anybody who follows my blog will have spotted a number of hotel and airline reviews that have crept in during the last year. While I love tech, I love travel even more. I love writing about things and getting to interact with people with similar interests, and get a lot of emails from people who have read my flight or hotel reviews asking me questions that the airline or hotel website may not have answers to.
As I sat down to write this post I stopped to think how many flights I’ve taken in the past year (somewhere around 45 it seems), and how many nights I’ve stayed in hotels (somewhere around 50). As I’ve got older my travel habits have changed - it’s safe to say the luxuries of flying business class and staying in nice hotels is something I really enjoy, but I’m still a backpacker at heart. I’ve met some great friends while staying at backpackers, and have stayed at a number in the past year. I’m just as happy staying at a backpackers as I am a 5* hotel, and have very different expectations for both when it comes to the product on offer, and the level of service.
I’ve stayed at plenty of Auckland hotels in recent years and have found some pretty average offerings from Accor. I stayed at the Ibis Styles with a friend a year or so ago and we were both left almost in shock at the state of the rooms. Yes they’re very compact, and the beds weren’t great, but that’s OK. When you’re paying $79 for an inner city room from Accor’s low cost budget brand you have to expect that sort of thing and my expectations were set to that level. What you don’t expect is a bathroom ceiling completely covered in mould, a rotting door, and a handle that’s about to fall off. If you want sleep it’s not the best place to stay either – when reception have earplugs on the counter to combat the noise from the nearby bar you realise what a big issue it is.
On another visit I stayed at the Pullman, which Accor pitch at the high end of the market, and had a room with a lot of dust behind the door and a buffet breakfast experience that was absolutely dreadful. A lack of food was put down to “we’re busy”, while staff seemed to be walking around with no real idea of what they were supposed to actually be doing. A complaint at check-out resulted in an offer of a 2-for-1 breakfast the next time I stayed there. I’m sorry Accor but that’s not a satisfactory solution to a problem – expecting a customer to pay full price for a bad experience on the promise it’ll be “better next time” doesn’t fit well in my world, because there probably won’t be a next time. In a competitive marketplace such as hotels I doubt I’ll ever be back.
So all of this leads me to my stay at the Mercure Auckland. I’ve stayed there in the past and my expectations were of a fairly stock standard middle of the road hotel. The hotel is located on Customs St East, right next door to Britomart – Auckland’s downtown train station and transport hub. Buses to the airport and trains are literally 2 minutes walk away. The hotel features 13 floors, with the top floor home to a bar and restaurant that has great views over Auckland harbour.
I checked into the hotel at around 2pm on a Saturday. The front desk staff were super friendly and efficient, and all were immaculately presented.
The 11th floor room was fairly standard in terms of layout and size. It featured good views of the Sky Tower and a good sized bathroom with a shower over the bath.
Unfortunately those are about the only nice things I have to say about my stay. It really rates as one of the worst hotels I’ve ever stayed in for the price, anywhere in the world. I’ve stayed in better backpackers in Auckland for a mere fraction of the price the Mercure charge for a room.
After arriving in the room I went to brush my teeth and found that after turning on the tap the water stopped. 15 minutes later it still wasn’t back, but hey, plumbing problems occur so that’s not something I can really fault the hotel on too much. The plumbing problems were the least of the issues however – this hotel is in such bad need of a refurbishment that I can really only describe it as a disgrace. As I mentioned earlier, I have very different expectations depending on where I’m staying, and believe my expectations are very fair. I don’t expect to visit a middle of the range hotel that I’ve paid $135 per night for and see something I’d expect in a $35 per night backpackers. That’s the exact scenario I walked into.
The state of the interior of the room was terrible. Clearly the room had been given a green feature wall at some point – it’s just a shame they never took the lamp shade off the wall to paint under it. It looked so tacky I was lost for words. All other walls showed significant damage and scratches.
The bed itself was also terrible and was certainly due for replacement. The sheets however were nice, and in line with most hotels now there was no unhygienic bedspread.
I thought I’d check out the TV. One of my pet hates in any hotel is incorrectly configured aspect ratios – where 16:9 video is centre cut down to 4:3 and then stretched back out to fill the 16:9 screen, resulting in the picture being chopped off on the left and the right. While some channels were fine, TV2 and Maori Television weren’t and exhibited this problem.
But at least the TV did work – it doesn’t look like whoever installed it knew how to crimp F connectors.
And while we’re on cables – I wonder what the story is with this mess that’s hanging under the desk? Not to mention the chair which looks like it’s seen better days.
Could the bathroom be better? Nope. The front panel was falling off.
The door was rotten from the water.
The silicone around the bath was looking pretty tired.
And there was a great view of the dust while having a shower.
The hotel provides free WiFi in the lobby, limited to 30 mins or 50 Megabytes per 24 hours. Judging by the speeds on offer you’re going to struggle to get even a basic webpage to load.
But these speeds are also going to be impacted by the WiFi equipment on offer – a D-Link AP with a –77dBm signal strength. This is very poor; best practice when installing WiFi is to aim for -70dBm or better.
There is no WiFi coverage in rooms, instead access is via Ethernet cable. Usage is charged at 68c per minute up to $14.90 for 2 hours or 100MB, $29.90 per 24 hours or 200MB, or $115 per 7 days or 1GB. Usage in excess of this is charged at 10c per MB, or you can opt to have your connection speed shaped to continue using it at no charge.
The check-out process was quick and efficient but (un)luckily for them they never asked me how my stay was.
Writing this feels like a bit of a rant, which is something I don’t aim for on my blog, but this would have to rank as one of the most disappointing hotel experiences I’ve ever had. If the Mercure were charging $35 for a room I wouldn’t be writing this, but when they’re charging $135 for a very sub standard product, others deserve to be warned about what they’re getting themselves into when booking a room. My advice is to look elsewhere – there are plenty of far better hotels in Auckland for a similar price.
Complaints about poor WiFi performance are one of the biggest issues facing the average internet user right now. With most people now relying on phones, tablets and laptops for their Internet fix, the days of a cabled Ethernet connection being the norm are well and truly over. Unlike a cabled Ethernet connection, which offers a guaranteed level of performance, WiFi is incredibly complex, with many variables that will impact performance and reliability. Wireless performance issues are a nightmare for the helpdesk of the average Internet Service Provider (ISP) – poor speeds are one of the biggest complaints from customers, yet they’re something an ISP has absolutely no control over.
First off, let’s be very clear about one thing. WiFi is not, and never will be, a replacement for a cabled Ethernet connection. It will always be a convenient, complementary solution. Unless the laws of physics are changed at some point in the future there will never be an exception to this rule.
The first WiFi capable devices appeared in the market in the late 90s with a standard known as 802.11b, offering headline theoretical speeds of up to 11Mbps. From the 802.11b standard the 802.11g standard was created, delivering an improved headline theoretical speed of 54Mbps. By 2009 the 802.11n standard was ratified, delivering headline speeds of up to 300Mbps, and by 2011 we were seeing the first 802.11ac gear promising headline speeds of up to 867Mbps (or even greater from some vendors).
The issue of theoretical vs real world throughput is something that we need to cover here. Every headline wireless speed you see mentioned, whether it be 11Mbps, 54Mbps, 150Mbps, 300Mbps or 867Mbps, is a theoretical maximum. For various reasons it’s impossible for these sorts of speeds to ever be achieved in the real world, primarily because they don’t take into account the overheads that exist in the WiFi protocol at Layer 1, and overheads at Layer 2 and Layer 3 of the Open Systems Interconnection (OSI) model which forms the basis of all data communications. They also don’t factor in WiFi being a half duplex medium, meaning that a WiFi device can’t transmit and receive data at the same time. It has to (very quickly) alternate between both, much like two people talking on a walkie talkie. This differs from an Ethernet connection, which is full duplex and can transmit and receive data at the same time like two people having a real world conversation. As you’re transferring data across your WiFi connection you’ll be using one of two transport protocols, Transmission Control Protocol (TCP) or User Datagram Protocol (UDP). Because TCP relies on acknowledgement (ACK) packets (think of a walkie talkie user confirming they’ve received every message before the other person can send the next one), TCP performance over WiFi will typically be significantly less than UDP performance, as UDP does not rely on ACK packets.
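To put rough numbers on this, here’s a back-of-the-envelope sketch. The efficiency factors are illustrative assumptions of mine, not measured values – real results depend on hardware, drivers, signal quality and congestion:

```python
# Rough sketch: estimate real-world WiFi throughput from a headline rate.
# Both factors below are illustrative assumptions, not measured values.

def estimate_throughput(headline_mbps, protocol="tcp"):
    mac_phy_efficiency = 0.5  # assumed airtime lost to half duplex operation and L1/L2 overhead
    tcp_ack_penalty = 0.85    # assumed extra cost of TCP ACK packets vs UDP
    rate = headline_mbps * mac_phy_efficiency
    if protocol == "tcp":
        rate *= tcp_ack_penalty
    return rate

print(estimate_throughput(300, "udp"))  # roughly half the 300Mbps headline rate
print(estimate_throughput(300, "tcp"))  # lower again once ACK overhead is counted
```

The exact numbers don’t matter – the point is that a chunk of the headline rate vanishes before your data even starts moving, and TCP always loses a little more than UDP.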
In the real world, typical maximum speeds look more like this:

- 802.11b: around 5Mbps
- 802.11g: around 20Mbps
- 802.11n with 20MHz channels (N150): around 40Mbps
- 802.11n with 40MHz channels (N300): around 80Mbps
- 802.11ac: between 100Mbps and 350Mbps, depending on channel width and the brand of hardware you’re using

These are all maximum real world figures, and issues such as signal strength and congestion from other wireless networks may mean you don’t get speeds anywhere near them – these issues are exactly what I’ll get onto next. Understanding what impacts WiFi performance isn’t straightforward, as many of the key factors involve radio frequency (RF) engineering and protocol detail, however I’ll attempt to explain a few of the major concepts in something resembling simple English.
Let’s start by explaining the most important principle of radio communications – the noise floor. Unless you live somewhere with no neighbours within a 20km radius (or live inside a Faraday cage), you’re continually being exposed to a myriad of radio waves from other wireless networks along with other sources of RF interference such as microwave ovens, cordless phones, mobile phones, video senders, baby monitors, and Bluetooth devices. Noise floor (like signal strength) is measured in dBm, which uses a logarithmic scale.
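Because dBm is logarithmic, every 10 dB step is a factor of ten in actual power. A quick sketch of the conversion between dBm and milliwatts:

```python
import math

def dbm_to_mw(dbm):
    # Every 10 dB is a factor of 10 in power; 0 dBm is defined as 1 milliwatt.
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    return 10 * math.log10(mw)

print(dbm_to_mw(0))    # 1 mW
print(dbm_to_mw(-30))  # a thousandth of a mW
print(dbm_to_mw(-90))  # a billionth of a mW - a typical noise floor
```

This is why a -90 dBm noise floor sounds close to a -70 dBm signal but is actually a hundred times weaker in real power terms.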
A typical noise floor value in the 2.4GHz band can be anywhere from -80 dBm to -110 dBm depending on the level of background noise in your environment from other sources. When you connect a WiFi device to a WiFi access point or router you’ll likely see an indication of signal strength on your device, typically showing bars. This shows a good approximation of signal strength, but isn’t telling you the full story. Many WiFi devices and access points also show the signal strength of the connected device in dBm. If you’re within close proximity of your WiFi access point you’ll potentially see a signal level of -50 dBm; as you move away from the access point this figure will fall away towards the noise floor. Once your signal level approaches the noise floor, link quality (and overall WiFi performance) will suffer. Once the signal strength reaches the noise floor, maintaining a WiFi connection becomes impossible as each device is unable to “hear” the other due to the level of background noise. All building materials cause a drop in signal strength, which can be anywhere from a few dB upwards – a regular plasterboard wall at home may reduce WiFi signal strength by 5 dB, whereas a steel reinforced concrete wall can easily reduce signal levels by 20 dB.
To maintain a good quality WiFi connection you need to ensure your signal level is comfortably stronger than the noise floor. The difference between the two is known as the signal to noise ratio (SNR). An SNR of 20 dB or greater is recommended, meaning that if your noise floor is in the range of –90 to –95 dBm you’ll need to ensure your signal strength is at least –70 to –75 dBm to ensure a good connection. If you live in a very crowded inner city apartment block with a large amount of background noise and interference from other WiFi networks, you may find your noise floor is significantly higher – it could easily be in the vicinity of –80 dBm. This is when major problems start to occur, as it means you will need to be very close to your WiFi access point or router to maintain a good connection – but even then maintaining good performance may be difficult due to the way the WiFi protocol works, which is something I’ll discuss later on.
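The SNR calculation itself is just a subtraction of the two dBm figures. A minimal sketch, using the 20 dB rule of thumb above:

```python
def snr_db(signal_dbm, noise_floor_dbm):
    # Both figures are negative dBm values; the difference is the SNR in dB.
    return signal_dbm - noise_floor_dbm

def link_quality(signal_dbm, noise_floor_dbm, min_snr_db=20):
    return "good" if snr_db(signal_dbm, noise_floor_dbm) >= min_snr_db else "poor"

print(link_quality(-70, -95))  # good: 25 dB of headroom over a quiet noise floor
print(link_quality(-75, -80))  # poor: only 5 dB over a noisy apartment's floor
```

Note how the second example fails even though the signal itself (-75 dBm) is perfectly usable – it’s the raised noise floor that kills the link.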
I like to explain noise floor in a very simple way by comparing it to people talking in a room. If you’re in a large room with one other person you’ll easily be able to have a conversation with them. If 300 other people and a DJ turn up playing music, they’re all creating noise. As this level of background noise increases you’ll find that maintaining a conversation with the person next to you becomes very difficult, and even when you shout you may not be heard. This pretty much mirrors what happens to WiFi when background RF noise effectively creates an environment where maintaining a reliable WiFi connection can be very difficult.
Now that I’ve explained the concept of noise floor, we’ll look at the issue of frequencies. Two frequency bands are used by WiFi – 2.4GHz and 5GHz. Today 2.4GHz is still the primary band used by WiFi devices. Most tablets, phones and laptops sold within the past 1-2 years will also support the 5GHz band, however the number of access points and home routers that support this band is still small. Both bands use blocks of spectrum that are openly usable by anybody in most countries without a licence, meaning there are other devices out there that also use these same frequencies. In the 2.4GHz band, Bluetooth devices, baby monitors, older cordless phones, video senders and microwave ovens all use the same frequency band as WiFi, which means they all have the potential to cause interference and impact the performance of your WiFi. Most modern cordless phones use the DECT standard, which operates in the 1.9GHz band and will not interfere with 2.4GHz WiFi, however in New Zealand in particular Uniden sold tens of thousands of 2.4GHz cordless phones until they stopped selling these a few years ago. A 2.4GHz cordless phone is a significant cause of interference with 2.4GHz WiFi and should be replaced with a modern DECT phone to eliminate the issue. Likewise turning on your microwave oven can cause a jump in noise that can impact WiFi, and baby monitors and video senders that use the 2.4GHz or 5GHz bands will also impact WiFi performance.
In the 2.4GHz band there are 13 channels available for use in New Zealand. In many other countries (including the USA) only 11 channels are available. Each of these channels is just 5MHz apart. This in itself creates a major problem for 2.4GHz WiFi, as from 802.11g onwards 20MHz wide channels are the norm, meaning only 3 channels – 1, 6 and 11 – can be used without overlapping each other. Using channel 13 in New Zealand can be tempting as it is often less crowded, however it creates plenty of problems: many WiFi capable devices are configured for the US market where only channels 1-11 are allowed, and such devices will not be able to connect to a network transmitting on channel 13.
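The channel maths is easy to sketch. The 25MHz minimum separation used here is the rule of thumb that leaves channels 1, 6 and 11 as the only non-overlapping set:

```python
# 2.4GHz WiFi channel maths: channel centres sit 5MHz apart, but a 20MHz-wide
# transmission spills beyond its centre frequency, so channels need to be at
# least 5 numbers (25MHz) apart to avoid overlapping.

def centre_freq_mhz(channel):
    return 2407 + 5 * channel  # channel 1 = 2412MHz ... channel 13 = 2472MHz

def overlaps(ch_a, ch_b):
    return abs(centre_freq_mhz(ch_a) - centre_freq_mhz(ch_b)) < 25

print(centre_freq_mhz(1), centre_freq_mhz(13))  # 2412 2472
print(overlaps(1, 6))   # False - far enough apart not to interfere
print(overlaps(1, 3))   # True - adjacent channels overlap badly
```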
(Picture borrowed from www.xirrus.com)
Using a 20MHz channel meant a maximum theoretical throughput of 150Mbps with the Wireless N 150 standard, which for many people wasn’t fast enough - people wanted faster speeds. How did they achieve that? They simply moved to 40MHz wide channels, and Wireless N 300 was born. It doesn’t require a genius to realise that suddenly the 2.4GHz spectrum became a lot more crowded, and that overlapping channels from different networks became a major problem.
In the 5GHz band channels are placed 20MHz apart, so if you’re using standard 20MHz channels the overlaps that plague the 2.4GHz band don’t exist. However recent additions such as the new 802.11ac standard can use channels up to 80MHz wide (with the standard allowing up to 160MHz channels), meaning the same challenges that exist in the 2.4GHz band will eventually occur in the 5GHz band as the number of devices using it increases.
Due to the way WiFi works, all other WiFi networks using the same channel or overlapping channels create interference which will ultimately impact the performance of your network. The WiFi standard uses something known as Clear Channel Assessment (CCA) to determine if other nearby WiFi networks on the same channel(s) are transmitting or receiving data. If it detects activity from another network it may not be able to transmit or receive data. Likewise when you’re transmitting or receiving data on your WiFi network, CCA on nearby WiFi networks will detect this activity. Think of it as a room full of people where only two can hold a conversation at a time and everybody else must wait – in the WiFi world the impact of overlapping networks is a significant slow down in WiFi speeds and/or other performance issues.
For this very reason 2.4GHz WiFi performance in many urban environments is now significantly degraded. If you’re living somewhere such as an inner city apartment where there may be hundreds of nearby networks, you’re in the worst possible environment for WiFi, and in some cases really have to be thankful your WiFi works at all. The harsh reality is that fixing the 2.4GHz band isn’t possible – the solution is to upgrade your equipment to support the 5GHz band if reliable WiFi performance is important to you. While the 5GHz band will offer better performance, it also doesn’t propagate (travel) through open air and building materials as well due to the higher frequency. This is both the biggest benefit and the biggest downside of 5GHz WiFi – your 5GHz signal probably won’t leave your house and interfere with your neighbour’s WiFi network, but you will also find that a 5GHz signal will not travel through internal walls as well as a 2.4GHz signal. As we move towards even higher frequency bands for WiFi, with a 60GHz standard now in place, we’re moving towards a future where individual access points in each room of your house, either cabled back via Ethernet or using Ethernet over Power adapters, will become the standard way of providing WiFi coverage in the home.
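The propagation difference between the two bands falls straight out of the standard free-space path loss formula – all else being equal, a 5GHz signal arrives several dB weaker than a 2.4GHz one before you even account for walls. A quick sketch:

```python
import math

def fspl_db(distance_m, freq_mhz):
    # Free-space path loss in dB, with distance in metres and frequency in MHz.
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

loss_24ghz = fspl_db(10, 2437)  # channel 6 in the 2.4GHz band
loss_5ghz = fspl_db(10, 5180)   # channel 36 in the 5GHz band
print(round(loss_5ghz - loss_24ghz, 1))  # the 5GHz signal is weaker by several dB
```

At the same 10 metre distance the 5GHz signal is roughly 6-7 dB down on 2.4GHz – and remember from the noise floor discussion that every 10 dB is a factor of ten in power.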
Now that I’ve explained a few of the reasons why your WiFi can be impacted, I’ll offer a few tips on how to improve your WiFi connection.
First up is placement of your modem/router or access point. Many people will have this located next to their primary PC. The simple reality is that a WiFi connection will always work best the closer you are to your router or access point, so if this is located at one end of your home, you could easily have poor coverage at the other end. As explained above, WiFi signals degrade as they travel through building materials, so placing your modem/router or access point in a central location is typically going to deliver the best outcome. If you have a big house the reality is that a single WiFi access point will probably not be sufficient to deliver adequate coverage. In this situation the best solution is a secondary access point that’s cabled back to your primary modem/router via Ethernet cable or Ethernet over Power adapters. Different brands of equipment have different levels of performance, so upgrading your equipment can also significantly improve things. Discussing the pros and cons of different hardware isn’t something I’m going to write about though – a forum site such as Geekzone is the best place to discuss such things. Newer hardware that supports the 802.11ac standard will also likely use antennas that support beamforming, which can deliver significantly improved performance. Ruckus (who arguably make some of the best WiFi equipment on the market) have a great white paper on beamforming that you can read here.
The next step is to ensure that your WiFi network is operating on the best available channel. What determines “best” isn’t something that’s easy to answer. If your WiFi hardware has its channel set to auto rather than having a channel manually configured, there is a good chance it has picked what it believes is the best channel to use. Auto mode does have its downsides though, and if you’re in a very noisy environment your device may change channels regularly and performance may suffer. If you have an 802.11n 300Mbps device it’s probably defaulting to 40MHz channels, and if you live in a noisy environment with lots of other networks you may find your WiFi performance improves if you drop this back to 20MHz channels. There are various applications (my favourite is inSSIDer) for Android mobile phones and PCs that will show you surrounding networks, and let you see which channels are the least crowded. If you’re using an iPhone you’re out of luck – Apple won’t allow applications such as this in the App Store.
You also need to be aware that not all hardware is created equal – it’s not uncommon to find home routers that can only support somewhere in the vicinity of 10 wireless devices. If you connect more WiFi devices than your router can support, you’ll start to encounter problems. You will also encounter problems if you have lots of devices connected with poor signal strength – if you have devices connected at one end of the house with very low signal strength, this will affect the performance of WiFi devices that are closer and have good signal strength.
And lastly ensure that power levels are set lower, not higher. Many people assume that more power = better, but in reality this is not the case. Turning your router or access point to maximum power settings will typically result in reduced maximum throughput and reduced receive sensitivity, meaning your level of performance may actually be worse. High power devices also mean more noise is being created, and this simply makes the 2.4GHz band worse for others. Setting your power level to a lower setting in many situations will actually improve your WiFi performance.
One common piece of hardware people are buying these days in an attempt to improve performance is a so called “WiFi extender”. These devices work by picking up a WiFi signal and rebroadcasting it, meaning they can improve performance in your home if you have low coverage areas. There are two major issues with these devices however – the first being that they will typically halve the maximum speed of your WiFi network, as they have to pick up and then rebroadcast the network, and the second being that they are often not installed correctly. As the extender needs to connect wirelessly to your main access point or router, it’s no good placing it where there is poor coverage – the device needs to be placed in a location where coverage is good, such as a midway point between your router and the poor coverage area.
In summary, WiFi is far from simple. These days people have an expectation that just because a WiFi network exists it should work, and work well – in reality nothing could be further from the truth.
Disruptive. It’s a word that I don’t like, but it has become a very common catchphrase these days to describe the now common scenario of a new entrant causing grief for existing players in a market. Right now disruptive new entrants in the market are causing grief for taxi companies worldwide.
I’ve done a lot of travelling in my life, and taxis are something I’ve used a lot both in NZ and overseas. The experience is very similar right across the world, and mysteriously the highly opinionated middle aged male driver who loves talking about politics while listening to talkback radio isn’t unique to New Zealand. In many cities the taxi experience is a unique one – catch a Black Cab in London and you’ll find yourself with a driver who’s spent years learning “The Knowledge” – a test involving memorising over 20,000 streets and 20,000 landmarks in Central London – and who can probably tell you more about London and its history than any guide book.
The launch of Uber in the US in 2010 delivered the first true competition the taxi market had seen. As the taxi market is often regulated or tightly controlled, in many ways it mirrors a monopoly. While multiple companies may exist in these markets, pricing, and in some cases vehicle requirements, are set by authorities, meaning the price and experience is often essentially the same no matter what company you pick. Uber’s entry into the “rideshare” (as they call it) market resulted in a product very different to what already existed. Rather than the old school experience of hailing a cab in the street (something that can prove very difficult in the US) and then having to have cash to pay for it, the whole experience could be done straight from your mobile phone.
I started to write this blog post last month after having used Uber a handful of times in recent trips to the US. Since then I’ve used Uber a couple of times in Auckland, and we’ve also seen the launch of Uber in Wellington a couple of weeks ago. I’ve loved every Uber experience so far, and it’s something I’ll be happy to continue to use in the future.
What I do need to make clear at this point is why I (and plenty of others I know) share this support for Uber. Right now if you’re like many people out there you’ll immediately be thinking the sole reason is price. If you did, you’re wrong. Yes UberX may undercut existing taxi offerings, but Uber Black also exists offering a premium transport solution in luxury vehicles. Why do I really like Uber? The answer is really quite simple – Uber are using technology to let me interact with them, and that’s something that existing taxi companies seem to be largely ignoring.
I love being able to open up the Uber app on my phone, see where the nearest vehicle is, see an estimated quote for a journey, order my ride, see the details of the driver and his vehicle, and then watch in real time as the driver comes to pick me up. The Uber driver sees all this data on a mobile phone and gets GPS based navigation to the destination. Once my journey is over I can literally hop out of the vehicle with payment for the ride being automatically deducted from my credit card with no expensive “electronic card convenience fee” surcharges which most taxi companies in NZ charge. I’m then prompted to rate my driver, and likewise the driver is prompted to rate me as a passenger. Uber really is the perfect product for today’s market and delivers a customer experience that other companies simply can’t offer.
So how are taxi companies in NZ competing against Uber? Improving their service? No. Creating apps to order a taxi? Typically no. Waiving “electronic card convenience fees”? No. The New Zealand Taxi Federation, which represents most taxi drivers and taxi companies in NZ, is simply spreading FUD (fear, uncertainty and doubt) in the hope of discrediting Uber, with stories such as this on stuff.co.nz on the weekend. I’ve also heard from a couple of taxi drivers in recent months that “Uber is illegal in NZ”, with the Taxi Federation claiming that many drivers are not legally licensed. It’s a requirement to hold a passenger service endorsement to carry passengers for payment here, so my challenge to the Taxi Federation is to front up with some facts to back their claims or to retract them. If drivers are operating illegally it’s the job of the New Zealand Transport Agency to enforce the law, so they should be approaching them with evidence, not bleating to the media and trying to spread mistruths.
So why aren’t taxi companies in New Zealand simply working on the whole customer experience issue and giving customers the ability to book a taxi online or via an app? That really is the million dollar question. Most large taxi companies in NZ use a taxi dispatch system created by Australian company MTData, who are now one of the largest suppliers of taxi dispatch hardware in the world. In car terminals integrate tightly with their back end system, meaning a taxi company knows exactly where every car is and can dispatch jobs direct to the driver’s screen. MTData provide an extensive API, meaning an app delivering 100% of the Uber functionality could easily be built. Hutt & City Taxis and Auckland Combined Taxis have launched iOS only apps within the past couple of years based on MTData source code that offer basic booking functionality, but the fact most people have never heard of these really sums up how popular they are. The lack of Android versions of their software shows it’s not a market they’re serious about, and the apps also lack any ability to pay for your ride.
The NZ market has seen the introduction of Zoomy, a 3rd party app that tries to replicate the Uber experience by equipping regular taxi drivers with mobile phones running the Zoomy app. An individual can book a taxi using the Zoomy app and pay for this with their credit card. Zoomy however is in my view a dead loss and I don’t see much of a future for it as it relies on convincing regular taxi drivers to sign up for the service, and with so few of them interested in this you’ll typically find that there may not be a Zoomy driver available if you want to order a taxi. Zoomy is also disliked by many large taxi companies and with little hope of getting these on side, it’s probably facing a pretty bleak future.
All of this really poses the question of why organisations such as Blue Bubble Taxis, New Zealand's largest affiliated taxi group which brings together 16 different taxi companies from across NZ, aren’t interested in the app market. My understanding is every single one uses the MTData dispatch system. Why haven’t Blue Bubble built an app (especially when MTData themselves are willing to provide support) to easily allow passengers in any of those markets to order a taxi, pay for that taxi using the app, and more importantly stand out by offering a solution that their competitors don’t? Instead of wanting to compete with new entrants, these taxi companies seem stuck in a 1980s mindset, believing that there is no need for innovation or change, which in this day and age is a perfect recipe for failure in any industry as new players are certainly more than happy to trample all over your existing business and steal your customers.
I wrote a few weeks ago about Jetstar’s on time performance in New Zealand. Today they’ve taken out banner ads on many major NZ websites promoting they are “New Zealand’s most punctual airline”, and promising a $25 voucher if your flight is more than 10 mins late between the 23rd and 30th September.
The problem is that the facts don’t quite back up Jetstar’s bold claim of being “New Zealand’s most punctual domestic airline”.
Jetstar qualify this claim with some fine print, admitting that they are not comparing identical statistics – they’re using Air NZ’s entire domestic operations for half of the financial year, and only jet operations for the other half.
*Best domestic airline on-time performance for July 2013-June 2014, at 10-minute departures. Source: ACARS for Jetstar statistics. Monthly reports to NZSX for Air New Zealand statistics. Air New Zealand reports included all aircraft types for July-December 2013 and jet aircraft only for January-June 2014.
It’s certainly worth noting that they’re talking about arrival times with their promise, but only publish scheduled departure statistics (and on NZ flights their scheduled flight times are 5 mins longer than Air NZ to allow a buffer for any delays).
Let’s look at the published departure statistics for both airlines:
Looking at the statistics published on the Jetstar website for the July 2013 – June 2014 financial year, Jetstar have an average on time departure performance (within 10 minutes) of 86.59%.
Let’s have a look at the results of Air New Zealand. A quick look at their publicly listed statistics on the NZX website includes the following comment:
In the 2014 financial year, 86.5% of Air New Zealand’s Domestic Jet flights departed within 10 minutes of scheduled departure time.
Air NZ operate a much larger fleet and significantly more domestic flights across their NZ operations than Jetstar. Air NZ also had a terrible start to the year with major on time issues due to the introduction of their A320 aircraft, which they have now recovered from. Despite this, Air NZ have an 86.5% on time departure performance across their domestic jet fleet vs Jetstar on 86.59%. Looking at the bigger picture it poses the question – factoring all of this in, is 0.09 percentage points the big difference Jetstar would have you think it is?
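For what it’s worth, the arithmetic behind that question is trivial to spell out (the 10,000 flight figure below is purely illustrative, not either airline’s actual schedule):

```python
jetstar_on_time = 86.59  # Jetstar's published on time departure percentage
air_nz_on_time = 86.50   # Air NZ's domestic jet fleet on time departure percentage

gap_points = jetstar_on_time - air_nz_on_time
print(round(gap_points, 2))  # 0.09 percentage points

# Across an illustrative 10,000 flights, that gap is only a handful of flights.
print(round(gap_points / 100 * 10000))  # 9 flights
```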