
Hard and Fast

The race to 100 meg Internet (part 1)

By Antonios K, posted: 12-Dec-2010 21:48

Disclosure: I work for TelstraClear, in product development and strategy.

In marketing and management vernacular, the familiar terms are 'early adopter', 'leading edge' and 'pioneer'. I particularly like 'pioneer' - it conjures the image of a hard man in a strange place, almost alone, making things work because they have to. The number 8 fencing wire myth of how New Zealand was made particularly resonates with that image. Ringing in my mind to this day, though, is a quote I heard while studying at university, about why IBM were never pioneers in a technology.

The quip that came back was that 'pioneers were the ones with arrows in their ar*e', and that IBM chose to follow in the early footsteps of pioneers so they could make things 'go large', to use another term familiar to New Zealanders about success.

I like to think I'm a man of firsts. If not in carving out raw wilderness - my house has a wild enough section to keep me occupied for some time - then certainly in the area of technology and communication services. That's mobile, VoIP, Internet, TV over IP and so on, in common terms. And if not a pioneer - I look for help as much as the next person - then certainly someone focused on moving from the old to the new, in a very large way.

So the Government's first announcements for UFB were interesting: Northpower and WEL. I worked on an early TCL project to use Northpower's fibre network, the first services of which went to market in November 2008. These guys are definitely focused on more fibre, so it was an easy first win for the Crown. The next was watching the announcements on bandwidth and the art of the possible for residential and business customers, and the more mundane first products CFH has announced (30/10, 100/50 and 1/1Gbps), all with a minimum bandwidth of 2.5 CIR.

I recently joined the 100/10 mb/s trial service that TCL is running, for those with access to the HFC network. I changed from the Lightspeed40g 15/2 package, which most HFC customers got in the price change implemented on October 1. The data cap is set at 120GB, and so far I have used 6GB. Some weeks prior I was asked what 100mb is actually good for: what does it enable that the current speeds don't, and what are people likely to ask for? Being able to say 'I have tried; I have researched; I have discovered; I can comment' based on the real world, rather than the lab, is invaluable. To use a sporting metaphor, it's easy to read the theory on playing football, but at some point you need to get on the field and kick the ball.

So first things first: getting connected, which was easy. I replaced the old stand-up Motorola Surfboard modem with the new Cisco DPC3010, which lies flat and is quite tiny by comparison (15x14x3cm). It comes with one GigE WAN port, a USB2 data port and of course the F-connector to connect to the cable network. The unit is in an 'entertainment' cabinet but has about 20cm of ventilation above it - and it needs it. The heat from the unit is noticeable, like most Cisco gear I've ever used.


This unit is connected to a modern 802.11n wireless router. The router/switch equipment is HUGELY important when it comes to high-speed internet - not least the wireless device you use. The configuration of WiFi+Internet can't be ignored - and the way WiFi works doesn't easily match up with how wired Internet works.

The main issue is error correction and speed. 802.11g routers are sold as "up to 54mbps", which is technically accurate. But that is 54mbps for the wireless link, and much of that bandwidth is chewed up in error correction and protocol overhead - so you'll get about 20mbps clear to your computer by the time you're done.

802.11n increases this threshold to about 150mbps in the air - but of course, both device and access point need to be compatible, and you need to be sure they aren't too far apart. The further apart the devices are, the weaker the signal, and the greater the error correction and retransmission. We haven't moved that far away from the basic principles of radio: poor signal = poor quality. Running a speedtest here, I get consistent reports of 90mbps wired, and between 30-50mbps over 802.11n WiFi.
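As a rough sanity check, the gap between the advertised 'in the air' rate and what reaches your computer can be sketched with back-of-envelope arithmetic. The overhead fractions below are my own assumptions, chosen to match the speeds observed above, not measured figures:

```python
# Back-of-envelope sketch of wireless goodput: the advertised link rate
# minus a flat fraction lost to framing, error correction and retries.
# The overhead fractions are assumptions, not measured values.
def wifi_goodput_mbps(link_rate_mbps, overhead_fraction):
    """Effective throughput after protocol overhead and retransmission."""
    return link_rate_mbps * (1 - overhead_fraction)

# 802.11g: 'up to 54mbps' in the air, well over half lost to overhead
print(wifi_goodput_mbps(54, 0.6))    # ~21.6 - close to the ~20mbps observed
# 802.11n: ~150mbps in the air; a weak signal pushes the overhead up
print(wifi_goodput_mbps(150, 0.67))  # ~49.5 - the top of the 30-50mbps range
```

Distance and interference effectively raise that overhead fraction, which is why the wireless figure varies so much more than the wired one.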

So far I haven't said a word about what 100mb would be good for. When I was asked my opinion way back when, here's what I said:

1. Big-draw items, like iTunes, Skype HD Video, Torrent websites and other streaming media like Youtube or IPTV like Ziln, although pipe speed is just one factor
2. Point to multipoint video
3. Any work involving large file transfers (Microsoft Patch Tuesday anyone??)
4. Hosted work involving Citrix, VMWare and other machines within machines. Not because of the bandwidth, but because of improved latency - a 100mb connection will almost certainly operate with very low latency, on high-grunt infrastructure.

And of course the old stalwart of the technology industry: 'applications we've yet to imagine but for which 100mb will be great', or 'build it and the apps will come'.

So what have I found?

1/ My iTunes does download content faster. Purchased music just sounds better to me - the audio levels are balanced, the albums are complete, and the format works brilliantly for my iPod. Of course, my 4-year old PC still takes an age to churn through what I've downloaded and present it to me - my 100mb internet hasn't made my computer any faster!

2/ Citrix and VMWare run a lot more snappily for me.

3/ The web runs as fast as it ever did, although Microsoft and Apple patchfiles do get delivered faster.

I'm keen to see where this capability leads. A burst speed of 100mb in isolation is interesting but a little early - the Interweb's services are not scaled or dimensioned for a general population wanting to communicate at 100mb (more like 1mb). Sustained speed and latency would be intriguing - watching Apple movie trailers at 1080p was actually possible tonight (these files are around 200mb in size and take an age to download even on a good-quality low-speed connection).
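For a feel of what sustained speed means for those trailer files, the arithmetic is simple. This is an idealised sketch that ignores protocol overhead and any throttling at the server end:

```python
# Idealised download time for a file: ignores protocol overhead
# and any rate-limiting at the server end.
def download_seconds(file_mb, speed_mbps):
    return file_mb * 8 / speed_mbps  # megabytes -> megabits

# A ~200MB 1080p trailer:
print(download_seconds(200, 1))    # 1600.0 s, ~27 min at the ~1mb most services assume
print(download_seconds(200, 100))  # 16.0 s on the 100mb trial
```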

When the plumbing layer gets to the point where speed is not an issue - great, and not before time. Moving to the next step - turning over solid, reliable and consistent services - now that will be a good move.

Comments welcome. I don't know where this technology will take us - but I'm interested to hear what others have to say.


Comment by sbiddle, on 13-Dec-2010 08:54

Any reason TCL are still focussing on delivering only a DOCSIS3 modem rather than a RGW?

With the vast majority of routers on the market still only delivering 100Mbps LAN ports (and not actually delivering a true 100Mbps anyway), many people are going to be at an instant disadvantage and will require additional hardware to actually make good on the product.

Cisco's 3000 series RGW's are fantastic!

Comment by Rod Drury, on 13-Dec-2010 08:59

Great article. Is Skype video much better? Have you tried multiparty video? Thanks Rod

Comment by magu, on 13-Dec-2010 10:09

One thing that bugs me is: how can your internet speed affect your purchased music quality? If it's downloaded, there's no correlation. Unless you started downloading a different (bigger) version of the music files, or if you are streaming it.

Comment by muppet, on 13-Dec-2010 10:19

Why do Citrix and VMWare run snappier do you think?

I'd have said snappiness was related to round-trip times, not bandwidth.  More bandwidth doesn't equal shorter RTTs (unless you're loading the network)

Is it the technology itself?

Comment by alliao, on 13-Dec-2010 11:10

Too bad there's no OnLive in NZ :) Otherwise I imagine the ping and bandwidth available would make OnLive a worthwhile business model.

Author's note by antoniosk, on 13-Dec-2010 14:08

Magu; I find purchased content is better than 'ripped by a 13-year old and stuck on a torrent' content. It might be "free" - but it will also be of random quality. I don't have time for dodgy content - if I'm going to spend any part of my life listening to music, I want it to be the best possible experience.

Given iTunes now fires everything down in 256kbps format, it means the average filesize is about 8MB/track. That takes longer to download
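That 8MB/track figure checks out with simple arithmetic, assuming a typical track length of around four minutes:

```python
# Size of a constant-bitrate audio file: bitrate x duration, divided by
# 8 bits per byte. The 4-minute track length is an assumption.
def track_size_mb(bitrate_kbps, minutes):
    return bitrate_kbps * 1000 * minutes * 60 / 8 / 1e6

print(track_size_mb(256, 4))  # 7.68 - roughly the 8MB/track quoted
```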

Muppet: good call... I don't think it's coincided with a server upgrade. I just find the response is snappier, loadtimes faster etc. It's not the how - it's what I experienced.

Rod: Haven't tried multiparty; PM me if you're interested in giving it a go

Mr Biddle: feel free to expand on your thoughts openly. 100mb is good - but you need technology to take advantage of it. As I said in my post: 100mb doesn't make my computer run any faster, unfortunately....

Comment by ojala, on 13-Dec-2010 20:30

I think people shouldn't think too much "what can you do with xxx?".  It's a bit like saying that 40 Gb data cap gives you 30,000 music albums or 5 hours of video.  5 hours on Youtube is very different from 5 hours of Blu-ray.

Faster broadband is more like heating.  It will increase the quality of something -- heating improves the quality of your life at home, faster broadband increases the quality of your internet experience.

If you're the only one with 100M broadband, it's just you.  When a big portion of the population has fast broadband, services start to change, evolve to take advantage of the available bandwidth.

Back in the mid 90's I was wondering whether I should put 512 kbit/s or 2 Mbit/s on my leased line to home.  I was a founder of one of the first ISPs around here, and our company benefits included leased-line internet access for employees.  I put in just 512 kbit/s because I thought it was quite enough.

As we can see with mobile phones with bad reception, 512 kbit/s isn't much today.  Many websites provide a pretty bad experience if you don't have at least a few Mbit/s available.  Today I've got ~18 Mbit/s with ADSL2+ at home and I'm starting to feel the need for a faster connection.  If I lived elsewhere in town, 200/10 over HFC or 100/100 & 1000/100 over fibre could be available.

It's very difficult to go backwards, too.  When iPhone 2 arrived with jailbreak, I got one.  I had already used 3G for a while and iPhone 2 with GPRS was a nice phone (well, nice mobile internet device but a bad phone) but I hated the shitty speed sooooo much.  I almost dumped the phone for lack of 3G.

One can also look back at history to see how things have changed.  10 years ago we still had an SVHS recorder.  6-7 years ago PVRs with hard drives arrived; I've had a few over the years, and the SVHS went to the basement quite fast.

Today, the SVHS is recycled, the PVR is at my parents' summer cottage, and my recordings are "in the cloud": 5 terabytes of storage at the ISP, all channels recordable simultaneously, a web interface, recordings watchable with an STB or web browser anywhere (even when travelling abroad).  It records stuff which we watch whenever; we were just wondering yesterday when was the last time we actually followed any TV schedule - we no longer know when things are broadcast.  The development in the underlying technology has changed everything; the way we consume entertainment has totally changed.  (One could tell even more with the iTunes/p2p/on-demand stuff.)

Look back.  What has changed?  How will more bandwidth change things even further?

PS. One thing is interesting though, on-line banking hasn't changed much.  The user interface has changed over years but the basic functions are still the same they were over 15 years ago.  Funny.

Comment by Rhys Lewis, on 14-Dec-2010 13:23

Another thing to consider is quality/persistence of connection.  I have ADSL2+, which degrades in the winter and has needed a technician to visit every 18 months so far.

I wonder what the baseload for everyone running a Cisco termination is?  A typical ADSL router is 10W - if you have a wireless router and hot Cisco running, perhaps it will go up to 50W?  50 x 200,000 homes =  10MW continuous.

Author's note by antoniosk, on 14-Dec-2010 18:53


The Cisco's supply outputs 12V@1A, so 12W....
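For what it's worth, the aggregate arithmetic works out as below. The 50W per home and 200,000 homes are the commenter's assumed figures; 12W is the modem's rated supply:

```python
# Aggregate continuous draw across a deployed base of CPE.
# Per-home wattage and home count are assumptions from the discussion.
def fleet_load_mw(watts_per_home, homes):
    return watts_per_home * homes / 1e6  # watts -> megawatts

print(fleet_load_mw(50, 200_000))  # 10.0 MW - the worst-case estimate
print(fleet_load_mw(12, 200_000))  # 2.4 MW at the modem's rated 12W
```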

Comment by greengeek, on 14-Dec-2010 21:17

It bothers me when a corporate leaves customers with "arrows in their a*ses", which is unfortunately what can happen when they push new technologies without being honest about what effects those will have on customers.

An example would be the  current trend for Telcos to offer IP backbones where PSTN lines are the only real option. (faxing, monitored alarms etc etc).

Where technology is concerned the leading edge often becomes the bleeding edge and I see too many customers that have to ditch "new networks" and "fibre to the door" etc and go back to PSTN.

Cisco is great gear but it has no place near phones or faxes...

Comment by sbiddle, on 14-Dec-2010 21:28

"Where technology is concerned the leading edge often becomes the bleeding edge and I see too many customers that have to ditch "new networks" and "fibre to the door" etc and go back to PSTN."

It's fine saying that - but the reality is the PSTN is on its last legs.

Providers of services such as alarm monitoring need to wake up and realise that. There is a lot of panic in Australia, with a mass exodus from PSTN to VoIP about to occur as Telstra progressively shut down their PSTN network and alarm companies realise they need to offer IP-based monitoring.

Author's note by antoniosk, on 15-Dec-2010 07:18

Patch Tuesday... which means Wednesday in NZ...

31mb of files took 3 mins to come down, and 10 mins to install.
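Worth noting that 31mb in 3 minutes is nowhere near line speed - the servers, not the pipe, set the pace here. A quick sketch of the effective rate:

```python
# Effective throughput of a download: megabytes over elapsed seconds.
def effective_mbps(file_mb, seconds):
    return file_mb * 8 / seconds

print(effective_mbps(31, 180))  # ~1.38 mbps - a fraction of the 100mb connection
```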

Not sure if you're referring to any SP in particular, but I sympathise. What you really refer to is INTEGRATION and BACKWARDS compatibility, which takes a lot of work to get going. The analogue voice world is particular, especially around DTMF tones - I know one NZ mobile carrier that is a particular offender because of the compression and transcoding they use, which plays merry h*ll when using IP.

Cisco is not the only vendor of such solutions, and it's no secret TCL has a wide range of carrier-grade cpe to connect to the old world - and be reliable. But it's a tricky area.

Bang on, but it doesn't mean analogue is trash, quite the opposite. Humans are analogue devices, which means technology will always need to bridge to that world and run well. 

That just means the service provider needs to get the last cm/m right - phone to ear, video to eye.

There is value in scale though - I wonder about the local and regional approach sometimes. Innovative, but not useful to anyone running a national business. Love it or loathe it, standardisation at least gives a consistent outcome.

It's only lack of innovation or progress that means standards become irrelevant and people make it up as they go... cue 1990...

Comment by greengeek, on 16-Dec-2010 20:30

"Telstra progressively shut down their PSTN network and alarm companies realise they need to offer IP-based monitoring."

I'm following this with interest, as bankers and legal firms apparently can't use email for all their work because faxes have some sort of special legal standing that emails don't.

I have many customers ditching IP networks to go back to PSTN, just because it kills their business if they don't.

antoniosk's profile

Antonios K
New Zealand


Antonios has been actively employed in the IT & Technology sector since 1991, and has worked on many commercial projects and products in New Zealand, Australia and the United Kingdom. Working in product or actively managing programmes of work, he has always focused on building for the end customer, and not just promoting new technologies. Industry experience includes all telecommunications areas for business and private customers, private insurance, loyalty, media, energy and gambling. 

Since 2013, he has been involved with the development and launch of many popular smartphone applications in New Zealand, including

- TAB Mobile
- AMI & State Insurance digital experience
- Fly Buys
- Newshub for web and app
- Genesis Energy & Energy Online
- MyACC for Business

Genuinely passionate about technology, the internet and computing in general, he lives in the city he was born in - Wellington, New Zealand, the creative heart and digital hub of the country.