As of this afternoon they restored our username/password account, so we are able to authenticate and have internet service again. (FYI... the issue was never a DSL link problem; it was always an account authentication issue.)
Total downtime was approximately 60 hours. Far better than 5 days, I suppose, but still really unacceptable (especially when it took close to 3 hours on the phone over those 2 days).
I also received a call from a very nice and very concerned manager at Telstraclear this afternoon.
So as I understand it now (and much as I expected), they are 'cabinetizing' our area, and the work is scheduled to be completed on the 17th, with expected interruptions of service of 1-4 hours at most (which is reasonable). However, in preparing for the 'cabinetizing', it seems some account migration work was done in advance of the hardware changes, and a 'defect' in the system affected those migrated accounts. Since the reps available over the weekend could only see the date of the 17th, they simply assumed that was how long we would be down for and that nothing could be done until then.
As the manager at Telstraclear confirmed, the reps were misinformed and what they told me was simply incorrect.
So at the moment, we are all good and I will keep my fingers crossed that nothing goes wrong when the hardware changeover occurs.
Either way, this does point out a rather gaping hole in the fault reporting/handling between departments at Telstraclear: whichever division began the work that created the account authentication faults did not successfully relay that information to the front-line phone support.
Had that happened, we wouldn't have had so many poor customer service reps saying they could only agree with our frustration but take no action except to 'wait it out for 5 days'. In a better scenario, they could have recognised that a specific, known issue was in play and logged those faults/cases correctly, which could have allowed them to mobilize a technical resolution in far less than 60 hours!
As I pointed out... the issue pretty much boiled down to usernames/passwords. As a former sysadmin, I can tell you that even if it were going to take 48 hours to undo a bad account migration, you can almost always come up with a quick fix to get people up and running while you resolve the user/database problems (e.g. issue temporary accounts/passwords while you sort out the backend, then kill the temp accounts once everyone's normal username/password starts working again).
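For what it's worth, here's a rough sketch of the kind of stop-gap I mean. It's purely illustrative and makes assumptions throughout: the account store, usernames, and provisioning steps are hypothetical stand-ins, not anything Telstraclear actually runs. The idea is just to hand affected customers temporary credentials so they can authenticate, then revoke them once the real migration is repaired.

```python
import secrets
import string

# Hypothetical in-memory stand-in for the ISP's authentication account store.
auth_store = {}

def issue_temp_credentials(affected_usernames):
    """Create temporary accounts so affected users can authenticate
    while the botched migration is being repaired."""
    alphabet = string.ascii_letters + string.digits
    temp_accounts = {}
    for user in affected_usernames:
        temp_user = f"temp-{user}"
        temp_pass = "".join(secrets.choice(alphabet) for _ in range(16))
        auth_store[temp_user] = temp_pass      # provision the stop-gap login
        temp_accounts[temp_user] = temp_pass   # hand these out to customers
    return temp_accounts

def revoke_temp_credentials(temp_accounts):
    """Kill the temporary accounts once the normal usernames/passwords work again."""
    for temp_user in temp_accounts:
        auth_store.pop(temp_user, None)

if __name__ == "__main__":
    affected = ["jsmith", "akumar"]            # hypothetical affected customers
    temps = issue_temp_credentials(affected)
    print("Temporary logins issued:", temps)
    # ... backend migration gets fixed here ...
    revoke_temp_credentials(temps)
    print("Temporary logins revoked; store now:", auth_store)
```

Obviously the real fix lives in whatever account/RADIUS backend they use, but even a simple workaround like this keeps customers online instead of telling them to wait it out.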
I'll post here again if any other issues arise, but let's hope this matter is now resolved!