Spotted on Oz Bargain: https://www.ozbargain.com.au/node/922666
I also managed to cancel my subscription straight away while keeping the 1-year Pro.
PayPal account required; a NZ account will do.
Tested and working
Any benefits to using this over ChatGPT?
Balm, it's gone!
What a shame. I won't ever be reconnecting my payment method to PayPal again after they screwed me around so much the last time something was charged incorrectly.
Thanks for the heads-up.
Subscribed.
I like Perplexity - but boy does it hallucinate sometimes...
Also signed up.
It passed my initial test, answering a question that Google's AI was unable to.
It's my preferred AI (not that I've done an in-depth comparison).
But I do like to spot-check its answers.
This morning it's done me some lovely graphs of NZ stats over the past 25 years - showing its sources.
Good work.
In the past 3 months - it has stuffed up twice:
(a) I was comparing the current women's 1500 m record holder with the men's field in 1936. I asked where she would have placed. The answer said she'd place 4th and gave me all the relevant times and who achieved them. By inspection (of the numbers), she'd have placed 3rd - not 4th. I commented on that and Perplexity apologised and said I was correct. An obvious and trivial mistake - but I'd not want a co-pilot to make that sort of error. (A check like this is trivial to script - see the sketch below.)
(b) I asked Perplexity to calculate beam deflection for a sign on an aluminium post, subject to a wind load of 80 knots.
The answer was off by a factor of 10. I'm from the slide-rule era, so my factor-of-10 sanity-checking is better than it might be. I was frustrated with myself - because I didn't think I was that senile... so I double-checked, and Perplexity was wrong. I told it so - and once again it agreed and recalculated. But here's the interesting bit... because the (correct) answer showed that the post would deflect by a metre (ie: the post was way undersized for the job), Perplexity said this was silly and reverted to its original answer of 100 mm. Seemingly, just because it was an 'acceptable' answer - even if incorrect. I didn't have time to pursue it further - I'm not used to arguing with hallucinating machines...
So... check your answers ;-)
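For anyone curious, the sort of check in (a) takes a few lines of Python. The times below are made-up placeholders, not the actual 1936 field or the current record:

```python
# Minimal placement check. Times are hypothetical placeholders (seconds),
# NOT the real 1936 results or the women's 1500 m record.
field_times = {
    "Runner A": 227.8,
    "Runner B": 228.4,
    "Runner C": 229.2,
    "Runner D": 230.1,
}
candidate = 228.7  # hypothetical record time, seconds

# Placing = 1 + number of runners with a faster time
place = 1 + sum(t < candidate for t in field_times.values())
print(f"Candidate would place {place}")  # -> 3 with these numbers
```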
pdh:
In the past 3 months - it has stuffed up twice: [...]
So... check your answers ;-)
Well off topic, but LLMs aren't optimised for numbers and calculations. Most modern AI applications use different tools under the hood for calculations, but I find I sometimes still get LLM (language) answers to mathematical problems. My go-to prompt for more complex calculations and formulas is "please write me [python] code to calculate [xyz] and then let's run through the calculation steps together". I haven't used Perplexity, but Gemini, Claude and ChatGPT will all helpfully write and run code from within the chat interface, and I have much more faith in those answers.
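To make that concrete, here's roughly what I'd expect such a generated script to look like for pdh's sign-post problem. Every input is an assumed placeholder (post size, sign area, drag coefficient) - this is a sketch of the workflow, not pdh's actual job:

```python
import math

# Cantilever deflection of a sign post in wind. ALL dimensions below are
# assumed placeholders, not the real sign or post.

KNOT_TO_MS = 0.514444    # 1 knot in m/s
AIR_DENSITY = 1.225      # kg/m^3, sea level
E_ALUMINIUM = 69e9       # Young's modulus of aluminium, Pa
DRAG_COEFF = 1.2         # flat-plate drag, a common rough assumption

def tip_deflection(wind_knots, sign_area_m2, post_length_m, d_outer_m, wall_m):
    """Cantilevered tube; wind load on the sign treated as a point load at the tip."""
    v = wind_knots * KNOT_TO_MS
    q = 0.5 * AIR_DENSITY * v**2              # dynamic pressure, Pa
    force = DRAG_COEFF * q * sign_area_m2     # wind force on the sign, N
    d_inner = d_outer_m - 2 * wall_m
    i_section = math.pi / 64 * (d_outer_m**4 - d_inner**4)  # hollow circular section
    print(f"pressure={q:.0f} Pa, force={force:.0f} N, I={i_section:.3e} m^4")
    # Cantilever with point load F at the free end: delta = F * L^3 / (3 * E * I)
    return force * post_length_m**3 / (3 * E_ALUMINIUM * i_section)

# 80-knot wind, 1 m^2 sign, 3 m post, 100 mm OD / 5 mm wall tube - all assumed
delta = tip_deflection(80, 1.0, 3.0, 0.100, 0.005)
print(f"Tip deflection: {delta * 1000:.0f} mm")  # ~96 mm with these made-up inputs
```

Because every intermediate value (pressure, force, section property) is printed, a factor-of-10 slip is easy to spot - which is exactly the sanity check pdh did by hand.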
Yes - it's off-topic - but probably significant for anyone planning to use Perplexity enough to get a year's sub...
I agree that large language models may not excel at calculations - but sheer innumeracy (as illustrated by my example (a)) is a bit concerning.
An awful lot of 'reasoning' (in quotes because I know that LLMs don't actually reason) is difficult if you can't manipulate simple numbers.
Answers to social questions often rely on quantities and comparisons of numbers.
You don't need to delve into engineering to benefit from juggling numbers.
If the calculations in example (b) are a challenge for a 2025-07 Perplexity... then it should say so - not give hopeful BS.
This isn't an exam where partial marks are better than nothing ;-)