Geekzone: technology news, blogs, forums
networkn
32873 posts

#3345152 20-Feb-2025 22:37

cddt:

Is it scary? The tool is performing exactly as expected. Large language models are not designed to provide accurate answers to questions of fact.

Excuse me?



gehenna
8667 posts

#3345520 22-Feb-2025 10:04

It's true to an extent. It's the wrapper Perplexity uses as its UI and UX that sets a wrong expectation among people who don't know how LLMs work. It gets easier to get accurate output from them once you understand their limitations and strengths, because then your expectation of what you need from them changes too. Tying GenAI to search was a huge mistake, IMO, but since big tech only knows how to make money through advertising, they had to go where the users are and shoehorn it into search products.

If you want to use AI, go for the ones that aren't trying to be all things to all people. That's why I have four or five separate services that I use for different purposes. I wouldn't go to YouTube to do my internet banking, for example.


Rikkitic
19071 posts

#3345522 22-Feb-2025 10:29

The point I keep trying to make is not that these things get stuff wrong, but that they present it in a manner that almost amounts to gaslighting. If you don't already know what is correct, you can very easily be led down a rabbit hole. I think this is irresponsible and potentially dangerous. I don't see why 'information' that is not historically verified cannot be presented with clarifying qualifications. My session with Perplexity actually became something resembling a confrontation. It simply could not accept that its completely wrong conclusion might be incorrect.

 

 






 


 




gehenna
8667 posts

#3345527 22-Feb-2025 11:03

My response doesn't invalidate your point. Though I do feel you are conflating what Perplexity, as a suite of integrated services, is doing with what the LLM itself is doing.


networkn
32873 posts

#3345531 22-Feb-2025 11:38

gehenna:

It's true to an extent. It's the wrapper Perplexity uses as its UI and UX that sets a wrong expectation among people who don't know how LLMs work. It gets easier to get accurate output from them once you understand their limitations and strengths, because then your expectation of what you need from them changes too. Tying GenAI to search was a huge mistake, IMO, but since big tech only knows how to make money through advertising, they had to go where the users are and shoehorn it into search products.

If you want to use AI, go for the ones that aren't trying to be all things to all people. That's why I have four or five separate services that I use for different purposes. I wouldn't go to YouTube to do my internet banking, for example.

Would you be willing to share which services you use for which purpose? 

 

 


networkn
32873 posts

#3345544 22-Feb-2025 13:30

Rikkitic:

The point I keep trying to make is not that these things get stuff wrong, but that they present it in a manner that almost amounts to gaslighting. If you don't already know what is correct, you can very easily be led down a rabbit hole. I think this is irresponsible and potentially dangerous. I don't see why 'information' that is not historically verified cannot be presented with clarifying qualifications. My session with Perplexity actually became something resembling a confrontation. It simply could not accept that its completely wrong conclusion might be incorrect.

Yes, I am inclined to agree. It's fine to be wrong. But insisting you are correct when you aren't, not citing sources, and not noting that your information is only current to a particular date is inexcusable.

The average person using AI (especially if paying) is not going to understand the technology to the level some of us may, so safeguards need to exist; one way to handle this is the above.

 

 


 
 
 

cddt
1973 posts

#3347828 26-Feb-2025 11:09

They're all just a giant f'ing matrix. That's all.

https://xkcd.com/1838/

"The pile gets soaked with data and starts to get mushy over time, so it's technically recurrent."
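The xkcd joke can be made literal with a toy sketch: a "language model" stripped down to nothing but matrix products and a softmax over a vocabulary. Everything here (the sizes, the names `E` and `W`) is invented for illustration; a real model just stacks vastly more of these layers, but the core really is repeated linear algebra.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "language model": an embedding matrix, an output projection,
# and a softmax -- a pile of linear algebra with made-up weights.
vocab_size, d_model = 10, 4
E = rng.standard_normal((vocab_size, d_model))  # token embeddings
W = rng.standard_normal((d_model, vocab_size))  # output projection

def next_token_probs(token_id: int) -> np.ndarray:
    """One 'forward pass': embed the token, project, softmax."""
    logits = E[token_id] @ W
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

probs = next_token_probs(3)
print(probs)  # a probability over the whole vocabulary, summing to 1
```

Nothing in the matrices "knows" anything; the output is a probability distribution over next tokens, which is why confident-sounding wrong answers come out just as fluently as right ones.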






