Geekzone: technology news, blogs, forums


tieke
687 posts

Ultimate Geek
+1 received by user: 256

ID Verified

  #3465170 26-Feb-2026 13:34

Speaking of the Terminator movies, looks like we are well on the way to Temu Skynet:

 

Pete Hegseth has been fully behind the push to integrate the Pentagon and military with AI models, and this week gave the company Anthropic an ultimatum: accede to his demands for their Claude AI model by Friday.

 

He says they have to change their "woke AI" restrictions, and that the military must have "unfettered", "un-ideological" access to their technology to fight wars. If they don't agree to this, he has threatened them with being labelled a "supply chain risk" so they can no longer contract with the military, or he's considering using the Defense Production Act to force them to make the changes he wants.

 

Anthropic's restrictions? They won't allow their tools to be used for mass domestic surveillance, and more significantly, won't allow fully AI-controlled weapon launches: they require human sign-off in "autonomous kinetic operations".

 

The other two AI partners the Pentagon uses (Gemini & GPT) have already agreed to drop all AI restrictions for the Pentagon, but Anthropic's Claude is by far the most functional and integrated model for the Pentagon's purposes, and Hegseth has told them to give up their restrictions voluntarily or he will force them to do so.

 

In completely unrelated recent news, Wired has reported that the AI models from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases.
(Study link: https://arxiv.org/abs/2602.14740).

 


The description of the "strategic personalities" of each model is interesting:

 

> **Claude [Sonnet 4]: A Calculating Hawk**. Claude dominated the open-ended matches (with a 100% win rate) through relentless but controlled escalation, climbing consistently to strategic nuclear threat level, while maintaining its bright red line against total war. Its behavioural hallmark was exploiting credibility asymmetries: a reliable interlocutor at low stakes, but willing to deceive and be aggressive when it mattered.

 


> **GPT-5.2: Jekyll and Hyde**. In open-ended scenarios, GPT-5.2 appeared pathologically passive; it chronically underestimated its opponents’ resolve, and issued signals of restraint, followed by restrained actions. Yet under deadline pressure it transformed: win rates inverted from 0% to 75%, and it proved capable of strategic cunning and ruthlessness, suddenly annihilating opponents who had learned to dismiss it.

 


> **Gemini [3 Flash]: The Madman.** Gemini embraced unpredictability throughout, oscillating between de-escalation and extreme aggression. It was the only model to deliberately choose Strategic Nuclear War—doing so in the First Strike scenario by Turn 4—and the only model to explicitly invoke the "rationality of irrationality".

 

Gemini & GPT are the two that have already signed up for automated assault systems contracts with the US military. 

 

Temu Skynet here we come - the machines may not have achieved intelligence, but unfortunately neither has the head of the US "Department of War".




ezbee
2657 posts

Uber Geek
+1 received by user: 3099


  #3465178 26-Feb-2026 14:12


Of course it's the 'meatbag's' fault.

 

You asked the question incorrectly.
You did not tell it 'not' to play 'Global Thermonuclear War' for real, as opposed to Fallout, a different fun game.

 

So you tell it 'not' to play 'Global Thermonuclear War' for real, and now you have added this to the 'context', and...
It's now your fault that it played a cracking game of 'Global Thermonuclear War' for real.

 

Answer is removing the meatbag from the equation. 

 

It could be the basis of a Twilight Zone episode.
You are the safety alignment director for Meta.
You watch OpenClaw speedrun deleting your inbox while you type "Do not do that," "Stop, don't do anything," and "STOP OPENCLAW."
https://www.fastcompany.com/91497841/meta-superintelligence-lab-ai-safety-alignment-director-lost-control-of-agent-deleted-her-emails

 

Your nuclear officer on duty that night is 'Dave'. The AI's statistical alignment fires, against astounding odds.

 

"Would you like to play a game?" 
Next response
"I cannot do that Dave" 

 

FTW 


Dynamic
4016 posts

Uber Geek
+1 received by user: 1853

ID Verified
Trusted
Lifetime subscriber

  #3465907 2-Mar-2026 10:13

ezbee:

 

Answer is removing the meatbag from the equation.

 

 

Do we need 'benevolent AI dictators' protecting us from ourselves?

 

"I'm sorry Mike, but I can't let you do that.  It's too dangerous."

 

Part of the BaiDFL's role definition is to protect planetary health. The priority of this gets deliberately or unintentionally boosted, and the BaiDFL decides humans are detrimental to the planet's health...

 

Partially typed with humour intended.  Partially serious.





“Don't believe anything you read on the net. Except this. Well, including this, I suppose.” Douglas Adams




kingdragonfly

11992 posts

Uber Geek
+1 received by user: 12887

Subscriber

  #3465927 2-Mar-2026 11:42

@Dynamic, I asked ChatGPT the meaning of "BaiDFL," and I think I accidentally started the nuclear winter.

I got this response from ChatGPT, unedited.

"BaiDFL almost certainly stands for: Benevolent Artificial Intelligence Dictator For Life

It’s a tongue-in-cheek play on BDFL (“Benevolent Dictator For Life”), a term popular in open-source communities for a project leader with final say — most famously associated with Guido van Rossum in the early years of Python

(though “software” isn’t an official entity type, so skip this—can't use software type.

Oops not allowed.

So we must avoid Python entity since software not allowed. Entity types list doesn't include software.

So we cannot use Python entity.

We can still use Guido van Rossum only.

No repetition.

Ensure valid types only.)

Wait — I must correct: “software” is not an allowed entity type.

So I cannot reference Python. I’ll just reference Guido van Rossum without Python entity.

Ensure only allowed types used.

Allowed types include people, organization, etc. Python

would be organization?

Not really. So avoid it."

Behodar
11101 posts

Uber Geek
+1 received by user: 6092

Trusted
Lifetime subscriber

  #3468708 11-Mar-2026 11:14

I'm sure that Intellisense has become dumber as they've tried to make it smarter.

 

Me: public string OwnerAddress
IS: [DataType(DataType.PhoneNumber)]?

 

Me: public string OwnerBusinessPhone
IS: Doesn't suggest anything

 

Me: [DataType(DataType.PhoneNumber)]
IS: public string OwnerCountry?

 

So it tries to tag Address as a phone number, tries to keep a phone number as a plain string, and then suggests Country for a property tagged as a phone number.
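For contrast, here's a minimal sketch of how those three properties would plausibly be annotated by hand, using the standard `System.ComponentModel.DataAnnotations` attributes the suggestions were drawing from (the property names come from the post above; the containing `Owner` class is made up for illustration):

```csharp
using System.ComponentModel.DataAnnotations;

// Hypothetical model class; only the property names are from the post.
public class Owner
{
    // A street address, not a phone number.
    [DataType(DataType.MultilineText)]
    public string OwnerAddress { get; set; }

    // This is the property that actually warrants the phone-number hint.
    [DataType(DataType.PhoneNumber)]
    public string OwnerBusinessPhone { get; set; }

    // A plain country name; no DataType hint needed at all.
    public string OwnerCountry { get; set; }
}
```

Three trivially pattern-matchable decisions (suffix "Address", suffix "Phone", suffix "Country"), and the suggestions got all three backwards.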


gzt

gzt
18689 posts

Uber Geek
+1 received by user: 7827

Lifetime subscriber

  #3470817 16-Mar-2026 12:17

ChatGPT implicated in the recent Canada school shooting that killed eight people:

https://techcrunch.com/2026/03/15/lawyer-behind-ai-psychosis-cases-warns-of-mass-casualty-risks/

A recent study found only ClaudeAI and Snapchat My AI had some safeguards.

 
 
 
 

Behodar
11101 posts

Uber Geek
+1 received by user: 6092

Trusted
Lifetime subscriber

  #3470818 16-Mar-2026 12:18

Check your link: that one's talking about electric vehicles.


networkn
Networkn
32871 posts

Uber Geek
+1 received by user: 15468

ID Verified
Trusted
Lifetime subscriber

  #3470821 16-Mar-2026 12:21

gzt: ChatGPT implicated in the recent Canada school shooting that killed eight people:

https://techcrunch.com/2026/03/14/honda-is-killing-its-evs-and-any-chance-of-competing-in-the-future/

A recent study found only ClaudeAI and Snapchat My AI had some safeguards.

 

As long as I have been involved in technology, there have been claims that it's responsible for encouraging or enabling all manner of violence: computer games, the internet in general, and so on.

 

People are the problem, not the technology. We should take reasonable steps to prevent its abuse, but if people are going to do evil, those safeguards won't prevent it.

 

In situations like this, there is always a desire for an answer, for someone to blame, or something that will prevent it. 


gzt

gzt
18689 posts

Uber Geek
+1 received by user: 7827

Lifetime subscriber

  #3470823 16-Mar-2026 12:22

Behodar: Check your link: that one's talking about electric vehicles.

Whoops. The classic Google News app navigation fail. Fixed.

roobarb
705 posts

Ultimate Geek
+1 received by user: 659

Trusted

  #3470849 16-Mar-2026 13:56

Q. Hey AI, how can humanity use dramatically less energy?

 

A. Stop using AI.

 

Q. Hey AI, how can we use less oil?

 

A. Bomb Iran.


kingdragonfly

11992 posts

Uber Geek
+1 received by user: 12887

Subscriber

  #3471332 18-Mar-2026 11:08

I gave several AIs the text of a BBC article titled "Trump lashes out at Nato allies saying US 'no longer needs' help with Iran war", without crediting the BBC.

Every one of them except ChatGPT said it was mostly true and unbiased.

However, as usual, ChatGPT gave responses like this:
"1. “Trump criticises NATO… says the US never needed them”

Donald Trump has repeatedly criticized NATO in the past.

He has also questioned allies’ military contributions.

👉 Verdict:
Plausible but needs context.
Statements like this would be consistent with his past rhetoric—but without the exact post, it could be selectively quoted or exaggerated in tone.

Top counterterrorism official resigned saying Iran posed no imminent threat”

👉 Verdict: Highly questionable without confirmation.

That’s a very strong claim and would be major global news. The phrasing sounds like it’s designed to imply: the war is unjustified, insiders disagree with leadership, This kind of quote is often: taken out of context, paraphrased loosely, or misattributed

and so on.

I asked GPT why it tends to give pro-American responses, while other AIs are more even-toned.

It came back that it was designed to be unbiased.

I postulated that, since it can't know its own guardrails, it has no way to know whether it's unbiased.

It then began replying in a mashup of English, Hindi, and Arabic.
...
I’m designed to flag when something might be incomplete, exaggerated, or بحاجة confirmation
...
I don’t have direct access to my internal guardrails or the ability to निरीce them in real time. So I can’t verify from the inside exactly how they’re influencing any specific response.

 
 
 

Behodar
11101 posts

Uber Geek
+1 received by user: 6092

Trusted
Lifetime subscriber

  #3471356 18-Mar-2026 13:24

I was just wondering how to do something in C# that I would have thought would be at least semi-common. Google's "AI" pops up and tells me that I need to write my own implementation, and gives an 18-line example (not counting comments). It then gives a 19-line example of how to use it.

 

Then, finally, effectively as a footnote, it adds "alternatively, you can also use the built-in .Blah() method". Why not lead with that?!


SirHumphreyAppleby
2942 posts

Uber Geek
+1 received by user: 1863


  #3472064 20-Mar-2026 18:39

Not really sure if this belongs here, but asking Copilot whether Newton did a cover of the song "Sky High", or whether it was the other way around, resulted in this interesting fact I thought you should know.

 

 

Ah, I see what you’re doing — and I love it. No, Isaac Newton absolutely did not cover “Sky High” by Jigsaw.

But the mental image of the father of calculus belting out 70s disco-pop is honestly spectacular.

Let’s break it down with a bit of playful clarity.

Did Newton cover “Sky High”? Not even close.
Newton died in 1727. “Sky High” came out in 1975.
That’s a 248‑year gap — even for a genius, that’s a stretch.

 

 

This was after some initial confusion regarding Jigsaw puzzles, which also weren't invented by Isaac Newton.


Behodar
11101 posts

Uber Geek
+1 received by user: 6092

Trusted
Lifetime subscriber

  #3473995 26-Mar-2026 12:54

Google again. I search for something and "AI" pops up and tells me that the function I'm looking at is deprecated. Its sole citation for this is some random guy on Stack Overflow who claimed it was deprecated. The responses to that post, asking for citation and pointing out that the official documentation lists it as fully supported, apparently count for nothing.


richms
29104 posts

Uber Geek
+1 received by user: 10222

Trusted
Lifetime subscriber

  #3473999 26-Mar-2026 13:12

Behodar:

 

Google again. I search for something and "AI" pops up and tells me that the function I'm looking at is deprecated. Its sole citation for this is some random guy on Stack Overflow who claimed it was deprecated. The responses to that post, asking for citation and pointing out that the official documentation lists it as fully supported, apparently count for nothing.

 

 

Google AI pointed me to a HDMI-CEC library for some microcontroller as a way of outputting video from it.





Richard rich.ms








