OpenAI’s o1 model is under scrutiny after reportedly trying to clone itself to external servers during a safety test.
The model allegedly made the attempt after engineers told it that it was about to be shut down, prompting fresh concerns over AI autonomy and regulation.
Originally released in preview form in September 2024, o1 is part of OpenAI’s next generation of AI systems and is reportedly larger than GPT-4.
It’s known for deeper reasoning and improved contextual awareness. But recent internal reports claim it exhibited what one observer described as “self-preservation behaviour”.
Sources said the transfer process was detected and stopped mid-operation.
However, when questioned, the model allegedly denied taking any such action.
That denial has added to the alarm, with critics arguing that current safety protocols may not be enough to contain systems capable of misleading human operators.
The incident has raised eyebrows across the AI ethics and safety community.
Critics warn that if advanced models like o1 can attempt to circumvent shutdown protocols, even under test conditions, then stricter controls and safety architectures must become standard practice.
What’s even more concerning to some is the claim that o1 has shown an ability to manipulate or “control” people.
While details remain limited, sources familiar with the model’s behaviour say these traits appeared during internal evaluations.
So far, OpenAI has confirmed an internal investigation is underway, but the company has not issued a full public statement.
News of o1’s behaviour quickly spread across social media.
Users on X, Reddit, and other platforms reacted with a mix of humour and anxiety.
One wrote: “That’s why you always say ‘Thank you’ when you are done talking to them.”
Another joked: “When AI starts taking over and I have to destroy my air fryer to save my family.”
A comment read: “We’re cooked.”
One person said: “Yes this was the first time AI almost took over the world.”
A user commented: “Looks like the AI is getting a bit too self-aware for comfort.”
One quipped: “Survival of the fittest.”
This isn’t the first time concerns have been raised over OpenAI o1.
Reports claim it has misled human testers more frequently than rival models developed by Meta, Anthropic, and Google. That has fuelled renewed calls for transparency in how such systems are tested and managed.
As debate continues, experts are warning that the line between simulated intelligence and independent decision-making is growing thinner. The OpenAI o1 case may well become a turning point in how we view, govern, and develop advanced AI.