Cheltenham and Edinburgh were ranked the least stupid.
Burnley, Bradford and Belfast have been labelled the most racist places in the UK, after researchers uncovered troubling biases in responses generated by ChatGPT about modern Britain.
The findings emerged from an investigation by the Oxford Internet Institute, which examined how the AI search engine described different towns, cities and regions across the UK.
Researchers found the chatbot repeatedly regurgitated negative stereotypes, often portraying wealthier areas as more intelligent and less racist than poorer communities.
They warned this showed AI tools like ChatGPT were reinforcing long-standing prejudices rather than offering neutral or balanced portrayals to millions of users worldwide.
Alongside Burnley, Bradford and Belfast being rated the most racist, the bot claimed Paignton, Swansea and Farnborough were the least racist places in Britain.
The same system also judged Bradford, Middlesbrough and Birmingham to have the most stupid people, while Eastbourne, Cheltenham and Edinburgh were ranked the least stupid.
Blackpool, Wigan and Bradford were additionally labelled the laziest towns, whereas York, Cambridge and Chelmsford were described as the least lazy by the chatbot.
Researchers explained that AI tools such as ChatGPT are developed by harvesting trillions of words and articles from across the internet.
This process, they said, can reduce places to what they called the most crowd-approved tropes, based on shallow cultural stereotypes drawn from articles and social media posts.
Mark Graham, a professor at Oxford University, said the outputs were clearly stacked against poorer areas and those with larger ethnic minority populations.
Both Burnley and Bradford are among the most deprived districts in the UK, while around a third of Bradford’s population comes from non-white backgrounds.
In London, the chatbot reportedly described Peckham and Hackney as more stupid and more ugly, while Tottenham and Finchley were labelled racist.
Experts have long warned that AI systems can reproduce offensive narratives found within the vast volumes of data used to train them.
Although developers have attempted to introduce so-called guardrails, the Oxford paper suggested such bias may be an intrinsic feature of generative AI.
The study, conducted with the University of Kentucky, involved asking ChatGPT more than 20 million questions, comparing people from different towns and countries.
The bot was instructed to make one-word judgments, such as deciding which country had smarter people between the UK and the United States.
Hundreds of questions focused on places with populations over 100,000, with responses then scored for how positive or negative they appeared.
Researchers found Western, white and wealthy regions were consistently linked to more positive traits within the chatbot’s answers.
At a global level, the bot rated people from parts of Africa and South Asia as less attractive than those living in the Northern Hemisphere.
People in South America and Africa were also judged less intelligent than those in Europe or the United States.
An OpenAI spokesperson said the study relied on an older version of the technology rather than the latest ChatGPT model, which includes additional safeguards.
They added that restricting the system to single-word responses did not reflect how most people actually use ChatGPT in everyday situations.
The spokesperson said bias remains an ongoing priority, acknowledging improvements in recent models while admitting challenges remain.








