Artificial intelligence (AI) took centre stage at Lisbon’s Web Summit, where tech leaders gathered to explore everything from dancing robots to the influencer economy.
In the venue’s warehouse-sized pavilions, a single phrase dominated conversations: “agentic AI.”
Delegates encountered AI agents worn as jewellery and embedded into workflows, and the topic featured in more than 20 dedicated panels.
Agentic AI refers to artificial intelligence that can perform tasks autonomously, such as booking flights, hailing an Uber, or assisting customers.
While the term has become an industry buzzword and even appeared in the Daily Mail’s list of Gen Z’s ‘in’ words, AI agents are far from new.
Babak Hodjat, chief AI officer at Cognizant, developed the underlying natural-language technology in the 1990s that later powered Siri, one of the first widely known AI agents.
He said: “Back then, the fact that Siri itself was multi-agentic was a detail that we didn’t even talk about – but it was.
“Historically, the first person that talked about something like an agent was Alan Turing.”
Despite their long history, AI agents are considered riskier than general-purpose artificial intelligence because they can act on and alter real-world situations.
Potential issues, such as bias in datasets or unintended consequences, are amplified when artificial intelligence operates independently.
The IBM Responsible Technology Board wrote in its 2025 report:
“Agentic AI introduces new risks and challenges.
“For example, one new emerging risk involves data bias: an AI agent might modify a dataset or database in a way that introduces bias.
“Here, the AI agent takes an action that potentially impacts the world and could be irreversible if the introduced bias scales undetected.”
However, Hodjat argued that the focus should be less on the AI agents themselves than on how people use them. He said:
“People are over-trusting [AI] and taking their responses on face value without digging in and making sure that it’s not just some hallucination that’s coming up.
“It is incumbent upon all of us to learn what the boundaries are, the art of the possible, where we can trust these systems and where we cannot, and educate not just ourselves, but also our children.”
Europe’s cautious approach to AI, particularly in contrast to the US, makes his warning feel especially relevant.
But some industry leaders suggest over-regulation could be a greater threat.

Jarek Kutylowski, chief executive of German AI language company DeepL, highlighted the risk of Europe falling behind in the global AI race.
The EU AI Act, which began applying in 2025, imposes strict rules on how companies can use AI. In the UK, AI remains governed by existing laws such as GDPR, with the future of regulation still uncertain.
Asked whether AI innovation should slow until stricter rules are in place, Kutylowski said:
“Looking at the apparent risks is easy, looking at the risks like what are we going to miss out on if we don’t have the technology, if we are not successful enough in adopting that technology, that is probably the bigger risk.
“I see definitely a much larger risk in Europe being left behind in the AI race.
“You won’t see it until we start falling behind and until our economies cannot capitalise on those productivity gains that maybe other parts of the world will see.
“I do not believe personally that technological progress can be stopped in any way, so it is more of a question of ‘how do we pragmatically embrace what is coming ahead?’”
As agentic AI gains traction, industry voices in Lisbon stressed a dual focus: understanding the technology’s boundaries while preparing to harness its potential before global competitors pull ahead.