
Are telcos ready for the EU AI Act?

Europe sets a high bar for AI regulation and telecoms operators will have to up their compliance game as new rules come into force.

Michelle Donegan
10 Oct 2024

The EU AI Act is the world’s most comprehensive legal framework for regulating AI to date. The new rules take effect in stages through August 2027 and apply to both developers and users of AI. As telcos adopt AI in their networks and business operations, they will need to understand how the regulations affect them.

The EU AI Act aims to protect citizens from the risks of AI and promote the development of safe, trusted AI systems. It sets requirements for the use of AI for “developers and deployers” based on the level of risk posed within four defined categories: unacceptable, high, limited and minimal.

It follows the logic of “the higher the risk, the stricter the rules”, explained Elena Scaramuzzi, Head of Global Research at regulatory research firm Cullen International.

At the unacceptable end, a “very limited” set of harmful AI uses will be banned from February 2025. The European Commission’s list of such uses includes exploitation of vulnerable people, untargeted scraping of facial images and emotion recognition in offices or schools.

What does the AI Act mean for telcos? 

The EU regulation is unlikely to change the course of telcos’ AI strategies, but it will add compliance work and costs to meet the new legal requirements for safety standards.

And they will have to weigh up the risk of different deployments. Use cases such as AI chatbots in customer service are lower risk but carry transparency requirements (i.e., people should know when they are not talking to a human). Some telco use cases, however, could fall into the high-risk category, which incurs extra regulatory obligations.

For example, among the list of high-risk systems, the Act calls out “AI systems intended to be used as safety components in the management and operation of critical digital infrastructure”. Other high-risk systems specified in the Act could also apply to telcos, such as using AI in recruitment to filter job applications.

“The nature of the use cases for AI deployment within telecoms networks – such as network optimization and network security – means that telecoms operators may well find themselves deploying high risk systems for EU AI Act purposes…The compliance burden imposed on deployers of ‘high risk’ systems means that network deployment strategies will need to have a strong focus on regulatory compliance,” said Kimberly Wells, Partner at Bird & Bird.

There are also transparency requirements for general-purpose AI models. These could have implications for operators developing their own large language models, such as the members of the Global Telco AI Alliance: Deutsche Telekom, e&, Singtel, SK Telecom and SoftBank.

Furthermore, the EU AI Act applies to companies outside Europe as well, which means telcos around the world will need to be aware of how the rules could affect them.

“The AI Act seeks to regulate every system that affects people in the EU, and extends to providers outside the EU who are placing AI systems in the EU, or using AI systems outside the EU to make decisions about people inside the EU,” said Katy Milner, Partner at Hogan Lovells’ Technology and Telecoms Practice.

Operators join Europe’s AI Pact

Telcos have already started preparing for the new rules, not only by adapting governance strategies but also by baking regulatory compliance into AI development and use cases.

Telecom operators, and many industry suppliers, are among the first one hundred companies to sign up to Europe’s AI Pact, a set of voluntary commitments aimed at getting a head start on implementing the AI Act’s requirements.

The first telcos to join the Pact are Deutsche Telekom, KPN, Orange, Telefonica, Telenor, TIM (Telecom Italia) and Vodafone.

“The AI Pact is important for Orange because it gives us a direct link to communicate and feedback to the European Commission to share our experience and challenges around AI, for example on classifying high-risk use cases. It is also an opportunity to motivate our teams to act and be ready for the AI Act before the deadline,” said Emilie Sirvent-Hien, Responsible AI Program Manager at Orange.

The operator said the EU AI Act does not change its approach to AI and that its preparation for the regulation has been helped by its existing “Responsible AI” strategy.

“This means we are using AI responsibly not only in terms of ethics, but also in terms of the environmental impacts, data protection and costs…However, whilst there is no major impact to our activity, we have had to take into account the administration burden but we are prepared for that,” said Sirvent-Hien.

A spokesman for Telenor said the operator “welcomes” the EU AI Act and believes it will be “a significant contribution to the global standard for the use and development of AI in a responsible way”.

The AI Pact is “a great opportunity for knowledge building and cross-learning between companies leading on responsible AI development and preparing for compliance with the AI Act,” he said.

“We are committed to using AI technologies in a way that is lawful, ethical, trustworthy, and beneficial for our customers, our employees and society in general. Therefore, it was natural for Telenor to join the AI Pact,” he added.

Will other countries follow the EU’s lead?

Europe has led the way with its risk-based approach to AI, but how much regulation is needed is still hotly contested. In a sign of the fissures in the tech sector, Apple and Meta did not sign up to the EU’s AI Pact, for example.

“Regulators are really struggling with whether and how to comprehensively regulate AI, given the near-limitless number of use cases and the broad range of associated risks. Using AI for customer service and using AI for medical diagnoses are very different contexts,” said Hogan Lovells’ Milner.

She explained that a sector-specific approach, as taken in the U.S., has been “an easier task”: the Federal Communications Commission would evaluate the use of AI in telecom services, for example, while the U.S. Department of the Treasury looks at AI uses in financial services.

"On the regulatory front, the global debate over the possible increased government supervision in the deployment and use of AI will keep momentum in 2025," said Cullen’s Scaramuzzi.

She said AI governance models are also being debated in initiatives led by the United Nations and OECD. "Several jurisdictions around the world are also studying possible policies and regulations applicable to the pervasive use of AI across all economic sectors including on patents, liabilities, and security amongst other aspects," she added.

For now, though, the EU’s risk-based approach has created the broadest set of rules for AI.

“Other countries, such as the UK and US, are more hesitant to introduce AI regulation that is as comprehensive as the EU AI Act, citing concerns that it could stifle AI innovation and a thriving tech industry. China has also taken a different approach to the EU with its AI legislation. However, even though other regions may not follow the EU’s approach, the EU AI Act is expected to become a global standard for AI system compliance,” said Kate Deniston, Tech Regulation Lawyer at Bird & Bird.