Frameworks and standards are required for AI

At TM Forum Action Week, members are working on a framework to ensure that artificial intelligence technology is used correctly, safely and reliably – and that it is explainable and accountable.

26 Sep 2019

Artificial intelligence (AI) proponents and skeptics alike should be comforted by the activity taking place within the TM Forum Collaboration Program aimed at developing policy, processes and rules for AI management and assurance.

Taking a refreshingly sober approach, the AI & Data Analytics team met this week in Dallas at TM Forum Action Week. The group is working to advance the use of AI in telecom by protecting users, operators and customers at the outset. No other emerging technology has prompted developers to put so much effort into limiting, throttling, monitoring and protecting against ill effects, before unleashing it on the marketplace.

Yet, if done properly, the protections the team is currently building into AI management specifications for testing, training and certifying its AI models will ultimately enable better adoption because resulting solutions will have been built above all for trust.

Ensuring AI explainability

Rob Claxton, Chief Researcher at BT Research, explained that to create AI management standards and govern automation in AI, the industry needs to build a framework to ensure the technology is used correctly, safely and reliably – and that it is explainable and accountable.

“The problems we want AI to solve are very critical from a business point of view,” Claxton said. “If we are going to do this at scale, we must have the methods and frameworks for managing it.”

The AI collaboration team is trying to guard against building models that are too specific – supporting only a narrow set of applications – while keeping them adaptable for an unknown future. At the same time, the models must not be too general.

The group is working on processes for tracking AI throughout its lifecycle to gauge changes in live models versus the configuration when originally deployed. When deploying AI, operators need to understand the constraints placed on the technology and be able to test at any point in the model’s lifecycle whether those constraints are true and still hold.
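The idea of recording constraints at deployment time and re-testing them later in a model's lifecycle could be sketched as follows. This is a hypothetical illustration, not part of the TM Forum specification: the constraint names, the `ModelSnapshot` fields and the `audit` function are all invented here to show the pattern of comparing a live model against its original deployment configuration.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DeploymentConstraint:
    """A constraint recorded when the model was originally deployed."""
    name: str
    check: Callable  # takes a live-model snapshot, returns True if it still holds


@dataclass
class ModelSnapshot:
    """Minimal stand-in for a live model's observable state."""
    accuracy: float
    feature_names: List[str]
    version: str


def audit(snapshot: ModelSnapshot,
          constraints: List[DeploymentConstraint]) -> List[str]:
    """Re-test every deployment-time constraint against the live model
    and return the names of any constraints that no longer hold."""
    return [c.name for c in constraints if not c.check(snapshot)]


# Constraints captured at deployment time (illustrative values).
constraints = [
    DeploymentConstraint("accuracy >= 0.90",
                         lambda m: m.accuracy >= 0.90),
    DeploymentConstraint("uses original feature set",
                         lambda m: m.feature_names == ["usage", "tenure", "region"]),
]

# Later in the lifecycle: the live model has drifted below its accuracy floor.
live = ModelSnapshot(accuracy=0.87,
                     feature_names=["usage", "tenure", "region"],
                     version="1.3")

print(audit(live, constraints))  # → ['accuracy >= 0.90']
```

An audit like this can be run at any point after deployment, which is the property the group describes: the constraints are stated once, kept alongside the model, and remain testable for as long as the model is live.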

TM Forum is driving a range of best practices, thought-leading collaboration activities and research, in addition to services such as training and coaching in data analytics and realizing the business value of AI. The project teams are currently working on an AI data model for telcos, management standards for AI, creation of an AI data training repository, an AI maturity model and metrics, and AI user stories. To learn more or to join the group, please contact Aaron Boasman-Patel.