DTW (Digital Transformation World)
Telcos need safety checklist for AI
Telcos need new management standards for AI and the equivalent of an aviation safety checklist if they’re going to deploy the technology safely and reliably, according to BT’s Chief Researcher Rob Claxton, who spoke at Digital Transformation World on Tuesday.
15 May 2019
Communications service providers (CSPs) need new management standards for artificial intelligence (AI) and the equivalent of an aviation safety checklist if they’re going to deploy the technology safely and reliably, according to BT’s Chief Researcher Rob Claxton, who spoke at Digital Transformation World on Tuesday.
Claxton, who co-leads the Forum’s AI Management Standards workstream, argued that AI is not one solution but potentially hundreds of thousands of models across businesses, doing a whole range of things. Some will end up linked together, not always by design, and there are many ways this can go wrong once AI starts doing unexpected things.
Look to aviation
To corral all those AI solutions and use cases into a management framework, Claxton said, a good place to start is how the aviation industry classifies aircraft incidents. For a category like “loss of control in-flight”, for example, the AI deployment equivalent would be an “asleep at the wheel” accident, where an external change invalidates an AI model, an operator loses track of where a model has been deployed, or no one knows where it originated.
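By way of illustration only, such a taxonomy could be expressed as a simple classification that incident reports are tagged with. The sketch below assumes hypothetical category names and a made-up model identifier; none of this is drawn from TM Forum’s actual specifications or Claxton’s remarks.

from enum import Enum

class AIIncidentCategory(Enum):
    # Hypothetical incident taxonomy, loosely modelled on aviation occurrence
    # categories; the names and descriptions are illustrative only.
    MODEL_DRIFT = "External change invalidates a deployed model"
    LOST_DEPLOYMENT_TRACKING = "Operator loses track of where a model is deployed"
    UNKNOWN_PROVENANCE = "Origin or training lineage of a model is unknown"
    UNINTENDED_COUPLING = "Models interact in ways that were not designed for"

# Example: tag an incident report with a category for later enterprise-wide analysis.
incident = {
    "model_id": "churn-predictor-v3",  # illustrative identifier
    "category": AIIncidentCategory.MODEL_DRIFT,
    "detail": "Tariff change shifted the input distribution",
}
print(incident["category"].name, "-", incident["category"].value)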
CSPs need a standardized framework to manage these issues and reduce the likelihood of deployment problems and their subsequent consequences, much in the same way aviation has embedded standards that have reduced the number of airline fatalities per year over the decades, Claxton said.
Because it would be impossible (and impractical) to develop a massive pile of prescriptive processes covering every possible AI use case, TM Forum is building a checklist (not unlike an airline safety checklist) designed to embed good behaviors, best practices and standards that are convenient to consume.
“It’s a simple set of things just before you take off to make sure you’ve done the basics, and that they can actually be cross-checked,” Claxton explained.
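A minimal sketch of what such a cross-checked, pre-deployment checklist might look like in code is shown below. The item wording, the roles doing the sign-off and the data structure itself are assumptions for illustration, not the Forum’s checklist.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChecklistItem:
    # One cross-checkable item; field names are illustrative, not a TM Forum spec.
    description: str
    checked_by: Optional[str] = None        # who performed the check
    cross_checked_by: Optional[str] = None  # independent second sign-off

    @property
    def complete(self) -> bool:
        return self.checked_by is not None and self.cross_checked_by is not None

PRE_DEPLOYMENT_CHECKLIST = [
    ChecklistItem("Training data provenance recorded"),
    ChecklistItem("Model owner and escalation contact assigned"),
    ChecklistItem("Monitoring for input drift configured"),
    ChecklistItem("Rollback procedure tested"),
]

def ready_for_deployment(items: List[ChecklistItem]) -> bool:
    # Deployment proceeds only when every item is both checked and cross-checked.
    return all(item.complete for item in items)

# Example: sign off the first item; deployment stays blocked until all are complete.
PRE_DEPLOYMENT_CHECKLIST[0].checked_by = "data engineer"
PRE_DEPLOYMENT_CHECKLIST[0].cross_checked_by = "model owner"
print(ready_for_deployment(PRE_DEPLOYMENT_CHECKLIST))  # -> False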
‘Management wraps’
The Forum’s AI workstream is also developing technical enablers such as “management wraps” – a set of standard interfaces that can be easily wrapped around any AI system regardless of vendor, model or use case.
“They'll provide a consistent management framework so that at the enterprise level, we can understand what’s going on with individual models,” Claxton said. “We can see what’s going on in the systems and we can get an enterprise-wide view of what our AI is doing, what it’s up to, where it’s been and so on.”
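To make the idea concrete, a vendor-neutral wrap might expose a small, standard interface that any model adapter implements, so an enterprise dashboard can query every model the same way. The interface below is a sketch under that assumption; the method names, fields and demo model are hypothetical, not the Forum’s published interfaces.

from abc import ABC, abstractmethod
from typing import Any, Dict

class ManagementWrap(ABC):
    # Illustrative sketch of a vendor-neutral management interface;
    # TM Forum's actual "management wrap" interfaces may differ.

    @abstractmethod
    def describe(self) -> Dict[str, Any]:
        """Return model identity, origin and current deployment location."""

    @abstractmethod
    def health(self) -> Dict[str, Any]:
        """Return operational status, e.g. recent activity and drift indicators."""

class WrappedVendorModel(ManagementWrap):
    # Adapter placing a (hypothetical) vendor model behind the standard interface.
    def __init__(self, vendor_model: Any, deployed_at: str):
        self._model = vendor_model
        self._deployed_at = deployed_at

    def describe(self) -> Dict[str, Any]:
        return {
            "model_id": getattr(self._model, "name", "unknown"),
            "origin": getattr(self._model, "origin", "unknown"),
            "deployed_at": self._deployed_at,
        }

    def health(self) -> Dict[str, Any]:
        return {"status": "ok"}  # a real wrap would surface live telemetry

class _DemoModel:
    # Stand-in for a third-party model, purely for the example.
    name = "churn-predictor-v3"
    origin = "vendor-x-lab"

wrap = WrappedVendorModel(_DemoModel(), deployed_at="edge-cluster-eu-1")
print(wrap.describe(), wrap.health())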
He added: “Collectively, we have an enormous amount of knowledge, but it’s very hard to deploy that safely and reliably as an individual. That's why we're trying to do the work within TM Forum to build some standards that we can all rely on.”
To find out more about TM Forum’s AI collaboration projects, please contact Aaron Boasman-Patel.