Building trust to unleash the full potential of AI
Current levels of AI usage barely scratch the surface of the huge opportunity that AI offers for businesses. CSPs have started to experience some benefits from early implementations, but there remains a large trust gap between AI and its beneficiaries that could deter further adoption and limit its potential.
The Catalyst project, Measurements of trust in AI environment, aims to build a system of trust within the artificial intelligence (AI) environment that adheres to the ethics and governance policies of communications service providers (CSPs) and other enterprises and organizations. The system will be based on a comprehensive framework that uses qualitative and quantitative measures to assess trust factors throughout the AI lifecycle.
The champions of this Catalyst project are Dialog Axiata and Ncell Axiata, while other participants include Subex, Axiata Digital Labs, BolgiaTen, Brytlyt and Amazon Web Services (AWS).
Professor Paul Morrissey is one of the guiding forces behind the project in his roles as global ambassador for AI, big data analytics and customer experience at TM Forum as well as chairman of BolgiaTen.
“The idea of digital trust is something that we’ve been very interested in for a long time, and we’ve decided to look at it from a technological perspective,” Professor Morrissey said. “We’ve picked some use cases and plan to examine them against certain measurements, including data anonymization, levels of bias, explainability, regulation and compliance, and more.”
Embracing the AI opportunity
Although AI techniques are increasingly being adopted, current levels of usage barely scratch the surface of the huge opportunity that AI offers for businesses. CSPs have started to experience some benefits from early AI implementations, such as reducing costs and introducing revenue-generation schemes. But there remains a large trust gap between AI and its beneficiaries that could deter further adoption and limit its potential.
CSPs are harnessing the power of AI to process and analyze huge volumes of data, extracting actionable insights that improve customer experience and operations and generate revenue through new products and services. However, they also need certainty that AI models and algorithms will not expose their brands to reputational or legal risks by violating their own corporate ethics and governance policies. The communications industry must therefore take adequate measures to mitigate AI technology’s potential negative impact.
Establishing a system of trust will ensure any AI system is safe, technically robust, transparent, accountable, non-discriminatory and able to mitigate bias. A trusted ecosystem that covers both internal and external model development will be able to expose potential risks as early as possible.
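To make this concrete, here is a minimal sketch of how qualitative and quantitative trust measures could be rolled up into a single score across dimensions like those listed above. The dimension names, weights and scale are illustrative assumptions, not the Catalyst's actual framework:

```python
# Illustrative trust dimensions and weights only -- the Catalyst's actual
# framework, scales and weighting are not public, so everything here is assumed.
TRUST_WEIGHTS = {
    "safety": 0.2,
    "robustness": 0.2,
    "transparency": 0.2,
    "accountability": 0.2,
    "fairness": 0.2,
}

def trust_score(measurements: dict) -> float:
    """Weighted aggregate of per-dimension scores, each normalized to [0, 1]."""
    missing = TRUST_WEIGHTS.keys() - measurements.keys()
    if missing:
        raise ValueError(f"unscored trust dimensions: {sorted(missing)}")
    return sum(TRUST_WEIGHTS[dim] * measurements[dim] for dim in TRUST_WEIGHTS)

# Example: a model that is robust but opaque scores poorly overall.
print(trust_score({"safety": 0.9, "robustness": 0.95, "transparency": 0.3,
                   "accountability": 0.6, "fairness": 0.7}))  # 0.69
```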
One thing is becoming increasingly clear: for AI to gain mass adoption, it needs to be trusted by governments, enterprises and the users directly or indirectly affected by its applications. A failed AI program can lead not only to customer churn, reduced profitability and litigation, but also to reputational damage and regulatory scrutiny.
For instance, when a Tesla car crashed into a stationary police car, it was hard to explain why the vehicle suddenly behaved that way. Similarly, when Amazon’s recruitment engine was found to be gender-biased, the company struggled to explain why the algorithm worked as it did. Examples like these highlight that AI systems, despite all efforts, have largely remained black boxes.
Building trust
Given all these challenges, how do we make AI systems more trustworthy? Participants in the Catalyst plan to address trust issues in AI systems by building a comprehensive framework to measure trust. They have chosen three use cases to achieve this:
Churn prediction
This use case is championed by Ncell Axiata and demonstrated by Subex.
Churn prediction is a common machine learning use case. It is critical for a business to understand why and when customers are likely to churn. While the overall purpose of a robust churn prediction system is to enable proactive customer engagement and problem resolution, many operators instead rely on sporadic offer recommendations as a retention strategy. This is not only ineffective, but can drive customers away for good. Our goal is to build a trustworthy churn prediction model that is not only robust in predicting possible churners, but also helps illustrate the AI trust framework.
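As a rough illustration of what such a model might look like, the sketch below trains a classifier on a handful of hypothetical subscriber features and uses permutation importance to explain which inputs drive its predictions, one simple step toward the transparency the trust framework calls for. The features, data and model choice are assumptions for demonstration, not the team's actual implementation:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical subscriber features; real CSP churn models use many more signals.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 72, 1000),
    "monthly_spend": rng.uniform(5, 80, 1000),
    "support_calls": rng.poisson(1.5, 1000),
    "churned": rng.binomial(1, 0.2, 1000),
})
X, y = df.drop(columns="churned"), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance is a model-agnostic explanation technique: it measures
# how much test-set performance drops when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```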
Maintaining trust in real-time analytics
This use case aims to develop and deploy models in a real-time analytics framework, executing real-time analytics use cases with the assurance of robust and equitable outcomes at every stage of the model lifecycle.
The Catalyst will use Amazon SageMaker Clarify, drawing on a suite of metrics to balance traditional measures of model performance and stability against measures of bias and fairness. Model deployment will be demonstrated using the Right Time Analytics (RTA) framework, bringing together the best of real-time and batch analytics to respond instantly to customer, device and network events with appropriate and fair modelling outcomes.
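For readers unfamiliar with Clarify, the following sketch shows roughly how a bias report is requested through the SageMaker Python SDK. The S3 paths, column names, sensitive facet and model name are placeholders, and the configuration is an assumption rather than the Catalyst's actual setup:

```python
from sagemaker import Session, clarify

session = Session()
processor = clarify.SageMakerClarifyProcessor(
    role="<execution-role-arn>",          # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://<bucket>/churn/train.csv",   # placeholder paths
    s3_output_path="s3://<bucket>/churn/clarify-output",
    label="churned",
    headers=["tenure_months", "monthly_spend", "support_calls",
             "region", "churned"],
    dataset_type="text/csv",
)

# Treat one region as the sensitive facet for the fairness check; the facet
# actually monitored by the Catalyst team is not public.
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="region",
    facet_values_or_threshold=["north"],
)

model_config = clarify.ModelConfig(
    model_name="<deployed-churn-model>",  # placeholder
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)

# Produces pre-training metrics (e.g. class imbalance) and post-training
# metrics (e.g. disparate impact) in a single report.
processor.run_bias(
    data_config=data_config,
    bias_config=bias_config,
    model_config=model_config,
    pre_training_methods="all",
    post_training_methods="all",
)
```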
Credit score
The objective of this use case is to improve affordability in order to drive 4G smartphone conversion and data adoption, which increases customer loyalty and stickiness for CSP Dialog.
The project offers a micro-loan facility that lets existing Dialog customers purchase devices on an installment scheme. Target customers are selected based on the credit score output by the model, combined with rule-based logic from the business.
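A minimal sketch of how a model-generated credit score might be combined with rule-based business logic to gate a device-loan offer is shown below. The thresholds and rules are illustrative assumptions, not Dialog's actual policy:

```python
# Hypothetical eligibility gate: the model's credit score is combined with
# simple business rules before a device-loan offer is made.
MIN_SCORE = 0.65          # assumed model-score threshold
MIN_TENURE_MONTHS = 6     # assumed minimum customer tenure

def eligible_for_device_loan(credit_score: float,
                             tenure_months: int,
                             has_overdue_bills: bool) -> bool:
    """Model output plus rule-based logic, as described for the use case."""
    if has_overdue_bills:                  # hard business rule
        return False
    if tenure_months < MIN_TENURE_MONTHS:  # only established customers
        return False
    return credit_score >= MIN_SCORE       # model-driven threshold

print(eligible_for_device_loan(0.72, tenure_months=14, has_overdue_bills=False))  # True
```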
The Catalyst also intends to use TM Forum assets in its work, principally IG1238, the AI Canvas, a prescriptive template for assessing how an AI solution to a problem would perform in a production environment; IG1190, which relates to the TM Forum work stream “AIOps – Redesigning Operations Processes”; and GB1021, a series of AI checklists to support practitioners in the safe and effective deployment of AI systems at scale.
In summary, the Catalyst team believes that ethical values need to be integrated into the applications and processes we design, build and use. Only then can we create AI systems that people can trust.