
How CSPs can use ethical AI to support mental wellbeing

The Promoting sustainable mental health and diversity in a trusted AI domain - Phase III Catalyst has created a trust measurement system for AI, demonstrating how ethical AI algorithms can improve the mental wellbeing of people subject to distress, addiction and natural disasters

Ryan Andrew
30 Oct 2023

Commercial context

Ethical AI is more often discussed than invested in. As AI veteran Ian Hogarth pointed out earlier this year, companies invest far more in research on AI capabilities than on alignment with human values. In part, this is because many see a sharp delineation between the ROI of the two research areas, with capability regarded as the far greater commercial priority.

Yet such an approach underestimates growing consumer demand for ethical AI: AI that is more transparent, better regulated, and designed not just for its smartness but for its ability to genuinely help the full range of people. Perhaps more importantly, people are increasingly concerned about the role of AI in daily life. Data from Pew Research Center indicates that 52% of US citizens feel more concerned than excited about the increased use of AI, compared with just 10% who said the reverse. Such data points to a step-change in how the market will favor companies that use AI ethically over those that do not, meaning organizations that break from the current paradigm have a real opportunity to strengthen their brand in the long term.

The solution and pilot applications

This is where the Promoting sustainable mental health and diversity in a trusted AI domain – Phase III Catalyst comes in. Building on the achievements of the first two phases of this Catalyst, this iteration was designed to help CSPs provide services that are fair, ethical, and transparent, nurturing trust and promoting the welfare of their users. This takes the form of a trust measurement system that ensures AI models and algorithms are built, delivered, and maintained to be inclusive, responsible, and ethical. The system is based on continuous assessment of qualitative and quantitative factors throughout the lifecycle of an AI solution.
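The article does not publish the Catalyst's scoring formula, but a trust measurement system of this kind can be sketched as a weighted aggregate of per-factor assessments collected across the AI lifecycle. The factor names and weights below are illustrative assumptions, not the Catalyst's actual metric:

```python
def trust_score(factors: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of per-factor scores, each normalized to [0, 1].

    `factors` holds the latest assessment per dimension (qualitative
    reviews mapped to a numeric scale, quantitative metrics normalized);
    `weights` reflects how much each dimension matters to the operator.
    """
    total_weight = sum(weights[name] for name in factors)
    return sum(factors[name] * weights[name] for name in factors) / total_weight


# Hypothetical assessment of one AI model at one lifecycle checkpoint.
assessment = {"fairness": 1.0, "transparency": 0.5}
weighting = {"fairness": 1.0, "transparency": 1.0}
score = trust_score(assessment, weighting)  # equal weights -> simple mean
```

Recomputing the score at each lifecycle stage (design, delivery, maintenance) is what makes the assessment continuous rather than a one-off audit.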

In so doing, the project demonstrated how intent-based AI can be used ethically in two practical use cases: fair credit scoring for those affected by poor mental health, and disaster response. To support both applications, the project team used the TM Forum assets IG1238 AI Canvas and IG1232 AI Model Data Sheet Specification to understand the psychological and sociological contexts customers may face, and how to support them. Both applications are built around the existing principle of Grade of Service (GoS), a critical measure of service quality in telecommunication environments that quantifies the likelihood of a call being blocked or delayed beyond an acceptable limit.
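For blocking probability, the classical GoS calculation is the Erlang B formula, which gives the chance a call is blocked for a given offered traffic load and number of channels. A minimal sketch using the standard recursion (the article does not specify the team's exact model, so this is the textbook formula, not the Catalyst's implementation):

```python
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Blocking probability B(E, m) via the iterative Erlang B recursion.

    B(E, 0) = 1, and B(E, k) = E*B(E, k-1) / (k + E*B(E, k-1)),
    which avoids the large factorials of the closed-form expression.
    """
    blocking = 1.0
    for k in range(1, channels + 1):
        blocking = (traffic_erlangs * blocking) / (k + traffic_erlangs * blocking)
    return blocking


# 2 erlangs of offered traffic onto 2 channels -> 40% blocking.
print(erlang_b(2.0, 2))  # 0.4
```

Dimensioning a network to a target GoS means choosing the smallest channel count for which this probability drops below the agreed limit.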

As CSPs adopt more sophisticated and accurate means of credit scoring their customers, the Catalyst team believes they could use AI to identify vulnerable customers and tailor how those customers are treated. Once identified, the solution recommends and applies intervention strategies for vulnerable customers, with a focus on gambling addiction and fraud vulnerabilities. The second application is designed to support customers struck by an abrupt and severe change of circumstance, such as floods or storms. Where such events disrupt physical network infrastructure, such as towers and base stations, AI can prioritize network traffic: emergency services are given network priority and, where necessary, heavy data consumption is restricted so that everyone affected by the disaster can communicate.
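In its simplest form, the traffic prioritization described above is a greedy allocation that serves sessions in priority order until the surviving capacity is exhausted, capping lower-priority demand rather than dropping it outright. The class names and priority scheme below are illustrative assumptions, not the Catalyst's design:

```python
from dataclasses import dataclass, field


@dataclass(order=True)
class Session:
    priority: int                              # 0 = emergency services; higher = less critical
    demand_mbps: float = field(compare=False)  # requested bandwidth
    session_id: str = field(compare=False)


def allocate(capacity_mbps: float, sessions: list[Session]) -> dict[str, float]:
    """Grant bandwidth in strict priority order until capacity runs out."""
    granted: dict[str, float] = {}
    remaining = capacity_mbps
    for session in sorted(sessions):  # sorts by priority only
        grant = min(session.demand_mbps, remaining)
        granted[session.session_id] = grant
        remaining -= grant
    return granted


# A degraded cell with 10 Mbps left: the ambulance link is served in full,
# consumer traffic is throttled to whatever remains.
grants = allocate(10.0, [
    Session(1, 8.0, "consumer"),
    Session(0, 6.0, "ambulance"),
])
```

A production scheduler would add fairness within a priority class and a guaranteed minimum per user, but strict priority is the core idea of giving emergency services the network first.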

Wider value

The Catalyst shines as an example of collaboration, innovation, and responsible application of technology. It brought together a diverse set of participants, each with a key contribution. In this sense the Catalyst is a blueprint for how the industry can engage with the wider ecosystem to understand the needs of customers – particularly those with vulnerabilities – more clearly. The Catalyst also sets an example for how the tech sector can use its capabilities to achieve UN Sustainability Goals – in this case ‘good health and well-being’, ‘decent work and economic growth’, ‘industry, innovation and infrastructure’ and ‘reduced inequalities’. Although only applied to two specific applications, the longer-term value of the Catalyst is to expand this use of AI to other use cases that support broader human needs.

According to Hamim Hamid, Manager & Data Engineer at Robi Axiata and Project Champion, “this Catalyst is a beacon for those seeking to understand how to implement improved decision-making – especially for humanitarian causes. With AI-based flood relief analytics and decisioning, we are much better equipped to make informed decisions about how to allocate resources and prioritize interventions. This also goes a long way in building trust for AI implementations elsewhere - what’s more, when people trust the analytics used to make decisions about flood relief, they are more likely to accept those decisions and cooperate with interventions we may have to make. The project can also be used to inform disaster risk reduction strategies at the national and international levels as well, helping to reduce the vulnerability of communities to flooding and other hazards.

Overall, we are delighted with the outcome of the Catalyst because it has allowed us to play a genuine role in improving the lives of people who need help the most, and who may otherwise not be reached.”

Catalyst: Promoting sustainable mental health and diversity in a trusted AI domain - Phase III