The ‘Mighty Minions: Unleashing domain-specific GenAI via SLMs - Phase III’ Catalyst demonstrates how telcos can automate high-volume support calls using agentic AI powered by small language models (SLMs). It offers a low-latency, on-premises solution designed for real-time decision-making, data control, and regulatory alignment.
How to automate call center queries with context-aware, agentic AI
Commercial context
Telecom call centers are under strain from rising volumes of routine customer calls such as billing inquiries, service faults, and upgrade requests. These low-complexity issues account for a large share of interactions, yet the limits of legacy systems mean they still require human handling.
Alternatives such as traditional chatbots and scripted flows don’t support multi-turn dialogue or access to real-time customer data. As a result, contact centers face growing backlogs, inconsistent resolution quality, and high support costs.
CSPs need a solution that reduces cost-to-serve while also protecting customer data. Most genAI tools are cloud-based and resource-heavy, which raises concerns around privacy, latency, and scalability. What telcos need is a low-footprint AI solution that handles domain-specific tasks reliably without sacrificing data control or operational flexibility.
The solution
The 'Mighty Minions: Unleashing domain-specific GenAI via SLMs - Phase III' Catalyst seeks to provide an alternative approach that industry can follow. The project's approach is based on agentic AI powered by small language models (SLMs). These agents don’t rely on scripts. Rather, they understand context, make decisions, and handle full customer interactions with minimal human input.
The initial deployment focuses on automating high-volume call types, such as billing disputes, network issues, and upgrade requests. An SLM-based agent handles these calls by accessing real-time data, making decisions, and generating dynamic responses.
The system integrates tightly with existing OSS/BSS and CRM infrastructure via TM Forum Open APIs. This allows the AI to retrieve customer records, diagnose network states, trigger workflows, and update account data during live interactions. Rather than following predefined scripts, the agentic AI dynamically generates spoken or text responses based on live system data and conversation history.
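As a rough illustration of this integration pattern, an agent’s tool calls can be resolved to TM Forum Open API endpoints. The Python sketch below is hypothetical: the routing table, tool names, and base URL are assumptions for illustration, while the resource paths follow the published Open API URL conventions (e.g. TMF629 Customer Management).

```python
# Illustrative routing table from agent "tools" to TM Forum Open API
# resource paths. Paths follow published Open API conventions; the
# table itself and the tool names are assumptions for this sketch.
OPEN_API_ROUTES = {
    "get_customer": "tmf-api/customerManagement/v4/customer/{id}",
    "get_bill":     "tmf-api/customerBillManagement/v4/customerBill/{id}",
    "report_fault": "tmf-api/serviceProblemManagement/v4/serviceProblem",
}

def build_request_url(base_url: str, tool: str, **params) -> str:
    """Resolve an agent tool call to a concrete Open API URL."""
    path = OPEN_API_ROUTES[tool].format(**params)
    return f"{base_url.rstrip('/')}/{path}"
```

In a live deployment, the agent would issue authenticated HTTP requests against URLs like these and feed the JSON responses back into its conversational context.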
Each AI agent includes a built-in reasoning layer. It evaluates sentiment, tracks task progress, and assesses resolution confidence. If the agent determines that the issue exceeds its scope, it escalates the case, with full conversational context, to a human agent.
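The escalation decision described above might be sketched as follows. All names, fields, and thresholds here are hypothetical; the Catalyst’s actual reasoning layer is not published.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the reasoning layer's escalation check.
@dataclass
class TurnState:
    sentiment: float               # -1.0 (angry) .. 1.0 (satisfied)
    confidence: float              # resolution confidence, 0 .. 1
    turns_without_progress: int    # stalled-dialogue counter
    history: list = field(default_factory=list)

def should_escalate(state: TurnState,
                    min_confidence: float = 0.6,
                    max_stalled_turns: int = 3) -> bool:
    """Escalate when the agent is unsure, the customer is upset,
    or the dialogue has stalled."""
    return (state.confidence < min_confidence
            or state.sentiment < -0.5
            or state.turns_without_progress >= max_stalled_turns)

def escalate(state: TurnState) -> dict:
    """Hand off to a human agent with full conversational context."""
    return {"route": "human_agent", "context": state.history}
```

The key design point is the last line: the handoff carries the full conversation history, so the customer never has to repeat themselves.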
The team chose SLMs for their lightweight architecture: the small footprint reduces latency and compute costs while enabling on-premises or edge deployment, which is critical for data sovereignty and compliance in telecom environments with strict data controls. Teams can fine-tune these models on domain-specific language and workflows to improve intent recognition and dialogue relevance.
The system includes a closed-loop learning layer aligned with TM Forum’s GB1064 guidelines. As a result, customer interactions are continuously fed back into the model lifecycle. This improves both intent classification and decision strategy over time.
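One minimal way to picture such a feedback loop is to treat each completed interaction as a labeled fine-tuning example and retrain once a batch has accumulated. The schema, field names, and threshold below are illustrative assumptions, not details taken from GB1064 or the Catalyst itself.

```python
# Illustrative closed-loop feedback: completed interactions become
# fine-tuning examples once enough labeled outcomes accumulate.
# Field names and the batch threshold are assumptions for this sketch.
feedback_queue: list[dict] = []

def record_interaction(transcript: list[str], predicted_intent: str,
                       resolved: bool) -> None:
    """Log a finished call with its outcome for later fine-tuning."""
    feedback_queue.append({
        "messages": transcript,
        "label": predicted_intent,
        "outcome": "resolved" if resolved else "escalated",
    })

def ready_for_retraining(min_examples: int = 1000) -> bool:
    """Trigger a fine-tuning run once a batch of examples has accrued."""
    return len(feedback_queue) >= min_examples
```

Over time, the "escalated" examples are the most valuable: they mark exactly where intent classification or decision strategy fell short.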
To ensure interoperability and future extensibility, the Catalyst applies the AI Agent Specification Template (IG1412). This enforces consistency by clearly defining agent capabilities, interfaces, and behaviours to ensure they function as peers within the broader ODA component ecosystem.
Wider application and value
Through this project and its agentic AI underpinnings, CSPs can reduce the number of basic interactions handled by humans. This frees up time, lowers costs, and improves staffing efficiency. Many telcos could shift 40–60% of queries to AI without harming the customer experience. Prasad Jayasinghe, Head of Software Engineering at Ezecom, one of the Project Champions, explains: "For the wider telecommunications industry, this project sets a blueprint for scalable, low-latency agentic AI adoption using small language models. Given the regulatory sensitivity around telecom data, the ability to deploy lightweight, domain-finetuned models on-premises opens up new possibilities for secure genAI implementations."
Customers get faster, more reliable support too, even during peak hours or when live agents aren’t available. Because the AI understands telecom-specific language and accesses live data, its responses feel relevant and accurate. For the industry, this project sets an important precedent. It shows how telecoms can adopt genAI in a scalable, controlled way. Using SLMs avoids the risks of large, general-purpose models and makes on-premises deployment practical.
This architecture also translates well across industries. Sectors like banking, insurance, and utilities share similar service challenges. With minimal changes, they could apply this solution to their own customer support processes. Longer term, this Catalyst supports a shift in how we think about customer service. It shows how AI can complement human roles. It does so by handling repetitive tasks, while human agents focus on complex issues and value-added services. Together, they create a hybrid model that boosts quality, efficiency, and trust.