DTW24-Ignite: DT and BT explain how to revolutionize CX and refactor legacy code using GenAI

Chief architects at Deutsche Telekom (DT) and BT Group shared at DTW24-Ignite how they are putting GenAI and LLMs to work in different ways.

Dawn Bushaus
27 Jun 2024

Deutsche Telekom and BT are putting GenAI and large language models (LLMs) to work in conjunction with the TM Forum Open Digital Architecture (ODA), with goals of transforming interactions with customers and refactoring legacy code into microservices.

Speaking during the Global Architecture Forum (GAF) session on Tuesday at DTW24-Ignite, Shekhar Kulkarni, Chief IT Architect at Deutsche Telekom (DT), and Campbell McClean, the incoming Chief Architect at BT Group, explained how their companies are using GenAI and LLMs in different ways.

A key initiative at DT focuses on integrating LLMs with the company’s Magenta IT Reference Architecture (MIRA), which is based on ODA, to improve customer experience (CX). Indeed, DT has already launched AI-aided billing capabilities in Germany with promising results.

BT, meanwhile, is embarking on an effort to use GenAI to analyze and refactor legacy code, such as COBOL and Java, into microservices aligned with the ODA.

DT’s LLM Operating System

DT is aiming to provide highly personalized and contextual interactions with customers by combining the power of LLMs with DT’s knowledge management systems and Magenta APIs. The goal is to provide a “much more dynamic experience” for customers, according to Kulkarni.

“What that means is that we really need to look at AI, particularly GenAI, as a core of our architecture,” he said. “And this is where we started working in the last six to nine months, and we created something we call LMOS – our Large Language Model Operating System.”

While DT is integrating with OpenAI to improve its Frag Magenta customer-facing chatbot (in German, the word “frag” means “ask”), the company is also part of the Global Telecom AI Alliance, which is “fine-tuning open-source LLMs to really cater for our needs”, according to Kulkarni.

“So, from [an] architecture point of view, we need to abstract that – it doesn’t really matter whether we use OpenAI or some other open source [LLM] or closed source, or in the future maybe our own,” he explained. “We need to have the ability to really replace and effectively use – depending on the use cases – the underlying LLM.”
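The abstraction Kulkarni describes can be pictured as a common provider interface that the rest of the application codes against. The sketch below is illustrative only – the class and method names are invented for this example and are not DT’s actual LMOS code.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface so the application never depends on one vendor."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the OpenAI API here.
        return f"[openai] response to: {prompt}"


class OpenSourceProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would call a self-hosted, fine-tuned model.
        return f"[oss] response to: {prompt}"


def answer(provider: LLMProvider, prompt: str) -> str:
    # Callers depend only on the abstraction, so swapping the
    # underlying LLM becomes a configuration change, not a rewrite.
    return provider.complete(prompt)
```

Because callers only see `LLMProvider`, replacing OpenAI with a fine-tuned open-source model – or, in future, an in-house one – does not ripple through the rest of the architecture.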

Using LMOS, DT has developed an open-source AI model augmentation tool called Agents ReaCtor, or ARC, which is the orchestration engine powering the LLM agents supporting Frag Magenta. This is important because LLMs are not able to interact directly with support systems – a billing system, for example.

“Now, the LLM layer can decide, based on your prompt and training, that it needs to make a call to the billing API because the customer is saying that his bill is, let’s say, 20 euros larger than last month,” Kulkarni explained. “The LLM can have underlying capabilities integrating and getting the information from your knowledge base, knowledge management systems, and also from your APIs.”
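The tool-selection step Kulkarni describes – the LLM layer recognizing that a billing query requires a billing API call – can be illustrated with a toy router. In a real agent framework the model itself chooses the tool; the keyword check and the `fetch_bill` stub below are stand-ins invented for this sketch, not part of ARC.

```python
def fetch_bill(customer_id: str) -> dict:
    # Stand-in for a real call to the billing support system.
    return {"customer_id": customer_id, "amount_eur": 60, "previous_eur": 40}


# Registry of tools the agent layer can invoke on the LLM's behalf.
TOOLS = {"billing": fetch_bill}


def route(prompt: str) -> str:
    # In an agent framework the LLM picks the tool from its description;
    # a keyword check stands in for that decision here.
    if "bill" in prompt.lower():
        bill = TOOLS["billing"]("C-123")
        delta = bill["amount_eur"] - bill["previous_eur"]
        return f"Your bill is {delta} euros higher than last month."
    return "Let me check our knowledge base."
```

The key point is that the LLM never talks to the billing system directly: the orchestration layer mediates every call, which is exactly the gap ARC is built to fill.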

Kulkarni summed it up: “The fundamental idea here is that from an architecture point of view, LLMs are going to be a critical part of your architecture – at least in our case – and they are going to really drive the [customer] experience.”

BT aims to refactor code

Like Deutsche Telekom, BT is also using GenAI and LLMs to improve customer experience. But McClean highlighted a different use case that the company is just starting to explore: the potential for GenAI to achieve up to 80% accuracy in generating microservices from well-structured legacy code like COBOL. (AI’s accuracy on newer Java code isn’t as good but can be improved, according to McClean.)

By using ODA as its foundation, BT aims to deliver everything as code – security, policy, deployment and API exposure – enabling engineers to focus on creative coding while leveraging AI to rewrite existing applications to be cloud native. The company is also working on establishing a single data fabric to underpin its digital platforms.
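One way to picture “everything as code” is a declarative service descriptor that carries security, policy, deployment and API exposure alongside the service itself, which a canvas can then validate automatically. The fields below are illustrative assumptions, not BT’s actual schema; TMF620 is TM Forum’s Product Catalog Management Open API.

```python
# Hypothetical service descriptor: cross-cutting concerns declared
# as data rather than hand-configured per deployment.
SERVICE_DESCRIPTOR = {
    "name": "product-catalog",
    "api": {"spec": "TMF620", "expose": True},
    "security": {"auth": "oauth2", "roles": ["catalog-reader"]},
    "policy": {"data_residency": "eu"},
    "deployment": {"runtime": "kubernetes", "replicas": 3},
}


def validate(descriptor: dict) -> bool:
    # A canvas would reject any service whose descriptor omits one of
    # the required cross-cutting concerns, keeping engineers focused
    # on the service logic itself.
    required = {"api", "security", "policy", "deployment"}
    return required.issubset(descriptor)
```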

“We’re going to take large chunks of code – you can call it an application or whatever you want – and … we’re going to run it through human-assisted GenAI,” McClean said.
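The human-assisted loop McClean describes can be sketched as a generate-review cycle with an engineer as the gate. This is a minimal illustration under stated assumptions: the function names are invented, and a real pipeline would feed the reviewer’s comments back into the next prompt rather than simply regenerating.

```python
from typing import Callable


def refactor_with_review(
    legacy_source: str,
    generate: Callable[[str], str],
    approve: Callable[[str], bool],
) -> str:
    """Human-assisted GenAI loop: the model proposes a microservice
    candidate and an engineer accepts it or requests another pass."""
    proposal = generate(legacy_source)
    while not approve(proposal):
        # In practice the reviewer's feedback would shape the next
        # prompt; here we simply ask the model again.
        proposal = generate(legacy_source)
    return proposal
```

The 80% accuracy figure McClean cites for well-structured COBOL is what makes this loop economical: the engineer reviews and corrects, rather than rewriting from scratch.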

He explained that BT has already demonstrated real-time generation of a product catalog, using GenAI, ODA and a product framework to create a CPQ (configure, price, quote) capability. “We want to do this at scale and see if we can radically change the time it’s going to take us to get from today until tomorrow,” he said.

“I will say it’s a bit of a punt in the dark, but what we want to do is to use ODA as the framework against which we build everything,” McClean added. “We cannot be in the game of taking legacy, sticking it in a Kubernetes container and putting it on GCP or AWS or Azure. It doesn’t work – the numbers don’t work.”

He emphasized the need for an automated, cloud-native approach to leverage the non-functional capabilities of the hyperscalers’ platforms – for example, services like identity management, API management and observability for monitoring, which are part of the ODA reference Canvas.

“The cost of continuing human support on a hyperscaler for us doesn’t work. I’m pretty good at manipulating numbers – I can’t make those numbers work,” McClean concluded. “And even if we can automate this, the business case is still pretty tight. But we think it’s worth doing.”