Data Analytics & AI

Telefónica’s Horacio Goldenberg on AI and what’s next for OSS/BSS

Communications service providers’ businesses are changing rapidly as they embrace virtualization and cloud, and ponder how to use artificial intelligence (AI) and machine learning. The changes are extreme, and they are forcing operators like Telefónica to completely rethink business models and how to architect operational and business support systems (OSS/BSS) for the future. At TM Forum Action Week in Vancouver, I sat down with Horacio Goldenberg, Chief Architect, Telefónica Global, to talk about AI and what’s next for OSS/BSS.

DB: Do CSPs need to have an overall AI architecture in mind as they experiment with the technology? Many operators are saying you need to take it slow but at the same time have a big picture view. Do you agree?

HG: Absolutely, I agree. We have quite an ambitious idea around AI. There are a lot of different ways to look at AI. There are some areas where we can get benefits almost immediately in customer-facing applications, specifically the call center – for example, using chatbots or machine-based voice communications. Voice is not as reliable yet, but it will get there. Chatbots are much more reliable, and there are ways to drive voice conversations into chatbots.

But what we are really missing – and I don’t know if anybody knows – is how we would architect the use of AI. Do we have centralized AI, just one brain that controls all customer experiences and addresses internal operational needs? And will it also take care of network needs? Or will we have specialized AIs and somehow connect them when required?

In the customer-facing area, which is what our project is targeting, the question is whether we inject AI into the different steps of our current processes or whether we should rearchitect everything and have the customer journey determined by AI instead of by the processes. Looking further ahead, I’m trying to imagine what would be the best way to design and introduce AI into our operations. I have many questions but few answers.

There’s also a lot of hype, so we have to be careful. That’s why we’re saying we should take small steps, look at the low-hanging fruit and quick wins, and start proving the business case. Vendors try to excite the market by saying AI is going to solve all of your problems. Some core AI platforms are too broad; they need a lot of adaptation and training if you want to apply them to a specific use case. In taking these small steps, we should be driven by specific use cases, either customer-facing or internal, and go from there: figure out which are the most relevant, then match that to what is feasible and find the right balance.

DB: Tell me a bit more about the Aura project that was announced at MWC.

HG: Aura is a global project from a design point of view, but execution at runtime will happen locally. It’s quite ambitious. We want to drive the customer experience through Aura and facilitate frictionless interaction while taking good care of privacy, and alert our customers to what they’re exposing as they engage with the digital world. Aura is the result of a lot of internal effort by expert software developers.

DB: Is it a bigger step to implement machine learning in operations, or is it just different from using it for customer experience?

HG: When we use machine learning to address customer experience, we are impacting our customers directly in terms of the experience, and it’s an opportunity for us to bring more revenues to the company. We are providing the right advice to the customer in a timely manner, in a contextual manner, providing the next best action to that customer in a frictionless way. So that is very attractive in terms of defining our priorities.

But we are also looking at how to improve the customer service experience while at the same time reducing our operational costs (in the call center, for example). On the network side, we’re looking at how to improve individual customer experience and at the next best investment – trying to define where to invest and do predictive maintenance.

There are other opportunities where we’re not applying AI yet, but we are using regression BI [business intelligence] modeling, fraud detection and churn detection. These are all candidates for machine learning. Fraud is something that is mutating all the time. You solve one issue and people find a way to commit fraud in a different form – you have to keep moving and learning, so that would be a good use case.

DB: What’s the primary driver for adopting AI and machine learning? Is it more about improving customer experience or reducing operational expense?

HG: That’s a very difficult question. It’s both. The complexities of the digital world are growing exponentially, so our transactions will grow exponentially. When we virtualize the network, it’s going to be very difficult for technicians or engineers to sort out what’s going on – it’s beyond human capabilities – so in some cases, there is no other choice. It’s also driven by competition and is about becoming leaner and improving customer experience. I don’t think these are dissociated.

DB: What about 5G? Do we need to think now about how automation, AI and machine learning will be used?

HG: It’s not a long way off – it will happen in 2020 or 2021 – so we have to be prepared. We need to be prepared in all of our end-to-end processes so that we can really exploit the benefits of 5G. We are transforming our OSS/BSS. Those are hard transformation projects because you are changing the way everything operates, but we are taking steps to modernize our OSS/BSS infrastructure so that when 5G arrives, we are ready for it.

DB: What was your impression of the OSS/BSS of the future workshop at Action Week? Are we on the right track with the Open Digital Architecture (ODA)?

HG: I think we’re on the right track. We have to land it a little bit more – it’s still a little too high level; we have to make it a little more concrete. I think the TM Forum is the right place to do that piece in a collaborative way. These are the first steps. The session was very productive in terms of validating some of our thoughts, and we also identified areas where we were not clear enough – areas we need to ground in a lot more detail.

DB: I thought it was interesting that the session highlighted that there really are no agreed definitions of OSS and BSS – what they are and where they overlap.

HG: It’s an arbitrary sort of decision. There’s no definition from the TM Forum, as the session highlighted; it’s always just been how the different telco organizations treat OSS and BSS. OSS was network and BSS was IT, but everyone drew different perimeters around them. Even within Telefónica, different operating companies have different boundaries. It’s really arbitrary.

BSS is customer-facing but also partner-facing. BSS has to be able to support platforms – ecosystems – and different business models. It has to address OTT [over-the-top] partnering with digital providers to deliver services that are not based on the network. It’s becoming more important for BSS to be partner-facing. OSS is more related to network services. With virtualization, of course, it has a more relevant role to play in allowing for innovation in network services.

DB: Do telcos have to agree on how the systems are represented, or will there be enough abstraction that it won’t matter where telcos draw the arbitrary lines? How much standardization is needed?

HG: There is a need to address common problems in a common way. There are enough open source initiatives focusing on OSS. I feel we are OK there; we are not too fragmented. But there is definitely a need to collaborate; otherwise we will end up with per-vendor silos, and that’s exactly what operators don’t want. We want an open platform where we can virtualize functions or services in an agile way from whoever provides them. All CSPs have the same ambition there.

DB: What do you need from suppliers to make that happen faster?

HG: Virtualization is disruptive for the traditional vendors. They can be defensive or they can realize being defensive won’t work. Some vendors were initially very reluctant to participate in open source, but I think now it’s clear that it’s unstoppable. There are opportunities for them to monetize their solutions on top of open source. They need to figure out how to take advantage of the collaborative R&D in the open source community.

DB: Do you think collaboration is also necessary among open source groups and standards bodies?

HG: There is a risk that there are too many open source initiatives and that will produce fragmentation, but I think the market will sort it out. You see a lot of fragmentation in the open source tools for cloud, for example. But with these big, ambitious projects, the market will sort it out. ONAP is the merger of two big ones, so the market has already started to take care of it.

DB: Orange’s Laurent Leboucher has suggested that it would be advantageous for open source groups to work together using Agile principles, so that they’re not just passing documents back and forth for comment but actually working together. Would that be useful?

HG: Yes, it would allow for some coordination and interoperability. It could eliminate overlaps and conflicting data schemas. But there is also the risk of inhibiting innovation. There has to be a balance.




About The Author

Managing Editor

Dawn Bushaus began her career in technology journalism in 1989 at Telephony magazine, which means she’s been writing about networking for a quarter century. (She wishes she didn’t have to admit that because it probably gives you a good idea of how old she really is.) In 1996, Dawn joined a team of journalists to start a McGraw-Hill publication called tele.com, and in 2000, she helped a team at Ziff-Davis launch The Net Economy, where she held senior writing and editing positions. Prior to joining TM Forum, she worked as a freelance analyst for Heavy Reading.
