AT&T is to move most of its non-network applications to Microsoft Azure – a crucial vote of confidence in public cloud. Other CSPs will soon follow suit.
Telcos will embrace public cloud – they can’t afford not to
AT&T and Microsoft’s announcement last month that the telco is moving most of its non-network applications to the Azure platform is a crucial vote of confidence in public cloud. It’s just a matter of time before other communications service providers (CSPs) follow suit and embrace it for their IT operations. In the first of this two-part series, TM Forum’s George Glass, formerly Chief Systems Architect at BT, explains why telcos shouldn’t hesitate. In the second part, he explains how TM Forum’s Open Digital Architecture can help them make the leap.

In July, AT&T and Microsoft announced a $2 billion, multiyear deal to collaborate on cloud, artificial intelligence (AI) and 5G. As part of the partnership agreement, Microsoft will be AT&T’s preferred cloud provider for non-network applications.
“AT&T is becoming a ‘public cloud first’ company by migrating most non-network workloads to the public cloud by 2024,” the company said in a joint statement with Microsoft. “That initiative will allow AT&T to focus on core network capabilities, accelerate innovation for its customers, and empower its workforce while optimizing costs.”
This is a big step for AT&T, and one that all telcos eventually must take. Embracing public cloud is not a question of ‘if’, but ‘when’. Many other industries are further ahead in adopting public cloud, largely because telcos are, fundamentally, IT companies. They built large data centers to run their many legacy applications, and as they modernized and virtualized their data center infrastructure, they continued to sell hosting services, becoming cloud providers almost by accident. However, while telcos may have spent tens or hundreds of millions of dollars on data center infrastructure and its management, public cloud providers such as Microsoft (Azure), Amazon (AWS), Google (GCP) and IBM (IBM Cloud), closely followed by Oracle and Alibaba, are spending billions to deliver their public cloud offerings.
‘Cloud-native’ is a term used to describe container-based environments: cloud-native technologies are used to develop applications built from services packaged in containers, deployed as microservices and managed on elastic infrastructure through Agile DevOps processes and continuous delivery workflows. Containers isolate an application and its dependencies into a self-contained unit that can run anywhere. Unlike virtual machines, which virtualize the underlying hardware, containers virtualize at the operating system level, so the host’s operating system kernel is shared with the other hosted applications.
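To make that concrete, below is a minimal sketch of the kind of service that gets packaged into a container image alongside its dependencies. It is a hypothetical example, written here in Python with Flask; the service name, endpoints and port are illustrative and not drawn from any CSP system.

```python
# customer_api.py - a minimal, hypothetical microservice. Packaged into a
# container image together with its dependencies (Python runtime, Flask),
# the same unit runs unchanged on a laptop, in a private data center or in
# any public cloud with a container runtime.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Orchestration platforms probe an endpoint like this to decide whether
    # the container is healthy and should keep receiving traffic.
    return jsonify(status="ok")

@app.route("/customers/<customer_id>")
def get_customer(customer_id):
    # Illustrative stub; a real service would call a datastore or another
    # microservice here.
    return jsonify(id=customer_id, status="active")

if __name__ == "__main__":
    # Bind to all interfaces so the service is reachable from outside the
    # container.
    app.run(host="0.0.0.0", port=8080)
```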
In a traditional IT environment, operations teams manually manage the allocation of infrastructure resources to applications. In a cloud-native environment, applications are deployed on infrastructure that abstracts the underlying compute, storage and networking primitives. Cloud-native platforms handle tasks such as scheduling, scaling, load balancing, service discovery and self-healing through software tools that manage containers, as the sketch below illustrates.
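As an illustration of what handing these tasks to the platform looks like in practice, the following sketch uses the Kubernetes Python client to declare a desired state – three replicas of a containerized service with explicit CPU and memory requests – and leaves scheduling, placement and restarts to the platform. It is a generic example under assumed names: the image, namespace and resource figures are placeholders, not details from AT&T’s or anyone else’s deployment.

```python
# deploy_customer_api.py - declare the desired state of a containerized
# service; the cloud-native platform decides where the containers run,
# restarts them on failure and spreads them across available capacity.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside a cluster

container = client.V1Container(
    name="customer-api",
    image="registry.example.com/customer-api:1.0",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=8080)],
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},  # what the service needs
        limits={"cpu": "500m", "memory": "512Mi"},    # the most it may use
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="customer-api"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired number of running copies
        selector=client.V1LabelSelector(match_labels={"app": "customer-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "customer-api"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The point of the declarative style is that the operations team states what should be running rather than allocating servers by hand; the platform continuously reconciles reality with that statement.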
Public cloud providers have invested heavily in the automated management of their cloud infrastructure, and CSPs can leverage that investment by moving to public cloud. The scale of public cloud makes the promise of compute or storage on demand a reality, which is especially powerful for applications that experience significant spikes in resource demand. That flexibility, scalability and speed of response are compelling compared with building your own infrastructure. Many telcos have implemented virtualized infrastructure within their data centers, and by packaging entire legacy applications into containers – a ‘lift and shift’ – they have been able to move off legacy infrastructure and cut costs. However, while this cloud-enabled approach can help in the transformation of the IT estate, it should really be reserved for legacy applications that are only likely to be around for a few more years and are not cost effective to rewrite as containerized services and microservices that exploit the full power of a cloud-native environment.
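On the spiky-demand point, elasticity in a cloud-native environment is typically expressed as policy rather than pre-provisioned capacity. Below is a minimal sketch, again using the Kubernetes Python client and the hypothetical deployment from the previous example; the replica bounds and CPU threshold are illustrative assumptions.

```python
# autoscale_customer_api.py - attach an autoscaling policy to the deployment;
# the platform adds replicas when average CPU use rises above the target and
# removes them again when the spike passes, so capacity follows demand.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="customer-api"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="customer-api"
        ),
        min_replicas=3,    # quiet-period floor
        max_replicas=30,   # ceiling for peak events
        target_cpu_utilization_percentage=60,  # scale out above 60% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

In a pay-per-use public cloud, the extra replicas only cost money while they exist; on dedicated infrastructure the same headroom would have to be bought and powered up front.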
Many organizations cite concerns about security and privacy as reasons for not moving to the public cloud, but these worries are being addressed. All public cloud providers can tell you where your data is located, based on the type of storage used and the geographical location of that storage. They will also allow you to control the format, structure and security of your data, including whether it is masked, anonymized or encrypted and where that takes place. Public cloud providers offer tools to manage access controls such as identity and access management, permissions and security credentials, and they provide high availability and disaster recovery as built-in capabilities.

Because of the scale of their operations, public cloud providers can offer ‘unlimited’ resources on demand, with a business model in which you only pay for what you use. Contrast that with most private cloud or dedicated infrastructure environments, where you must pre-provision infrastructure based on peak loads, disaster recovery or forecast growth. All the major public cloud providers now have a global footprint of data centers interconnected by their own secure private networks, with edge locations in nearly every country in the world, which addresses concerns about data center proximity and application response latency.

As CSPs build trust with public cloud providers, they will move applications to the public cloud, eliminating the need to build and run their own data centers. They will embrace public cloud because it is much less expensive, more scalable and easier to manage.
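To give a feel for the data controls described above, here is a minimal sketch using AWS’s boto3 SDK (the other major providers offer direct equivalents): it creates a storage bucket pinned to a chosen region, switches on default encryption with a customer-managed key, and blocks all public access. The bucket name, region and key ID are placeholders for the example, not a recommendation of any particular provider.

```python
# secure_bucket.py - pin data to a region, encrypt it by default and keep it
# private, using standard public cloud APIs (AWS shown; Azure and GCP offer
# equivalent controls).
import boto3

REGION = "eu-west-1"                       # choose where the data physically lives
BUCKET = "example-csp-customer-records"    # hypothetical bucket name
KMS_KEY_ID = "alias/example-csp-data-key"  # hypothetical customer-managed key

s3 = boto3.client("s3", region_name=REGION)

# Create the bucket in the chosen region only.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Encrypt every object at rest by default with the customer-managed key.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ID,
                }
            }
        ]
    },
)

# Block any form of public access to the data.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```

Who is allowed to call these APIs in the first place is itself governed by the provider’s identity and access management service, which is where the permissions and security credentials mentioned above are defined.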
The next installment in this series will look at how the TM Forum Open Digital Architecture can help CSPs and their suppliers evolve to embrace public cloud and cloud-native applications.