The Project Aura Catalyst demonstrates how AI and RAN can run together on a unified platform to improve network efficiency and support low-latency edge services.

A shared architecture for AI-enabled RAN and edge services
Commercial context
For years, bringing computing power nearer to the user has been seen as a way to improve everyday digital experiences. Yet many CSPs still struggle to meet the demand for intelligent services and quick decisions at the edge. AI workloads continue to rise, data volumes grow, and many applications now need a response in the moment. Even so, most networks rely on centralized platforms far from subscribers. This adds delay, increases cost, and limits the value operators can deliver. It also prevents the real-time insight that today’s markets expect.
RAN sites contain powerful compute resources, but these remain dedicated to radio functions. They cannot support the broader AI needs emerging across industries. Yet AI models grow more complex and place new strain on central systems. CSPs need a way to distribute intelligence without expanding hardware footprints or creating additional silos.
These pressures expose a structural gap. CSPs need a design that supports AI where it offers the most impact: at the edge, close to subscribers and enterprise environments. They also need a model that improves RAN performance while creating new revenue potential.
The Project Aura (AI + RAN) Catalyst addresses this by merging AI for RAN and AI on RAN in one architecture. It uses shared infrastructure to enhance efficiency, reduce cost, and enable edge-native services. This shifts the RAN from a cost center to a strategic platform for growth.
The solution
The project has developed a system that supports two intelligence streams running on shared, high-performance hardware. One stream improves RAN performance through AI-driven power saving, interference cancellation, and link adaptation. The other stream supports edge-native applications such as video analytics, drone orchestration, and location-based services. Both run on a single architecture that consolidates data pipelines, orchestration, and GPU acceleration.
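To make the AI-for-RAN stream concrete, the sketch below illustrates one of its use cases, power saving: cells whose forecast load stays below a threshold become candidates for sleep mode. The threshold values, cell identifiers, and the idea of acting on a load forecast are illustrative assumptions, not the project's actual algorithm.

```python
# Illustrative sketch of an AI-for-RAN power-saving policy: cells whose
# forecast load stays below a threshold are candidates for sleep mode.
# Threshold values and the forecast source are assumptions for illustration.

SLEEP_THRESHOLD = 0.15   # forecast PRB utilization below which a cell may sleep
MIN_ACTIVE_CELLS = 1     # always keep at least one cell awake for coverage

def select_sleep_cells(forecast_load: dict) -> list:
    """Return cell IDs that can enter sleep mode for the next window."""
    candidates = [cid for cid, load in forecast_load.items()
                  if load < SLEEP_THRESHOLD]
    active = len(forecast_load) - len(candidates)
    # Never sleep so many cells that coverage drops below the floor.
    while active < MIN_ACTIVE_CELLS and candidates:
        candidates.pop()
        active += 1
    return sorted(candidates)

forecast = {"cell-A": 0.42, "cell-B": 0.08, "cell-C": 0.11, "cell-D": 0.30}
print(select_sleep_cells(forecast))  # cells B and C fall below the threshold
```

In a live deployment, a decision like this would be driven by a trained load-forecasting model and validated against coverage KPIs before any cell is put to sleep.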
The platform blends OSS and BSS logic with CAMARA APIs, enabling consistent integration with operators' existing systems. Its hybrid-cloud model allows AI workloads to run beside RAN software without reducing radio performance. This improves hardware utilization and reduces the need for dedicated edge devices.
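As a sketch of how an edge application might use a CAMARA-style interface, the snippet below assembles a Quality-on-Demand session request of the kind an analytics workload could submit to secure low-latency connectivity. The endpoint URL, field names, and profile value follow the general shape of the CAMARA QoD API but are illustrative assumptions, not the project's actual integration.

```python
import json

# Hypothetical CAMARA-style Quality-on-Demand session request.
# The endpoint, field names, and qosProfile value are illustrative only.
QOD_ENDPOINT = "https://api.example-csp.com/quality-on-demand/v1/sessions"

def build_qod_request(device_ip: str, app_server_ip: str,
                      qos_profile: str = "QOS_LOW_LATENCY",
                      duration_s: int = 3600) -> dict:
    """Assemble the JSON body for a low-latency QoD session."""
    return {
        "device": {"ipv4Address": {"publicAddress": device_ip}},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": qos_profile,   # e.g. a low-latency profile for video analytics
        "duration": duration_s,      # requested session lifetime in seconds
    }

payload = build_qod_request("198.51.100.24", "203.0.113.10")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the operator's QoD endpoint; the point of the CAMARA approach is that the same request works across operators without bespoke integration.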
TM Forum assets have guided the design. GB1002 shaped the structure of AI use cases. IG1179A informed the approach to slice-enabled monetization. IG1223 provided a framework for evaluating return on investment across multiple sectors. These assets ensured alignment with industry standards and supported the creation of a repeatable deployment model.
The project is a collaborative effort among several leading industry organizations. StarHub leads architectural development. CelcomDigi and Etisalat UAE validate operational fit across diverse markets. SynaXG provides GPU-based 5G RAN and AI software. Red Hat supplies the cloud stack. OREX SAI integrates the components into a deployable configuration.
Their collective proof of concept demonstrates three key outcomes. First, AI for RAN shows measurable improvement in power use, interference control, and link adaptation, supported by live KPI data. Second, video analytics run on the same platform using GPU acceleration, proving that computer vision tasks operate locally with very low latency. Third, both workloads run together on shared hardware, confirming performance isolation and strong orchestration. This shows that RAN sites can support communication and compute functions on one system without loss of quality.
Application and wider value
The Project Aura architecture creates new commercial opportunities by effectively turning each RAN site into a small data center. CSPs can offer generative-AI-as-a-service, onboard enterprise models, and run low-latency analytics near the point of data creation. These capabilities increase the value of 5G investment and support new service models for growth.
Several industries stand to benefit. Manufacturing gains local computer vision for instant quality checks and fewer production defects. Transport hubs can process security-related data within controlled environments through local breakout. Logistics firms can support autonomous vehicles and robots that rely on predictable, low-latency decision-making. These scenarios show how the AI-enabled RAN supports applications that previously faced latency or bandwidth limitations.
Operational gains are also significant. Power savings reduce energy and cooling costs. Consolidating workloads reduces site hardware and simplifies maintenance. Central monitoring systems face lower demand because more intelligence runs at the edge. Customers gain more control by choosing preferred applications and LLMs. The orchestration layer speeds service deployment and will gain further agility through planned OpenShift adoption in Phase II.
The project’s long-term value lies in how naturally its approach can be taken forward by others. It provides a shared foundation for developing edge-native AI services with genuine collaboration in mind. As more partners build on this work, CSPs move closer to delivering next-generation networks shaped by customer needs and practical edge performance.