
Red Hat’s vision: Driving innovation through open source, AI, and edge computing

Red Hat discusses its open-source strategy, AI innovations, and edge solutions, driving telecom advancements and fostering collaboration with ecosystem partners in India.

Punam Singh

Abhishek Shukla, Director, Regional Head at Red Hat

At the India Mobile Congress 2024, we had the opportunity to sit down with Abhishek Shukla, Director, Regional Head at Red Hat, to discuss the company’s strategic approach to driving innovation in India’s telecom sector. With a focus on open-source platforms, AI, and edge computing, Red Hat is playing a pivotal role in transforming how telecom operators build and manage networks.


In this exclusive interview, Abhishek sheds light on Red Hat’s strategies for the coming years, the company’s partnerships with key industry players, and how it is leveraging India’s talent pool to stay ahead in the evolving telecom landscape.

As we look towards the Indian market, what are Red Hat’s strategies for the upcoming years, particularly for 2025 and 2026?

Red Hat has had a significant presence in the Indian telecom market for several years, with our open-source platforms deeply embedded in telco data centers. Red Hat Linux is widely used in the industry, particularly for core network functions. Over the last five years, we’ve evolved from simply providing operating systems to offering cloud-based services for telecom operators. Historically, telcos relied on appliance-based solutions from single vendors, but we’ve championed the power of open-source solutions, enabling them to run multiple vendor technologies within the same environment. This shift toward open and interoperable ecosystems, particularly in the 4G and 5G eras, is now the foundation of our work with Indian telecom operators.


Our approach in the coming years will focus on edge computing and monetisation of the edge. We plan to extend our cloud solutions to the edge, allowing for a seamless cloud experience from core applications to edge applications targeting enterprises. This will be crucial as operators look to deploy edge data centers and maximise use cases across industries. By building an open cloud ecosystem, we ensure multiple use cases can coexist, allowing small vendors and innovators to build applications that can run on our platform, bringing flexibility and scalability.

The next major focus is generative AI. Much like we democratised 4G and 5G technologies, we aim to democratise AI. Our strategy will be to provide an open, flexible AI/ML platform that allows operators to build generative AI applications and manage AI models seamlessly across their networks.

Could you share more about the platforms and products you are developing to meet modern trends like AI, open-source adoption, and IoT?


At the heart of our strategy is OpenShift, Red Hat's leading Kubernetes platform, which enterprises use to develop containerised applications. We've evolved this platform to include OpenShift AI, an AIOps solution that integrates AI/ML with enterprise infrastructure. We offer the flexibility to use community-based LLM models such as Llama, or our own proprietary models like Granite, developed in collaboration with IBM Research. This platform not only provides the tools to manage the lifecycle of AI/ML applications but also offers enterprises the ability to innovate while minimising costs.

OpenShift AI is designed for two main purposes: saving money by optimising internal operations, such as building self-healing autonomous networks, and making money by creating new services for customers. For example, operators can use our platform to deploy AI-driven self-healing networks, enabling closed-loop service assurance through event-driven architectures. This is already demonstrated through some of our partnerships at our IMC booth.

Who are some of your key partners, and what strategic collaborations have you established?


We work closely with TM Forum and several global operators to bring AI and generative AI capabilities into telecom operations. At the India Mobile Congress, our key partners include Airspan and Avencus, who are showcasing real-world applications of AIOps and AI-driven solutions. However, our ecosystem extends far beyond software; we also collaborate with hardware acceleration partners like NVIDIA, Intel, and AMD, as well as storage partners. These partnerships allow us to offer a comprehensive AI solution that spans the entire value chain, empowering our customers with the flexibility to choose the best solution for their needs.

India is often seen as a global talent pool for the IT industry. How is Red Hat leveraging this talent, and what initiatives have you undertaken for hiring and training?

India is indeed a rich talent pool, and we’re investing in future-proofing this talent. While we focus on upskilling our internal teams in new-age technologies, we’re also working closely with academia. We’ve partnered with universities to introduce students to open-source technologies early in their education. Through training courses and certifications, we ensure that when these students enter the workforce, they’re already equipped with the knowledge needed to contribute to Red Hat’s ecosystem.


Our strategy includes nurturing fresh talent by offering programs that help them grow within Red Hat. Whether they’re in technology roles or sales, we’re committed to ensuring that they can drive the next wave of innovation for the telecom industry.

Security is a major concern in AI, data, and telecom. How is Red Hat addressing security challenges as you expand your AI footprint globally?

Security is a critical issue, particularly when working with AI and open-source platforms. Many of the LLM models available today are open source, which means we don’t always know their origin. This presents a security risk when we expose sensitive data to these models. Red Hat’s approach is to create community-driven, open-source models, much like how we developed Linux and OpenShift. Our Granite LLM model, backed by IBM Research, is a great example of a secure, open-source AI model.


Open-source models are inherently more secure because they’re subject to community oversight. Millions of eyes are constantly reviewing the code, compared to the limited oversight that exists in closed-source systems. This transparency is key to ensuring the long-term durability, sustainability, and security of AI applications. Our goal is to position security at the forefront of the AI value chain by promoting open, community-driven development.
