In this episode of The Landscape, we spoke with Jan Wozniak and Jorge Turrado Ferrero, maintainers of the Kubernetes Event-Driven Autoscaler (KEDA), to uncover how this CNCF project simplifies scaling workloads in Kubernetes environments.
KEDA bridges cloud-native technologies, letting users scale any Kubernetes resource that supports scaling, driven by any measurable metric. Its integrations with tools like NGINX and support for diverse workloads make it a flexible and powerful choice for developers facing complex autoscaling challenges. KEDA is a graduated CNCF project.
What you’ll learn in this episode:
- KEDA’s core functionality: Discover how KEDA empowers users to autoscale workloads seamlessly, eliminating the complexity of Kubernetes internals.
- Real-world adoption stories: From Black Friday traffic spikes to AI model training, learn how companies like Alibaba, Azure, and Grafana leverage KEDA.
- Integrations across the CNCF landscape: Explore how KEDA connects with other tools like Helm, NGINX, and Kubernetes APIs to expand scaling capabilities.
- When not to use KEDA: Understand its limitations, including scenarios where infrastructure management tools like Cluster API or Karpenter are more suitable.
- AI workloads and predictive scaling: Learn how KEDA supports AI model training with GPU-specific scaling and the potential for proactive, AI-driven autoscaling.
KEDA continues to evolve, with features like scaling modifiers enhancing its ability to adapt to user-defined scaling needs. Whether you’re scaling event-driven workloads or exploring CNCF project integrations, this episode offers valuable insights into the capabilities and future of KEDA.
Bart:
In this episode of The Landscape, I had the chance to interview not one but two maintainers of the KEDA project: Jan Wozniak and Jorge Turrado Ferrero. KEDA, the Kubernetes Event-Driven Autoscaler, addresses critical challenges in Kubernetes by simplifying and extending autoscaling for workloads. It allows you to scale everything with anything, as long as it’s measurable and aligns with Kubernetes scaling resources. This project bridges cloud-native technologies, offering a flexible and user-friendly solution for Kubernetes adopters overwhelmed by complex autoscaling configurations.
In this episode, we explore key technical features such as scaling modifiers, integrations with tools like NGINX, and how KEDA enhances event-driven autoscaling workflows. We also discuss real-world adoption by major companies, its growing ecosystem, and potential applications in AI workloads. Now, let’s dive into the episode with Jan and Jorge.
Bart:
So, what problem does KEDA solve?
Jan:
In my opinion, it allows you to scale everything with anything. Now, there are some caveats. The “everything” comes with restrictions: it usually has to be a Kubernetes resource that exposes a scale subresource. The “anything” has to be a measurable metric. But essentially, it lets you bridge all sorts of technologies together and make sure your infrastructure can handle the pressure.
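To make that concrete, a minimal ScaledObject might look something like the sketch below. The Deployment name, Prometheus address, and query are illustrative placeholders, not something from the conversation:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-api-scaler        # hypothetical name
spec:
  scaleTargetRef:
    name: orders-api             # any resource with a /scale subresource (here, a Deployment)
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: prometheus           # the "anything": any measurable metric
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090
        query: sum(rate(http_requests_total{app="orders-api"}[2m]))
        threshold: "100"
```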
Bart:
All right, good. I think that’s a nice description. Jorge, is there anything you’d like to add?
Jorge:
Yes, I’d summarize it a bit more: we aim to make autoscaling in Kubernetes simple. It’s inherently quite complicated, with many edge cases and factors to consider. Our goal is to take care of the complexity and provide users with an easy way to scale their workloads without dealing with Kubernetes’ autoscaling internals.
Bart:
What’s your favorite feature?
Jan:
In terms of a characteristic, I love how KEDA bridges and connects different technologies, allowing them to work together seamlessly. From a technical perspective, I really like the recent addition of scaling modifiers. In the past, scaling behavior was tightly defined, but with scaling modifiers you can combine metrics with your own formula and essentially scale everything with anything.
Jorge:
I’d have to agree. Scaling modifiers have been one of the most impactful features in recent releases. They give users the power to craft their own scaling stories based on their rules and knowledge, making it incredibly versatile.
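As a rough illustration of what Jan and Jorge are describing, recent KEDA versions let you name individual triggers and combine them with a user-defined formula under `advanced.scalingModifiers`. The triggers, queue, and thresholds below are made up for the example:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: composite-scaler         # hypothetical name
spec:
  scaleTargetRef:
    name: worker                 # hypothetical Deployment
  advanced:
    scalingModifiers:
      # scale on the average of queue depth and request rate
      formula: "(queue_depth + request_rate) / 2"
      target: "100"
      metricType: AverageValue
  triggers:
    - type: rabbitmq
      name: queue_depth          # referenced by name in the formula
      metadata:
        queueName: jobs
        host: amqp://guest:guest@rabbitmq.default.svc:5672/
        mode: QueueLength
        value: "100"
    - type: prometheus
      name: request_rate
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090
        query: sum(rate(http_requests_total[1m]))
        threshold: "100"
```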
Bart:
Jan, can you share some success stories from KEDA’s end users?
Jan:
I’d say any user who installs KEDA, makes it work in their setup, and finds value in it is a success story. Whether it’s improving reliability or saving costs—like avoiding hiccups during Black Friday shopping peaks—those are fantastic stories. It’s even better when users contribute back by filing issues, fixing bugs, or participating in other ways.
Bart:
Any big companies you’d like to highlight?
Jorge:
Definitely. Companies like Alibaba and Microsoft Azure have built services on top of KEDA. Grafana also uses KEDA under the hood to scale its services. Additionally, Selenium has recently started contributing because they offer autoscaling based on KEDA. For me, it’s even more impressive when smaller players integrate KEDA as a dependency. They may not have the resources to invest thousands of hours adapting autoscaling to their business cases, but KEDA makes it easy to integrate scaling into their workflows.
Bart:
KEDA integrates with several CNCF tools. Can you give some examples?
Jan:
At this point, it might be harder to find a tool that doesn’t integrate well with KEDA! But for a specific example, let me pass the mic to Jorge.
Jorge:
Sure. For example, the Kubernetes NGINX ingress Helm chart has a section that deploys a ScaledObject for autoscaling based on KEDA. It’s not the default option, since third-party dependencies can’t be enabled by default, but you can turn it on by uncommenting just a few lines in the chart’s configuration.
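For readers who want to try this, the relevant values in the ingress-nginx chart look roughly like the snippet below. Key names and defaults vary between chart versions, so treat this as a sketch rather than a copy-paste configuration:

```yaml
# values.yaml sketch for the ingress-nginx Helm chart (check your chart version)
controller:
  autoscaling:
    enabled: false          # the built-in HPA and KEDA scaling are used one at a time
  keda:
    enabled: true           # render a KEDA ScaledObject instead of an HPA
    minReplicas: 2
    maxReplicas: 10
    triggers:
      - type: prometheus
        metadata:
          serverAddress: http://prometheus.monitoring.svc:9090
          query: sum(rate(nginx_ingress_controller_requests[2m]))
          threshold: "500"
```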
Bart:
When would you say KEDA isn’t the right tool?
Jan:
KEDA’s focus is pod-level autoscaling, so anything unrelated to workload scaling is out of scope. For example, scaling infrastructure such as nodes isn’t something KEDA does. Integrating KEDA with node-scaling tools like Karpenter can be tricky. While we wish Karpenter used Cluster API for smoother integration, that remains an area for improvement.
Jorge:
Exactly. Our focus is on scaling, not managing infrastructure. Anything related to management rather than scaling is beyond KEDA’s purpose.
Bart:
AI is a big topic right now. Are there any examples of KEDA being used for AI workloads?
Jan:
Yes, absolutely. For model training, you can monitor live prices and use KEDA to ramp up infrastructure when resources are cheaper and scale down when costs rise. For running AI models, KEDA supports specific hardware requirements, like GPUs, ensuring the right resources are available. Additionally, there’s potential for predictive scaling—using AI to scale proactively rather than reactively.
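As one hedged example of the model-training pattern Jan mentions, a KEDA ScaledJob can launch GPU-backed training jobs whenever work piles up in a queue. The queue URL, image, and GPU count here are purely illustrative, and trigger authentication is omitted for brevity:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: gpu-training             # hypothetical name
spec:
  maxReplicaCount: 4             # upper bound on concurrent training jobs
  jobTargetRef:
    template:
      spec:
        restartPolicy: Never
        containers:
          - name: trainer
            image: registry.example.com/trainer:latest   # placeholder image
            resources:
              limits:
                nvidia.com/gpu: "1"    # each job asks for a GPU
  triggers:
    - type: aws-sqs-queue        # queue of pending training tasks (auth not shown)
      metadata:
        queueURL: https://sqs.eu-west-1.amazonaws.com/123456789012/training-jobs
        queueLength: "1"
        awsRegion: eu-west-1
```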
Bart:
What companies support KEDA or sponsor its maintainers?
Jan:
Two key sponsors are Lidl, a major European retailer, and Clarify, a startup founded by a long-time KEDA maintainer. Of course, we also have support from Microsoft and Alibaba.
Bart:
If people want to contribute to KEDA, what’s the best way to get started?
Jan:
There are many ways to contribute. You can join the Slack community to discuss issues, improve documentation, suggest new features, or help with our CI/CD pipelines. Whether you’re into coding or community support, every contribution is valuable.
Bart:
Thank you both for your time and the incredible work you’re doing with KEDA. I look forward to seeing its continued growth in the cloud-native ecosystem.
Jan:
Thank you!
Jorge:
Thanks!