
Jay Miracola
Cloud Native
A thought experiment using Crossplane Operations and LLMs to create a plain English, code-free Kubernetes controller that watches and enforces the desired state.
The Problem
Sometimes we want control loops that watch for state and make changes based on it. In Kubernetes that's a controller, but writing a Kubernetes controller in Go is a non-trivial task: it requires knowledge of Kubernetes, Go, and sometimes a lot of complex logic. What if there were a way we could author these controllers without code? The problems we want to solve range from wildly complex to very trivial, but we often lack the knowledge to author a controller and/or the time to complete the task. Treat the example below as a thought experiment rather than a direct use case, and let's jump on the AI hype train together, if only for a moment, to explore how it might solve our daily challenges.
New Crossplane Primitives
Crossplane recently released v2, which, alongside features like namespace-scoped resources, also introduced operations. Operations were designed to solve day-2 problems such as backups or configuration validation, and they can be extended to an enormous number of use cases. Today we will use the watch operation to monitor and change a deployment based on my requirements. I will be running an LLM locally with Ollama, namely gpt-oss:20b, in combination with the OpenAI function, which has been extended to call any AI API that uses OpenAI’s API format. As grandpa used to say, “a token saved is a token earned,” or something like that.
The Controller
My example, available at https://github.com/jaymiracola/configuration-english-controller, runs an operation that watches deployments in the default namespace and, regardless of the deployment applied, ensures they are all scaled to 3 replicas. As promised, all in plain English.
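For orientation, the shape of such a WatchOperation looks roughly like the sketch below. The function name and the input schema here are illustrative guesses, not the repository's actual manifest; the watch and pipeline structure follows Crossplane v2's operations API, but consult the linked repo for the working version.

```yaml
# Sketch of a WatchOperation that runs a pipeline whenever a
# Deployment in the default namespace changes. The functionRef name
# and input fields are illustrative; the working manifest lives in
# the linked repository.
apiVersion: ops.crossplane.io/v1alpha1
kind: WatchOperation
metadata:
  name: english-deployment-controller
spec:
  watch:
    apiVersion: apps/v1
    kind: Deployment
    namespace: default
  operationTemplate:
    spec:
      mode: Pipeline
      pipeline:
        - step: enforce-replicas
          functionRef:
            name: function-openai            # illustrative name
          input:
            apiVersion: example.fn.crossplane.io/v1beta1  # illustrative
            kind: Prompt
            # The "controller logic", in plain English:
            prompt: |
              Ensure every watched Deployment has exactly 3 replicas.
```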
To run it, the repository takes care of most of the steps; the one thing it cannot provide is an LLM (Ollama, OpenAI, etc.) to connect to. My example is currently set up for a local Ollama instance with no auth. Beyond that, all you need to do is the following:
Edit the secret in the example folder with your credentials for your LLM.
The configuration will be packaged and applied to a locally created kind cluster.
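The credentials secret is roughly the following shape. The key names and the secret name here are illustrative assumptions; the actual keys come from the example folder in the repository.

```yaml
# Illustrative shape of the LLM credentials Secret. Key names are
# assumptions; check the example folder in the repo for the real ones.
apiVersion: v1
kind: Secret
metadata:
  name: llm-credentials
  namespace: default
stringData:
  url: http://host.docker.internal:11434/v1  # local Ollama, OpenAI-style endpoint
  apiKey: ""                                 # empty for a local, no-auth Ollama
```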
Now everything should be ready to go! Apply the example deployment with a single replica and watch the magic happen. The operation sees the deployment with a single replica, the LLM picks that up, changes the replica count to 3, and the process is complete. Change it back manually if you’d like to see it in action again.
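A minimal test deployment could look like the one below. The name and container image are my own choices rather than the repository's example; any Deployment in the default namespace should trigger the same behavior.

```yaml
# A minimal Deployment with a single replica to trigger the operation.
# After applying it, the watch should bump spec.replicas to 3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
# To watch it happen again:
#   kubectl scale deployment/example --replicas=1
```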
Some Caveats
As I stated before, this is simply a thought experiment, and I have certainly taken some creative liberties in calling it a controller. In its simplest form, it is something for you to look at while thinking about what else could be possible. Maybe an operation that denies changes from being applied on Fridays? A step in your infrastructure that watches database resource utilization and scales as needed? Hallucinations and non-deterministic behaviours will become less problematic as LLMs and prompts mature. A world where we solve real problems while applying our institutional knowledge in organizations may be closer than once thought.
About Authors

Jay Miracola