Reports To: Director of Cloud Infrastructure
About the Role
We’re seeking a skilled Prompt Engineer specializing in Kubernetes and platform engineering tools to design and optimize prompts that enable Large Language Models (LLMs) to automate complex container orchestration and infrastructure management tasks. You will create precise, context-rich prompts that guide AI models to generate Kubernetes manifests, manage deployments, and interact with platform engineering workflows, boosting developer productivity and operational reliability.
Your work will bridge AI and cloud-native infrastructure, enabling seamless AI-driven automation for Kubernetes clusters, Helm charts, CI/CD pipelines, and platform tooling.
What You’ll Do
- Develop and refine prompts that instruct LLMs to generate, validate, and optimize Kubernetes YAML manifests, Helm charts, and platform automation scripts.
- Apply advanced prompt engineering techniques such as zero-shot, few-shot, and chain-of-thought prompting tailored to infrastructure-as-code and container orchestration contexts (the first sketch after this list shows a few-shot example).
- Collaborate with DevOps, SRE, and platform engineering teams to understand deployment patterns, best practices, and pain points to craft domain-specific prompt templates.
- Integrate prompts with AI orchestration frameworks (e.g., LangChain, AutoGen) and Kubernetes management tools to enable autonomous or semi-autonomous platform operations (the second sketch after this list shows one such integration).
- Continuously evaluate prompt outputs for accuracy, security, and compliance with Kubernetes best practices (e.g., pod scheduling, resource quotas, readiness/liveness probes).
- Document prompt designs, usage guidelines, and best practices to empower platform teams and AI developers.
- Stay up to date with Kubernetes ecosystem advancements and AI-driven infrastructure automation trends.
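To give candidates a concrete sense of the work, here is a minimal sketch of a few-shot prompt that asks an LLM to generate a Kubernetes Deployment manifest. It assumes the `openai` Python SDK with an API key configured; the model name, example manifest, and helper names are illustrative placeholders rather than part of any existing codebase.

```python
# Minimal sketch: few-shot prompt for Kubernetes manifest generation.
# Assumes the `openai` SDK is installed and OPENAI_API_KEY is set;
# the model name and example manifest are illustrative placeholders.
from openai import OpenAI

EXAMPLE_MANIFEST = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27
        resources:
          requests: {cpu: 100m, memory: 128Mi}
          limits: {cpu: 500m, memory: 256Mi}
        readinessProbe:
          httpGet: {path: /healthz, port: 80}
"""

SYSTEM_PROMPT = (
    "You are a Kubernetes platform assistant. "
    "Return only valid YAML, follow the style of the example, "
    "and always include resource requests/limits and a readiness probe."
)

def build_messages(request: str) -> list[dict]:
    """Assemble a few-shot chat prompt: one worked example, then the new request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Deployment for 'web', 2 replicas, nginx:1.27"},
        {"role": "assistant", "content": EXAMPLE_MANIFEST},
        {"role": "user", "content": request},
    ]

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat-completions model works
    messages=build_messages("Deployment for 'api', 3 replicas, image ghcr.io/acme/api:1.4"),
)
print(response.choices[0].message.content)
```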
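Similarly, here is a sketch of how a generated manifest might be wired through an orchestration framework and validated before use. It assumes the `langchain-core`, `langchain-openai`, and `PyYAML` packages, a kubectl binary on PATH with a configured context, and illustrative model and resource names.

```python
# Minimal sketch: LangChain pipeline that generates a manifest and validates it.
# Assumes langchain-core, langchain-openai, and PyYAML are installed, and that
# kubectl is on PATH with a configured context; all names are illustrative.
import subprocess
import yaml
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a Kubernetes assistant. Return only valid YAML."),
    ("user", "Generate a {kind} named {name} in namespace {namespace}."),
])
llm = ChatOpenAI(model="gpt-4o", temperature=0)  # illustrative model choice
chain = prompt | llm  # LCEL: pipe the rendered prompt into the model

def generate_and_validate(kind: str, name: str, namespace: str) -> str:
    manifest = chain.invoke(
        {"kind": kind, "name": name, "namespace": namespace}
    ).content

    # First gate: the output must at least parse as YAML.
    yaml.safe_load(manifest)

    # Second gate: let kubectl validate the schema without persisting anything
    # (client-side dry run).
    subprocess.run(
        ["kubectl", "apply", "--dry-run=client", "-f", "-"],
        input=manifest.encode(),
        check=True,
    )
    return manifest

print(generate_and_validate("Deployment", "api", "staging"))
```

The client-side dry run catches schema errors without touching the cluster; a server-side dry run (`--dry-run=server`) would additionally exercise admission controllers.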
Required Skills & Experience
- Proven experience with prompt engineering for LLMs (OpenAI GPT-4.x, Anthropic Claude, Google Gemini, etc.), especially as applied to Kubernetes or cloud infrastructure automation.
- Strong understanding of Kubernetes architecture, deployment best practices (Helm, taints/tolerations, autoscaling, probes), and platform engineering workflows.
- Familiarity with infrastructure-as-code tools (Helm, Terraform, Kubernetes manifests) and container orchestration concepts.
- Proficiency in Python or TypeScript for scripting and integrating AI prompts with platform tooling.
- Experience with AI orchestration frameworks such as LangChain, AutoGen, or Semantic Kernel.
- Knowledge of vector databases (Pinecone, Weaviate, Chroma) and semantic search to enhance prompt context retrieval (illustrated in the sketch after this list).
- Ability to craft clear, positively framed, domain-specific prompts that reduce ambiguity and improve AI output quality.
- Understanding of security and compliance considerations in cloud-native environments.
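As an illustration of the vector-database point above, here is a minimal sketch of semantic retrieval that folds internal platform guidance into a prompt's context. It assumes the `chromadb` package; the collection name and documents are placeholders.

```python
# Minimal sketch: retrieve platform guidance from a vector store to ground a prompt.
# Assumes the `chromadb` package; collection name and documents are placeholders.
import chromadb

client = chromadb.Client()  # in-memory instance for illustration
collection = client.create_collection("platform-guidelines")

# In practice these would be internal runbooks, Helm chart docs, policy snippets, etc.
collection.add(
    ids=["probes", "quotas"],
    documents=[
        "Every Deployment must define readiness and liveness probes.",
        "Namespaces in the 'prod' cluster enforce a ResourceQuota of 20 CPU / 64Gi.",
    ],
)

def build_context(question: str, k: int = 2) -> str:
    """Fetch the k most relevant guidelines and fold them into the prompt."""
    hits = collection.query(query_texts=[question], n_results=k)
    context = "\n".join(hits["documents"][0])
    return f"Context:\n{context}\n\nTask: {question}"

print(build_context("Write a Deployment manifest for the payments service in prod."))
```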
Preferred Tools & Technologies
| Category | Tools & Frameworks |
| --- | --- |
| LLM APIs | OpenAI GPT-4.x, Anthropic Claude 3.x, Google Gemini 2.5, Cohere Command R |
| Prompt Engineering | LangChain, AutoGen, Semantic Kernel, PromptLayer, LangSmith |
| Kubernetes Tools | kubectl, Helm, Kustomize, Terraform |
| Vector Databases | Pinecone, Weaviate, Chroma |
| Orchestration | LangChain Agents, AutoGen, crewAI |
| DevOps & Cloud | Docker, Kubernetes, AWS, GCP, Azure, CI/CD (GitHub Actions, Jenkins) |
| Observability | Prometheus, Grafana, Kube-state-metrics |
Why Join Us?
- Work at the intersection of AI and cloud-native technologies to redefine platform automation.
- Collaborate with experts in AI, DevOps, and platform engineering to build innovative solutions.
- Influence the future of autonomous infrastructure management powered by prompt engineering.
- Access cutting-edge AI tools and continuous learning opportunities.
This role is ideal for prompt engineers passionate about Kubernetes and platform engineering who want to leverage LLMs to automate and optimize cloud infrastructure management through expert prompt design and AI orchestration.