Senior LLMOps Engineer - AI Enabler Team

CAST AI

Software Engineering, Data Science
Multiple locations · Bulgaria · Croatia · Cyprus · Czechia · Estonia · Greece · Hungary · Italy · Latvia · Lithuania · Poland · Portugal · Romania · Slovakia · Slovenia · Ukraine
EUR 6,500-9,000 / month + Equity
Posted on Aug 22, 2025

Why Cast AI?

Cast AI is the leading Application Performance Automation (APA) platform, enabling customers to cut cloud costs, improve performance, and boost productivity – automatically.

Built originally for Kubernetes, Cast AI goes beyond cost and observability by delivering real-time, autonomous optimization across any cloud environment. The platform continuously analyzes workloads, rightsizes resources, and rebalances clusters without manual intervention, ensuring applications run faster, more reliably, and more efficiently.

Headquartered in Miami, Florida, Cast AI has employees in more than 32 countries worldwide and supports some of the world’s most innovative teams running their applications on all major cloud, hybrid, and on-premises environments. Over 2,100 companies already rely on Cast - from BMW and Akamai to Hugging Face and NielsenIQ.

What’s next? Backed by our $108M Series C, we’re doubling down on making APA the new standard for DevOps and MLOps, and everything in between.

About the role

In the AI Enabler team, our days are full of R&D challenges. Have you ever needed to expand your AI infrastructure so that applications automatically pick the large language models (LLMs) that are both more cost-efficient and better performing? Most of us have by now, or at least understand the complexity of making such decisions while keeping an eye on the cloud budget.

One of the team's responsibilities is ensuring that whenever a customer makes AI-related decisions about their K8s infrastructure, those decisions are implemented automatically, without unnecessary cost or hassle. This is just one small piece of a bigger puzzle. For a more detailed perspective, ask yourself the following questions:

  • How often do you use LLMs?
  • What is the least expensive LLM you can pick for a given prompt without degrading the quality of the response?
  • How much do your applications cost per 1 million tokens, and how can you reduce that cost?
  • Which API keys have the biggest waste?
  • How can you improve your frequently running prompt to use fewer tokens?
  • What is fine-tuning, and how do you do it efficiently?
  • What is a transformer?

These are just a few of the many questions that shape this team's daily work.

Being part of this team involves end-to-end design and decision-making in collaboration with colleagues from other teams. Because Cast AI is a deeply technical product, we encourage engineers not just to implement what a JIRA ticket describes, but to propose new features and potential solutions to customers' problems. And since the team is working on a greenfield project, you will have many opportunities to shape it.

Here are some of the tools we use daily:

  • Python
  • ClickHouse and PostgreSQL for persistence
  • GCP Pub/Sub for messaging
  • gRPC for internal communication
  • REST for public APIs
  • Kubernetes, which our product revolves around
  • AWS, GCP, and Azure cloud providers, which are currently supported in our platform
  • GitLab CI with ArgoCD as our GitOps CD engine
  • Prometheus, Grafana, Loki, and Tempo for observability.

Requirements:

  • Experience with designing a production-grade machine learning system
  • Strong software engineering skills in Python
  • Ability to move fast in an environment where things are sometimes loosely defined and may have competing priorities or deadlines
  • Experience with the design and implementation of efficient model training and inference pipelines end to end
  • 5+ years of hands-on experience in Data Science and Machine Learning, with a proven track record, demonstrated through a robust portfolio of projects
  • Physical location in a European country within GMT+0 to GMT+3
  • Strong English skills
  • Strong verbal and written communication skills
  • Ability to work independently and collaborate in a group.

Responsibilities:

  • Evaluate and Analyze LLM performance
  • Fine-Tune LLMs
  • Optimize AI Models for Cost Efficiency
  • Develop and implement data science solutions
  • Architect and build inference and training pipelines, contributing hands-on to design, model training, and deployment strategies
  • Stay up to date with industry trends.

What’s in it for you?

  • Competitive salary (€6,500 - €9,000 gross, depending on the level of experience)
  • Enjoy a flexible, remote-first global environment.
  • Collaborate with a global team of cloud experts and innovators, passionate about pushing the boundaries of Kubernetes technology.
  • Equity options.
  • Private health insurance.
  • Get quick feedback with a fast-paced workflow. Most feature projects are completed in 1 to 4 weeks.
  • Spend 10% of your work time on personal projects or self-improvement.
  • Learning budget for professional and personal development - including access to international conferences and courses that elevate your skills.
  • Annual hackathon to spark new ideas and strengthen team bonds.
  • Team-building budget and company events to connect with your colleagues.
  • Equipment budget to ensure you have everything you need.
  • Extra days off to help maintain a healthy work-life balance.