About DaVinci Commerce:
DaVinci Commerce is the industry’s first AI-native platform for commerce media campaign personalization. It unites creative, audience, and media in one workflow, enabling brands, retailers, and CMNs to launch personalized, retailer-compliant campaigns in under five minutes. By combining generative and agentic AI, DaVinci Commerce automates creative production and drives a measurable increase in sales across AI shopping agents, onsite, offsite, social, video/CTV, and in-store commerce media channels. Trusted by Nestlé, Procter & Gamble, Unilever, Giant Eagle, Stop & Shop, and SimpliSafe, DaVinci Commerce delivers speed, scale, and precision personalization to make commerce media a sales growth engine. For more information, visit davincicommerce.ai.
Job Description
We are looking for a skilled DevOps Engineer to design, implement, and maintain scalable infrastructure and CI/CD pipelines. The ideal candidate will work closely with development and operations teams to improve deployment processes, ensure system reliability, and optimize cloud infrastructure.
You will play a key role in automating workflows, managing cloud resources, and ensuring high availability of applications.
Key Responsibilities
- Design, build, and maintain CI/CD pipelines
- Manage and provision infrastructure using Infrastructure as Code (IaC) tools like Terraform
- Deploy, manage, and scale applications using Kubernetes
- Work with AWS services (EC2, S3, IAM, VPC, EKS, etc.)
- Automate tasks using Python or shell scripting
- Monitor systems and ensure reliability, availability, and performance
- Troubleshoot production issues and perform root cause analysis
- Implement security best practices in infrastructure and deployments
- Collaborate with developers to streamline release processes
- Maintain documentation for infrastructure and processes
Job Requirements
Must-Have Skills
- 2–5 years of experience in DevOps / Cloud / SRE roles
- Strong hands-on experience with:
- Terraform (modules, state management, remote backends)
- Kubernetes (deployments, services, StatefulSets, troubleshooting)
- AWS (core services and architecture)
- Linux (system administration, networking basics)
- Python or Bash scripting
- Experience with CI/CD tools (e.g., Jenkins, GitHub Actions, GitLab CI)
- Understanding of containerization (Docker)
- Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, Datadog, etc.)
Good-to-Have
- Experience with Helm charts
- Knowledge of security best practices (IAM roles, secrets management)
- Exposure to multi-cloud environments
- Experience with Infrastructure monitoring and alerting
- Understanding of networking concepts (DNS, load balancing, firewalls)
- Experience with advertising platforms
Behavioral Expectations
- Strong problem-solving skills
- Ability to work independently and in teams
- Good communication skills
- Ownership mindset for production systems
- Willingness to “carry the pager,” while striving to build systems reliable enough that you rarely get paged
- Drive to imagine, architect, develop, deploy, and evolve our cloud infrastructure and CI/CD systems for the next disruptive data analytics platform