Location: New York City, USA (In-office 3+ days/week)
Type: Full-time
Start Date: Immediate
Our client is pioneering the next era of media applications. Today, the creative vision of gaming and graphics studios is trapped by a computational bottleneck. They simply cannot build the hyper-realistic worlds, complex environmental effects, or flawless rendering their audiences demand. It's too expensive and, in many cases, computationally impossible on classical hardware.
To solve these challenges, our client is building the quantum foundation for the Media & Entertainment industry. Their hardware-agnostic software platform unlocks the power of quantum computing for any creator. It enables tasks that were once science fiction: think perfect, atomic-level simulations of materials and fluid dynamics, entire universes generated with emergent NPC behaviours, or light and materials rendered with true quantum-level physics.
They aren't just building a tool; they're building the platform for the entire next generation of media. As a company operating at the forefront of technology and culture, they are a team of innovators across science, engineering, and creative design, working together to solve the problems that others accept as insurmountable limitations.
If you are driven to build the technology that will define human culture for the next century, click apply.
The Role
We are seeking a Senior Cloud Engineer to own and evolve the cloud infrastructure and DevOps operations that power Archaeo, our quantum-classical computing platform. You will be responsible for ensuring our revolutionary quantum workflows are reliable, automated, and production-ready as we transition from R&D demonstrations to customer pilots. Your primary mission is to build and maintain the cloud infrastructure, orchestration, and deployment automation that enable our quantum applications team and research scientists to run compute-intensive workloads seamlessly across AWS and quantum hardware providers.
This role is perfect for a cloud/DevOps engineer who is excited by:
- Novel infrastructure challenges: Orchestrating hybrid quantum-classical workflows with unique constraints (45-minute quantum hardware delays, small data but massive compute)
- Startup autonomy: Wearing multiple hats, making architectural decisions, and building from the ground up
- Working with scientists: Supporting quantum physicists and application engineers without needing to become one yourself
- Emerging technology: Learning quantum computing orchestration (specifically Covalent) and working at the frontier of what's possible.
Key Responsibilities
Cloud Infrastructure & Operations (Primary Focus)
- Own AWS infrastructure: Design, deploy, and maintain our primary cloud environment on AWS
- Kubernetes management: Operate and optimize our K8s clusters for compute-intensive quantum-classical workflows
- Infrastructure as Code: Maintain and extend Terraform configurations for consistent, reproducible environments
- Cost optimization: Monitor and optimize cloud spending for compute-heavy workloads with unique resource constraints
Quantum Workflow Orchestration
- Covalent integration: Work with the Covalent workflow orchestration framework to manage hybrid quantum-classical workflows (see the sketch after this list)
- Workflow automation: Build and maintain pipelines that coordinate jobs across classical compute (AWS) and quantum hardware providers
- Queue management: Handle the unique challenges of batch-oriented workflows with 45-minute+ quantum hardware delays
- Multi-environment coordination: Orchestrate workloads across on-prem, cloud, and quantum hardware resources
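For illustration only, here is a minimal sketch of what such a hybrid workflow can look like in Covalent's Python API: a classical preparation step chained to a quantum execution step and dispatched as one batch job. The task names, parameters, and the stand-in quantum call are hypothetical placeholders rather than our actual pipeline, and running it assumes a local Covalent server is available.

```python
import covalent as ct

# Classical step: would typically run on ordinary cloud compute (e.g., an AWS executor).
@ct.electron
def prepare_problem(size: int) -> list[float]:
    # Placeholder: build the inputs a quantum job would consume.
    return [0.1 * i for i in range(size)]

# Quantum step: in a real workflow this would target a QPU backend and
# may sit in the provider's queue for 45+ minutes before executing.
@ct.electron
def run_quantum_job(params: list[float]) -> float:
    # Placeholder computation standing in for a quantum hardware call.
    return sum(params)

# A lattice stitches electrons into a single dispatchable hybrid workflow.
@ct.lattice
def hybrid_workflow(size: int) -> float:
    params = prepare_problem(size)
    return run_quantum_job(params)

if __name__ == "__main__":
    dispatch_id = ct.dispatch(hybrid_workflow)(size=8)
    result = ct.get_result(dispatch_id, wait=True)  # block until the batch job finishes
    print(result.result)
```

In practice, per-task executors determine whether each step lands on AWS, on-prem resources, or a quantum hardware provider, which is exactly the multi-environment coordination this role owns.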
DevOps & Automation
- CI/CD pipelines: Maintain and improve deployment automation for the Archaeo platform and supporting services
- Python automation: Write scripts and tools to automate infrastructure operations and workflow management
- Monitoring & observability: Implement and maintain monitoring, logging, and alerting for platform reliability
- Deployment management: Ensure smooth deployments and rollbacks for our application and research teams
- Code review & quality contribution: Participate in code review and quality processes, particularly when translating research code into production; this is a shared responsibility with the software engineering team, to which you bring your DevOps and infrastructure perspective
Collaboration & Support
- Support application scientists: Provide infrastructure support for 2 quantum physics PhDs running experiments and demonstrations
- Collaborate with software engineers: Work alongside 2 software engineers (one focused on the application side, one managing the DevOps transition)
- Cross-timezone coordination: Collaborate with international team members in Switzerland and the UK (7am EST availability required)
- Customer-facing readiness: Help prepare infrastructure for customer pilots and demonstrations
Required Qualifications
Cloud & Infrastructure Expertise:
- 5+ years of hands-on experience with AWS (EC2, S3, networking, security)
- Strong experience with Kubernetes in production environments
- Proficiency with Infrastructure as Code (Terraform preferred, Ansible acceptable)
- Deep understanding of cloud architecture, security, and cost management
DevOps & Automation:
- Proven track record building and maintaining CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, etc.)
- Strong Python scripting skills for automation and tooling
- Experience with monitoring and observability tools (Prometheus, Grafana, Datadog, etc.)
- Version control expertise (Git workflows, branching strategies)
Work Style & Culture Fit:
- Startup mentality: Comfortable with ambiguity, autonomous decision-making, and wearing multiple hats
- Early riser: Availability starting 7am EST for coordination with the international team
- In-office presence: Located in or willing to relocate to NYC; in-office 3+ days/week (10am attendance required)
- Self-directed: Can work independently, prioritize effectively, and drive projects to completion
- Previous startup experience strongly preferred over large tech company background
Specialised Experience:
- Familiarity with Covalent or similar workflow orchestration software (you'll learn this on the job, but existing experience is valuable)
- Experience with HPC (High-Performance Computing) environments or scientific computing workflows
- Data science/data management background or experience supporting data-intensive research teams
- Security expertise: Cloud security, compliance, secrets management, vulnerability scanning
- Exposure to quantum computing frameworks (Qiskit, PennyLane) or willingness to learn (2-3 month learning curve expected)
- Experience working closely with research scientists or PhD-level technical teams
- Batch job scheduling systems (Slurm, PBS, etc.) or queue-based workflow management
- Multi-cloud experience (GCP, Azure) though AWS is our primary platform
What Makes This Role Unique
Small Data, Big Compute:
Unlike typical web services (where scaling means handling millions of requests), our challenges are:
- Compute-intensive workloads with unique hardware constraints
- Hybrid workflows spanning classical cloud compute and quantum processors
- 45-minute+ delays possible due to quantum hardware queuing (a minimal polling sketch follows this list)
- Orchestrating workflows where each job is expensive but infrequent
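As a hedged illustration of those last two points, the sketch below shows one simple way to babysit a batch job that may sit in a quantum provider's queue for 45+ minutes: poll infrequently with a generous timeout rather than busy-wait on expensive compute. The `fetch_status` callable and the interval values are assumptions made for the example, not our actual tooling.

```python
import time
from typing import Callable

def wait_for_quantum_job(
    job_id: str,
    fetch_status: Callable[[str], str],
    poll_interval_s: float = 300.0,   # check every 5 minutes
    timeout_s: float = 4 * 3600.0,    # give up after 4 hours
) -> str:
    """Poll a long-running batch job instead of busy-waiting on it.

    Quantum hardware queues can add 45+ minutes of latency, so the loop
    polls infrequently and enforces a generous deadline rather than
    retrying aggressively.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status(job_id)  # e.g., a provider SDK or workflow-result lookup
        if status in {"COMPLETED", "FAILED", "CANCELLED"}:
            return status
        time.sleep(poll_interval_s)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout_s} seconds")
```

A production version would wire `fetch_status` to the provider's SDK or the orchestration layer's result API and feed the outcome into monitoring and alerting.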
Quantum Context (Demystified): You don't need to understand quantum physics or write quantum algorithms—our quantum physicists handle that. Your job is to:
- Provide reliable infrastructure for them to run their experiments
- Orchestrate workflows that integrate classical and quantum compute
- Learn enough about quantum computing to understand the constraints and requirements (we'll teach you)
R&D to Production Transition: We're not operating at Netflix scale, and we're not a mature product company. We're:
- Running demonstrations and pilot projects (like our Roblox collaboration)
- Building toward customer pilots over the next 1-2 years
- Pre-revenue, R&D-focused, but with real external stakeholders
- Moving from "works on my laptop" to "works reliably for customers"
Autonomy & Impact: You'll have significant influence over our infrastructure direction:
- Make architectural decisions about our cloud environment
- Choose tools and technologies for DevOps/automation
- Define best practices as we scale from demos to pilots to production
- Be a key technical voice in a small, collaborative team
- Participate in code review and quality processes (shared responsibility, not sole owner)