Unico Connect is hiring a DevOps Engineer on contract to support active client engagements across cloud infrastructure, CI/CD, and production operations. You will work with solution architects and engineering teams to build and run cloud environments that are reliable, secure, and cost-efficient. The role suits engineers with up to three years of hands-on experience who have owned real production workloads. As an AI-first organisation, we equip our engineers with AI-assisted tooling to accelerate delivery and improve quality.
Responsibilities
- Translate client requirements into infrastructure and deployment plans in collaboration with solution architects and engineering teams.
- Provision and manage cloud infrastructure on AWS (and Azure where applicable) using Terraform and Ansible.
- Operate and scale Kubernetes clusters (EKS/AKS) with autoscaling across dev, staging, and production.
- Build and maintain CI/CD pipelines using GitHub Actions, Jenkins, ArgoCD, Azure DevOps, or CodePipeline.
- Manage container lifecycle with Docker and ECR, including image builds, optimisation, and vulnerability scanning.
- Set up observability and alerting using Prometheus, Grafana, Loki, ELK, or OpenTelemetry.
- Support multi-region, multi-AZ, and DR architectures, including backups and failover testing.
- Implement security controls covering VPC, IAM, Secrets Manager, KMS, TLS automation, and network segmentation.
- Participate in incident response, root cause analysis (RCA), and compliance workflows (DPDP Act).
Requirements
- Up to 3 years of hands-on experience in a DevOps, Cloud, or SRE role.
- Working knowledge of core AWS services (EC2, S3, VPC, IAM, RDS, Route53, CloudWatch, EKS, Lambda).
- Practical experience with Kubernetes, Docker, and Helm.
- Proficiency with Terraform; familiarity with Ansible or AWS CDK is a plus.
- Hands-on with at least one CI/CD toolchain: GitHub Actions, Jenkins, ArgoCD, Azure DevOps, or CodePipeline.
- Strong Linux fundamentals and shell scripting; working knowledge of Python or PowerShell.
- Solid networking basics: DNS, load balancers, security groups, NACLs, VPN, ingress/egress.
- Familiarity with monitoring stacks (Prometheus, Grafana, Loki, ELK, CloudWatch) and secrets management (Secrets Manager, KMS).
- Exposure to Redis, RabbitMQ, Kafka/MSK, or SQS is preferred.
- Experience with incident, change, and problem management.
- Exposure to GPU workloads or AI/ML pipeline infrastructure is a plus.
- Cloud certifications (AWS, GCP, Azure) preferred but not mandatory.
- Bachelor's degree in Computer Science, IT, or a related discipline.