DevOps Engineer (2–3 years experience)
Location: Hyderabad
Company: Algohire Technologies Private Limited
Openings: 1
About the Role:
Algohire Technologies is looking for a hands-on DevOps Engineer with a strong foundation in containerization, automation, and hybrid infrastructure. You’ll be part of the team building and maintaining our hybrid environment, combining on-premises Kubernetes clusters, Docker Swarm deployments, and AWS-based services.
You’ll work alongside senior engineers to enhance reliability, automation, and observability across systems built on Kafka, ClickHouse, Redis, and PostgreSQL, monitored with Prometheus and Grafana.
This role is ideal for engineers who already have solid practical experience and are eager to take ownership of production-grade deployments at scale.
Key Responsibilities:
1. Kubernetes & Docker Swarm
- Deploy, configure, and maintain Kubernetes and Docker Swarm clusters (on-premises and hybrid).
- Ensure application availability, reliable cluster networking, and appropriate scaling.
2. Containerization & CI/CD
- Containerize applications using Docker.
- Develop and maintain build and deploy pipelines using Jenkins or GitHub Actions.
3. Infrastructure Automation
- Use Terraform and Ansible for provisioning and configuration.
- Manage both on-premises VMs and AWS resources (EC2, S3, RDS, IAM, VPC).
4. Monitoring & Observability
- Configure Prometheus, Node Exporter, Redis Exporter, Kafka Exporter, and Blackbox Exporter (a scripted target-health check is sketched after this list).
- Create Grafana dashboards for health, performance, and resource visibility.
5. Logging & Metrics
- Implement centralized logging using ELK Stack or Loki for containers and system services.
6. Networking & Load Balancing
- Manage HAProxy, ingress controllers, VPNs, and firewall rules for traffic routing.
7. Secrets Management
- Handle secure credentials with Doppler, Docker Swarm Secrets, or Kubernetes Secrets.
8. Message Streaming & Data Integration
- Support Kafka, Kafka Connect, and ClickHouse integrations with proper schema validation.
9. Database Reliability
- Monitor and maintain PostgreSQL (Patroni clusters), Redis, and ClickHouse performance and backups.
10. Disaster Recovery & Security
- Assist in implementing backup strategies, access controls, and incident response procedures.
11. Collaboration & Growth
- Work closely with backend and monitoring teams.
- Contribute ideas to improve deployment workflows and learn advanced hybrid DevOps practices.
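For illustration only, a minimal sketch of the kind of scripted monitoring check referenced in the Monitoring & Observability item above; the Prometheus address is a hypothetical placeholder, and the query simply lists scrape targets that are currently down.

```python
# Illustrative sketch only: a small Prometheus target-health check.
# The Prometheus URL below is a placeholder, not a value from this posting.
import sys
import requests

PROMETHEUS_URL = "http://prometheus.internal:9090"  # placeholder address


def query(expr: str) -> list:
    """Run an instant query against the Prometheus HTTP API and return the results."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": expr},
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "success":
        raise RuntimeError(f"query failed: {body}")
    return body["data"]["result"]


def main() -> int:
    # 'up == 0' returns scrape targets that Prometheus currently cannot reach.
    down_targets = query("up == 0")
    for sample in down_targets:
        labels = sample["metric"]
        print(f"DOWN: job={labels.get('job')} instance={labels.get('instance')}")
    # Non-zero exit code so a cron wrapper or CI job can alert on failures.
    return 1 if down_targets else 0


if __name__ == "__main__":
    sys.exit(main())
```

A check like this can run from cron or a pipeline step, with the exit code driving a simple alert.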
Required Qualifications:
- 2–3 years of DevOps or Infrastructure Engineering experience.
- Good understanding of Kubernetes and Docker Swarm architecture and operations.
- Experience with Terraform and Ansible for automation.
- Experience building CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Familiarity with AWS (EC2, S3, RDS, IAM, VPC).
- Working knowledge of Kafka, Redis, ClickHouse, and PostgreSQL.
- Understanding of Prometheus, Grafana, exporters, and alerting.
- Strong grasp of networking concepts: ingress, VPNs, DNS, and firewalls.
- Exposure to Vault or Doppler for secrets management.
- Proficiency in Bash or Python scripting (a brief example follows this list).
- Strong debugging and problem-solving mindset.
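For illustration only, a small sketch of the Python-level scripting mentioned above, assuming AWS credentials are already configured for boto3; the region and the "Name" tag are placeholders, not requirements of the role.

```python
# Illustrative sketch only: list running EC2 instances with their Name tag.
# Assumes boto3 credentials are configured; region and tag key are placeholders.
import boto3


def list_running_instances(region: str = "ap-south-1"):
    """Yield (instance_id, name_tag, instance_type) for running EC2 instances."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                yield (
                    instance["InstanceId"],
                    tags.get("Name", "-"),
                    instance["InstanceType"],
                )


if __name__ == "__main__":
    for instance_id, name, instance_type in list_running_instances():
        print(f"{instance_id}\t{name}\t{instance_type}")
```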
Preferred Skills:
- Experience with GitOps tools like ArgoCD.
- Understanding of service mesh (Istio).
- Knowledge of ClickHouse-Kafka integration and monitoring (see the sketch after this list).
- Familiarity with AWS EKS, Elastic Beanstalk, and Firebase deployment workflows.
- Awareness of security compliance, IAM roles, and encryption practices.
- Prior experience in startup or agile environments.
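For illustration only, a minimal ClickHouse probe related to the ClickHouse-Kafka monitoring skill above; the host name and the specific system-table check are assumptions for the sketch, not details of our setup.

```python
# Illustrative sketch only: confirm ClickHouse connectivity and count
# Kafka-engine tables. Host is a placeholder.
from clickhouse_driver import Client  # pip install clickhouse-driver


def clickhouse_kafka_tables(host: str = "clickhouse.internal") -> int:
    """Return the number of Kafka-engine tables visible on the server."""
    client = Client(host=host)
    rows = client.execute(
        "SELECT count() FROM system.tables WHERE engine = 'Kafka'"
    )
    return rows[0][0]


if __name__ == "__main__":
    print(f"Kafka-engine tables: {clickhouse_kafka_tables()}")
```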
What We Offer:
- Opportunity to work with hybrid (on-prem + cloud) infrastructure.
- Mentorship and learning from experienced DevOps engineers.
- Competitive salary, performance incentives, and growth opportunities.
- Collaborative and innovative work culture.