Job Description:
DevOps - Azure, Databricks, Terraform, GitHub
We are seeking a highly skilled Data DevOps Engineer to join our operations team. You will be responsible for building, automating, and maintaining scalable data pipelines on the Azure platform. This role bridges the gap between data engineering and IT operations, with a heavy focus on high-quality code, automated deployments, and system reliability.
Key Responsibilities:
Data Pipeline Engineering: Design and optimize scalable processing solutions using Azure Databricks with a focus on Scala and Python.
CI/CD & Infrastructure: Build and manage automated deployment pipelines using GitHub Actions and Jenkins while provisioning infrastructure via Terraform.
Quality & Testing: Own the reliability of the codebase by writing comprehensive unit tests (using frameworks such as pytest or ScalaTest) and performing thorough end-to-end (E2E) validations; a brief illustrative sketch follows this list.
Operations & Monitoring: Participate in the DevOps/Operations lifecycle, including monitoring production pipelines, troubleshooting performance issues, and ensuring data integrity.
Collaboration: Communicate complex technical requirements clearly to both technical and non-technical stakeholders.
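For context on the Quality & Testing expectations above, here is a minimal, illustrative sketch of a unit-testable PySpark transformation with a pytest test. The function, column names, and fixture are hypothetical examples chosen for this posting, not project code.

    # Illustrative only: a hypothetical PySpark transformation plus a pytest unit test.
    # The function clean_orders and its columns are assumptions, not project code.
    import pytest
    from pyspark.sql import SparkSession, DataFrame
    from pyspark.sql import functions as F


    def clean_orders(df: DataFrame) -> DataFrame:
        """Drop rows with null order IDs and normalise the status column."""
        return (
            df.filter(F.col("order_id").isNotNull())
              .withColumn("status", F.lower(F.trim(F.col("status"))))
        )


    @pytest.fixture(scope="module")
    def spark():
        # Local SparkSession so the test runs without a Databricks cluster.
        session = (
            SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()
        )
        yield session
        session.stop()


    def test_clean_orders_filters_nulls_and_normalises_status(spark):
        rows = [(1, " SHIPPED "), (None, "pending"), (2, "Delivered")]
        df = spark.createDataFrame(rows, ["order_id", "status"])

        result = clean_orders(df).collect()

        assert [r.order_id for r in result] == [1, 2]
        assert [r.status for r in result] == ["shipped", "delivered"]

The same approach applies on the Scala side, with transformations exercised through ScalaTest.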
Required Qualifications:
Experience: 3+ years in a Data Engineering or DevOps role.
Languages: Strong proficiency in Scala and Python for big data applications.
Cloud Platform: Hands-on expertise in Azure Databricks (Spark, Delta Lake, Unity Catalog).
Automation Tools: Proven experience with Git, GitHub Actions, Jenkins, and Terraform.
Testing Mindset: Demonstrated experience in building testable code and performing system-wide validation checks.
Soft Skills: Excellent verbal and written communication; ability to work effectively in a collaborative, operations-focused team.