Job Description
DevOps/Platform Engineer
Role Summary:
The DevOps/Platform Engineer ensures that all ingestion and ETL pipelines are deployed reliably through automated CI/CD processes, that infrastructure is provisioned as code, and that the platform operates efficiently with appropriate autoscaling and cost controls. This role bridges the gap between pipeline development and production-grade operations.
Key Responsibilities:
- Build and maintain CI/CD pipelines for automated testing, validation, and deployment of ingestion and ETL workloads across dev, staging, and production environments.
- Implement infrastructure-as-code (IaC) for all pipeline-related cloud resources using Terraform, CloudFormation, or CDK.
- Configure and manage secrets, credentials, and cross-account/VPC connectivity required for source system integration.
- Develop autoscaling patterns for ingestion compute resources to handle variable data volumes cost-effectively.
- Implement cost monitoring dashboards and provide ongoing optimization recommendations for ingestion compute spend.
- Collaborate with Boeing's platform, security, and network teams for approvals, quota increases, and connectivity provisioning.
- Ensure deployment processes support zero-downtime updates and rollback capabilities.
- Manage environment configurations, branching strategies, and release processes aligned with Boeing's CI/CD guidelines.
- Support the QA engineer with test environment provisioning and pipeline validation automation.
Required Skills & Qualifications:
- 5–7 years of experience in DevOps, platform engineering, or site reliability engineering (SRE) with a focus on data platforms.
- Expert-level AWS experience: IAM, VPC, CloudFormation/CDK/Terraform, CodePipeline, CodeBuild, ECS/EKS, S3, KMS.
- Strong experience building CI/CD pipelines for data workloads (not just application deployments).
- Proficiency in infrastructure-as-code: Terraform (preferred) or CloudFormation/CDK.
- Experience with container orchestration (ECS, EKS) and serverless compute and ETL services (Lambda, Glue).
- Hands-on knowledge of secrets management (AWS Secrets Manager, Parameter Store) and cross-account IAM patterns.
- Understanding of cost optimization levers for AWS data services (Glue, EMR, Kinesis, S3).
- Familiarity with monitoring and alerting tools such as CloudWatch and Grafana, including alarm and alert configuration.
- Scripting proficiency in Bash, Python, or similar.
- Experience working in regulated or security-conscious environments.
Preferred Skills:
- AWS certifications (Solutions Architect, DevOps Engineer).
- Experience with GitOps workflows (ArgoCD, Flux).
- Familiarity with FinOps practices and AWS cost management tooling.
- Prior experience supporting data lake or lakehouse infrastructure.