Onebridge, a Marlabs Company, is a global AI and data analytics consulting firm that empowers organizations to drive better outcomes through data and technology. Since 2005, we have partnered with some of the largest healthcare, life sciences, financial services, and government entities across the globe. We have an exciting opportunity for a highly skilled AWS Data Engineer to join our innovative and dynamic team.
Employment Type: Contract
Location: Indianapolis, IN - Open to Remote
Industry: IT & Services
AWS Data Engineer | About You
As an AWS Data Engineer, you are responsible for designing and operating cloud-native data pipelines that transform complex operational data from legacy systems into modern, scalable products. Your work is foundational to how the company onboards clients, integrates acquisitions, and evolves into a true platform business. You’re motivated by building reusable infrastructure that reduces complexity, improves data quality, and accelerates growth, and you’re excited to help shape a Platform team focused on schema management, validation frameworks, integrations, and automation.
AWS Data Engineer | Day-to-Day
- Build and maintain robust ETL/ELT pipelines that ingest, validate, and transform operational data from legacy systems.
- Design and enforce platform data standards, including schema governance and reusable models.
- Develop serverless workflows using AWS-native services such as Step Functions, Lambda, S3, and ECS, with infrastructure defined in Terraform.
- Create automation tools that empower non-engineering teams to execute data migrations reliably.
- Embed observability into pipelines with logging, error handling, rollback paths, and monitoring.
- Collaborate with engineering, product, and client-facing teams to define platform standards and long-term vision.
AWS Data Engineer | Skills & Experience
- 5+ years of experience building AWS-native data infrastructure with services such as Step Functions, Lambda, S3, ECS, and IAM, provisioned via Terraform.
- Proven track record of designing and delivering complex ETL/ELT pipelines for operational product data.
- Strong proficiency in Python and SQL for data transformation and pipeline development.
- Expertise in schema design, data modeling, validation frameworks, and scalable error handling.
- Familiarity with PostgreSQL and Microsoft SQL Server (T-SQL), with experience bridging data models across systems.
- Solid understanding of infrastructure-as-code (Terraform preferred) and DevOps automation practices.
- Excellent communication and collaboration skills across technical and non-technical teams.