Candidates for this position should preferably be based in Bangalore, India, and will be expected to comply with their team's hybrid work schedule requirements.
Who We Are:
Wayfair runs the largest custom e-commerce large parcel network in the United States, with approximately 1.6 million square meters of logistics space. The network is an inherently variable ecosystem that requires flexible, reliable, and resilient systems to operate efficiently.
The Data Services & Data Enablement team is looking for smart, passionate, and curious people who are excited to help us scale, support, and engineer our database, distributed analytic, and streaming infrastructure. Given the broad reach of the technologies we use, you will have the opportunity to grow your network and skills through exposure to new people and ideas across a diverse set of cutting-edge technologies. If you are fascinated by engineering extremely large and diverse data systems, and passionate about troubleshooting challenging technical problems in a rapidly innovating cloud environment, you could be a great fit.
What You’ll Do
- Write clean, high-performance, well-tested infrastructure code with a focus on reusability (Puppet / Python / Terraform / Packer).
- Build out a team of 8 engineers.
- Create and maintain detailed documentation.
- Establish, maintain, and adhere to Wayfair technical standards, policies, and procedures.
- Recommend and implement infrastructure best practices in alignment with standard SRE principles and provide guidance on system performance and throughput expectations.
- Support development teams in designing, scaling, and operating production data infrastructure potentially including, but not limited to, CloudSQL, Firestore, Pub/Sub, Kafka, Dataproc, Airflow.
- Lead pathfinding missions: taking existing platforms at Wayfair and helping move them to the cloud.
- Leverage software development skills to enable self-service deployment of distributed systems.
What You Have
- 10+ years of relevant industry experience in DevOps and/or work on cloud data systems in a senior or technical lead role
- Prior experience leading/managing engineers in an Agile environment
- Experience designing and deploying infrastructure in the cloud with durability and resilience in mind
- Experience with using cloud database, storage, and data services as part of an application (GCP, AWS, or Azure)
- Excellent communication skills and the ability to work effectively with engineers, product managers, and business stakeholders alike
Nice to Have
- Experience with real-time streaming tools and frameworks, such as Kafka, Pub/Sub, and Dataflow.
- Experience with modern orchestration tools and frameworks, such as Airflow and/or Composer.
- Experience with GCP’s data stores including Cloud SQL, Spanner, Firestore, and BigQuery.
- Experience with distributed data processing systems including Spark, Hive, and/or Presto.
- Experience with Kubernetes containerization, Java, and microservices