About McDonald’s:
One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.
Job Description:
We are excited to announce an opening for Data Engineer III at MCC India.
Please find below the details of the role and its responsibilities.
Skills Required:
Data Engineering, Software Engineering, Python, Java, Gradle, Maven, Terraform, New Relic, OpenTelemetry, DevOps, CI/CD, Kubernetes, Docker, Cloud Run, ECS, GCP, AWS, Confluent, Kafka, mParticle, Braze, streaming data
Experience Range: 7 - 11 years
Position Summary: We are seeking an experienced Engineer to lead the design, development, and support of the Marketer Customer Data Platform (mCDP), a large-scale customer data processing platform. This role focuses on building backend systems that ingest, process, and distribute high volumes of customer data in both real-time and batch environments, powering critical customer engagement initiatives and key revenue drivers. As an experienced member of the team, you will guide technical decisions, mentor junior engineers, and collaborate closely with architects, product owners, and cross-functional teams to deliver capabilities that are reliable, scalable, and high-performing.
Who we are looking for:
Primary Responsibilities:
- Lead the design, implementation, and evolution of scalable, secure, and resilient backend architectures supporting McDonald’s Marketer Customer Data Platform.
- Develop, test, and maintain cloud-native applications using Java and modern development practices.
- Build and support high-volume, low-latency transactional and data processing systems.
- Implement and maintain event-driven solutions (e.g., Kafka or equivalent) and streaming data pipelines.
- Drive technical best practices, code quality, and adherence to security and compliance standards.
- Mentor and coach junior and mid-level developers through code reviews, pair programming, and knowledge sharing.
- Participate in architecture discussions and provide guidance on system design, performance, and reliability.
- Collaborate across product, architecture, security, DevOps, and operations teams in a distributed, global environment.
- Contribute to DevOps and CI/CD practices, including automated deployments, infrastructure-as-code, and observability solutions.
- Implement monitoring, logging, tracing, and alerting to ensure production-readiness and operational excellence.
- Assist with investigations into complex incidents, performance tuning, and continuous improvement initiatives.
Required Qualifications:
- Extensive professional experience with Java (modern versions) and enterprise backend application development.
- Hands-on experience with Git for source control, collaboration, and CI/CD pipelines.
- Proficiency with build and dependency management tools such as Maven and/or Gradle.
- Deep experience with cloud-native development (AWS, Azure, or GCP), including identity and access management.
- Proven experience designing and supporting high-volume, distributed, and low-latency systems.
- Experience with event-driven architecture and streaming platforms (e.g., Kafka, Pub/Sub).
- Experience with containerized applications and orchestration platforms (Docker, Kubernetes).
- Experience working in Agile/Scrum SDLC environments and collaborating with distributed, global teams.
- Excellent problem-solving, analytical, and communication skills.
Preferred Qualifications:
- Experience in large-scale, multi-national, or enterprise organizations.
- Experience integrating backend services with marketing technology tools (e.g., mParticle, Segment, Adobe Campaign, Braze).
- Familiarity with API gateways, microservices architectures, and cloud-native data pipelines.
- Proven ability to navigate ambiguity and lead technical initiatives in a distributed environment.
- Experience with observability and monitoring platforms and frameworks (Prometheus, Grafana, New Relic, Kibana, OpenTelemetry).
- Strong understanding of application and platform security, including secure coding practices and security tools (Snyk, SAST, DAST, SCA).
Key Competencies:
- Strong ownership mindset with focus on quality, performance, and security.
- Ability to mentor, guide, and inspire team members.
- Strategic thinking with emphasis on scalability, reliability, and system design.
- Excellent collaboration and communication skills across technical and non-technical stakeholders.
- Continuous learner with a passion for modern engineering and cloud practices.