About Groww
We are a passionate group of people focused on making financial services accessible to every Indian through a multi-product platform. Each day, we help millions of customers take charge of their financial journey.
Customer obsession is in our DNA. Every product, design, and algorithm, down to the tiniest detail, is built with customers’ needs and convenience in mind.
Our people are our greatest strength. Everyone at Groww is driven by ownership, customer-centricity, integrity and the passion to constantly challenge the status quo.
Are you as passionate about defying conventions and creating something extraordinary as we are? Let’s chat.
Our Vision
Every individual deserves the knowledge, tools, and confidence to make informed financial decisions. At Groww, we are making sure every Indian feels empowered to do so through a cutting-edge multi-product platform offering a variety of financial services.
Our long-term vision is to become the trusted financial partner for millions of Indians.
Our Values
Our culture enables us to be what we are: India’s fastest-growing financial services company. It fosters an environment where collaboration, transparency, and open communication take center stage and hierarchies fade away. There is space for every individual to be themselves, feel motivated to bring their best to the table, and craft a promising career for themselves.
The values that form our foundation are:
- Radical customer centricity
- Ownership-driven culture
- Keeping everything simple
- Long-term thinking
- Complete transparency
EXPERTISE AND QUALIFICATIONS
What you’ll do:
- Providing 24x7 infra and platform support for the Data Platform infrastructure hosting data engineering workloads, while building processes and documenting “tribal” knowledge along the way.
- Managing application deployments and GKE platforms; automating and improving development and release processes.
- Creating, managing, and maintaining datastores and data platform infrastructure using IaC.
- Owning the end-to-end availability, performance, and capacity of applications and their infrastructure, and building and maintaining the corresponding observability with Prometheus, New Relic, ELK, and Loki.
- Owning and onboarding new applications through the production readiness review process.
- Managing SLOs, error budgets, and alerts, and performing root cause analysis for production errors (see the error-budget sketch after this list).
- Working with Core Infra, Dev, and Product teams to define SLOs, error budgets, and alerts.
- Working with the Dev team to build an in-depth understanding of the application architecture and its bottlenecks.
- Identifying observability gaps in applications and infrastructure, and working with stakeholders to close them.
- Managing outages, performing detailed RCAs with developers, and identifying ways to prevent recurrence.
- Automating toil and repetitive work.
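For illustration, below is a minimal sketch of the error-budget arithmetic this role works with day to day. The SLO target, window, and request counts are assumptions invented for the example, not Groww's actual numbers.

```python
# Minimal sketch of SLO / error-budget arithmetic for an HTTP service.
# SLO_TARGET, WINDOW_DAYS, and the request counts below are illustrative
# assumptions, not real production figures.

SLO_TARGET = 0.999   # 99.9% availability target over the window
WINDOW_DAYS = 30

def error_budget_report(total_requests: int, failed_requests: int) -> dict:
    """Compute how much of the window's error budget has been consumed."""
    allowed_failures = total_requests * (1 - SLO_TARGET)
    consumed = failed_requests / allowed_failures if allowed_failures else 0.0
    return {
        "availability": 1 - failed_requests / total_requests,
        "allowed_failures": allowed_failures,
        "budget_consumed_pct": consumed * 100,
        "budget_remaining_pct": max(0.0, (1 - consumed) * 100),
    }

if __name__ == "__main__":
    # Example: 50M requests in the 30-day window, 30k of them failed.
    report = error_budget_report(total_requests=50_000_000, failed_requests=30_000)
    for key, value in report.items():
        print(f"{key}: {value:.4f}")
```

The same ratio (failures observed vs. failures allowed by the SLO) is what burn-rate alerts in Prometheus or New Relic are typically built on.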
What We're Looking For:
- 6+ years of experience managing high-traffic, large-scale microservices and infrastructure, with excellent troubleshooting skills.
- Hands-on experience with distributed processing engines, distributed databases, and messaging queues (Kafka, Pub/Sub, RabbitMQ, etc.).
- Experienced in setting up and operating data platforms, data lakes, and data ingestion systems at scale.
- Ability to write core libraries (in Python and Golang) to interact with various internal data stores; a minimal sketch follows this list.
- Ability to define and support internal SLAs for common data infrastructure.
- Good to have: familiarity with BigQuery or Trino, Pinot, Airflow, and Superset or similar tools; familiarity with MongoDB and Redis is also a plus.
- Experience in troubleshooting, managing, and deploying containerized environments using Docker/containerd and Kubernetes is a must.
- Extensive experience with DNS, TCP/IP, UDP, gRPC, routing, and load balancing.
- Expertise in GitOps, Infrastructure as Code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, or Ansible.
- Expertise in Google Cloud Platform (GCP) and/or other relevant cloud infrastructure solutions such as AWS or Azure.
- Experience in building CI/CD pipelines with tools such as Jenkins, GitLab, Spinnaker, or Argo.
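As an illustration of the "core libraries to interact with internal data stores" point above, here is a small, hedged Python sketch: a retrying wrapper that could sit in front of any key-value client. The `DataStore` protocol and backoff parameters are assumptions made up for the example, not an existing internal API.

```python
# Illustrative sketch only: a thin, reusable retry wrapper around any
# key-value style data store client (Redis, Mongo, an internal service, ...).
# The DataStore protocol and backoff defaults are assumptions for the example.

import time
from typing import Optional, Protocol


class DataStore(Protocol):
    """Shape of whatever concrete client the platform team wraps."""
    def get(self, key: str) -> Optional[bytes]: ...
    def set(self, key: str, value: bytes) -> None: ...


class RetryingStore:
    """Adds bounded retries with exponential backoff around any DataStore."""

    def __init__(self, store: DataStore, retries: int = 3, base_delay: float = 0.1):
        self._store = store
        self._retries = retries
        self._base_delay = base_delay

    def get(self, key: str) -> Optional[bytes]:
        return self._call(self._store.get, key)

    def set(self, key: str, value: bytes) -> None:
        self._call(self._store.set, key, value)

    def _call(self, fn, *args):
        # Retry transient connection failures; re-raise on the final attempt.
        for attempt in range(self._retries):
            try:
                return fn(*args)
            except ConnectionError:
                if attempt == self._retries - 1:
                    raise
                time.sleep(self._base_delay * (2 ** attempt))
```

A real library would also add metrics, timeouts, and connection pooling, but the shape (a thin, well-instrumented wrapper that every service reuses) is the point.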