Full Time
Chandigarh

Role : Senior Analytics Engineer (Customer Success)

– Shift : 4 PM – 12 AM (overlap with US Eastern Time)

– Experience : 5 – 6 years (startup background strongly preferred)

– Focus mix : Data Analytics 50%, Data Engineering 30%, Customer Success 20%

Company Overview :

We are a unified financial-services CRM with an AI agent co-pilot that connects fragmented data, automates workflows, and powers outcome-driven customer journeys end-to-end, from lead to funding, especially in lending. The platform is expanding across broader financial-services use cases beyond mortgage.

Role Purpose :

Our customers (banks, NBFCs, lenders, fintechs) want trustworthy, decision-grade data and clear insights embedded in CRM workflows. You'll be the hands-on owner who models the data, builds scalable pipelines, ships crisp BI, and partners with customer teams to drive measurable business outcomes (conversion, funding velocity, retention/upsell, agent productivity).

Key Responsibilities :

Data Analytics – 50% :

– Translate customer goals into analytical frameworks, certified datasets, and BI assets (Power BI/Tableau/Metabase) with semantic layers and documentation.

– Build cohort, funnel, lifetime-value, and propensity analyses; run A/B experiments and readouts; turn findings into actions inside CRM workflows.

– Define source-of-truth metrics (lead → app → approval → funding, NCA/roll-rate overlays, agent productivity) and set up robust monitoring.
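
For illustration only, a minimal Python/pandas sketch of the kind of funnel rollup this work covers; the column names and sample data are assumptions, not the actual schema:

import pandas as pd

# Illustrative lead-level data: one row per lead, with a timestamp when each stage was reached.
leads = pd.DataFrame({
    "lead_id":     [101, 102, 103, 104, 105],
    "app_ts":      ["2024-01-02", None, "2024-01-03", "2024-01-04", None],
    "approval_ts": ["2024-01-05", None, None, "2024-01-06", None],
    "funding_ts":  ["2024-01-09", None, None, None, None],
})

# Stage counts for the lead → app → approval → funding funnel.
funnel = pd.Series({
    "lead": leads["lead_id"].notna().sum(),
    "app": leads["app_ts"].notna().sum(),
    "approval": leads["approval_ts"].notna().sum(),
    "funding": leads["funding_ts"].notna().sum(),
})

# Stage-over-stage conversion (NaN for the first stage).
report = pd.DataFrame({"count": funnel, "conversion_vs_prev": funnel / funnel.shift(1)})
print(report)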

Data Engineering – 30% :

– Design/operate lakehouse stacks on AWS (S3 + Glue Catalog + Apache Iceberg) feeding Redshift/Postgres; build ELT/ETL in PySpark/Python (see the illustrative sketch after this list).

– Optimize models for cost and performance; implement data contracts, tests, lineage, and CI/CD for data.

– Build reliable ingestion from product/CRM events and financial-services systems (e.g., loan origination, servicing, core-banking, bureau, KYC).

– Publish curated, self-serve datasets that power BI and CRM/AI-agent actions.
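
To make the lakehouse/ELT bullets concrete, a minimal PySpark sketch assuming a Glue-backed Iceberg catalog; the catalog name, bucket paths, table name, and event fields are placeholders, not the actual setup:

from pyspark.sql import SparkSession

# Register an Iceberg catalog named "glue" backed by the AWS Glue Catalog (names/paths are placeholders).
spark = (
    SparkSession.builder
    .appName("crm_events_elt")
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .config("spark.sql.catalog.glue.warehouse", "s3://example-lake/warehouse")
    .getOrCreate()
)

# Ingest raw CRM events landed in S3 (path is illustrative).
raw = spark.read.json("s3://example-lake/raw/crm_events/")

# Light transform: keep funnel-relevant fields and de-duplicate on event id.
events = (
    raw.select("event_id", "lead_id", "event_type", "event_ts")
       .dropDuplicates(["event_id"])
)

# Publish a curated Iceberg table in the Glue Catalog for BI and downstream CRM/AI-agent use.
events.writeTo("glue.curated.crm_funnel_events").using("iceberg").createOrReplace()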

Customer Success – 20% :

– Lead analytical onboarding : KPI design, data readiness, success plans, enablement/training.

– Run QBRs with quantified impact; prioritize roadmap asks by value; turn recurring insights into playbooks inside the platform.

– Advise on compliant data usage and controls in regulated environments.

Technology Stack :

– Cloud/Data : AWS, S3, Glue Catalog, Apache Iceberg, Redshift, Postgres, PySpark, Python

– Analytics/BI : SQL (expert), Python (pandas/NumPy), Power BI/Tableau/Metabase

– Nice to have : dbt, Airflow/Step Functions, Terraform, GitHub Actions, event streaming (Kinesis/Kafka), reverse ETL

Qualifications :

– 5-6 years across analytics + data engineering, ideally in startup or high-ownership environments.

– Fluency in SQL and dimensional/data-vault modeling; you turn messy multi-source data into clean, documented, high-trust datasets.

– Hands-on lakehouse experience (Iceberg on Glue), plus Redshift/Postgres performance tuning.

– Analytical storytelling : you connect metric movements to operational levers and ship changes that move the funnel.

– Domain comfort with financial-services data (PII handling, consent, encryption, data residency; familiarity with SOC 2/GDPR/PCI principles).

– Customer-facing strength : translate technical detail into business impact; handle exec and ops stakeholders with ease.

Good to have :

– Experience instrumenting product/CRM events and mapping to lending life-cycle stages.

– Exposure to AI/agent-driven workflows (prompted actions, guardrails, evaluation).

– Building cost-aware data stacks and usage-based BI governance at scale.

Job Features

Job Category: Developer
