Senior Data Engineer
Job Description:
Location: Ahmedabad
Role: Build, optimize, and maintain data pipelines and data platform components to support analytics and AI use cases for an e-commerce business.
Key Responsibilities
- Develop and maintain batch and streaming data pipelines from transactional, clickstream, and third-party sources.
- Work on data lake / lakehouse / warehouse environments across cloud platforms (e.g., AWS, Azure, Google Cloud Platform (GCP)).
- Build data models and implement medallion architecture (Bronze / Silver / Gold).
- Ensure data quality, reliability, lineage, and performance.
- Follow best practices for data governance, security, and PII handling.
- Collaborate with architects, analysts, and ML teams to deliver scalable data solutions.
- Optimize SQL, ETL/ELT workflows, and orchestration jobs for efficiency.
Required Skills
- 3–4+ years of experience in data engineering.
- Strong SQL and Python skills.
- Experience with streaming tools (Kafka, Pub/Sub, Spark, Flink).
- Hands-on experience building cloud-based data pipelines.
- Knowledge of data modeling: dimensional, data vault, medallion.
- Familiarity with lakehouse/warehouse tools (Databricks, Snowflake, BigQuery, Redshift, ClickHouse).
- Experience with orchestration tools (Airflow, Cloud Composer, etc.).
- Understanding of data governance, lineage, and privacy concepts (GDPR/CCPA).
- Experience with e-commerce or digital data is a plus.
Key Skills
- Data Engineer
- Data Pipelines
- Streaming (Kafka / Spark)
- SQL
- Python
- Data Lake / Warehouse
- Medallion Architecture
- Cloud Platforms (AWS / Azure / Google Cloud Platform (GCP))
- Databricks
- Snowflake
- Data Quality
- Data Governance
Apply Now
- Interested candidates are requested to apply for this job.
- Recruiters will evaluate your candidacy and get in touch with you.