Country/Region:  IN
Requisition ID:  33860
Location:  INDIA - NOIDA- BIRLASOFT OFFICE

Title:  Azure Databricks Developer

Description: 

Area(s) of responsibility

About Us:
Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative, design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities.

Job Title: Senior Data Engineer – Databricks, PySpark & Data Modeling
Experience Level: 8 to 10 Years – Grade 5A – Senior Technical Lead
Location: Mumbai, Pune, Noida, Bangalore
________________________________________
Job Description:
We are seeking a highly skilled Senior Data Engineer with strong expertise in Databricks, PySpark, and Data Modeling to lead the design and development of scalable data solutions on Azure.
Key Responsibilities:
•    Design, develop, and deploy robust data engineering solutions using Azure Databricks, PySpark, SQL, and Azure Data Factory (ADF).
•    Build and maintain scalable and efficient data pipelines to support growing data volume and complexity.
•    Lead data modeling efforts including conceptual, logical, and physical models for structured and semi-structured data.
•    Develop Modern Data Warehouse solutions leveraging Azure Data Lake, Delta Lake, and Databricks.
•    Implement batch and streaming data processing solutions for structured and unstructured data sources.
•    Create and maintain technical design documents, including application interface designs and data flow diagrams.
•    Collaborate with BI, analytics, and business teams to understand data requirements and deliver high-quality solutions.
•    Optimize data workflows and ensure data quality, consistency, and governance across platforms.
Required Skills & Experience:
•    2–3 years of hands-on experience with Azure Databricks in enterprise-scale projects.
•    2+ years of experience in PySpark for data transformation and processing.
•    2–3 years of experience in Azure Data Factory (ADF) for orchestration and integration.
•    Strong proficiency in SQL for data querying and manipulation.
•    Proven experience in data modeling (dimensional modeling, normalization, star/snowflake schema).
•    Solid understanding of Delta Lake architecture, data partitioning, and performance tuning.
•    Experience working with structured and semi-structured data (e.g., JSON, Parquet, Avro).
•    Familiarity with CI/CD pipelines, version control (Git), and agile methodologies is a plus.
________________________________________