Title: AWS Data Platform Framework Development Engineer
Empowered By Innovation
Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities.
Role: AWS Data Platform Framework Development Engineer
Location: Pune
Experience: 6-8 years
Shift: General/Afternoon
Role Overview
We are seeking a passionate and experienced AWS Data Platform Framework Development Engineer to join our team. You will help design, build, and maintain reusable, scalable frameworks for data ingestion, processing, and integration.
This role requires a framework-development mindset, emphasizing code reusability, scalability, and maintainability rather than one-off solutions. If you believe strongly in foundational software engineering principles such as OOP and SOLID, and have experience building frameworks or libraries that empower teams, this role is for you.
Key Responsibilities
- Design and develop reusable, scalable data processing frameworks and libraries for data ingestion, processing, and ETL pipelines on AWS, alongside the platform development team.
- Collaborate closely with framework developers, data engineers, architects, and analysts to standardize data pipelines and processing patterns.
- Develop and enhance Debezium/Kafka CDC pipeline frameworks to enable rapid instantiation of CDC data ingestion workflows.
- Build and maintain AWS Glue PySpark job frameworks aligned with medallion architecture principles.
- Implement and maintain ETL frameworks for loading data into Snowflake.
- Develop Infrastructure as Code using Terraform and GitHub to automate provisioning and deployment of platform components.
- Ensure platform reliability, scalability, and observability.
- Contribute to improving development standards, code reviews, and best practices focused on framework and platform engineering.
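The framework-first approach described in these responsibilities can be illustrated with a minimal sketch. All names here are hypothetical, and plain Python stands in for PySpark; the point is the open/closed design that lets teams plug in new sources without touching framework code:

```python
from abc import ABC, abstractmethod

class Source(ABC):
    """Abstract data source: one reusable contract, many implementations."""
    @abstractmethod
    def read(self) -> list[dict]: ...

class ListSource(Source):
    """A trivial in-memory source; real ones might wrap Kafka topics or S3 paths."""
    def __init__(self, rows: list[dict]):
        self.rows = rows
    def read(self) -> list[dict]:
        return self.rows

class IngestionPipeline:
    """Framework core: new sources and transforms plug in via constructor
    arguments, so this class never changes (open/closed principle)."""
    def __init__(self, source: Source, transforms=None):
        self.source = source
        self.transforms = transforms or []
    def run(self) -> list[dict]:
        records = self.source.read()
        for transform in self.transforms:
            records = [transform(r) for r in records]
        return records

# Teams instantiate pipelines declaratively instead of writing one-off scripts.
pipeline = IngestionPipeline(
    ListSource([{"id": "1"}, {"id": "2"}]),
    transforms=[lambda r: {**r, "id": int(r["id"])}],
)
print(pipeline.run())  # [{'id': 1}, {'id': 2}]
```

The same pattern scales to the real stack: a `Source` implementation per ingestion mechanism, with the pipeline class reused unchanged across workflows.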
Required Skills & Experience
- Master's degree in Software Engineering, Computer Science, or an equivalent field.
- AWS certifications (Solutions Architect Associate, Developer Associate, Data Engineer Associate).
- Strong software engineering background with expertise in Python, especially PySpark.
- Experience with, and a thorough understanding of, Kafka and Kafka Connect concepts.
- Proven track record of developing reusable frameworks or libraries with a focus on scalability and maintainability.
- Sound understanding and practical application of OOP principles (encapsulation, inheritance, polymorphism, abstraction) and SOLID design principles.
- Hands-on experience with AWS services (Glue, ECS, S3), plus Kafka (including Debezium) and Snowflake.
- Experience building and orchestrating data pipelines using Airflow or similar tools.
- Proficiency in Infrastructure as Code using Terraform.
- Familiarity with CI/CD workflows using GitHub or similar platforms.
- Strong problem-solving skills and the ability to write clean, modular, well-documented code.
- Excellent communication skills and the ability to work collaboratively in an international team of highly skilled IT engineers.
Preferred Qualifications
- Experience with Iceberg or other open table formats in a data lakehouse environment.
- Prior experience working on CDC (Change Data Capture) pipelines or Kafka streaming frameworks.
- Experience with big data processing frameworks.
- Understanding of medallion architecture and data lakehouse design patterns.
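For context on the medallion architecture mentioned above, here is a deliberately simplified sketch of the bronze/silver/gold layering. Plain Python dictionaries stand in for tables, and the function names are illustrative only; in practice each layer would be a Glue PySpark job writing to the lakehouse:

```python
# Bronze layer: raw records exactly as ingested (nothing dropped or typed).
bronze = [
    {"id": "1", "amt": "10.5"},
    {"id": "1", "amt": "10.5"},   # duplicate from a replayed CDC event
    {"id": "2", "amt": "bad"},    # malformed value from the source system
]

def to_silver(rows):
    """Silver layer: cleaned, typed, de-duplicated records."""
    seen, out = set(), []
    for r in rows:
        try:
            rec = {"id": int(r["id"]), "amt": float(r["amt"])}
        except ValueError:
            continue  # a real pipeline would quarantine malformed rows
        if rec["id"] not in seen:
            seen.add(rec["id"])
            out.append(rec)
    return out

def to_gold(rows):
    """Gold layer: business-level aggregate ready for consumers (e.g. Snowflake)."""
    return {"total_amt": sum(r["amt"] for r in rows)}

silver = to_silver(bronze)
print(to_gold(silver))  # {'total_amt': 10.5}
```

Each layer only reads from the one before it, which keeps lineage clear and lets downstream consumers depend on progressively higher-quality data.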