Title: Technical Lead - Testing Services
Area(s) of responsibility
About Birlasoft:
Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design thinking approach, driving societal progress by enabling our customers to run their businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.
About the Job – We are seeking a quality-focused QA Lead to be a key part of our team building generative AI applications. The role primarily involves developing and executing testing strategies to ensure our generative models and AI platform meet quality standards before release. Candidates must be literate in generative AI applications and environments.
Job Title - Manual Tester
Location: Noida
Educational Background: Bachelor’s degree in computer science, Information Technology, or related field.
Mode of Work- Hybrid
Experience Required - 6+ years
Roles and responsibilities
• Work with data scientists, ML engineers, and product managers to understand requirements for the generative AI models and applications under test.
• Develop and implement testing plans, test cases, and scripts to evaluate generative model performance, outputs, and overall platform functionality.
• Perform integration, regression, usability, and localization testing for generative AI components and systems.
• Identify key evaluation metrics and analyze generative model outputs against test criteria for accuracy, bias, toxicity, grammar, etc.
• Document and log defects, issues, and anomalies in Jira, and communicate status/results to cross-functional team members.
• Create and maintain up-to-date test documentation, requirements traceability matrices, and risk analysis reports.
• Proactively recommend improvements to QA processes, automated checks, and performance metrics to enhance generative AI quality.
• Stay updated on the latest best practices and tools for testing responsible AI systems.
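To illustrate the kind of automated check these responsibilities involve, here is a minimal sketch of a generative-output evaluation in Python. All names here are hypothetical: `generate()` is a stub standing in for the real model or platform call, and `BANNED_TERMS` is a placeholder for a proper toxicity classifier or blocklist.

```python
# Illustrative sketch only: a simple quality gate for generated text.
# generate() is a hypothetical stub; a real test would call the model under test.

BANNED_TERMS = {"hate", "slur"}  # placeholder for a real toxicity check


def generate(prompt: str) -> str:
    # Stub standing in for the actual generative model call.
    return "Birlasoft combines domain expertise with digital technologies."


def evaluate_output(text: str, min_words: int = 5, max_words: int = 200) -> dict:
    """Score a generated response against simple test criteria."""
    words = text.split()
    return {
        "non_empty": bool(text.strip()),
        "length_ok": min_words <= len(words) <= max_words,
        "toxicity_ok": not any(
            w.lower().strip(".,!?") in BANNED_TERMS for w in words
        ),
    }


result = evaluate_output(generate("Describe the company."))
assert all(result.values()), f"Quality check failed: {result}"
```

In practice such checks would be wired into a test runner (e.g. pytest) and logged to Jira on failure, alongside metric-based evaluations for accuracy and bias.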
Requirements:
• 6-8 years of software QA experience, with exposure to testing ML systems
• Hands-on experience with Python/R, SQL, Jupyter Notebooks. Familiarity with TensorFlow/PyTorch.
• Familiarity with Microsoft Azure Machine Learning Studio
• Strong analytical skills and quality-focused mindset
• Excellent communication skills to collaborate across technical teams
• Good understanding of bias, transparency, robustness challenges for AI systems
• Good to have – LangChain, LlamaIndex, etc.