Country/Region:  IN
Requisition ID:  32707
Work Model: 
Position Type: 
Salary Range: 
Location:  INDIA - PUNE - BIRLASOFT OFFICE - HINJAWADI

Title:  Sr Technical Lead-Testing Services

Description: 

Area(s) of responsibility

Job Description

Testing Lead – Agentic AI & Generative AI Platforms (Grade 5B)

Role Summary

We are seeking a Testing Lead (Grade 5B) with strong hands-on expertise in testing Agentic AI and Generative AI platforms. The role requires a deep understanding of GenAI systems, data structures, data pipelines, and automation. The Testing Lead will design and execute AI-centric test strategies for LLM-based applications, RAG pipelines, autonomous agents, APIs, and data workflows, while mentoring junior team members.

Key Responsibilities

1. GenAI & Agentic AI Testing Strategy

Define and implement testing strategies for GenAI and Agentic AI solutions including LLMs and RAG systems.

Design test approaches for non-deterministic AI behavior and failure modes such as hallucinations, bias, and prompt injection.

Establish evaluation metrics beyond exact-match validation.
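
For illustration only (not a requirement of the role): a minimal pytest-style sketch of a tolerance-based check that grades an LLM answer by keyword coverage rather than exact string equality. The prompts, keywords, and threshold are hypothetical placeholders.

```python
# Illustrative sketch: grade an LLM answer by keyword coverage instead of
# exact-match comparison, so paraphrased but correct answers still pass.
import pytest

def keyword_recall(answer, expected_keywords):
    """Fraction of expected keywords found (as substrings) in the answer."""
    text = answer.lower()
    hits = sum(1 for kw in expected_keywords if kw in text)
    return hits / len(expected_keywords) if expected_keywords else 1.0

@pytest.mark.parametrize("answer", [
    "Invoices are retained for seven years under the finance policy.",
    "Per policy, invoice records must be kept for seven years.",
])
def test_answer_covers_expected_facts(answer):
    # Both phrasings differ textually but carry the same facts.
    assert keyword_recall(answer, {"invoice", "seven", "years"}) >= 0.6
```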

2. Agentic AI & Workflow Validation

Validate multi-step agent workflows including tool orchestration, memory, retries, and guardrails.

Ensure predictable behavior and controlled variability in autonomous agents.
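
For illustration only: a minimal sketch of validating a recorded agent tool-call trace for guardrails, bounded retries, and successful termination. The trace format, allowed tools, and limits are hypothetical assumptions.

```python
# Illustrative sketch: check an agent run trace captured during a test.
from collections import Counter

ALLOWED_TOOLS = {"search_kb", "summarize", "send_email"}  # hypothetical guardrail list
MAX_RETRIES_PER_TOOL = 2

def validate_agent_trace(trace):
    """trace: list of dicts like {"tool": str, "status": "ok" | "retry" | "error"}."""
    for step in trace:
        assert step["tool"] in ALLOWED_TOOLS, f"guardrail breach: {step['tool']}"
    retries = Counter(step["tool"] for step in trace if step["status"] == "retry")
    for tool, count in retries.items():
        assert count <= MAX_RETRIES_PER_TOOL, f"too many retries on {tool}"
    assert trace[-1]["status"] == "ok", "workflow did not terminate successfully"

# Example trace from a single agent run
validate_agent_trace([
    {"tool": "search_kb", "status": "retry"},
    {"tool": "search_kb", "status": "ok"},
    {"tool": "summarize", "status": "ok"},
])
```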

3. RAG, LLM & Model Validation

Validate RAG pipelines end-to-end including ingestion, chunking, embeddings, retrieval, and grounding.

Test LLM responses across single-turn, multi-turn, and regression scenarios.
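
For illustration only: a minimal sketch of a grounding check that flags answer sentences with little vocabulary overlap against the retrieved chunks. The thresholds and data are hypothetical; production suites may use embedding similarity or LLM-as-judge scoring instead.

```python
# Illustrative sketch: estimate how much of an answer is grounded in the
# retrieved context by token overlap per sentence.
import re

def grounded_fraction(answer, retrieved_chunks):
    context_tokens = set(re.findall(r"[a-z0-9]+", " ".join(retrieved_chunks).lower()))
    sentences = [s for s in re.split(r"[.!?]", answer) if s.strip()]
    grounded = 0
    for sentence in sentences:
        tokens = set(re.findall(r"[a-z0-9]+", sentence.lower()))
        overlap = len(tokens & context_tokens) / max(len(tokens), 1)
        if overlap >= 0.5:
            grounded += 1
    return grounded / max(len(sentences), 1)

chunks = ["The warranty covers parts and labour for 24 months from purchase."]
answer = "The warranty covers parts and labour. It lasts 24 months from purchase."
assert grounded_fraction(answer, chunks) >= 0.8
```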

4. Test Automation & AI-Assisted Testing

Build and maintain automation using Python frameworks such as pytest or unittest (an illustrative sketch follows this section).

Automate API testing, data validation, and regression suites.

Leverage AI-assisted testing techniques such as synthetic data generation.
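
For illustration only: a pytest-style sketch of an API regression check driven by templated synthetic prompts. The endpoint URL, payload shape, and response fields are assumptions, not the actual platform contract.

```python
# Illustrative sketch: contract/regression checks against a hypothetical
# inference endpoint, using templated prompts as lightweight synthetic data.
import requests
import pytest

INFERENCE_URL = "https://example.internal/api/v1/generate"  # placeholder URL

TEMPLATES = ["Summarise the refund policy for {region}.", "List the SLA for {region} in one line."]
REGIONS = ["EMEA", "APAC"]
PROMPTS = [t.format(region=r) for t in TEMPLATES for r in REGIONS]

@pytest.mark.parametrize("prompt", PROMPTS)
def test_generate_endpoint_contract(prompt):
    resp = requests.post(INFERENCE_URL, json={"prompt": prompt}, timeout=30)
    assert resp.status_code == 200
    body = resp.json()
    # Contract checks: response carries non-empty text and usage metadata.
    assert isinstance(body.get("text"), str) and body["text"].strip()
    assert body.get("usage", {}).get("total_tokens", 0) > 0
```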

5. Performance, Reliability & Responsible AI

Conduct performance and scalability testing for inference, agents, and APIs.

Validate responsible AI requirements including bias, drift, security, and compliance.
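
For illustration only: a minimal sketch of a p95 latency check over repeated inference calls. call_model and the 2-second threshold are hypothetical placeholders for the system under test and its SLO.

```python
# Illustrative sketch: measure p95 latency of an inference call.
import time
import statistics

def call_model(prompt):
    # Placeholder for a real inference/API call.
    time.sleep(0.01)
    return "ok"

def p95_latency(prompt, runs=50):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        call_model(prompt)
        samples.append(time.perf_counter() - start)
    # quantiles with n=20 yields 5% steps; index 18 is the 95th percentile.
    return statistics.quantiles(samples, n=20)[18]

assert p95_latency("Summarise the refund policy.") < 2.0  # hypothetical SLO
```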

6. Collaboration & Mentorship

Mentor junior testers and review test artifacts.

Collaborate with architects, data scientists, and engineers to embed quality early.

 

Required Skills & Experience

Must-Have:

• 7–9 years of experience in QA / Testing

• Hands-on experience in AI/ML or GenAI testing

• Experience testing APIs, data pipelines, and LLM-based systems

Good-to-Have:

• Experience with RAG systems and vector databases

• Exposure to Agentic AI frameworks and cloud platforms (Azure preferred)

• CI/CD and DevOps testing exposure

Education

Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or related fields.

Suggested Questions:

  1. How many Agentic AI / GenAI projects have you worked on?
  2. How is testing an AI / GenAI application different from testing a traditional software application?
  3. What types of testing would you perform for APIs or data pipelines in an AI-based system?
  4. Why are data quality and validation critical when testing AI or GenAI systems?