Date:  Feb 18, 2026

AI Validation & Model Assurance

Location: 

Level:  Supervisor
Employment Status:  Permanent
Department:  Office of Chief Data & AI
Description: 

Role Purpose

Ensure that AI and machine learning models are robust, reliable, fair, and fit for purpose throughout their lifecycle.

Role Description

The AI Validation & Model Assurance role is responsible for ensuring that AI and machine learning models are robust, reliable, fair, and fit for purpose throughout their lifecycle. This role provides independent assurance that models meet defined performance, fairness, and stability standards before and after deployment, and that appropriate human oversight is embedded in AI-assisted decision-making.

The AI Validation & Model Assurance function establishes and enforces model validation standards across the AI lifecycle, from development and testing through deployment and ongoing monitoring. Working closely with data science, engineering, business, and risk stakeholders, the role designs and executes validation activities to assess model accuracy, bias, robustness, and long-term performance stability.

This role plays a critical gatekeeping function by reviewing validation evidence, approving model promotion into production, and requiring remediation actions where models fail to meet defined thresholds. It also ensures that Human-in-the-Loop (HITL) and human oversight controls are appropriately designed and applied to support accountable and trustworthy AI use.

Key Responsibilities

  • Define and maintain model validation and testing standards across the AI lifecycle
  • Design and execute pre-deployment and post-deployment validation activities
  • Assess and document model bias, fairness, accuracy, robustness, and performance stability
  • Monitor and evaluate model drift (data drift, concept drift, performance drift)
  • Define and enforce Human-in-the-Loop (HITL) and human oversight controls for AI-assisted decisions
  • Review validation evidence and gate model promotion from pre-production to production
  • Require remediation, retraining, or rollback where models fail to meet defined thresholds
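As an illustration of the drift-monitoring responsibility above, a minimal data-drift check might use the Population Stability Index (PSI). The metric choice, bin count, and thresholds below are illustrative assumptions, not standards mandated by this role:

```python
# Illustrative sketch: Population Stability Index (PSI) for data-drift checks.
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample."""
    # Bin edges come from the baseline (expected) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions; a small floor avoids log(0) and division by zero.
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# A common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drifted.
baseline = np.random.default_rng(0).normal(0.0, 1.0, 5000)
live = np.random.default_rng(1).normal(0.5, 1.0, 5000)  # shifted mean
print(psi(baseline, baseline))  # identical samples: near zero
print(psi(baseline, live))      # shifted sample: flags drift
```

In practice the same pattern extends to concept and performance drift by comparing score distributions or rolling metrics against the validation baseline.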

Key Deliverables

  • Model Validation Standards
  • Bias & Fairness Assessment Reports
  • Model Approval / Rejection Decisions
  • Monitoring & Retraining Criteria
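The gatekeeping deliverables above (approval/rejection decisions against monitoring and retraining criteria) can be sketched as a simple threshold gate. All metric names and threshold values here are illustrative assumptions, not figures prescribed by this role:

```python
# Illustrative sketch of a promotion gate: compare validation evidence
# against pre-agreed thresholds and return an approve/reject decision.
THRESHOLDS = {
    "auc_min": 0.75,            # minimum discriminative performance
    "fairness_gap_max": 0.05,   # maximum allowed group disparity
    "psi_max": 0.25,            # maximum allowed input drift vs. training data
}

def gate(evidence: dict) -> tuple[bool, list[str]]:
    """Return (approved, reasons_for_rejection) for a model candidate."""
    failures = []
    if evidence["auc"] < THRESHOLDS["auc_min"]:
        failures.append(f"AUC {evidence['auc']:.3f} below minimum")
    if evidence["fairness_gap"] > THRESHOLDS["fairness_gap_max"]:
        failures.append(f"fairness gap {evidence['fairness_gap']:.3f} exceeds limit")
    if evidence["psi"] > THRESHOLDS["psi_max"]:
        failures.append(f"PSI {evidence['psi']:.3f} exceeds limit")
    return (not failures, failures)

approved, reasons = gate({"auc": 0.81, "fairness_gap": 0.09, "psi": 0.12})
print(approved)  # False: the fairness gap breaches its threshold
print(reasons)
```

A rejection here would trigger the remediation, retraining, or rollback actions described under Key Responsibilities, with the failure reasons recorded as validation evidence.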

Requirements

Required Knowledge & Skills

  • Strong understanding of end-to-end AI/ML model development lifecycle, including problem framing, data preparation, feature engineering, model training, evaluation, deployment, and monitoring
  • Knowledge of bias and fairness assessment techniques and limitations
  • Experience with model performance metrics, stress testing, and robustness testing
  • Understanding of explainability and transparency methods appropriate to risk level
  • Ability to define control thresholds, acceptance criteria, and approval conditions
  • Ability to operate independently from model developers to provide objective assurance
  • Experience working with model documentation (model cards, validation reports, testing logs)
  • Familiarity with model approval committees and formal sign-off processes

Experience

  • 3–7+ years of experience in model validation, model risk management, AI assurance, or data science governance
  • Experience validating ML and/or Generative AI models in production environments preferred
  • Exposure to regulated or high-risk decision systems is an advantage

Behavioral Competencies

  • Strong analytical and critical-thinking skills
  • Detail-oriented with a strong quality and control mindset
  • Confident in challenging model readiness and deployment decisions
  • Clear communicator with technical and non-technical stakeholders