Date:  Feb 18, 2026

AI Risk & Compliance

Location: 

ID

Level:  Supervisor
Employment Status:  Permanent
Department:  Office of Chief Data & AI
Description: 

Role Purpose

To ensure that the organisation’s use of artificial intelligence is ethical, compliant, and aligned with regulatory, legal, and internal governance requirements.

Role Description

The AI Risk & Compliance role is responsible for ensuring that the organisation’s use of artificial intelligence is ethical, compliant, and aligned with regulatory, legal, and internal governance requirements. This role provides oversight across the AI lifecycle, from use-case ideation to deployment and ongoing monitoring, by identifying, assessing, and mitigating risks related to AI systems while enabling responsible innovation.

The AI Risk & Compliance function works closely with business units, technology teams, legal, privacy, and risk stakeholders to evaluate and manage risks arising from the development and deployment of AI use cases. The role ensures that AI systems comply with applicable data protection laws (including the PDPA), internal policies, and sector-specific regulations, and that appropriate controls are in place for high-risk and sensitive use cases.

Key Responsibilities

  • Conduct AI risk assessments covering use-case context, model design, data sources, and deployment
  • Work with relevant stakeholders to identify and assess ethical, legal, operational, privacy, and reputational risks associated with AI systems
  • Ensure AI use cases comply with the PDPA, internal governance policies, and applicable sector regulations
  • Define and apply risk controls for high-risk and sensitive AI use cases
  • Manage third-party and vendor AI risk
  • Support internal audits, regulatory examinations, and compliance reviews related to AI use cases

Key Deliverables

  • AI Risk Framework & Control Library
  • AI Use Case Risk Assessment Reports
  • Compliance checklists & audit evidence
  • Vendor AI risk assessments

Competencies

Required Knowledge & Skills

  • Strong understanding of the AI/ML model lifecycle; experience in end-to-end AI model development is advantageous
  • Strong understanding of AI risk management concepts across data, model, and decision layers
  • Practical knowledge of PDPA and data protection principles (consent, purpose limitation, data minimization, retention)
  • Familiarity with technology, model, or operational risk frameworks
  • Experience assessing vendor AI solutions, including black-box models and cloud-based AI services
  • Ability to interpret regulatory requirements and translate them into actionable controls

Experience

  • 3–7+ years of experience in AI governance, technology risk, data privacy, compliance, or model risk
  • Experience working in regulated environments preferred
  • Exposure to audits or regulatory engagements involving AI, data, or advanced analytics

Behavioral Competencies

  • Strong risk and control mindset
  • High attention to detail and documentation discipline
  • Ability to challenge AI use cases constructively and independently
  • Clear communication with technical, legal, and business stakeholders