AI Safety Lead

Google DeepMind·London, UK

AI Safety Lead · Full-time
£180K-£300K · Posted 2 weeks ago

About the Role

Lead Google DeepMind's AI Safety team in developing evaluation frameworks, red-teaming methodologies, and safety guardrails for Gemini models. You will set the research agenda, collaborate with policy teams, and ensure safe deployment of frontier AI systems.

Requirements

  • 8+ years in AI/ML with 3+ years in safety or alignment
  • PhD in Computer Science, AI, or related field
  • Track record of leading research teams
  • Published work in AI safety, alignment, or robustness
  • Experience with safety evaluation at scale

Nice to Have

  • Experience with AI governance and policy
  • Background in formal verification or testing
  • International collaboration experience
  • Media or public communication experience

Benefits

  • Google L7+ compensation
  • Leadership equity package
  • Relocation to London supported
  • Private healthcare for family
  • Generous research budget
  • Sabbatical program

Skills

AI Safety · Leadership · Research · Red-Teaming · Evaluation · Policy
