Research Engineer — Alignment

OpenAI·San Francisco, CA

AI Safety · Senior · Full-time
$230K-$400K · Posted 2 months ago

About the Role

OpenAI is seeking a Research Engineer to work on superalignment — developing techniques to align AI systems that may become more capable than humans. You will build evaluation systems, develop alignment techniques, and work at the frontier of AI safety.

Requirements

  • 4+ years of ML engineering or research engineering
  • Strong implementation skills in Python and PyTorch
  • Understanding of alignment concepts and challenges
  • Experience with large-scale model training
  • Strong analytical and problem-solving skills

Nice to Have

  • Publications in AI safety or alignment
  • Experience with interpretability tools
  • Background in formal methods or verification
  • Familiarity with scalable oversight techniques

Benefits

  • Top-tier equity package
  • Full health benefits
  • Unlimited PTO
  • Free meals
  • Learning budget
  • Mission-driven work

Skills

AI Safety · Alignment · PyTorch · Python · Research Engineering · Evaluation

