Research Scientist — Alignment

Mistral AI·Paris, France

AI Safety · Senior · Full-time
€130K-€220K · Posted 2 months ago

About the Role

Mistral AI is hiring a Research Scientist to work on alignment techniques for open-weight language models. You will develop methods for steering model behavior, design evaluation frameworks, and contribute to making powerful open models safer.

Requirements

  • PhD or equivalent in ML with focus on alignment or safety
  • Research experience with RLHF, DPO, or constitutional methods
  • Strong implementation skills in Python and PyTorch
  • Publications in AI safety or alignment venues
  • Deep understanding of LLM behavior and failure modes
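As context for candidates, the DPO method listed above optimizes a simple preference loss. Below is a minimal, illustrative sketch of that per-example loss in plain Python, assuming the per-sequence log-probabilities under the policy and a frozen reference model have already been computed (the function and argument names here are hypothetical, not from any particular codebase):

```python
import math

def dpo_loss(policy_chosen: float, policy_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid of the beta-scaled
    difference in log-prob margins between policy and reference."""
    margin = beta * ((policy_chosen - ref_chosen)
                     - (policy_rejected - ref_rejected))
    # -log(sigmoid(margin)), written in a numerically direct form
    return math.log1p(math.exp(-margin))

# When the policy matches the reference, the margin is zero and the
# loss is log(2); preferring the chosen response lowers the loss.
```

In practice this would be computed batched in PyTorch over token-level log-probabilities, but the scalar form above captures the objective.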

Nice to Have

  • Experience with open-weight model alignment
  • Background in reward modeling
  • Familiarity with European AI regulation
  • Multilingual evaluation experience

Benefits

  • Equity in a European AI leader
  • French social benefits
  • Central Paris office
  • Conference budget
  • Research freedom
  • Relocation support

Skills

Alignment · RLHF · DPO · PyTorch · Research · AI Safety
