AIM Intelligence stress tests LLMs at scale. We are the robustness layer that eliminates the risk of deploying language models in any setting. To prevent these systems from failing, we preemptively discover the ways in which they can fail and continuously eliminate them in deployment.
We are looking for Research Engineers to help us develop fundamental safety tooling for LLMs. Your work will set the standard not only for research, but also for how LLMs are tested, verified, and applied across customers, companies, and industries. You will directly influence how the world responsibly uses LLMs.
Responsibilities
Qualifications
Location policy: 6 days a week, in person, in Seoul.
We encourage you to apply even if you do not believe you meet every single qualification. We are open to a wide range of perspectives and experiences, and would love to chat with you.
Compensation and Benefits: AIM Intelligence provides a generous salary, equity, and benefits.
We are not here to write GPT wrappers or get rich quick off the AI bubble. We are here to work on the hardest, most fundamental research problem in AI: making it reliable and robust. Come here to push yourself, learn fast, experience excellence, and kickstart your life's work. We value our team above all else, and firmly believe that greatness begets greatness.