Job description
Research Engineer – AI Reliability Platform – NYC (On-Site)
A cutting-edge AI startup is seeking Research Engineers to advance its mission of making AI systems safer, more reliable, and production-ready. This role offers the chance to develop innovative frameworks, algorithms, and tools for testing and optimizing large language model (LLM) applications.
Key Responsibilities:
- Develop optimization, fuzz-testing, and synthetic data generation methods to stress-test LLM systems (see the illustrative sketch after this list).
- Design and implement automated evaluation models and anomaly detection systems.
- Rapidly prototype research ideas and iterate on experiments to deliver practical solutions.
- Collaborate with customers to adapt tools for diverse applications and domains.
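As a rough illustration of what the fuzz-testing responsibility can involve, here is a minimal Python sketch of a character-level prompt-fuzzing loop. Everything in it is illustrative and assumed rather than taken from the posting: `call_llm` is a stand-in stub for whatever client the application exposes, and the perturbation and divergence checks are deliberately simplistic.

```python
# Illustrative sketch only: a minimal prompt-fuzzing loop for an LLM application.
# `call_llm` is a placeholder stub, not an API named in the posting.
import random
import string

def call_llm(prompt: str) -> str:
    """Placeholder for the application's LLM call; stubbed so the sketch runs."""
    return prompt.strip().lower()

def perturb(prompt: str, rng: random.Random) -> str:
    """Apply one simple character-level perturbation (insert, delete, or swap)."""
    chars = list(prompt)
    op = rng.choice(["insert", "delete", "swap"])
    i = rng.randrange(len(chars))
    if op == "insert":
        chars.insert(i, rng.choice(string.ascii_letters + string.punctuation))
    elif op == "delete" and len(chars) > 1:
        del chars[i]
    elif op == "swap" and len(chars) > 1:
        j = rng.randrange(len(chars))
        chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

def fuzz(prompt: str, trials: int = 50, seed: int = 0) -> list[tuple[str, str]]:
    """Return (perturbed_prompt, response) pairs whose responses diverge
    from the baseline response, flagging cases worth further review."""
    rng = random.Random(seed)
    baseline = call_llm(prompt)
    divergent = []
    for _ in range(trials):
        mutated = perturb(prompt, rng)
        response = call_llm(mutated)
        if response != baseline:
            divergent.append((mutated, response))
    return divergent

if __name__ == "__main__":
    for mutated, response in fuzz("Summarize the quarterly report."):
        print(f"{mutated!r} -> {response!r}")
```

In practice the perturbations would be richer (e.g. synthetic data generation or paraphrasing) and divergence would be scored by an automated evaluation model rather than exact string comparison, in line with the responsibilities listed above.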
Ideal Candidate:
- Proven research background with first-author publications in top ML venues (NeurIPS, ICML, ICLR).
- Hands-on engineering experience with ML systems—clean, production-quality code is a must.
- Familiarity with concepts such as active learning, weak supervision, reinforcement learning, and automated evaluation.
- Action-oriented mindset—excited to experiment, iterate quickly, and build solutions that scale.
- Experience in dynamic testing, optimization, or observability tools is a plus.
Why Join?
- Competitive compensation ($250,000 – $600,000) with equity and benefits.
- Work with a high-talent, impact-driven team that includes top researchers, engineers, and AI experts.
- Opportunity to tackle one of AI’s toughest challenges—building robust, safe, and scalable LLM applications.
- In-person, NYC-based culture designed for collaboration, excellence, and growth.
Ready to Shape the Future of AI Reliability? Apply now to join a company pushing the boundaries of AI safety and performance!