Job Overview
Introduction

The Center for AI Safety (CAIS) is a leading research and field-building organization on a mission to reduce societal-scale risks from AI. Alongside our sister organization, the CAIS Action Fund, we tackle the toughest AI issues with a mix of technical, societal, and policy solutions. Our work includes publishing a global statement on AI risk signed by Geoffrey Hinton, Yoshua Bengio, and CEOs of the major AI labs; leading the charge on a major AI safety bill (CA SB 1047); and running a large compute cluster for academic researchers around the world working on AI safety. Our founder and Director, Dan Hendrycks, is a leading AI researcher (he contributed the most widely used LLM benchmark and has over 25,000 citations) and is an AI Safety Advisor to xAI.

As a research engineer intern, you will work closely with our researchers on projects in areas such as Trojans, adversarial robustness, power aversion, machine ethics, and out-of-distribution detection. You will be assigned a dedicated mentor throughout your internship, but we will ultimately treat you as a colleague: you will have the opportunity to make the case for your own experiments or projects and defend their impact. You will plan and run experiments, conduct code reviews, and work in a small team to produce a publication with outsized impact. You will leverage our internal compute cluster to run experiments at scale on large language models. We hope you will view this opportunity as the start of a long-term collaboration with CAIS.

You might be a good fit if you:
- Are able to read an ML paper, understand the key result, and understand how it fits into the broader literature
- Are comfortable setting up, launching, and debugging ML experiments
- Are familiar with relevant frameworks and libraries (e.g., PyTorch)
- Communicate clearly and promptly with teammates
- Take ownership of your individual part in a project
- Have co-authored an ML paper at a top conference
About Us

The Center for AI Safety is a non-profit dedicated to ensuring the safety of future artificial intelligence systems. We believe that artificial intelligence will be a powerful technology that will dramatically change society, and that AI safety must therefore be pursued proactively. To this end, we conduct research into machine learning safety and facilitate field-building projects that accelerate the growth of the safety community. Join us in steering the future of AI.

If you have any questions about the role, feel free to reach out to hiring@safe.ai.

The Center for AI Safety is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. Some studies have found that a higher percentage of women and underrepresented minority candidates won't apply if they don't meet every listed qualification. The Center for AI Safety values candidates of all backgrounds. If you find yourself excited by the position but you don't check every box in the description, we encourage you to apply anyway!