Research Fellow - Saxe Lab - Deep Learning Theory & AI Safety
About us
The Gatsby Unit (GCNU) at UCL has been at the forefront of theoretical neuroscience and machine learning since its establishment in 1998. It maintains a singular and cohesive research culture, emphasising interaction and collaboration both within the unit and beyond. The Sainsbury Wellcome Centre (SWC) brings together world-leading scientists to investigate how brain circuits process information to generate perception, form memories and guide behaviour, with the mission of generating experimentally testable theories of brain function.
Research in both units benefits from close links to the exceptional scientific communities at UCL in neuroscience, machine learning and AI. We actively encourage collaboration within and outside our Units, supported by a generous annual travel allowance for conference, workshop and research visits.
Further details of our research are available at GatsbyUnit Research and SWC Research Overview.
About the role
We are now inviting applications for a post-doctoral training Fellowship under the guidance of Professor Andrew Saxe. The Saxe Lab works across the Gatsby Unit and the SWC and is focussed on understanding learning in biological and artificial systems.
This role lies at the interface of deep learning theory and AI safety. You will conduct research into artificial deep networks using techniques from applied mathematics and statistical physics. A particular focus is on understanding how depth affects learned representations and network behaviour, with applications to AI safety including emergent misalignment, unlearning and the efficacy of safety fine-tuning.
The project is a collaboration with the Sarao Mannelli group at Chalmers University of Technology, where several additional Research Fellows will be based. Funding for collaborative visits is included.
This post is funded for two years by a grant from OpenAI's Alignment Team, awarded by the UK AI Security Institute, through the Alignment Project.
About you
You should have a PhD in Computer Science, Physics or a closely related discipline (Engineering, Theoretical Neuroscience, Mathematics), or have submitted your final thesis by the agreed start date of the position. A proven track record of publishing lead-author work on the theory of deep learning or AI safety is essential, as is a demonstrable interest in the mathematical analysis of artificial neural network models.
A demonstrable interest in AI safety, together with a track record of running controlled empirical experiments on large deep network models, whether on HPC clusters or via frontier model APIs, is desirable.
To apply, please click Apply Now and submit your CV, which should include the names of 3 referees, and, in the Attachments section (Research Paper 1), a statement covering your research accomplishments. There is no requirement to upload any papers you have authored.
What we offer
We offer competitive salaries and an award-winning work environment. You will work in a vibrant, interactive and collaborative environment, with world-class PhD programmes, generous core funding and travel allowances. Our facilities include an on-site high-performance computing platform, an extensive seminar programme and interaction space, an on-site brasserie, and outdoor spaces. Our staff are entitled to UCL's full range of staff benefits, including a generous annual leave entitlement, family-friendly policies such as occupational shared parental pay, pension schemes and a range of financial benefits such as a season ticket loan scheme and staff discounts.
The pay scale for this Grade 7 role is £45,103 - £52,586. Grade 7 Research Fellows will also be eligible for a departmental allowance of up to £5,000 p.a.
Our commitment to Equality, Diversity and Inclusion
As London’s Global University, we know diversity fosters creativity and innovation, and we want our community to represent the diversity of the world’s talent. We are committed to equality of opportunity, to being fair and inclusive, and to being a place where we all belong. We therefore particularly encourage applications from candidates who are likely to be underrepresented in UCL’s workforce in this type of role. Interest from women and those from an ethnic minority background is particularly welcome, as they are under-represented within UCL at these levels. We also positively recognise and welcome applications from applicants who have had career breaks.
The Athena Swan Charter recognises commitment to advancing women's careers in science, technology, engineering, maths, and medicine (STEMM) employment in academia. GCNU/SWC currently holds an Athena Swan Bronze Award.