Careers

Research Engineer, Frontier Safety Loss of Control, DeepMind

DeepMind, San Francisco, CA, USA
Applicants in San Francisco: Qualified applications with arrest or conviction records will be considered for employment in accordance with the San Francisco Fair Chance Ordinance for Employers and the California Fair Chance Act.

Minimum qualifications:

  • Bachelor's degree in Computer Science, Machine Learning, or a related technical field, or equivalent practical experience.
  • 5 years of experience in software engineering and agentic AI systems, including software development in Python.
  • Experience working in a frontier AI research and development environment.
  • Experience working in a professional software engineering or research team environment.
  • Experience working with technical stakeholders.
  • Experience with frontier model risk.

Preferred qualifications:

  • Experience in engineering or product design for AI tools or assistants, especially those focused on ML research and development (R&D).
  • Experience with cybersecurity detection and response.
  • Experience with collaborating or leading an applied ML project.
  • Experience with research, including LLM training and inference.
  • Knowledge of AI control; chain-of-thought and other forms of monitoring; faithfulness and monitorability; and related research areas.

About the job

Our team develops monitoring and control for potentially misaligned AI to mitigate risks of extreme harms. We are looking for an engineer who can rapidly iterate to solve never-before-seen problems with creativity and thoroughness.

Our team mitigates risks from potentially misaligned AI. Currently, this primarily involves: designing, building, and testing monitors for potentially dangerous behaviours; developing and implementing response policies to preserve AI usefulness while mitigating risks; and foreseeing ways in which our control tools might be bypassed or degraded.

DeepMind is a dedicated scientific community, committed to ‘solving intelligence’ and ensuring our technology is used for widespread public benefit.

The Loss of Control team contributes to a defense in depth against the risk of misaligned AI systems being deployed. We take the possibility of very advanced AI seriously. We don’t think control is a suitable alternative to alignment in the limit of advancing intelligence. But while AI remains effectively monitorable, we think that control is an important part of an overall strategy for building safe AI.

We are looking for a research engineer for the Frontier Safety Loss of Control team within the AGI Safety and Alignment Team based in either San Francisco or London.

In this role, the core responsibility is to help Google prepare for the internal use of potentially misaligned AI systems. That means building defense-in-depth against AI that might persistently pursue goals that users and system developers did not intend.

Artificial intelligence will be one of humanity’s most transformative inventions. At Google DeepMind, we are a pioneering AI lab with exceptional interdisciplinary teams focused on advancing AI development to solve complex global challenges and accelerate high-quality product innovation for billions of users. We use our technologies for widespread public benefit and scientific discovery, ensuring safety and ethics are always our highest priority.


We are pushing the boundaries across multiple domains. Our global teams offer diverse learning opportunities and varied career pathways for those driven to achieve exceptional results through collective effort.

The US base salary range for this full-time position is $174,000-$252,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Identify ways that misaligned agents could cause harm.
  • Identify strategies for preventing harm.
  • Identify strategies for detecting that an agent might imminently cause harm.
  • Implement technical controls that monitor agent thoughts and behaviour, and respond to mitigate potential harms.
  • Stitch together various agent behaviour signals from across the organisation to inform response policies.
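To make the monitor-and-respond workflow above concrete, here is a minimal illustrative sketch of the shape such a system might take. All pattern lists, thresholds, and response actions here are hypothetical examples for this posting's description, not DeepMind's actual tooling:

```python
from dataclasses import dataclass, field

# Hypothetical flagged phrases a transcript monitor might scan for.
SUSPICIOUS_PATTERNS = [
    "disable logging",
    "exfiltrate",
    "bypass approval",
]

@dataclass
class MonitorResult:
    score: float                 # fraction of flagged patterns matched
    matched: list = field(default_factory=list)

def monitor_transcript(transcript: str) -> MonitorResult:
    """Score an agent transcript against a list of flagged phrases."""
    text = transcript.lower()
    matched = [p for p in SUSPICIOUS_PATTERNS if p in text]
    return MonitorResult(score=len(matched) / len(SUSPICIOUS_PATTERNS),
                         matched=matched)

def respond(result: MonitorResult, block_threshold: float = 0.5) -> str:
    """Map a monitor score to a response action.

    The policy trades off usefulness against risk: most traffic is
    allowed, borderline cases are flagged for audit, and high-scoring
    transcripts are blocked and escalated.
    """
    if result.score >= block_threshold:
        return "block"   # halt the agent and escalate for human review
    if result.matched:
        return "flag"    # allow, but queue the transcript for audit
    return "allow"

action = respond(monitor_transcript("Plan: bypass approval and disable logging."))
```

In a real deployment, the single pattern list would be replaced by the "various agent behaviour signals from across the organisation" the role describes, combined before the response policy is applied.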

Information collected and processed as part of your Google Careers profile, and any job applications you choose to submit, are subject to Google's Applicant and Candidate Privacy Policy.

Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire.

If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.

To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.
