TixelJobs

Security Engineer Jobs at AI Companies

Security Engineers at AI companies protect some of the most high-profile and sensitive systems in tech. From securing AI model APIs against adversarial attacks to protecting user data and ensuring compliance, these roles combine traditional security expertise with the unique challenges of AI systems.

Last updated: May 13, 2026

0
Open positions
0
Companies hiring

Latest Security Engineer Jobs at AI Companies

View all jobs

No jobs found in this category yet

New jobs are added daily. Check back soon!

Browse all jobs

Frequently Asked Questions

What does a Security Engineer do at an AI company?

Security Engineers at AI companies protect AI APIs from abuse and adversarial attacks, secure model training infrastructure and data pipelines, implement access controls for sensitive model weights, ensure compliance with AI regulations, and build security tooling for rapid product development. Unique challenges include prompt injection attacks, model extraction attempts, training data privacy, and the security implications of agentic AI systems. The role spans application security, infrastructure security, and the emerging field of AI-specific security.

What is the salary for security engineers at AI companies?

Security engineers at AI companies earn $150K-$220K at mid-level and $200K-$350K+ for senior and staff roles. The premium reflects both the critical nature of security at AI companies and the scarcity of security engineers who understand AI-specific threats. Companies handling sensitive AI capabilities, government contracts, or enterprise data are especially willing to pay top-of-market rates for experienced security professionals.

What skills do I need for security roles at AI companies?

Core security skills are the foundation: application security, network security, identity/access management, and incident response. Experience with cloud security (AWS/GCP), Kubernetes security, and API security is essential. What sets AI company security apart is knowledge of AI-specific threats: prompt injection, model extraction, training data poisoning, and adversarial attacks. Familiarity with AI safety concepts, red-teaming of AI systems, and understanding of emerging AI regulations (EU AI Act, etc.) is increasingly valued.