Principal Security Architect, AI Governance and Compliance (Remote)
About the Role
Since 1998, Businessolver has delivered market-changing benefits technology and services supported by an intrinsic responsiveness to client needs. The company creates client programs that maximize benefits program investment, minimize risk exposure, and engage employees with easy-to-use solutions and communication tools to assist them in making wise and cost-efficient benefits selections. Founded by HR professionals, Businessolver's unwavering service-oriented culture and secure SaaS platform provide measurable success in its mission to provide complete client delight.
**Please be aware of recruitment scams. Businessolver does not make job offers outside of our official hiring process or request payment or sensitive personal information. You will never receive an offer of employment without meeting a hiring authority and having a "live" and face-to-face conversation.**
Job Overview:
The Principal Security Architect, AI Governance and Compliance owns the technical and operational control layer for AI governance and compliance across the company’s AI-enabled capabilities. This role ensures that AI systems are supported by the right technical standards, review workflows, control points, documentation, evidence, and risk management practices so they can be deployed and operated safely.
This leader works across Security, Legal, Privacy, Product, Engineering, and Architecture to establish practical governance mechanisms that fit how AI systems are designed, built, integrated, monitored, and changed over time. The role requires technical depth in AI system lifecycles, software delivery practices, model and prompt controls, vendor assessments, and evidence-based compliance operations.
The Gig:
Technical Governance for AI Systems
- Define and maintain the governance framework for AI-enabled capabilities across the software and model lifecycle, including intake, design review, implementation controls, testing expectations, deployment review, and ongoing monitoring.
- Establish technical control requirements for AI systems, including documentation standards, model and prompt inventories, traceability, approval paths, and change management expectations.
- Ensure governance requirements are practical for engineering teams and embedded into delivery workflows where possible.
AI Compliance Operations
- Operate the processes required to support internal and external compliance expectations for AI-enabled products and internal AI use cases.
- Maintain evidence, decision records, inventories, risk assessments, and control mappings needed for audits, client diligence, investor diligence, and internal reviews.
- Coordinate responses to AI-related diligence requests and partner with subject matter experts to ensure responses are accurate and supportable.
Risk Controls and Review Paths
- Partner with Security, Privacy, Legal, and Engineering to identify and manage risks related to model behavior, data handling, access patterns, third-party AI services, output quality, explainability, and system changes.
- Build and run review paths for new AI use cases, material updates, and exceptions requiring elevated scrutiny.
- Define escalation criteria, mitigation tracking, and approval workflows for higher-risk AI implementations.
Technical Partnership with Product and Engineering
- Work directly with product and engineering teams to translate policy and control requirements into technical implementation guidance.
- Help teams design compliant approaches for logging, testing, access control, human review, fallback behavior, documentation, and monitoring.
- Influence architecture and delivery decisions so governance is built into systems rather than applied after the fact.
Inventory, Documentation, and Evidence Management
- Maintain current inventories of AI systems, models, vendors, prompts, datasets, and related technical dependencies as required by company governance standards.
- Ensure documentation is complete and usable across lifecycle stages, including design intent, data usage, review outcomes, testing artifacts, and operational controls.
- Improve the tooling and process model for collecting, maintaining, and retrieving governance evidence.
Control Automation and Operational Scale
- Identify opportunities to automate governance activities within engineering and product workflows, including intake routing, policy checks, documentation capture, control verification, and evidence collection.
- Partner with engineering teams to embed governance checks into existing delivery systems and lifecycle tooling.
- Scale governance operations in a way that increases control coverage without creating unnecessary process overhead.
What you need to make the cut:
Education
- Bachelor’s degree required in Computer Science, Information Security, Software Engineering, Information Systems, Engineering, or a related technical field.
- Master’s degree preferred in Cybersecurity, Computer Science, Engineering, Information Assurance, Artificial Intelligence, or a related discipline.
- Ongoing professional development in AI governance, secure software delivery, privacy engineering, compliance frameworks, and model risk management is expected.