Job Title: AI Risk & Governance Specialist – AI Stewardship Squad
Location
: Remote from LATAM |
Type
: Full-time Vendor
Company
:
About the Role
As part of Inallmedia's AI Stewardship Squad, you will be responsible for designing and running an end-to-end
AI Risk Management program
that goes beyond checklists.
This role is focused on
operationalizing AI responsibility
at scale—helping enterprise teams navigate the complexities of deploying generative and predictive models across sensitive, regulated, or high-impact environments.
You'll work closely with engineering, legal, product, and AppSec to implement frameworks such as
NIST AI RMF 1.0
,
ISO/IEC 42001
, and
ISO/IEC 23894
in real-world settings—balancing risk, usability, and compliance.
You'll also prepare organizations to meet emerging global obligations, including the
EU AI Act
and
U.S. privacy regulations
.
If you've led model risk efforts, collaborated across functions, and know how to turn policy into practice, this is your opportunity to build governance structures that scale with AI adoption.
Key Responsibilities
- Maintain an AI System Inventory and Risk Register covering internal and third-party systems (SaaS/LLMs), including key risk domains: bias, robustness, privacy, hallucinations, misuse, drift, and security.
- Execute the full risk lifecycle (identify → assess → treat → monitor) mapped to NIST RMF functions (Map / Measure / Manage / Govern).
- Define and document AI-specific control objectives, mapped to ISO/IEC 42001 clauses (roles, policies, internal audits, continuous improvement), referencing ISO/IEC 23894 for risk processes.
- Lead technical assurance activities: bias/fairness testing, LLM safety evaluations, prompt injection/jailbreak testing, and rollback protocols.
- Align red teaming practices to OWASP Top 10 for LLM Applications; coordinate independent validation of high-risk or regulated models (e.g., under SR 11‑7).
- Embed risk due diligence in vendor onboarding and model acquisition processes; align with CISO (cybersecurity) and Legal/Privacy for PII and cross-border data flow compliance.
- Ensure U.S. privacy obligations are addressed (CCPA/CPRA, Colorado CPA); coordinate readiness for EU AI Act deployer duties including logging, incident handling, and human oversight.
- Run AI risk, ethics, and incident committees; publish dashboards and executive-level reporting (Audit, Risk Committee, BoD).
- Codify usage policies (prompt hygiene, PII, approvals); deliver role-based training to internal teams and promote safe, compliant AI usage across business units.
Ideal Candidate
- 7–10+ years in risk management, governance, or GRC, including 3+ years in AI/ML or model risk.
- Hands-on experience with NIST AI RMF 1.0; working familiarity with ISO/IEC 42001 and ISO/IEC 23894.
- Demonstrated implementation of AI-specific controls: telemetry for drift/performance, bias audits, privacy-by-design, prompt safety evaluation.
- Familiarity with SR 11‑7 / OCC 2011‑12 if coming from financial services or high-risk model environments.
- Proven experience working with U.S.-based companies and engaging with U.S. legal, compliance, and engineering stakeholders.
- Excellent command of English (written and spoken); able to drive cross-functional conversations with clarity and credibility.
- Availability for time zone overlap with U.S. (ET or PT); accustomed to near-shore delivery models and distributed collaboration.
Recommended Stack
Frameworks
: NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894
Red Teaming
: OWASP LLM Top 10, adversarial testing, prompt-injection simulators
Telemetry & Controls
: MLflow, Prometheus, OpenTelemetry, model registries
GRC Tools
: Archer, ServiceNow, internal risk registers
Languages
: Python or SQL (for audit traceability & telemetry queries)
Compliance Contexts
: SR 11‑7, OCC 2011‑12, CCPA/CPRA, EU AI Act (deployer role)
Infrastructure
: Cloud environments with VPN/VPC, RBAC, encryption-at-rest, audit logging
Infrastructure & Environment
- 100% remote across LATAM
- Secure infrastructure: MFA, VPN, RBAC, encrypted storage
- Git-based version control for policies, templates, and evidence documentation
- Integrated with DevOps/ML teams for real-time telemetry and control enforcement
- Cross-collaboration with Legal, Compliance, InfoSec, and Product Governance teams
What We're
Not
Looking For
- Candidates focused solely on legal/policy without experience operationalizing risk controls
- Profiles limited to academic or theoretical frameworks with no deployment exposure
- GRC professionals unfamiliar with AI system behavior, telemetry, or model risk specifics
- Anyone without hands-on experience engaging with U.S.-based organizations
Next Steps
If you're ready to move beyond theory and help
build responsible AI practices at scale
, we'd love to hear from you.
Apply now and help shape the future of
AI governance and risk management
in live production environments.