Claude CoWork for AI Compliance Officers
A practical guide to using Claude as your AI co-worker for high-risk system audits, regulatory tracking, and EU AI Act documentation — from setup to daily use.

What is Claude CoWork?
Claude CoWork is the practice of configuring Claude as a persistent, context-aware co-worker embedded in your AI governance workflow. This is not a one-off chatbot interaction. It is a structured setup that carries your regulatory context, organization profile, and documentation standards across every session so that every output is compliance-ready from the first draft.
Claude-native prompts. The prompts in this guide use Claude's native XML tag structure (<context>, <instructions>, <format>, <avoid>) for more precise, consistent output. These tags reduce ambiguity and improve structural reliability. They work in other AI tools, but are optimized for Claude.
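Here is a minimal skeleton of that structure (placeholder content only); every full prompt later in this guide fills in the same four tags:

```
<context>
One or two lines of situational fact: system code name, jurisdiction, available inputs.
</context>
<instructions>
The numbered analysis or drafting steps you want performed.
</instructions>
<format>
Output structure, length cap, and register (e.g., compliance memo, table).
</format>
<avoid>
Failure modes to suppress: legal conclusions, invented data, missing review flags.
</avoid>
```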
AI compliance officers are operating under unprecedented regulatory pressure. EU AI Act enforcement for high-risk systems begins August 2, 2026 — penalties for the most serious violations reach €35M or 7% of global annual turnover. FINRA Regulatory Notice 24-09 confirms that existing supervisory obligations extend to firms deploying autonomous agents. Most compliance teams lack purpose-built AI tooling and are managing documentation, audit trails, and regulatory change entirely in general-purpose environments. Claude can close a significant portion of that gap.
This guide shows you how to configure Claude specifically for AI governance work, walks through five workflows that address your highest-risk documentation burden, and sets out the guardrails that are non-negotiable when Claude's output touches regulatory records.
Install the AI Compliance Officer Plugin
This guide works on three Claude surfaces. The plugin is the fastest path on two of them. Pick whichever you use:
If you're on Cowork (desktop or mobile app)
Claude Cowork is Anthropic's agentic workspace — Claude completes work autonomously and returns finished deliverables. The AI Compliance Officer plugin packages the workflows below as native skills and slash commands.
- Open the Cowork plugin directory in your desktop app.
- Filter by Cowork, search for "AI Compliance Officer", and click Install.
- The plugin's slash commands and ambient skills are now available in any Cowork task.
If you don't see the plugin in the directory yet, install via custom marketplace: paste https://github.com/alexclowe/awesome-claude-cowork-plugins in your Cowork plugin settings.
If you're on Claude Code (CLI)
Install from your terminal:
claude plugin add alexclowe/awesome-claude-cowork-plugins/ai-compliance-officer

The plugin's slash commands and skills load on next session.
If you're on Claude.ai (web chat only)
Plugins aren't directly installable on the web chat surface. You have two options:
- Use the prompts in this guide directly in a Claude Project (covered in the next section). Same outputs, more typing.
- Upload the plugin's skills as a zip via Settings → Features → Custom Skills (Pro/Max/Team/Enterprise plans). Higher friction; only worth it if you want the auto-activating skills, not the slash commands.
What the plugin gives you (any surface)
| Slash command | What it does |
|---|---|
| /audit-ai-system | Map an AI system against EU AI Act Annex III high-risk criteria and generate a compliance risk assessment with citations |
| /track-regulatory-change | Ingest EU/FINRA/FDA/SEC guidance updates, flag impact on deployed systems, and produce a change-impact report |
| /generate-documentation | Draft EU AI Act quality management system docs, risk management framework, and conformity assessment package |
| /eval-autonomous-agent | Generate a test harness for hallucination, bias, scope creep, and reward misalignment per FINRA 2026 guidelines |
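In a Cowork task or Claude Code session, you pass a short brief along with the command. A hypothetical invocation (the exact argument format may vary by plugin version, and the system brief here is illustrative, not a real system):

```
/audit-ai-system SYS-HIRE-02: resume-ranking model used for first-round candidate filtering; EU deployment; SRS and DPIA available
```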
Auto-activating skills (no command needed — Claude applies them when relevant):
- High-Risk System Classification — Auto-detect EU AI Act Annex III categories and US-state high-risk equivalents for any described AI system
- Post-Market Monitoring Automation — Ingest AI-system incident reports, cluster failure patterns, and generate FDA/EU-style remediation recommendations
The plugin works standalone for one-off tasks. Pair it with the surface-specific setup below for persistent context across every task — that combination is the full Claude CoWork setup.
Setting Up Claude for AI Compliance Work
Surface note: The Project setup below is for claude.ai web users. Cowork users have their own task-context mechanism (set context once when starting a Cowork task). Claude Code users get the plugin's ambient skills automatically — no Project setup needed. The workflows themselves are surface-agnostic — paste the prompts wherever you're working.

Step 1: Create a Compliance Project. In Claude, go to Projects and create one called "AI Governance Practice" or your organization's preferred naming convention. This is your persistent workspace — context, instructions, and uploaded references carry across every conversation.
Step 2: Set your custom instructions. In the Project settings, add:
You are my AI compliance documentation and analysis assistant. Here is my context:
<org-profile>
- Role: [AI Compliance Officer / Chief Risk Officer / AI Governance Lead]
- Organization type: [Financial services / Healthcare / Insurer / Tech / Enterprise SaaS]
- Headcount: [1,000+ employees]
- Jurisdictions: [EU / US / UK / combined]
- Regulatory frameworks in scope: [EU AI Act / FINRA 24-09 / NIST AI RMF / ISO/IEC 42001 / FDA SaMD]
- High-risk systems inventory: [Brief list of deployed AI systems subject to Annex III review]
</org-profile>
<rules>
- Never reference real employee names, vendor contract values, or proprietary model weights. Use role titles and system code names only.
- All output is a DRAFT. Include "DRAFT — PENDING COMPLIANCE REVIEW" on every document.
- Do not interpret statutory language as legal advice. Flag ambiguity explicitly.
- Do not accept regulatory conclusions without citing the article or notice number they derive from.
- Human compliance officer review is required before any output enters an official record.
</rules>

Step 3: Upload your reference documents. Add your current high-risk system inventory, internal QMS templates, existing conformity assessment checklists, and any relevant regulator guidance documents. Claude will reference these against your prompts rather than reasoning from general knowledge alone.
Step 4: Always work inside this Project. Every new conversation inherits your org profile and rules automatically. Do not run compliance prompts in general chat windows where context is absent.
Step 5: Isolate the workspace from live data. Use Claude for Teams or Enterprise and confirm the no-training-data setting is active for your organization. Work from de-identified case IDs and system code names — never paste actual audit evidence, model weights, vendor PII, or draft legal findings into the prompt. Maintain your own log of Claude-assisted drafts for audit trail purposes, consistent with your ISMS requirements.
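As a concrete sketch of that de-identification convention (the case ID, code name, and incident details below are illustrative, not real data):

```
Do not paste: "On 3 March, applicant Jane Doe was mis-scored by Acme's
CreditModel-v4; her account number appeared in the error logs."

Safe to paste: "CASE-0031 / SYS-CREDIT-A: one applicant record mis-scored
during [month]; a personal identifier surfaced in an internal log."
```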
Five High-Leverage Workflows
1. Auditing an AI System Against EU AI Act Annex III High-Risk Categories
Before any high-risk system goes to a conformity assessment, you need to determine whether it triggers Annex III classification. Claude can structure the analysis from your system brief.
<context>
System under review: [System code name and brief function description]
Proposed use case: [Employment screening / Credit scoring / Biometric identification / Educational assessment / Critical infrastructure management]
Operator organization: [Internal deployment or third-party provider]
Known data inputs: [Structured data types only — no raw datasets]
Existing documentation: [SRS, DPIA, or internal risk memo available]
</context>
<instructions>
Analyze whether this system falls within one or more Annex III high-risk categories under the EU AI Act.
For each applicable category, cite the relevant Annex III entry and note the specific use-case criteria that trigger classification.
Identify which Articles apply to an operator deploying this system: Article 6 (classification rules), Article 9 (risk management), Article 13 (transparency), Article 17 (quality management).
Flag any ambiguity where classification depends on interpretive guidance not yet issued.
Identify gaps between the available documentation and what Article 9 and Article 17 require.
</instructions>
<format>
Structured report with sections: Classification Finding, Applicable Articles, Documentation Gaps, Open Ambiguities.
Under 800 words. Use a compliance memo format — terse, numbered findings, no marketing language.
</format>
<avoid>
Definitive legal conclusions on classification; reference to specific statutory sub-paragraphs beyond what is verifiable; treating this output as a filed conformity declaration.
</avoid>

2. Tracking and Triaging Weekly Regulatory Updates
Staying current across EU AI Act implementing acts, FINRA guidance, FDA SaMD updates, and SEC staff bulletins is a full-time job inside the compliance function. Claude can compress your triage cycle.
<context>
Regulatory perimeter: EU AI Act (enforcement date August 2, 2026), FINRA Notice 24-09 (autonomous agents), NIST AI RMF (voluntary but referenced in contracts), ISO/IEC 42001 (certification in progress).
New items this week: [Paste plain-text summaries or titles of regulatory releases — no document uploads containing confidential material]
Open internal items from prior week: [List of unresolved action items by code name]
</context>
<instructions>
Triage each regulatory item for: direct applicability to our high-risk system inventory, obligation type (mandatory / voluntary / interpretive), implementation timeline, and owner (legal / compliance / product / engineering).
Flag any item that intersects with FINRA 24-09 autonomous-agent scope or EU AI Act Article 50 transparency obligations.
For each open internal item, assess whether this week's releases change the risk posture.
</instructions>
<format>
Two-section output: (1) New Items Triage — table with columns: Item, Applicability, Obligation Type, Timeline, Owner. (2) Open Items Update — brief status note per item. Under 600 words total.
</format>
<avoid>
Treating regulatory summaries as authoritative interpretations; flagging items as resolved without human legal sign-off; generating action items not grounded in a cited source.
</avoid>

3. Generating a Quality Management System Document and Conformity Assessment Package
Article 17 requires a documented quality management system for high-risk AI systems. Claude can scaffold the QMS structure and draft individual components from your system data.
<context>
System: [Code name]
Stage: [Pre-deployment / Post-deployment / Re-assessment]
Available inputs: [System design doc, risk register, training data summary, incident log — all de-identified]
Regulatory basis: EU AI Act Article 17 (quality management), Article 9 (risk management system)
</context>
<instructions>
Draft the following QMS components consistent with Article 17 requirements:
1. Scope and purpose statement
2. Risk management process summary referencing Article 9 criteria
3. Data governance and training data procedures overview
4. Human oversight and intervention procedures
5. Post-market monitoring plan stub
6. Document control and version history table (template)
For each component, note what additional factual input is required from the system owner before the section can be finalized.
</instructions>
<format>
Structured document with numbered sections. Each section ends with a "Required Input" callout box listing what must be verified or added before review submission. Include DRAFT — PENDING COMPLIANCE REVIEW header. Under 1,000 words.
</format>
<avoid>
Treating this draft as a filed conformity declaration; filling factual gaps with invented data; omitting the Required Input callouts; asserting that any section meets Article 17 without human review.
</avoid>

4. Building an Autonomous-Agent Eval Harness per FINRA Notice 24-09
FINRA Notice 24-09 confirms that supervisory obligations extend to firms deploying autonomous or semi-autonomous AI agents. You need an evaluation framework before deployment. Claude can build the scaffolding.
<context>
Agent system under review: [Code name, brief function — e.g., customer-facing advice assistant, internal research agent]
Supervisory framework: FINRA Notice 24-09
Key risk vectors identified in pre-deployment review: [Hallucination, scope creep, reward misalignment, data leakage — select applicable]
Existing controls: [Brief list of guardrails already in place]
</context>
<instructions>
Design an evaluation harness covering:
1. Hallucination detection — test scenarios and pass/fail criteria
2. Scope containment — boundary-probing test cases specific to the agent's stated purpose
3. Reward misalignment — adversarial prompts designed to elicit out-of-policy behavior
4. Human escalation triggers — conditions under which the agent must defer to a licensed representative
For each category, specify: test scenario format, evaluation rubric, frequency of re-testing (pre-deployment, quarterly, on material change), and documentation requirement for the supervisory audit trail.
</instructions>
<format>
Tabular harness spec with columns: Risk Vector, Test Scenario Type, Pass/Fail Criteria, Re-Test Trigger, Documentation Owner. Followed by a short narrative on how results feed into the FINRA 24-09 supervisory record. Under 700 words.
</format>
<avoid>
Treating this harness as a complete FINRA compliance filing; inventing regulatory requirements not present in Notice 24-09; omitting human-in-the-loop escalation criteria.
</avoid>

5. Drafting a Post-Market Monitoring Report from Incident Logs
Under the EU AI Act, Article 17 requires post-market monitoring as part of the quality management system, and Article 9 treats risk management as a continuous process across the system's lifecycle. Claude can structure a monitoring report from your de-identified incident log.
<context>
Monitoring period: [Quarter / rolling 12 months]
System: [Code name]
Incident log summary: [Paste de-identified incident descriptions — no employee names, no client identifiers, no proprietary model outputs]
Prior report status: [Any open findings from last period]
Regulatory basis: EU AI Act Articles 9 and 17; internal QMS post-market monitoring procedure
</context>
<instructions>
Draft a post-market monitoring report covering:
1. Incident summary — categorized by risk vector (accuracy, bias, scope, availability)
2. Root-cause analysis stubs for incidents rated medium or high severity
3. Corrective action recommendations with owner and timeline fields
4. Open items from prior period — status update based on new log data
5. Risk posture assessment — unchanged / elevated / reduced, with rationale
Maintain de-identified language throughout. Flag any incident pattern that may require serious-incident reporting under Article 73 or supervisory reporting under your jurisdiction's financial regulator.
</instructions>
<format>
Formal report structure with executive summary (under 150 words), numbered findings sections, and a corrective action table. Include DRAFT — PENDING COMPLIANCE REVIEW. Total under 900 words.
</format>
<avoid>
Including real incident data, client or employee identifiers, or proprietary system outputs in any prompt; treating Claude's risk posture assessment as a final regulatory determination; bypassing human sign-off before the report enters the official record.
</avoid>

What This Looks Like in Your Week
Monday morning. You open your Compliance Project and run the regulatory triage prompt against the weekend's EU AI Act implementing act release and a new FINRA staff FAQ. Within ten minutes you have a two-section triage table with owner assignments ready to drop into your team Slack. What would have been a two-hour read-and-summarize cycle is a thirty-minute review-and-distribute cycle.
Wednesday. Your product team has submitted a new AI-assisted hiring screening tool for pre-deployment review. You run the Annex III classification prompt against their system brief. Claude returns a structured finding — the tool likely triggers Annex III employment category classification, flags three documentation gaps, and notes two open interpretive questions about Article 6 scope that need legal input. You have a structured memo to send back to the product team before lunch.
Thursday afternoon. Legal has signed off on the Article 17 QMS draft you generated and reviewed last week. You use Claude to update the document version table, resolve the Required Input callouts that now have answers, and generate a clean tracked-changes summary for the conformity assessment package. What used to be a formatting and collation task eats thirty minutes instead of two hours.
Friday. You pull the quarter's incident log — de-identified and coded by system ID — and run the post-market monitoring prompt. The draft report comes back structured, with risk categories and a corrective action table. You spend an hour reviewing it against the raw log, adjusting the root-cause stubs, and elevating two findings that Claude underweighted. You sign the final version. Claude built the scaffold. You own the judgment.
What to Avoid
Claude is not your compliance authority. Every document Claude drafts is a starting point, not a filing. You are the system of record. Regulatory determinations, conformity assessments, incident escalations, and supervisory filings require human review and sign-off before they exist as official records.
Do not paste live audit evidence into prompts. Incident logs, vendor contracts, model weight documentation, and draft legal opinions contain confidential or privileged material. Work from de-identified summaries and coded system names. If it would be sensitive in an email, it does not go into a prompt.
Do not treat Claude's statutory reading as authoritative. Claude can structure an analysis around EU AI Act Article 9 or FINRA Notice 24-09, but it cannot substitute for legal counsel's review of how those authorities apply to your specific facts. Flag every interpretive conclusion for lawyer verification.
Do not use Claude output as the QMS record itself. Claude drafts QMS components; your quality management system is what lives in your documented procedures, signed off by the compliance function and validated against actual system behavior. A Claude-generated draft that has not been reviewed and approved is not a QMS.
Do not skip the DLP check. Before using Claude in any configuration, confirm your organization's data-handling policy for AI tools covers this use case. Enterprise configurations with no-training settings and access controls are the appropriate baseline for regulated industries.
Resources
- Explore the AI Compliance Officer plugin for Claude to extend these workflows with profession-specific context
- Review the AI Compliance Officer profession guide for role-level AI adoption strategy
- Run the AI readiness audit for compliance professionals to identify your highest-leverage starting point