
AI Governance Checklist for Small Teams

A practical checklist for small teams implementing AI tools. Covers tool approval, data handling, quality review, training, and compliance.


How to Use This Checklist

This checklist covers the core governance areas that any professional team should address before and during AI adoption. Work through each section in order. Check off items as you complete them. Items marked with an asterisk (*) are critical — do not skip these regardless of your team size or industry.

1. Tool Evaluation and Approval

  • * Inventory all AI tools currently in use across the team (including personal accounts)
  • * Evaluate each tool's data handling practices, privacy policy, and terms of service
  • Determine whether each tool offers enterprise or team plans with appropriate data protections
  • Check whether the tool vendor provides a Business Associate Agreement (BAA), Data Processing Agreement (DPA), or equivalent
  • Verify the tool's SOC 2, ISO 27001, or equivalent security certifications
  • Create an approved tools list with specific permitted use cases for each tool
  • Establish a process for requesting and evaluating new tools
  • Block or restrict access to unapproved AI tools on company devices where feasible
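The approved tools list above can live in a simple structured file or script so that permitted use cases are checkable rather than tribal knowledge. A minimal sketch in Python — the tool names, plan tiers, and use cases are hypothetical examples, not recommendations:

```python
# Sketch of an approved-tools registry. All tool names and use cases
# below are illustrative assumptions, not endorsements.
APPROVED_TOOLS = {
    "chat-assistant-enterprise": {
        "plan": "enterprise",
        "permitted_uses": ["drafting", "summarization"],
        "dpa_signed": True,
    },
    "code-helper-team": {
        "plan": "team",
        "permitted_uses": ["code review"],
        "dpa_signed": True,
    },
}

def is_permitted(tool: str, use_case: str) -> bool:
    """True only if the tool is on the approved list for this use case.
    Unlisted tools are denied by default."""
    entry = APPROVED_TOOLS.get(tool)
    return entry is not None and use_case in entry["permitted_uses"]
```

Denying anything not explicitly listed keeps the default aligned with the "block unapproved tools" item above.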

2. Data Classification

  • * Define data categories for your organization (e.g., public, internal, confidential, restricted)
  • * Map each data category to permitted AI tool usage (which tools, which tiers, which restrictions)
  • Identify the specific data types your team handles that require special protection (PHI, PII, privileged communications, trade secrets)
  • Create a reference document listing examples of each data category relevant to your work
  • * Establish a clear rule: when in doubt about data classification, treat it as restricted
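The category-to-tool mapping can be captured as a small lookup table. A sketch, assuming illustrative category and tier names; note that the fallback for anything unrecognized implements the "when in doubt, treat it as restricted" rule:

```python
# Sketch of a data-category -> permitted-tools map. Category and tier
# names are illustrative assumptions for a four-tier scheme.
PERMITTED_TOOLS_BY_CATEGORY = {
    "public": ["any-approved-tool"],
    "internal": ["enterprise-tier-tools"],
    "confidential": ["enterprise-tier-tools-with-dpa"],
    "restricted": [],  # no AI tool use permitted
}

def permitted_tools(category: str) -> list[str]:
    """Look up permitted tools for a data category.
    Unknown or unclassified categories fall back to restricted
    (empty list), per the when-in-doubt rule."""
    return PERMITTED_TOOLS_BY_CATEGORY.get(category.lower(), [])
```

Keeping "restricted" as the default means a typo or a new, unmapped category fails safe rather than open.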

3. Usage Policies

  • * Write an AI acceptable use policy covering approved tools, prohibited uses, and data handling requirements
  • Define which tasks AI may be used for and which require fully manual work
  • Address the use of AI for client-facing deliverables, including review and disclosure requirements
  • Establish rules for using AI outputs in regulated activities (filings, clinical documentation, financial reports)
  • Set expectations for AI use during billable work, including disclosure to clients where appropriate
  • Address ownership and intellectual property considerations for AI-generated work product

4. Quality Review Process

  • * Require human review of all AI-generated outputs before external use
  • Define who is qualified to review AI outputs for each type of work product (by credential, role, or seniority)
  • Create a review checklist covering accuracy, completeness, tone, and compliance for common output types
  • Establish a policy for verifying factual claims, citations, legal references, and numerical calculations in AI outputs
  • * Prohibit submitting AI outputs to clients, courts, regulators, or third parties without documented review
  • Define what "review" means operationally — reading the full output, spot-checking, or both

5. Training Requirements

  • * Require all team members to complete AI policy training before using approved tools
  • Cover the basics: what AI tools can and cannot do, common failure modes, hallucination risks
  • Train on your specific policies: approved tools, data classification, review requirements
  • Train on the redaction process: what to remove from data before using AI tools
  • Schedule annual refresher training
  • Document training completion for each team member
  • Designate a team member to stay current on AI developments and update training materials

6. Incident Handling

  • * Define what constitutes an AI-related incident (data exposure, compliance violation, unreviewed output used, material error)
  • * Establish a reporting channel and response timeline for AI incidents
  • Create a simple incident report template (date, tool, what happened, data involved, corrective action)
  • Define escalation procedures for incidents involving client data or regulatory implications
  • Encourage near-miss reporting without blame to improve policies over time
  • Review incidents quarterly to identify patterns and update policies
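The incident report template above can be as simple as a structured record with the five listed fields plus an escalation flag. A minimal sketch; the example field values are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIIncidentReport:
    """Fields mirror the template: date, tool, what happened,
    data involved, corrective action. `escalated` supports the
    client-data / regulatory escalation procedure."""
    report_date: date
    tool: str
    description: str
    data_involved: str
    corrective_action: str
    escalated: bool = False

# Hypothetical example entry (all values are made up for illustration)
report = AIIncidentReport(
    report_date=date(2025, 1, 15),
    tool="chat-assistant-enterprise",
    description="Draft sent to client without documented review",
    data_involved="Internal project notes",
    corrective_action="Draft retracted; reviewer sign-off added to workflow",
)
```

A fixed record shape makes the quarterly pattern review easier, since every incident carries the same fields.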

7. Compliance and Documentation

  • Identify the regulations applicable to your profession and jurisdiction (HIPAA, ABA Model Rules, FINRA rules, state laws)
  • Map your AI governance policies to specific regulatory requirements
  • Determine whether your profession requires disclosure of AI use to clients or in filings
  • Maintain records of AI use for regulated work products as required by applicable rules
  • Include AI governance in your existing compliance review or audit cycle
  • * Review and update AI policies at least annually or when regulations change

8. Ongoing Maintenance

  • Schedule a quarterly review of the approved tools list
  • Monitor for changes in tool vendor terms of service or data handling practices
  • Track regulatory developments affecting AI use in your profession
  • Collect team feedback on policy friction points and adjust where appropriate
  • Review AI incident reports and near-misses to identify policy gaps
  • Update training materials when policies change
