AI Acceptable Use Policy Template for Professional Teams
A customizable AI acceptable use policy template for small teams, practices, and firms. Covers approved tools, prohibited uses, review requirements, and compliance.
Why Your Team Needs an AI Acceptable Use Policy
Most professional teams are already using AI tools. The question is whether they are using them with guardrails or without. An acceptable use policy does not slow your team down — it prevents the mistakes that do: data breaches, compliance violations, malpractice exposure, and inconsistent quality.
Without a written policy, every team member makes their own judgment calls about what data is safe to paste into a chatbot, which outputs need human review, and what constitutes appropriate use. Those judgment calls will be inconsistent, and at least one of them will be wrong.
A policy also protects your organization if something goes wrong. Regulators, insurers, and courts will ask whether you had documented procedures for AI use. "We told people to be careful" is not an adequate answer.
The Template
Copy and customize the policy below for your team. Bracketed items indicate fields you need to fill in for your specific organization.
AI ACCEPTABLE USE POLICY
[Organization Name]
Effective Date: [Date]
Last Reviewed: [Date]
Policy Owner: [Name/Title]
1. PURPOSE
This policy establishes requirements for the use of artificial intelligence
tools by [Organization Name] staff. It applies to all employees, contractors,
and affiliates who use AI tools in connection with their professional duties.
2. APPROVED TOOLS
The following AI tools are approved for use with the restrictions noted:
Tool          Approved Uses                Restrictions
---------------------------------------------------------------------------
[Tool Name]   [e.g., drafting, research]   [e.g., no client data]
[Tool Name]   [e.g., coding assistance]    [e.g., internal code only]
[Tool Name]   [e.g., data analysis]        [e.g., de-identified only]
Tools not listed above require written approval from [Approving Authority]
before use. Free-tier or consumer-grade AI tools are not approved for any
use involving client, patient, or proprietary data.
3. PROHIBITED USES
The following uses of AI tools are prohibited without exception:
a. Inputting protected health information (PHI) into any tool without a
signed Business Associate Agreement (BAA).
b. Inputting personally identifiable information (PII) including Social
Security numbers, financial account numbers, or government IDs.
c. Inputting attorney-client privileged communications or work product.
d. Inputting trade secrets, proprietary algorithms, or confidential
business information belonging to clients or the organization.
e. Submitting AI-generated output to clients, courts, regulators, or
third parties without human review and approval.
f. Using AI to make final decisions on matters affecting client rights,
benefits, treatment, or legal standing.
g. Representing AI-generated work as solely human-authored when
disclosure is required by applicable rules or regulations.
4. DATA HANDLING REQUIREMENTS
a. All data must be classified before use with AI tools:
- PUBLIC: May be used with any approved tool.
- INTERNAL: May be used with approved tools that have enterprise
agreements and appropriate data processing terms.
- CONFIDENTIAL: Must be de-identified or redacted before use.
See the Redaction Checklist for specific requirements.
- RESTRICTED: May not be used with AI tools under any circumstances.
b. Staff must not upload files containing metadata that could identify
clients or patients. Strip metadata before uploading documents.
c. AI-generated outputs containing potentially sensitive information must
be stored according to existing data retention policies.
5. REVIEW REQUIREMENTS
All AI-generated work product must be reviewed before use:
a. CLINICAL/MEDICAL outputs: Reviewed by a licensed clinician before
any use in patient care or documentation.
b. LEGAL outputs: Reviewed by a licensed attorney before filing,
submission, or client communication.
c. FINANCIAL outputs: Reviewed by a qualified professional before
inclusion in reports, filings, or client deliverables.
d. GENERAL outputs: Reviewed by the responsible staff member for
accuracy, completeness, and appropriateness.
Reviewers must verify all factual claims, citations, calculations,
and references to specific laws, regulations, or standards.
6. COMPLIANCE OBLIGATIONS
a. All AI use must comply with applicable regulations including but
not limited to: [HIPAA / ABA Model Rules / FINRA / state-specific
regulations — customize for your profession].
b. Staff must complete the required AI training before using any
approved tool. See Section 7.
c. Any use of AI in connection with regulated activities must be
documented per Section 8.
7. TRAINING REQUIREMENTS
a. All staff must complete initial AI acceptable use training before
using approved tools.
b. Annual refresher training is required for all staff who use AI tools.
c. Training must cover: this policy, data classification, the
redaction checklist, tool-specific procedures, and incident reporting.
d. Training completion records must be maintained by [Department/Person].
8. DOCUMENTATION AND RECORD-KEEPING
a. Staff should maintain records of AI use for regulated work products
including: the tool used, the nature of the input, the date, and
the reviewer who approved the output.
b. Documentation requirements may vary by profession and jurisdiction.
Consult [Compliance Officer/Managing Partner] for specific guidance.
9. INCIDENT REPORTING
a. Any suspected data breach involving AI tools must be reported
immediately to [Contact Person/Department] via [Method].
b. Any AI output that was used without required review, or that
contained material errors discovered after use, must be reported
within [timeframe].
c. Near-miss events — situations where a problem was caught before
causing harm — should also be reported to support ongoing
policy improvement.
10. ENFORCEMENT
Violations of this policy may result in disciplinary action up to and
including termination. Violations involving client data may also trigger
regulatory reporting obligations.
11. POLICY REVIEW
This policy will be reviewed [quarterly/semi-annually/annually] and
updated as tools, regulations, and best practices evolve.
Approved by: ________________________ Date: ____________
Title: ________________________
How to Customize by Profession
Healthcare practices. Emphasize the BAA requirement in Section 3. Add specific PHI categories to your prohibited uses. Reference HIPAA, HITECH, and any state health privacy laws. Require documentation of AI use in clinical workflows for audit purposes.
Law firms. Strengthen the privilege protections in Section 3. Add references to your jurisdiction's Rules of Professional Conduct, particularly competence (1.1), confidentiality (1.6), and supervision (5.1/5.3) obligations. Address AI use in court filings explicitly.
Financial advisory firms. Add FINRA, SEC, and state securities regulations to Section 6. Address the use of AI in client communications, which may constitute investment advice. Include record-keeping requirements that align with existing books-and-records obligations.
HR and recruiting teams. Address AI bias and discrimination risk explicitly. Reference EEOC guidance on automated hiring tools and any applicable state AI hiring laws. Prohibit using AI as the sole decision-maker in hiring, promotion, or termination decisions.
Real estate practices. Address fair housing implications of AI-generated marketing content. Include state real estate commission requirements. Prohibit AI-generated property valuations without licensed appraiser review.
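Some of the template's rules can also be backstopped with lightweight automation. For example, the PII prohibition in Section 3(b) pairs well with a pre-flight scan run on text before it is pasted into a tool. A minimal sketch follows; the patterns are illustrative only, will miss many forms of PII, and supplement rather than replace human judgment:

```python
import re

# Illustrative patterns only -- real PII detection needs far broader coverage
# (names, addresses, medical record numbers, etc.) and will still miss things.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def preflight_scan(text: str) -> list[str]:
    """Return the names of PII patterns found in text, if any."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

hits = preflight_scan("Client SSN is 123-45-6789, see notes.")
if hits:
    print(f"Blocked: possible PII detected ({', '.join(hits)})")
```

A scan like this catches the obvious cases; an empty result does not mean the text is safe to submit.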
Implementation Tips
- Start with a conversation, not a memo. Introduce the policy in a team meeting where people can ask questions. A policy that lands in an inbox gets ignored.
- Name a point person. Someone on the team should be the go-to for questions about whether a specific use case is covered. This does not need to be a full-time role — it just needs to be someone willing to answer questions promptly.
- Build in a feedback loop. After 30 days, ask the team what is unclear, what is too restrictive, and what gaps they have encountered. Update the policy based on real usage, not assumptions.
- Make the approved tools list easy to find. Post it where your team will actually see it — a pinned Slack message, a shared doc, a printed sheet by the printer. The policy document itself may live in a binder, but the tool list should be visible.
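If your team keeps the tools list in a shared repository, a machine-readable copy lets a small script or chatbot command answer "is this tool approved for this use?" on demand. A minimal sketch, with hypothetical tool names and fields standing in for your own list:

```python
# Hypothetical approved-tools registry; tool names and fields are placeholders
# for whatever your Section 2 table actually contains.
APPROVED_TOOLS = {
    "example-chat": {"uses": ["drafting", "research"], "restrictions": "no client data"},
    "example-coder": {"uses": ["coding assistance"], "restrictions": "internal code only"},
}

def check_tool(name: str, use: str) -> str:
    """Answer whether a tool is approved for a given use, per the registry."""
    entry = APPROVED_TOOLS.get(name.lower())
    if entry is None:
        return f"'{name}' is not approved; request written approval first."
    if use not in entry["uses"]:
        return f"'{name}' is approved, but not for '{use}'."
    return f"OK: '{name}' approved for '{use}' ({entry['restrictions']})."

print(check_tool("example-chat", "drafting"))
```

Keeping the registry in one file also gives you a natural place to record the restriction alongside each approval.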
- Connect it to existing workflows. If your team already has a review process for work product, add the AI review step to that existing process rather than creating a separate one. The fewer new habits people need to build, the better.
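The Section 8 record-keeping requirement is likewise easier to sustain when logging is a single step inside an existing workflow. A minimal sketch that appends one record per AI use to a shared CSV; the file location and field names are illustrative:

```python
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_use_log.csv")  # hypothetical location; point at your shared drive
FIELDS = ["date", "tool", "input_nature", "reviewer"]  # mirrors Section 8(a)

def log_ai_use(tool: str, input_nature: str, reviewer: str) -> None:
    """Append one AI-use record, writing a header row if the file is new."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "input_nature": input_nature,
            "reviewer": reviewer,
        })

log_ai_use("example-chat", "draft engagement letter (no client data)", "J. Partner")
```

A plain CSV is deliberately low-tech: anyone can append to it, and it is easy to produce for an audit.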
- Review the policy when you add new tools. Every time someone requests a new AI tool, treat it as a trigger to review and update the approved tools list and any relevant restrictions.
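One more automatable check: Section 4(b) tells staff to strip metadata before uploading documents. A quick scan can at least flag what is still there. Here is a minimal sketch for .docx files (which are zip archives carrying author and organization metadata in their docProps parts) using only the Python standard library; it is illustrative and reports metadata rather than removing it, so it supplements rather than replaces Word's Document Inspector or dedicated redaction tools:

```python
import re
import zipfile

def docx_metadata_fields(path: str) -> list[str]:
    """Report metadata field names present in a .docx's docProps parts.

    Illustrative check only: a non-empty result means the file still carries
    metadata (author, last-modified-by, etc.) and needs cleaning before upload.
    """
    found = []
    with zipfile.ZipFile(path) as z:
        for name in z.namelist():
            if name.startswith("docProps/"):
                xml = z.read(name).decode("utf-8", errors="replace")
                # Tags like <dc:creator>Jane</dc:creator> identify the author;
                # capture the field name of any tag with non-empty content.
                found += re.findall(r"<(?:dc|cp):(\w+)[^>]*>[^<]", xml)
    return sorted(set(found))
```

Run it on an outgoing file before upload; an empty result is necessary but not sufficient, since document bodies, comments, and tracked changes can also identify clients.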