
SRA Compliance and AI: Building Governance Frameworks

24 December 2025
10 min
Ben Gale

The Regulatory Landscape

The Solicitors Regulation Authority (SRA) hasn't banned AI—far from it. But they've made clear that firms using AI must do so responsibly, with appropriate governance in place.

The SRA's core message: you can use AI tools, but you remain responsible for your work product. If AI helps you draft a document that's wrong, you can't blame the AI. You're accountable.

  • The SRA: requires appropriate governance
  • You: remain accountable for the work product
  • Now: the time to establish your framework

What the SRA Expects

Competence (Principle 3)

You must only act if you're competent to do so. This extends to understanding:

  • What AI tools actually do
  • Their limitations and potential for error
  • When human review is essential
  • How to verify AI outputs

Using AI you don't understand violates competence requirements.

Client Interest (Principle 7)

You must act in the best interest of clients. This means:

  • AI shouldn't compromise client outcomes
  • Client data used in AI must be protected
  • Efficiency gains should benefit clients too
  • Appropriate transparency about AI use

Proper Governance (Code of Conduct)

The Code requires appropriate systems and controls. For AI, this includes:

  • Policies governing AI use
  • Risk assessments for AI tools
  • Training for staff using AI
  • Supervision arrangements
Info

The SRA takes a principles-based approach. They haven't prescribed specific AI rules because technology changes too fast. Instead, existing principles apply to AI use just as they apply to everything else.

Building Your AI Governance Framework

You don't need a 50-page policy document. Small firms need practical governance that actually gets followed.

Element 1: AI Usage Policy

Purpose: Define what's allowed and what isn't

Key Content:

Permitted Uses:

  • Document drafting assistance (with review)
  • Research assistance (with verification)
  • Administrative automation
  • Communication drafting (with review)

Prohibited or Restricted Uses:

  • Advice to clients without human review
  • Submission of AI text as your own work product without checking
  • Input of confidential client data to tools without appropriate safeguards
  • Reliance on AI for areas outside your competence

Review Requirements:

  • All AI outputs must be reviewed before use
  • Legal advice must be verified against authoritative sources
  • Client communications must be checked for accuracy and tone
  • Documents must be proofread for AI-typical errors

Element 2: Tool Approval Process

Purpose: Ensure you understand what you're using

Before Using Any AI Tool:

  1. Vendor Assessment

    • Who provides it?
    • Where is data stored?
    • What are their security certifications?
    • What happens to data input to the tool?
  2. Functionality Understanding

    • What does it actually do?
    • What are known limitations?
    • What verification is recommended?
    • What training is available?
  3. Risk Assessment

    • What could go wrong?
    • What's the impact if it fails?
    • What controls mitigate risks?
    • Is the residual risk acceptable?
  4. Documentation

    • Record the assessment
    • Note approval decision and rationale
    • Set review date
[Image: Solicitor reviewing documents at desk]
Human review of AI outputs remains essential for professional responsibility.
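For firms that prefer to keep these assessments in a structured, machine-readable form rather than a Word document, the four steps above can be captured as a simple record. This is a minimal Python sketch under stated assumptions: the tool name, field names, and the annual review interval are illustrative choices, not SRA requirements.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ToolAssessment:
    """One record per AI tool, covering the four assessment steps."""
    tool_name: str
    vendor: str                       # 1. Vendor assessment
    data_location: str
    functionality: str                # 2. Functionality understanding
    known_limitations: list
    risks: list                       # 3. Risk assessment
    controls: list
    approved: bool = False            # 4. Documentation of the decision
    rationale: str = ""
    # Assumed annual review cycle; adjust to your firm's policy
    review_date: date = field(
        default_factory=lambda: date.today() + timedelta(days=365)
    )

    def review_due(self, today=None):
        """True once the scheduled review date has been reached."""
        return (today or date.today()) >= self.review_date

# Example: record an assessment, then check when it needs revisiting.
# "DraftAssist" and "Example Ltd" are hypothetical names.
assessment = ToolAssessment(
    tool_name="DraftAssist",
    vendor="Example Ltd",
    data_location="UK/EU",
    functionality="First-draft letters from matter notes",
    known_limitations=["May cite non-existent authorities"],
    risks=["Incorrect output reaching a client"],
    controls=["Solicitor review before any output leaves the firm"],
    approved=True,
    rationale="Residual risk acceptable given mandatory review",
)
print(assessment.review_due())                                    # False
print(assessment.review_due(date.today() + timedelta(days=366)))  # True
```

Even this small step gives you a list you can sort by review date, which makes the "set review date" requirement something that actually happens rather than something that sits in a filed document.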

Element 3: Data Protection Considerations

Purpose: Protect client confidentiality

Key Questions for Each Tool:

| Question | Why It Matters |
| --- | --- |
| Where is data processed? | Jurisdiction and access issues |
| Is data used to train models? | Client confidentiality risk |
| Who can see input data? | Unauthorised access risk |
| How long is data retained? | Data minimisation compliance |
| Can specific data be deleted? | Subject access and erasure rights |

Practical Guidance:

  • Public AI tools (ChatGPT, Claude): Don't input identifiable client information
  • Enterprise AI tools: Review data handling terms carefully
  • Legal-specific AI: Usually designed with confidentiality in mind, but verify
  • All tools: Consider what would happen if input data were disclosed
Warning

If you input client information to an AI tool and it becomes public or is used to train the model, you've potentially breached confidentiality. The SRA won't accept "I didn't know" as an excuse.

Element 4: Training and Supervision

Purpose: Ensure staff use AI appropriately

Training Should Cover:

  • What tools are approved for use
  • What uses are permitted/prohibited
  • How to review and verify AI outputs
  • How to handle errors or concerns
  • Where to find more information

Supervision Requirements:

  • Appropriate checking of AI-assisted work
  • Enhanced review for less experienced staff
  • Regular discussion of AI use in team meetings
  • Feedback mechanism for issues

Element 5: Incident Management

Purpose: Handle problems when they arise

When to Escalate:

  • AI output that was incorrect and affected client work
  • Data potentially exposed through AI use
  • Staff using AI in prohibited ways
  • Client complaints related to AI use

Response Process:

  1. Immediate containment (if needed)
  2. Assessment of impact
  3. Client communication (if appropriate)
  4. Regulatory notification (if required)
  5. Root cause analysis
  6. Control improvement

Template: Small Firm AI Policy

Here's a starting template you can adapt:


[FIRM NAME] AI Usage Policy

Version: 1.0
Approved by: [Partners]
Date: [Date]
Review date: [Date + 12 months]

1. Purpose

This policy governs the use of artificial intelligence tools by [FIRM] staff to ensure compliance with SRA requirements and protection of client interests.

2. Scope

This policy applies to all staff using any AI tool for firm business, whether firm-provided or personal tools used for work purposes.

3. Approved Tools

The following AI tools are approved for use: [List approved tools]

No other AI tools may be used without partner approval.

4. Permitted Uses

AI may be used for:

  • Drafting assistance (letters, documents, emails) with human review
  • Research assistance with verification against authoritative sources
  • Administrative tasks (scheduling, reminders, document management)
  • Proofreading and editing assistance

5. Prohibited Uses

AI must NOT be used for:

  • Generating advice to clients without solicitor review
  • Processing identifiable client information in non-approved tools
  • Submitting AI text as work product without verification
  • Any use that compromises client confidentiality

6. Review Requirements

All AI outputs must be reviewed by a qualified solicitor before:

  • Sending to clients
  • Filing with courts or regulators
  • Relying on for legal advice
  • Including in firm work product

7. Data Protection

Client-identifying information must not be entered into any AI tool unless:

  • The tool is specifically approved for client data
  • The matter has been assessed for data protection compliance
  • Appropriate safeguards are documented

8. Training

All staff will receive training on this policy before using AI tools. Training records will be maintained.

9. Reporting Concerns

Staff must report any concerns about AI use, including errors, data issues, or policy breaches, to [designated person] immediately.

10. Review

This policy will be reviewed annually and updated as technology and guidance evolve.


Demonstrating Compliance

When the SRA asks (and they might), you should be able to show:

Documentation

  • AI policy document
  • Tool assessments and approvals
  • Training records
  • Incident log (even if empty)

Understanding

  • Staff can explain how AI is used
  • Supervision arrangements are clear
  • Review processes are documented and followed

Continuous Improvement

  • Regular policy reviews
  • Adaptation to new tools and guidance
  • Learning from any incidents
Pro Tip

Documentation isn't bureaucracy for its own sake. If something goes wrong, your documentation demonstrates you took reasonable steps. Without it, you're hoping for the best.

The Proportionality Principle

Your governance should be proportionate to:

  • Size of your firm
  • Extent of AI use
  • Type of work you do
  • Risks specific to your practice

A two-partner firm using basic document automation needs less elaborate governance than a large firm building custom AI applications. But both need something.


Need help establishing AI governance for your practice? We help law firms build practical, proportionate governance frameworks that satisfy regulatory requirements without excessive bureaucracy.

Book a consultation to discuss your specific needs.

Ben Gale

25 years IT and leadership experience. Based in Woodley, Reading. Helping Thames Valley businesses automate workflows and reduce admin overhead.


Frequently Asked Questions

What does the SRA require for law firms using AI tools?

The SRA requires law firms to maintain competence in understanding AI tools and their limitations, protect client interests when using AI, and have appropriate governance including policies, risk assessments, staff training, and supervision arrangements.

Can I input client information into AI tools like ChatGPT?

For public AI tools like ChatGPT or Claude, you should not input identifiable client information. For enterprise or legal-specific AI tools, carefully review data handling terms and ensure appropriate safeguards are documented before processing any client data.

What should be included in a law firm AI usage policy?

A law firm AI policy should include approved tools, permitted and prohibited uses, review requirements for AI outputs, data protection guidelines, training requirements, and a process for reporting concerns or incidents.

How much AI governance documentation do small law firms need?

Governance should be proportionate to your firm size and AI usage. Small firms using basic tools need less elaborate documentation than larger firms, but all firms need some documented framework including an AI policy, tool assessments, training records, and incident procedures.


Want Help Implementing This?

Book a free 15-minute discovery call and we'll discuss how to apply these concepts to your business.

Book Your Free Discovery Call