The Regulatory Landscape
The Solicitors Regulation Authority (SRA) hasn't banned AI—far from it. But they've made clear that firms using AI must do so responsibly, with appropriate governance in place.
The SRA's core message: you can use AI tools, but you remain responsible for your work product. If AI helps you draft a document that's wrong, you can't blame the AI. You're accountable.
What the SRA Expects
Competence (Code of Conduct, paragraph 3.2)
You must only act if you're competent to do so. This extends to understanding:
- What AI tools actually do
- Their limitations and potential for error
- When human review is essential
- How to verify AI outputs
Using AI you don't understand violates competence requirements.
Client Best Interests (Principle 7)
You must act in the best interests of each client. This means:
- AI shouldn't compromise client outcomes
- Client data used in AI must be protected
- Efficiency gains should benefit clients too
- Appropriate transparency about AI use
Proper Governance (Code of Conduct)
The Code requires appropriate systems and controls. For AI, this includes:
- Policies governing AI use
- Risk assessments for AI tools
- Training for staff using AI
- Supervision arrangements
The SRA takes a principles-based approach. They haven't prescribed specific AI rules because technology changes too fast. Instead, existing principles apply to AI use just as they apply to everything else.
Building Your AI Governance Framework
You don't need a 50-page policy document. Small firms need practical governance that actually gets followed.
Element 1: AI Usage Policy
Purpose: Define what's allowed and what isn't
Key Content:
Permitted Uses:
- Document drafting assistance (with review)
- Research assistance (with verification)
- Administrative automation
- Communication drafting (with review)
Prohibited or Restricted Uses:
- Advice to clients without human review
- Submission of AI text as your own work product without checking
- Input of confidential client data to tools without appropriate safeguards
- Reliance on AI for areas outside your competence
Review Requirements:
- All AI outputs must be reviewed before use
- Legal advice must be verified against authoritative sources
- Client communications must be checked for accuracy and tone
- Documents must be proofread for AI-typical errors
Element 2: Tool Approval Process
Purpose: Ensure you understand what you're using
Before Using Any AI Tool:
1. Vendor Assessment
- Who provides it?
- Where is data stored?
- What are their security certifications?
- What happens to data input to the tool?
2. Functionality Understanding
- What does it actually do?
- What are known limitations?
- What verification is recommended?
- What training is available?
3. Risk Assessment
- What could go wrong?
- What's the impact if it fails?
- What controls mitigate risks?
- Is the residual risk acceptable?
4. Documentation
- Record the assessment
- Note approval decision and rationale
- Set review date
Element 3: Data Protection Considerations
Purpose: Protect client confidentiality
Key Questions for Each Tool:
| Question | Why It Matters |
|---|---|
| Where is data processed? | Jurisdiction and access issues |
| Is data used to train models? | Client confidentiality risk |
| Who can see input data? | Unauthorised access risk |
| How long is data retained? | Data minimisation compliance |
| Can specific data be deleted? | Subject access and erasure rights |
Practical Guidance:
- Public AI tools (ChatGPT, Claude): Don't input identifiable client information
- Enterprise AI tools: Review data handling terms carefully
- Legal-specific AI: Usually designed with confidentiality in mind, but verify
- All tools: Consider what would happen if input data were disclosed
If you input client information to an AI tool and it becomes public or is used to train the model, you've potentially breached confidentiality. The SRA won't accept "I didn't know" as an excuse.
Element 4: Training and Supervision
Purpose: Ensure staff use AI appropriately
Training Should Cover:
- What tools are approved for use
- What uses are permitted/prohibited
- How to review and verify AI outputs
- How to handle errors or concerns
- Where to find more information
Supervision Requirements:
- Appropriate checking of AI-assisted work
- Enhanced review for less experienced staff
- Regular discussion of AI use in team meetings
- Feedback mechanism for issues
Element 5: Incident Management
Purpose: Handle problems when they arise
When to Escalate:
- AI output that was incorrect and affected client work
- Data potentially exposed through AI use
- Staff using AI in prohibited ways
- Client complaints related to AI use
Response Process:
- Immediate containment (if needed)
- Assessment of impact
- Client communication (if appropriate)
- Regulatory notification (if required)
- Root cause analysis
- Control improvement
Template: Small Firm AI Policy
Here's a starting template you can adapt:
[FIRM NAME] AI Usage Policy
Version: 1.0
Approved by: [Partners]
Date: [Date]
Review Date: [Date + 12 months]
1. Purpose
This policy governs the use of artificial intelligence tools by [FIRM] staff to ensure compliance with SRA requirements and protection of client interests.
2. Scope
This policy applies to all staff using any AI tool for firm business, whether firm-provided or personal tools used for work purposes.
3. Approved Tools
The following AI tools are approved for use: [List approved tools]
No other AI tools may be used without partner approval.
4. Permitted Uses
AI may be used for:
- Drafting assistance (letters, documents, emails) with human review
- Research assistance with verification against authoritative sources
- Administrative tasks (scheduling, reminders, document management)
- Proofreading and editing assistance
5. Prohibited Uses
AI must NOT be used for:
- Generating advice to clients without solicitor review
- Processing identifiable client information in non-approved tools
- Submitting AI text as work product without verification
- Any use that compromises client confidentiality
6. Review Requirements
All AI outputs must be reviewed by a qualified solicitor before:
- Sending to clients
- Filing with courts or regulators
- Relying on for legal advice
- Including in firm work product
7. Data Protection
Client-identifying information must not be entered into any AI tool unless:
- The tool is specifically approved for client data
- The matter has been assessed for data protection compliance
- Appropriate safeguards are documented
8. Training
All staff will receive training on this policy before using AI tools. Training records will be maintained.
9. Reporting Concerns
Staff must report any concerns about AI use, including errors, data issues, or policy breaches, to [designated person] immediately.
10. Review
This policy will be reviewed annually and updated as technology and guidance evolve.
Demonstrating Compliance
When the SRA asks (and they might), you should be able to show:
Documentation
- AI policy document
- Tool assessments and approvals
- Training records
- Incident log (even if empty)
Understanding
- Staff can explain how AI is used
- Supervision arrangements are clear
- Review processes are documented and followed
Continuous Improvement
- Regular policy reviews
- Adaptation to new tools and guidance
- Learning from any incidents
Documentation isn't bureaucracy for its own sake. If something goes wrong, your documentation demonstrates you took reasonable steps. Without it, you're hoping for the best.
The Proportionality Principle
Your governance should be proportionate to:
- Size of your firm
- Extent of AI use
- Type of work you do
- Risks specific to your practice
A two-partner firm using basic document automation needs less elaborate governance than a large firm building custom AI applications. But both need something.
Need help establishing AI governance for your practice? We help law firms build practical, proportionate governance frameworks that satisfy regulatory requirements without excessive bureaucracy.
Book a consultation to discuss your specific needs.
