The Hallucination Problem
AI "hallucination" is a polite term for AI making things up. It isn't a bug; it's a consequence of how large language models work. They predict plausible text, and sometimes plausible isn't true.
In legal contexts, this creates real professional risk. Cases that don't exist, statutes that were never enacted, precedents with incorrect holdings—AI can generate all of these with complete confidence.
The US has seen multiple high-profile cases where lawyers cited AI-generated case law that turned out to be fabricated. UK regulators and courts are watching closely.
Why AI Hallucinations Happen
Understanding the mechanism helps manage the risk:
How Language Models Work
Large language models don't "know" facts like a database. They predict what text should come next based on patterns learned from training data. When asked a legal question, the model generates text that looks like a legal answer—whether or not the content is accurate.
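The prediction mechanism can be illustrated with a toy bigram model. This is a deliberately minimal sketch (the miniature corpus and the code are illustrative only; production models are vastly more sophisticated), but it shows the core point: the model emits whatever continuation was statistically common in its training text, with no check on truth.

```python
# Toy illustration: a "language model" that picks the most probable next
# word, with no notion of whether the resulting sentence is true.
from collections import Counter

corpus = ("the court held that the contract was valid . "
          "the court held that the claim failed .")
tokens = corpus.split()

# Count which word follows each word in the training text.
following = {}
for prev, nxt in zip(tokens, tokens[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def next_token(word):
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]

# Generate a plausible-looking phrase, one token at a time.
word, sentence = "the", ["the"]
for _ in range(5):
    word = next_token(word)
    sentence.append(word)

print(" ".join(sentence))  # → the court held that the court
```

The output is grammatical and legal-sounding, but nothing in the mechanism checks whether the "court" ever "held" anything. Scaled up enormously, the same dynamic produces fluent, confident, and potentially fabricated legal text.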
Confidence Without Knowledge
AI doesn't have uncertainty in the human sense. It generates text with the same confidence whether the content is:
- Clearly established law
- Disputed or uncertain
- Completely made up
There's no warning label on fabricated content.
The Specificity Trap
Paradoxically, AI can be more wrong when it's more specific:
- Asked for "a case about contract formation," it might find a real one
- Asked for "a 2019 Court of Appeal case about email formation of contracts," it might fabricate one that fits the specification
The more specific and plausible-sounding an AI citation is, the more important it is to verify. AI fabrications often include convincing details—that's what makes them dangerous.
Real-World Examples
US Cases
Multiple US lawyers have faced sanctions for citing AI-fabricated cases:
- New York attorneys fined $5,000 for fake citations in federal court (Mata v Avianca, 2023)
- Texas attorney sanctioned after AI-generated brief contained non-existent cases
- Several other cases in various jurisdictions
UK Implications
While no UK cases of AI citation sanctions have been widely reported yet, the risk is identical:
- The SRA would likely treat it as a competence failure
- The courts would not be sympathetic
- The damage to professional reputation would be severe
Appropriate Use Cases
AI can be helpful for legal work when used appropriately:
Lower Risk Uses
| Use | Why Lower Risk |
|---|---|
| Drafting standard letters | Content can be verified against your own knowledge |
| Summarising documents you've read | You can check the summary against the original |
| Suggesting structure for arguments | You know if the structure makes sense |
| Proofreading and editing | Errors are visible on review |
| Administrative tasks | Don't involve legal content |
Higher Risk Uses
| Use | Why Higher Risk |
|---|---|
| Legal research | AI may fabricate sources |
| Citing authorities | Specific citations often wrong |
| Explaining unfamiliar law | You can't verify what you don't know |
| Drafting legal arguments | Premises may be fabricated |
| Advice in specialist areas | Beyond your verification capability |
Verification Protocols
For any AI-generated legal content, establish verification procedures:
Case Citations
Every citation must be verified:
- Check the case exists in a legal database (Westlaw, LexisNexis, BAILII)
- Verify the citation format is correct
- Confirm the holding matches what AI claimed
- Check the current status (not overruled, distinguished)
If you can't verify it, don't cite it.
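The checks above can be captured as a simple all-or-nothing record, useful if your firm logs verification in a matter-management system. This is a hypothetical sketch (the class and field names are illustrative, not a real product's API): the point is that a single failed or unperformed check should block the citation.

```python
from dataclasses import dataclass

@dataclass
class CitationCheck:
    """Hypothetical record of verification steps for one citation."""
    found_in_database: bool        # e.g. Westlaw, LexisNexis, BAILII
    citation_format_correct: bool  # citation follows the correct format
    holding_matches_claim: bool    # holding matches what the AI asserted
    still_good_law: bool           # not overruled or distinguished

    def safe_to_cite(self) -> bool:
        # Usable only if every single check passed; one failure blocks it.
        return all([self.found_in_database,
                    self.citation_format_correct,
                    self.holding_matches_claim,
                    self.still_good_law])

# A citation that exists and reads well but has been overruled still fails.
check = CitationCheck(True, True, True, False)
print(check.safe_to_cite())  # prints False
```

The design choice matters: defaulting to "unsafe" unless every check affirmatively passes mirrors the rule above — if you can't verify it, don't cite it.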
Statutory References
For legislation:
- Confirm the statute exists
- Check the section/provision is real
- Verify the wording matches (AI paraphrases can subtly change meaning)
- Check for amendments made since the model's training cutoff
Procedural Statements
For practice and procedure:
- Verify against current rules
- Check practice directions
- Confirm local variations if applicable
- Be alert to outdated procedures
General Statements of Law
For legal propositions:
- Consider whether it sounds correct based on your knowledge
- If uncertain, research independently
- Don't rely on AI for areas outside your competence
- Document your verification
A useful rule: if you wouldn't stake your practising certificate on a proposition without verification, don't stake it on AI output without verification either.
Training Staff on AI Risks
Everyone using AI needs to understand:
The Nature of AI
- AI generates plausible text, not verified truth
- Confidence doesn't indicate accuracy
- Specific details can be entirely fabricated
- Legal content requires verification
Professional Obligations
- You're responsible for work product, regardless of how generated
- SRA competence requirements apply
- Courts expect accurate citations
- "AI told me" is not a defence
Practical Protocols
- Always verify citations before use
- Use approved tools and methods
- Escalate if uncertain
- Document verification performed
Red Flags to Watch For
- Cases with suspiciously perfect facts
- Obscure courts or reporters
- Citations that can't be found
- Holdings that seem too good to be true
Managing the Workflow
Separation of Drafting and Verification
Don't verify as you draft—it's easy to skip or rush:
- Use AI for initial draft
- Complete the draft
- Perform systematic verification
- Document verification completed
Verification Checklist
For legal content, verify:
- All case citations checked in legal database
- All statutory references confirmed
- General legal statements validated
- Procedural points checked against current rules
- Content is within your competence to verify
Time Budget for Verification
Verification takes time. AI saves drafting time but adds verification time:
- Factor verification into matter planning
- Don't assume AI makes everything faster
- Budget time for independent research if needed
The Competence Question
A fundamental question for AI use:
Can you verify the AI output within your competence?
If AI generates content about employment law discrimination tests and you don't practise employment law:
- You can't verify whether the law stated is correct
- You can't spot subtle errors in application
- Using it would be practising outside your competence
AI doesn't extend your competence—it amplifies what you can do within it.
When AI Isn't Appropriate
Be cautious using AI for:
Unfamiliar Areas
If you wouldn't feel confident drafting it yourself, AI assistance doesn't change that.
High-Stakes Matters
When errors have severe consequences, the verification burden may outweigh AI benefits.
Time-Pressured Situations
When there isn't time for proper verification, AI shortcuts become dangerous.
Complex, Novel Issues
AI trained on historical data may not reflect recent developments or novel situations accurately.
Professional Responsibility Bottom Line
The SRA has been clear: you remain responsible. Using AI tools doesn't delegate your professional judgment or liability.
Firms that use AI well will:
- Understand its limitations
- Verify outputs systematically
- Use it within their competence
- Document their processes
- Train staff appropriately
Firms that use AI poorly will eventually face:
- Regulatory action
- Professional negligence claims
- Court sanctions
- Reputational damage
The choice is how you use it, not whether to use it.
Need guidance on managing AI risk in your practice? We help law firms implement AI tools with appropriate safeguards and verification protocols.
Book a consultation to discuss your specific needs.
