The trust gap
The numbers tell a contradictory story. According to LawSites coverage of the Clio 2024 Legal Trends Report, "usage of artificial intelligence by legal professionals has skyrocketed from 19% in 2023 to 79% this year." That is arguably the fastest adoption curve legal technology has seen.
But even among those using AI, hesitation is widespread. Clio's data found that 44% of lawyers holding back on AI don't trust it, and 34% think it's unreliable.
This isn't irrational. Attorneys have watched colleagues get sanctioned for submitting AI-generated briefs with fabricated citations. The fear isn't about technology; it's about malpractice liability. And in family law, where the stakes are children's lives and safety, the bar for trust is even higher.
So the question isn't whether AI is coming to family law. It's already here. The question is: what actually works, and what should you stay away from?
The jagged frontier
Stanford's 2026 AI Index Report captures this contradiction with a concept called the jagged frontier: "AI models can win a gold medal at the International Mathematical Olympiad but cannot reliably tell time." The same report notes that "Gemini Deep Think earned a gold medal at IMO, yet the top model reads analog clocks correctly just 50.1% of the time."
The lesson for family law: AI is astonishingly capable at some tasks and surprisingly bad at others, and the line between the two is not obvious. A tool that summarizes a 40-page declaration brilliantly might fail at basic chronological sorting. The key is knowing which side of the frontier a given task sits on before you rely on the output.
Two categories of legal AI
Most confusion about AI in legal practice comes from lumping fundamentally different approaches into one category. There are two distinct types, and the risk profiles are completely different.
Generative AI: the hallucination risk
This is what most people think of when they hear "AI for lawyers": tools like ChatGPT, Harvey, and CoCounsel that generate text, drafting motions, writing research memos, and producing summaries from scratch.
The core problem with generative AI in legal practice is well documented: these models can confidently produce citations, case references, and legal arguments that don't exist. They don't "know" the law. They predict what legal text should look like based on patterns. When the pattern leads to a plausible-sounding but fictional citation, the model has no mechanism to catch it.
For research assistance and first-draft generation, generative AI can save time, provided you verify every output. For anything that goes before a court without human review, the risk is real and the consequences are severe.
Document reading AI: extraction with citations
This is a fundamentally different use case. Instead of generating new text, the AI reads existing documents and extracts what's already there (claims, dates, parties, key passages) and links every extraction back to its source.
The distinction matters because:
- There's nothing to hallucinate. The AI isn't inventing content. It's reading a declaration and saying, "On page 3, the opposing party claims X." You can click through and verify the exact passage.
- The source is always visible. Every extracted claim links back to the specific document and passage it came from. If the AI misreads something, you can see exactly where and why.
- The human decides what matters. The AI surfaces what's in the documents. The attorney reviews, accepts, rejects, and strategizes. The AI never makes legal judgments.
This is the difference between "AI that writes for you" and "AI that reads for you." The risk profiles are not comparable.
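To make the distinction concrete, here is a minimal sketch of what a source-linked extraction record might look like. The field names and the verification check are illustrative assumptions, not any vendor's actual schema; the point is that a record without a document and passage to click through to is, by definition, unverifiable.

```python
# Illustrative only: a minimal shape for a source-linked extraction record.
# Field names ("claim", "source", "passage") are hypothetical, not a real schema.
extracted_claim = {
    "claim": "Opposing party alleges missed pickups on three occasions",
    "category": "parenting time",
    "source": {
        "document": "declaration_2024-03-11.pdf",
        "page": 3,
        "passage": "Respondent failed to appear for scheduled exchanges...",
    },
}

def is_verifiable(claim: dict) -> bool:
    """A record is verifiable only if it points back to a concrete passage."""
    src = claim.get("source", {})
    return bool(src.get("document")) and bool(src.get("passage"))

print(is_verifiable(extracted_claim))  # True
```

A generative summary has no equivalent of the `source` field: there is nothing to click through to, so the verification burden falls entirely on the reader.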
What AI can actually do for custody cases
Family law has characteristics that make it both well-suited and poorly suited for AI, depending on the task.
Well-suited: document reading and organization
A typical contested custody case involves dozens of documents: court orders, declarations, motions, text message screenshots, emails, financial records. Each document contains claims, counter-claims, dates, and assertions that need to be tracked.
Clio's 2024 Legal Trends Report found that up to 74% of hourly billable tasks, such as information gathering and data analysis, could be automated with AI. In custody cases, the information gathering phase is massive: reading every document, identifying what each party claims, figuring out which evidence supports or contradicts each claim, and organizing everything by theme.
AI that reads documents and extracts structured information, with source citations, can compress hours of manual review into minutes. Not by replacing your judgment, but by doing the reading and sorting so you can focus on analysis.
Well-suited: claim categorization
Custody cases cluster around recurring themes: safety concerns, communication patterns, financial issues, parenting time disputes, children's wellbeing. AI can read a declaration and recognize that a paragraph about missed pickups belongs in "parenting time" while a paragraph about substance use belongs in "safety."
This isn't legal reasoning. It's pattern matching against categories you define. The AI doesn't decide what matters. It sorts what's there so you can find it.
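A toy sketch makes the "pattern matching against categories you define" point tangible. Real document-reading tools use language models rather than keyword lists; the categories and keywords below are illustrative stand-ins, not a production rule set.

```python
# Toy categorizer: sorts a paragraph into attorney-defined categories.
# Keyword matching stands in for the model-based classification a real
# tool would use; the rule set here is deliberately simplistic.
CATEGORIES = {
    "parenting time": ["pickup", "drop-off", "exchange", "visitation"],
    "safety": ["substance", "alcohol", "violence", "unsafe"],
    "finances": ["support", "payment", "expense", "income"],
}

def categorize(paragraph: str) -> str:
    text = paragraph.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in text for kw in keywords):
            return category
    return "uncategorized"

print(categorize("He missed the pickup twice last month."))  # parenting time
print(categorize("I am concerned about substance use."))     # safety
```

Note what the sketch does not do: it never weighs whether a claim is credible or important. It only sorts, which is exactly the boundary the text describes.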
Well-suited: connecting evidence to claims
One of the most time-consuming parts of custody case preparation is connecting specific pieces of evidence to specific claims. "They say I missed pickups. Here are the text messages showing I was there." AI can suggest these connections by reading both the claims and the evidence documents, flagging potential matches for your review.
Not suited: legal strategy and judgment
No AI tool should tell an attorney which claims matter most, how to frame an argument, or what outcome to pursue. Those are human decisions that require contextual understanding no model has. The attorney's judgment is the product. AI just helps you get to the judgment faster by reducing the clerical overhead.
Not suited: generating court filings
Using AI to draft a motion or declaration without meticulous human review remains risky. The hallucination problem is real, and family court is not the place to discover it. If you use generative AI for drafting, treat every output as a first draft that needs full verification, not a finished product.
How to evaluate AI tools for your practice
If you're considering any AI tool for family law work, here's what to look for, and what to watch out for.
1. Source linking is non-negotiable
Every claim, extraction, or summary the AI produces should link back to the specific document and passage it came from. If you can't click through to the source, you can't verify the output. And if you can't verify the output, you can't trust it.
This is the single most important criterion. A tool that gives you a summary without citations is asking you to trust a machine with your malpractice risk. Don't.
2. The attorney must be the reviewer, not the AI
Look for tools where AI surfaces information and the human decides what to do with it. The workflow should be: AI reads, attorney reviews, attorney decides. If the tool makes legal judgments (recommending strategy, predicting outcomes, assessing claim strength), it's overstepping, and the risk is yours.
This is not just best practice. It is being codified into the rules of professional conduct. The State Bar of California's 2026 proposed amendments to Rule 1.1 (Competence) include language requiring that "a lawyer must independently review, verify, and exercise professional judgment regarding any output generated by the technology." Whatever tools you adopt, the duty of independent verification stays with you.
3. Ask about hallucination risk directly
Any honest vendor will tell you exactly where hallucination risk exists in their product and how they mitigate it. Document reading with source linking has fundamentally lower hallucination risk than generative drafting. But even extraction-based tools can misread documents or miscategorize claims. The question isn't "is it perfect?" It's "can I verify every output easily?"
4. Data security and client confidentiality
You're uploading sensitive family law documents. Ask:
- Where is the data stored? What jurisdiction?
- Is the data used to train AI models? (The answer should be no.)
- Who can access client documents? Is there role-based access control?
- What happens to the data when you close the case?
- Does the vendor carry cybersecurity insurance?
Your ethical obligations around client confidentiality don't change because you're using a new tool. Vet the vendor the same way you'd vet a cloud-based practice management system.
5. Does it fit your existing workflow?
The best AI tool in the world is useless if it adds another system to learn, another login to manage, and another interface that doesn't talk to your existing tools. Look for tools that reduce complexity rather than add to it.
The context reconstruction problem
Beyond first-pass document review, there's a deeper efficiency problem that AI can help solve: context reconstruction.
Every time you pick up a custody case (for a hearing, a call, a meeting), you're spending cognitive effort rebuilding your understanding of the case. With 20-40 active cases, that reconstruction happens dozens of times per week.
AI that organizes claims by party and category, with evidence linked and client responses written, doesn't just save time on first review. It eliminates reconstruction entirely. You open the case and the structure is there: what they claim, what your client claims, where the evidence is, where the gaps are. Every time.
Industry research shows that solo and small firm attorneys spend nearly half their time on administrative tasks. Much of that admin isn't filing or billing. It's the invisible work of re-learning your own cases.
What's coming vs. what works today
It's worth being honest about where AI in family law stands right now, as opposed to where vendors promise it's going.
Works today
- Document reading and extraction: AI can reliably read court orders, declarations, and motions, extracting claims and key passages with source citations
- Categorization: sorting claims by theme (safety, finances, communication) based on content
- Evidence linking: suggesting connections between claims and supporting documents
- Timeline construction: extracting dates and events from documents to build case chronologies
- Research assistance: finding relevant statutes and case law (with human verification required)
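The timeline-construction item above can be sketched in a few lines: pull dates out of extracted passages and sort them into a chronology. The passages, labels, and the ISO-only date pattern are illustrative assumptions; real tools handle many date formats and messier text.

```python
# Toy timeline builder: find ISO-style dates in extracted passages and
# sort the events chronologically. The sample passages are invented and
# the regex deliberately handles only YYYY-MM-DD dates.
import re
from datetime import date

passages = [
    ("Exchange missed per text thread", "Text of 2024-03-02 shows..."),
    ("Temporary order entered", "Order filed 2024-01-15 provides..."),
    ("Declaration describes incident", "On 2024-02-20, the parties..."),
]

DATE_RE = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

def build_timeline(items):
    events = []
    for label, text in items:
        m = DATE_RE.search(text)
        if m:  # keep only passages with a recoverable date
            y, mo, d = map(int, m.groups())
            events.append((date(y, mo, d), label))
    return sorted(events)  # tuples sort by date first

for when, label in build_timeline(passages):
    print(when, label)
```

Even in this toy form, the human-review requirement is visible: passages without a recoverable date silently drop out, which is exactly the kind of gap an attorney has to check for.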
Getting better but not reliable yet
- Drafting assistance: useful for first drafts of routine filings, but every output needs full review
- Cross-document analysis: identifying contradictions across multiple filings from the same party
- Handwriting and poor-quality scan reading: improving but still error-prone
Not there yet (despite vendor claims)
- Outcome prediction: no AI can reliably predict how a family court judge will rule
- Strategic recommendations: legal strategy requires contextual judgment that models don't have
- Autonomous case management: AI that "handles" your cases without oversight is a liability, not a feature
The "AI surfaces, human decides" model
The most useful framing for AI in family law is this: AI should surface information. Humans should make decisions.
That means AI reads a 30-page declaration and tells you, "The opposing party makes 14 distinct claims, organized into these categories, with source passages here." You review those claims, decide which ones matter, identify where your client has evidence, and build your strategy.
The AI didn't make a legal judgment. It saved you three hours of reading and sorting. You still did the thinking. Your clients are still getting your expertise. They're just getting it faster because you're not spending the first half of every case on clerical work.
How Casefold approaches this
We built Casefold around the "AI surfaces, human decides" principle. Parents upload their custody documents: court orders, declarations, text screenshots, photos. AI reads every document and surfaces what each side is claiming, organized by category, with every claim linked back to the exact passage in the source document.
The parent writes responses to the other side's claims and attaches proof. The attorney opens the shared workspace and sees claims surfaced, evidence linked, and responses written, ready for review and strategy.
There's no generative drafting. No outcome prediction. No strategic recommendations. The AI reads documents and organizes what it finds. The attorney and the parent decide what matters.
Every claim links to its source. If the AI misreads something, you see exactly where it went wrong. There's nothing to hallucinate because the AI isn't generating content. It's extracting and organizing what's already in the documents.
The parent workspace is free. Attorney plans start with a free case, no credit card required. Because the biggest barrier to AI adoption isn't technology; it's trust. And trust is built by showing your work, not by asking for faith.