By James Whittington, Assistant Director, Client Technology Solutions
At Gravity Stack, we’ve been in the eDiscovery trenches long enough to know when a technology is going to stick. For over a decade, we’ve used technology-assisted review (TAR) to scale document review in large cases. But in the last year, we’ve begun using generative AI. The top providers are all building AI integrations, but our team’s experience comes primarily from Relativity aiR.
From TAR to Generative AI: A Real Shift
Unlike traditional TAR, which relied on supervised learning and validation methods like the elusion test, aiR is powered by large language models that provide rationales, highlight relevant sections, and classify documents into tiers: very responsive, responsive, borderline, and not responsive. This means that rather than simply assisting reviewers, the AI is simulating the review itself, and doing it remarkably well.
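To make that output concrete, here’s a minimal sketch of how a per-document result in this style could be modeled. The field names and the ResponsivenessTier values are illustrative, mirroring the tiers described above rather than any actual aiR schema:

```python
from dataclasses import dataclass
from enum import Enum

class ResponsivenessTier(Enum):
    """The four tiers described above."""
    VERY_RESPONSIVE = "very responsive"
    RESPONSIVE = "responsive"
    BORDERLINE = "borderline"
    NOT_RESPONSIVE = "not responsive"

@dataclass
class ReviewResult:
    """Hypothetical per-document output from an LLM-driven review pass."""
    doc_id: str
    tier: ResponsivenessTier
    rationale: str                      # plain-English explanation of the call
    highlights: list[tuple[int, int]]   # (start, end) offsets of the relevant passages
```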
We’ve used AI for review in over ten matters already this year. In one case, it classified 31,000 documents in under eight hours. At a sustained human pace of 40 documents per hour, the same task would have taken roughly 775 hours. That’s not an abstract claim about future potential. That’s happening today.
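For readers who want to check the math, the comparison works out like this (a back-of-the-envelope sketch; the 40-documents-per-hour pace is the benchmark cited above):

```python
docs = 31_000
human_rate = 40        # documents per hour, the benchmark used above
ai_hours = 8           # upper bound from the matter described

human_hours = docs / human_rate
print(f"Human review: {human_hours:.0f} hours")            # 775 hours
print(f"Speedup: at least {human_hours / ai_hours:.0f}x")  # ~97x, since the AI pass took under 8 hours
```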
Use Cases That Actually Work
aiR is especially strong at first-pass review. For a typical 10,000-document dataset, we now rely on aiR to bucket documents for responsiveness, then have humans validate the smaller sets. Although we still keep humans in the loop, in many cases the output is accurate enough to skip manual review entirely for non-responsive documents, particularly in investigations or data subject access requests.
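To illustrate, here’s a rough sketch of that bucket-then-validate flow. The tier labels match the ones above, but the routing rules and the 10% spot-check rate are assumptions made for the example, not our actual validation protocol:

```python
import random

def triage(results, spot_check_rate=0.10, seed=7):
    """Split AI-classified documents into a human-validation queue and a
    not-responsive pool that only gets spot-checked.

    results: list of dicts like {"doc_id": "...", "tier": "responsive"},
    using the tier labels described earlier.
    """
    needs_human = [r for r in results if r["tier"] != "not responsive"]
    skip_pool = [r for r in results if r["tier"] == "not responsive"]

    # Validate the "skip" decision itself by sampling the not-responsive bucket.
    random.seed(seed)
    k = min(len(skip_pool), max(1, round(len(skip_pool) * spot_check_rate)))
    spot_check = random.sample(skip_pool, k) if skip_pool else []

    return needs_human, spot_check
```

If the spot-check turns up misses, the skip decision gets revisited rather than trusted wholesale.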
It’s also excellent for foreign-language documents. The model outputs rationales in English even when the source text is in Turkish, French, or Japanese—cutting down on expensive translation workflows and enabling quicker insight across jurisdictions.
And we’re just starting to integrate it with other workflows, including:
- QC of manual review: Validating human-reviewed sets (see the sketch after this list).
- Production analysis: Analyzing what’s been received from opposing counsel.
- Training active learning models: Using aiR output to kickstart traditional supervised learning.
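For the QC use case, the core of the workflow is a disagreement check: run the AI over a set humans have already coded, then escalate the conflicts. A simplified sketch, with an assumed label vocabulary rather than any platform’s actual field values:

```python
def qc_disagreements(human_calls, ai_calls):
    """Flag documents where the human coding decision and the AI tier conflict.

    human_calls: dict of doc_id -> "responsive" or "not responsive"
    ai_calls:    dict of doc_id -> one of the four tiers described earlier
    """
    flagged = []
    for doc_id, human in human_calls.items():
        ai = ai_calls.get(doc_id)
        if ai is None or ai == "borderline":
            continue  # not AI-reviewed, or too close a call to count as a conflict
        human_responsive = human != "not responsive"
        ai_responsive = ai != "not responsive"
        if human_responsive != ai_responsive:
            flagged.append((doc_id, human, ai))
    return flagged  # candidates for second-level review
```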
What AI Doesn’t (Yet) Replace
Right now, aiR for Case Strategy and aiR for Privilege are still in testing. We expect both to be impactful, especially for creating case chronologies and extracting key issues, but those capabilities are coming in Q2.
And while aiR has been transformative in first-pass review, we still rely on Microsoft Purview and client-side tools for upstream phases like identification, collection, and legal hold. The EDRM model still holds; what’s changing is how much of it we can now run faster, cheaper, and in-house.
What’s Next?
Looking ahead, I’m confident that document review as we know it will ultimately be handled end to end by AI, and when that happens, review itself will look entirely different. Our teams are already adapting to this hybrid era, where AI is a reviewer, a trainer, and a QC partner all in one.
Later this year, I’ll be speaking at Legal Innovators London on the state of AI in eDiscovery. Between now and then, I’ll continue testing these tools, refining our workflows, and pushing our teams forward. If you’re interested in learning more about what our team is seeing and evaluating in the blossoming AI discovery market, get in touch and schedule a meeting with our team.