For years, CLMs and legaltech products relied on keyword search so lawyers could find contracts, clauses, and prior work product. It worked well enough when the UX was a list of results and the user knew the right keywords.
In the world of AI assistants, that is no longer enough.
Lawyers now ask conversational questions like:
“Show me MSAs where limitation of liability excludes consequential damages.”
“Do we have any customer contracts with a 30-day termination for convenience?”
“What is our standard position on assignment, and where have we accepted deviations?”
Keyword search is not dead, but it needs a reranker
The good news: you do not need to throw out your existing search stack.
Some teams prefer to keep their current keyword retrieval (BM25, Elasticsearch, OpenSearch) to avoid switching costs, then add a reranker to dramatically boost precision, especially for conversational queries. Others decide to invest more deeply in the accuracy and latency of the search stack and switch their search infrastructure to providers like ZeroEntropy.
A simple pattern:
Keyword or hybrid retrieval pulls the top 50 to 200 candidates quickly.
A reranker reorders those candidates so the top results actually match the user’s intent.
This upgrade is usually the highest ROI change you can make to legal retrieval because it improves quality without requiring reindexing, new infra, or a full semantic rewrite.
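A minimal sketch of that pattern, assuming rank_bm25 for the first stage; score_pair is a hypothetical stand-in for whichever reranker you plug in:

```python
from rank_bm25 import BM25Okapi

corpus = ["..."]  # your clause or document texts
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

def retrieve_candidates(query: str, n: int = 100) -> list[str]:
    # Stage 1: cheap keyword retrieval pulls a wide candidate set.
    return bm25.get_top_n(query.lower().split(), corpus, n=n)

def rerank(query: str, candidates: list[str], k: int = 5) -> list[str]:
    # Stage 2: score each (query, candidate) pair and keep only the best k.
    # score_pair is hypothetical: swap in a zerank-2 call or any
    # cross-encoder you deploy.
    ranked = sorted(candidates, key=lambda doc: score_pair(query, doc), reverse=True)
    return ranked[:k]

query = "MSAs where limitation of liability excludes consequential damages"
top_k = rerank(query, retrieve_candidates(query))
```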
What a reranker does
A reranker is a model that reads the query and each candidate document together, then assigns a relevance score and reorders the list.
Instead of asking “does this document contain the same words,” it asks:
does this document answer the question?
is it the right clause or section?
does it match the jurisdiction, definition, and legal meaning the user implied?
That deep understanding is what keyword and vector search alone often fail to capture.
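Mechanically, this is a cross-encoder: the query and the candidate are read together in one pass and the model emits a relevance score. A minimal sketch using the sentence-transformers CrossEncoder interface (the model id is illustrative, not an official zerank-2 checkpoint):

```python
from sentence_transformers import CrossEncoder

# Illustrative checkpoint; substitute the reranker you actually deploy.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "limitation of liability clauses that exclude consequential damages"
candidates = [
    "Neither party shall be liable for any consequential or incidental damages...",
    "Consequential Damages means any indirect, special, or punitive damages...",  # a definition, not an exclusion
]

# Each (query, candidate) pair is scored jointly; higher means more relevant.
scores = model.predict([(query, doc) for doc in candidates])
ranked = sorted(zip(scores, candidates), reverse=True)
```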
Why reranking improves everything, not just relevance
A reranker is not just an accuracy add-on. It changes the economics of your whole pipeline.
When ranking improves, you need fewer tokens downstream.
The chain reaction:
Fewer tokens: you pass fewer chunks to the LLM because the top K is actually good
Better tokens: the LLM sees the right clauses and authorities instead of near matches
Lower latency: less context in the prompt reduces end-to-end time
Lower cost: fewer input tokens and fewer retries
Better results: fewer hallucinations, more grounded answers, better user trust
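A back-of-the-envelope illustration (the chunk size and price here are made-up numbers; substitute your own):

```python
chunk_tokens = 400       # assumed average chunk size
usd_per_1k_input = 0.01  # assumed LLM input price per 1K tokens

def prompt_cost(num_chunks: int) -> float:
    return num_chunks * chunk_tokens * usd_per_1k_input / 1000

print(prompt_cost(40))  # 0.16 USD: stuffing 40 loosely relevant chunks
print(prompt_cost(5))   # 0.02 USD: a well-ranked top 5, an 8x saving per query
```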
In legal AI, that is the difference between a demo and a product lawyers rely on.
Where ZeroEntropy's zerank-2 is uniquely strong
zerank-2 is designed for modern legal UX, where queries are conversational and the system needs to behave consistently.
It stands out in three ways:
Instruction following
You can steer ranking with short context like definitions, preferences, and constraints.
Example instruction you can attach:
“Prefer California authorities. Prefer post-2010. Prefer appellate over trial. If this is a contract clause query, prefer the actual clause text over commentary.”
This is extremely useful when legal terms collide across jurisdictions or internal contract standards.
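One simple, model-agnostic way to attach that context is to prepend it to the query before reranking, reusing the rerank sketch above. How zerank-2 actually ingests instructions is defined by its API, so treat this as a pattern, not the official interface:

```python
instruction = (
    "Prefer California authorities. Prefer post-2010. Prefer appellate over trial. "
    "If this is a contract clause query, prefer the actual clause text over commentary."
)
query = "Is a non-compete enforceable against a former employee?"

# Steer the reranker by packing the instruction into the query side.
steered_query = f"{instruction}\n\nQuery: {query}"
top_k = rerank(steered_query, retrieve_candidates(query))
```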
Multilingual robustness
If you support cross-border legal work, your corpus and queries are not English only. zerank-2 is built for multilingual and code-switched queries, so relevance does not collapse outside English.
Calibrated signals for safe behavior
In assistant workflows, you need to know when retrieval is weak. Calibrated scores and confidence let you do simple product logic:
if confidence is low, ask a clarifying question instead of answering
if the top two results are close, include both in context
if nothing clears a threshold, expand the candidate set
This directly reduces hallucination risk.
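That logic is a few lines once scores are calibrated. A sketch, assuming ranked is a descending list of (score, doc) pairs and the two helpers are your own product code; the thresholds are placeholders to tune on your data:

```python
def decide(query: str, ranked: list[tuple[float, str]],
           answer_floor: float = 0.7, tie_margin: float = 0.05):
    if not ranked or ranked[0][0] < answer_floor:
        # Retrieval is weak: clarify or widen the candidate set instead of guessing.
        return clarify_or_expand(query)
    context = [ranked[0][1]]
    if len(ranked) > 1 and ranked[0][0] - ranked[1][0] < tie_margin:
        # Top two are too close to call: give the LLM both.
        context.append(ranked[1][1])
    return answer_with_context(query, context)
```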
Three concrete use cases that map to most legal products
1. Conversational contract search inside a CLM
User asks:
“Do we have any customer agreements where we accepted termination for convenience with less than 30 days' notice?”
Keyword search returns a mix of termination clauses: some for cause, some for convenience, some irrelevant.
Reranking fixes this by pulling clauses that actually express:
termination for convenience
notice period
the counterparty type implied by “customer”
Result: top 5 is useful, not top 50.
2. Clause analytics and comparison
User asks:
“Show me limitation of liability clauses that exclude consequential damages, and highlight the carve-outs.”
Without reranking, you get clauses that mention consequential damages but are not actually exclusions, or you get definitions sections.
With reranking, the system consistently surfaces the right clauses, which makes your analytics and extraction reliable, and makes highlighting far more accurate.
3. Legal research assistant grounded answering
User asks:
“What cases say a non-compete is unenforceable in California?”
First stage keyword retrieval often returns:
cases that mention non-competes but are about trade secrets
cases from other states referencing California in dicta
secondary sources
Reranking pushes to the top:
cases where the holding directly addresses enforceability under California law
controlling authority over persuasive mentions
fact patterns that match employment agreements, if implied
How teams integrate it
Most teams keep their existing retrieval system and add zerank-2 as a second stage:
retrieve top N candidates with keyword or hybrid search
rerank top N with zerank-2
send only top K into the LLM or into the UI
This is a drop-in upgrade that improves quality immediately.
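End to end, the drop-in looks like this (retrieve_candidates and rerank are the sketches above; build_prompt and call_llm stand for your own prompt assembly and LLM client):

```python
def answer(question: str) -> str:
    candidates = retrieve_candidates(question, n=100)  # existing keyword or hybrid search
    top_k = rerank(question, candidates, k=5)          # second stage: zerank-2 or similar
    prompt = build_prompt(question, top_k)             # only the top K reach the LLM
    return call_llm(prompt)
```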
Conclusion
Legal search is shifting from keyword boxes to conversational assistants. When queries carry intent, nuance, and constraints, you need semantic understanding plus reranking.
zerank-2 upgrades your current retrieval into an assistant-ready stack: higher precision at the top, fewer tokens downstream, lower latency and cost, and outputs lawyers can trust.
Explore the solution at ZeroEntropy.dev and bring accuracy to your legal document workflows.