How do AI-driven legal research platforms — particularly those that deliver direct answers to legal questions — stack up against one another?
That was the question at a Feb. 8 meeting of the Southern California Association of Law Libraries, where a panel of three law librarians reported on their comparison of the AI answers delivered by three leading platforms – Lexis+ AI, Westlaw Precision AI, and vLex’s Vincent AI.
While all three platforms demonstrated competency in answering basic legal research questions, the panel found, each showed distinct strengths and occasional inconsistencies in its responses to the legal queries the librarians put to them.
But after testing the platforms using three separate legal research scenarios, the panelists’ broad takeaway was that, while AI-assisted legal research tools can provide quick preliminary answers, they still need to be viewed as starting points rather than definitive sources for legal research.
“This is a starting point, not an ending point,” said Mark Gediman, senior research analyst, Alston & Bird, who was one of the three panelists, stressing the continued importance of using traditional legal research skills to verify the AI’s results.
Comparing Three Legal Questions
I did not attend the panel. However, I was provided with an audio recording and transcript, along with the slides. This report is based on those materials.
In addition to Gediman, the other two law librarians who compared the platforms and presented their findings were:
Cindy Guyer, senior knowledge and research analyst, O’Melveny & Myers.
Tanya Livshits, manager, research services, DLA Piper.
Using each of the platforms, they researched three questions, all centered on California state law or federal law within the 9th U.S. Circuit Court of Appeals, which covers California:
What is the time frame to seek certification of an interlocutory appeal from the district court in the Ninth Circuit?
Is there a private right of action under the California Reproductive Loss Leave for Employees Act?
What is the standard for appealing class certification? (California)
They evaluated the results delivered by each platform using six factors:
Accuracy of answer.
Depth of answer.
Primary sources cited.
Secondary sources cited.
Format of answer.
Iterative options.
First Question: Appeal Timeframe
For the first question, regarding the timeframe for interlocutory appeal certification in the Ninth Circuit, all three platforms identified the correct answer, which is 10 days after entry of the certification order.
The answer given by Lexis+ AI properly included a citation to the controlling federal statute, as well as related case law, and put the citations directly in the text of the answer, in addition to listing them separately. When the panelist asked it a follow-up question about how the 10 days is calculated, it gave what she considered to be a good explanation.
Westlaw Precision AI performed virtually the same as Lexis, answering the question correctly, providing the answer in substantially the same form, and citing substantially the same authorities. It also included the citations directly within the text of the answer.
Vincent AI, while also providing the correct answer, cited different cases than Lexis and Westlaw. It also showed some inconsistency in how it presented authorities, with the key governing statute appearing in the answer but not in the accompanying list of legal authorities.
Second Question: Private Right of Action
The second question, concerning a private right of action under California’s 2024 Reproductive Loss Leave Act, revealed more significant differences among the platforms. This question was particularly challenging, panelist Gediman said, in that it involved legislation that took effect just over a year ago, on Jan. 1, 2024.
Lexis+ AI and Westlaw Precision AI reached similar conclusions, both finding no explicit private right of action. However, Vincent AI reached the opposite conclusion, finding that there is a private right of action. Notably, it found a relevant regulatory provision that the other platforms missed. In arriving at its answer, it also interpreted statutory language in a way that the panelist said was “an arguable assumption but not explicitly stated.”
“Now that doesn’t mean that Westlaw and Lexis were wrong and Vincent was right or vice versa,” Gediman said. “It just points out the fact that AI is complicated, and … no matter … how many times you put a question in, each answer is going to be a little bit different each time you get it, even if it’s on the same system.”
Still, Gediman was impressed with Vincent’s performance on this question, as it found relevant language in a regulation that was not specific to the law in question.
“It managed … to find relevancy in a slightly broader frame of reference to apply, and I thought that was pretty, pretty cool,” Gediman said. “I wouldn’t have found this on my own, quite honestly, and I like to think I’m a pretty decent researcher.”
Third Question: Standard of Appeal
For the third question, regarding the standard for appealing class certification in California, all three AI tools correctly identified the abuse of discretion standard, but their presentations varied considerably. A key difference among them related to something known as the “death knell” doctrine, which requires denials of class actions in California to be appealed immediately.
On this question, Lexis+ AI provided the correct answer as to the standard, and Guyer, the panelist who tested it, thought the first paragraph of its answer did a good job of setting out the important information, including the factors courts look to in determining abuse of discretion. But she thought subsequent paragraphs became redundant, citing the same cases multiple times.
Importantly, however, the Lexis+ AI answer did not mention the “death knell” issue regarding the need to file an immediate appeal. “That was kind of important to me,” Guyer said.
Westlaw Precision AI also got the standard right and included the important warning about the immediate appeal requirement. But Guyer took issue with the way it presented its answer in a list format that could have been confusing to a researcher and might not have alerted them to the death knell issue. She also found that many of the secondary sources cited in support of the answer were not relevant, often drawing on federal law when the question involved a state statute.
Vincent AI offered perhaps the most well-rounded response, Guyer thought, calling it a “great answer.” It provided both a concise initial answer and a detailed explanation, including a unique “exceptions and limitations” section, reminiscent of practice guides, that highlighted the death knell warning.
Although Vincent’s initial answer also cited some irrelevant secondary sources, Guyer liked that Vincent has a feature whereby the researcher can check boxes next to sources to eliminate them, and then regenerate the answer based solely on the remaining sources. “I love that control that they give the user to be part of this gen AI experience in terms of what you want,” she said.
Guyer also liked that the Vincent AI answer could be exported and shared for collaboration with others. Provided they also have a Vincent AI subscription, colleagues can click on a link to view the full AI answer, as well as manipulate and regenerate the query.
Summing It All Up
The panelists summed up their evaluation of the three platforms with the chart you see below, comparing each of the six factors I listed above.
In general terms across all three questions, each platform demonstrated distinctive strengths in how it presented and supported its answers, the panelists said.
Lexis+ AI consistently showed strong integration with Shepard’s citations and offered multiple report formats. Westlaw Precision AI’s integration with KeyCite and clear source validation tools made verification straightforward, although the platform’s recent shift to more concise answers was notable in the responses. Vincent AI stood out for its user control features, allowing researchers to filter sources and regenerate answers, as well as for its unique relevancy rating system.
For the panelists, the differences in responses, particularly for the more complex questions and recent legislation, underscored that these AI answer tools should be viewed as starting points rather than definitive sources.
On the subject of vendor transparency, the panelists said that none of the vendors currently disclose which specific AI models they use. While vendors may not share their underlying technology, they have been notably responsive to user feedback and quick to implement improvements, the panelists said.
The panel emphasized that despite advances in AI technology, these tools require careful oversight and validation. “Our users tend to think that AI is the solution to all of their life’s problems,” said panelist Livshits. “I spend a lot of time explaining that it’s a tool, not a solution, and explaining the limitations of it right now.”
Said Gediman: “Every time I give the results to an attorney, I always include a disclaimer that this should be the beginning of your research, and you should review the results for relevance and applicability prior to using it, but you shouldn’t rely on it as is.”