Decision by machine – artificial intelligence and the judicial process
Auckland District Law Society President Brian Keene QC voiced a note of concern in Law News Issue 26 (12 August 2016), based on Professor Richard Susskind’s suggestion that super-computers programmed with appropriate “fuzzy logic” could deliver answers to individual disputes in a way that would be an improvement on the current “judge-made law” system of the courts.
Let’s have a brief look at the issue of artificial intelligence or “AI” and the law. Viewed dispassionately, the proposals are not “Orwellian” nor do they suggest the elevation of “Terminator J” to the Bench.
Putting the matter very simplistically, legal information either in the form of statutes or case law is data which has meaning when properly analysed or interpreted. Apart from the difficulties in locating such data, the analytical process is done by lawyers or other trained professionals.
I suggest a “Law as Data” approach using data analysis and analytics which match fact situations with existing legal rules.
Already a form of data analysis or AI variant is available in the form of databases such as LexisNexis, Westlaw, NZLII, AustLII or BAILII. LexisNexis and Westlaw have applied natural language processing (NLP) techniques to legal research for more than ten years. The core NLP algorithms were all published in academic journals long ago and are readily available. The hard (very hard) work is practical implementation. Legal research innovators like Fastcase and Ravel Law have done that hard work, and added visualisations to improve the utility of results.
Using LexisNexis or Westlaw, the usual process involves the construction of a search which, depending upon the parameters used, will return a limited or extensive dataset. It is at that point that human analysis takes over.
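To illustrate the point in the simplest possible terms, the following sketch shows how the breadth of a search's parameters determines whether a limited or extensive dataset is returned. The case summaries and the `search` function are entirely hypothetical, for illustration only; commercial research platforms are vastly more sophisticated.

```python
# Hypothetical corpus of case summaries (invented for illustration).
cases = [
    "negligence duty of care owed by occupier to visitor",
    "breach of contract for sale of goods, damages claimed",
    "negligence claim against employer, contributory negligence",
    "judicial review of ministerial decision under statute",
]

def search(corpus, all_terms=(), any_terms=()):
    """Return documents containing every 'all' term and, if given, at least one 'any' term."""
    results = []
    for doc in corpus:
        text = doc.lower()
        if all(t in text for t in all_terms) and (
            not any_terms or any(t in text for t in any_terms)
        ):
            results.append(doc)
    return results

# A narrowly parameterised search returns a small, manageable dataset...
narrow = search(cases, all_terms=("negligence",), any_terms=("employer",))
# ...while a broader one returns a more extensive dataset for human analysis.
broad = search(cases, all_terms=("negligence",))
print(len(narrow), len(broad))  # 1 2
```

The human analyst's judgment enters precisely in choosing those parameters and in reading what comes back.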
What if the entire corpus of legal information is reduced to a machine readable dataset? This would be a form of “big data” with a vengeance, but it is a necessary starting point. The issue then is to:
a) reduce the dataset to information that is relevant and manageable; and
b) deploy tools that would measure the returned results against the facts of a particular case to predict a likely outcome.
Part (a) is relatively straightforward. There are a number of methodologies and software tools that are deployed in the e-discovery space that perform this function. Technology-assisted review (“TAR”, or predictive coding) uses natural language and machine learning techniques against the gigantic data sets of e-discovery. TAR has been proven to be faster, better, cheaper and much more consistent than human-powered review (HPR). It is assisted review, in two senses.
First, the technology needs to be assisted; it needs to be trained by senior lawyers very knowledgeable about the case. Second, the lawyers are assisted by the technology, and by the careful statistical thinking that must be done to use it wisely. Thus, lawyers are not replaced, though they will be fewer in number. TAR is the success story of machine learning in the law. It would be even bigger but for the slow pace of adoption by both lawyers and their clients (see Michael Mills, “Artificial Intelligence in Law: The State of Play 2016 (Part 2)”, 23 February 2016, Thomson Reuters Legal Executive Institute (last accessed 16 August 2016)).
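Both senses of "assisted" can be seen in a toy sketch of the underlying idea: lawyers label a small seed set of documents as relevant or not, and a simple statistical model (here a naive Bayes scorer, standing in for the far more sophisticated commercial tools) ranks the unreviewed documents by likely relevance. All documents below are invented.

```python
import math
from collections import Counter

# Seed set labelled by senior lawyers: (document, 1 = relevant, 0 = not).
seed = [
    ("email discussing the disputed supply agreement", 1),
    ("memo on pricing terms of the supply agreement", 1),
    ("invitation to the office christmas party", 0),
    ("newsletter about staff parking arrangements", 0),
]

def train(labelled):
    """Count word occurrences per class from the lawyer-labelled seed set."""
    counts = {0: Counter(), 1: Counter()}
    docs = {0: 0, 1: 0}
    for text, label in labelled:
        docs[label] += 1
        counts[label].update(text.lower().split())
    return counts, docs

def relevance(text, counts, docs):
    """Log-odds that a document is relevant, with Laplace smoothing."""
    vocab = set(counts[0]) | set(counts[1])
    score = math.log(docs[1] / docs[0])
    for word in text.lower().split():
        p1 = (counts[1][word] + 1) / (sum(counts[1].values()) + len(vocab))
        p0 = (counts[0][word] + 1) / (sum(counts[0].values()) + len(vocab))
        score += math.log(p1 / p0)
    return score

counts, docs = train(seed)
unreviewed = [
    "draft amendment to the supply agreement",
    "reminder about the party venue",
]
ranked = sorted(unreviewed, key=lambda d: relevance(d, counts, docs), reverse=True)
print(ranked[0])  # the agreement-related document ranks first
```

The lawyers assist the machine by supplying the labels; the machine assists the lawyers by pushing the likely-relevant documents to the top of a review queue far larger than this example.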
Part (b) would require the development of the necessary algorithms that could undertake the comparative and predictive analysis, together with a form of probability analysis to generate an outcome that would be useful and informative. There are already variants at work now in the field known as outcome prediction, utilising cognitive technologies.
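In its simplest conceivable form, such a probability analysis might compare the features of a new fact situation against historical outcomes. The sketch below is purely illustrative: the features, figures and `predict` function are invented, and real outcome-prediction systems rest on far richer data and models.

```python
# Hypothetical historical cases, each reduced to two binary features
# and a known outcome: (written_contract, expert_evidence, claimant_won).
history = [
    (1, 1, 1),
    (1, 0, 1),
    (1, 1, 1),
    (0, 1, 0),
    (0, 0, 0),
    (1, 0, 0),
]

def predict(written_contract, expert_evidence):
    """Probability of success among historical cases sharing these features."""
    matches = [won for wc, ee, won in history
               if wc == written_contract and ee == expert_evidence]
    if not matches:
        return None  # no comparable cases: defer to human judgment
    return sum(matches) / len(matches)

print(predict(1, 1))  # 1.0: both comparable historical cases succeeded
print(predict(1, 0))  # 0.5: comparable cases split evenly
```

The output is a probability, not a decision: it informs, rather than determines, the assessment of the case.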
There are a number of examples of legal analytics tools. “Lex Machina”, having developed a set of intellectual property (IP) case data, uses data mining and predictive analytics techniques to forecast outcomes of IP litigation. Recently, it has extended the range of data it is mining to include court dockets, enabling new forms of insight and prediction (last accessed 16 August 2016).
“LexPredict” developed systems to predict the outcome of Supreme Court cases, at accuracy levels which challenge experienced Supreme Court practitioners (last accessed 16 August 2016).
“Premonition” uses data mining, analytics and other AI techniques “to expose, for the first time ever, which lawyers win the most before which Judge” (last accessed 16 August 2016).
These proposals, of course, immediately raise the question of whether we are approaching decision by machine.
As I envisage the deployment of AI systems, the analytical process would form part of the triaging or early case assessment stage in the online court model, rather than part of the decision-making process. The advantage lies in the manner in which the information is reduced to a relevant dataset, automatically and faster than could be achieved by human means. Within the context of the online court process it could be seen as facilitative rather than determinative.
If the case reached the decision-making stage it would, of course, be open to a judge to utilise the “Law as Data” approach while retaining the ultimate sign-off. The judge would find the relevant facts. The machine would process the facts against the existing database that is the law and present the judge with a number of possible options with supporting material. In that way, the decision would still be a human one, albeit machine-assisted.