Artificial Intelligence and the Legal Profession

Artificial Intelligence is going to have a disruptive effect on the legal profession.  The question is how soon, how much, and which areas of law will be affected first.  This kind of disruptive change builds slowly, but once it hits a tipping point, it happens quickly.

Futurist Richard Worzel wrote an article titled Three Things You Need to Know About Artificial Intelligence that is worth a read.  Here are some excerpts:

Every once in while, something happens that tosses a huge rock into the pond of human affairs. Such rocks include things like the discovery of fire, the invention of the wheel, written language, movable type, the telegraph, computers, and the Internet. These kinds of massive disturbances produce pronounced, remarkable, unexpected changes, and radically alter human life.

Artificial Intelligence is just such a rock, and will produce exactly those kinds of disturbances. We’re not prepared for the tsunami that AI is going to throw at us.

But now AI is becoming a reality, and it is going to hit us far faster than we now expect. This will lead to an avalanche of effects that will reach into all aspects of our lives, society, the economy, business, and the job market. It will lead to perhaps the most dramatic technological revolution we have yet experienced – even greater than the advent of computers, smartphones, or the Internet.

The legal profession seems to be particularly susceptible to early disruption by AI:

“At JPMorgan Chase & Co., a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours. The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of work each year by lawyers and loan officers.”

So, before June of 2017, lawyers and loan officers at JPMorgan Chase spent 360,000 hours a year interpreting commercial loan agreements.  Since June, software has done that work instead.

Cross-posted to Slaw

Big data privacy challenges

Big data and privacy was one of the topics discussed at the Canadian IT Law Association conference this week.  Some of the issues worth pondering include:

  • Privacy principles say one should collect only what is needed, and keep it only as long as needed.  Big data says collect and retain as much as possible in case it proves useful.
  • Accuracy is a basic privacy principle – but with big data accuracy is being replaced by probability.
  • A fundamental privacy notion is informed consent for the use of one’s personal information.  How do you have informed consent and control for big data uses when you don’t know what it might be used for or combined with?
  • Probability means that the inferences drawn may not always be accurate.  How do we, as individuals, deal with erroneous inferences about us?
  • If the results are based on information that is itself questionable, the results may be questionable too.  (The old garbage in, garbage out concept.)  It has been proposed that for big data and AI, we might want to add to Asimov’s three laws of robotics that an AI will not discriminate, and that it will disclose its algorithm.
  • If AI reaches conclusions that lead to discriminatory results, is that going to be dealt with by privacy regulators, or human rights regulators, or some combination?
  • Should some of this be dealt with by ethical layers on top of privacy principles?  Perhaps no-go zones for practices felt to be improper, such as capturing audio and video without notice, charging to remove or amend information, or re-identifying anonymized information.

Cross-posted to Slaw

Should lawyers learn to code?

There have been many articles suggesting that lawyers should learn how to code software.  This Wolfram Alpha article is a good one, although many others are far more adamant that every lawyer needs to learn how to code.  The rationale is that because software will increasingly affect how lawyers practice, and who will be competing with us to provide our services, we should learn to code.

So should we learn how to code?  For most lawyers, probably not.

I knew how to code before law school, and for me it has been very useful.  Since my practice is largely around IT issues, it has helped me understand those issues and discuss them with clients.  It has also influenced my drafting style for both contract drafting and the way I communicate with clients.

But the idea that learning how to code will give us a leg up against competitors who are developing or adopting intelligent solutions to replace our services, or will help us build our own systems to compete or become more efficient, is flawed.  The systems that will have the biggest impact are based on artificial intelligence.  That is very sophisticated, cutting-edge work, and learning how to code is not going to help with it.  It is something we need to leave to the experts, or hire experts to do.

Lawyers interested in this can find resources discussing artificial intelligence and where it is headed (such as the Artificial Lawyer site and Twitter feed that posted the Wolfram Alpha article).  Looking at where this is headed, and how it might affect the practice of law, would be more productive than learning how to code.

Cross-posted to Slaw