Should Artificial Intelligence be regulated?


Elon Musk spoke at SXSW and emphasized his concerns about artificial intelligence and why it needs to be regulated.

What is the issue?

Elon says AI is more dangerous than nuclear warheads.

Right now, AI is created for specific tasks, such as driving a car, playing a game, responding to our voice commands, or providing personal recommendations. AI today is nowhere near as generally capable as even a moth’s brain, and most people think artificial general intelligence is a long way off. But Elon says “I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me, … It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.” He is concerned that the advent of digital superintelligence is much closer than we think.

Why does it matter?

Because Skynet. Broad, general-purpose AI that is no longer task-specific could be more prone to abuse by humans. And the intelligence of AI could outpace humans’ ability to manage it. So it is important to develop AI safely to ensure it doesn’t get out of control.

What’s next?

The ethics of AI have been debated for years, and that debate will continue. Tech companies are working across disciplines (scientists, futurists, ethics specialists) to try to come to grips with ethical standards for AI. The key question – if Elon is right – is whether this needs to be escalated to lawmakers, and if so, how soon? Should it be dealt with by worldwide treaty – such as banning the use of AI in weapons?

Cross-posted to Slaw

8 Legal/Tech Issues for 2018

Blockchain (the technology behind Bitcoin) is in a hype phase. It has been touted as the solution to many issues around trust. To some extent blockchain is still a solution in search of a problem. Blockchain will, however, become an important technology, and perhaps during 2018 we will begin to see some practical uses.
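To make the trust claim concrete, here is a minimal sketch of the core idea (in Python, purely illustrative, not any real system): each block commits to a cryptographic hash of the block before it, so quietly rewriting history is detectable.

    import hashlib
    import json

    def block_hash(block):
        """Hash a block's contents deterministically with SHA-256."""
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_block(chain, data):
        """Append a block that commits to the previous block's hash."""
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"index": len(chain), "data": data, "prev_hash": prev})

    def verify(chain):
        """Re-derive every link; editing any earlier block breaks all later links."""
        return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
                   for i in range(1, len(chain)))

    chain = []
    add_block(chain, "Alice pays Bob 5")
    add_block(chain, "Bob pays Carol 2")
    print(verify(chain))                     # True
    chain[0]["data"] = "Alice pays Bob 500"  # tamper with an old record
    print(verify(chain))                     # False: the tampering is evident

Real systems such as Bitcoin add distributed consensus (for example, proof of work) on top of this structure; the hash-linking alone is what makes past records tamper-evident.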

CASL, Canada’s anti-spam legislation, has been under review. It is a horrible law whose cost/benefit ratio is way off. Most small businesses simply don’t have the resources to comply. And no matter how hard they try, larger businesses have a difficult time complying with all the technical and record-keeping requirements. To me, CASL is like using a sledgehammer to kill a fly in a china shop: you may or may not kill the fly, but the collateral damage simply isn’t worth it. The House of Commons Standing Committee on Industry, Science and Technology recently presented its report entitled Canada’s Anti-Spam Legislation: Clarifications are in Order. The report recommends changes, but I fear the changes we end up with won’t go far enough.

Mandatory breach notification under PIPEDA (the federal privacy legislation that governs in most provinces) should come into effect sometime in 2018. It will require notice to the Privacy Commissioner and/or potential victims when there is a serious privacy breach. It will also require organizations to keep records of all privacy breaches, even those that fall below the act’s reporting thresholds.

Security and privacy breaches will continue to be a problem. Sometimes they result from determined attacks, but sometimes they are caused by stupid decisions or errors. Password authentication can reduce the risks if done right, but it is a very difficult thing to do right. Another solution is needed – might blockchain come to the rescue here?
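To give a sense of what “done right” involves, here is a minimal sketch of password storage using Python’s standard library: a unique random salt per user and a deliberately slow hash (scrypt), compared in constant time. The parameter values are illustrative, not a recommendation.

    import hashlib
    import hmac
    import os

    def hash_password(password):
        """Return (salt, digest); scrypt is deliberately slow to resist brute force."""
        salt = os.urandom(16)  # unique random salt per user
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def check_password(password, salt, digest):
        """Re-derive the digest and compare in constant time to avoid timing leaks."""
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    print(check_password("correct horse battery staple", salt, digest))  # True
    print(check_password("letmein", salt, digest))                       # False

Even this sketch omits rate limiting, credential-stuffing defences, and breach monitoring – which is exactly the point about how hard passwords are to get right.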

We will continue to hear about security issues around the internet of things, or IoT. IoT devices can be a gateway to mayhem. They include such disparate things as thermostats, light switches, home appliances, door locks, and baby monitors. The problem is that far too often IoT designers don’t build security in from the start. That makes it easy for malfeasants to use these devices to break into whatever networks they are connected to.

Artificial Intelligence is now employed in many things we use – ranging from Google Translate to semi-autonomous cars. Voice-controlled screen and non-screen interactions – which use AI – are on the rise. In the short term, AI will continue to creep in behind the scenes of the things we interact with regularly. In the long term, it will have disruptive effects for many, including the legal profession.

Bitcoin and other cryptocurrencies have moved beyond the geek phase and are getting mainstream attention. Cryptocurrencies will be ripe for fraud as more people dip their toes in. There has already been ICO (Initial Coin Offering) fraud, and “drive-by currency mining,” where software gets surreptitiously installed on PCs and phones to mine currency.

Another thing to keep an eye on is whether people’s “freaky line” will move. That’s the line that people refuse to cross because of privacy concerns about their information. Will, for example, the advantages of the automated home (which combines IoT and AI) lead people to adopt it in spite of privacy and security concerns?

Cross-posted to Slaw

Artificial Intelligence and the Legal Profession

Artificial Intelligence is going to have a disruptive effect on the legal profession.  The question is how soon, how much, and what areas of law come first.  This kind of disruptive change builds up slowly, but once it hits a tipping point, it happens quickly.

Futurist Richard Worzel wrote an article titled Three Things You Need to Know About Artificial Intelligence that is worth a read.  Here are some excerpts:

Every once in a while, something happens that tosses a huge rock into the pond of human affairs. Such rocks include things like the discovery of fire, the invention of the wheel, written language, movable type, the telegraph, computers, and the Internet. These kinds of massive disturbances produce pronounced, remarkable, unexpected changes, and radically alter human life.

Artificial Intelligence is just such a rock, and will produce exactly those kinds of disturbances. We’re not prepared for the tsunami that AI is going to throw at us.

But now AI is becoming a reality, and it is going to hit us far faster than we now expect. This will lead to an avalanche of effects that will reach into all aspects of our lives, society, the economy, business, and the job market. It will lead to perhaps the most dramatic technological revolution we have yet experienced – even greater than the advent of computers, smartphones, or the Internet.

The legal profession seems particularly susceptible to early encroachment by AI:

“At JPMorgan Chase & Co., a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours. The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of work each year by lawyers and loan officers.”

So, before COIN went online in June, lawyers and loan officers were spending 360,000 hours a year interpreting commercial loan agreements for JPMorgan Chase. Since then, that specific kind of work has largely vanished.

Cross-posted to Slaw

Big data privacy challenges

Big data and privacy was one of the topics discussed at the Canadian IT Law Association conference this week.  Some of the issues worth pondering include:

  • Privacy principles say to collect only the information you need, and to keep it only as long as needed.  Big data says to collect and retain as much as possible in case it proves useful.
  • Accuracy is a basic privacy principle – but with big data, accuracy is being replaced by probability.
  • A fundamental privacy notion is informed consent to the use of one’s personal information.  How can consent be informed, and how can individuals keep control, when no one knows what big data might later be used for or combined with?
  • Probability means that the inferences drawn will not always be accurate.  How do we deal with that when, as individuals, we are faced with erroneous inferences about us?
  • If results are based on information that is itself questionable, the results may be questionable.  (The old garbage in, garbage out concept.)  It has been proposed that for big data and AI we might want to add to Asimov’s three laws of robotics that an AI won’t discriminate, and that it will disclose its algorithm.
  • If AI reaches conclusions that lead to discriminatory results, will that be dealt with by privacy regulators, human rights regulators, or some combination of the two?
  • Should some of this be dealt with by ethical layers on top of privacy principles?  Perhaps no-go zones for things felt to be improper, such as capturing audio and video without notice, charging to remove or amend information, or re-identifying anonymized information (a sketch of how easy re-identification can be follows this list).
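On the re-identification point, the attack is often nothing more exotic than a database join on quasi-identifiers. Here is a toy sketch in Python, with entirely hypothetical data: an “anonymized” record set is linked back to a name using an outside source such as a public directory.

    # Entirely hypothetical data, for illustration only.
    anonymized = [  # names removed, but quasi-identifiers remain
        {"postal": "M5V 2T6", "birth": "1980-03-14", "sex": "F", "diagnosis": "asthma"},
    ]
    directory = [  # e.g., a public or commercially available listing
        {"name": "Jane Doe", "postal": "M5V 2T6", "birth": "1980-03-14", "sex": "F"},
    ]

    def key(r):
        """Quasi-identifiers: individually innocuous, jointly near-unique."""
        return (r["postal"], r["birth"], r["sex"])

    names_by_key = {key(r): r["name"] for r in directory}

    for record in anonymized:
        name = names_by_key.get(key(record))
        if name:
            print(name, "->", record["diagnosis"])  # Jane Doe -> asthma

Because combinations like postal code, birth date, and sex are close to unique for most individuals, stripping names alone is rarely enough – which is why re-identification is a candidate for a no-go zone.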

Cross-posted to Slaw