10 things to watch for at the intersection of Tech and Law in 2017

  1. CASL, Canada’s anti-spam legislation, has been with us since July 2014. It’s a terrible piece of legislation for many reasons. In July 2017 a private right of action becomes effective that will allow anyone who receives spam as defined by CASL to sue the sender. CASL sets out statutory damages, so the complainant does not have to prove any damages. Class actions will no doubt be launched. The sad part is that breaches of CASL are to a large extent breaches of the very technical requirements of the statute, rather than the sending of what most people would call spam. At some point in 2017 we may see a court decision that ponders CASL’s legality.
  2. PIPEDA, Canada’s general privacy law, has been amended to require mandatory notice to the privacy commissioner and/or potential victims when there is a serious privacy breach. This is on hold pending finalization of the regulations – and may be in effect before the end of 2017.
  3. Privacy in general will continue to be put under pressure by politicians and law enforcement officials who want to advance the surveillance state. The good news is that privacy advocates continue to push back. A UK court, for example, decided that some recent UK surveillance legislation went too far. The Snowden revelations have spurred most IT businesses to use more effective encryption. Unfortunately, I don’t think it is safe to predict that President Obama will pardon Snowden.
  4. Canada’s trademark registration process will undergo substantive change in 2018 – some good, some not so good. In 2017 the regulations and processes should be finalized, giving us more detail about how it will work in practice.
  5. We will hear a lot about security issues around the internet of things, or IoT. IoT devices can be a gateway to mayhem. IoT things include such disparate devices as thermostats, light switches, home appliances, door locks, and baby monitors. The problem is that far too often designers of IoT devices don’t design security into them. That makes it easy for malfeasants to use these devices to break into whatever networks they are connected to.
  6. Artificial Intelligence, or AI, will continue to creep in everywhere. AI is now employed in many things we use – ranging from Google Translate to semi-autonomous cars. Voice-controlled screen and non-screen interactions – which use AI – are on the rise.
  7. AI is starting to be used in tools that lawyers use, and in tools that will replace lawyers in some areas. In 2017, we will start to see some major upheavals in the practice of law, and in how people get their legal needs met. At some point every lawyer (and knowledge worker in general) will have a holy cow moment when they realize the impact of AI on their profession. AI will make inroads in things like legal research and contract generation. It will also foster the provision of legal services online by non-lawyers to a vast underserved market that won’t pay lawyers under the current business model. These services may not be quite as good as those provided by lawyers, but consumers will be happy to pay less for what they perceive as good enough. And the quality, breadth, and sophistication of these services will continue to improve as AI improves.
  8. Another AI issue we will hear about in 2017 is embedded bias and discrimination. AI does not make decisions from hard-coded rules; it learns from real-world data, including how humans make decisions and react to things. It thus tends to pick up whatever human bias and discrimination exists in that data (see the sketch after this list). That is useful if the purpose is to predict human reactions or outcomes, like an election. But it is a bad thing if the AI makes decisions that directly affect people, such as who to hire or promote, who might be criminal suspects, and who belongs on a no-fly list.
  9. The cloud has finally matured and will be adopted by more businesses in 2017. Most major international players now have data centres in Canada, which helps to raise the comfort level for Canadian businesses. Many CIOs now realize that putting everything in the cloud can make business continuity, scalability, mobility, upgrades, and security easier. Care must be taken to choose the right solutions and to implement them properly – but there are compelling reasons why the cloud can be better than doing it yourself.
  10. The youngest generation in the workforce is always online, connected, and communicating, and expects their workplace to fit their lifestyle and not the other way around. Firms that embrace that will get the best and the brightest of the rising stars. It used to be that business tech was ahead of consumer tech, but that trend has been reversing for some time. More workers will get frustrated when they can do more with their own devices and apps than their corporate ones. That can lead to business challenges in areas such as security – but these challenges around rogue tech in the workplace have been around for decades.
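To make the bias point in item 8 concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration – synthetic data, made-up feature names, and scikit-learn as the learning library – not something drawn from a real hiring system:

```python
# Minimal, hypothetical illustration: a model trained on historically biased
# decisions reproduces that bias. All data and names here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(0, 1, n)          # hypothetical "skill" score
group = rng.integers(0, 2, n)        # protected group flag: 0 or 1

# Historical "hired" labels: driven by skill, but past decision-makers
# penalized group 1 regardless of skill -- the embedded bias.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train on the biased history, with the group flag as an input feature.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two applicants with identical skill, different group membership.
print(model.predict_proba(np.array([[1.0, 0], [1.0, 1]]))[:, 1])
# The group-1 applicant gets a noticeably lower predicted "hire" probability
# even though skill is identical: the model has learned the old bias.
```

Dropping the group column does not fix this on its own, because other features often correlate with it – which is part of why the issue keeps landing in front of regulators.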

Cross-posted to Slaw

Even Santa needs to get it in writing


The 2016 Fashion Santa. Photo source: yorkdale.com

A Toronto mall and its former “Fashion Santa” are having a snowball fight over the character.  The mall hired a new Fashion Santa this year instead of the person who played the role before. The dispute is over who owns the character and name. They even have duelling trademark applications for Fashion Santa. 

In the end it comes down to the facts (including whether the individual is an employee or an independent contractor, and who developed the character) and the nature of any agreement that might exist.

While disputes over public characters make for good press, this kind of dispute is actually not that rare.  Disputes often occur between individuals and the entity that hires them over who has rights to intellectual property.

Typically the individual claims they created something before they were hired, or that they were really an independent contractor providing a service.  The business claims either that it created it on its own, that the employee merely had an idea which the business then developed, or that the employee developed it as part of their duties.  These issues can be difficult to sort out, as the facts are often fluid and subject to different points of view.

The best and easiest time to sort out ownership issues is at the beginning – and to put the result in writing.  But it may not be on the parties’ minds then.  Ownership and rights issues often become controversial only when something becomes successful and money gets involved – such as the publicity and success of Fashion Santa.

Cross-posted to Slaw


SCC renders practical privacy decision on mortgage information

The Supreme Court of Canada, in Royal Bank v Trang, made a privacy decision that will bring a sigh of relief to lenders and creditors.

A judgment creditor asked the sheriff to seize and sell a house to satisfy the judgment.  To do that, the sheriff needed to know how much was owed on the mortgage on the house.  The mortgage lender didn’t have express consent to provide the information, and said PIPEDA prevented it from giving it.  Lower courts agreed.

But the SCC took a more practical approach.  The issue was whether there was implied consent to release that personal information.  The SCC said there was.

The court interpreted implied consent from a broader perspective, looking at the entire situation, including the legitimate business interests of other creditors.  Financial information is considered sensitive personal information, and thus in general faces a higher threshold for implied consent.  But in this context, the court held that a debtor reasonably expects a mortgage lender to provide a discharge statement to another creditor seeking to enforce its rights against the property.

Cross-posted to Slaw

Big data privacy challenges

Big data and privacy was one of the topics discussed at the Canadian IT Law Association conference this week.  Some of the issues worth pondering include:

  • Privacy principles say to collect only what you need, and to keep it only as long as needed.  Big data says to collect and retain as much as possible in case it is useful.
  • Accuracy is a basic privacy principle – but with big data accuracy is being replaced by probability.
  • A fundamental privacy notion is informed consent for the use of one’s personal information.  How do you have informed consent and control for big data uses when you don’t know what it might be used for or combined with?
  • Probability means that the inferences drawn may not always be accurate.  How do we, as individuals, deal with erroneous inferences about us?
  • If inferences are based on information that may itself be questionable, the results may be questionable.  (The old garbage in, garbage out concept.)  It has been proposed that for big data and AI, we might want to add to Asimov’s three laws of robotics that an AI won’t discriminate, and that it will disclose its algorithm.
  • If AI reaches conclusions that lead to discriminatory results, is that going to be dealt with by privacy regulators, or human rights regulators, or some combination?
  • Should some of this be dealt with by ethical layers on top of privacy principles? Perhaps no go zones for things felt to be improper, such as capturing audio and video without notice, charging to remove or amend information, or re-identifying anonymized information.

Cross-posted to Slaw

Should lawyers learn to code?

There have been many articles written suggesting that lawyers should learn how to code software.  This Wolfram Alpha article is a good one, although many of the others are far more adamant that every lawyer needs to learn how to code.  The rationale is that software will have an increasing effect on how lawyers practice, and on who will be competing with us to provide our services, so we should learn to code.

So should we learn how to code?  For most lawyers, probably not.

I knew how to code before law school, and for me it has been very useful.  Since my practice is largely around IT issues, it has helped me understand those issues and discuss them with clients.  It has also influenced my drafting style for both contract drafting and the way I communicate with clients.

But the thought that learning how to code will give us a leg up against competitors who are developing or adopting intelligent solutions to replace our services, or will help us develop our own systems to compete or make us more efficient, is flawed.  The systems that are going to have the biggest impact are based on artificial intelligence.  That is very sophisticated, cutting edge stuff, and learning how to code is not going to help with that.  It is something that we need to leave to the experts, or hire experts to do.

Lawyers interested in this can find resources discussing artificial intelligence and where it is headed (such as the artificial lawyer site and twitter feed that posted the Wolfram Alpha article).  Looking at where this is headed, and how it might affect the practice of law, would be more productive than learning how to code.

Cross-posted to Slaw

Cloud computing: It’s All Good – or Mostly Good

A ZDNet article entitled Cloud computing: Four reasons why companies are choosing public over private or hybrid clouds makes a case for the value of the public cloud.

The reasons:

  • Innovation comes as standard with the public cloud
  • Flexibility provides a business advantage
  • External providers are the experts in secure provision
  • CIOs can direct more attention to business change

This is all good – or mostly good.

The caveat is that the use of the cloud can fail if a business adopts it without thinking it through from the perspectives of mission criticality, security, privacy, and continuity.  If a business runs mission critical systems in the cloud and those systems fail, the business could be out of business.

The IT Manager no longer has to consider day to day issues around keeping software and security up to date.  But they still have to consider higher level issues.

It is important to understand what the needs are for the situation at hand.  A system that is not mission critical, or does not contain sensitive information, for example, would not require as much scrutiny as a system that runs an e-commerce site.

Issues to consider include:

  • how mission critical the system is
  • what the consequences are of a short term and long term outage
  • how confidential or personal the information is in the system
  • whether the information can be encrypted in transit and at rest (see the sketch after this list)
  • how robust the vendor’s continuity plan is
  • the need for the business to have its own continuity plan – such as a local copy of the data
  • how robust the vendor’s security is
  • whether the vendor has third party security validation to accepted standards
  • whether the vendor’s agreement backs these issues up with contractual terms and service levels with meaningful remedies
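On the encryption point in the list above, here is a minimal sketch, assuming Python and the third-party cryptography package (pip install cryptography). It shows client-side encryption before data goes to the provider, so the data is protected at rest even if the provider's storage is exposed; protection in transit is normally handled separately by TLS on the connection to the provider:

```python
# Minimal sketch of client-side encryption before uploading data to a cloud
# provider. Assumes the third-party "cryptography" package is installed.
from cryptography.fernet import Fernet

# In practice the key would live in a key vault, not beside the data --
# key management is the hard part and is out of scope for this sketch.
encryption_key = Fernet.generate_key()
fernet = Fernet(encryption_key)

record = b"customer: Jane Doe, card ending 1234"   # hypothetical sensitive data
ciphertext = fernet.encrypt(record)                 # this is what gets uploaded

# Later, after fetching the ciphertext back from the cloud:
assert fernet.decrypt(ciphertext) == record
```

If the key is stored with the same provider that holds the data, much of the benefit is lost – so where the key lives belongs on the checklist too.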

Cross-posted to Slaw

CASL still confusing

CASL, the Canadian anti-spam legislation, came into force on July 1, 2014. July 1, 2017 will be an important date for CASL, as a private right of action will become available. Anyone (and class actions are likely) will be able to sue CASL violators. Statutory damages mean that it won’t be necessary to prove actual damages.

CASL is a complex, illogical statute. Many businesses don’t comply because they don’t think the emails they send could possibly be considered spam. After all, spam is about illicit drug, diet, and deal scams, right? Not according to CASL.

Nor do they understand they must keep detailed records to prove they have implied or express consent for each person they send an email to. Or they may be rolling the dice that they will be a low priority for CRTC enforcement. (That approach risks personal liability for directors and officers.)

Once the private right of action kicks in, the enforcement landscape changes. If a business has not yet come to grips with CASL, the spectre of private suits for violations may offer an incentive to comply.

In the long term, the private right of action could provide a couple of silver linings.

Getting CASL in front of the courts may provide some badly needed guidance on how to interpret and apply it in practice. So far, the handful of cases the CRTC has made public have not provided enough detail to help with that.

There is some thought that CASL could be struck down on constitutional grounds. Any business sued under the private right of action should include that in its defence.

The possibility of CASL being struck down should not, however, be a reason not to comply with CASL. It could take years before an action gets far enough to see that result. And that result is by no means assured.

Cross-posted to Slaw

CRTC advisory on CASL consent record keeping

The CRTC recently issued a media advisory entitled Enforcement Advisory – Notice for businesses and individuals on how to keep records of consent.  It doesn’t add anything new – but it reinforces what the CRTC is looking for.  This is important because CASL requires a business to prove that it has consent to send a CEM (Commercial Electronic Message).  CASL has a complex regime of express and implied consent possibilities.

The advisory states: “Commission staff has observed that some businesses and individuals are unable to prove they have obtained consent before sending CEMs. The purpose of this Enforcement Advisory is to remind those involved, including those who send CEMs, of the requirements under CASL pertaining to record keeping.”

The problem in practice is that keeping those records can be a herculean task.  I’m concerned that the difficulty of getting this right will make many businesses fodder for CASL breach class action lawsuits when that right becomes available in 2017.
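For illustration only, here is a minimal sketch in Python of what a per-recipient consent record might capture. The fields are my own assumptions, not a checklist taken from CASL or the CRTC advisory, and nothing here is legal advice:

```python
# Illustrative sketch of a per-recipient consent record. Field names are
# assumptions for illustration, not requirements quoted from CASL or the CRTC.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    email: str                              # address the CEM will be sent to
    consent_type: str                       # "express" or "implied"
    obtained_on: datetime                   # when consent was obtained
    source: str                             # e.g. web form URL, purchase, business card
    evidence: str                           # pointer to stored proof (form submission, invoice id)
    expires_on: Optional[datetime] = None   # implied consent generally lapses; express does not

def has_unexpired_consent(record: ConsentRecord, when: datetime) -> bool:
    """Rough check: is there consent on file that has not lapsed?"""
    return record.expires_on is None or when < record.expires_on

record = ConsentRecord(
    email="jane@example.com",
    consent_type="implied",
    obtained_on=datetime(2015, 9, 1),
    source="purchase #1042",
    evidence="crm://orders/1042",            # hypothetical CRM reference
    expires_on=datetime(2017, 9, 1),
)
print(has_unexpired_consent(record, datetime(2016, 11, 1)))   # True
```

Whatever form the records take, the thrust of the advisory is that a sender should be able to produce this kind of evidence for every address it sends CEMs to.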

My personal view continues to be that the prime effect of CASL is to add a huge compliance burden to legitimate businesses.   It may give some tools to attack actual spam, but its approach is fundamentally flawed, and the cost/benefit is way out of whack.

Cross-posted to Slaw

Privacy by Design is Crucial to Avoid IoT Disasters


If anyone doubts that Privacy by Design is a fundamentally important principle, consider these two recent articles.

This Wired article describes a hack being detailed at the upcoming Defcon conference that can easily read keystrokes from, and inject keystrokes into, wireless keyboards that are not Bluetooth.  So you might want to consider replacing any non-Bluetooth wireless keyboards you have.

Security expert Bruce Schneier wrote this article, entitled The Internet of Things Will Turn Large-Scale Hacks into Real World Disasters, which explains the IoT risks. The fundamental problem is that not enough attention is being paid to security for IoT devices.  This leaves the door open to situations where a hacker can, for example, easily get into your thermostat and then use that as a connection point to your network.  Cory Doctorow of Boing Boing refers to this as a coming IoT security dumpster-fire.

Bruce describes it this way:

The Internet of Things is a result of everything turning into a computer. This gives us enormous power and flexibility, but it brings insecurities with it as well. As more things come under software control, they become vulnerable to all the attacks we’ve seen against computers. But because many of these things are both inexpensive and long-lasting, many of the patch and update systems that work with computers and smartphones won’t work. Right now, the only way to patch most home routers is to throw them away and buy new ones. And the security that comes from replacing your computer and phone every few years won’t work with your refrigerator and thermostat: on the average, you replace the former every 15 years, and the latter approximately never. A recent Princeton survey found 500,000 insecure devices on the internet. That number is about to explode.


Cross-posted to Slaw

Rio Olympics Social Media guidelines

It seems that dubbing major sporting events the “largest social media event ever” is even trendier than the social networking platforms themselves, and Rio 2016 is no exception. All hype aside, the Rio Olympics haven’t reinvented the wheel, and seem to impose similar restrictions as their predecessors.

The IOC describes appropriate uses and prohibitions in their Social and Digital Media Guidelines. All accredited individuals (athletes, coaches, and officials) who are not accredited as media are allowed to “share their experience at the Games through internet or any other type of social and digital media, provided that it is done in a first-person, diary-type format”. Individuals posting must “conform to the Olympic values of excellence, respect and friendship” and “should be within the bounds of dignity and good taste”.

Those restrictions are similar to many corporate social media policies.  But it gets more restrictive: accredited persons may share on social and digital media only "still" images taken within the Olympic venues.  Audio or video taken in Olympic venues can’t be shared on social media without IOC consent.  There are also "no picture areas".

Restrictions exist for spectators pursuant to the Ticket Holder Policy (there are 19 pages of conditions attached to a spectator ticket) which says in part:

12.6.3 Ticket Holders may capture, record and/or transmit still images and/or data taken within venues including by sharing such still images and/or data on social media and the internet provided such capture, recording or transmission is made solely for personal, private, non-commercial and nonpromotional purposes.

12.6.4 Ticket Holders may capture, record and/or transmit audio or video taken from venues, solely for personal, private, non-commercial and non-promotional purposes, with the exclusion of licensing, broadcasting and/or publishing any such video and/or sound recordings including on social media and the internet.

Frankly, I don’t know what that last one means – it seems to give permission and take it away at the same time.

Many of the restrictions are well intentioned – for reasons such as athlete security and privacy. Much of it will be to satisfy mainstream media and sponsors that pay huge amounts of money for exclusive rights. But some of it seems unrealistic. It will be interesting to see how aggressively they will be enforced.

I wonder what the IOC will think about athletes and spectators playing Pokemon Go at Olympic venues?

Cross-posted to Slaw.