Are you ready for PIPEDA’s privacy breach recording obligation?

In a recent blog post I talked about the new privacy breach notification requirements coming under PIPEDA this November 1. I said that perhaps the most challenging aspect is a requirement to maintain a “record of every breach of security safeguards involving personal information under its control.”

Why is that so challenging?

Many large companies already have this kind of procedure in place, but most businesses do not. Maintaining a record sounds easy, but it is not so simple when you think it through. First, the business must create a procedure and educate its staff to recognize breaches and report them to its privacy officer, even when they are not significant. The business can no longer rely on a breach coming to light only because it is serious and obvious, or because someone complains.

Then, for each breach, the privacy officer must go through the analysis required under PIPEDA to determine whether there is a “real risk of significant harm” that triggers a reporting requirement. The rationale for that decision must be recorded.
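PIPEDA does not prescribe any particular format for the breach record, so the following is purely an illustrative sketch of the kind of fields a record-keeping system might capture; the class and field names here are my own invention, not anything the regulations require:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical breach-record sketch. PIPEDA requires only that every
# breach of security safeguards be recorded, with no set format.
@dataclass
class BreachRecord:
    discovered: date
    description: str
    info_involved: str
    real_risk_of_significant_harm: bool  # the privacy officer's conclusion
    rationale: str                       # the reasoning behind that conclusion

    def requires_report(self) -> bool:
        # A "real risk of significant harm" is what triggers reporting
        # to the Privacy Commissioner and affected individuals.
        return self.real_risk_of_significant_harm

# Every breach goes into the log, even trivial ones.
breach_log: list[BreachRecord] = []
breach_log.append(BreachRecord(
    discovered=date(2018, 11, 5),
    description="Misdirected email containing a customer address",
    info_involved="name, mailing address",
    real_risk_of_significant_harm=False,
    rationale="Single recipient, deleted promptly, low sensitivity",
))
```

The point of the sketch is that even a breach that falls below the reporting threshold still generates a record, including the rationale for the decision not to report.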

Why does it matter?

The Privacy Commissioner has the right to inspect any business’s breach record at any time. If a business does not report a breach when it is supposed to, or does not keep a breach record, it can be subject to a fine of up to $100,000.

What you need to do about it

Before November 1, every business subject to PIPEDA should put a breach recording procedure in place, and educate its staff on what a breach is and how to report it to the privacy officer.

Cross-posted to Slaw

Home builder liability for IoT?

I was on a “Smart panel” discussing smart technology yesterday at the LSTAR Economic / Smart Technology Summit. Our panel had a good discussion around the benefits and privacy aspects of smart tech and the internet of things.

The context was in part the inclusion of smart tech in new homes. I made a brief comment about the possibility of liability for home builders, and thought it might be worth exploring that in more detail. IoT devices are notorious for their potential to be hacked. So much so that I’ve referred to them before as a gateway to mayhem. For various reasons, IoT devices (such as security cameras, doorbells, and water leak sensors) are often not properly secured. Once a hacker gets through one of these devices, they are inside the network and can do many nefarious things (such as stealing information, installing ransomware, and using the system to mine cryptocurrency).

How could a builder be liable if a breach happened on a house/condo/apartment they built?

Privacy torts are an emerging area, and the possible result is not certain. But it would be plausible for a class action to name the builder amongst the defendants. The cost to defend can be significant even if there is ultimately no liability.

How can builders reduce the risk?

Builder sale agreements include limitation clauses that limit their liability. But those clauses may not be drafted broadly enough to cover this kind of exposure. After all, they were drafted by real estate lawyers with physical building materials and equipment in mind, not this kind of risk. Builders should have their counsel review their limitation clauses, and revise them if needed, to try to limit their liability for this risk.

New home warranty plans also tend to apply to physical items. Perhaps those plans should consider how this risk should be addressed.

Cyber risk insurance is also something to be considered.

On the practical front, builders should choose devices wisely. They should educate their buyers on the security and privacy issues around whatever devices and services are included. Either set up the devices properly, or instruct or help the buyers do it. That will both reduce the chances of a problem happening, and if a problem does occur, will reduce the chances that buyers will blame the builder.

PIPEDA privacy breach notification coming Nov 1

Effective Nov 1, 2018, businesses that have a privacy breach must give notice of the breach under PIPEDA – the privacy legislation affecting the private sector in most Canadian provinces. The final regulations containing the details are about to be published.

Here are the highlights.

When do I have to report?

If there is a privacy breach that “creates a real risk of significant harm to an individual”. That includes bodily harm, humiliation, damage to reputation, financial loss, and identity theft. The legislation provides risk factors for deciding whether that threshold is met. The report must be made “as soon as feasible after the organization determines that the breach has occurred.”

What do I have to report?

Circumstances of the breach, when it happened, what information was breached, steps taken to reduce the risk of harm, steps individuals can take to reduce risk, contact information.

Who do I have to report to?

The Privacy Commissioner, the individuals, and third parties that “may be able to reduce the risk of harm.” That third party requirement will require some pondering.

But wait, there’s more

Perhaps the most challenging aspect is a requirement to maintain a “record of every breach of security safeguards involving personal information under its control.” That must be shown to the Privacy Commissioner on request. The challenge is that there is no threshold, and every breach, even trivial ones, must be recorded.

What are the penalties?

Failure to report when required, or failure to keep the breach records, can result in a penalty of up to $100,000.

What do I need to do now?

Businesses should review their privacy policies and processes and amend as needed. Record keeping systems must be put in place for recording all breaches. A breach reporting and incident response process should be put in place.

 

Cross-posted to Slaw

New Stuff & Old Laws

A common issue for new technology is the application of existing laws that were created before the new tech was contemplated. Examples include fintech (financial applications), fitness and health applications, and ridesharing services (such as Uber).

What is the issue?

Some activities and services are highly regulated. Financial services and the taxi industry are good examples. New entrants create innovative applications and services that compete with incumbents, but may or may not be regulated the same.

In some areas the entity may be regulated rather than the activity (often the case in fintech).

Laws sometimes prescribe a specific solution, rather than a desired result. Regulations around car headlights, for example, tend to specify how they must be built rather than how they must perform.

New tech may start out unregulated, but may, as it develops, creep into areas that are regulated. Fitness and health devices, for example, can easily become subject to the medical device regulations (under the Food and Drugs Act) that impose certain requirements or licensing.

Why does it matter?

These issues for new tech have always been around – but the pace of change and innovation is getting much faster. Tech like cheap sensors, cheap connectivity, the increased power of smartphones, autonomous cars, blockchain, and artificial intelligence can be disruptive. Rapid, disruptive change makes it more difficult to get regulation right.

If you are the innovator, you may have legal issues to address that are not immediately apparent. The playing field may not be even, and can unfairly favour new players or incumbents. Regulation can also stifle or slow innovation – better headlight technology, for example.

What to do about it?

Anyone developing new technology needs to think about where it fits within existing laws. Then either comply, make it different so it doesn’t need to comply, work with an incumbent, work with the regulators, or perhaps take some calculated risk.

Lawmakers face some tough issues. They should focus on evidence-based regulation rather than sticking with partisan or historical perspectives. Do existing regulations have the wrong focus and unintentionally distort the playing field? Does the new tech solve a problem in a different way than the regulations contemplate? Do existing regulations make sense in the modern context? Do they properly address a real issue? Do existing or proposed regulations help, or do they cause more problems than they solve?

 

Cross-posted to Slaw

Apply for trademarks now to save money?

Canada has made significant changes to the Trademarks Act, mostly to make it more consistent with international practice. Anyone considering applying for a trademark might want to file before the new rules come into force.

What is the issue?

In early 2019 the trademark application process will undergo significant changes. The changes include:

  • Not having to state first use dates or declare actual use
  • Registration term reduced from the current 15 years to 10
  • Adoption of the class system and a class based fee structure
  • Proof of distinctiveness needed for some types of marks

Why does it matter?

CIPO fees are now $450 per application no matter how many classes of goods and services are listed. The new fees will be $330 for the first class, plus $100 for each additional class. So anything more than 2 classes will cost more. Depending on the nature of the goods and services, and whether promotional items are included (eg if you sell hats or t-shirts with your brand on them), it is not unusual to have several classes. Add to that the effective increase caused by getting only 10 years of protection vs 15. It is not yet clear how the proof of distinctiveness will work in practice, other than that it will take more time and effort when required.
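The break-even point is simple arithmetic. This quick sketch uses the fee figures above (the function name is my own, for illustration only):

```python
OLD_FLAT_FEE = 450       # current CIPO fee, any number of classes
NEW_FIRST_CLASS = 330    # new fee for the first class
NEW_EXTRA_CLASS = 100    # new fee for each additional class

def new_fee(num_classes: int) -> int:
    """Application fee under the new class-based structure."""
    return NEW_FIRST_CLASS + NEW_EXTRA_CLASS * (num_classes - 1)

for n in range(1, 5):
    cheaper = "cheaper" if new_fee(n) < OLD_FLAT_FEE else "more expensive"
    print(f"{n} class(es): ${new_fee(n)} vs flat ${OLD_FLAT_FEE} ({cheaper})")
```

One or two classes come out cheaper under the new structure ($330 and $430), but three classes ($530) already exceed the current flat $450, and each further class adds another $100.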

What to do about it

Businesses should ponder their trademark situation over the coming months, and whether they might want to file for new marks or expanded uses at some point. If so, they might save some money by applying before the new rules take effect.

Cross-posted to Slaw

Should Artificial Intelligence be regulated?


Elon Musk spoke at SXSW and emphasized his concerns about artificial intelligence and why it needs to be regulated.

What is the issue?

Elon says AI is more dangerous than nuclear warheads.

Right now, AI is created for specific tasks, such as driving a car, playing a game, responding to our voice commands, or providing personal recommendations. AI today is nowhere near as capable in general as even a moth brain, and most people think general artificial intelligence is a long way off. But Elon says “I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me, … It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.” He is concerned that the advent of digital super intelligence is much closer than we think.

Why does it matter?

Because Skynet. General broad AI that is no longer task specific could be more prone to abuse by humans. And the intelligence of AI could outpace the ability of humans to manage it. So it is important to develop AI safely to ensure it doesn’t get out of control.

What’s next?

The ethics of AI have been debated for years, and that debate will continue. Tech companies are working with various disciplines (scientific, futurists, ethics specialists) to try to come to grips with ethical standards for AI. The key question – if Elon is right – is whether this needs to be escalated to lawmakers, and if so, how soon? Should this be dealt with on a worldwide treaty basis – such as banning the use of AI for weapons?

Cross-posted to Slaw

What happens to cryptocurrencies when you die?

Blockchain removes intermediaries from transactions. For the most part that’s a good thing – but it can also have unintended consequences. For example, cryptocurrencies like Bitcoin flow between people much like paper money would be handed over. No financial institution is involved in the transaction. The same is true for other assets being tracked by blockchain technology, such as corporate shares.

When someone dies or becomes incapacitated, trustees or attorneys typically get control of that person’s assets through the intermediary. For example, if a trustee knows that the person has a bank account at bank X, they merely contact the bank, prove they have authority, and the bank co-operates to transfer the assets.

If a trustee wants access to social media and other online accounts such as email, they need the person’s logon credentials. Some social media platforms have procedures in place to allow trustee access through authentication processes designed for that situation, much as with traditional assets.

But what happens if a person dies or becomes incapacitated owning Bitcoin or other assets tracked by blockchain? Some people use third party wallet and exchange services to track their cryptocurrency, which may offer a solution for a trustee. But not everyone uses those, and there may be no intermediary to contact. If the person used a pseudonym for their credentials, it would make it even more difficult to prove who owned the account.

There have been stories about people who have lost their bitcoin private keys and have been unable to access their own money. A trustee would be in the same position if they don’t have the person’s private key. Potentially huge amounts of money or assets could be unrecoverable.

Blockchain and cryptocurrency holders might want to store their logon credentials and private keys in a safe place and let a family member know where they are. Or they might keep these credentials and private keys in a password manager, store the access details somewhere, and let a family member know where to find them.

Does blockchain itself perhaps provide a solution to this? Smart contracts execute automatically based on the happening of an event, such as a market price threshold or a temperature reading. Is there a smart contract solution that transfers access to a person’s cryptocurrency or other blockchain tracked assets based on proof of a trustee or attorney’s authority to act? What would that proof look like? It is not, after all, a simple objective event such as a market price threshold.

Cross-posted to Slaw

Data Privacy Day

January 28 is Data Privacy Day.

Privacy is becoming more challenging with new tech such as artificial intelligence, quantum computing, blockchain, autonomous cars, the internet of things, drones, and government agencies recording massive amounts of data in the name of security.  Basic privacy concepts such as consent as we now know it may no longer be adequate to deal with some of these challenges.  And the sheer number of ways our information gets used makes it almost impossible to truly understand, let alone trust, what others are doing with our information.

The IAPP is hosting Privacy After Hours events in a number of cities around the world on Thursday Jan 25 to recognize Data Privacy Day.

Cross-posted to Slaw

Privacy event in London

The IAPP (International Association of Privacy Professionals) is providing “Privacy After Hours” events on Thursday January 25th in recognition of Data Privacy Day.

Privacy professionals in London Ontario are welcome to attend the event being held at McGinnis Landing restaurant.  Harrison Pensa is pleased to provide the appetizers for the event.

You can sign up for the event on the IAPP website.

8 Legal/Tech Issues for 2018

Blockchain (the technology behind Bitcoin) is in a hype phase. It has been touted as the solution to many issues around trust. To some extent blockchain is still a solution in search of a problem. Blockchain will, however, become an important technology, and perhaps during 2018 we will begin to see some practical uses.

CASL, Canada’s anti-spam legislation, has been under review. It is a horrible law where the cost / benefit ratio is way off. Most small businesses simply don’t have the resources to comply. And no matter how hard they try, larger businesses have a difficult time complying with all the technical and record keeping requirements. To me CASL is like using a sledgehammer to kill a fly in a china shop. You may or may not kill the fly, but the collateral damage simply isn’t worth it. The House of Commons Standing Committee on Industry, Science and Technology recently presented its report entitled Canada’s Anti-Spam Legislation: Clarifications are in Order. The report recommends changes, but I fear the changes we will end up with won’t go far enough.

Mandatory breach notification under PIPEDA (the federal privacy legislation that governs in most provinces) should be in effect sometime in 2018. It will require mandatory notice to the privacy commissioner and/or possible victims when there is a serious privacy breach. It will also require entities to keep records of all privacy breaches, even if they are not reportable under the act’s thresholds.

Security and privacy breaches will continue to be a problem. Sometimes these occur because of intensive attacks, but sometimes they are caused by stupid decisions or errors. Authentication by passwords can work to reduce the risks if done right, but it is a very difficult thing to do right. Another solution is needed – might blockchain come to the rescue here?

We will continue to hear about security issues around the internet of things, or IoT. IoT devices can be a gateway to mayhem. They include such disparate devices as thermostats, light switches, home appliances, door locks, and baby monitors. The problem is that far too often IoT device designers don’t design them with security in mind. That makes it easy for malfeasants to use these devices to break into whatever networks they are connected to.

Artificial Intelligence is now employed in many things we use – ranging from Google Translate to semi-autonomous cars. Voice controlled screen and non-screen interactions – which use AI – are on the rise. In the short term, AI will continue to creep in behind the scenes with things we interact with regularly. In the long term, it will have disruptive effects for many, including the legal profession.

Bitcoin and other cryptocurrencies have moved from the geek phase to get more mainstream attention. Cryptocurrencies will be ripe for fraud as more people dip their toes in. There has already been ICO (Initial Coin Offering) fraud, as well as “drive-by currency mining,” where software is surreptitiously installed on PCs and phones to mine currency.

Another thing to keep an eye on is whether people’s “freaky line” will move. That’s the line that people refuse to cross because of privacy concerns about their information. Will, for example, the advantages of the automated home (which combines IoT and AI) lead people to adopt it in spite of privacy and security concerns?

Cross-posted to Slaw