Are you ready for PIPEDA’s privacy breach recording obligation?

In a recent blog post I talked about the new privacy breach notification requirements coming under PIPEDA this November 1. I said that perhaps the most challenging aspect is a requirement to maintain a “record of every breach of security safeguards involving personal information under its control.”

Why is that so challenging?

Many large companies already have this kind of procedure in place. But most businesses do not. Maintaining a record sounds easy, but it is not so simple when you think it through. First, the business must create a procedure and educate its staff to recognize breaches and report them to its privacy officer, even if they are not significant. No longer can the business rely on staff recognizing a breach because it is serious and obvious, or because someone complains.

Then, for each breach, the privacy officer must go through the analysis required under PIPEDA to determine whether there is a “real risk of significant harm” that triggers a reporting requirement. The rationale for that decision must be recorded.
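As an illustration only (PIPEDA does not prescribe a format, and the field names below are assumptions about what a privacy officer might track), a breach record might capture something like this:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BreachRecord:
    """Illustrative breach-record entry; the fields are assumptions, not a prescribed format."""
    date_discovered: date
    description: str                      # what happened and which safeguards failed
    personal_info_involved: str           # categories of personal information affected
    real_risk_of_significant_harm: bool   # the conclusion of the PIPEDA analysis
    rationale: str                        # why the privacy officer reached that conclusion
    reported_to_commissioner: bool        # whether it was reported / individuals notified

record = BreachRecord(
    date_discovered=date(2018, 11, 5),
    description="Unencrypted laptop with a customer list left in a taxi",
    personal_info_involved="names, email addresses",
    real_risk_of_significant_harm=False,
    rationale="No sensitive data; laptop recovered within hours, disk not accessed",
    reported_to_commissioner=False,
)
```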

Why does it matter?

The Privacy Commissioner has the right to inspect any business’s breach record at any time. If a business does not report a breach when it is supposed to, or does not keep a breach record, it can be subject to a fine of up to $100,000.

What you need to do about it.

Before November 1, every business subject to PIPEDA should put a breach recording procedure in place and educate its staff on what a breach is and how to report it to the privacy officer.

Cross-posted to Slaw

New Stuff & Old Laws

A common issue for new technology is the application of existing laws that were created before the new tech was contemplated. Examples include fintech (financial applications), fitness and health applications, and ridesharing services (such as Uber).

What is the issue?

Some activities and services are highly regulated. Financial services and the taxi industry are good examples. New entrants create innovative applications and services that compete with incumbents, but may or may not be regulated in the same way.

In some areas the entity may be regulated rather than the activity (often the case in fintech).

Laws sometimes prescribe a specific solution, rather than a desired result. Regulations around car headlights, for example, tend to specify how they must be built rather than how they must perform.

New tech may start out unregulated, but as it develops it may creep into areas that are regulated. Fitness and health devices can easily become subject to medical device regulations (under the Food and Drugs Act) that impose certain requirements or licensing.

Why does it matter?

These issues for new tech have always been around – but the pace of change and innovation is getting much faster. Tech like cheap sensors, cheap connectivity, the increased power of smartphones, autonomous cars, blockchain, and artificial intelligence can be disruptive. Rapid, disruptive change makes it more difficult to get regulation right.

If you are the innovator, you may have legal issues to address that are not immediately apparent. The playing field may not be even, and can unfairly favour new players or incumbents. An uneven field can also stifle or slow innovation, such as better headlight technology.

What to do about it?

Anyone developing new technology needs to think about where it fits within existing laws. Then either comply, design it so that it falls outside the rules, work with an incumbent, work with the regulators, or perhaps take some calculated risk.

Lawmakers face some tough issues. They should focus on evidence-based regulation rather than sticking with partisan or historical perspectives. Do existing regulations have the wrong focus and unintentionally distort the playing field? Does the new tech solve a problem in a different way than the regulations contemplate? Do existing regulations make sense in the modern context? Do they properly address a real issue? Do existing or proposed regulations help, or do they cause more problems than they solve?

Cross-posted to Slaw

Apply for trademarks now to save money?

Canada has made significant changes to the Trademarks Act, mostly to make it more consistent with international practice. Anyone considering applying for a trademark might want to file before the new rules come into force.

What is the issue?

In early 2019 the trademark application process will undergo significant changes. The changes include:

  • Not having to state first use dates or declare actual use
  • Registration term reduced from the current 15 years to 10
  • Adoption of the class system and a class based fee structure
  • Proof of distinctiveness needed for some types of marks

Why does it matter?

CIPO fees are currently $450 per application no matter how many classes of goods and services are listed. The new fees will be $330 for the first class, plus $100 for each additional class, so anything more than 2 classes will cost more. Depending on the nature of the goods and services, and whether promotional items are included (e.g. if you sell hats or t-shirts that carry your brand), it is not unusual to have several classes. Add to that the effective increase caused by getting only 10 years of protection instead of 15. It is not yet clear how the proof of distinctiveness will work in practice, other than that it will take more time and effort when required.
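To make the break-even point concrete, here is a quick back-of-the-envelope comparison using the fee figures above. It is only a sketch; it ignores agent fees, renewals, and the effect of the shorter registration term.

```python
def current_fee(num_classes: int) -> int:
    # Current CIPO application fee: a flat $450, regardless of the number of classes.
    return 450

def new_fee(num_classes: int) -> int:
    # New fee structure: $330 for the first class plus $100 for each additional class.
    return 330 + 100 * (num_classes - 1)

for n in range(1, 6):
    print(f"{n} class(es): current ${current_fee(n)}, new ${new_fee(n)}")
# 1 class:   current $450, new $330
# 2 classes: current $450, new $430
# 3 classes: current $450, new $530  <- filing before the change starts to save money here
```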

What to do about it

Over the coming months, businesses should consider their trademark situation and whether they might want to file for new marks or expanded uses at some point. If so, they might save some money by applying before the new rules take effect.

Cross-posted to Slaw

What happens to cryptocurrencies when you die?

Blockchain removes intermediaries from transactions. For the most part that’s a good thing – but it can also have unintended consequences. For example, cryptocurrencies like Bitcoin flow between people much like paper money would be handed over. No financial institution is involved in the transaction. The same is true for other assets being tracked by blockchain technology, such as corporate shares.

When someone dies or becomes incapacitated, trustees or attorneys typically get control of that person’s assets through an intermediary. For example, if a trustee knows that the person has a bank account at bank X, they merely contact the bank, prove they have authority, and the bank co-operates to transfer the assets.

If a trustee wants access to social media and other online accounts such as email, they need to have the person’s logon credentials. Some social media platforms have procedures in place that allow trustee access through authentication processes designed for that situation, much as with traditional assets.

But what happens if a person dies or becomes incapacitated owning Bitcoin or other assets tracked by blockchain? Some people use third party wallet and exchange services to track their cryptocurrency, which may offer a solution for a trustee. But not everyone uses those, and there may be no intermediary to contact. If the person used a pseudonym for their credentials, it would make it even more difficult to prove who owned the account.

There have been stories about people who have lost their bitcoin private keys and have been unable to access their own money. A trustee would be in the same position if they don’t have the person’s private key. Potentially huge amounts of money or assets could be unrecoverable.

Blockchain and cryptocurrency holders might want to store their logon credentials and private keys in a safe place and let a family member know where that is. Or they might keep those credentials and private keys in a password manager, store the password manager’s access details somewhere safe, and let a family member know where to find them.

Does blockchain itself perhaps provide a solution to this? Smart contracts execute automatically when an event happens, such as a market price threshold being crossed or a temperature being reached. Is there a smart contract solution that transfers access to a person’s cryptocurrency or other blockchain-tracked assets based on proof of a trustee or attorney’s authority to act? What would that proof look like? It is not, after all, a simple objective event such as a market price threshold.
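As a thought experiment, here is a plain-Python sketch of the logic such a contract might encode. Every name in it is hypothetical, the “oracle” that attests to a trustee’s authority is precisely the hard, unsolved part, and a real implementation would run on a smart contract platform rather than in Python.

```python
from dataclasses import dataclass

@dataclass
class KeyEscrowContract:
    """Toy model of a contract that releases an encrypted private key to a
    designated trustee once a trusted oracle attests to their authority.
    Purely illustrative; all names are hypothetical."""
    owner: str
    trustee: str
    oracle: str                      # party trusted to verify the grant of probate, etc.
    encrypted_key: bytes             # key encrypted so only the trustee can decrypt it
    authority_confirmed: bool = False

    def attest_authority(self, caller: str) -> None:
        # Only the designated oracle may confirm the trustee's legal authority.
        if caller != self.oracle:
            raise PermissionError("only the oracle can attest")
        self.authority_confirmed = True

    def release_key(self, caller: str) -> bytes:
        # The trustee can retrieve the key only after authority is attested.
        if caller != self.trustee or not self.authority_confirmed:
            raise PermissionError("key not releasable")
        return self.encrypted_key

# Usage sketch
contract = KeyEscrowContract(owner="alice", trustee="bob",
                             oracle="probate_verifier", encrypted_key=b"...")
contract.attest_authority("probate_verifier")
print(contract.release_key("bob"))
```

The sketch only makes the open question concrete: someone or something still has to decide, off-chain, that the trustee’s authority is genuine.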

Cross-posted to Slaw

Data Privacy Day

January 28 is Data Privacy Day.

Privacy is becoming more challenging with new tech such as artificial intelligence, quantum computing, blockchain, autonomous cars, the internet of things, drones, and government agencies recording massive amounts of data in the name of security.  Basic privacy concepts such as consent as we now know it may no longer be adequate to deal with some of these challenges.  And the sheer number of ways our information gets used makes it almost impossible to truly understand, let alone trust, what others are doing with our information.

The IAPP is hosting Privacy After Hours events in a number of cities around the world on Thursday, Jan 25 to recognize Data Privacy Day.

Cross-posted to Slaw

8 Legal/Tech Issues for 2018

Blockchain (the technology behind Bitcoin) is in a hype phase. It has been touted as the solution to many issues around trust. To some extent blockchain is still a solution in search of a problem. Blockchain will, however, become an important technology, and perhaps during 2018 we will begin to see some practical uses.

CASL, Canada’s anti-spam legislation, has been under review. It is a horrible law where the cost / benefit ratio is way off. Most small businesses simply don’t have the resources to comply. And no matter how hard they try, larger businesses have a difficult time complying with all the technical and record keeping requirements. To me CASL is like using a sledgehammer to kill a fly in a china shop. You may or may not kill the fly, but the collateral damage simply isn’t worth it. The House of Commons Standing Committee on Industry, Science and Technology recently presented its report entitled Canada’s Anti-Spam Legislation: Clarifications are in Order. The report recommends changes, but I fear the changes we will end up with won’t go far enough.

Mandatory breach notification under PIPEDA (the federal privacy legislation that governs in most provinces) should be in effect sometime in 2018. It will require notice to the Privacy Commissioner and/or possible victims when there is a serious privacy breach. It will also require entities to keep records of all privacy breaches, even if they are not reportable under the act’s thresholds.

Security and privacy breaches will continue to be a problem. Sometimes these occur because of intensive attacks, but sometimes they are caused by stupid decisions or errors. Authentication by passwords can work to reduce the risks if done right, but it is a very difficult thing to do right. Another solution is needed – might blockchain come to the rescue here?
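As one small illustration of why doing passwords right is hard, here is a minimal sketch of salted, stretched password hashing using Python’s standard library. It is only one piece of the puzzle; a real system also needs rate limiting, breach monitoring, secure reset flows, and more.

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    # Store only a salted, stretched hash of the password, never the password itself.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess again", salt, digest))                   # False
```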

We will continue to hear about security issues around the internet of things, or IOT. IOT devices can be a gateway to mayhem. IOT things include such disparate devices as thermostats, light switches, home appliances, door locks, and baby monitors. The problem is that far too often IOT device designers don’t design them with security in mind. That makes it easy for malfeasants to use these devices to break into whatever networks they are connected to.

Artificial intelligence is now employed in many things we use, ranging from Google Translate to semi-autonomous cars. Voice-controlled screen and non-screen interactions, which use AI, are on the rise. In the short term, AI will continue to creep in behind the scenes in things we interact with regularly. In the long term, it will have disruptive effects for many, including the legal profession.

Bitcoin and other cryptocurrencies have moved beyond the geek phase to get more mainstream attention. Cryptocurrencies will be ripe for fraud as more people dip their toes in. There has already been ICO (Initial Coin Offering) fraud, and “drive-by currency mining”, where software gets surreptitiously installed on PCs and phones to mine currency.

Another thing to keep an eye on is whether people’s “freaky line” will move. That’s the line that people refuse to cross because of privacy concerns about their information. Will, for example, the advantages of the automated home (which combines IOT and AI) lead people to adopt it in spite of privacy and security concerns?

Cross-posted to Slaw

I was a Messenger spoof victim

A few days ago I returned to my office after a meeting to find emails and voicemails telling me that someone was sending Facebook Messenger messages pretending they were from me. The first message sent was an innocuous “Hello, how are you doing?” But if the recipient engaged, it quickly turned into how I got a $300,000 government grant to pay off my bills, and tried to convince the recipient to send an email to “the agent in charge” to see if they were eligible. I suspect that, if followed through, it would either ask for payment of a loan application fee, or ask for credit card or other personal details.

Fortunately, it didn’t take long for my followers to realize it was a scam and not me.

This government grant scam is a known scam approach. Typically one of two things has happened: either the malfeasant has hacked into my Facebook account, or they have taken info from my public Facebook presence and set up a spoof.

Some digging into my Facebook profile, history, and security settings showed it was more likely a spoof than a hack. I use strong passwords generated by a password manager for each account I have, so it is unlikely that my password was compromised, unless there was some weakness in an app I have allowed to access Facebook. (For that very reason I allow very few apps to connect with Facebook.)

But just in case, I changed my password, set up two-factor authentication, and enabled an email alert to notify me of questionable login attempts. I have those set up on other platforms, but had not on Facebook. I hadn’t bothered before because I have very little personal info on Facebook. The mistake I made that allowed the spoofer to send messages to my friend list was leaving my friend list open for everyone to see. Too late for this scam, but I changed that anyway.

I also posted a message on Facebook letting people know it was not me.

It is frustrating how difficult it is to report this to Facebook so they can stop (or at least make life difficult for) the spoofer. Facebook has lots of ways to report various things, but they are all set up for very specific situations, none of which fit mine. Recipients can report it (there is a “report spam or abuse” option on the gear icon beside the sender’s name), but I can’t. There used to be a basic way to report things that didn’t fit the methods provided, but that seems to be gone. And it’s not just Facebook that does this. The thread one of my friends sent includes a Gmail address for the “agent in charge”, but reporting that to Gmail to try to disable the address isn’t easy. Their spoof/scam reporting method works only if you have received an email from the address, as the email header is a required field.

So how do you tell when you get a fake message, and what do you do about it?

Typical scam/phishing warnings apply. The messages are often out of character for the sender, or they are grammatically strange, or a Gmail or similar generic email address is given rather than a corporate one. Another flag is if the message tries to get info or money. If in doubt, contact the sender in another way to find out. Facebook and other messaging platforms often have ways to report malicious communication attempts. The victim will appreciate it if you can take a minute to let them know and report it.

Cross-posted to Slaw

CRTC Compufinder decision lowers CASL spam penalty

The CRTC recently released 2 CASL decisions on Compufinder.  If this sounds familiar, it is because this is an appeal from an initial finding in 2015 that levied a $1.1 million penalty.

Compufinder took the position that CASL is unconstitutional.  Many legal experts have questioned the ability of the Federal Government to pass this legislation.  The CRTC decided that CASL is constitutional.  But this is not the last word. Inevitably this will be argued in court.  This decision is required reading for anyone who finds themselves in a position to challenge the act in the courts.  Ironically, the delay of the private right of action may have delayed getting the constitutionality issue to the appeal level.

In the substantive decision the penalty was reduced to $200,000.  This decision is required reading for anyone facing sanctions under CASL.

Topics covered include:

  • what the business to business exemption means (Compufinder failed to convince them that the exemption applied)
  • the conspicuously published implied consent, including who published it and message relevance
  • what is needed to show a diligence defence (it’s not easy)
  • factors in determining the size of the penalty

The decision shows that the CRTC will examine each CEM (commercial electronic message) sent in individual detail, and that the business has a high onus of proof to show that it has done everything necessary to comply with the act for each and every one of them.

IMHO most small businesses simply don’t have the resources to meet the requirements. And no matter how hard they try, larger businesses will have a difficult time meeting them. To me CASL is like using a sledgehammer to kill a fly in a china shop. You may or may not kill the fly, but the collateral damage simply isn’t worth it.

Hopefully changes will be made to CASL as a result of the current review of the statute.

Cross-posted to Slaw

Will quantum computing cause encryption’s Y2K?

At the Can-Tech (formerly known as IT.Can) conference this week, Mike Brown of Isara Corporation spoke about quantum computing and security. Within a few short years quantum computing will become commercially viable. Quantum computing works differently from the binary computing we have today, and it will be able to do things that even today’s supercomputers can’t.

For the most part that is a good thing.  The downside is that quantum computers will be able to break many current forms of encryption.  So it will be necessary to update current encryption models with something different.

That may not be a simple or quick exercise, given the layers and complexity of encryption.  His message was that we need to start planning for this now, and it may take an effort greater and more challenging than the one that fixed the Y2K problem.

For the record, Isara sells security solutions that are designed to be quantum-computer safe. For some validation that this really is a thing, take a look at the Wikipedia article on post-quantum cryptography.

Cross-posted to Slaw

Cars and the data they share

Anyone interested in cars and the data they will increasingly collect should read the article in the November Automobile magazine titled The Big Data Boom – How the race to monetize the connected car will drive change in the auto industry.

It talks about how much data might be generated (4,000 GB per day), how that sheer volume will be handled, and how it might be monetized, as well as the challenges of cybersecurity and privacy.

Auto makers are well aware of the privacy issues.  Challenges will include how to deal with privacy laws that vary dramatically around the world.  Will they default to the highest standard? Or will the data be valuable enough to make it worth their while to deal with information differently in different countries?

How will auto makers give drivers comfort that their information will be secure and won’t be misused?  How will they explain what info will be anonymized, and what will remain identified with the driver?

How many drivers will be reluctant to share driving info with insurers and others, either for privacy reasons or out of skepticism about what arbitrary decisions will be made about them based on that info?

For more about this topic, see this post I wrote a few months ago.  It is also on the agenda for the upcoming Canadian IT Law Association conference.

Cross-posted to Slaw