Privacy professionals in London, Ontario, are welcome to attend the event being held at the McGinnis Landing restaurant. Harrison Pensa is pleased to provide the appetizers for the event.
Blockchain (the technology behind Bitcoin) is in a hype phase. It has been touted as the solution to many issues around trust. To some extent blockchain is still a solution in search of a problem. Blockchain will, however, become an important technology, and perhaps during 2018 we will begin to see some practical uses.
CASL, Canada’s anti-spam legislation, has been under review. It is a horrible law whose cost/benefit ratio is way off. Most small businesses simply don’t have the resources to comply. And no matter how hard they try, larger businesses have a difficult time complying with all the technical and record-keeping requirements. To me, CASL is like using a sledgehammer to kill a fly in a china shop. You may or may not kill the fly, but the collateral damage simply isn’t worth it. The House of Commons Standing Committee on Industry, Science and Technology recently presented its report entitled Canada’s Anti-Spam Legislation: Clarifications Are in Order. The report recommends changes, but I fear the changes we will end up with won’t go far enough.
Mandatory breach notification under PIPEDA (the federal privacy legislation that governs in most provinces) should be in effect sometime in 2018. It will require mandatory notice to the privacy commissioner and/or possible victims when there is a serious privacy breach. It will also require entities to keep records of all privacy breaches, even if they are not reportable under the act’s thresholds.
Security and privacy breaches will continue to be a problem. Sometimes these occur because of intensive attacks, but sometimes they are caused by stupid decisions or errors. Authentication by passwords can work to reduce the risks if done right, but it is a very difficult thing to do right. Another solution is needed – might blockchain come to the rescue here?
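Doing passwords right on the server side means storing only a salted, deliberately slow hash, never the password itself. A minimal sketch using Python's standard library (the scrypt parameters here are illustrative assumptions, not a tuning recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a key from the password with a fresh random salt using scrypt."""
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    """Re-derive the key with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, key)
```

Even this simple version illustrates why "done right" is hard: get the salt, the work factor, or the constant-time comparison wrong and the scheme quietly weakens.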
We will continue to hear about security issues around the internet of things, or IoT. IoT devices can be a gateway to mayhem. They include such disparate things as thermostats, light switches, home appliances, door locks, and baby monitors. The problem is that far too often device designers don’t build security in from the start. That makes it easy for malfeasants to use these devices to break into whatever networks they are connected to.
Artificial Intelligence is now employed in many things we use – ranging from Google Translate to semi-autonomous cars. Voice-controlled screen and non-screen interactions – which use AI – are on the rise. In the short term, AI will continue to creep in behind the scenes in things we interact with regularly. In the long term, it will have disruptive effects for many, including the legal profession.
Bitcoin and other crypto-currencies have moved beyond the geek phase to get more mainstream attention. Crypto-currencies will be ripe for fraud as more people dip their toes in. There has already been ICO (Initial Coin Offering) fraud, as well as “drive-by currency mining”, where software gets surreptitiously installed on PCs and phones to mine currency.
Another thing to keep an eye on is whether people’s “freaky line” will move. That’s the line that people refuse to cross because of privacy concerns about their information. Will, for example, the advantages of the automated home (which combines IOT and AI) lead people to adopt it in spite of privacy and security concerns?
A few days ago I returned to my office after a meeting to find emails and voicemails telling me that someone was sending Facebook Messenger messages pretending to be from me. The first message sent was an innocuous “Hello, how are you doing?” But if the recipient engaged, it quickly turned into a claim that I had received a $300,000 government grant to pay off my bills, and tried to convince the recipient to email “the agent in charge” to see if they were eligible. I suspect that if followed through, it would either ask for payment of a loan application fee, or ask for credit card or other personal details.
Fortunately, it didn’t take long for my followers to realize it was a scam and not me.
This government grant scam is a known approach. Typically one of two things has happened: either the malfeasant has hacked into my Facebook account, or they took info from my public Facebook presence and set up a spoof.
Some digging into my Facebook profile, history, and security settings showed it was more likely a spoof than a hack. I use strong passwords generated by a password manager for each account I have, so it is unlikely that my password was compromised, unless there was some weakness in an app I have allowed to access Facebook. (For that very reason I allow very few apps to connect with Facebook.)
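For what it's worth, the kind of password a manager generates can be sketched in a few lines with Python's `secrets` module (the length and character set here are arbitrary choices, not a recommendation):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a cryptographically random password from letters,
    digits, and punctuation, as a password manager would."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The key point is that `secrets` draws from the operating system's cryptographic random source, so each account gets a password that is both strong and unique.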
But just in case, I changed my password, set up two-factor authentication, and enabled an email alert to notify me of questionable login attempts. I have those set up on other platforms, but had not on Facebook. I hadn’t bothered before because I have very little personal info on Facebook. The mistake I made that allowed the spoofer to message my friend list was having that list open for everyone to see. Too late for this scam, but I changed that anyway.
I also posted a message on Facebook letting people know it was not me.
It is frustrating how difficult it is to report this to Facebook so they can stop (or at least make life difficult for) the spoofer. Facebook has lots of ways to report things – but they are all set up for very specific situations, none of which fit mine. Recipients can report it (there is a “report spam or abuse” option on the gear icon beside the sender’s name) – but I can’t. There used to be a basic way to report things that didn’t fit the methods provided, but that seems to be gone. And it’s not just Facebook. The thread one of my friends sent includes a Gmail address for the “agent in charge”. But reporting that to Gmail to try to disable the address isn’t easy: their spoof/scam reporting method works only if you have received an email from the address, as the email header is a required field.
So how do you tell when you get a fake message, and what do you do about it?
Typical scam/phishing warnings apply. The messages are often out of character for the sender, or grammatically strange, or give a Gmail or similar generic email address rather than a corporate one. Another flag is an attempt to get info or money. If in doubt, contact the sender in another way to find out. Facebook and other messaging platforms often have ways to report malicious communication attempts. The victim will appreciate it if you can take a minute to let them know and report it.
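Those warning signs amount to a rough checklist, which could be sketched as a toy scoring function (the red-flag phrases below are my own illustrative guesses, not a real spam filter):

```python
import re

# Illustrative red-flag phrases; a real filter would be far more sophisticated.
MONEY_FLAGS = ["grant", "fee", "credit card", "wire", "eligible"]
# Generic webmail reply-to addresses are another common warning sign.
GENERIC_MAIL = re.compile(r"\b[\w.+-]+@(gmail|yahoo|hotmail)\.com\b", re.I)

def scam_score(message: str) -> int:
    """Count basic red flags: money talk plus a generic email address."""
    text = message.lower()
    score = sum(1 for flag in MONEY_FLAGS if flag in text)
    if GENERIC_MAIL.search(message):
        score += 1
    return score
```

The real test remains human judgment – contacting the supposed sender another way – but the heuristics are the same ones a checklist like this encodes.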
The CRTC recently released two CASL decisions on Compufinder. If this sounds familiar, it is because this is an appeal from an initial 2015 finding that levied a $1.1 million penalty.
Compufinder took the position that CASL is unconstitutional. Many legal experts have questioned the ability of the Federal Government to pass this legislation. The CRTC decided that CASL is constitutional. But this is not the last word. Inevitably this will be argued in court. This decision is required reading for anyone who finds themselves in a position to challenge the act in the courts. Ironically, the delay of the private right of action may have delayed getting the constitutionality issue to the appeal level.
In the substantive decision the penalty was reduced to $200,000. This decision is required reading for anyone facing sanctions under CASL.
Topics covered include:
- what the business to business exemption means (Compufinder failed to convince them that the exemption applied)
- the conspicuously published implied consent, including who published it and message relevance
- what is needed to show a diligence defence (it’s not easy)
- factors in determining the size of the penalty
The decision shows that the CRTC will examine the CEMs sent in individual detail, and that the business bears a high onus of proof to show it has done everything necessary to comply with the act for each and every one of them.
IMHO, most small businesses simply don’t have the resources to meet the requirements. And no matter how hard they try, larger businesses will have a difficult time meeting them. To me CASL is like using a sledgehammer to kill a fly in a china shop. You may or may not kill the fly, but the collateral damage simply isn’t worth it.
Hopefully changes will be made to CASL as a result of the current review of the statute.
At the Can-Tech (formerly known as IT.Can) conference this week, Mike Brown of Isara Corporation spoke about quantum computing and security. Within a few short years, quantum computing will become commercially viable. Quantum computing works differently from the binary computing we have today, and will be able to do things that even today’s supercomputers can’t.
For the most part that is a good thing. The downside is that quantum computers will be able to break many current forms of encryption. So it will be necessary to update current encryption models with something different.
That may not be a simple or quick exercise, given the layers and complexity of encryption. His message was that we need to start planning for this now, and it may take an effort greater and more challenging than the one that fixed the Y2K problem.
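To get a feel for the scale of the problem, here is a back-of-envelope sketch based on the usual assumptions – that Grover’s algorithm roughly halves the effective strength of symmetric keys, while Shor’s algorithm breaks RSA and elliptic-curve schemes outright (a rough illustration, not a cryptographic analysis):

```python
def post_quantum_strength(algorithm: str, key_bits: int) -> int:
    """Rough effective security (in bits) against a large quantum computer.

    Assumes Shor's algorithm reduces RSA/ECC to effectively zero and
    Grover's algorithm halves symmetric strength. Illustrative only.
    """
    if algorithm in ("RSA", "ECC"):
        return 0  # public-key schemes broken outright by Shor
    if algorithm == "AES":
        return key_bits // 2  # Grover's quadratic speedup
    raise ValueError(f"unknown algorithm: {algorithm}")

for alg, bits in [("AES", 128), ("AES", 256), ("RSA", 2048), ("ECC", 256)]:
    print(f"{alg}-{bits}: ~{post_quantum_strength(alg, bits)} bits post-quantum")
```

On these assumptions, symmetric encryption survives with longer keys, but the public-key schemes that secure most internet traffic need to be replaced outright – which is why the planning effort is compared to Y2K.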
For the record, Isara sells security solutions that are designed to be quantum computer safe. For some validation that this really is a thing, take a look at this Wikipedia article on Post-quantum cryptography.
Anyone interested in cars and the data they will increasingly collect should read the article in the November Automobile magazine titled The Big Data Boom – How the race to monetize the connected car will drive change in the auto industry.
It talks about how much data might be generated (4,000 GB per day), how that sheer volume will be handled, how it might be monetized, and the challenges of cybersecurity and privacy.
Auto makers are well aware of the privacy issues. Challenges will include how to deal with privacy laws that vary dramatically around the world. Will they default to the highest standard? Or will the data be valuable enough to make it worth their while to deal with information differently in different countries?
How will auto makers give drivers comfort that their information will be secure and won’t be misused? How will they explain what info will be anonymized, and what will remain identified with the driver?
How many drivers will be reluctant to share driving info with insurers and others, whether for privacy reasons or out of skepticism about the arbitrary decisions that might be made about them based on that info?
I just signed up to attend the fall IT.Can conference, and thought it was worth mentioning. It is a consistently high-quality conference for lawyers practising in the IT/IP fields, and for others such as CIOs.
Topics this year include fintech, quantum computing, blockchain and smart contracts, connected vehicles, big data, health care tech, cybersecurity, and control over online content.
Perhaps I’ll see you there in Toronto on Oct 23.
Cross-posted to Slaw
The draft privacy breach regulations under PIPEDA have just been published. They are open for comment for 30 days.
These regulations detail the mechanics of notifying the Privacy Commissioner and individuals when there is a privacy breach. PIPEDA was amended some time ago to require mandatory notification when there is a breach that results in “real risk of significant harm”. Those provisions will come into force after the regulations are passed.
The draft regulations are about what was expected. They are similar to those under Alberta’s privacy legislation.
I agree with David Fraser’s view that section 4(a) – which says notification to individuals can be sent “by email or any other secure form of communication if the affected individual has consented to receiving information from the organization in that manner” – is uncalled for. A notice of this nature is not spam, and it does not make sense to require an individual’s prior consent to a communication channel before it can be used to notify them of a privacy breach. These notifications are for the benefit of the individual, so why make them harder for organizations to send?
The amendments and regulations have provisions requiring organizations to keep records of all privacy breaches, including information that allows the Privacy Commissioner to determine if the organization properly considered the notice threshold tests. In other words, organizations must be able to prove that any decision not to notify was justified.
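The record-keeping requirement can be pictured as a simple internal breach register. A minimal sketch in Python (the field names are my own assumptions about what such a record might capture, not anything prescribed by the regulations):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BreachRecord:
    """One entry in an internal privacy-breach register (fields illustrative)."""
    occurred: date
    description: str
    data_affected: str
    real_risk_of_significant_harm: bool  # the PIPEDA notification threshold
    rationale: str                       # why notice was, or wasn't, required
    notified_commissioner: bool = False
    notified_individuals: bool = False
```

The rationale field is the important one: it is what would let the organization show the Privacy Commissioner, after the fact, that a decision not to notify was justified.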
Artificial Intelligence is going to have a disruptive effect on the legal profession. The question is how soon, how much, and what areas of law come first. This kind of disruptive change builds up slowly, but once it hits a tipping point, it happens quickly.
Futurist Richard Worzel wrote an article titled Three Things You Need to Know About Artificial Intelligence that is worth a read. Here are some excerpts:
Every once in a while, something happens that tosses a huge rock into the pond of human affairs. Such rocks include things like the discovery of fire, the invention of the wheel, written language, movable type, the telegraph, computers, and the Internet. These kinds of massive disturbances produce pronounced, remarkable, unexpected changes, and radically alter human life.
Artificial Intelligence is just such a rock, and will produce exactly those kinds of disturbances. We’re not prepared for the tsunami that AI is going to throw at us.
But now AI is becoming a reality, and it is going to hit us far faster than we now expect. This will lead to an avalanche of effects that will reach into all aspects of our lives, society, the economy, business, and the job market. It will lead to perhaps the most dramatic technological revolution we have yet experienced – even greater than the advent of computers, smartphones, or the Internet.
The legal profession seems to be particularly susceptible to early occupation by AIs:
“At JPMorgan Chase & Co., a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours. The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of work each year by lawyers and loan officers.”
So, before June of 2017, lawyers and loan officers spent 360,000 hours a year interpreting commercial loan agreements for JPMorgan Chase. Since June, that specific kind of work has vanished.
“I’ve got nothing to hide” is a common retort from people who are blasé about privacy. Their point is that they have done nothing wrong, so they don’t care how much of their information and habits are public.
The flaw in that retort is that information about us can be used in many ways and for many things that we might not expect. And things we may think are normal and innocuous may be offensive to others who can make life difficult because of it. For example, the US Justice Department is trying to get from DreamHost the names of over a million people who visited an anti-Trump website. Using a VPN gets more attractive every day.